Sharkfood said:
I'd say that's a very biased perception of what was presented.
It explains that the makers of the Rydermark benchmark have mentioned there is forced lower precision occurring with NVIDIA hardware and/or drivers.
If you don't consider relaying information as "evidence".. then I'd expect the same harsh line to be applied similarly in the future. That was my point.
Relaying information like this is certainly not evidence. There are easy ways to prove this with a relative degree of certainty, and L'Inq did no such thing. Having a pretty significant accusation that could easily be verified and not doing so makes me approach this with serious skepticism.
Sharkfood said:
This is the very double-standard that I strived to emphasize... this stance has changed dramatically.
The point being, a forum post by some anonymous source ("SlayahODeath") with linguistics attuned towards "(IHVC) is suxorz doodz!" can spur front-page, widespread website link attention.. comprehensive screenshot analysis, side-by-side or Java applet tests, sample programs, code snippets, and white-paper-quality definitive resources.
A pretty specific accusation from a website, in reference to a benchmark, and with substantially more information from which to get a feel for the accusation, is now getting the brush-off, character defamation attempts, and arguments that deflect away from the main accusation.
Eh? "Significantly more information?" Fudo said, "Rydermark guys say they can't get FP32 from NVIDIA cards. At all." Now, there are plenty of ways to test for this, but the claim is so outlandish and so easily noticed without any special tests that the burden of proof is on them. If somebody wants to find something with a Big Lighting Shader and compare it against an X1900, feel free, but I don't buy these accusations right now.
Sharkfood said:
I'd start with the accusation that wasn't touched or disproved, but was instead met with precisely the deflection tactics described above: what's the deal with FP32? Is FP16 forced, defaulting, or incorrectly being applied when FP32 is referenced/requested? And if so, under what conditions?
Yes, I agree the burden of proof lies on the accuser.. but this has *never* been the case here or anywhere else in the past.
A good change in ethic? Perhaps.. I'm just interested in why the "good change" has suddenly occurred... and not too hopeful, as over the past 10 years I've only seen applications of ethics that differ depending on the IHV in question.
Burden of proof has always been on the accuser except in questionable situations. E.g., NVIDIA 3DMark performance suddenly doubles with a single driver revision to a level that isn't really theoretically possible--yeah, I'd definitely say that's worth investigating. When Dave broke the UT2003 AF optimization thing, he did so with screenshots. ExtremeTech did the same with 3DMark. Nothing has changed here. We've got no evidence of widescale wrongdoing (like the article suggests), no bizarre benchmark results or IQ problems. Every other time you mentioned later in the thread, those conditions existed. So no, this isn't "Hey guys, we're a bunch of NV fanbois!" This is a "we're not going to sit here and take another ridiculous bit of rumormongering and speculation by the Inq seriously" thread. Having a domain name doesn't mean people are or should ever be listening to you--that's the case with Fudo. He simply doesn't have a clue.
(PS: you don't request FP32. You always get FP32 with ps_3_0 unless you specifically flag a shader as _pp. ps_2_0 is the same, except you can get FP24 as well.)
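For anyone who hasn't touched this stuff: the only knobs a developer even has are the half type / _pp modifier in the shader and the D3DX partial-precision compile flag. Here's a rough sketch of what I mean against the D3DX9 interface (the shader source and entry point are made up for illustration):

[code]
// Sketch only: assumes d3dx9.h from the DirectX SDK; the shader and the
// "main" entry point are invented for illustration.
#include <d3dx9.h>

static const char kShader[] =
    "float4 tint : register(c0);                           \n"
    "float4 main(float2 uv : TEXCOORD0) : COLOR            \n"
    "{                                                      \n"
    "    float4 c = tint * uv.xyxy;   // FP32 under ps_3_0  \n"
    "    half4  h = (half4)c;         // explicit _pp hint  \n"
    "    return c + h;                                       \n"
    "}                                                       \n";

bool CompileFullPrecision(LPD3DXBUFFER* bytecode)
{
    LPD3DXBUFFER errors = NULL;
    // Flags = 0: we are NOT passing D3DXSHADER_PARTIALPRECISION, so only the
    // half4 above picks up _pp in the generated ps_3_0 assembly; everything
    // else is full precision as far as the API and the spec are concerned.
    HRESULT hr = D3DXCompileShader(kShader, sizeof(kShader) - 1,
                                   NULL, NULL, "main", "ps_3_0",
                                   0, bytecode, &errors, NULL);
    if (errors) errors->Release();
    return SUCCEEDED(hr);
}
[/code]

Point being, precision is opt-down: unless the developer writes half/_pp or passes that flag, the contract under ps_3_0 is FP32, which is why "you simply cannot turn 32 bit precision on" is such a strange claim.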
But fine! Let's take the damn thing seriously just for the sake of fucking argument, in the hopes that this puts it to bed once and for all.
Fudo said:
Nvidia doesn’t leave you any choice, it's claimed. You simply cannot turn 24 or 32 bit precision on, you are always locked at 16 bit. From a developer and artistic perspective this is really unacceptable but will buy you a few frames here and there.
Developers have also informed us that they have no way to stop Nvidia doing this. The only way is to make the community aware, and that can change some minds. There is more to come and we will try to get you screenshots to really see the difference. µ
Both paragraphs heavily imply ("no way to stop NVIDIA" and "always locked at 16 bit") that this is on in all applications. Let's call the hypothetical forcing of FP16 for all pixel shaders situation A. But, just to stretch the argument even further, let's look at a second possible situation, where NV devrel got their hands on a version of Rydermark and had the driver team implement cheats for it (to force FP16). This is situation B.
To confirm situation A, we can do one of two things. If we think it really is in all applications and isn't being overly sneaky, we can do a simple IQ comparison between a G71 and an R580 and look for artifacting resulting from rounding in the G71 shots. That takes all of... uhh... an hour or two. If we're really worried about NVIDIA being sneaky and not forcing FP16 with some shaders in order to preserve IQ (e.g., instead of "some shaders to use FP16" like in the days of yore, they have "specific shaders to use FP32" and everything else gets FP16), we can write a really basic GPGPU application and look for reduced precision there. That will take slightly longer--say, six hours, just for laughs--but it's still not a big deal, especially since there's so much GPGPU code out there today. And considering a great deal of GPGPU research is being done on NVIDIA hardware, and GPGPU applications as a whole are incredibly concerned with precision, we probably would have heard about this from them a long time ago.
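If anyone actually wants to run that precision check, here's the arithmetic it hinges on (my own sketch, nothing to do with Rydermark): pick an epsilon that survives FP24/FP32 but not FP16, feed it in as a shader constant at runtime so the compiler can't constant-fold it, and read the result back. You can convince yourself on the CPU first:

[code]
// Illustration only: simulate storing an intermediate at FP16/FP24/FP32
// precision (11, 17 and 24 significant bits, counting the implicit leading
// bit) and see whether 1 + 1/4096 survives. The GPU version puts the epsilon
// in a pixel shader constant, writes (x - 1) * 4096 to the render target and
// reads the pixel back: ~1.0 means full precision, 0.0 means forced FP16.
#include <cmath>
#include <cstdio>

// Round x to 'sigBits' significant binary digits, mimicking a narrower float.
double roundToSignificantBits(double x, int sigBits)
{
    int exp = 0;
    double m = std::frexp(x, &exp);              // x = m * 2^exp, 0.5 <= m < 1
    double scale = std::ldexp(1.0, sigBits);     // 2^sigBits
    return std::ldexp(std::round(m * scale) / scale, exp);
}

int main()
{
    const double eps = 1.0 / 4096.0;             // 2^-12: too fine for FP16
    const struct { const char* name; int bits; } fmt[] = {
        { "FP16", 11 }, { "FP24", 17 }, { "FP32", 24 },
    };
    for (const auto& f : fmt) {
        double stored = roundToSignificantBits(1.0 + eps, f.bits);
        double probe  = (stored - 1.0) * 4096.0; // 1.0 if eps survived, else 0.0
        std::printf("%s: probe = %.3f (%s)\n", f.name, probe,
                    probe > 0.5 ? "full precision" : "reduced precision");
    }
    return 0;
}
[/code]

The IQ comparison is the same idea, just less formal: FP16 rounding should show up as banding and artifacting in anything with long dependent math, which is why an hour with a G71, an R580 and a screenshot tool would settle situation A.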
Situation B: We know how NVIDIA performs application-specific optimization/cheats/whatever-the-hell-you-want-to-call-them. I assume that Unwinder's statement back in... 2003? 2004? still holds true, and NVIDIA is performing a hash on the Direct3D window name and the filename in order to detect the application (and prevent another Antidetect from being written, and defeat Quack3.exe-style tricks). So, if you're a developer and you have the source code... uhhhh... change it and recompile, dumbass. Do you have FP32? If B was true in the first place, yeah, you would.
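And if you don't even have the source, the filename half of that hash gives you a lazier version of the same test (my own sketch, Windows-only, and it assumes the Unwinder description above still applies): copy the benchmark executable to a throwaway name, launch the copy, and compare the two runs.

[code]
// Sketch only: assumes Windows and that the driver keys app profiles off a
// hash involving the executable name. If situation B is real, the renamed
// copy should render at full precision while the original doesn't.
#include <windows.h>
#include <cstdio>

bool LaunchRenamedCopy(const char* originalExe)
{
    char renamed[MAX_PATH];
    std::snprintf(renamed, sizeof(renamed), "probe_%lu.exe",
                  static_cast<unsigned long>(GetTickCount()));

    if (!CopyFileA(originalExe, renamed, FALSE))   // FALSE = overwrite any old copy
        return false;

    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    if (!CreateProcessA(renamed, NULL, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
        return false;

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return true;   // now diff screenshots / framerates between the two runs
}
[/code]

Developers with the source can go one better and change the window title they hand to CreateWindow as well, which covers the other half of the hash before they recompile.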
Of course, there's situation C: this only happens on NV3x hardware. In fact, I think it might force FP16 (or FX12 for NV3x<5), and I don't really have a problem with that. Nothing is going to run with FP32, because of the register penalties and such, and still have any sort of reasonable performance, regardless of res, so the only people you're hurting are possibly GPGPU people. Then again, if you call up NV and tell them you're a GPGPU developer, they'll probably just send you new stuff. And you want DX10 anyway.
So yeah. In the absence of evidence, can this please go away. If you want to write your own tests, feel free. If it helps you sleep at night, go for it. If you prove me wrong, well, I'll happily eat my hat and call NVIDIA a bunch of bastards. I've just been given a ridiculous accusation (in terms of how broad it is) with no evidence; ergo, I have no reason to believe it.