Futuremark will not be accepting non-WHQL drivers

DaveBaumann said:
Oh, and FYI, Beyond3D are now a full member of the Beta group.
Oh goody... so should I bombard you or Aki or Patric with my list of suggestions for the development framework for the next new 3DMark? :) 8)

PS. Grammar lesson request : Should it be "Beyond3D are..." or "Beyond3D is..." ??
 
DaveBaumann said:
WTF is all this talk of backlashes against NVIDIA or ATI? This is very simple: look at what Futuremark states:

The reason for this is that the drivers have been officially stated as optimized for 3DMark03, and we can not verify the purity and integrity of the drivers. We are investigating the drivers and their effect on 3DMark03 - both performance and the rendering quality (ie. image quality).

That's what this is about. Nobody is saying optimisations are a bad thing; what Futuremark is talking about is whether these optimisations come at the cost of IQ, which is not a good thing, is it?

Nvidia claims that the latter (and possibly the former, in the case of optimizations that would have been found anyway) are bad because they waste development resources that could be spent elsewhere. People can complain about other motives being involved, but there is truth to this. IMO, optimizations to 3DMark that could possibly help other applications are good. Optimizations that are exclusive to 3DMark are neutral.

Optimizations that come at the cost of IQ are not always bad either, but they should be judged accordingly. For example: DXT compression in Quake III, or defaulting to a 16-bit Z-buffer under 32-bit color.
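
To put a rough number on that last example, here's a quick sketch (my own near/far planes and a standard D3D-style projection, nothing from this thread) of how much depth resolution you give up going from a 24-bit to a 16-bit Z-buffer:

Code:
# Rough sketch: how far apart two surfaces must be (in eye-space units) before
# a quantized Z-buffer can tell them apart. Near/far planes are assumptions.
near, far = 1.0, 1000.0

def depth_value(z):
    # Normalized depth as produced by a typical D3D-style perspective projection.
    return (far / (far - near)) * (1.0 - near / z)

def smallest_resolvable_gap(z, bits):
    # Distance at which a surface behind one at depth z gets a different
    # quantized depth value (i.e. stops z-fighting with it).
    step = 1.0 / ((1 << bits) - 1)
    d2 = depth_value(z) + step
    z2 = near / (1.0 - d2 * (far - near) / far)
    return z2 - z

for bits in (16, 24):
    gap = smallest_resolvable_gap(500.0, bits)
    print(f"{bits}-bit Z-buffer: surfaces at z=500 need ~{gap:.3f} units of separation")

With these numbers, 16-bit Z needs a few units of separation at z=500 where 24-bit needs a few hundredths, which is exactly the z-fighting people notice.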

I don't recall rendering quality being judged by 3DMark before, at least as far as the non-functionality tests go. When other companies such as Matrox, SiS, etc. were benchmarked, the reviewer took responsibility for showing both the IQ and the performance figures.

Requiring WHQL-certified drivers might help, but would it solve the issue completely? Is 3DMark going to take the initiative on this for all cards now, and possibly go beyond WHQL? (i.e. SiS potentially may not be able to produce cards/drivers that could qualify at all...)
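
For what it's worth, WHQL status is at least something that can be checked mechanically. A minimal sketch, assuming a Windows box with wmic/WMI available; the exact WHQL signer string is my assumption and may vary by Windows version:

Code:
import subprocess

def display_driver_signers():
    # Query the PnP signed-driver table via wmic and return one dict per driver.
    out = subprocess.run(
        ["wmic", "path", "Win32_PnPSignedDriver",
         "get", "DeviceClass,DeviceName,DriverVersion,Signer", "/format:list"],
        capture_output=True, text=True, check=True,
    ).stdout
    drivers, entry = [], {}
    for line in out.splitlines():
        line = line.strip()
        if not line:
            if entry:
                drivers.append(entry)
                entry = {}
            continue
        key, _, value = line.partition("=")
        entry[key] = value
    if entry:
        drivers.append(entry)
    # Keep only display adapters.
    return [d for d in drivers if d.get("DeviceClass", "").upper() == "DISPLAY"]

if __name__ == "__main__":
    for drv in display_driver_signers():
        # WHQL-certified drivers are typically signed by Microsoft's
        # "Windows Hardware Compatibility Publisher" certificate (assumption).
        whql = "Hardware Compatibility Publisher" in drv.get("Signer", "")
        print(drv.get("DeviceName"), drv.get("DriverVersion"),
              "WHQL-signed" if whql else "not WHQL-signed")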

I'm just not so sure that it matters so far as 3DMark itself is concerned, and I think they should let reviewers have at the IQ issue and deal with any political issues that result.
 
Joe D said:
I don't care how "optimized" something is, as long as it does not impact image quality. That's the point. If you can find an ATI driver where it doesn't represent what the reference image does, then that driver shouldn't be used. The particular nVidia drivers in question have an impact on image quality.

Just curious, but how do settings like trilinear filtering, AF and AA fit into your litmus test? Are there refrast tests for these?
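
For the record, a "reference image" check in practice means rendering the same frame on the part under test and on the DirectX reference rasterizer, then diffing the screenshots with a small per-channel tolerance (filtering and AA are allowed to differ slightly between implementations). A rough sketch with placeholder file names, not anything Futuremark actually ships:

Code:
import numpy as np
from PIL import Image

def compare_to_reference(driver_png, refrast_png, tolerance=4):
    # Load both screenshots as RGB and compute per-channel absolute differences.
    drv = np.asarray(Image.open(driver_png).convert("RGB"), dtype=np.int16)
    ref = np.asarray(Image.open(refrast_png).convert("RGB"), dtype=np.int16)
    if drv.shape != ref.shape:
        raise ValueError("screenshots must have identical dimensions")
    diff = np.abs(drv - ref)
    bad = (diff > tolerance).any(axis=2)  # pixels outside the allowed tolerance
    return {
        "max_channel_diff": int(diff.max()),
        "pct_pixels_off": 100.0 * bad.mean(),
    }

# e.g. print(compare_to_reference("gt4_driver_under_test.png", "gt4_refrast.png"))

Whether trilinear/AF/AA differences count against a driver then comes down to how tight you set that tolerance.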
 
Hi guys,

We were a bit surprised by all the attention regarding the driver versions and the criticism related to that. Based on the feedback we got, it seems very clear that doing 'one off' solutions as in this case is not good enough. :)

In order to provide a long-lasting solution, we will only support WHQL drivers in our databases moving forward. This means that we'll separate all results into two categories, WHQL and non-WHQL, and by default show only the WHQL results. We will enable comparisons in the separate non-WHQL category so that people who get their kicks out of trying beta drivers will have the opportunity to do so, but those results will not be included in our official lists or default ORB results.

Here's the link to the statement:
http://www.futuremark.com/news/?newsarticle=200303/2003032605#200303/2003032605

While WHQL certainly does not solve all driver issues, we do hope that this in part helps to ensure at least some level of consistency.

Cheers,

AJ
 
Fair enough I guess, but would somebody please tell me just WTF nVidia is supposed to have done in terms of optimizations with the Detonator 42.67, 42.68 and 42.69?

How can anybody judge if anything was ever done 'wrong' if we don't know the plain facts, for crying out loud :!:
 
LeStoffer said:
Fair enough I guess, but would somebody please tell me just WTF nVidia is supposed to have done in terms of optimizations with the Detonator 42.67, 42.68 and 42.69?

How can anybody judge if anything was ever done 'wrong' if we don't know the plain facts, for crying out loud :!:
I think we will need to wait for Futuremark's report, if it ever comes out.
 
Joe DeFuria said:
Deflection said:
Has it been confirmed that the Doom3 engine takes advantage of 24-bit to give better image quality than 16-bit?

No, and I personally don't think 24/16 bit would make a difference with Doom3 either. It's designed to look "good" with integer math. However, it's other aspects of the ARB2 rendering path that might make some visual difference, as Carmack had suggested in his .plan IIRC. Probably minor though.

IMO, any significant difference will likely be between cards that can render in a single pass, vs. multiple passes. I expect there to be a bigger difference between the R200 and NV2x, than there is between the R300 and NV30.

Also, to be clear, we don't even know if the NV30 path in Doom 3 is using much floating point at all. It might be more analogous to the R200 path than the ARB2 path: the R200 path is integer-based, but one pass per light.

Quoted from B3D's interview:

Code:
REV: Your .plan indicates that the NV30-path that you use implements only 16-bits floating-point (FP), i.e. half precision FP, for most computation, which should be sufficient for most pixel shading. The ARB2-path does not have 16-bits FP, and so all computation are done with 32-bits FP on the NV30. With regards to the R300, there shouldn't be a difference since it is always 24-bits FP on the R300. According to your .plan, NV30 is twice as slow on 32-bits FP - that is why the NV30 is slower than the R300 on the ARB2-path, but faster on the NV30-path. The question is what sort of quality difference are we talking about (in DOOM3) for such a difference between FP formats?

JC: There is no discernable quality difference, because everything is going into an 8 bit per component framebuffer. Few graphics calculations really need 32 bit accuracy. I would have been happy to have just 16 bit, but some texture calculations have already been done in 24 bit, so it would have been sort of a step back in some cases. Going to full 32 bit will allow sharing the functional units between the vertex and pixel hardware in future generations, which will be a good thing.

REV: My interpretation from your .plan :

In terms of Performance :
NV30+NV30-path is faster than NV30+ARB2
NV30+NV30-path is faster than R300+ARB2
R300+ARB2 is faster than NV30+ARB2
R300+R200-path is faster than R300+ARB2

In terms of Quality :
NV30+ARB2 is better than NV30+NV30-path
NV30+ARB2 is better than R300+ARB2
R300+ARB2 is better than NV30+NV30-path
R300+ARB2 is better than R300+R200-path

Am I correct?

JC: Correct.

http://www.beyond3d.com/interviews/jcnv30r300/index.php?p=2
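
To put a number on JC's 8-bit framebuffer point (my own sketch, not part of the interview, and it models only a single rounding step rather than error accumulating across a long shader):

Code:
import numpy as np

rng = np.random.default_rng(0)
colors = rng.random(100_000)  # random shader-style colour outputs in [0, 1]

# Simulate rounding once to half precision (NV30-path style) versus keeping
# single precision (ARB2 style), then quantize both to an 8-bit component.
q16 = np.round(colors.astype(np.float16).astype(np.float64) * 255).astype(np.int32)
q32 = np.round(colors.astype(np.float32).astype(np.float64) * 255).astype(np.int32)

diff = np.abs(q16 - q32)
print(f"{(diff > 0).mean():.1%} of values land on a different 8-bit level, "
      f"worst case off by {diff.max()} level(s)")

With these assumptions, the two precisions land on the same 8-bit level for the vast majority of values and never differ by more than one level, which is why the gap is hard to see on screen.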
 
What I'm trying to figure out is why people might want a so-called "3DMark-optimized" driver in the first place...;) I can see why the various graphics-chip companies might make them, of course (free publicity, etc.), but I can't see why an end user would want them, especially as they most often translate to no additional performance when running real 3D software, and can sometimes degrade real-world performance. Why not just run your own benchmarks and make up your own scores? What's the difference?
 
NVIDIA's non-WHQL 43.45 is much more stable than ATI's 7.84. The 7.84 has problems in CS and RealOne. :LOL:
 
cho said:
NVIDIA's non-WHQL 43.45 is much more stable than ATI's 7.84. The 7.84 has problems in CS and RealOne. :LOL:

Really? I just trashed that very same driver set today, because not one AA sampling method I tried on numerous games worked as it should. The good cases were where I didn't get AA at all; the worst were where I got the usual performance hit but, in the hybrid modes, the MSAA part was missing completely, leaving one axis perfectly AA'd while the other was a complete jagfest.

I'd waited four months for that official driver set (before anyone asks: WinXP/SP1, DX9.0a, Ti4400).
 