Luminescent said:Wavey, can you run pcchen's precision application?
Uttar said:And I'd suggest you use my Dawn patches with FRAPS. Yes, yes, I know, I'm annoying.
But IMO, it's a better test than all those DX9 things, because of all the doubts about precision with the DetFX in DX9.
With Dawn, however, it's all proprietary extensions. And nVidia obviously wouldn't cheat in their own demos.
MDolenc said:Dave: Yes this test does use 2 texture lookups (and 3 registers)...
MDolenc said:What does PS_2_0 - Simple test say?
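(MDolenc's actual shader isn't reproduced in the thread, but to illustrate what the test exercises: a ps_2_0 shader with two texture lookups and three temporary registers would look roughly like the hypothetical sketch below, and the partial-precision variant is the same code with the _pp modifier on each instruction so the driver may run it at FP16.)

Code:
ps_2_0
dcl_2d s0
dcl_2d s1
dcl t0
dcl t1
texld r0, t0, s0     // first texture lookup
texld r1, t1, s1     // second texture lookup
mad r2, r0, r1, r0   // arithmetic to keep the FP units busy
mul r2, r2, r1
mov oC0, r2          // PP version: texld_pp / mad_pp / mul_pp / mov_pp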
kid_crisis said:Dave, were the 5800 and 5900 tested at stock clocks, i.e. 500 and 450 MHz respectively?
Tridam said:Dave -> it seems that by default NVIDIA decreases precision with GeForce FX 5800/5600/5200 but not with FX5900. Maybe you have not benchmarked the same thing? So the difference between FX5800 and FX5900 could be bigger?
Tridam said:Another possibility: maybe the new units of the FX5900 have some limitations, so the shader in MDolenc's fillrate tester might not be able to use these new units.
Tridam said:I think you should use Humus's Mandelbrot demo to make sure that the precision is the same.
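(The reason the Mandelbrot demo works as a precision check is that the iteration z = z² + c amplifies rounding error, so an FP16 run visibly diverges from an FP32 run after enough iterations. The core step, sketched here in ps_2_0-style assembly rather than Humus's actual code, is just:)

Code:
// one Mandelbrot iteration: z = z*z + c, with z in r0.xy and c in c0.xy
mul r2.x, r0.x, r0.x            // x*x
mad r2.x, -r0.y, r0.y, r2.x     // x*x - y*y
mul r2.y, r0.x, r0.y
add r2.y, r2.y, r2.y            // 2*x*y
add r0.xy, r2, c0               // next z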
             5800 Ultra    5900 Ultra    % Difference (stock)    % Difference (clock for clock)
FP PS 2.0    121.043259    149.738754    23.7068096              33.70680965
PP PS 2.0    163.160095    181.771698    11.40695769             21.40695769
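(For clarity on how those columns are derived: the stock figures follow directly from the fillrate numbers - 149.738754 / 121.043259 ≈ 1.237, i.e. the 5900 Ultra is about 23.7% faster at full precision, and 181.771698 / 163.160095 ≈ 1.114, about 11.4%, with partial precision. The clock-for-clock column appears simply to add the 5900 Ultra's 10% clock deficit (450 vs 500MHz) to the stock figure.)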
DaveBaumann said:Tridam said:Dave -> it seems that by default NVIDIA decreases precision with GeForce FX 5800/5600/5200 but not with FX5900. Maybe you have not benchmarked the same thing? So the difference between FX5800 and FX5900 could be bigger?
From a marketing standpoint that makes zero sense. NV30 is dead, buried, and they are trying their hardest to forget it - if you want to show that the 5900 is 2X better than its predecessor then you would want to do entirely the opposite.
DaveBaumann said:Tridam said:Another possibility: maybe the new units of the FX5900 have some limitations, so the shader in MDolenc's fillrate tester might not be able to use these new units.
Well, I can think of no other architecture where you add more units and hardly get much of a performance increase - would there be much point in implementing units that are limited such that applications can't take advantage of them?
If the precision app specifies 10 bits, would that mean that, even when full precision is specified (i.e. MDolenc's fillrate app), the precision remains fp16?
DaveBaumann said:From a marketing standpoint that makes zero sense. NV30 is dead, buried, and they are trying their hardest to forget it - if you want to show that the 5900 is 2X better than its predecessor then you would want to do entirely the opposite.
Uttar said:Frankly, it's hard to say exactly what nVidia is doing for DX9 / OpenGL ARB Paths. Heck, they could be doing insane stuff like forcing FX12 when in Performance and forcing FP16 in Quality. There are a LOT of possibilities. I insist that to discover the true performance of FX12/FP16/FP32 on the NV30 & NV35, you've got to use nVidia's proprietary OpenGL extensions.
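(For reference, this is what the proprietary path exposes: NV_fragment_program selects the computation precision per instruction through an R/H/X suffix, so FX12, FP16 and FP32 throughput can be measured directly. A minimal hypothetical sketch, not an actual benchmark shader:)

Code:
!!FP1.0
# the same multiply issued at the three precisions NV30/NV35 support
MULX H0, f[TEX0], f[TEX1];   # FX12 fixed point
MULH H1, f[TEX0], f[TEX1];   # FP16 half precision
MULR R0, f[TEX0], f[TEX1];   # FP32 full precision
MOVR o[COLR], R0;
END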