GFFX Benchmarks - Which rendering mode is used?

boobs

Newcomer
Does anybody know which internal rendering mode is being used in the GFFX reviews?

32 bit color (does it even have native 32 bit color)? 64 bit color? 128 bit color?

Can you select these in the registry, or do they need to be written into the program?

Would this make a difference at all in the numbers?
 
Doomtrooper said:
I would imagine 32-bit... :?:

No, no. The output is always 32 bit, but I had thought that neither the 9700 nor the GFFX uses 32 bit internally. The 9700, IIRC, uses 96-bit rendering, while the GFFX can use either 64 bit or 128 bit.

If that's the case, then wouldn't the rendering mode make a huge difference in performance in bandwidth limited cases?
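
Just to put numbers on that: the per-pixel figures are four color channels at different per-channel widths. A back-of-envelope sketch (nothing below is card-specific beyond the published channel formats; the variable names are mine):

#include <cstdio>

int main()
{
    // 4 color channels (R, G, B, A) at different per-channel widths.
    const int r300_internal = 4 * 24;  // FP24 per channel on the 9700  -> 96 bits
    const int nv30_half     = 4 * 16;  // FP16 ("half") on the GFFX     -> 64 bits
    const int nv30_full     = 4 * 32;  // FP32 on the GFFX              -> 128 bits
    const int framebuffer   = 4 * 8;   // what a normal 32-bit back buffer stores
    std::printf("%d / %d / %d vs %d bits per pixel\n",
                r300_internal, nv30_half, nv30_full, framebuffer);
    return 0;
}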
 
No...

By that thinking, a 9700 would have a huge disadvantage vs. a GeForce 4??

That kind of precision is only in DX9; none of the games tested were DX9. In fact, half of them were DX7.
 
Doomtrooper said:
No...

By that thinking, a 9700 would have a huge disadvantage vs. a GeForce 4??

That kind of precision is only in DX9; none of the games tested were DX9. In fact, half of them were DX7.

Well, the 9700 has much more bandwidth and fillrate, so the disadvantage might not show up. Furthermore, this problem may only manifest itself in tests that make heavy use of pixel shaders. I find it unlikely that either Nvidia or ATi would retain their old 32-bit PS and VS units alongside the new floating-point ones, and the GFFX scores on some of the pixel shader tests in the ExtremeTech review look fishy, so I think this is worth checking out.
 
I believe the internal rendering precision is not dependent on the DX version. Pretty much every card (R8500, GF4, V5) uses higher internal precision.
 
boobs said:
No, no. The output is always 32 bit, but I had thought that neither the 9700 nor the GFFX uses 32 bit internally. The 9700, IIRC, uses 96-bit rendering, while the GFFX can use either 64 bit or 128 bit.

If that's the case, then wouldn't the rendering mode make a huge difference in performance in bandwidth limited cases?

Regardless of what it's actually doing internally, neither the Radeon 9700 Pro nor the GeForce FX will ever show any difference in color accuracy (gamma correctness aside), compared to each other or to somewhat older cards, until games start supporting the 64-bit and 128-bit storage formats.
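
For what it's worth, here's a minimal DX9-style sketch of what "supporting" those storage formats would look like from the application side; the helper name is mine and nothing about it is specific to either card:

#include <d3d9.h>

// Sketch only: hypothetical helper asking whether the HAL device exposes a
// given high-precision render-target format at all.
bool SupportsFloatTarget(IDirect3D9* d3d, D3DFORMAT fmt)
{
    return SUCCEEDED(d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
        D3DFMT_X8R8G8B8,         // assumed current display mode format
        D3DUSAGE_RENDERTARGET,
        D3DRTYPE_TEXTURE,
        fmt));                   // e.g. D3DFMT_A16B16G16R16F (64 bit)
}                                //      D3DFMT_A32B32G32R32F (128 bit)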
 
As has been said before:
GeForce FX runs all current shaders at integer precision (11 bits per component). Perhaps PS 1.4 shaders would be run at half-float precision.
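
To put those per-channel bit counts in perspective, a back-of-envelope look at the quantization step of an unsigned fixed-point channel (purely illustrative; the FX's internal format is signed, so the exact range differs):

#include <cstdio>

int main()
{
    // Smallest representable step for an unsigned fixed-point channel of n bits.
    const int bits[] = {8, 12, 16};
    for (int n : bits)
        std::printf("%2d-bit channel: step = 1/%d\n", n, (1 << n) - 1);
    // The extra internal bits mostly keep rounding error from piling up over
    // many shader instructions before the result is written out at 8 bits.
    return 0;
}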
 
I'm fairly sure the FX yields a higher framerate in Futuremark's pixel shader benchmark partly due to direct legacy support for PS 1.1 and 1.3, using the register combiners for backwards compatibility (up to two per pipe at around 12 bits per component). The advanced pixel shader test, which uses PS 1.4, probably runs on the fragment shader processor instead, which is a bit slower.
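
Roughly, the benchmark's side of that split is just a caps check like the sketch below (DX9-style, helper name mine); whether the driver then maps PS 1.1/1.3 to the combiners and PS 1.4 to the floating-point fragment unit is the speculative part:

#include <d3d9.h>

// Hypothetical helper: does this device qualify for the PS 1.4 "advanced" test?
bool RunAdvancedPixelShaderTest(IDirect3DDevice9* device)
{
    D3DCAPS9 caps;
    if (FAILED(device->GetDeviceCaps(&caps)))
        return false;
    // The plain pixel shader test needs PS 1.1; the advanced one needs PS 1.4.
    return caps.PixelShaderVersion >= D3DPS_VERSION(1, 4);
}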
 
Doomtrooper said:
Democoder can answer this one

If he were here... but I guess he really has decided to call it a day. What a terrible shame!

OpenGL Guy or Dio should be able to answer this! However, you never see internal rendering mode options in a driver app (nor have I seen them in the registry in any tweaking article), so I guess it happens automatically depending on the application?
 
I'm not 100% certain what the question was :) and I'm far too nervous to answer it anyway :D

I'm just here to make bad jokes and play the piano. Well, I say play the piano...
 
misae said:
... However, you never see internal rendering mode options in a driver app (nor have I seen them in the registry in any tweaking article), so I guess it happens automatically depending on the application?

Umm... isn't Matrox the only one providing more than 8 bits per RGB channel as an option for DX8? I am pretty sure their driver panel has an option for GigaColor...
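
If I remember right, in API terms that option just amounts to asking for a 2:10:10:10 back buffer. A rough DX9-style sketch of the idea (helper name mine; whether a given driver accepts the format for the swap chain is up to the driver):

#include <d3d9.h>

// Ask for 10 bits per color channel on the back buffer (fullscreen only).
void RequestTenBitOutput(D3DPRESENT_PARAMETERS* pp)
{
    pp->BackBufferFormat = D3DFMT_A2R10G10B10;  // 2 alpha + 10/10/10 color bits
    pp->Windowed         = FALSE;
}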
 
Why does internal precision require bandwidth?

I thought it was utilizing greater precision in multipass cases that required the extra bandwidth?

Which are we talking about? I thought both the GF FX and the 9700/9500 cards could handle many applications (those that don't force multipass) at increased precision automatically?

Hmm... except the GF FX is said to have a dedicated integer pipeline for increased performance as well, so the answer might be a bit different for it?
 
I don't have the hair. Then again neither does he nowadays.

I met him once. I had to ask him to f*** off so I could clear the stage because he was insisting on sitting on the monitors talking to the band.
 
boobs said:
Doomtrooper said:
I would imagine 32-bit... :?:

No, no. The output is always 32 bit, but I had thought that neither the 9700 nor the GFFX uses 32 bit internally. The 9700, IIRC, uses 96-bit rendering, while the GFFX can use either 64 bit or 128 bit.

Someone from Nvidia said that the GeForce FX had an integer pipeline too, so it could be 32-bit internally.

boobs said:
If that's the case, then wouldn't the rendering mode make a huge difference in performance in bandwidth limited cases?

Internal precision of 64 or 128 bits doesn't take any more bandwidth than 32 bits.
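
Rough numbers behind that, assuming a bare 1024x768 color buffer with one write per pixel and ignoring Z, overdraw and compression; framebuffer traffic scales with the stored format, not with the precision of the math inside the pipe:

#include <cstdio>

int main()
{
    const long pixels = 1024L * 768L;
    std::printf("32-bit target : %ld bytes/frame\n", pixels * 4);   // ~3 MB
    std::printf("64-bit target : %ld bytes/frame\n", pixels * 8);   // ~6 MB
    std::printf("128-bit target: %ld bytes/frame\n", pixels * 16);  // ~12.5 MB
    // With a plain 32-bit back buffer, only the first line applies, no matter
    // whether the shader math ran at FP16, FP24 or FP32 internally.
    return 0;
}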
 