inherent advantage in FX?

DOOM III

Newcomer
I think you may have noticed that the FX scored nearly twice the 9700 PRO in the 3DMark2001 8-lights scene. Does that mean there is something in the FX that gives it an inherent advantage? If so, what is it, and what is it most useful for?
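
For context, the 8-lights scene is dominated by per-vertex lighting math: roughly one dot product, clamp, and multiply-add per enabled light, for every vertex, every frame. A minimal sketch in plain C++ (made-up values, an illustration rather than the actual benchmark code):

Code:
#include <array>
#include <cstdio>

// Sketch of the per-vertex work the 8-lights scene generates. Fixed-function
// hardware evaluates this loop per enabled light; a vertex shader has to
// burn instruction slots on the same math.
struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

int main() {
    const Vec3 normal = {0.0f, 1.0f, 0.0f};           // vertex normal (example)
    std::array<Vec3, 8> lightDir = {{                 // 8 directional lights
        {0,1,0}, {1,0,0}, {0,0,1}, {-1,0,0},
        {0,-1,0}, {0,0,-1}, {0.7f,0.7f,0}, {0,0.7f,0.7f}
    }};
    std::array<float, 8> lightIntensity = {1,1,1,1,1,1,1,1};

    float diffuse = 0.0f;
    for (int i = 0; i < 8; ++i) {
        float nDotL = dot(normal, lightDir[i]);
        if (nDotL > 0.0f)                 // clamp back-facing contributions
            diffuse += nDotL * lightIntensity[i];
    }
    std::printf("accumulated diffuse: %f\n", diffuse);
}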
 
It's not just the 8-lights test, but also the static T&L test in ShaderMark.

I'm really beginning to wonder whether they have not only their legacy integer pipes in there, but also the legacy T&L unit. Operating at 500MHz would make it very fast for this sort of processing; however, most games will be using vertex shaders from now on.

(This could also be a potential reason why the Quadro FX is very fast, as most professional OpenGL apps use fixed-function geometry processing rather than vertex shading.)
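
To make it concrete, this is the sort of legacy path in question: classic fixed-function OpenGL lighting, which a dedicated T&L unit can consume directly, and which the driver would otherwise have to translate into an internal vertex program. A minimal GL 1.x sketch (assumes a valid context is already current):

Code:
#include <GL/gl.h>

// Legacy fixed-function setup: this state maps straight onto a hardware
// T&L unit if one exists. The GL_LIGHTi enums are guaranteed consecutive,
// so GL_LIGHT0 + i is valid for i in [0, 7].
void enableEightLights() {
    glEnable(GL_LIGHTING);
    for (int i = 0; i < 8; ++i) {
        const GLfloat dir[4]   = {0.0f, 0.0f, 1.0f, 0.0f}; // w = 0: directional
        const GLfloat white[4] = {1.0f, 1.0f, 1.0f, 1.0f};
        glEnable(GL_LIGHT0 + i);
        glLightfv(GL_LIGHT0 + i, GL_POSITION, dir);
        glLightfv(GL_LIGHT0 + i, GL_DIFFUSE,  white);
    }
}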
 
DaveBaumann said:
I'm really beginning to wonder whether they have not only their legacy integer pipes in there, but also the legacy T&L unit. Operating at 500MHz would make it very fast for this sort of processing; however, most games will be using vertex shaders from now on.

Hmm, interesting. From the GeForce FX launch interview:

nVidia/Geoff Ballew said:
As we moved to more programmability, instead of implementing a smaller Vertex Shader and replicating it we tackled the problem from a different direction. And we have a pool of calculating units and an intelligent scheduling mechanism at the front of it. So instead of an entire Vertex Shader what we have made is a sea of vertex math engines that are all glued together with this interface.

I guess that the extra flexibility that comes from this approach will have a bit of a performance impact. With that in mind, they may indeed have kept a legacy T&L unit - mainly for professional OpenGL apps, as you point out.
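
Purely to make the "sea of vertex math engines" idea concrete, here's a toy software model - entirely hypothetical structure and numbers, not anything from nVidia: a front-end scheduler hands each vertex batch to whichever engine in the pool frees up first, rather than striping work across fixed, replicated shaders.

Code:
#include <cstdio>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Toy model of a scheduled pool of math engines. Each engine reports when
// it next becomes free; the scheduler always dispatches to the earliest one.
int main() {
    const int numEngines = 8;                     // hypothetical pool size
    using Slot = std::pair<long, int>;            // (free_at_time, engine_id)
    std::priority_queue<Slot, std::vector<Slot>, std::greater<Slot>> pool;
    for (int i = 0; i < numEngines; ++i) pool.push({0, i});

    // 32 vertex batches with varying costs (e.g. differing light counts)
    std::vector<long> batchCost(32);
    for (std::size_t i = 0; i < batchCost.size(); ++i)
        batchCost[i] = 10 + (long)(i % 5) * 4;

    long finish = 0;
    for (long cost : batchCost) {
        auto [freeAt, id] = pool.top();           // earliest-free engine
        pool.pop();
        long done = freeAt + cost;                // engine busy until 'done'
        pool.push({done, id});
        if (done > finish) finish = done;
    }
    std::printf("all batches done at t=%ld\n", finish);
}

The appeal of the pool is load balancing; the cost is the scheduling front-end itself, which is presumably where the performance impact mentioned above would come from.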

Quite the behemoth, this GF FX.
 
DaveBaumann said:
I'm really beginning to wonder whether they have not only their legacy integer pipes in there, but also the legacy T&L unit. Operating at 500MHz would make it very fast for this sort of processing; however, most games will be using vertex shaders from now on.
I have two theories to offer here:

1. nVidia's unified driver architecture demands that emulation of previous architectures performs well from the start, particularly with regard to the fixed-function pipeline. This assumes that nVidia's earlier drivers (TNT, GeForce 256) didn't offer much in the way of programmability, and thus required low-level hardware support. I guess what I'm trying to say is that the engineers are forced to optimize legacy functionality first, so that it can be built directly into the UDA portion of the chip. For higher-level programmable functionality (>=GeForce3), the hardware is likely much more flexible on the driver side of things (more hardware tweaking can be done through drivers...).

2. nVidia's new units in the FX make use of 16-bit floats where possible. Specifically, these could be used for OpenGL lighting to great effect, which may explain the high 8-light score (a quick sketch of the precision trade-off follows below).
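
To show why 16-bit floats would be attractive here, a quick C++ sketch of the precision trade-off when accumulating 8 light terms. The quantiser is my own crude stand-in (truncates to fp16's 10 mantissa bits, ignores NaN/Inf/denormals), not how the hardware actually does it:

Code:
#include <cstdint>
#include <cstdio>
#include <cstring>

// Crude float32 -> float16 -> float32 round trip: keep 10 of the 23
// mantissa bits by truncation. Illustration only.
static float quantizeHalf(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);
    int32_t exp = (int32_t)((bits >> 23) & 0xFF) - 127;  // unbiased exponent
    if (exp < -14 || exp > 15) return f;  // outside fp16 normal range
    bits &= 0xFFFFE000u;                  // drop the low 13 mantissa bits
    std::memcpy(&f, &bits, sizeof bits);
    return f;
}

int main() {
    // 8 diffuse contributions (hypothetical N.L * intensity values)
    const float term[8] = {0.91f, 0.13f, 0.42f, 0.07f,
                           0.66f, 0.28f, 0.55f, 0.19f};
    float full = 0.0f, half = 0.0f;
    for (int i = 0; i < 8; ++i) {
        full += term[i];                                    // fp32 accumulation
        half = quantizeHalf(half + quantizeHalf(term[i]));  // fp16-ish
    }
    std::printf("fp32 sum %.6f, fp16-ish sum %.6f, error %.6f\n",
                full, half, full - half);
}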

I really don't believe that they'd carry the entire pipeline over from previous chips and just bolt a new one on top of it. The vertex processing portion just didn't change that much, and the old pipeline was forced to use 32-bit floating point for accurate rendering anyway.
 
Sorry for the OT post, but Chalnoth's signature reminded me of the following quote:

Engineering is like sex. Make one mistake and you'll be supporting it for the rest of your life.
 
Simon F said:
Sorry for the OT post, but Chalnoth's signature reminded me of the following quote:

Engineering is like sex. Make one mistake and you'll be supporting it for the rest of your life.

I thought it was Programming instead of Engineering. :)

later,
 