Some clarifications needed...

alexsok

Regular
On the following Siggraph 2002 summary page:
http://www.flipcode.com/misc/siggraph2002.shtml

The following was said (I bolded the important things):

Like the ATI Radeon 9700, the NVIDIA CineFX cards offer a 128-bit frame buffer and floating point pipeline. NVIDIA raises the bar with 32-bit floats (vs. 24-bit) in the pipeline and blows away ATI’s vertex and pixel shaders. Instead of offering a few hundred instructions, they went for 1k (that’s a thousand!) instructions in pixel shaders and 64k instructions in vertex shaders. These include branches and loop instructions. As has been the case in the past, the NVIDIA card promises higher raw performance and longer pipelines but gives less texture support. These vertex and pixel shaders may eclipse the ATI Radeon 9700’s limits when the CineFX cards ship, but the ATI card has twice the texture access rate. However, the cards are so difficult to program that it may be the tools and not the hardware that makes the difference in the end.

And this:
The full specs for the NVIDIA CineFX cards have not yet been publicly released. Ideally, the card will ship with as many texture units as they can cram onto it, support for two-sided stencil testing and GL_DEPTH_CLAMP (proposed in Everitt and Kilgard’s shadow paper), and nView multi-monitor support without needing a dongle like the GeForce4 cards. NVIDIA should also get those extensions into drivers for older hardware like the GeForce3 so game developers can start using them right away. Losing alpha blending on the Radeon 9700 when operating in 128-bit mode is really unfortunate. Hopefully NVIDIA’s card will make up for trailing ATI’s next generation card by four months by having full alpha blending support or a hardware accumulation buffer for the same purpose.

So my question is: do you guys think the things I bolded are true? Especially the claim that "the ATI card has twice the texture access rate"?
 
How could it have less texture access capability? That would be kind of weird... imagine the NV30 being able to access fewer textures per clock than the GeForce4.

I imagine it might be possible if there were only one texture access port for every two pixel pipelines, but that seems like a very odd situation indeed.

Maybe it has to do with the card not being able to access the same texture as many times?

Or maybe he misspoke and the exact opposite is true (many now think that the NV30 has an 8x2 architecture).

Another possibility is that it's not texture accesses the NV30 can do fewer of, but that it has half the texture filtering power. That would probably mean significantly lower anisotropic filtering performance.
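Just to put rough numbers on what "twice the texture access rate" could even mean: peak texel throughput is basically pipelines x texture units per pipeline x core clock. Here's a quick back-of-the-envelope sketch in Python; the pipeline counts, TMU counts, and the 325 MHz clock are made-up placeholders for an 8x1 vs. an 8x2 layout, not confirmed R300 or NV30 specs.

```python
# Hypothetical texel-throughput comparison. All figures below are
# illustrative assumptions, not actual R300/NV30 specifications.

def texels_per_second(pipelines: int, tmus_per_pipeline: int, core_clock_mhz: float) -> float:
    """Peak texture samples per second = pipes * TMUs per pipe * clock."""
    return pipelines * tmus_per_pipeline * core_clock_mhz * 1_000_000

# An 8x1 design vs. an 8x2 design at the same (made-up) 325 MHz clock.
design_a = texels_per_second(pipelines=8, tmus_per_pipeline=1, core_clock_mhz=325)
design_b = texels_per_second(pipelines=8, tmus_per_pipeline=2, core_clock_mhz=325)

print(f"8x1 design: {design_a / 1e9:.1f} Gtexels/s")  # 2.6 Gtexels/s
print(f"8x2 design: {design_b / 1e9:.1f} Gtexels/s")  # 5.2 Gtexels/s
```

So at equal clocks, a second TMU per pipeline doubles the raw texture access rate, which is the kind of 2x gap the article seems to be talking about; a higher clock on one side could obviously narrow or widen it.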
 