You said that they wouldn't get near their theoretical triangle limit. Yet both do; in fact, the GeForce is even closer. It's just that the Radeon's limit is that much higher, so it's still far ahead.
And I still stand by it in the context in which I mentioned it. Maximum theoretical triangle rates are just that: theoretical numbers. They'll never be reached under real-time gaming conditions, unless we're talking about a damn simplistic case scenario, which my earlier, admittedly weird example attempted to illustrate. If I could get in excess of 150-200M triangles/sec out of each of those accelerators under today's real-time gaming conditions, then I guess by the same logic the original GF2 MX reached its advertised 20M triangles/sec under the real-time gaming conditions of its day too.
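Do the back-of-the-envelope math yourself if you don't believe me. A quick sketch (the frame rate and per-frame scene figures below are purely illustrative assumptions, not measurements):

# What a sustained 20M triangles/sec peak would actually mean per frame.
peak_rate = 20_000_000        # advertised peak, triangles/sec
fps = 60                      # assumed target frame rate

tris_per_frame = peak_rate / fps
print(f"{tris_per_frame:,.0f} triangles/frame at peak")   # ~333,333

# An assumed, era-typical per-frame triangle count for comparison.
typical_scene = 20_000
print(f"peak utilization: {typical_scene / tris_per_frame:.1%}")   # ~6.0%

No game of that era came anywhere near feeding the chip that many triangles per frame, which is exactly why the advertised number stayed theoretical.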
What does that have to do with anything?
Everything, since those VS units get used for everything from simple T&L up to VS2.0 or VS2.0+, depending on their maximum compliance (see the sketch below). Kindly take into consideration all case scenarios and not just those that serve your own point best. There are tons of recent games out there that still use simple T&L code, some use VS1.1 calls, and the more advanced ones VS2.0, the last obviously being the fewest.
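For anyone wondering why simple T&L still exercises the very same unit: on a programmable part, the driver simply maps the fixed-function pipeline onto a small vertex program. A rough sketch of the math it boils down to (illustrative only, obviously not actual driver code; assumes numpy):

import numpy as np

def fixed_function_tnl(position, normal, mvp, light_dir):
    # Object-space position -> clip space via one 4x4 matrix,
    # exactly what the fixed-function transform stage does.
    clip_pos = mvp @ np.append(position, 1.0)
    # Classic per-vertex diffuse term: max(N . L, 0).
    diffuse = max(float(np.dot(normal, -light_dir)), 0.0)
    return clip_pos, diffuse

mvp = np.eye(4)                         # identity MVP, just for the example
vertex = np.array([1.0, 2.0, 3.0])
normal = np.array([0.0, 0.0, 1.0])
light_dir = np.array([0.0, 0.0, -1.0])  # light shining down -Z
print(fixed_function_tnl(vertex, normal, mvp, light_dir))

Whether a game issues fixed-function calls, VS1.1 or VS2.0, it's the same programmable hardware doing the work, which is precisely why all those cases matter when evaluating it.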
Then you say they don't get near their theoretical limit, so I show some figures that indicate that they do.
Those links I provided before included those very same numbers. You showed me zilch, nada, that I hadn't seen or linked to before.
Who cares if they use VS1.1, 2.0 or fixed T&L? Especially on the Radeon it doesn't matter, since everything uses the same unit. I'm not too sure how the GeForce does it; I believe it still had a separate fixed-function unit.
ROFL, NOT SINCE THE NV20. You were saying? And yes, I obviously care, for the simple fact that what matters is what a unit gets used for, not what its exact compliance is. For me it means that, in all fairness, I'd have to evaluate its usability in all possible scenarios and for what it mostly gets used for in games.
We WERE concentrating solely on the vertex-processing aspect, yes.
Again, today's vertex processors take on everything from simple T&L functions up to the highest vertex shader calls within their compliance.
You mean the completely horrible PS2.0 implementation? Yes, but that's not what we are discussing now.
Yes, that one, and it was truly close to horrible. Irrespective of that, the NV3x line's weaknesses lay mostly there and NOT in the vertex processing department (feel free to take the static flow control performance of the NV3x as an exception).
If it is a fact, there must be proof. I don't have to acknowledge anything that isn't proven. So come on, present the transistor counts of all the parts then. Else it's your word against mine, and I will continue to say that since pixel shader units are less complex than vertex shader units, they require fewer transistors, which of course makes perfect sense.
When I say comparative terms, I obviously do not mean only Pixel Shader SIMDs, but NV4x's Vertex Shader MIMDs too. Texture samplers are expensive in hardware; MIMDs even more so. You need proof of what? Even if I had exact transistor counts, I wouldn't be foolish enough to hand them out. Feel free, on the other hand, to think that the ~60M transistor difference between R420 and NV40 is due exclusively to the latter's PS3.0 support and nothing else. And it's by far not my word against yours; you'd better have a closer look at what most had to say in this thread.
Gee, interesting. Also the fact that they don't test against any gaming cards from ATi or NVIDIA, and don't bother to test any games.
Apparently that card is nothing more than a low-budget professional card. I wouldn't be surprised if it didn't actually have Direct3D drivers.
In short, this is NOT a mainstream card, but aimed solely at professional applications, quite unlike the ATi and NVIDIA cards.
Of course it had D3D drivers. Here's the entire family of former-generation 3DLabs accelerators:
http://www.3dlabs.com/products/family.asp?fami=6
Here the VP560:
http://www.3dlabs.com/products/product.asp?prod=264
The new, unified 3Dlabs Acuity Driver Suite runs across the entire Wildcat VP family and includes highly optimized OpenGL and Direct3D drivers, a customized driver for 3D Studio Max and the new 3Dlabs Acuity Window Manager that provides precision window control over multiple displays.
There was in fact one review of the P9 with games that I can remember, from a guy named Modulor, but the site sadly no longer exists. Of course it performed really badly in games, and it was obviously aimed at professional applications only.
Current price ranges of the older generation products (lowest):
VP560 = $140
VP760 = $246
VP870 = $329
VP880 Pro = $329
VP990 Pro = $544
That's a complete former-generation product line from top to bottom, and while the 560 is the lowest player of them all, the other offerings apparently aren't. Digit-Life compared it to the same-generation products of ATI/NV back then, in which it did more than just well, especially against its direct competitors, the Fire GL 8700 and the Quadro 550 (if my memory serves me well, the GL 8700 was some sort of "8500LE" for the professional market).
ATI today:
http://www.ati.com/products/workstation/fireglmatrix.html#agp
http://www.xbitlabs.com/news/video/display/20040806084441.html
The FireGL Visualization series includes the entry-level FireGL V3100 with 128MB of memory, 4 pixel pipelines and 2 vertex processors; the FireGL V3200 which adds stereo 3D capabilities; the mid-range FireGL V5100 with 128MB of memory, 12 pixel pipelines, 6 vertex processors and stereo 3D output and the high end FireGL V7100 with 256MB of GDDR3 memory, 16 pixel pipelines, 6 geometry engines, stereo 3D and dual link capabilities.