[quote]The argument here is GeForce 1 vs. PS2. Given the GF1 has no pixel shaders, we should discuss the other, non-pixel-shader areas... I brought up the Xbox because it seems that in many of those other areas the PS2 is not far behind what is basically a GF4... so if it's not too far behind a GF4 in the non-pixel-shader areas, then compared to a GF1... you get the idea.[/quote]
What are the major differences between the GF4 and the GF3? Another vertex shader and LMA II. What are the major differences between a GF2 and a GF1? Build process and another TMU per pixel pipe. What is the fill difference between the NV2A and the NV15 (GF2)? Compared to the particular NV15 that I run: 1.86 GTexels vs. 1.76 GTexels. Take away the vertex shaders and pixel shaders and the GF3/GF4 aren't all that different from the GF1/GF2 (there are some differences, particularly in architectural efficiency and supported filtering modes, but overall...). If you say that outside the pixel shaders the PS2 isn't far behind the Xbox, you are saying it is comparable to a GF2 with a strong vertex unit, which is itself only marginal over the GF1.
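For what it's worth, both figures fall out of the usual back-of-the-envelope fillrate math: core clock × pixel pipes × TMUs per pipe. A minimal sketch, assuming a 233 MHz NV2A core and an effective ~220 MHz on the NV15, both with a 4-pipe × 2-TMU layout; the clocks here are my assumptions, picked to match the numbers above:

[code]
# Theoretical texel fillrate = core clock * pixel pipes * TMUs per pipe.
def texel_fillrate_gtexels(clock_mhz, pipes, tmus_per_pipe):
    return clock_mhz * 1e6 * pipes * tmus_per_pipe / 1e9

# NV2A: assumed 233 MHz core, 4 pipes, 2 TMUs per pipe
print(texel_fillrate_gtexels(233, 4, 2))  # ~1.86 GTexels/s
# NV15 at an assumed ~220 MHz core, same 4x2 layout
print(texel_fillrate_gtexels(220, 4, 2))  # ~1.76 GTexels/s
[/code]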
[quote]Anyway, the CPU side of this argument is kind of funny: since the GF1 will always be CPU-limited, we can use CPUs from years, decades, or centuries later...[/quote]
No, it won't. Once the framerate ceases to rise at 640x480 with faster CPUs, the limit is the GeForce chip. Load it up with enough T&L work, or with stressful enough fillrate conditions, and that will happen even with current CPUs. I haven't seen it yet, though.
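To put that test in concrete terms, here's a minimal sketch of the plateau check; the framerates and the 5% tolerance are hypothetical, just to illustrate the method:

[code]
# If fps stops scaling with a faster CPU at a low resolution (640x480),
# the bottleneck has moved from the CPU to the graphics chip.
def looks_gpu_limited(results, tolerance=0.05):
    """results: list of (cpu_mhz, fps) pairs."""
    results = sorted(results)
    (_, slower_fps), (_, faster_fps) = results[-2], results[-1]
    # fps barely moved despite a faster CPU -> plateau -> GPU-limited
    return (faster_fps - slower_fps) / slower_fps < tolerance

# Hypothetical numbers: fps keeps climbing with the CPU, so the
# GeForce isn't the limit yet at this resolution.
print(looks_gpu_limited([(700, 60.0), (1000, 82.0), (1400, 110.0)]))  # False
[/code]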
[quote]We can take benchmarks from games that do part of their T&L on the CPU and would massively benefit from such upgrades, and attribute the performance to the GF1...[/quote]
I haven't seen a game that has an edge running soft T&L over hard T&L @640x480 yet, no matter which CPU it is.