ANova said:
The R420 may not be as big of a difference architecturally from the R300 as the NV40 is from the NV30, however that does not mean the NV40 is better than the R420. Anyone with any knowledge will tell you the R300 was a big step ahead of the NV30; the NV40 corrects many of these issues. In order for the NV40 to do this nvidia had to drastically overhaul their architecture. ATI had it right the first go around, so naturally they didn't have to try as hard to achieve similar results to those of the NV40.
I do not want to pass judgement on either architecture (except for my personal usage, as that is the only area in which I may consider myself an authority).
Nor do I say that either of the two chips, nV40 and R420, is better or worse than the other. They were simply designed with different emphases on what was to be achieved.
You say the R300 was a big step ahead of the NV30. Performance-wise you're correct. Feature-wise, one could argue about which features are more important than others.
Additionally, these two were designed under different circumstances and with different ideas as to what DirectX9 would eventually become. nV was proposing FP16 as "full precision" (with FP32 as an added bonus for the pro user doing scientific work in offline rendering, hence the "speed" penalty), with a fall-back option to INT12 as what is known as partial precision. ATi finally got their idea of FP24 as "full precision" cemented in DX9, and so the nV30 was forced to use its FP32, intended for offline rendering, to render every DX9 shader that did not include pp-hints. The delay of the nV30 did not help much, either.
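To put rough numbers on the precision gap being argued over, here is a back-of-the-envelope sketch (the mantissa widths are the standard ones for these formats: s10e5 for FP16, s16e7 for ATi's FP24, s23e8 for FP32):

    # Relative precision (one ulp at 1.0) of the three shader float formats
    # discussed above, determined by the mantissa width of each format.
    FORMATS = {
        "FP16 (nV partial precision, s10e5)": 10,
        "FP24 (DX9 full precision, s16e7)": 16,
        "FP32 (nV30 offline precision, s23e8)": 23,
    }

    for name, mantissa_bits in FORMATS.items():
        eps = 2.0 ** -mantissa_bits
        print(f"{name}: eps = 2^-{mantissa_bits} ~ {eps:.1e}")

So a shader that would have been happy with FP16 but carried no pp-hint got pushed all the way up to FP32 on nV30, i.e. roughly 2^13 (about 8000x) finer precision than requested, paid for in speed.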
In the following year ATi did wipe the floor with everything "green" DX9-wise.
Today the situation has changed. The nV40's final design phase was late enough to be adapted to the final DX9 specs, so there is no longer a 250% performance delta: both chips could be designed to an already given spec. Of course, nV had to do their homework more thoroughly than ATi, who already had a very fast and efficient DX9 architecture.
Maybe ATi was not sure whether their 10% lead out of a 30% fillrate advantage would convince enough consumers to buy their products - I don't know. But maybe that is why they decided to stretch that lead artificially to 30% or more - welcome to the wonders of an undefined texture filtering specification (blame M$ - OpenGL at least defines a formula for trilinear filtering).
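For reference, the OpenGL formula alluded to above just blends the two nearest mip levels by the fractional LOD; the "optimized" filters shrink that blend band. A minimal sketch of both (the band width here is purely illustrative, not any vendor's actual value):

    import math

    # Full trilinear as the OpenGL spec defines it: linearly blend the two
    # nearest mip levels by the fractional part of the level of detail.
    def trilinear(sample_level, lod):
        d, f = math.floor(lod), lod - math.floor(lod)
        return (1.0 - f) * sample_level(d) + f * sample_level(d + 1)

    # A reduced-blend ("brilinear"-style) variant: blend only within a narrow
    # band before the mip transition and fall back to cheaper single-level
    # (bilinear) sampling everywhere else.
    def reduced_trilinear(sample_level, lod, band=0.3):
        d, f = math.floor(lod), lod - math.floor(lod)
        f = min(max((f - (1.0 - band)) / band, 0.0), 1.0)  # 0 outside the band
        return (1.0 - f) * sample_level(d) + f * sample_level(d + 1)

Since DX9 never pins the blend down to a formula, a driver can narrow that band as far as it likes and still call the result "trilinear" - which is exactly the lead-stretching described above.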
ANova said:
If you compare the NV40 to the R300 it's nothing more than speed enhancements as well, along with SM3 support.
If you compare the R300 to the R200 it's nothing more than speed enhancements as well, along with SM2.0 support.
edit:
Damn, Bjorn beat me to this....
ANova said:
It is also unfair to ATI to overlook the fact that both the R420 and R300 are smaller and consume less power than nvidia's offerings. In fact the X800 XT, while being superior to the R300 speed-wise as well as offering new features such as 3Dc, TAA and SM2.0b, draws less power than the 9800 XT. The same can hardly be said for nvidia.
Well, according to my measurements, ATI indeed did a wonderful job of keeping power consumption, and thus heat generation, low. In fact, they did such a good job in this regard that I did not believe it when I first measured the power consumption of R420 chips, and went to the store to have my test device checked.
But, to an extent, the same can be said about nVidia. Of course not in comparison with the R420 line of chips, but compared to the FX5800U and FX5950, the 6800U as a whole uses less power in 3D applications, despite having almost doubled the transistor count, added SM3, RGAA, FP texture filtering and tone mapping via the RAMDAC, quadrupled the number of pipelines, etc.