Yeah, obviously. For a fairer comparison, it would be required to
They tripled the register file as well? Interesting if true. Do you have a reference for that?
They definitely did, according to Eric Demers. They didn't triple the control logic though, and it'd be hard to estimate how much silicon that represents.
If R580 is 27.3% then G80 is likely 30-40%.
I fail to see how you get to 30-40% for G80. The ALU ratio is most likely a fair bit lower than R580's, IMO, no matter how we count the VS pipelines. The R580 has 16 TMUs and 16 ROPs, while the G80 has 32 or 64 TMUs (depending on how you count) and 24 ROPs, and both kinds of units are a fair bit more capable than R580's (the 4x MSAA zixel rates should illustrate that nicely for the ROPs). Furthermore, if you exclude the MUL (which the R580 very arguably also has, they just don't advertise it), the R580 has more PS-only GFLOPS than G80 has for VS+PS combined.
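Back-of-envelope, and assuming the usual retail clocks (roughly 650MHz for R580 in X1950 XTX form, 1.35GHz shader clock for the 8800 GTX) plus the commonly quoted per-clock throughputs: R580's pixel shaders peak at about 48 pipes x 12 flops/clock x 0.65GHz, so ~374 GFLOPS, while G80 without the "missing MUL" is about 128 SPs x 2 flops/clock x 1.35GHz, so ~346 GFLOPS for VS and PS combined. Counting the MUL pushes G80 to ~518 GFLOPS, which is why how you count it matters so much here.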
G80 likely set the ratio for the next generations. [...] Seriously, I believe G80 already pushes the ALU/TEX ratio to the extreme. If increasing it even more actually improved effective performance, I'm sure they would have.
NVIDIA's strategy is and has always been the same: optimize for the games that will be benchmarked on the card's release date, not the ones that will be benchmarked 6-12 months later. If you want a rough idea of why that makes sense, look at Anandtech's review of G80. Out of the 7 games they used, I can only see 2 that might kinda sorta stress G80's ALU-TEX ratio. And even then, those two games (F.E.A.R. and Oblivion) benefit a lot more from other attributes of G80's architecture, such as the extremely fast Z and stencil rates for F.E.A.R., and cheap FP16 filtering/blending for Oblivion.
Given G8x's apparent architectural flexibility, and future workloads, it is extremely safe to say that G80 did NOT set NVIDIA's ratio for the next generations. I would be extremely surprised if that ratio didn't go up within the next 6 months.
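To put a very rough number on that ratio: with the same clock assumptions as above, R580 has ~374 GFLOPS against ~10.4 GTexels/s of bilinear throughput (16 TMUs at 650MHz), which is in the ballpark of 36 flops per texel, whereas G80 minus the MUL has ~346 GFLOPS against ~18.4 GTexels/s (counting the 32 addressing units at 575MHz), so somewhere around 19 flops per texel, and even less if you count the 64 filtering units instead. Crude numbers, obviously, but they hardly paint G80's ALU/TEX ratio as an extreme.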
Finally, it becomes increasingly interesting to produce only dual-core dies (currently packaging two of them together to form a quad-core). So once they produce quad-core on a single die, dual-core will disappear from the budget market as well.
Yup, that's kinda true. Intel doesn't seem to be planning a single-die quad-core on the Conroe architecture though, so their first native quad-core will be Nehalem. Who knows how much bigger (or smaller?) each of those dies will be. But then, the budget CPU might be 2 wider cores with 4 threads total, so applications would definitely have to be ready to handle more than 2 threads anyway.
Going multi-threaded and doing extra work on the GPU are both investments. But going multi-threaded has to happen sooner or later anyway, and it avoids the risk of bogging down cheaper GPUs.
That's definitely true for the next 2 years or so for core gameplay elements, IMO. However, for things like effects physics (which you can easily scale down without affecting gameplay), I don't think it really matters if you bog down a low-end GPU; if it didn't make sense to do that, it wouldn't make sense to design any feature that requires more performance than low-end GPUs (or CPUs) can offer.
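Just to illustrate what I mean by scaling effects physics down without touching gameplay, here's a minimal C++ sketch (purely hypothetical, not from any real engine): gameplay physics runs identically everywhere, and only the cosmetic particle budget changes with a detected performance tier.

#include <cstddef>
#include <cstdio>

// Hypothetical performance tiers; in a real engine this would come from
// hardware detection or a user-facing quality setting.
enum class PerfTier { Low, Mid, High };

// Effects-only physics budget (debris, sparks, smoke): purely cosmetic,
// so scaling it down on a low-end GPU never changes game state.
std::size_t effectsParticleBudget(PerfTier tier)
{
    switch (tier) {
        case PerfTier::Low:  return 2000;   // cheaper hardware just sees fewer particles
        case PerfTier::Mid:  return 10000;
        case PerfTier::High: return 50000;
    }
    return 2000;
}

int main()
{
    // Gameplay-critical physics would step identically on every tier;
    // only the effects budget differs.
    std::printf("low-end effects budget: %zu particles\n",
                effectsParticleBudget(PerfTier::Low));
}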
In the end, I wonder how much of this argument makes sense, since it's very debatable at this point whether we'll even see a "many-core" future. AMD seems to want to max out at 4 cores for the desktop, while Intel seems to favour 4 cores and 8 threads for the 2008 timeframe. It's likely they'll try 8 cores and 16 threads with two chips, but how likely is that to give any boost whatsoever to applications in the 2009 timeframe? It would certainly be interesting if Intel also took the APU road by 2010 with Gesher... (in fact, it looks like they're aiming at (differentiated?) micro-cores... hmm. Scheduling a single thread's instructions across those cores would certainly come in handy for them!)
Uttar