Something doesn't quite add up with that. I'd expect the higher clocks and especially boost clock of the 660 Ti to have a higher TDP than the 670 even taking the missing shader block into consideration.
You will never find a 670 that really runs at 980 MHz; they all run at 1054+ MHz at minimum (the reference cards used in press reviews nearly all ran at 1084+ MHz in all games). And of course the 1033 MHz could be specific to this particular card.
Yes but I'm assuming the same applies to the 660 Ti. The clock increase is enough to make me think there is a voltage increase there.
Can 15% higher clocks and 15% fewer shaders also lead to a 15% lower TDP? The really annoying thing about this is that Nvidia can seed the absolute best chips (i.e. chips that would normally become 680s) to the press as "660 Tis", when in reality the average 660 Ti chip could be an awful lot worse.
Nvidia's gpu on next xbox? http://www.eurogamer.net/articles/digitalfoundry-the-curious-case-of-the-durango-devkit-leak
And I'm sure every devoted 3DMark 11 player is thrilled, but otherwise it looks to be 90~95% as fast as the 670, which puts it roughly on the same level as the 7950.
jimbo75 said: So hard to judge its actual performance due to Baxter's ludicrous choice of games.

I dare suggest that it's supremely easy to judge actual performance for the games that he tested. For the non-ludicrous games he selected (does that mean AMD-friendly?), such as Metro 2033, the performance degradation vs a GTX 670 is pretty much the same as for the other, ludicrous, games. So it's hard to see what's so hard about extrapolating these results to other games...
I didn't realize the 670 was EOL.
Looking below you can see we're dealing with 1344 CUDA cores and while not shown, we're on a 28nm GPU.
It seems they used Catalyst 12.4 for all Radeons except the 7970 GE, which was tested on 12.7 (see Dirt 3, where the HD 7970 GE is 35% faster than the original HD 7970...).