Nvidia white papers say G100 has 512 cores too. And that GT200 has 240 cores, and that G80 has 128 cores. Nvidia white papers generally aren't reliable sources of information, unfortunately.
And the dark secret of GT200 is that it's the kludge that NVidia used, because G100 (now called GF100) was too ambitious for the end of 2007.

Interesting take overall.
You don't think that AMD wrote pretty much new RTL when they went from K6 to K7? Or Intel from P5 to P6, or from P6 to P4? Or DEC from EV5 to EV6? Or IBM from Power6 to Power7, or from Power4/5 to Power6?
Oh, OK, so because NVIDIA calls them "CUDA Cores", that automatically invalidates everything else that was mentioned about the well-more-than-2x improvements over GT200 in texture filtering performance, performance with high AA, compute performance, etc.? What a convenient way to side-step the question. This is getting Beyond Silly (no pun intended). If a new graphics card has far higher efficiency than an older graphics card in several key areas, then how in the world can one claim that the architecture is "basically" the same? That makes no sense. Common sense would dictate otherwise, irrespective of what a snobby "computer architect" may think.
Apparently, because you see a similar SIMD structure, you think that this is the same architecture. Given that Fermi has the same shader execution mechanism as G80 and a similar mechanism for shader clustering, do you not see Fermi as the same architecture?
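(For readers outside the thread: the "same shader execution mechanism" here is SIMT, where both G80 and Fermi issue threads in 32-wide warps that run in lockstep. A minimal CUDA sketch of the visible consequence, warp divergence; the kernel name and launch sizes are illustrative only, not anything from the thread:)

```cuda
#include <cstdio>

// A branch that splits a warp forces both paths to execute serially,
// with inactive threads masked off -- true on G80 and on Fermi alike.
__global__ void divergent(float *out)
{
    int i = threadIdx.x;
    if ((i % 32) < 16)
        out[i] = i * 2.0f;   // first half-warp takes this path
    else
        out[i] = i * 0.5f;   // second half-warp takes this one
}

int main(void)
{
    float *d_out;
    cudaMalloc(&d_out, 64 * sizeof(float));
    divergent<<<1, 64>>>(d_out);
    cudaDeviceSynchronize();
    cudaFree(d_out);
    return 0;
}
```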
So NVidia took G80, tacked on DP, tweaked the crappy TMUs, increased the ALU:TEX ratio, fixed the memory controllers to coalesce, and changed a few other minor details.
And the CUDAholics loved it.
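(The memory-controller point is concrete: G80 only coalesced a half-warp's accesses if thread k hit word k of an aligned segment, while GT200 merges any accesses that fall within the same segment. A kernel-only CUDA sketch of the access patterns in question; the names and the stride parameter are illustrative:)

```cuda
// Consecutive threads hitting consecutive addresses: one transaction
// per half-warp on G80 and later -- the pattern coalescing rewards.
__global__ void copy_coalesced(const float *in, float *out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    out[i] = in[i];
}

// Strided access scatters a half-warp across segments: on G80 this
// broke coalescing entirely (16 separate transactions); GT200's
// relaxed rules merge whatever still lands in the same segment.
__global__ void copy_strided(const float *in, float *out, int stride)
{
    int i = (blockIdx.x * blockDim.x + threadIdx.x) * stride;
    out[i] = in[i];
}
```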
It's curious that you picked only the things that didn't change from G80 to Fermi (and in this discussion most of us already agreed that some things rarely change), and curiously neglected to mention the major changes in the cache hierarchy, geometry processing, ECC support, the GPC modules, the SP/DP units, etc...
If you go by only the things that didn't change, then you'll have an even harder time finding the differences from RV670 to RV770...
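(On the cache-hierarchy point: Fermi is the first of these chips with a read/write L1 per SM backed by a shared L2, and CUDA exposes the per-SM split between L1 and shared memory through cudaFuncSetCacheConfig. A minimal sketch; the kernel and sizes are made up for illustration:)

```cuda
#include <cstdio>

__global__ void touch(float *data)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    data[i] += 1.0f;   // global accesses now pass through L1 and L2
}

int main(void)
{
    // GF100 splits 64 KB per SM between L1 and shared memory
    // (48/16 or 16/48); this call requests the larger L1.
    cudaFuncSetCacheConfig(touch, cudaFuncCachePreferL1);

    float *d;
    cudaMalloc(&d, 1024 * sizeof(float));
    touch<<<4, 256>>>(d);
    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}
```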
Why wouldn't they? The result of those "minor tweaks" was still better than what the competition could produce. Or are we doing this in a vacuum now?
But is it more efficient? Is it better than 1½ x Cypress?
Chill out, guys. GF100 is not a revolution. But this can be: http://www.marketwatch.com/story/su...et-in-q2-2010-2010-03-10?reflink=MW_news_stmp
OK, let's make a wireless energy transmitter... Wow, we've got it! ... It's a revolution! Nobody will use it because it's not as wireless as we thought, but it's revolutionary!!!
LOL! I wonder why that is revolutionary... Oh! I know! Because it's AMD/ATI... It has to be!
I've long maintained that the only people NVidia was interested in with GT200 was the CUDAholics. DP was tacked on for a reason.
No, it's not curious; I did that because that is exactly what you are doing where AMD architectures are concerned. I can give you an equally long list of things that changed from R6xx->RV7xx (and again from RV7xx->Evergreen), so why do you view RV7xx as not a new architecture and Fermi a new one?
That would greatly depend on what tests you throw at it. In DP math it could very well be more efficient, whereas in raw bilinear texturing throughput I don't see much of a chance for Fermi.
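(A hedged back-of-the-envelope check of that claim, using public spec-sheet figures that are my assumptions rather than numbers from this thread:)

```cuda
#include <cstdio>

// Peak DP = DP FMA lanes * 2 FLOPs/clock * shader clock (GHz).
// Peak bilinear = TMUs * core clock (GHz).
int main(void)
{
    // GT200 (GTX 285): 30 SMs with one DP unit each @ 1.476 GHz;
    // 80 TMUs @ 648 MHz.
    printf("GT200 DP : %5.1f GFLOPS\n", 30 * 2 * 1.476);      // ~88.6
    printf("GT200 tex: %5.1f Gtexels/s\n", 80 * 0.648);       // ~51.8

    // GF100 (Tesla C2050): 448 cores @ 1.15 GHz, DP at half the SP
    // rate (GeForce parts are capped lower); GTX 480: 60 TMUs @ 700 MHz.
    printf("GF100 DP : %5.1f GFLOPS\n", 448 * 2 * 1.15 / 2);  // ~515
    printf("GF100 tex: %5.1f Gtexels/s\n", 60 * 0.700);       // ~42.0
    return 0;
}
```

If those figures hold, that is roughly a 6x jump in peak DP alongside a slight texturing regression, which is the trade-off being described.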
I can give you an equally long list of things that changed from R6xx->RV7xx (and again from RV7xx->Evergreen), so why do you view RV7xx as not a new architecture and Fermi a new one?
The graphics-specific changes are colossal compared to the compute-specific changes in Fermi. GF100 was designed much more around compute than graphics.