It's probable, but in that case AMD would relocate some of those DrMOS chips from the 6800 to the 6900. Initial shortages (which are inevitable in any case) are better than postponing the launch, IMO.
Might not make sense if 6800 is where the money is.
The 6900 clearly has higher margins. If there are, let's say, 100,000 DrMOS chips, would you rather sell 100k 6800s, or share the DrMOS between the 6800/6900, get better profits, and keep the launch date? It's a no-brainer really. The big question is how fast TI can solve this shortage issue (if there is one in the first place; too many fake rumors).
Come on, it's almost been a 'known' thing that the XT would be 1920SP with 30 SIMDs and the Pro 1536SP with 24 SIMDs. Rumours have been circulating for ages. Also, it's VLIW4.
My speculation:
XT specs:
Core: 900 MHz
Memory: 5 GHz (effective)
VLIW4
1920 SPs
30 SIMDs
Pro specs:
Core: 850 MHz
Memory: 4.8 GHz (effective)
VLIW4
1536 SPs
24 SIMDs
They both end up with 64 SPs per SIMD (1920/30 = 64; 1536/24 = 64), which matches the 64-thread wavefront size.
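A quick back-of-envelope check of that per-SIMD math (a sketch based on the speculated figures above, not confirmed specs):

```python
# Speculated Cayman configs from the post above (not confirmed specs).
configs = {
    "XT":  {"sps": 1920, "simds": 30},
    "Pro": {"sps": 1536, "simds": 24},
}

WAVEFRONT_SIZE = 64  # threads per wavefront on AMD GPUs of this era

for name, c in configs.items():
    sps_per_simd = c["sps"] // c["simds"]
    # A VLIW4 SIMD would be 16 lanes x 4 ALUs = 64 SPs; a 64-thread
    # wavefront then executes over 4 cycles on the 16 lanes.
    print(f"{name}: {sps_per_simd} SPs per SIMD "
          f"(matches wavefront size: {sps_per_simd == WAVEFRONT_SIZE})")
```

Both speculated configs divide evenly to 64, which is why the 1920/30 and 1536/24 pairings hang together.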
Sounds reasonable, yes, but that would introduce a 30% difference from one another. Still, it would give AMD the option to price it 30% higher as well while actually having an excuse.
Where did 30 SIMDs come from? Admittedly I haven't been following Cayman all that closely, but I was under the assumption that Cayman XT had fewer vector lanes in total than Cypress XT.
And where does your assumption come from?
Sorry, terminology fail - I mean ALUs.
Just like with previous designs, I doubt it will scale anywhere close to linearly with SIMD count. So with 20% fewer SIMDs but only a 6% clock deficit, you could end up with a difference similar to that between the HD5870 and HD5850, whose 10% fewer SIMDs and 15% lower clock resulted in about 15% less performance (this is of course a bit simplified, but you get the idea). That said, I'm not sure why AMD wanted to disable so many SIMDs; for power reasons alone it would be preferable to disable fewer and make the clock difference larger instead.
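To make those deltas concrete, here's a rough sketch of the raw-throughput gaps being compared. The Cayman SIMD counts and clocks are the speculation from earlier in the thread; the HD5870/HD5850 figures are the shipping specs. The linear model is deliberately naive - that's the point being argued.

```python
def raw_throughput_ratio(simds_a, clock_a, simds_b, clock_b):
    """Naive linear model: raw throughput ~ SIMD count * clock (MHz)."""
    return (simds_b * clock_b) / (simds_a * clock_a)

# Shipping Cypress parts: HD5870 (20 SIMDs @ 850 MHz) vs HD5850 (18 @ 725 MHz).
cypress = raw_throughput_ratio(20, 850, 18, 725)
print(f"HD5850 raw throughput: {cypress:.0%} of HD5870")       # ~77%

# Speculated Cayman parts: XT (30 SIMDs @ 900 MHz) vs Pro (24 @ 850 MHz).
cayman = raw_throughput_ratio(30, 900, 24, 850)
print(f"Cayman Pro raw throughput: {cayman:.0%} of Cayman XT")  # ~76%
```

On paper the HD5850 has ~23% less raw throughput yet benchmarks only ~15% behind, which is the sublinear scaling the post describes; by the same logic the speculated Pro could land closer to the XT than its ~24% raw deficit suggests.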
Maybe they can deactivate SIMDs independently of TMUs / tessellators etc.?
If you just cut one SIMD per module but keep the rest of the module "intact", then the overall performance delta will stay within the "usual" ~20% - even if clock speeds are cut...
Pretty terrible rebuttal. Nothing you said had anything to do with his point.
He wasn't talking about yields or power consumption. He was talking about measured performance relative to available resources - i.e. efficiency. And yes, you can count aggregate bandwidth since the 5970 is working on multiple frames in parallel.
Hmm... if it's 30 SIMDs for XT and 24 for PRO, how are they arranging it?
Any time you use dual cards you're going to suffer in efficiency, so it's pointless to argue about the raw number of available resources in that context. My point was that his use of those numbers is pretty damn flawed, both from a how-GPUs-actually-function perspective and from a product-management perspective: having two dies doesn't automatically mean you're spending more money on your particular card. It's also pure marketing to say 2x2GB = 4GB when we all know that's not how it necessarily works. And besides, I didn't even have to point out that more total memory bandwidth doesn't automatically mean faster performance; there are plenty of cards out there that are nice examples.
No, the numbers I quoted are crucial to the HD5970's performance and how it actually functions. It really does take 670 mm^2 of silicon from TSMC, and with all the hubbub about AMD's GPU business being wafer-throughput limited, this fact is not to be overlooked. If AMD were to manufacture the 5970 in quantities sufficient to compete with the GTX580, they would be severely limiting the overall number of cards they can produce. The 5970 really does consume 256 GB/s of bandwidth; if you gave each GPU 80 GB/s so that aggregate bandwidth was equal to a 5870's, I guarantee performance would suffer drastically. The 5970 really does have 2.63x the raw floating-point throughput of a GTX580. With the amount of resources AMD has thrown at the 5970, it should completely dominate the GTX580 to make it work business-wise. And yet it doesn't, especially in DX11 games.
Additionally, the HD5970 is very expensive to produce - it probably costs AMD more than 2x the cost of a 5870, since they had to sandwich all those components into a more sophisticated enclosure, with a better cooler, etc. I would guess they also have to use the very best Cypress dies in terms of power characteristics in order to fit their power envelope. "Addressing" the GTX580 with the 5970 would be an economic disaster for AMD.
For all the ink spilled about Nvidia's gargantuan die size, leading to economic doom and gloom for the whole company, I find it remarkable that people here champion the 5970 as a worthy answer to GTX580. As I said in my post, AMD itself doesn't want to "address" the GTX580 with the 5970 - instead Cayman will perform that job. I expect Cayman to perform rather well and give the GTX580 stiff competition - with a tremendously better business case than the 5970 ever had.
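For reference, a back-of-envelope check of the aggregate figures above, using the commonly cited Cypress numbers (334 mm^2 per die; 256-bit GDDR5 at 4.0 Gbps effective on the 5970). Treat these as ballpark values, not official BoM data:

```python
# Commonly cited Cypress / HD5970 figures (ballpark, not official data).
cypress_die_mm2 = 334          # per-GPU die area
bus_width_bits = 256           # per-GPU memory bus width
gddr5_gbps = 4.0               # effective data rate on the HD5970

per_gpu_bw = bus_width_bits / 8 * gddr5_gbps   # GB/s per GPU
total_die = 2 * cypress_die_mm2                # silicon per HD5970 card
total_bw = 2 * per_gpu_bw                      # aggregate board bandwidth

print(f"Silicon per card: {total_die} mm^2")   # 668 mm^2 (~670)
print(f"Per-GPU bandwidth: {per_gpu_bw:.0f} GB/s, aggregate: {total_bw:.0f} GB/s")
```

The 668 mm^2 and 256 GB/s that fall out of this are the ~670 mm^2 and 256 GB/s aggregate figures the argument above rests on; whether aggregating them is legitimate is exactly what the thread is disputing.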
You fell, or deliberately walked into, the 'it's one card so the numbers are cumulative' trap of multi-GPU-on-a-stick. Your conclusion is flawed because your premises are invalid.
My hair is a bird.
I disagree. Rather than flatly stating that I'm wrong, perhaps you could try persuading me to see things differently.
I have never said that multi-GPU scaling should be linear. I'm just pointing out that physically, multi-GPU setups are very demanding. It's an indisputable fact that HD5970 consumes 256 GB/s of bandwidth.