AMD: R9xx Speculation

NathansFortune said:
I fully expect the GTX680 to come in August of next year on the 40nm process node and be massively optimised, with high clocks, better density, better thermals, and all-round better performance.

Good point. He does mention that there will be no gouging, which helps customer goodwill, unlike the 5870 launch, where we saw prices 20% above the recommended price. :(


I have to give you props for working so hard at spinning things :) Don't stop tho, it's entertaining to watch.
 
Indeed. If we aren't going to include the 5970, we shouldn't include the GTX295, and the 5870 was around 20% faster than the GTX285.

Well, I meant per generation. I don't remember the details of the launch dates, but the 480 was faster than the 5870.

When WAS the last gen where ATI had the fastest single card? X1950?
 
Well, I meant per generation. I don't remember the details of the launch dates, but the 480 was faster than the 5870.

When WAS the last gen where ATI had the fastest single card? X1950?

X1950, if we consider each generation as a whole, though there were times when ATI held the single fastest card title (4870X2 before GT200b's GTX295 was released, 5870 till GF100, 5970 till around now), etc.
 
X1950, if we consider each generation as a whole, though there were times when ATI held the single fastest card title (4870X2 before GT200b's GTX295 was released, 5870 till GF100, 5970 till around now), etc.

I know Rangers said card but I think Rangers meant single GPU card. Could be wrong though.
 
I know Rangers said card but I think Rangers meant single GPU card. Could be wrong though.

Ya, but what's the time limit on a new generation? I'm sure if AMD had waited 6 months on the 5870, it would have beaten the 480 pretty easily. Comparing generations only works if they are on a similar timeline.
 
I would say that SLI is worth 10-15% over Crossfire, which is why it is not a done deal for Cayman. Nvidia's dual-chip cards tend to be better than ATi's, but I don't really follow the sector as much, so my knowledge may be outdated.

In plain English: Crossfire scales better than SLI.
That was the case with the 6800 series, and likely the same happens on the 6900 series; if so, Nvidia is in a deep hole.
 
So does that mean it is 25% smaller than GF110 (err, GF100b, sorry ;)), which, if we assume GF100b to be 530mm2, is ~400mm2 for Cayman (don't let Dave catch you say RV970 again)?
OR
Does it mean GF100b is 25% larger than Cayman, in which case, assuming the same 530mm2 for GF100b, we get ~425mm2 for Cayman?
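The two readings really do land on different numbers. A quick sketch of the arithmetic, using the 530mm2 figure assumed in the post (not an official number):

```python
# Working through both readings of "25%", assuming GF100b ~= 530mm2
# (the figure used in the post above, not an official spec).
gf100b = 530.0

# Reading 1: Cayman is 25% smaller than GF100b
cayman_smaller = gf100b * (1 - 0.25)

# Reading 2: GF100b is 25% larger than Cayman
cayman_larger_base = gf100b / 1.25

print(cayman_smaller)      # 397.5 -> the "~400mm2" option
print(cayman_larger_base)  # 424.0 -> the "~425mm2" option
```

So the ambiguity is worth roughly 25mm2 of die area depending on which card the percentage is anchored to.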

As for the specs, right let me take a jab at it. How does this sound?

6970 = 1536 SP's, 880 mhz core, 5.5ghz mem

6950 = 1408 SP's, 800 mhz core, 5 ghz mem

Specs look good, but those are on a lot of sites already, right? And about the die size, I meant the first one, <400mm2.
 
Can't say I see the lack of XT in the wild as positive. Even if the PRO is very close to GTX580 (which would be great of course).

Reminds me of the whole 512 -> 480 Fermi thing.

Lack of XT engineering samples doesn't by any means point to low volume of XT retail chips at launch. Besides, latest rumors have stated the opposite - that Pro will be in short supply and XT in abundance.

I'm a bit shocked at the 1536 SPs. Now I understand why there were rumours that the 6950 (Pro) will be in short supply, because the XT and Pro differ by only two SIMDs.

The 6870 must have had few bad yields if this is true, or AMD are stockpiling Pro chips for Antilles.

Good misinformation from AMD; it seems they've really mastered the art.

Now for the reviews. I'm still expecting Nvidia 580 to trump the 6970, especially if the 1536 rumours are true.

However, if the 1536 really is what AMD were aiming at and they stay within a few % of the 580 then they have done well.

Question: could there be a possibility of a 6980 with 1920 SPs, or is 30 SIMDs just a pipe dream atm?
 
Now for the reviews. I'm still expecting Nvidia 580 to trump the 6970, especially if the 1536 rumours are true.

However, if the 1536 really is what AMD were aiming at and they stay within a few % of the 580 then they have done well.

Question: could there be a possibility of a 6980 with 1920 SPs, or is 30 SIMDs just a pipe dream atm?

The more I think about it, the more impressive it is if a 6970 can hang with a GTX 580 despite being <400mm^2 and consuming <200W TDP. It's funny: the 3800s and 4800s made a dual GPU a necessity to compete at the high end. With the 5970, it stood alone as a shiny "woo, look what we can do" card. Antilles might push that even further.

As for a 1920 card, it's possible, but I doubt the GPU already has it in there. That being said, if the 28nm process really got delayed into 2012, AMD might have room for another refresh, in which case a 30 SIMD beast is plausible.

The issue, though, is whether 2x15 is too deep, as some thought with Cypress, or if there are diminishing returns with that idea.
 
Specs look good, but those are on a lot of sites already, right? And about the die size, I meant the first one, <400mm2.

Yep, I just compiled all the specs together :smile: So then it's not gonna be the biggest chip AMD ever made (which would have been the case if it was option 2).

The more I think about it, the more impressive it is if a 6970 can hang with a GTX 580 despite being <400mm^2 and consuming <200W TDP. It's funny: the 3800s and 4800s made a dual GPU a necessity to compete at the high end. With the 5970, it stood alone as a shiny "woo, look what we can do" card. Antilles might push that even further.

As for a 1920 card, it's possible, but I doubt the GPU already has it in there. That being said, if the 28nm process really got delayed into 2012, AMD might have room for another refresh, in which case a 30 SIMD beast is plausible.

The issue, though, is whether 2x15 is too deep, as some thought with Cypress, or if there are diminishing returns with that idea.

Well, even the 5870 was just 10-15% off a GTX 480, and the die size difference was even more pronounced then (330mm2 vs 530mm2). And the 4870/4890 competed in the upper mid range as well (against the GTX 260/275), so it's not terribly surprising. AMD's last two chips (RV770 and Cypress) have been ~80% of the performance of Nvidia's high end at roughly 60% of the die size.
 
Well, even the 5870 was just 10-15% off a GTX 480, and the die size difference was even more pronounced then (330mm2 vs 530mm2). And the 4870/4890 competed in the upper mid range as well (against the GTX 260/275), so it's not terribly surprising. AMD's last two chips (RV770 and Cypress) have been ~80% of the performance of Nvidia's high end at roughly 60% of the die size.

What I meant is that the gap is getting closer for the single GPUs. The dual GPUs used to trade blows with Nvidia's high-end single-GPU cards, but by the 5800s the dual GPU stood well above the single-GPU offering, and now it looks like AMD's single GPU will be awfully close to Nvidia's, making the dual GPU even more absurdly ahead.
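The "~80% of the performance at ~60% of the die size" claim above can be sanity-checked with quick arithmetic (figures are from the posts, not measured data):

```python
# Quick arithmetic behind the "~80% of the performance at ~60% of the
# die size" claim from the thread (forum figures, not measured data).
rel_perf = 0.80   # AMD chip performance relative to Nvidia's high end
rel_area = 0.60   # AMD die size relative to Nvidia's high end

perf_per_mm2 = rel_perf / rel_area
print(round(perf_per_mm2, 2))  # 1.33 -> ~33% more performance per mm2

# The concrete 5870 vs GTX 480 case mentioned above:
print(round(330 / 530, 2))     # 0.62 -> Cypress is ~62% of GF100's area
```

On those numbers, the perf-per-mm2 advantage is roughly a third, which is what makes a single AMD GPU closing in on Nvidia's single-GPU flagship plausible at a much smaller die.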
 
The supposed switch to 4D saved them 10% die area, and if Cayman XT has fewer lanes than Cypress, then why is its die 15-20% larger? The other arch improvements don't sound like they were anything radical enough to consume greater die area.

Something doesn't add up...
If I understand it well, one 4D group (with related register space etc.) is 10% smaller than one 5D group, not that one SP is 10% smaller. And we are talking about 24 SIMDs; that means 16 more TMUs than in Cypress.
 
I would say that SLI is worth 10-15% over Crossfire, which is why it is not a done deal for Cayman. Nvidia's dual-chip cards tend to be better than ATi's, but I don't really follow the sector as much, so my knowledge may be outdated.

Quite the contrary!

See this news from 3DCenter.de:

http://www.3dcenter.org/news/2010-12-02


Scaling of the HD6870: 83%

Scaling of the GTX580: 64%


So a Crossfire system scales up to ~10% better than an SLI system. Yep, AMD has improved Crossfire a lot over the years.
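Those two scaling figures can be turned into an overall dual-card comparison with a couple of lines (taking "scaling" to mean the extra performance the second GPU adds, per the 3DCenter numbers quoted above):

```python
# Check on the 3DCenter scaling numbers quoted above: "scaling" here is
# the extra performance the second GPU adds over a single card.
cf_scaling = 0.83    # HD6870 CrossFire
sli_scaling = 0.64   # GTX580 SLI

dual_cf = 1 + cf_scaling     # 1.83x a single HD6870
dual_sli = 1 + sli_scaling   # 1.64x a single GTX580

# Relative advantage of the CrossFire setup's scaling:
advantage = dual_cf / dual_sli - 1
print(round(advantage * 100, 1))  # 11.6 -> roughly the "~10% better" above
```

So even though the gap is 19 percentage points of scaling, the overall dual-card advantage works out to a bit over 10%, matching the post's summary.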
 
If I understand it well, one 4D group (with related register space etc.) is 10% smaller than one 5D group, not that one SP is 10% smaller. And we are talking about 24 SIMDs; that means 16 more TMUs than in Cypress.

that and whatever it costs them for DP.
 
If I understand it well, one 4D group (with related register space etc.) is 10% smaller than one 5D group, not that one SP is 10% smaller. And we are talking about 24 SIMDs; that means 16 more TMUs than in Cypress.

Yep - if 385mm^2 is correct, then it is roughly 15% larger than Cypress.

We have 20% more SIMDs, but each is 10% smaller, so that's about 8% more area in SIMDs alone; add in 16 more TMUs and other tweaks to the core, and we're talking about filling up the rest of the 15% right there.
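The SIMD-area step in that estimate is just two factors multiplied together (using the thread's assumptions: 24 SIMDs vs Cypress's 20, each 4D SIMD ~10% smaller than a 5D one):

```python
# The SIMD-area arithmetic from the post above: 20% more SIMDs
# (24 vs 20), each roughly 10% smaller in its 4D form.
more_simds = 1.20
smaller_simd = 0.90

simd_area = more_simds * smaller_simd
print(round((simd_area - 1) * 100))  # 8 -> ~8% more area in SIMDs alone
```

That leaves roughly 7 points of the ~15% die growth to be explained by the extra TMUs and the other core tweaks.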
 
Quite the contrary!

See this news from 3DCenter.de:

http://www.3dcenter.org/news/2010-12-02


Scaling of the HD6870: 83%

Scaling of the GTX580: 64%


So a Crossfire system scales up to ~10% better than an SLI system. Yep, AMD has improved Crossfire a lot over the years.
Question is:
What happens when there is no game-specific driver optimisation/profile?
Testing the 7-8 most popular games is fine, but who plays ONLY those most popular games?
What about playing a new, just-released title?
AFAIK Nvidia usually gets much better results at release, with AMD catching up 2-3 Catalyst releases later.
If I had a dual-GPU card, the choice would be obvious, don't you think?

The supposed switch to 4D saved them 10% die area, and if Cayman XT has fewer lanes than Cypress, then why is its die 15-20% larger? The other arch improvements don't sound like they were anything radical enough to consume greater die area.

Something doesn't add up...
Imagine they increased something else. Not SIMDs but XXXx
 
I think all the talk about efficiency and performance is a little bit premature.

We still don't know the effect of the "adjustable TDP" on the performance and efficiency of the HD6970.

neliz said:
Like the part with the slide: he talks about 20W plus or minus on the performance; he completely FAILS there.
The -20 to 20 slider is a %, you can adjust the power of your card in the overdrive tab by 20% up or down.
the 250W is the maximum board consumption, but the non-OC power consumption is 190/200W.
Cayman can actually clock almost as flexibly as any modern CPU, so it can run at 650 MHz or 800 MHz during benchmarking if you enable PowerTune.
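As neliz describes it, the slider is a percentage applied to the default power limit, not an absolute wattage. A minimal sketch of that mapping, assuming the 190W non-OC baseline from his post (the function and its exact behaviour are my own illustration, not AMD's implementation):

```python
# Hypothetical sketch of the PowerTune slider per neliz's description:
# a -20..+20 percent adjustment applied to the default power limit.
# The 190W baseline is the non-OC figure quoted above, not an official
# spec, and this is an illustration, not AMD's actual driver logic.
def power_cap(base_watts, slider_percent):
    if not -20 <= slider_percent <= 20:
        raise ValueError("PowerTune slider range is -20..+20 percent")
    return base_watts * (1 + slider_percent / 100)

print(round(power_cap(190, 20), 1))   # 228.0 W at +20%
print(round(power_cap(190, -20), 1))  # 152.0 W at -20%
```

Either extreme stays under the 250W maximum board power he mentions, which is consistent with 250W being a board limit rather than the stock operating point.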

Is the HD6970 as fast as the GTX580 with 190W or with 240W?

Scenarios:

With the TDP set to 190W the HD6970 is 10% slower than the GTX580 and with the TDP set to 240W the HD6970 is as fast as the GTX 580 (mostly due to bandwidth limitations)

or

With the TDP set to 190W the HD6970 is as fast as the GTX580 and with the TDP set to 240W the HD6970 is 25% faster than the GTX 580 (no bandwidth limitations etc...)

[edit]
and also how noisy will the HD6970 be at 240W? Dustbuster 2.0 or low noise ???
 