AMD: R9xx Speculation

These slides are marked AMD confidential, which hardly suggests they're intended for mass advertising.
The deck is there to inform the press so they can put more than boring graphs in their reviews, but it's clearly not the "selling point" AMD seems to consider it to be. Engineering details are always good to read, as long as they don't come down almost entirely to economics... hell, there's nearly a full slide of "look at how we tweaked our architecture to be economically more efficient". WTF?

Of course, a smaller die allows for a lower price, but then the naming issue arises: with few architectural changes and lower specs than its "predecessor" (which in fact it isn't), Barts should clearly have been called "HD6700".


The product is good (at least, it will be one of the best available for the coming months), but everything around it is a disaster... let's hope availability won't be.
 
Did you see the main presentation? It begins with "design goals" like "5800-like performance at lower power and price..." and then explains how that was achieved. A smaller die is one of the keys, so die size is very important in this context. I don't understand why you're complaining that AMD is informing the press and partners about the parameters of their new GPU. My opinion is the exact opposite: I think they could provide more detailed info.

Anyway, if you care only about price and performance, you are probably reading the wrong forum... ;)
 
In my opinion, AMD is talking about die size for two reasons:

A — get good press, get reviewers to praise their technical prowess;
B — show investors that their products are profitable.

Those are valid reasons. And A is the same reason for which NVIDIA was gloating about GF100 having 3.2 billion transistors even if 200 million of those aren't working… :p
 
Meh... all this is pretty ho-hum until we see independent Cayman benches against the 5970 and 480 :)

I wouldn't be surprised if 58xx (Evergreen) -> 69xx (Cayman) had a lower performance increase than 9700 -> 9800, 9800 -> X800, or 1950 -> 2900.

It most certainly will be worse than 38xx -> 48xx or 48xx -> 58xx.

And that small performance increase justifies moving into a whole new performance category, basically telling consumers that Cayman is greater than 2x Evergreen, even though I find it unlikely the 6970 will be faster than the 5970 except in games that don't scale.

The only reason it won't be viewed as a larger failure than R600 is that Nvidia probably won't have anything to beat it.

Yay for marketing. :p

Regards,
SB
 
Somebody should buy it, next day air it, and put a full suite of benchmarks up well ahead of NDA :p

Anyway, it seems well overpriced at the moment against that $245 GTX 260 TOP Newegg is selling, judging by those Xbit benches. But the early adopter tax is in effect.
 
They did a good job convincing a legion of Internet Jehovahs that this stuff is something end users should care about with the last gen, so I'm not really sure how it's disappointing that this is the major focus again.
Yes, when NVIDIA pulled that trick with G71 it was the second coming.

http://www.techreport.com/articles.x/9529

The G71 is more than just a die shrink, however. Get this: the transistor count estimate is actually down from the G70's 302 million to 278 million for the G71. Why so? NVIDIA says it has replumbed the G71's internal pipelines throughout the chip, making them shorter, because those longer pipes and extra transistors aren't needed to help the G71 achieve acceptable clock speeds—the faster-switching transistors of the 90nm process will suffice for that purpose. How's that for confidence?

Shorter pipelines typically make for higher clock-for-clock performance, and we may see some of that from G71, but this isn't a radical change. I wouldn't expect anything revolutionary on that front.

NVIDIA has made much of the fact that they have a more efficient GPU architecture than ATI right now, and it's true that NVIDIA's GeForce 7-series desktop GPUs generally achieve higher performance per watt and more performance per die area than ATI's current desktop graphics processors. That's undeniable. Whether and how much this fact matters to you is something you'll have to decide.

[...]

NVIDIA's smaller chips might also make for less expensive products from NVIDIA and its partners. I would be surprised if the GeForce 7600 GT doesn't make the same migration over its lifetime that the GeForce 6600 GT did, from $199 down to $149 and below. With its much larger die and 256-bit memory interface, the Radeon X1800 GTO isn't likely to make the same transition. ATI will have to replace it with something else, and the Radeon X1600 XT certainly isn't up to the task.

ATI disputes the importance of arcane issues like GPU die size, and at a pretty basic level, they're right to do so. Most folks just want to buy the best graphics card for the money. But ATI wasn't talking down the importance of die size during the Radeon X800 era when people were asking them why they chose to limit their GPU's precision to 24 bits per color channel rather than 32; they were talking up efficient architectures and best use of die area quite eloquently.
In the end it's tit for tat.
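
As an aside, the article's "shorter pipelines make for higher clock-for-clock performance" line is the classic textbook trade-off, and the arithmetic behind it is easy to sketch. This is a toy CPU-style model with made-up numbers, not anything specific to G71 (GPUs mostly hide latency with thread parallelism anyway), so treat it purely as an illustration of the general principle:

```python
# Toy model: effective CPI (cycles per instruction) when every mispredicted
# branch flushes the pipeline. The flush penalty is assumed to scale with
# pipeline depth, which is why deeper pipes lose clock-for-clock performance.
# All numbers below are illustrative, not measurements.

def effective_cpi(base_cpi, branch_freq, mispredict_rate, pipe_depth):
    penalty = pipe_depth  # assumption: flush cost ~ number of stages
    return base_cpi + branch_freq * mispredict_rate * penalty

for depth in (10, 20, 30):
    cpi = effective_cpi(base_cpi=1.0, branch_freq=0.15,
                        mispredict_rate=0.05, pipe_depth=depth)
    print(f"{depth:2d}-stage pipe: effective CPI ~ {cpi:.3f}")
```

With these (made-up) inputs the 30-stage pipe ends up about 14% slower per clock than the 10-stage one, so the deeper design only wins if it buys back more than that in clock speed.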
 
Can someone explain this slide? Is the improvement dependent on the tessellation factor?
My interpretation is that lower tessellation factors tend to produce relatively bigger triangles, and some tweak they've made to the architecture benefits greatly there because it eliminates a specific bottleneck.

But it stands no chance once the tessellation is ramped up and other, more fundamental bottlenecks come into play.
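
Rough numbers behind that reading: for an integer tessellation factor N on a triangle patch, the tessellator emits on the order of N² triangles, so average triangle size falls off as roughly 1/N². A quick sketch (the 10,000 px² patch area is a made-up figure purely to make the trend visible):

```python
# How tessellation factor relates to average triangle size, assuming a
# ~N^2 triangle count per patch at integer factor N. The patch area is a
# hypothetical screen-space figure chosen only for illustration.

PATCH_AREA_PX = 10_000  # hypothetical screen-space area of one patch

for factor in (1, 2, 4, 8, 16, 64):
    triangles = factor ** 2                # approximate triangles per patch
    avg_area = PATCH_AREA_PX / triangles   # average triangle area in px^2
    print(f"tess factor {factor:2d}: ~{triangles:4d} tris, ~{avg_area:7.1f} px^2 each")
```

At factors in the low single digits the triangles stay large, so a fix to a large-triangle bottleneck pays off there; by factor 16 and up the triangles are tiny and per-triangle setup rate dominates, which would explain the gains evaporating as the factor ramps up.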
 
AMD has a point with their "good enough" approach to tessellation. Civ5 seems to be the first game that shows a marked difference in the performance hit between Fermi and Evergreen with tessellation enabled. I don't think anyone has accused it of being over-tessellated, so it will be an interesting title to watch.
 
What benchmark are you referring to? The only one I've seen of Civ 5 is [H] showing up a big SLI failure in that title.

[image: Civ 5 benchmark graph]

The numbers on Fermi are sufficiently whacked to suggest deeper problems though.
 
Download Catalyst 10.10 beta and see for yourself?
I did, and I don't see that feature. Is the ability to disable it exclusive to the 6x00 series? It's kinda sad that my 5870 smokes every game I throw at it, yet AF looks worse than on my old 8800GT.
 