AMD: R9xx Speculation

Yes I think any driver improvement from ATi will be mirrored by Nvidia at this point.

Unlikely, given that they've already had a year to do that with GF100/GF110. The low-hanging fruit has already been picked by Nvidia.

I think there has been a serious targeting problem at ATi. They seem to have made the 6970 to compete with the GTX 480, but in doing so they made the assumption that Nvidia would stagnate; it's almost as if they started believing their own (Charlie's, to be specific) hype that Nvidia were done for and their big die was never going to produce the goods. The result is that ATi's top single-chip card just about matches the performance of the mid-high-range single-chip card from Nvidia, leaving the highest end unchallenged until a dual-chip card appears at some point in Q1.

I think you're wrong. We've seen a continuation of AMD's previous strategy, where the top end is successfully challenged by their two-chip Antilles card. Then comes the 580, and underneath that, with a small performance gap, is the 6970 at a significantly cheaper price and with still better yields, profit margins and market share.

AMD is no longer about slugging it out toe-to-toe with Nvidia to see who can build a bigger monolithic chip with all kinds of issues and problems. AMD is building smarter, more profitable chips that address their markets. You just have to look at the size/price/performance of Barts, or PowerTune on the higher-end cards, to see how AMD is building stuff for OEMs and platform builders as well as the public.

I just checked prices a couple of days back: the GTX 580 was £430, while 6970s are £275 and the 6950 is £220. The small performance bonus of the 580 is out of all proportion to its price difference. AMD will sell a lot of chips at this kind of price difference, which is what they want, because they make a profit on every chip. Nvidia makes a lot less, or, if rumours are to be believed, almost nothing outside of the professional market.
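As a rough back-of-the-envelope illustration (using those street prices, and assuming the 580 leads the 6970 by around 15% and the 6950 sits about 10% below it - both performance figures are my assumptions, not measurements):

# hypothetical perf-per-pound comparison; prices in GBP,
# performance normalised so the HD 6970 = 1.00
cards = {
    "GTX 580": (430.0, 1.15),  # assumed ~15% faster than the 6970
    "HD 6970": (275.0, 1.00),
    "HD 6950": (220.0, 0.90),  # assumed ~10% slower than the 6970
}
for name, (price, perf) in cards.items():
    print(f"{name}: {1000.0 * perf / price:.2f} performance per 1000 GBP")

On those numbers the 580 carries a ~56% price premium for a ~15% performance lead, which is the disproportion I mean.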

I don't see how Cayman has been a failure against the 580, when Cayman sells a lot at a profit and Nvidia likely sells a lot less with less profit.
 
I think you're wrong. We've seen a continuation of AMD's previous strategy, where the top end is successfully challenged by their two-chip Antilles card. Then comes the 580, and underneath that, with a small performance gap, is the 6970 at a significantly cheaper price and with still better yields, profit margins and market share.

Yes, we all know AMD's strategy to death. The problem is, their new naming scheme seemed to imply a change to that strategy: since the 5xxx series, the HDx9xx tier is supposed to be the top end, made of dual GPUs. Hell, Cayman even violates that supposed strategy, as it is closer to 400mm² than anything. So no, sorry, Cayman does not fulfill that strategy. Cayman might have a smaller die than GF110, but it's nevertheless big, and THAT, following AMD's strategy and the new naming scheme, should indicate a far more powerful part. That's why it is a sort of fail, though I have no doubt it will sell quite nicely.
 
Yes I think any driver improvement from ATi will be mirrored by Nvidia at this point.

I think there has been a serious targeting problem at ATi. They seem to have made the 6970 to compete with the GTX 480, but in doing so they made the assumption that Nvidia would stagnate; it's almost as if they started believing their own (Charlie's, to be specific) hype that Nvidia were done for and their big die was never going to produce the goods.
That doesn't make any sense. First, everybody saw the flaws of GF100, and it would have been foolish to assume they couldn't be fixed (which is exactly what Nvidia did, no more, no less). Second, the 6970 was certainly designed before the GTX 480 came to market.

Their strategy has also made the assumption that Nvidia would never be able to get a dual-chip solution going, but with the GTX 570 and 6970 having similar power consumption, the dual-chip strategy isn't one you can rule out just yet, especially since a GTX570X2 would come with 2x1.25GB of RAM vs 2x2GB. The ATi chip could conceivably consume more power and have to sacrifice more performance to stay within the 300W limit, though PowerTune will make that very easy.
I've said this already in another context: I don't think 2Gbit vs 1Gbit RAM chips make more than a tiny difference in power consumption. In fact, I would expect a 320-bit interface using ten 1Gbit chips to draw more power than a 256-bit interface using eight 2Gbit chips, if it were running at the same clock/voltage (which it is not if you compare the GF110 cards to the Cayman cards, but you get the idea).
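Back-of-the-envelope on the chip-count point (the per-chip wattage here is a pure guess, just to show the proportions):

# rough memory-subsystem power comparison; assumes interface width
# (i.e. chip count) dominates over chip density, ~2 W per GDDR5 chip
w_per_chip = 2.0
print(f"320-bit, 10 x 1Gbit chips: {10 * w_per_chip:.0f} W")
print(f"256-bit,  8 x 2Gbit chips: {8 * w_per_chip:.0f} W")

Same per-chip draw, 25% more chips - so the wider, lower-density configuration loses.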
And PowerTune is exactly what will enable a higher-performing HD 6990: with a GTX570X2, Nvidia would have no choice but to further sacrifice SMs and/or clock if they want to stay within 300W. With PowerTune, AMD can still use relatively high clocks and SIMD counts; it will downclock at times, but the average performance should be far higher that way (I would, however, therefore expect the HD 6990 to draw a pretty constant 300W, no matter what you throw at it, unless you change the power allocation).
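Purely as a sketch of the mechanism (this is not AMD's actual algorithm, just a toy control loop with made-up numbers):

# toy PowerTune-style cap: trim the clock only when the estimated
# board power exceeds the allocation, creep back up when under it
POWER_CAP_W = 300.0
CLOCK_MAX_MHZ = 830.0   # assumed nominal clock
CLOCK_MIN_MHZ = 500.0
STEP_MHZ = 10.0

def next_clock(clock_mhz: float, estimated_power_w: float) -> float:
    if estimated_power_w > POWER_CAP_W:
        return max(CLOCK_MIN_MHZ, clock_mhz - STEP_MHZ)
    return min(CLOCK_MAX_MHZ, clock_mhz + STEP_MHZ)

The point is that a fixed configuration has to pick clocks/SMs for the worst-case workload up front, while a cap like this only gives performance back in the rare moments the worst case actually happens.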

I think Nvidia surprised ATi with GF110, and because Cayman was already so far into the design (and even manufacturing) stage when performance indications of GF110 started to leak, they had to pump the clocks up really high to stay competitive - and they still fell short of the GTX 580.
The clocks for the HD 6970 are right where I'd expect them: very similar to the Evergreen chips (Cypress and Juniper are both 850MHz) and Barts (900MHz). This is apparently the design target for these chips.
I'll grant you that the HD 6950 clock is quite high, and consequently the difference between the Cayman cards is small - but this has nothing to do with Nvidia; with a slower clock it would have just barely outperformed the HD 6870.
 
This resolution and these settings are totally irrelevant for 95% of gamers.
Just like both the GTX 580 and HD 6970 are irrelevant for 95% of gamers, because the majority of them won't buy anything better/more expensive than a GTX 460 or HD 6800.

But anybody who buys a >$500 GPU every year (including CF/SLI users) is unlikely to use it with a $150-200 LCD panel. Not to mention that this advantage scales quite well, with two HD 6970s in CrossFire matching or slightly beating GTX 580s in SLI while being $300 cheaper.
 
You can be front-end limited without being geometry bound; Dirt 2 is in fact one such title (we looked at this quite a lot). Like I say, the dual geometry engines are doing their stuff - look at any triangle or vertex test.
How so? The front end, AFAIK, is geometry, unless you're talking about the command processor and/or state changes.

I am looking at geometry tests. Look at the SubD11 test, hardware.fr's tessellation test, the DXSDK tests, and Nvidia's water demo.
 
People here really think that the 6970 launch isn't a failure?

A card launched more than one year after its predecessor (5870), with almost the same MSRP, but only 15% faster.

If that's not fanboyism, I don't know what is. The worst part: the same people praising the 6970 (a new architecture that's barely faster) were the ones smashing the GTX 580 last month because it "was just 20% faster than the GTX 480".
 
People here really think that the 6970 launch isn't a failure?

A card launched more than one year after its predecessor (5870), with almost the same MSRP, but only 15% faster.

If that's not fanboyism, I don't know what is. The worst part: the same people praising the 6970 (a new architecture that's barely faster) were the ones smashing the GTX 580 last month because it "was just 20% faster than the GTX 480".

I'm not sure what the problem is.

The 5870 is about 30% slower than the 6970. The 6970 costs less and comes with more RAM.

The 580 was about 20-30% faster than the 480 and cost the same. I don't see a problem with either launch, except maybe that the GTX 580 should have launched this past March instead of a few weeks ago.
 
In future workloads with tessellation, the gain is much more than 15%, while the die size only increased by 20%. ;)
And the MSRP of the HD 5870 may have been $379, but in market reality it was >$400.
With the competition from the GTX 570, we might see both cards below $350 in early 2011.
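As a quick ratio check (the ~15% average and ~20% area figures come from the discussion here; the tessellation-heavy multiplier is my assumption, purely for illustration):

# perf-per-area, Cayman vs Cypress, using the figures discussed above
area_growth = 1.20   # die size up ~20%
perf_avg = 1.15      # average games: ~15% faster
perf_tess = 1.60     # assumed gain in tessellation-heavy cases
print(f"average perf/mm^2 ratio: {perf_avg / area_growth:.2f}")     # < 1
print(f"tess-heavy perf/mm^2 ratio: {perf_tess / area_growth:.2f}")  # > 1

In other words, perf/mm² only comes out ahead in the workloads the extra front-end hardware was built for.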

The worst thing is that they did not fix the TMUs to reach the quality of Nvidia's 2006 HQ filtering. With 96 TMUs there would be enough raw power even for a poor implementation.
 
Yes, we all know AMD's strategy to death. The problem is, their new naming scheme seemed to imply a change to that strategy: since the 5xxx series, the HDx9xx tier is supposed to be the top end, made of dual GPUs. Hell, Cayman even violates that supposed strategy, as it is closer to 400mm² than anything. So no, sorry, Cayman does not fulfill that strategy. Cayman might have a smaller die than GF110, but it's nevertheless big, and THAT, following AMD's strategy and the new naming scheme, should indicate a far more powerful part. That's why it is a sort of fail, though I have no doubt it will sell quite nicely.
I kind of agree with that. Cayman, as well as Barts, should have been smaller.
 
Pica84, that's all because the 32nm process got cancelled, so obviously they had to change things a bit and ended up with a bigger-than-planned chip, or we wouldn't have gotten any performance increase.
 
Good results, considering there was no die shrink. Hopefully drivers will sort out the results that are causing head-scratching. For the rest, it is what it is.

Anyway, how do things stand regarding mainstream parts based on the 69xx series? Or will the 68xx series be the basis?

Is it possible to easily split the architecture of a 6970 in half and gut the double precision and other professional features?

Something more powerful than a 5770, in other words, but designed using the same principles.
 
From that point of view, yes.

However, the fact that Cayman and Barts have the same per-frame time tells you that it could easily be a driver issue. Your graph from the DXSDK sample shows Cayman performing slower than Cypress at high tessellation levels. My numbers show parity between Cayman and Barts in Dirt 2.

It could be drivers or it could be a hardware bug. Either way, I don't think ATI is hitting architectural limitations.

Just to be clear, the tessellation in the Dirt 2 benchmark is just a few seconds of the whole run, so the majority of those frames render without too much geometry. Certainly an easy task at 800 Mtriangles/s or 1600 Mtriangles/s.
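To see why a few heavy seconds barely move the headline number, a toy weighted average (all numbers made up):

# hypothetical 60 s run, 5 s of it tessellation-heavy at half the frame rate
normal_s, normal_fps = 55.0, 80.0
tess_s, tess_fps = 5.0, 40.0
avg_fps = (normal_s * normal_fps + tess_s * tess_fps) / (normal_s + tess_s)
print(f"average: {avg_fps:.1f} fps")  # ~76.7 fps despite the 40 fps stretch

So the benchmark average mostly reflects the non-tessellated frames.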
 
Pica84, that's all because the 32nm process got cancelled, so obviously they had to change things a bit and ended up with a bigger-than-planned chip, or we wouldn't have gotten any performance increase.

True, but still, they are not entirely innocent either. With the Cayman launch, we can CLEARLY see that Cypress was designed for time to market, with a DX11 (and especially tessellation) implementation that was not up to par. Yes, Cypress had a smaller die, but at what cost? It's kinda hypocritical to promote DX11 and tessellation (tessellation especially was heavily promoted by AMD like the second coming, only to go quiet months later when Fermi launched) with a solution that seemed far from ideal for it. In short, IMO, AMD is reaping the tempest they sowed, no more, no less. Kinda ironic also how we all made fun of Nvidia's supposed lack of a "Plan B".

In the end, AMD's luck with its small-die strategy was that the first iteration of Fermi went really badly. If GF100 had been GF110, it would have run circles around Cypress, and we would not have bashed Nvidia to death and questioned whether it was a good architecture or not. As it turns out, it's not a bad architecture, and it is the way to go, looking at Cayman with its dual engines.

One last point I haven't seen anyone touch on yet is R&D expenses. AMD had to design two different architectures for two different generations, Cypress and Cayman. Nvidia only had to design one architecture to deliver two "generations". You might say that Nvidia really has just one generation, but the performance gap between each vendor's two generations is about the same on average: 20%.

Ultimately, going with the small-die strategy might not really have been the advantage some people like to tout.
 
I wonder about one thing. Why on Earth was the HD 4890 able to hit the 200 USD price tag while it was the highest-performing single-GPU card from AMD, but neither the 5870 nor the 6970 can? :oops: And they are not even approaching such price levels. :oops:
 
I wonder about one thing. Why on Earth was the HD 4890 able to hit the 200 USD price tag while it was the highest-performing single-GPU card from AMD, but neither the 5870 nor the 6970 can? :oops: And they are not even approaching such price levels. :oops:

They can, they just don't need to :devilish:
 
I wonder about one thing. Why on Earth was the HD 4890 able to hit the 200 USD price tag while it was the highest-performing single-GPU card from AMD, but neither the 5870 nor the 6970 can? :oops: And they are not even approaching such price levels. :oops:

Demand, supply, competition, that kind of stuff…
 