AMD: R9xx Speculation

If you are saying that 95% is not true, then have a look. This is from the Steam Survey, the percentages for all video cards:

GTX285 - 1.20%
GTX280 - 0.80%
GTX480 - 0.60%

So it's 2.60% total for the last two generations.

The GTX480 cannot be directly compared to the first two as they were actually successful products that were worth a damn.

Not only that, but you're using the latest data. A lot of people would have moved on by now. Those cards came out two years ago. Try around mid 2009 and see what the results are then.
 
The GTX480 cannot be directly compared to the first two as they were actually successful products that were worth a damn.

Not only that, but you're using the latest data. A lot of people would have moved on by now. Those cards came out two years ago. Try around mid 2009 and see what the results are then.

Moved on to what? If you had a 280 or 285, the only thing you were looking to move up to was a 480 or a 580. Even the 5800 series is under 8%. Unfortunately, that value includes the 5850 and 5830, which aren't the high end.

5% using top end cards is probably pretty close to the mark.
 
Yup, Tech Report shows an idle power difference between the 5870 1GB and the 5870 2GB of 12 watts, and a difference at load (they use L4D2) of 39 watts. Definitely not a tiny difference.
You're comparing apples and oranges here. The 5870 2GB is using 16 1Gbit chips, not 8 2Gbit chips. I believe they are using a different board layout too, making them not really comparable.
I guess we'll see if it makes a difference when the HD 6950 1GB comes out (well, at least if it uses the same PCB). As said, it's too bad the 2Gbit GDDR5 datasheet from Hynix is not available.
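To make the chip-count point concrete, here's a rough sketch (the 32-bit per-chip interface and clamshell mode are standard GDDR5 behaviour; everything else is simple arithmetic):

```python
# Rough sketch: why a 2GB HD 5870 needs 16 x 1Gbit chips in clamshell mode,
# while 2Gbit chips would cover the same capacity with 8 chips.
# Standard GDDR5: each chip exposes a 32-bit interface; clamshell mode lets
# two chips share one 32-bit channel (each running x16).

BUS_WIDTH_BITS = 256   # Cypress memory bus
CHIP_IO_BITS = 32      # per GDDR5 chip
TARGET_GBIT = 16       # 2 GB = 16 Gbit

for density_gbit in (1, 2):
    chips = TARGET_GBIT // density_gbit
    channels = BUS_WIDTH_BITS // CHIP_IO_BITS          # 8 channels
    mode = "clamshell, 2 chips/channel" if chips > channels else "1 chip/channel"
    print(f"{density_gbit} Gbit chips: {chips} chips ({mode})")

# 1 Gbit chips: 16 chips (clamshell, 2 chips/channel)
# 2 Gbit chips: 8 chips (1 chip/channel)
# Twice the chips means twice the per-chip static and I/O overhead at idle,
# which fits the 2GB board drawing measurably more power.
```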
 
PowerTune makes the comparison pretty interesting - if you're clock throttling, then the extra power draw of higher-powered RAM (I think the 2Gbit chips use higher voltages than the 1Gbit ones?) would take more of your power budget than on the 1GB edition. That extra 5-6W could be two or three points in 3DMark11, averaged over 30 or 40 runs!
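To put a hypothetical number on that, here's a toy PowerTune-style model (the cap and baseline draw are invented for illustration; only the ~6W delta comes from the discussion above):

```python
# Toy model of the scenario above: a fixed board power cap, and a 2GB memory
# config that eats a few extra watts of that budget. All numbers except the
# 6W delta are assumptions for illustration.

POWER_CAP_W = 250.0    # assumed PowerTune board limit
BASE_DRAW_W = 248.0    # assumed draw of the 1GB card at full clocks
EXTRA_MEM_W = 6.0      # extra draw of the 2GB memory configuration

def allowed_clock_fraction(draw_w: float, cap_w: float) -> float:
    """Fraction of peak core clock a PowerTune-style limiter permits,
    crudely assuming power scales linearly with core clock."""
    if draw_w <= cap_w:
        return 1.0
    return max(0.0, 1.0 - (draw_w - cap_w) / draw_w)

for label, draw in (("1GB card", BASE_DRAW_W), ("2GB card", BASE_DRAW_W + EXTRA_MEM_W)):
    print(f"{label}: {allowed_clock_fraction(draw, POWER_CAP_W):.1%} of peak clock")

# 1GB card: 100.0% of peak clock
# 2GB card: 98.4% of peak clock -> a percent or two, i.e. a few 3DMark points
```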
 
Moved on to what? If you had a 280 or 285, the only thing you were looking to move up to was a 480 or a 580. Even the 5800 series is under 8%. Unfortunately, that value includes the 5850 and 5830, which aren't the high end.

5% using top end cards is probably pretty close to the mark.

The 5800 series was faster at launch than the 285, so... why not move to it? I know people who moved from Nvidia to AMD when that series was launched.
 
I think the 2Gbit chips use higher voltages than the 1Gbit ones?
No, that's not true. Both 1Gbit and 2Gbit chips usually run at 1.5V. However, the 6Gbps chips the HD 6970 is using are rated for 1.6V - the highest-rated 1Gbit chip from Hynix is also rated at that (and yes, this will definitely make a difference in power consumption). This is the same as was the case with GDDR3 (or even DDR3 main memory, for that matter): the highest-performing parts are "factory overvolted".
It doesn't really matter here, however, since both the HD 6950 and HD 6970 use the same voltage, 1.6V. Just like in the past, I think I've only seen very, very rare non-reference cards (using AMD chips) that used the nominal 1.5V for GDDR5, no matter what voltage the chips were rated at.
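For a feel of how much that 1.6V rating matters: dynamic power in CMOS scales roughly with frequency times voltage squared, so at the same data rate the overvolted parts land around 14% higher on the dynamic component alone (a rough sketch; static and I/O termination power are ignored):

```python
# Rough check of the 1.6V vs 1.5V difference: dynamic CMOS power scales
# roughly as frequency * voltage^2. Static and I/O termination power are
# ignored, so this is only the dynamic component.

def relative_dynamic_power(v: float, v_ref: float) -> float:
    """Dynamic power at voltage v relative to v_ref, same clock."""
    return (v / v_ref) ** 2

delta = relative_dynamic_power(1.6, 1.5) - 1.0
print(f"1.6V vs 1.5V at the same data rate: +{delta:.1%} dynamic power")
# -> +13.8%
```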
 
The 5800 series was faster at launch than the 285, so... why not move to it? I know people who moved from Nvidia to AMD when that series was launched.

Sure, but you can see the numbers there. Under 8% using 58xx series cards. Do you really think the bulk of those are 5870s? Even at launch the 5850 was considered the value card, and the 5870 shipped in limited quantities for a long time. The 5830 joined the club later to fill a lower price point, and while not seen as good value in general by consumers, it was in a lot of OEM machines. I'd be surprised if even 1/3 of that <8% was high-end 5870s.
 
People here really think that the 6970 launch isn't a failure?

A card launched more than one year after its predecessor (5870), with almost the same MSRP, but only 15% faster.

If it's not fanboyism, I don't know what is. The worst part: the same people praising the 6970 (a new architecture, barely faster) were the ones smashing the GTX580 last month because it "was just 20% faster than GTX480".

I don't disagree it's a failure, but the "year" thing is a red herring. By the same token, it took Nvidia a "year" (8 months) to come out with the 580, which is only 15% faster than the 480. Things always have to be viewed through the prism of what the competition is doing, too.

I guess my disappointment, and I think others', is that the 580 is mostly a beefed-up 480, while the 69XX is something of a new architecture, at least much more so than the 580, yet it doesn't offer exciting performance gains. Overall, the 480:5870 and 580:6970 relative performance differences are almost a wash, though. But again, in the context of a new architecture, it's disappointing for AMD imo.
 
I am confused... how much will updated drivers help Cayman's "new" architecture?

On one hand, you have people saying they will, while other benchers don't think so.
On another hand, AMD driver release notes usually claim a 10-40% improvement with every new Catalyst, yet the driver-comparison tests I read yield virtually no big fps gains in games. But Cayman is, like, really "new", bro.
On yet another hand, I remember reading that new GPUs went from totally average at launch to pretty good gains over older GPUs - aging well, or just new games optimizing for these new GPUs. As an example, I thought I was pretty happy with a 4870 1GB after reading the 5850 launch reviews, and behold, at present I've found out again that the 5850 has been performing at a higher level than the 4870 1GB in so many new games.

Should I place my faith in the Cayman architecture and anticipate that new games will make use of it better than current ones do - better than any driver updates will ever help?

I wish sites would come back and do a retrospective review of old GPUs. At the moment, with GPU tech slowing, that makes even more sense. I no longer see the 4870 1GB in so many Cayman reviews.

On the final hand... how much longer until DX12? Will 28nm GPUs come with DX12?
 
I am confused... how much will updated drivers help Cayman's "new" architecture?

My hope is a lot; my guess is not much.

Very rarely over the years have I seen driver gains amount to much or rescue an architecture. If ever.

Just by common sense, if there were massive driver gains to be had, AMD would have had them done for launch. My guess is, sadly, that Cayman is mostly tapped out with current drivers.
 
People here really think that the 6970 launch isn't a failure?

A card launched more than one year after its predecessor (5870), with almost the same MSRP, but only 15% faster.

If it's not fanboyism, I don't know what is. The worst part: the same people praising the 6970 (a new architecture, barely faster) were the ones smashing the GTX580 last month because it "was just 20% faster than GTX480".

Cayman is faster where it counts; it considerably improves 3 of the 4 major bottlenecks of Cypress:

1) Raster operations with >8-bit color channels (2-4x improvement)
2) Tessellation (2x units + better buffering)
3) Triangle setup (2x units)
4) Memory bandwidth (only a small improvement)

What was not a bottleneck in Cypress:
5) Shader power (very small improvement)
6) Texturing power (small improvement)

So Cayman should perform much better than Cypress in those situations where Cypress did badly; worst-case FPS going from 20 to 30 is much more important than average FPS going from 60 to 90.
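The frame-time arithmetic behind that last point:

```python
# Why a 20 -> 30 fps worst case matters more than a 60 -> 90 fps average:
# the same 1.5x speedup buys three times as much frame time where it hurts.

def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

for before, after in ((20, 30), (60, 90)):
    saved = frame_time_ms(before) - frame_time_ms(after)
    print(f"{before} -> {after} fps: {saved:.1f} ms saved per frame")

# 20 -> 30 fps: 16.7 ms saved per frame
# 60 -> 90 fps:  5.6 ms saved per frame
```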
 
I've been playing with a 6970 today and I'm actually pretty impressed. While "conventional" brute-force rendering workloads are only moderately faster, it's actually a fair bit faster with more forward-looking "clever" rendering tasks.

I'm seeing up to 30% improvements vs 5870 in tile-based deferred rendering for instance whereas conventional bandwidth-heavy deferred rendering doesn't see much of a gain. Similarly the "clever" pack-based deferred MSAA implementation sees a nice little bump while the conventional branching one is no faster (although with MSAA things are still somewhat bandwidth bound).
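For readers unfamiliar with the technique, here's a minimal CPU-side sketch of the tile-based idea (the real implementation runs in a compute shader; the light counts and radii below are made up):

```python
# Minimal CPU-side sketch of tile-based light culling: split the screen into
# tiles, build a per-tile light list, and shade each pixel only against that
# short list instead of all lights. Illustrative only; a real implementation
# runs in a compute shader and also culls against the tile's min/max depth.

import random

TILE = 16
WIDTH, HEIGHT = 1920, 1080
random.seed(0)

# Hypothetical point lights as (x, y, radius) in screen space.
lights = [(random.uniform(0, WIDTH), random.uniform(0, HEIGHT),
           random.uniform(20, 120)) for _ in range(256)]

def lights_for_tile(tx: int, ty: int) -> int:
    """Count lights whose circle of influence overlaps the tile rectangle."""
    x0, y0, x1, y1 = tx * TILE, ty * TILE, (tx + 1) * TILE, (ty + 1) * TILE
    hits = 0
    for lx, ly, r in lights:
        # Distance from light centre to the nearest point of the rectangle.
        dx = max(x0 - lx, 0.0, lx - x1)
        dy = max(y0 - ly, 0.0, ly - y1)
        if dx * dx + dy * dy <= r * r:
            hits += 1
    return hits

tiles_x, tiles_y = WIDTH // TILE, HEIGHT // TILE
total = sum(lights_for_tile(tx, ty) for ty in range(tiles_y) for tx in range(tiles_x))
print(f"average lights per tile: {total / (tiles_x * tiles_y):.1f} of {len(lights)}")
# Classic deferred shading reads the G-buffer once per light (256 passes of
# bandwidth); the tiled version reads it once and loops over a few lights.
```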

Similarly in SDSM I'm seeing a 15% improvement almost across the board, with slightly higher improvements to the fancier filtering schemes (EVSM) vs the conventional brute force ones (PCF). AVSM also gets about 15% faster.
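For context on those filtering schemes, here's a minimal sketch of the variance shadow map test that EVSM builds on (the warp constant and sample values are made up for illustration, not the poster's code):

```python
# Minimal sketch of the variance shadow map test behind EVSM.
# A VSM stores two moments (E[d], E[d^2]) so the shadow map can be filtered
# like a normal texture; visibility comes from the Chebyshev inequality.
# EVSM warps depth exponentially first, which fights light bleeding.

import math

def chebyshev_upper_bound(mean: float, mean_sq: float, t: float) -> float:
    """Upper bound on the fraction of occluders in front of depth t."""
    if t <= mean:
        return 1.0  # receiver is at or in front of the occluders: fully lit
    variance = max(mean_sq - mean * mean, 1e-6)
    d = t - mean
    return variance / (variance + d * d)

def evsm_warp(depth: float, c: float = 40.0) -> float:
    """Exponential warp applied to depth before computing the moments."""
    return math.exp(c * depth)

# Filtered moments at one shadow-map texel: a single flat occluder at 0.4.
mean = evsm_warp(0.4)
mean_sq = mean * mean  # zero variance before clamping

for receiver_depth in (0.39, 0.41, 0.60):
    t = evsm_warp(receiver_depth)
    vis = chebyshev_upper_bound(mean, mean_sq, t)
    print(f"receiver at {receiver_depth}: visibility ~ {vis:.3f}")
# receiver at 0.39: visibility ~ 1.000 (in front of the occluder)
# receiver at 0.41 / 0.60: visibility ~ 0.000 (behind it, shadowed)
```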

That's not too shabby for a part with the same theoretical throughput in a number of areas. I imagine things will continue to trend this way too... we've reached the point where just throwing more power at graphics problems is not going to help a lot - we need fundamentally more efficient algorithms and for that we need "fancier" programming features (better caches, better task management, etc). Stuff that's bandwidth bound or has bad asymptotic complexity just isn't going to get much faster than it is right now.
 
My hope is a lot; my guess is not much.

Very rarely over the years have I seen driver gains amount to much or rescue an architecture. If ever.

Just by common sense, if there were massive driver gains to be had, AMD would have had them done for launch. My guess is, sadly, that Cayman is mostly tapped out with current drivers.

The x1800 XT saw massive increases a couple of months after launch, taking it from well behind the 7800 GTX to ahead of it. Unfortunately, it didn't help it much as not many sites reviewed it again until 7900 GTX and x1900 XT launched.

5830 also had some massive increases a few months after launch.

Just a few ATI examples. Even 5870 had some fair incremental increases such that it's now noticeably faster than it was at launch.

Nvidia have also had a history of some rather drastic speed increases with later drivers for some products.

Whether Cayman follows in those footsteps or not, we won't know until a few months from now.

Regards,
SB
 
The x1800 XT saw massive increases a couple of months after launch, taking it from well behind the 7800 GTX to ahead of it. Unfortunately, it didn't help it much as not many sites reviewed it again until 7900 GTX and x1900 XT launched.

5830 also had some massive increases a few months after launch.

Just a few ATI examples. Even 5870 had some fair incremental increases such that it's now noticeably faster than it was at launch.

Nvidia have also had a history of some rather drastic speed increases with later drivers for some products.

Whether Cayman follows in those footsteps or not, we won't know until a few months from now.

Regards,
SB

Can you link me to any examples?
 
Unfortunately, it didn't help it much as not many sites reviewed it again until 7900 GTX and x1900 XT launched.

And that's the bottom line: what's the point of 10 fps a year down the road when the card doesn't play games well enough as soon as they come out?

Whether Cayman follows in those footsteps or not, we won't know until a few months from now.

I am pretty sure it will, and then some, but who cares?
 
And that's the bottom line: what's the point of 10 fps a year down the road when the card doesn't play games well enough as soon as they come out?

I am pretty sure it will, and then some, but who cares?

It just worries me that you can say that. If someone at AMD/nV said this, it would be understandable.

We are customers. Of course we care a great deal if we buy a product and it constantly improves over time. And of course we think highly of an IHV that does that.

Maybe if you buy a card a year, what you say holds.
 
The x1800 XT saw massive increases a couple of months after launch, taking it from well behind the 7800 GTX to ahead of it.
Actually, I think you're confusing this with reviews where the card was included later on and where games like NfS Carbon or Gothic 3 were benchmarked. In those games, for example, Nvidia's tightly integrated texturing showed its dark side, so that made the X1800 look way better in comparison.

Unfortunately, it didn't help it much as not many sites reviewed it again until 7900 GTX and x1900 XT launched.
The X1900 launch actually was only a couple of months after the X1800 (Oct '05 - Jan '06, IIRC).

5830 also had some massive increases a few months after launch.
No. We're benchmarking the card with each driver update on our 10-game, two-resolution test course, and it did not improve on its main weakness: anti-aliasing (probably the disabled ROPs taking down L2 cache efficiency with them). It's still only 13.5% faster than the HD 5770, and the HD 5850 in turn is another 26.8% faster than the 5830. That ratio doesn't reflect the respective theoretical peaks and power consumption, so the HD 5830 is still where it was at launch: too expensive, too power hungry, and too slow compared to AMD's other products.
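The theoretical peaks behind that statement, for reference (reference clocks and unit counts for the three Evergreen parts; the measured percentages are the ones quoted above):

```python
# Theoretical ALU peaks vs the measured scaling quoted above. Reference
# specs: (stream processors, core MHz). Each SP does 2 FLOPs/clock (MAD).

specs = {
    "HD 5770": (800, 850),
    "HD 5830": (1120, 800),
    "HD 5850": (1440, 725),
}

peak = {name: sp * 2 * mhz / 1000.0 for name, (sp, mhz) in specs.items()}  # GFLOPS

print(f"5830 vs 5770 ALU peak: +{peak['HD 5830'] / peak['HD 5770'] - 1:.1%} (measured: +13.5%)")
print(f"5850 vs 5830 ALU peak: +{peak['HD 5850'] / peak['HD 5830'] - 1:.1%} (measured: +26.8%)")

# 5830 vs 5770 ALU peak: +31.8% (measured: +13.5%)
# 5850 vs 5830 ALU peak: +16.5% (measured: +26.8%)
# The 5830 undershoots its ALU advantage while the 5850 overshoots its own,
# consistent with the halved ROPs (16 vs 32) holding the 5830 back in AA.
```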
 