AMD: R9xx Speculation

That's low :LOL:
I was overdoing it a bit (matching may have been better), but the 6950 should easily match the 5870 in raw power (it would only take 1280 ALUs), and judging by the slide the architecture has gone through plenty of enhancements.

My bet is that the difference between the 6950 and the GTX 580 won't be significant (anything within +/- 5% is not significant to me when so many games already run at crazy high fps; even 10% or more is, in some cases/games, completely transparent to the end user).
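To put that "transparent to the end user" claim in frame-time terms, here is a quick back-of-envelope sketch (the 120 fps baseline is my own illustrative number, not from the thread):

```python
# Frame-time impact of a 10% fps lead at already-high frame rates.
# Baseline fps is a hypothetical example value.
def frame_time_ms(fps):
    return 1000.0 / fps

base = 120.0
faster = base * 1.10  # a 10% fps advantage
delta = frame_time_ms(base) - frame_time_ms(faster)
print(round(delta, 2))  # ~0.76 ms per frame
```

At 120 fps a 10% lead shaves less than a millisecond off each frame, which supports the point that such gaps are imperceptible in practice.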

I'm confident that the 6970 will beat the GTX 580 (I'd put the odds at 80%). Antilles should offer a "useless" amount of power :devilish:.

There were some rumors putting HD6950 10% below GTX580. Time will tell.
 
What are the chances that AMD is actually lowballing the performance estimates, quoting something closer to average or worst case, as with the HD 6870?
 
That would translate to 120 TMUs for 1920 SPs(?)

Yep, that's a 50% increase over Cypress: quite a bit of texturing power!

For 50% more ALUs, 20% more lanes (but higher efficiency), beefed up ROPs, 100% higher geometry throughput… Cayman looks like quite a beast.
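The 120-TMU figure follows from simple unit arithmetic, assuming Cayman keeps Cypress's layout of 16-wide SIMDs with 4 TMUs each but moves from 5-wide to 4-wide VLIW (all of this is the thread's speculation, not confirmed specs):

```python
# Back-of-envelope TMU count from the SP count, assuming 16 VLIW units
# per SIMD and 4 TMUs per SIMD, as on Cypress (speculative for Cayman).
def tmus_from_sps(sps, vliw_width):
    simds = sps // (vliw_width * 16)  # 16 VLIW units per SIMD
    return simds * 4                  # 4 TMUs per SIMD

print(tmus_from_sps(1600, 5))  # Cypress: 20 SIMDs -> 80 TMUs
print(tmus_from_sps(1920, 4))  # rumored Cayman: 30 SIMDs -> 120 TMUs
```

Under those assumptions, 1920 SPs at VLIW4 gives 30 SIMDs and hence 120 TMUs, i.e. the quoted 50% increase over Cypress's 80.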
 
Agreed :smile:

Now I really hope AMD uses this increased texturing/filtering power 'headroom' to disable all those 'optimizations' at the High Quality setting. I'd really like to see the 6900's HQ AF match NVidia's HQ AF.
 
There are none in HQ mode.
Okay, I even believe you. But the more important question is perhaps where the increased tendency to shimmer comes from. Are the TMUs using the correct number of samples but placing them slightly too close together, which yields a sharper texture (looks better in stills) but slightly violates the Nyquist-Shannon theorem (or at least sits closer to that limit than NVidia does)?
Do we really have to reverse engineer the AF sampling patterns to get an answer?
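For readers unfamiliar with the Nyquist argument here, a toy numeric sketch: a texture detail of frequency f sampled below 2f folds down to a low-frequency alias, and that alias shifts as the sampling phase changes under motion, which is exactly the shimmering effect (the frequencies below are illustrative, not measured AF behaviour):

```python
# Toy Nyquist-Shannon illustration: the apparent frequency of a pure
# tone after uniform sampling. Undersampled details fold ("alias") to
# a lower frequency that moves as the sample phase shifts.
def alias_freq(signal_hz, sample_hz):
    folded = signal_hz % sample_hz
    return min(folded, sample_hz - folded)

# Adequately sampled: a 60-cycle detail sampled 150x stays at 60.
print(alias_freq(60, 150))  # 60
# Too-sparse sampling: the same detail aliases down to 40.
print(alias_freq(60, 100))  # 40
```

The same logic applies to AF: correct sample counts placed over too small a footprint effectively raise the texture frequency relative to the sample rate, pushing the result toward (or past) that folding point.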
 
From the power slide, is it just my impression, or will Cayman have distributed clocks, just as nVIDIA has been doing?
You know, I was just thinking the same thing. It's possible of course that "clocks" just refers to the core clock and the GDDR5 memory clock, but that doesn't give a lot of parameters to adjust... AMD going to a more asynchronous setup like NV's would be a big move, so I kind of doubt it, but you never know.

"Outlier applications" btw, I guess that's AMD's diplomatic, unamused way of describing utilities like Furmark and the like. :LOL:

Now I really hope AMD uses this increased texturing/filtering power 'headroom' to disable all those 'optimizations' on High Quality setting.
WORD.

I'm sick and tired of buying high-end graphics hardware, setting all the sliders to max quality, and STILL having to look at crawling ants. :(

I'd really like to see 6900's HQ AF match NVidias HQ AF.
[Findlay]Heall... It's a-bout taym.[/Findlay]
 
Will AMD finally make some effort on the GPGPU front this time? They seem so far behind Nvidia.
I'm hoping so too.

One of the slides mentions 4x 128k L2 caches. Isn't that what Nvidia brought with Fermi (and what AMD has been lacking up until now)?

The GTX 480/580's L2 is 768k, so a little larger than AMD's (assuming these slides are correct), but maybe not by enough to make a difference. I suppose it's because the L2 slices are coupled to the memory controllers: NV's wider 384-bit bus means six 64-bit controllers, so 6x 128k, versus AMD's rumored 4x 128k on a 256-bit bus.
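The L2 totals fall straight out of the bus width if you assume one 128k slice per 64-bit memory controller (the per-slice size is this thread's reading of the slides, not a confirmed spec for Cayman):

```python
# Total L2 from bus width, assuming one L2 slice per 64-bit memory
# controller (slice size for Cayman is speculative, from the slides).
def l2_total_kb(bus_bits, slice_kb):
    controllers = bus_bits // 64
    return controllers * slice_kb

print(l2_total_kb(256, 128))  # rumored Cayman: 4 x 128k = 512k
print(l2_total_kb(384, 128))  # GTX 480/580:    6 x 128k = 768k
```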

Surely this is an interesting time to be a PC nerd! :)
 
Will AMD finally make some effort on the GPGPU front this time? They seem so far behind Nvidia.
A simple hardware refresh can't solve much by itself. A whole lot of progress has to happen:
  • NVidia is ahead on the development-support front: accelerated libraries and profiling/analysis tools.
  • Optimal use of Radeons requires efficient VLIW programming, which in practice is much harder to achieve. Compilers can't reorganize whole programs; most of the work has to be done at the algorithmic and coding stages. And for that to be easy for programmers, see the previous point.
  • Both AMD and NVidia are constrained by PCIe bandwidth (roughly 20x less than onboard RAM), along with the restrictions of the driver models.
Leaked slides mention progress at the hardware level too, so GPGPU is not being neglected ;).
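The "20x less" PCIe figure above can be checked with rough era-appropriate numbers; the exact bandwidths here are my own ballpark values (a Cypress-class card's GDDR5 versus PCIe 2.0 x16), not quotes from the slides:

```python
# Rough ratio of onboard GDDR5 bandwidth to PCIe 2.0 x16 bandwidth,
# motivating the "20x less" figure (both numbers are approximate).
gddr5_gb_s = 160.0     # ~160 GB/s for a Cypress-class card
pcie2_x16_gb_s = 8.0   # ~8 GB/s per direction for PCIe 2.0 x16

print(round(gddr5_gb_s / pcie2_x16_gb_s))  # ~20
```

This is why minimizing host-device transfers matters so much more than raw kernel throughput for many GPGPU workloads.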
 
Will AMD finally make some effort on the GPGPU front this time? They seem so far behind Nvidia.
Hum, on software surely. Did you read Beyond3D's coverage of the Fermi architecture? After that you may no longer trust NV's marketing with regard to Fermi and GPU computing.
 
Will AMD finally make some effort on the GPGPU front this time? They seem so far behind Nvidia.
Really?

Look at DP throughput, and do some analysis of slide 70; it seems there's something more to come...

2 DP MULs or ADDs, but only 1 DP MAD/FMA per clock? It seems I was right when speculating that VLIW4 = half-rate DP with semi-specialized, symmetrical units, disabled on Radeons only for product segmentation.
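A sketch of the rate pattern that speculation implies: four symmetric 32-bit lanes pair up two-per-op for DP ADD/MUL, while a DP FMA ties up all four, giving the 2:2:1 ratio quoted from the slide (the lane-pairing model is this post's conjecture, not a confirmed description of the hardware):

```python
# Peak DP ops per clock for a 4-wide VLIW unit under the speculated
# lane-pairing scheme: 2 lanes per DP add/mul, all 4 lanes per DP fma.
# This models the post's conjecture, not confirmed Cayman behaviour.
def dp_ops_per_clock(lanes, op):
    if op in ("add", "mul"):
        return lanes // 2  # two 32-bit lanes combine for one DP add/mul
    if op == "fma":
        return lanes // 4  # an FMA's wide intermediate needs all lanes
    raise ValueError(op)

print(dp_ops_per_clock(4, "add"))  # 2
print(dp_ops_per_clock(4, "fma"))  # 1
```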
 