AMD: R9xx Speculation

I have done some tests on that earlier this year (in our print issue 02/2010, FWIW) with the transcoder that is also used by third-party products like PowerDirector 8, which I used.

All Radeon cards were strongly accelerated using the "GPU transcoder", but from what I've measured, the difference between a rather lowly 4670 and the mighty 5870 was 2 seconds (63 vs. 61 seconds total) for our test clip converted with the "iPhone best" preset. When encoding to H.264 AVCHD, the difference became larger, but also more variable: the HD 5870 was 20 seconds faster than the 4670, at 191 seconds total vs. 211 seconds. I attribute much of the difference to core clock, because a 4830, which should generally be faster than a 4670, scored worse both times. The 5770 and 5870 were identical.
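
To put that scaling in perspective, here's a quick back-of-the-envelope check (timings taken from the runs above; the 320 vs. 1600 SP counts are the public specs, not something I measured):

```python
# Does transcode time scale with shader count? Timings (seconds) from the
# tests above; SP counts (HD 4670: 320, HD 5870: 1600) are the public specs.
times = {
    "HD 4670": {"iphone_best": 63, "avchd": 211},
    "HD 5870": {"iphone_best": 61, "avchd": 191},
}
sp_ratio = 1600 / 320  # 5x the shaders

for preset in ("iphone_best", "avchd"):
    speedup = times["HD 4670"][preset] / times["HD 5870"][preset]
    print(f"{preset}: {speedup:.2f}x faster on {sp_ratio:.0f}x the shaders")
# iphone_best: 1.03x faster on 5x the shaders
# avchd: 1.10x faster on 5x the shaders
```

A 3-10% gain on five times the ALUs is within run-to-run noise, which is what the conclusion below rests on.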

So in conclusion, I'd say either none or only an insignificant part of the work was done by the shaders. The GeForce models from the GT 220 to the GTX 285 (no Fermi back then) scaled much better, though more with clock rate than with shader count.

Also: The image quality produced was slightly different despite using the same presets - CPU output was best. Small differences in "optimizations" would help a great deal here.
 

Another example of AMD's "One Size Fits All" tactics?
 
Wasn't it mentioned back then that it became limited by the UVD decoder? (should be easy to see from the framerate compared to realtime...)
 
Another possibility is that the decoder was never designed to work @ >30fps, and could be made faster if so desired.
 
Let's assume that the decoder was designed to process about 80 megapixels per second (1080p @ 30 FPS with some reserve). If they prepared full acceleration of 4k×4k video @ 30 FPS, they would need to process about 480 megapixels per second. That would require six times more processing power, which could also be utilized for (up to six times) faster transcoding.
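
A quick sanity check of that arithmetic (a minimal sketch; I'm assuming exact 1920x1080 and 4096x4096 frames, so the figures land slightly above the rounded numbers in the post):

```python
# Rough pixel-rate arithmetic behind the decoder-sizing argument.
def mpix_per_sec(width, height, fps):
    """Megapixels per second for a given resolution and frame rate."""
    return width * height * fps / 1e6

p1080 = mpix_per_sec(1920, 1080, 30)  # ~62 MP/s -> "about 80" with some reserve
p4k   = mpix_per_sec(4096, 4096, 30)  # ~503 MP/s (~480 if you assume 4000x4000)
print(p1080, p4k, p4k / 80)           # ratio ~6.3 -> up to ~6x faster transcoding
```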
 
What are the chances of Barts XT @ 900 MHz actually beating Cypress XT @ 850 MHz in average DX10/DX11 gaming performance?
 

I'm finding the newest CyberLink MediaEspresso doesn't scale well with more SPs on the HD 5000 series. Reported GPU utilization is low (10-15% in GPU-Z). I wonder if the width of parallelism is capped in the newer version (Espresso 5.5 scaled much better between the 5770 and 5870).
 
[Attached image: 191031052025.jpg]

That would be the biggest epic fail since NV40, IMO. 6870 slower than 5850, and quite likely more expensive?

Hate to be the poor schmuck consumer who bought into the new naming scheme thinking that a 6870 should be faster than a 5870.

So I can only hope that's an epic fake. :p

Regards,
SB
 
Motherboards.org seems to be so certain about the suggested Barts XT and Barts Pro specs that they even made a video out of them.
http://www.youtube.com/watch?v=cd2QRjEtddo
I am afraid this guy doesn't have a clue what he is talking about. I've seen him state that a machine with an AMD CPU and GPU will work much better than an Intel CPU and an AMD GPU, as you get to combine the power of the CPU and GPU together!

He also said that AMD GPUs look better in games, while Nvidia GPUs play games better!

EDIT: The video has been removed. Any guess as to why? :LOL:
 
What if it's slightly faster and a bit cheaper? It wouldn't surprise me if 6870 is priced close to GTX 460 1GB.

Then it's only a failure on the level of R600. :D They spent all the time from 38xx to 58xx basically staying consistent with the naming scheme established by the 38xx. If they now just arbitrarily change the performance/market segments of the lineup without redoing the naming scheme, they are doing so purely to bilk (trick) the consumer, as there is no indication to the consumer that the performance categories behind the established conventions have changed.

In other words, it's common knowledge that x8xx is the performance category. x9xx is the dual-GPU enthusiast class. x7xx is mainstream performance. x6xx is mainstream. x5xx/x4xx/x3xx is budget territory. Anyone with a 3870 knew a 4870 was an upgrade while a 4770 probably wasn't. Likewise, a 5870 is an upgrade over a 4870 while a 5770 probably isn't. As well, an x8xx to the next gen's x8xx hasn't been a negligible 5-20% average increase (much less the alleged decrease), although there are times when the x7xx might be similar in speed to the previous gen's x8xx; in this case the 5770 was roughly similar to the 4870. That all makes sense and has been established for years now.

If AMD are going to arbitrarily restructure their naming scheme, they need to change it so the consumer knows it is going to be different going forward. Otherwise, all you are doing is unfairly taking advantage of the consumer by subverting the expectations they have for the naming scheme that you (in this case AMD) established. Therefore, if this is something they really want to do, that is, restructure their performance categories, they need to make a change similar to what they did when they went from x18xx/x19xx/29xx -> 38xx/48xx/58xx. That way there is no consumer confusion. If they don't, then they are consciously choosing to mislead and take advantage of their customers.

I blasted Nvidia in the past for being inconsistent with their naming schemes, so I don't think I should hold AMD to a different standard. This is, of course, assuming the above linked picture doesn't contain fake information. If it does, all this is probably much ado about nothing.

Regards,
SB
 
Then it's only a failure on the level of R600. :D

Regards,
SB

I actually think that Barts as the 68xx series makes a lot of sense. Bringing the x8xx series back close to the $200 mark with powerful midrange/upper-midrange cards is a good move. This time it looks like the high end will have more varied options with the different 69xx parts, and the old Juniper can still kick it in the $100-150 range. A much more balanced lineup than what they currently have.

edit: Lol you wrote a lot more after you pressed edit than I did :)
 