AMD: R9xx Speculation

The power figures are odd. Why would a 960-shader part draw more than 150W when the HD 5770 only draws 109W at the same clock?

The bus width is different. Or how about they're salvaged 5870 dies? It specifically mentions being based on the Cypress architecture, with full AA and pixel rendering performance, since the ROP count and the clock speed are the same.
Barts doesn't have to be a new architecture; Cayman, on the other hand :?:
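A rough back-of-the-envelope check on the power question, assuming (purely for illustration) that board power scales linearly with active shader count at the same clock, and ignoring the wider bus, extra ROPs and static power:

Code:
# Back-of-the-envelope check on the power question above.
# Numbers from the thread: HD 5770 = 800 shaders at 109 W; rumoured part = 960 shaders.
# Assumption (illustrative only): board power scales linearly with active shader
# count at the same clock; wider bus, extra ROPs and leakage are ignored.
hd5770_power_w = 109.0
hd5770_shaders = 800
rumoured_shaders = 960

scaled_power_w = hd5770_power_w * rumoured_shaders / hd5770_shaders
print(f"Naive shader-scaled estimate: {scaled_power_w:.0f} W")  # ~131 W

# Anything beyond that, up to the rumoured >150 W, would have to come from the
# 256-bit bus, more ROPs, higher voltage, or a salvaged larger die.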
 
Who said anything about TDP? The notion that AMD will launch a new family and use heavily crippled last-generation dies for one of the most important SKUs is ludicrous.
 
A stop-gap? A page from Nvidia's playbook?
Reworking the core and then printing "Cypress architecture" with a TDP of >150 W doesn't sound like anything else to me. Of course, if that pic is wrong, then :oops:
 
You use stickers when you have no product. Not the other way round.

Nvidia didn't even do what he's suggesting. The rebranding king G92 was offered in all its full glory. The idea that they would use half of a last generation chip for their new mid-range card is nonsense.
 
I've read that the NI dies are a little bigger than their Evergreen equivalents, so considering that AMD/ATI had some production problems at 40nm, I think that in our speculation we have to consider an architecture designed with heavy redundancy in mind.
Of course I know that GPUs are already heavily redundant, but this time they may have made it a priority over pure performance.
 
I don't have a source at hand, but I've always read that ATI can't produce as many GPUs as it would be able to sell.

You make it sound like they have production problems, and not problems with their production numbers. Big difference there.

So if they are going with bigger dies, there must be some solution to help raw production volume.

And what if they RV770'd the N.I. design? That would resolve the "production issues" they've been having for the past year and a half.
 
I don't have a source at hand, but I've always read that ATI can't produce as many GPUs as it would be able to sell.
So if they are going with bigger dies, there must be some solution to help raw production volume.

Capacity constrained is not the same as production problems; it means the fab can't build enough for them.
 
A stop-gap? A page from Nvidia's playbook?
Reworking the core and then printing "Cypress architecture" with a TDP of >150 W doesn't sound like anything else to me. Of course, if that pic is wrong, then :oops:

How do you rework the core of a salvage part? If it's already built, you can play with disabled SMs, clock speeds and voltages, but that's about it AFAIK.
 
Either it's a salvage part or it's a reworked core based on the Cypress architecture. The TDP figures (from that slide), however, seem closer to the 5830 than the 5770. It doesn't make sense for a reworked core on a mature process to have that high a TDP.
 
I don't have a source at hand, but I've always read that ATI can't produce as many GPUs as it would be able to sell.
So if they are going with bigger dies, there must be some solution to help raw production volume.

They didn't book/can't get enough wafers from TSMC. That's just bad planning. There was no production "problem".
 
Either it's a salvage part or it's a reworked core based on the Cypress architecture. The TDP figures (from that slide), however, seem closer to the 5830 than the 5770. It doesn't make sense for a reworked core on a mature process to have that high a TDP.

That slide wasn't making sense in many, many ways. It's most likely a fake; AMD would never produce something with worse performance and worse performance per watt. A simple underclocked 5850 would've blown such a product out of the water, because it's a mature chip with no R&D cost other than cost-down work -- which has probably already been done.
 
Exactly what I was thinking. The supposed specs and performance targets didn't really fit together:


  • Why waste a 256-bit memory interface on a chip supposedly not nearly as fast as Cypress ("positioned against GTX 460")?
  • Why carry over "full Cypress AA and pixel render performance" - yet keep shader and TMU counts only slightly above Juniper? What's Barts supposed to be? A mainstream chip with enough bandwidth and AA performance to render most games @ Eyefinity resolutions and 8xAA - yet bottlenecked by shader and texture performance? How could they possibly market such a card to a "usual" mainstream gamer who couldn't care less about multi-monitor setups and the difference in picture quality between 2xAA and 8xAA - but rather prefers to play his games at the highest possible detail settings?
  • ~40% difference in peak FLOPs between Barts Pro and Barts XT - yet those cards are supposed to be "positioned against" two Nvidia cards that solely differ in memory size (768MB vs. 1GB) and bandwidth (192-bit vs. 256-bit)? (See the quick FLOPs sketch after this list.)
  • A final Barts XT card that's generally slower than an HD 5850, yet consumes at least the same amount of power? "It's still more power-efficient than GTX 460" won't be enough to make reviewers overlook such a sharp decline in performance/watt.
...

Either AMD really lost their touch for what (and where) a mainstream card is really supposed to deliver - or that chart was deliberately faked to spread some misinformation and confusion.
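For reference, peak FLOPs on these VLIW5 parts is just stream processors x 2 (MAD) x clock, so the quoted ~40% gap already constrains the shader-count/clock split between the two SKUs. A quick sketch - the Barts Pro/XT configurations below are made up purely to show one combination that lands near 40%:

Code:
# Peak single-precision GFLOPS for AMD's VLIW5 parts: stream processors * 2 (MAD) * clock.
def peak_gflops(stream_processors, clock_mhz):
    return stream_processors * 2 * clock_mhz / 1000.0

# Known reference point: HD 5770 (Juniper XT), 800 SPs @ 850 MHz -> 1360 GFLOPS.
print(peak_gflops(800, 850))

# Hypothetical Barts XT vs. Barts Pro split (made-up numbers, for illustration only):
xt  = peak_gflops(960, 850)   # 1632 GFLOPS
pro = peak_gflops(800, 725)   # 1160 GFLOPS
print(xt / pro)               # ~1.41, i.e. roughly the quoted ~40% gap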
 
Why is 256-bit such a cost problem today? The Radeon 4850 had 256-bit, the 9600 GT had 256-bit.
Is AA only for high-end these days, or what? Even at 1920x1080, AA improves picture quality more than any other graphics setting.
And even low-cost computers can have full-HD monitors these days. The difference between a cheap 17-inch sub-HD monitor and a basic 22-inch full-HD one is $10-20 these days. So I don't see a reason why mainstream cards can't and shouldn't be much faster.
 
Why is 256-bit such a cost problem today? The Radeon 4850 had 256-bit, the 9600 GT had 256-bit.
A 256-bit memory interface isn't a cost problem, but it doesn't make sense to use a 256-bit interface on a product that should be barely 20% faster than a previous product, which was pretty fine with a 128-bit interface. Especially if faster GDDR5 is available. (Quick bandwidth numbers at the end of this post.)
Is AA only for high-end these days, or what? Even at 1920x1080, AA improves picture quality more than any other graphics setting.
That's true, but this product wouldn't have enough arithmetic and texturing power for ultra-high resolutions.

Well, look at the HD 4890 - it had smaller MSAA performance drops than GT200, which had twice as many ROPs and more bandwidth. Would it make sense to implement twice as many ROPs if the targeted performance is 10-20% above the old HD 4890?

32 ROPs / 256-bit would make sense only if the full-clocked Barts is targeted to perform at least at HD 5850 level :smile:
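To put rough numbers on the 128-bit vs. 256-bit point earlier in this post: memory bandwidth is just bus width (in bytes) times effective data rate. The 5.8 Gbps figure below is hypothetical, only to show that a ~20% bandwidth bump doesn't need a wider bus:

Code:
# Memory bandwidth in GB/s = (bus width in bits / 8) * effective data rate in Gbps.
def bandwidth_gbs(bus_bits, data_rate_gbps):
    return bus_bits / 8 * data_rate_gbps

# HD 5770: 128-bit @ 4.8 Gbps GDDR5 -> 76.8 GB/s.
print(bandwidth_gbs(128, 4.8))

# A part ~20% faster could stay on 128-bit with somewhat faster GDDR5
# (5.8 Gbps is a hypothetical figure for illustration): ~92.8 GB/s, about +21%.
print(bandwidth_gbs(128, 5.8))

# A 256-bit bus at the same 4.8 Gbps doubles bandwidth to 153.6 GB/s,
# which is hard to justify for a chip that's "barely 20% faster".
print(bandwidth_gbs(256, 4.8))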
 