AMD: R9xx Speculation

So you are saying that the results in your marketing slides are taken at the default power level (lower TDP)?
If the slides represent the default product specification, then all settings are at the default specification. There appears to be a lot of misconception here; it would be best to wait a while for things to become clear.

So wouldn't this give every card a potentially different experience? Just like overclocking, it all depends on whether you got one of the good ones. So how much throttling it does to stay within a certain power budget might vary from card to card... no?
'Solutions' that read input currents are actually the ones prone to that.
 
Just limiting TDP for "outlier applications" like Furmark isn't very useful in general. However, it could be extremely cool if used to improve performance by dynamically transferring power from underused to overused blocks like Intel's Turbo does. That would be sweet and make the most effective use of available TDP.
 
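The block-level power transfer described above can be sketched as a toy governor: idle blocks give up headroom so busy blocks can draw more of a fixed TDP budget. The block names and wattages below are made up for illustration; this is not AMD's actual PowerTune algorithm, just a minimal sketch of the idea.

```python
# Toy sketch of dynamically shifting a fixed TDP budget between GPU
# blocks, in the spirit of Intel's Turbo: idle blocks give up headroom
# so busy blocks can clock higher. All names and numbers are
# hypothetical, not AMD's real mechanism.

TDP_BUDGET_W = 250.0  # hypothetical board power limit

def reallocate(demand):
    """Split the TDP budget across blocks in proportion to demand.

    demand: dict of block name -> utilization in [0, 1].
    Returns dict of block name -> watts allotted.
    """
    total = sum(demand.values())
    if total == 0:
        # Nothing busy: spread the budget evenly.
        n = len(demand)
        return {b: TDP_BUDGET_W / n for b in demand}
    return {b: TDP_BUDGET_W * u / total for b, u in demand.items()}

# Shader array saturated, memory controller mostly idle, display idle:
alloc = reallocate({"shaders": 0.9, "memory": 0.1, "display": 0.0})
print(alloc)  # the shaders get the lion's share of the 250 W
```

The key property is that the blocks' allotments always sum to the fixed budget, so total board power never exceeds the TDP even while individual blocks clock up.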

One of the AMD slides mentions something like "clocks no longer need be constrained by outlier applications". Now that sounds like a breakthrough, but then the 6970 is "only" 880 MHz, about a normal clock progression. It would have been more impressive if AMD had suddenly been able to set clocks at 950 or 1000 MHz, or some other envelope-pushing number, because of this technology.

But again, I may be misunderstanding the whole thing. And maybe 880 MHz is quite a high figure compared to what they would have obtained without the tech. But that seems doubtful, because it looks like the 6970 will need every one of those 880 MHz to be competitive performance-wise, so they could hardly have been planning for less.
 
I think this dynamically adjusting clock system would be extremely useful for notebooks, much more so than for desktops (just like Turbo Boost for CPUs). Though IIRC there were no plans for a mobile Cayman.
 

Supposing these are true...

Not bad results. The 6970 seems to be equal to the 570, although we don't know the finer details of the power tuning at this point. For all we know, the cards could be at their lowest power level in these results (or the highest).

I'd expect better from LP2 but Unigine's results are hopeful.

I like how both PCBs are the same (apart from the 8-pin connector on the 6970) and they seem to share identical VRMs, which in turn means the 6950's overclocking potential will not be hindered.

The HSF seems very capable, and both cards use the same one. This should be interesting.

I'd prefer the DVI on the exhaust bracket to have been omitted and a larger exhaust hole made available there, while the side holes could be smaller or, better yet, nonexistent.

The backplate seems nice as well.

Now all that remains is to see what exactly this power tuning is and how Cayman performs, when set at the same power draw as the GF110!
 


http://forums.overclockers.co.uk/showthread.php?t=18217697
 
But then again that seems doubtful, because it looks like the 6970 will need every one of its 880 MHz to be competitive performance-wise, so it seems doubtful they could have been planning for less.

It's just marketing. The "max TDP" setting will simply be what we're accustomed to getting as "default clocks". The only way this would change things is if, as I mentioned above, different functional blocks can be dynamically up/down-clocked depending on demand for those functions. Then you wouldn't just be getting better power management but also higher performance, since you're putting more of your power consumption into useful work. Maybe that's what AMD is doing, which would be sweet. If not, then it's just a Furmark limiter.
 
I'll agree, they could pretty much ditch all the other connectors and stay with Mini-DisplayPort, e.g. like the Radeon HD 5870 EyeFinity version.
 
Now that we have naked PCB shots, why has nobody used Photoshop and determined an accurate die size yet? :p

Measuring up primitively with the only known reference, the PCI-Express x16 slot (89 mm), I get the die to be 20.8 × 18.7 mm, which coincides with the rumored 389 mm².
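The scaling trick above is easy to reproduce: use the slot's known 89 mm length to get a mm-per-pixel ratio, then convert the die's pixel dimensions. The pixel counts below are hypothetical stand-ins for whatever you measure in the photo, chosen so the result matches the numbers quoted.

```python
# Rough die-size estimate from a PCB photo, using the PCI-Express x16
# slot (89 mm per the spec) as the only known length reference.
# The pixel counts passed in are hypothetical examples, not actual
# measurements from the shots.

def die_size_mm(slot_px, die_w_px, die_h_px, slot_mm=89.0):
    """Scale die pixel dimensions to millimetres via the slot length."""
    mm_per_px = slot_mm / slot_px
    return die_w_px * mm_per_px, die_h_px * mm_per_px

w, h = die_size_mm(slot_px=890, die_w_px=208, die_h_px=187)
print(f"{w:.1f} x {h:.1f} mm = {w * h:.0f} mm^2")
# prints "20.8 x 18.7 mm = 389 mm^2"
```

Note the error compounds: a few pixels of slop in the slot measurement scales both die dimensions, so the area estimate is only good to within a few percent.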
 
Almost equal to the 580, and well within the ATX spec anyway. (Buy good cases; refuse designs that don't conform to ATX!)

So how long is the 580 then? ;) Come on, I need to measure up!

There are a lot of ATX cases that don't allow the full length of the ATX spec for the graphics card. I've got an Antec 300 that, for instance, wouldn't fit a 5970, which AMD says is within the ATX spec. I think it will take an 11-inch card, but you can't have drives in two of the hard-drive bays if it's to fit.
 
10.58 inches then. Thanks, The_Mask. I've just noticed the sticker on the side that says "slow speed board".

Edit: If anyone else needs to know, there's a fraction over 11 inches of clearance in an Antec 300, but then you can't have a drive in the adjacent bay or the one below it, to leave room for the heatsink fan.
 