AMD: R9xx Speculation

I think they meant the 6870, so it's somewhat old news. Oh, and if it's only in tessellation that we'll see up to 50% improvements, that's not much. ;-)
So Barts can offer up to a 2x improvement over Cypress in some tessellation situations, and according to the slides Cayman offers 2x the tessellation performance of Barts.

Downplaying or sandbagging? :???:
 
2 times over Cypress? Nah! So I guess it will still be behind NV.

Achieving higher tessellation efficiency for the HD 6000 series involves a number of hardware tweaks along with pushing a method called adaptive tessellation. Adaptive tessellation involves applying higher levels of tessellation to objects that are closer to the camera while objects further away will be rendered using lower levels. Using this type of method could also decrease the performance impact of applying certain anti-aliasing algorithms to tessellated scenes.

Does this translate into higher performance in comparison to the HD 5000 series? Yes, but only at lower tessellation levels. However, once the tessellation factor increases beyond a certain point, the overall tessellation performance of AMD’s HD 6000 series levels off and is only slightly above an HD 5800-series card.

http://www.hardwarecanucks.com/foru...7286-amd-radeon-hd-6870-hd-6850-review-4.html
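The distance-based idea in the quote above is straightforward to sketch. A minimal illustration (this is not AMD's implementation; the near/far thresholds, clamp range and linear falloff below are all invented):

```python
# Toy sketch of adaptive tessellation: scale the per-patch tessellation factor
# by distance to the camera. Illustrative only; the thresholds and the linear
# falloff are invented, not taken from any AMD hardware or driver.

def tess_factor(patch_distance: float,
                near: float = 5.0,       # assumed: full detail inside this distance
                far: float = 100.0,      # assumed: minimum detail beyond this distance
                max_factor: float = 64.0,
                min_factor: float = 1.0) -> float:
    """Tessellation factor that falls off linearly with camera distance."""
    # Normalize distance into [0, 1]: 0 = at/inside near, 1 = at/beyond far.
    t = min(max((patch_distance - near) / (far - near), 0.0), 1.0)
    # Nearby patches get heavy subdivision, distant ones get almost none.
    return max_factor + t * (min_factor - max_factor)

# A patch 10 units away is subdivided far more heavily than one 90 units away:
print(tess_factor(10.0))   # ~60.7
print(tess_factor(90.0))   # ~7.6
```

Fewer triangles on distant geometry also means less sub-pixel geometry for the AA resolve to chew on, which is presumably why the quote mentions a smaller anti-aliasing hit.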
 
AMD not expected to suffer much from delay of Radeon HD 6990, say sources

AMD recently postponed its planned Radeon HD 6990 GPU until the first quarter of 2011, though the reason for the delay is unclear, the sources noted. In addition to its dual-core Radeon HD 6990, AMD also plans to launch mid-range Turks and entry-level Caicos GPUs in the first quarter of 2011.

On the other hand, Nvidia is set to launch its GeForce GTX 560 in the first quarter of 2011 with performance to be 15-20% faster than the GeForce GTX 460. Nvidia is also ready to launch its GeForce GT 540 and GT 520 in the first half of 2011.

Both Nvidia and AMD declined to comment on unannounced products. http://www.digitimes.com/news/a20101115PD227.html

Dual-core GPUs have a very small market share; it will NOT make any difference if it's late.....
 
I'm a bit lost here - what do they mean by tweaking?

Increasing the core clock has already been suggested, but there's NOT enough time to do that within 3 weeks.



Maybe not enough to do in 3 weeks, but maybe doable in 5-6 weeks. AMD could have known the GTX 580's potential performance for months already; it's not like they first learned about the GTX 580 the day it launched. They could have "easily" modeled its performance even the day the GTX 480 came out.

My theory is AMD is raising clocks and fine-tuning the 6970/50 to kick the 580's and 570's respective asses.
 
I'm a bit lost here - what do they mean by tweaking?

Increasing the core clock has already been suggested, but there's NOT enough time to do that within 3 weeks.


Frequencies are validated within a range, so they can tweak within that range, but again, it won't improve performance much. I'm not sure if they are also already validating for higher-frequency OC cards; that might be a possibility too.
 
There is no point in increasing clocks if there is no chance to reach or beat GTX 580 performance, and GTX 480 performance is well known. If Cayman were not on par with those two, there would be no need to clock it like crazy: AMD would simply price the cards according to their performance, as they already did in past generations; die size and bus width allow them to do this. So I simply don't believe in a "last-minute increase". Component shortages and/or driver refinement are much more plausible to me. Or, they want to surprise everyone and launch the 6950, 6970 and 6990 all together...
 
Seriously, I have a hard time understanding how Cayman could perform worse than GF110...

Since this time AMD went with a really decent "6700" die (as in "similar to RV570", even if it was named X1950 back then...), they have no reason not to take the "big die" approach for their high-end GPU, which would lead to a considerable performance improvement, not only compared to Barts but to Cypress too.

Now, how big is "big"? With such a late introduction, an R600-sized die is clearly possible, and that's what I would have aimed for. Anything less would be a mistake, as die size matters less here than it does for a sweet-spot board.
 
That's profiling, not qualification.


OK, so you profile a batch of chips first, then you qualify each chip for a specific frequency. So if you want to change frequencies, you select the profiled chips in a certain range, and the ones that qualify for the higher frequency you can switch over without requalifying?
 
That allows binning, but if you launch a card you must respect power and thermal limits with most of the chips. So basically, once you have designed the card, cooler and power supply, you should already have chosen the maximum clocks that allow practically all GPUs to stay within the specs AND to be reliable. If you increase clocks, you can face reliability problems unless you modify the power circuitry, the cooler and some other parts, maybe including the PCB. So you need a requalification for this, that is, stress tests that may take several weeks if not months to complete.
Of course I'm speaking about reference cards here; you could even produce cards that stay within the limits at increased clocks through binning and some small adjustments, but this is not a viable solution, as you want to use as many chips as possible (that is, binning means more scrap, since fewer chips can reach higher frequencies and/or have lower leakage than the originally planned ones).
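To make that scrap argument concrete, a toy sketch (all die data and margins below are invented, nothing vendor-specific): the higher the reference clock you bin for, the fewer dies qualify.

```python
# Toy model of frequency binning: each die has a measured maximum stable
# frequency; binning for a higher reference clock shrinks the qualifying pool.
# All values are invented for illustration.

dies_fmax_mhz = [980, 880, 910, 875, 950, 820]  # hypothetical per-die Fmax

def qualifying(fmax_list, target_mhz, guard_band_mhz=20):
    """Dies considered reliable at target_mhz, with a safety guard band."""
    return [f for f in fmax_list if f >= target_mhz + guard_band_mhz]

# At an 850 MHz reference clock most dies qualify; at 950 MHz most become
# scrap (or must be sold in a lower bin), which is the waste described above.
print(len(qualifying(dies_fmax_mhz, 850)))  # 5 of 6
print(len(qualifying(dies_fmax_mhz, 950)))  # 1 of 6
```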
 
Can you explain it a bit more then? Because when you qualify chips you get a range of how the current fluctuates over the chip's different areas. Using that data you can then change the frequency and voltage settings to whatever is needed.
To be on the safe side you could only adjust the frequency down while staying at the same voltage. Every other combination of frequency and voltage adjustments isn't guaranteed to be stable or within power limits for already binned chips (I really doubt one stores a complete Shmoo plot with power consumption for each GPU during binning, it wouldn't result in bins anymore ;)).

Of course it would be possible to bin chips for reliable operation at 950 MHz @ 1.05 V @ 220 W and for operation at 850 MHz @ 1.0 V @ 180 W. If the power regulators are able to handle the higher load, then you can decide more or less on short notice if you want to ship them at the higher or the lower frequency. But that would be quite a waste of good dies (being able to run at the 850 MHz point but not at 950 MHz) and also of board expenses (for the beefier power regulation) if they really will be used only at the 850 MHz frequency. So I doubt that this is done for the maximum operating frequency.
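Incidentally, those two hypothetical operating points are roughly consistent with the usual first-order dynamic power scaling P ~ f·V² (static/leakage power ignored). A quick back-of-the-envelope check, using only the numbers from the example above:

```python
# First-order dynamic power scaling P ~ f * V^2 (static/leakage power ignored),
# applied to the two hypothetical operating points from the post above.
f0, v0, p0 = 850.0, 1.00, 180.0   # base point: 850 MHz @ 1.0 V @ 180 W
f1, v1 = 950.0, 1.05              # faster point: 950 MHz @ 1.05 V

p1 = p0 * (f1 / f0) * (v1 / v0) ** 2
print(f"{p1:.0f} W")  # ~222 W, close to the 220 W figure above
```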
 
Maybe not enough to do in 3 weeks, but maybe doable in 5-6 weeks. AMD could have known the GTX 580's potential performance for months already; it's not like they first learned about the GTX 580 the day it launched. They could have "easily" modeled its performance even the day the GTX 480 came out.

My theory is AMD is raising clocks and fine-tuning the 6970/50 to kick the 580's and 570's respective asses.

Well, I think raising the "unofficial" clock (overclocking potential) by figuring out the maximum acceptable power consumption wouldn't take so much time. That would mean AMD's reference design could be released at the originally intended clock speeds, but custom designs could have more room for higher clocks. That could also be achieved by taking more time for chip selection, to have more highly overclockable chips on hand at launch.
 