AMD: R9xx Speculation

Not sure why you say that, given the amount of space being dedicated to reviews on it (which is reflective of the amount of time we have put into conveying the message). For sure, though, 6970 is the less interesting of the two to look at as it does have a lot more headroom; 6950 has a much more stringent TDP budget to stick to, and without PowerTune it wouldn't have been close to the clock it is at.

But I would have liked to see an option to disable PowerTune. It has its benefits, but some users will want to push the card to its maximum regardless.
 
While in some cases it's not a big improvement over Cypress, that's expected given it's on the same process node and that some changes won't benefit all games. This launch was damaged more by excessive pre-release hype than anything particularly wrong with the cards.
 
Also, I think this proves my point that PowerTune is just an anti-Furmark switch.
While I agree that, if forced to distill PowerTune into half a sentence, I'd probably choose similar words, it seems much more elaborate than Nvidia's solution. Plus, the user-selectable Overdrive function to partially ignore the set limits makes it much more useful. I only wish I could go to, say, +30 or +40% instead of just +20%. In one particular test, GPU-Z showed me an average clock of 871 MHz over the duration despite the PowerTune slider being at +20%. :)

If Nvidia had implemented a similar function, we would have countless articles from Charlie about how Nvidia couldn't contain their TDP and how broken the architecture and chip are, and he would have bumped up a load of BS articles about bumpgate to hammer the point home.
We have those alright, don't we? :)
 
I don't think this argument makes sense.
If the architectural changes don't pay off now then it would make more sense to introduce them when they DO pay off, rather than giving your customers a worse deal, and your competitor a leg up.
Major architectural changes come to market several years after the designers made a prediction about which changes would pay off in the final design.
If the chip design pipeline waits until a feature pays off, it won't come out for two or so years.
The choice is to pick possible winners as best you can way before you know the answer, or guarantee you are late to the party.
 
While I agree that, if forced to distill PowerTune into half a sentence, I'd probably choose similar words, it seems much more elaborate than Nvidia's solution. Plus, the user-selectable Overdrive function to partially ignore the set limits makes it much more useful. I only wish I could go to, say, +30 or +40% instead of just +20%. In one particular test, GPU-Z showed me an average clock of 871 MHz over the duration despite the PowerTune slider being at +20%. :)

I agree that it is more complex, but if you break it down to its bare elements then it is just an anti-Furmark switch.

We have those alright, don't we? :)

Other than a couple of comments here and there I haven't seen any articles along those lines, and definitely not from Charlie; he seems to think 6970 and PowerTune are the second coming...
 
Is it more complex? According to Anand, Nvidia has some hardware monitoring in place, whereas AMD is employing a usage-based formula to derive an estimate of power consumption. I've never heard of any hardware monitoring on GF110; I thought it was simply application-profile based.
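To illustrate what an estimation-based limiter could look like in principle, here's a toy sketch (plain Python; the activity signal, coefficients and step size are all invented for illustration, since the actual counters and formula aren't public):

[code]
# Toy sketch of an activity-based power limiter in the spirit of PowerTune.
# The coefficients, the "activity" signal and the control loop are invented
# for illustration; the real counters and formula are not public.

def estimate_power(activity, clock_mhz, idle_w=40.0, dyn_w_per_mhz=0.22):
    """Estimate board power from unit activity (0..1) and core clock."""
    return idle_w + activity * clock_mhz * dyn_w_per_mhz

def next_clock(activity, clock_mhz, cap_w, base_mhz=500, max_mhz=880, step=10):
    """Step the clock so the *estimated* power stays under the cap."""
    if estimate_power(activity, clock_mhz) > cap_w:
        return max(base_mhz, clock_mhz - step)      # over budget: throttle
    if estimate_power(activity, clock_mhz + step) <= cap_w:
        return min(max_mhz, clock_mhz + step)       # headroom: clock back up
    return clock_mhz

# A Furmark-like load (activity ~1.0) gets pulled below 880 MHz,
# while a typical game (activity ~0.7) climbs back to the full clock.
clock = 880
for activity in (1.0, 1.0, 1.0, 0.7, 0.7, 0.7):
    clock = next_clock(activity, clock, cap_w=200)
    print(f"activity={activity:.1f} -> {clock} MHz")
[/code]

The average clock over a run (like the 871 MHz reading mentioned above) is then just the mean of whatever clocks the loop settles on for that workload.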
 
Nvidia's engineering is looking smarter all the time lately; the whole architecture introduced with the 8800 keeps looking better. It now appears AMD is having to "catch up" to them in many areas, which is bloating AMD's die size without adding speed.

For AMD's part, imo they need to drop the whole small-die pretense already. They might look a lot better if they could use up to 500 mm² in the first place. The only potential issue I could see is that such large dies could hamper the use of X2 cards due to power issues, as Nvidia has faced...but then again they might not. I don't get the sense the 5970 is just killing it in sales anyway.

However, if 28nm is truly delayed a lot, I see it being a much bigger problem for Nvidia...they are truly at max die size already, more or less, so they literally have nowhere to go. OTOH, AMD theoretically has ~140 mm² left to play with and could introduce a next-gen chip on 40nm, while Nvidia could not. A smart AMD would definitely use that to great advantage imo, but I doubt they will, as they seem cautious and not playing to win again.

I worry about AMD if they are beginning to lose the plot in GPUs. They've been behind in CPUs for a long time of course, but at least the GPU division seemed on the ball. Now cracks are appearing. I don't see AMD doing anything at all in the super-important mobile phone space either, while both Intel and Nvidia do.
 
In the rather old but nice Xvox, which I also used on GeForce cards, I am getting about 1.67 tris/clk for the HD 6970, whereas the GTX 460 is at only 1.5 (both dual-setup parts, a fairer comparison than GF100/b with its four engines; ~2.43 here).
It seems to work sometimes, but not always. That's why I'm hopeful that it's just drivers.
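For anyone wanting to redo the arithmetic, tris/clk is just the benchmark's reported triangle rate divided by the core clock. Quick sketch, with placeholder triangle rates chosen to match the quoted ratios rather than actual Xvox readings:

[code]
# tris/clk = reported triangle throughput / core clock.
# The triangle rates below are placeholders that reproduce the quoted
# ratios, not actual Xvox measurements.
cards = {
    "HD 6970 (880 MHz)": (1.47e9, 880e6),   # ~1.67 tris/clk
    "GTX 460 (675 MHz)": (1.01e9, 675e6),   # ~1.50 tris/clk
}
for name, (tris_per_s, clock_hz) in cards.items():
    print(f"{name}: {tris_per_s / clock_hz:.2f} tris/clk")
[/code]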

Many setup-bound games (you're not CPU bound, yet tripling the resolution only increases frame time by 50%) aren't showing improvement over the 6870. Cayman should have a larger improvement over its predecessors at low resolution, assuming memory and CPU aren't problems at either resolution.
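To make that parenthetical concrete, model frame time as a resolution-independent part (setup/geometry/CPU submission) plus a per-pixel part; the two data points pin down the split. A rough sketch using the numbers from the sentence above (the 10 ms baseline is arbitrary):

[code]
# Two-point fit: frame_time = fixed + per_pixel_cost * pixels.
# Triple the pixels and frame time rises only 1.5x -> roughly 75% of the
# frame is resolution-independent work (setup/geometry/CPU submission).
def fixed_fraction(t1, t2, pixel_ratio):
    per_pixel = (t2 - t1) / (pixel_ratio - 1)   # per-pixel share at the low res
    fixed = t1 - per_pixel
    return fixed / t1

print(fixed_fraction(t1=10.0, t2=15.0, pixel_ratio=3.0))  # -> 0.75
[/code]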
 
In about 2 of 6 game tests, or 1/3, the 6970 is totally crushed (well, at lower resolutions?) even by the 480: Dirt 2 and LP2. AMD really needs to optimize those. OTOH those games already get over 80 FPS, so it's debatable to me how much the numbers even matter, but then again I guess you could say that about most PC benchmarks.

I wonder how much the 2GB helps the 6970's 2560×1600 benches? Is that the main factor?


Why, to optimise?!? I don't see any problem here.


http://www.techpowerup.com/reviews/HIS/Radeon_HD_6970/14.html
 
My $370 seems to have died a little after reading those reviews....very, very confusing launch results...perhaps the delays were due to the drivers...but definitely not the CHiL VRM (all HD 69xx cards use Volterra)...lolz...now I know which sites to take with a pinch of salt....and who were the ones hyping Unigine and GTX 580-killing results...I think AMD kinda shot themselves with so much secrecy leading up to the 15th...
 
Many setup-bound games (you're not CPU bound, yet tripling the resolution only increases frame time by 50%) aren't showing improvement over the 6870. Cayman should have a larger improvement over its predecessors at low resolution, assuming memory and CPU aren't problems at either resolution.
There really aren't that many apps that are setup bound. You can see from any straight geometry tests that the dual geometry engines are working fine in terms of setup. There are more improvements to come from the drivers with regards to the tessellation changes, though.
 
Why, to optimise?!? I don't see any problem here.

At some other sites Dirt 2 was doing a lot worse, such as...

[attached benchmark chart: 34651.png]


It seems like one of those outlying games like Hawx/Hawx 2 and Lost Planet 2 where AMD just gets crushed.

I never mentioned F1 as an outlier.
 
There really aren't that many apps that are setup bound. You can see from any straight geometry tests that the dual geometry engines are working fine in terms of setup. There are more improvements to come from the drivers with regards to the tessellation changes, though.

And besides, looking at 5760x1080, 6950 x2 beats GTX 580 SLI.
http://www.hardwareheaven.com/revie...s-card-review-crossfire-eyefinity-vs-sli.html

Kind of a good deal: half the price for better performance...
 
Future proofing? Anyone? It seems they are designed with future heavy-tessellation applications in mind.

Here is an example. Look at the jump from Radeon HD 5870 to Radeon HD 6970:

[tessellation benchmark charts omitted]
 
Check out Amazon prices!

6970 goes for $480
6950 goes for $370

..... now, was there ever a last-minute price adjustment from AMD...after Nvidia surprised them with the 580 and 570 double whammy?? Guess if one were to speculate about AMD's plans for "profits" (per Dave's comments earlier in this thread), that did not turn out too well....kinda makes some sense with the renaming bits...wonder if they had stuck with 67xx and 68xx from the start...doesn't seem to make any difference now?
 
However, if 28nm is truly delayed a lot, I see it being a whole lot more problem for Nvidia...they are truly at max die size already more or less, so they literally have nowhere to go. Theoretically OTOH AMD has ~140 mm^2 left to play with. Theoretically imo AMD could introduce a next gen chip on 40nm, while Nvidia could not. A smart AMD would definitely use that imo to great advantage, but I doubt they will as they seem cautious and not playing to win again.
That is not going to happen, as AMD just released their next-generation (originally 32nm) part on 40nm: the 6970.

I am also under the impression that it takes 2 years to go from the design of a new GPU to production, so unless AMD started this next-generation 40nm design 1 1/2 years ago (and they didn't), there will be no 6970+ coming in 2011.
 
Compared to Cypress, it isn't nearly that bad, but still die area and transistors increased more than performance.
I don't think so. Looking at ComputerBase, HD6970 is faster than HD5870 by:
16% at 1920×1200 / AA 4x
31% at 2560×1600 / AA 4x

die-size increased from 334mm² to 389mm² = 16%
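The same numbers as perf per mm², using the ComputerBase gains above and the reported die sizes:

[code]
# HD 5870 -> HD 6970 perf-per-mm², using the ComputerBase gains quoted above
# and the reported die sizes (334 mm² vs 389 mm²).
area_ratio = 389 / 334                                # ~1.16x die area
for setting, perf in (("1920x1200 4xAA", 1.16), ("2560x1600 4xAA", 1.31)):
    print(f"{setting}: perf/mm² = {perf / area_ratio:.2f}x")
[/code]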

And this is a comparison of drivers that were polished for a year against a fresh and evidently buggy driver. I think this launch is very similar to R520's launch...
 
Hmm, this isn't very convincing. There's some good stuff, but overall it doesn't look like a very efficient chip compared to the in-house competition (it still does OK against the external competition).
Most sites have about a 10% difference between HD 6870 and HD 6950 overall (with the HD 5870 being as fast as the HD 6950), and another 10% between HD 6950 and HD 6970 - about 20% between full Barts and full Cayman, then. Cayman cards are definitely priced to reflect performance, though.
That's despite Cayman having a 50% larger die, 31% more memory bandwidth, way more SIMDs (and also a higher peak ALU rate), and also a power draw which is definitely larger than the increased performance would indicate (probably directly related to the increased die size/transistor count). Granted, it has definitely improved geometry setup/tessellation (with the latter still giving erroneous results in some tests where Barts is actually faster, probably due to drivers).
Also, the 10% difference between HD 6950 and HD 6970 is very small, corresponding exactly to the clock increase (core and mem) - see the quick check at the end of this post. If Cypress had very bad SIMD scaling, Cayman seems to have non-existent SIMD scaling - has anyone benched the cards at the same clock? I think I'll stick to the theory that once you go past 8 or so SIMDs per graphics engine (or rasterizer, in the case of Evergreen), things don't really improve much. Also, the speculation about not quite sufficient internal bandwidth could be true; it would certainly only get worse if you add more SIMDs (I haven't seen anything indicating internal bandwidth has improved for Cayman). So maybe the VLIW4 SIMDs are more efficient than VLIW5, but since the SIMDs hardly scale at all, it's a wasted effort for this chip to have more (but smaller) SIMDs.
Compared to Cypress it isn't that bad, but die area and transistors still increased more than performance. Granted, the two graphics engines are definitely warranted for increased tessellation performance (and it pays off in some titles using tessellation), but overall I just don't think it's very efficient.

There's also some good stuff: PowerTune imho has tremendous potential in the mobile space, but for desktop it's not nearly as important.
Cayman was initially planned for 32nm, right? If so, I can only wonder what (if anything) was sacrificed for 40nm - I think on 32nm there would have been room for some more things even without exceeding 300mm² (why not 4 GEs with 8 SIMDs each and doubled internal bandwidth :) ).
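And the quick check on the 6950 -> 6970 scaling point, using the public clocks and SIMD counts (800 -> 880 MHz core, 1250 -> 1375 MHz memory, 22 -> 24 SIMDs) and the ~10% gap as a rough review average:

[code]
# HD 6950 -> HD 6970: how much of the observed gap do clocks alone explain?
core_ratio = 880 / 800      # 1.10x core clock
mem_ratio  = 1375 / 1250    # 1.10x memory clock
simd_ratio = 24 / 22        # 1.09x SIMD count
observed   = 1.10           # ~10% gap seen in most reviews

print(f"clocks only: {core_ratio:.2f}x (mem {mem_ratio:.2f}x), "
      f"clocks*SIMDs: {core_ratio * simd_ratio:.2f}x, observed: {observed:.2f}x")
# Observed scaling matching the clock ratio but not clock*SIMD suggests the
# extra SIMDs contribute little, which is the point being made above.
[/code]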
 