AMD: R9xx Speculation

haha Harison, I was about to post the same and maybe add Dave's reply too. ;)

Quote:
Originally Posted by Dave Baumann View Post
Changing clock speeds is not something you do on a whim, nor can it necessarily be done quickly. Depending on where you are in a qualification cycle, changing clocks will have major ramifications that can result in potentially months of schedule alteration.

:)

My guess is driver optimizations. If Cayman is a new architecture, then maybe there are a few optimizations they could not implement before the original launch date, but which are needed to make a good case against the GTX580. You can only make a first impression once in dozens of launch reviews, so it's important to get the best performance possible from the launch drivers.

Releasing optimized drivers a month later will have no effect on the benchmark graphs, conclusions and recommendations of all these launch reviews, which will exist forever, carved in stone on the internet.
 
My point wasn't that the clocks can be changed on a whim.

My point was that clocks are basically baked in. More mature process might help, but a few more weeks won't help much.
 
Quote:
My guess is driver optimizations. If Cayman is a new architecture, then maybe there are a few optimizations they could not implement before the original launch date, but which are needed to make a good case against the GTX580. You can only make a first impression once in dozens of launch reviews, so it's important to get the best performance possible from the launch drivers.

Releasing optimized drivers a month later will have no effect on the benchmark graphs, conclusions and recommendations of all these launch reviews, which will exist forever, carved in stone on the internet.
That may well be a reason too, or some serious driver bug they need time to fix. The funny thing is, we can speculate in many ways about why the delay happened, but we'll probably never know for sure. As long as it's a good product at a nice price, I'm fine with it; a few weeks of delay doesn't matter much, and it harms AMD more than it harms me :p
 
Again, folks, one does not simply change a BIOS or clocks within a couple of weeks...

Let's say ten thousand cards have been produced already. That's on the low end for a launch, but let's pretend it's just 10,000.

Then imagine not only the time it takes to update the BIOS on every card, but also the time to put those cards through quality assurance/control: plugging them all in, running them at the new clocks, and testing them under load to detect artifacts/overheating/excess power draw.

That's MANY hours needed to check every produced card.

That's why some changes can take many months depending on where you are in the product's lifecycle. Had it been done, say, 3 months ago, before cards were being produced, it would have been easy to test: as soon as cards come off the assembly line, it's a matter of doing the usual testing. Making changes after the cards are produced... well, then you have to recheck every card that has already been built.
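Back of the envelope, those numbers can be sketched out (the 15-minutes-per-card figure and the bench count are my own assumptions, purely for illustration):

```python
# Rough estimate of re-qualification time after a post-production clock/BIOS change.
# The per-card time and bench count are invented for illustration.
cards = 10_000
minutes_per_card = 15            # flash BIOS + boot + load test at new clocks

total_hours = cards * minutes_per_card / 60
print(f"{total_hours:.0f} hours")            # 2500 hours

# Even with 20 test benches running 8-hour shifts in parallel:
benches = 20
hours_per_day = 8
days = total_hours / (benches * hours_per_day)
print(f"{days:.0f} working days")            # 16 working days
```

Whatever the real per-card figure is, the point stands: re-testing already-built stock adds weeks, not hours.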
 
ZerazaX, the time between AMD learning about GTX580 performance and the current launch date is more than a month. Maybe more, if their moles are working well. That's plenty of time to make those changes. And I doubt the cards were already finished: as Charlie said, some AIBs only got chips at the beginning of January, so they were in the middle of the cycle at best, and some AIBs probably hadn't even started when AMD informed them about the delay. It takes two to four weeks from chips in an AIB's hands to finished cards, not 3 months.

Plus, it doesn't even matter if they don't finish quality testing of all the cards; for the launch and the reviewers it's enough to have part of the stock available. After that, a steady stream of cards to the shops and no problem.
 
If the performance isn't sufficient to match/beat the GTX580, would higher clocks help? It's reasonable to expect that the core clock was planned to be 850-900MHz; let's say 880MHz. Many 40nm GPUs don't go much beyond 900MHz without tweaked voltage, and increased voltage has a quite dramatic influence on power consumption and noise, something ATI/AMD needs to keep as low as possible (ATI/AMD wouldn't be thrilled if reviewers called Cayman a second R600 etc.). So... what could they do? Increase the core clock from 880MHz to 920MHz? That would result in a 3-4% performance boost, nothing that could save a hypothetically underperforming product. I think a last-minute change of clocks is quite unlikely.
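As a quick sanity check on that clock-bump arithmetic (assuming, generously, that performance scales linearly with core clock, which in practice it doesn't quite):

```python
# Best-case performance gain from a core clock bump, assuming perfectly
# linear scaling with clock. Real gains are lower: memory bandwidth,
# CPU limits etc. don't scale along with the core.
old_clock = 880  # MHz, hypothetical planned clock
new_clock = 920  # MHz

gain = new_clock / old_clock - 1
print(f"{gain:.1%}")  # 4.5%
```

So even the theoretical ceiling of a 40MHz bump is under 5%; with sub-linear scaling, 3-4% in games is about right.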

Other speculations seem more likely: a shortage of TI MOSFETs, manufacturing capacity allocated primarily to Barts because of OEM contracts, a bug in the driver, a bug in the BIOS...
 
My guess? Tessellation/triangle throughput and drivers. They probably want a clear all-out win against the 580, as the latter is now really balanced and about as compelling as a $400-500 (but mainly $400) solution can get.
 
I'm not sure I'm following this.

Cayman is supposed to be smaller than GF110 (GTX 580) and not much bigger than Cypress.

We are speculating $300-$400 USD for the card, which is $200 under the GTX 580's MSRP and $130 less than what I'm seeing it go for at the low end.

Does Cayman even need to be faster at these price points? Think about it: Nvidia will again be forced to drop the GTX 580's price tag to compete, even if Cayman XT is 5% or 10% slower in most games.

The other thing to factor in is dual Caymans. Right now CF 6870s are faster than the GTX 580 in just about every game out there; some tests had it 30% faster. So dual Caymans on a board could end up slotting in at the $600 price point again and be much, much faster than the GTX 580. It might actually end up using similar power to the GTX 580.

Dual GTX 580s might be faster than the 6990, however it would be more expensive: $1,000 or so vs $600, and I'm sure most will prefer the cheaper option that is again within a few % of the Nvidia solution.

I would love to see Cayman come in at $400 and be faster than the GTX 580. I don't think it will happen, though.
 
The way I see it, if both Caymans are slower than GTX580, it will be priced accordingly, probably:

6950 - 299$
6970 - 399$
<free price point, occupied by NV, even if they'll make discount, not by much>
6990 - 599$

Now if the 6970 were competitive with the GTX580, or even faster, AMD could command a price premium; think of:

6930 - 299$ (a spot opens up for salvage Caymans)
6950 - 350-380$, depending on performance
6970 - 499$ (NV would be forced to lower GTX580 price by 100$ or more)
6990 - 599$ or even 699$ (!)

It would be a huge win for AMD, as well as massive profits.
 
Quote:
I already did them in my head and what I stated holds true :LOL:.
Well, you didn't do a very good job. At best you can call it a wash, but you're way, way off base in saying it's over 50% faster in "almost all games".

There's a reason that all reviews with averages across all games peg the 580 at ~45% faster than the 6870.
 
If you take into account lower resolutions and lower AA and AF settings, yeah, but neither of these cards is really stressed at those settings, so you can't really find relative performance there.
 
Sure, let's compare cards at ultra resolutions (5040x1050, 7680x1600) with 8x AA or MLAA(!); nothing else much stresses those cards anyway :p
 
Quote:
and that is being facetious since no reviews really have those settings

As much as I'm kidding, it's BS that we have the GTX580 and 5970 tested at 1024x.. but not at ultra resolutions. I hope some reviewers will get off their lazy butts and do something about that ;)
 
Quote:
If you take into account lower resolutions and lower AA and AF settings, yeah, but neither of these cards is really stressed at those settings, so you can't really find relative performance there.

http://www.computerbase.de/artikel/...-nvidia-geforce-gtx-580/20/#abschnitt_ratings - 1920 shows 40, 45 and 39% with 1/4/8 AA (2560 with aa shows framebuffer limitations not relevant for 2gb cards).
http://techpowerup.com/reviews/NVIDIA/GeForce_GTX_580/27.html - 41, 42% in 1920, 2560

I don't get why they even spend time testing those highend cards at 1024/1280..
 
But if we compare the relative performance across ALL RESOLUTIONS, we find that the R6870 is at 73% of the GTX580's performance, which automatically means the latter is 36-37% faster. Right? I think that if AMD doesn't do something foolish like artificially limiting the R6970's performance, it should be faster than the GTX580.
And it's clearly visible that the higher the stress, respectively the resolution, the bigger the performance gap between the R6870 and GTX580. I know that testing at 1024-ish resolutions doesn't make any sense, but for some people it is interesting to see how much slower their HD5450 is than a GTX580. :LOL: It's a joke, of course. :LOL:
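For anyone tripping over the percentage conversion, here is the arithmetic spelled out (using the 73% figure quoted above):

```python
# Converting relative performance figures.
# If the 6870 delivers 73% of the GTX 580's frame rate...
ratio = 0.73

# ...then the GTX 580's advantage is computed against the 6870 as baseline:
advantage = 1 / ratio - 1
print(f"{advantage:.0%}")  # 37%

# Note this is NOT the same as 100% - 73% = 27%;
# the two numbers use different baselines.
```

That baseline switch is why "card A is at X% of card B" and "card B is Y% faster" never add up to 100.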
 
Quote:
If you take into account lower resolutions and lower AA and AF settings, yeah, but neither of these cards is really stressed at those settings, so you can't really find relative performance there.

Sure, throw out 1024, and throw out any result that is beyond a playable frame rate anyway, because anything over 60fps is wasted on a 60Hz display. And while you're at it, throw out the tests that unnecessarily stress features without producing visible results.

You can't throw out blanket statements and expect to get away with defending them by eliminating any result that falls outside of your liking.
 
Ok... what if you are at a CPU bottleneck for one of the cards? What happens to those results? Do you get an appreciable performance difference? No, you don't. That's why just reading the end numbers means nothing if you don't understand what is actually going on. You can't take all resolutions, because the GTX 580 will get CPU-bottlenecked at lower resolutions and lower AA and AF settings much more than the 6870.
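That CPU-bottleneck effect is easy to illustrate with a toy frame-time model (all numbers invented, purely illustrative): each frame costs whichever is larger of the CPU time and the GPU time, so once the CPU is the limiter, a faster GPU buys nothing.

```python
# Toy model: per-frame cost is bounded by whichever of CPU or GPU is slower.
# All timings are invented for illustration.
def fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000 / max(cpu_ms, gpu_ms)

cpu_ms = 10.0  # fixed CPU cost per frame

# At low resolution both GPUs outrun the CPU -> identical fps:
print(fps(cpu_ms, gpu_ms=6.0), fps(cpu_ms, gpu_ms=8.0))          # 100.0 100.0

# At high resolution GPU cost dominates -> the real gap shows up:
print(round(fps(cpu_ms, gpu_ms=18.0), 1), fps(cpu_ms, gpu_ms=25.0))  # 55.6 40.0
```

This is why low-resolution numbers compress the gap between a fast and a slow card: both are just measuring the CPU.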
 