AMD: Volcanic Islands R1100/1200 (8***/9*** series) Speculation/ Rumour Thread

Because it's not the same thing. GeForces have a Base and a Boost clock. The Boost clock is an average boost figure taken across samples, and nVIDIA always stated that. They said higher clocks could happen, but those were not guaranteed in any way. What they did guarantee is that 1) the card would not go below the base clock and 2) the card would achieve the advertised Boost clock. That is why the increase from the Base clock to the (guaranteed) Boost clock is mild at best; after all, 50 MHz is nothing to brag about.

Now look at AMD's situation with the 290X: where is the base clock? Nowhere to be found. There is no base clock. AMD only says "up to 1 GHz", which can and does mean that the card will clock further down, as far as 700-800 MHz, where it is clearly much slower than the press samples.

It's much worse than nVIDIA: they guaranteed clocks, in the form of minimum clocks, while AMD doesn't guarantee anything. If anything, they only look to guarantee that the card can't run faster than 1 GHz (which is also a lie :LOL:). In the end AMD just put themselves in this horrible mess.
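To make the contrast concrete, here's a minimal sketch of what each style of specification actually promises about the delivered clock; the function names and numbers are illustrative only, not anything either vendor ships:

```python
def nvidia_style_spec(delivered_mhz, base_mhz=876, typical_boost_mhz=928):
    """Base clock is a guaranteed floor; the advertised boost is a typical average."""
    assert delivered_mhz >= base_mhz, "below base clock would violate the published spec"
    return delivered_mhz  # anything at or above base is within spec

def amd_style_spec(delivered_mhz, up_to_mhz=1000):
    """'Up to 1 GHz' only caps the clock; it promises no floor at all."""
    assert delivered_mhz <= up_to_mhz, "above the advertised maximum"
    return delivered_mhz  # 780 MHz is just as 'within spec' as 1000 MHz

nvidia_style_spec(900)  # fine: above the guaranteed base
amd_style_spec(780)     # also 'fine' per the spec, despite being far below 1 GHz
```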

There are gaming situations that GK110 based cards aren't running at their "guaranteed" average boost clocks...
 
What I see from the TR piece is that if AMD can get the 290X (AIB) to stay in its boost state longer, it should perform as well as the 780 Ti! The tested 290X cards seem to run more than 50 MHz slower than the 780 Ti...

In terms of overclocking potential, though, the 780 Ti is much better...

It is impressive that Nvidia got the 780 Ti's fan to spin faster yet sound less noisy... it could be the metal casing damping the noise better than AMD's plastic shroud.
 
There are gaming situations that GK110 based cards aren't running at their "guaranteed" average boost clocks...
Similarly, in the case of AMD, you could state that there are cases where you can meet the boost clock. E.g. in the first 3 minutes of operation. Or when you happen to be in possession of a reviewer's card. ;)

AMD has been extraordinarily clumsy in this whole affair. Not spending the time and effort on a decent cooler. Specifying maximum possible clocks instead of a lower bound. Instead of delighting customers with much higher than minimum expected performance in most cases, they left the door wide open for ridicule. With exactly the same technical solution (well, except for the cooler) and a better understanding of human psychology, they could have created a much better story.

I don't think it matters all that much in the short term: these cards will sell like hotcakes no matter what, thanks to whatever crypto currency du jour is popular.

But they do create/reinforce an image of flakiness that will linger for a long time. It's similar to the perennial claims that AMD drivers are worse. I have no idea if they are, but the perception exists. Or take the Crossfire jitter, or the 4K vertical tearing. It doesn't really matter for 99% of users, but Nvidia is so incredibly good at magnifying these kinds of flaws and creating a feeling of being more reliable. This is why they can charge more for the same thing.
 
Similarly, in the case of AMD, you could state that there are cases where you can meet the boost clock. E.g. in the first 3 minutes of operation. Or when you happen to be in possession of a reviewer's card. ;)
We know that PowerTune keeps track of the power draw of the chip, which means it should be running close to max power in that time period. That it drops down afterwards puts a fair amount of blame (most?) on the cooler.
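For illustration, the behavior being described is roughly this kind of control loop; this is not AMD's actual PowerTune algorithm, and the targets and step size are invented:

```python
TEMP_TARGET_C = 95     # assumed temperature target
POWER_LIMIT_W = 290    # assumed board power limit
CLOCK_MAX_MHZ = 1000
CLOCK_STEP_MHZ = 13    # hypothetical DPM step size

def next_clock(clock_mhz, temp_c, power_w):
    """Drop the clock only when temperature or power exceeds its limit;
    otherwise climb back toward the advertised maximum."""
    if temp_c > TEMP_TARGET_C or power_w > POWER_LIMIT_W:
        return max(clock_mhz - CLOCK_STEP_MHZ, 300)        # throttle
    return min(clock_mhz + CLOCK_STEP_MHZ, CLOCK_MAX_MHZ)  # recover toward max
```

With a loop like that, the first minutes look great because the cold heatsink keeps the temperature below target; once the cooler saturates, the clock settles wherever the cooler allows, which is why the cooler takes the blame.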

AMD has been extraordinarily clumsy in this whole affair.
You're not seeing the big picture--the sheer genius of it all.
They're outsourcing their QA and product followup to their competitor, and look at all the quality frame variance tools and statistical data they're getting for free!
At this rate, they're bankrupting Nvidia with the wealth of bugs it has to track down for them.
Clumsy? Clumsy like a fox. ;)

Specifying maximum possible clocks instead of a lower bound.
Their clock scheme might be aggressive enough that its ability to dial back clocks when the GPU queues are low would violate any non-embarrassing minimum they could publish, while forcing a higher minimum would leave performance on the table.
The way around it would be some kind of profiling tool that distinguishes between throttling and idling, so that they or reviewers could provide more complete messaging. Failing that, a performance-monitoring API or toolset with a function that reports average non-idling clocks.
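Something along those lines doesn't have to be elaborate. A minimal sketch, assuming a telemetry source that exposes per-sample clock and load (the read_clock_mhz / read_gpu_load_percent hooks below are hypothetical placeholders, not an existing driver API):

```python
import time

def read_clock_mhz():
    # Placeholder: wire this up to whatever monitoring interface the vendor exposes.
    return 1000.0

def read_gpu_load_percent():
    # Placeholder: wire this up to real telemetry as well.
    return 100.0

def average_non_idle_clock(duration_s=60, interval_s=0.1, idle_load_threshold=20):
    """Sample clock and load, discard samples where the GPU is simply idling,
    and report the average clock over samples that actually had work queued."""
    busy_clocks = []
    end = time.time() + duration_s
    while time.time() < end:
        clock, load = read_clock_mhz(), read_gpu_load_percent()
        if load >= idle_load_threshold:  # busy sample: a low clock here is throttling, not idling
            busy_clocks.append(clock)
        time.sleep(interval_s)
    return sum(busy_clocks) / len(busy_clocks) if busy_clocks else None
```

A number like that would let a reviewer say "the card averaged X MHz while loaded" instead of arguing about a maximum it only occasionally touches.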

Then there's the whole "complete messaging" thing.

I don't think it matters all that much in the short term: these cards will sell like hotcakes no matter what, thanks to whatever crypto currency du jour is popular.
Might be a business model. Think MantleCoin, or the like.

It doesn't really matter for 99% of users, but Nvidia is so incredibly good at magnifying these kinds of flaws and creating a feeling of being more reliable. This is why they can charge more for the same thing.

Apparently, Nvidia is on top of both their products and their competition, like they take their craft seriously.
Their competition put its emphasis on other things besides looking at the output or consistency of its own products, going by the latency problems and microstutter, because that's how you show you care.
 
Sorry, I had not read the article (I have now), but it was baffling to me that Nvidia criticized AMD for the exact same thing they had already done in the past. Yet I did not see TR bashing Nvidia publicly for it; instead that very tech site reviewed "TurboBoost" as a great feature, nor did they do an article about the differences between sample and retail cards (like other sites did). I am not happy with the way AMD is addressing these issues, because they are real and the variance is somewhat too big, but the funny thing is that they are only following in the footsteps of NV... When they had guaranteed performance for each card, they were bashed. Now they have implemented their own "TurboBoost" and they get bashed again. Really, this is a double standard, and the bias tech sites show in one direction or the other is a bad thing.
 
There is an element of human psychology to all this.
One vendor provides a product with variable performance past a mostly consistent published baseline clock.
The other provides no baseline, but a maximum it does not guarantee.

Uber mode handles much of this, I think, but it was sub-optimal to make the non-default mode the more consistent one, and to set everything relative to a marketed maximum clock the card can only fall short of.
The default silent mode then becomes one where individual cards provide varying levels of disappointment.
That the retail samples were consistently below AMD's expectations (I don't know what methods it used to set those expectations) doesn't help, because it means that by default the product as sold carries an implication of misrepresentation on top of the inferior power consumption and acoustics.

Compare that to a quieter, more efficient card whose turbo provides varying amounts of icing on the cake.

There is variation, but we can see how having a few different positives or negatives and appropriate framing can change perception.
 
In their latest press release, AMD claims not to understand why retail performance is biased towards the low side: http://techreport.com/news/25733/amd-issues-statement-on-r9-290x-speed-variability-press-samples

At the same time, Nvidia is so confident that they are all biased towards the low side that they're willing to play the Newegg lottery on this.

Let's assume that AMD is speaking the truth: that means that they really didn't have a clue about the behavior of their own product. Stunning.
 
At the same time, Nvidia is so confident that they are all biased towards the low side that they're willing to play the Newegg lottery on this.
I really think there's something more going on with this.
1) Discounting deceit on the part of AMD.
2) Discounting a screwup on the part of AMD for the press samples or retail.

Why would Nvidia be so confident on this?
Did it buy a big selection of cards and profile them?
Even if there was a higher chance of lower performance, Techreport's sample was only two cards. It would only take one sample from the minority to blow up in Nvidia's face.
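To put a rough number on that risk (the fraction is purely hypothetical, since nobody outside the vendors knows the real distribution):

```python
# Hypothetical: suppose only 10% of retail cards perform like the press samples.
p_good = 0.10
sample_size = 2
# Probability that at least one of the two purchased cards matches the press card.
p_at_least_one = 1 - (1 - p_good) ** sample_size
print(p_at_least_one)  # 0.19 -> roughly a 1-in-5 chance the bet backfires
```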

Is there something physical in the cards that they picked up on, like a noted change in manufacturing tolerances, the rough handling of retail warehousing and shipping, or a lack of curing period on fresh retail samples?

Did Nvidia offer this deal to other sites?
What were the publishing conditions of this deal?
Can Nvidia decide what can be published if the results were different?
Would sites that found no significant variation decide there was nothing to publish?

Let's assume that AMD is speaking the truth: that means that they really didn't have a clue about the behavior of their own product. Stunning.
See microstutter and frame latency over the last year, although I believe at least the Crossfire stutter was known and considered WAD (working as designed).
 
Almost every editor removes the cooler of the press sample to look at the GPU, PCB etc. It's then necessary to clean off the remains of the original TIM and apply a new one (almost anything is better than the original). That may improve thermal transfer and performance, of course. Couldn't this be the reason why the press samples are slightly faster than most retail boards?
 
I wonder what goes through AMD engineers' minds when they see perf graphs like these...
[chart: GTX-780-TI-GB-36.jpg]

It's like 20nm came early... going from a 7970/280X to a 290X gets you 30% more fps, while going from a 680/770 to an overclocked 780 Ti gains 80% more fps!

and this...
[chart: GTX-780-TI-GB-50.jpg]

Dave, you guys need to get your PowerTune BIOS right soon... please don't wait until next gen to fix it...
 
Almost every editor removes the cooler of the press sample to look at the GPU, PCB etc. It's then necessary to clean off the remains of the original TIM and apply a new one (almost anything is better than the original). That may improve thermal transfer and performance, of course. Couldn't this be the reason why the press samples are slightly faster than most retail boards?
In the comments, Scott Wasson says explicitly that he never removed the cooler of any of his samples.
 
Dave, you guys need to get your PowerTune BIOS right soon... please don't wait until next gen to fix it...
What's wrong with that picture specifically?
Discounting the special OC versions, the wobblier line is on average likely better than it would be if AMD had made it flat for Hawaii.
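As a toy illustration with made-up numbers: a clock that wobbles still averages higher than the flat clock AMD could have guaranteed under all conditions.

```python
# Invented clock trace: wobbling between 860 and 1000 MHz still averages higher
# than a flat line pinned at the worst case the cooler could always sustain.
wobbly_trace_mhz = [1000, 980, 900, 860, 940, 1000, 890, 870, 960, 1000]
flat_worst_case_mhz = 860

print(sum(wobbly_trace_mhz) / len(wobbly_trace_mhz))  # 940.0 MHz average
print(flat_worst_case_mhz)                            # what a guaranteed flat line would give
```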
 
Almost every editor removes the cooler of the press sample to look at the GPU, PCB etc. It's then necessary to clean off the remains of the original TIM and apply a new one (almost anything is better than the original). That may improve thermal transfer and performance, of course. Couldn't this be the reason why the press samples are slightly faster than most retail boards?

Is this really such a common practice? Did the Tech Report do it?

Edit: oops, Silent Guy answered that last part.
 
Alexko: Every site showing a photo of the die / PCB / memory modules etc. had to remove the cooler.
But in this case the cause is likely somewhere else...

The clocks are related to temperature, and temperature is related to fan speed. The fan-speed variability was fixed and the TIM was untouched in this case, so the last thing that could affect thermal transfer is the heatsink / vapor chamber. A simple swap of coolers between the press sample and another board would likely tell us more.
 
The clocks are related to temperature, and temperature is related to fan speed. The fan-speed variability was fixed and the TIM was untouched in this case, so the last thing that could affect thermal transfer is the heatsink / vapor chamber. A simple swap of coolers between the press sample and another board would likely tell us more.

If they swap coolers, they also need to test the case where they remove the cooler and reinstall it on the same card, applying TIM the way they normally would.

Otherwise, swapping changes the cooler and the TIM application simultaneously.
A more rigorous test would reseat and retest multiple times, then swap and reseat and retest multiple times again. There can be measurable differences between compound applications, so multiple tests would be needed to get an idea of how variable that is.
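A sketch of how those repeated runs would be compared; the numbers are invented stand-ins for whatever a reviewer would actually measure:

```python
import statistics

# Hypothetical average clocks (MHz) over a benchmark run, after repeatedly
# reseating the cooler: five reseats with the original cooler, five after swapping.
reseat_runs_original = [930, 912, 921, 905, 918]
reseat_runs_swapped = [951, 944, 958, 940, 949]

for label, runs in (("original cooler", reseat_runs_original),
                    ("swapped cooler", reseat_runs_swapped)):
    print(label, round(statistics.mean(runs), 1), round(statistics.stdev(runs), 1))

# Only if the gap between the two means clearly exceeds the reseat-to-reseat
# spread can the difference be pinned on the heatsink rather than the TIM job.
```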

Even past that, I would wonder about secondary contributors to power draw and temperature.
What is the power draw of the fan, and does it figure, even a little, into when the card is power-limited?

What if we had a heat source with controllable power output and used it to test the behavior of the heatsink over time? There's definitely a period where the thermal mass of the cooler can keep temps low, which probably explains the first part of each test run, where clocks run at max.
Is there a power level at which performance degrades, such as heat pipe or vapor chamber dry-out?
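A crude lumped thermal model shows why the start of a run stays cool; all the parameters here are guesses for illustration, not measured values for the 290X cooler:

```python
# One-node thermal model: the heatsink's mass absorbs heat until it saturates.
heat_w = 250.0                 # constant heater power, standing in for the GPU
heat_capacity_j_per_c = 800.0  # assumed thermal mass of the heatsink assembly
resistance_c_per_w = 0.28      # assumed heatsink-to-ambient thermal resistance
ambient_c, temp_c, dt_s = 25.0, 25.0, 1.0

for t in range(0, 601, 60):    # ten minutes, reported once per minute
    print(f"{t:4d} s: {temp_c:5.1f} C")
    for _ in range(60):
        # dT/dt = (P - (T - T_ambient) / R) / C, integrated with a 1 s Euler step
        temp_c += dt_s * (heat_w - (temp_c - ambient_c) / resistance_c_per_w) / heat_capacity_j_per_c
```

With these guessed values the steady state is around 95 °C, but it takes several minutes to get there, which matches the pattern of clocks sitting at maximum early in each test run.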

We don't have a good idea of how consistent the mechanical and physical portions of the card are, aside from the fans, which we already know can vary very significantly.
 