AMD: Volcanic Islands R1100/1200 (8***/9*** series) Speculation/Rumour Thread

Is it anywhere near enough to justify the electricity consumption? Distributed computing is merely an inefficient method of using inefficient hardware to solve a task.
Inefficient compared to what?

FAHBench results in ns/day:
R9-290X: 48.3 explicit, 178.4 implicit
Intel i7-4770K: 4.2 explicit, 4.6 implicit
 
Super-computers aren't necessarily based on recent hardware. Often it's a bunch of ~4-year-old CPUs. A valid argument is that the distributed nature itself reduces efficiency because of a lot of added latency and because of the energy cost of distant communication, but that's not always a problem; it depends on the algorithm. For Folding@Home I don't think it's much of an issue, because there's not a whole lot of communication.

In a nutshell, it depends on the data-movement-to-computation ratio.
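To put those FAHBench numbers in efficiency terms, here's a minimal perf-per-watt sketch; the wattage figures are assumed TDPs (~290W board power for the R9-290X, 84W for the i7-4770K), not measured draw under FAH load:

```python
# Rough perf-per-watt comparison from the FAHBench numbers above.
# The power figures are assumed TDPs, not measured load under FAH.

devices = {
    # name: (explicit ns/day, implicit ns/day, assumed power in watts)
    "R9-290X":  (48.3, 178.4, 290.0),
    "i7-4770K": (4.2,  4.6,   84.0),
}

for name, (explicit, implicit, watts) in devices.items():
    print(f"{name}: {explicit / watts:.3f} ns/day/W explicit, "
          f"{implicit / watts:.3f} ns/day/W implicit")
```

Even with generous assumptions for the CPU's power draw, the GPU comes out well ahead on this workload, which is the point about efficiency above.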
 
Ugh, I gotta start mining Litecoin again with my 6950. I made about $200 doing it before (after power consumption costs).
 
So in what ways do you guys think the audio stuff can be improved in the next generation of cards? Just running at faster speeds, or more DSPs?
 
Heh, NVIDIA was so confident that the retail 290Xes are inferior to the review samples that they bought a couple of 290Xes from Newegg and had them drop-shipped to Scott :D
http://techreport.com/review/25712/are-retail-radeon-r9-290x-cards-slower-than-press-samples

Of course, the results did show that the review sample is superior. Dirty business... wonder if AMD actually thought nobody would notice...

This isn't a positive indicator for AMD's quality control, in the best case.

There are a bunch of factors I wish we could tease out, but the data presented is what we have to go by.
The sample size isn't great, and I wish there had been more than one 780 Ti sample, just to see whether this is something other cards exhibit too while we're being told to hold one particular chip under the magnifying glass. Generally, I'd lend more credence to Nvidia's clock promises, in part because it makes actual promises and in part because I doubt Nvidia's clocking scheme is flexible enough to pull this kind of stunt (to an extent; I believe at least some sites have seen a bit of variation).

Did they put each card through a break-in period so that they had similar levels of uptime?
What if we could compare this to a graph of power draw through the test runs?
What if we could independently verify the on-die heat readings and fan RPM?

We all know that this setup is excessively sensitive to the performance of the cooler, and it's come up over and over again that something as simple as reseating the cooler or applying a different TIM could measurably change the behavior of other AMD GPUs.
What if the coolers were re-seated, or a new compound applied?
What if the coolers were then switched?

I'm hoping AMD's physical characterization of its chips isn't too far off, because a shortfall there would be very serious and even less forgivable. This would put more burden on the coolers being consistent. We already know that the coolers are not, and it is again to the detriment of AMD that its follow-through seems so stubbornly calibrated to fall short of Nvidia's ability to easily embarrass it.

I can see the press samples getting some special treatment: if not cherry-picked chips, then at least extra care in assembly and delivery.

The blame really falls on AMD for getting scooped on the behavior of its own product like this.
If the silicon isn't being pushed to its edge, it is certainly being pushed to the edge of AMD's cut-rate product engineering.
I'm leaning slightly away from malice, if only because doing this on purpose implies more effort and due diligence than I'd give them credit for, judging from past history.
That's not to say I wouldn't accept a more nefarious explanation if a bit more evidence in support of it came up.
 
The amount of hypocritical drivel coming from Nvidia and the usual suspects is pretty amusing. Where was the outrage and the five-paragraph-long posts when Nvidia did exactly the same thing?
http://www.techhum.com/geforce-gtx-680-test-results-with-commercial-versions/

The TechReport article takes the cake, though, claiming a 10% performance difference when their own numbers only show a 3-5% difference. Hypocrites fishing for page views on an article spurred by Nvidia. One quick glance at Newegg and other retailers tells us why they are doing this: all AMD Tahiti and Hawaii video cards are literally sold out, and demand is trickling down even to the lower-end models :LOL:
 
And what if the cards bought by Nvidia were indeed selected as the worst among a bunch of retail cards? Marketing is a dirty business, indeed...
 
The amount of hypocritical drivel coming from Nvidia and the usual suspects is pretty amusing. Where was the outrage and the five-paragraph-long posts when Nvidia did exactly the same thing?
http://www.techhum.com/geforce-gtx-680-test-results-with-commercial-versions/

The TechReport article takes the cake, though, claiming a 10% performance difference when their own numbers only show a 3-5% difference. Hypocrites fishing for page views on an article spurred by Nvidia. One quick glance at Newegg and other retailers tells us why they are doing this: all AMD Tahiti and Hawaii video cards are literally sold out, and demand is trickling down even to the lower-end models :LOL:

Because it's not the same thing. GeForces have a Base and a Boost clock. The Boost clock is an average of the boost behavior seen across samples, and nVIDIA always stated that. They said higher clocks could happen, but those were never guaranteed in any way. What they did guarantee is that 1) the card would not go below the Base clock and 2) the card would achieve the advertised Boost clock. That is why the step from the Base clock to the (guaranteed) Boost clock is mild at best... after all, 50MHz is nothing to be praised.

Now look at AMD's situation with the 290X: where is the base clock? Nowhere to be found. There is no base clock. AMD only says "up to 1GHz", which can and does mean that the card will clock down much further, as far down as 700-800MHz, where it clearly is much slower than the press samples.

It's much worse than nVIDIA: they guaranteed clocks, in the form of minimum clocks, while AMD doesn't guarantee anything. If anything, they only look to guarantee that the card can't run faster than 1GHz (which is also a lie :LOL:). In the end AMD just put themselves in this horrible mess.
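To make the asymmetry explicit, here's a toy sketch of the two advertising models as described above; the 900MHz base figure is a placeholder of mine, and only the 1GHz "up to" number comes from the discussion:

```python
# Toy sketch of the two advertising models described above.
# All MHz values except the 1GHz "up to" figure are illustrative placeholders.

def violates_nvidia_style_spec(observed_mhz, base_mhz=900):
    """Base clock is advertised as a guaranteed floor; boost is an average."""
    return observed_mhz < base_mhz       # dipping below base breaks the promise

def violates_amd_style_spec(observed_mhz, up_to_mhz=1000):
    """'Up to 1GHz' only states a ceiling; there is no advertised floor."""
    return observed_mhz > up_to_mhz      # 700-800MHz still technically fits the spec

for clock in (750, 880, 1000):
    nv = "breaks" if violates_nvidia_style_spec(clock) else "fits"
    amd = "breaks" if violates_amd_style_spec(clock) else "fits"
    print(f"{clock}MHz -> {nv} an Nvidia-style spec, {amd} an AMD-style spec")
```

The point of the sketch is just that a card throttling to 750MHz would violate a guaranteed floor but not an "up to" ceiling.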
 
The variance seems on average to be lower for the 680 in that review, though we haven't seen a comparison for the 780 and the like to be certain.

Perhaps all of this is in the presentation.
Nvidia apparently has the savvy to make a show of things by buying its competitor's cards to show how certain it is that they'll not live up to the reviews.

There's a number of ways this could have been pulled off.
One is that AMD cherry-picked press samples, and it's really not hard to detect.
Another is that Nvidia checked out the review scores, found the reviewers whose samples ranged at the top of the curve, and offered to get cards for them. The sheer amount of variability and the sparseness of some reviewers' test methods would make that rather difficult, I think.
If it's a question of curing time for the thermal compound, Nvidia would be able to supply fresher cards, or at least cards AMD might not have been able to burn in.

And what if the cards bought by Nvidia were indeed selected as the worst among a bunch of retail cards? Marketing is a dirty business, indeed...
Is Newegg in on this conspiracy?
 
Now look at AMD's situation with the 290X: where is the base clock? Nowhere to be found. There is no base clock. AMD only says "up to 1GHz", which can and does mean that the card will clock down much further, as far down as 700-800MHz, where it clearly is much slower than the press samples.
Yes, but the press samples can go down there too, and the 2nd press sample provided by AMD to TechReport showed performance closer to the retail samples than the 1st press sample did, so it's not just a matter of "golden press samples".
Funny, too, how they used the 1st sample's BIOS on the worst-performing card of them all, and not on the others to see the difference on those.

In the TechReport tests, the MHz difference between the worst card and the best was at most 6.3%, and 5.5% on average.
The performance difference, however, was at most 6.2%, and 4.6% on average.
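For what it's worth, this is how those worst-vs-best percentages fall out of per-card averages; the clock and frame-rate values below are placeholders of mine, not TechReport's measured data:

```python
# How the worst-vs-best percentages above are computed from per-card averages.
# The MHz/fps values here are placeholders, not TechReport's actual numbers.

best  = {"avg_mhz": 1000.0, "avg_fps": 60.00}
worst = {"avg_mhz":  945.0, "avg_fps": 57.25}

clock_gap = (best["avg_mhz"] - worst["avg_mhz"]) / best["avg_mhz"] * 100
perf_gap  = (best["avg_fps"] - worst["avg_fps"]) / best["avg_fps"] * 100

print(f"clock deficit: {clock_gap:.1f}%, performance deficit: {perf_gap:.1f}%")
```

With those placeholder inputs the clock deficit comes out slightly larger than the performance deficit, which is the shape of the relationship being described above.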
 
Because it's not the same thing. GeForces have a Base and a Boost clock.

Actual clocks are irrelevant - performance matters, and especially the variance between retail and review cards. Whether the review cards are better than specified (GeForce 680+) or the retail cards are worse (R9 290) doesn't matter much - the problem is the artificially inflated performance in reviews.


On another note, an Asus custom 290X:
https://www.facebook.com/media/set/?set=a.10152417414532388.1073741850.405774002387&type=1
 