AMD: Volcanic Islands R1100/1200 (8***/9*** series) Speculation/Rumour Thread

Discussion in 'Architecture and Products' started by Nemo, May 7, 2013.

  1. gamervivek

    Regular Newcomer

    Joined:
    Sep 13, 2008
    Messages:
    715
    Likes Received:
    220
    Location:
    India
  2. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,496
    Likes Received:
    910
    It sure is a much better use of energy than Facebook.
     
  3. OpenGL guy

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,357
    Likes Received:
    28
    Inefficient compared to what?

    FAHBench results in ns/day:
    R9-290X: 48.3 explicit, 178.4 implicit
    Intel i7-4770K: 4.2 explicit, 4.6 implicit
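    A minimal Python sketch, dividing only the ns/day figures quoted above (no other data assumed), shows the implied GPU-over-CPU speedup:

        # Speedup implied by the FAHBench numbers above.
        results = {
            "R9-290X":  {"explicit": 48.3, "implicit": 178.4},
            "i7-4770K": {"explicit": 4.2,  "implicit": 4.6},
        }
        for solvent in ("explicit", "implicit"):
            speedup = results["R9-290X"][solvent] / results["i7-4770K"][solvent]
            print(f"{solvent}: {speedup:.1f}x")  # ~11.5x explicit, ~38.8x implicit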
     
  4. Accord1999

    Newcomer

    Joined:
    Jun 21, 2003
    Messages:
    132
    Likes Received:
    0
    I'm commenting on the general principle of distributed computing, where a lot of older, inefficient equipment is used and additional, redundant processing is needed.
     
  5. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,496
    Likes Received:
    910
    Supercomputers aren't necessarily based on recent hardware; often it's a bunch of ~4-year-old CPUs. A valid argument is that the distributed nature itself reduces efficiency, because of the added latency and the energy cost of long-distance communication, but that's not always a problem: it depends on the algorithm. For Folding@Home I don't think it's much of an issue, because there isn't a whole lot of communication.

    In a nutshell, it depends on the data_movement/computation ratio.
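    To make that ratio concrete, here is a minimal back-of-the-envelope sketch; every number in it is an illustrative assumption, not a measured Folding@Home figure:

        def comm_to_compute_ratio(bytes_moved, flops, net_bytes_per_s, flops_per_s):
            """Time spent moving data relative to time spent computing."""
            t_comm = bytes_moved / net_bytes_per_s  # seconds on the wire
            t_comp = flops / flops_per_s            # seconds crunching
            return t_comm / t_comp

        # A work unit that transfers ~50 MB over a 1 MB/s link but needs ~1e15
        # FLOPs at 100 GFLOP/s (hours of compute): communication is noise.
        print(comm_to_compute_ratio(50e6, 1e15, 1e6, 1e11))  # ~0.005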
     
  6. gkar1

    Regular

    Joined:
    Jul 20, 2002
    Messages:
    614
    Likes Received:
    7
    Pretty much all of the 7950, 7970, 280X, 290 and 290X cards are sold out on Newegg.
     
  7. thatdude90210

    Regular

    Joined:
    Aug 9, 2003
    Messages:
    937
    Likes Received:
    6
    It seems that the 290/290X are even better for Litecoin mining than Tahiti. We usually see shortages at launch; I just thought it would be for other reasons, like not enough launch units. AMD might have hit a gold mine, as long as these digital currencies keep going.
     
  8. Esrever

    Regular Newcomer

    Joined:
    Feb 6, 2013
    Messages:
    594
    Likes Received:
    298
    1 MH/s of Litecoin on a single GPU is pretty amazing. Why don't those numbers match the gaming power consumption?
     
  9. eastmen

    Legend Subscriber

    Joined:
    Mar 17, 2008
    Messages:
    9,983
    Likes Received:
    1,496
    Ugh, I gotta start doing Litecoin again with my 6950. I made about $200 doing it before (after power consumption costs).
     
  10. homerdog

    homerdog donator of the year
    Legend Veteran Subscriber

    Joined:
    Jul 25, 2008
    Messages:
    6,153
    Likes Received:
    928
    Location:
    still camping with a mauler
  11. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
    Ouch, that's pretty aggressive marketing. I love it! :wink:
     
  12. eastmen

    Legend Subscriber

    Joined:
    Mar 17, 2008
    Messages:
    9,983
    Likes Received:
    1,496
    So in what ways do you guys think the audio stuff can be improved in the next generation of cards? Just running at faster speeds, or more DSPs?
     
  13. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
    This isn't a positive indicator for AMD's quality control, in the best case.

    There are a bunch of factors I wish we could tease out, but the data presented is what we have to go by.
    The sample size isn't great, and if it were possible, I wish there were more than one 780 Ti sample, just to see whether this is something other cards exhibit too when we're being told to hold one particular chip under the magnifying glass. Generally, I'd lend more credence to Nvidia's clock promises, in part because it makes actual promises and in part because I doubt Nvidia's clocking scheme is flexible enough for them to pull this kind of stunt (to an extent; I believe at least some sites have seen a bit of variation).

    Did they put each card through a break-in period so that they had similar levels of uptime?
    What if we could compare this to a graph of power draw through the test runs?
    What if we could independently verify the on-die heat readings and fan RPM?

    We all know that this setup is excessively sensitive to the performance of the cooler, and it's come up over and over again that something as simple as reseating the cooler or applying a different TIM could measurably change the behavior of other AMD GPUs.
    What if the coolers were re-seated, or a new compound applied?
    What if the coolers were then switched?

    I'm hoping AMD's physical characterization of its chips isn't too far off, because a shortfall there would be very serious and even less forgivable. This would put more burden on the coolers being consistent. We already know that the coolers are not, and it is again to the detriment of AMD that its follow-through seems so stubbornly calibrated to fall short of Nvidia's ability to easily embarrass it.

    I can see the press samples getting some special treatment: if not cherry-picked chips, then at least extra care in assembly and delivery.

    The blame really falls on AMD for getting scooped on the behavior of its product like this.
    If the silicon isn't being pushed to its edge, it is certainly being pushed to the edge of AMD's cut-rate product engineering.
    I'm leaning slightly away from malice, if only because doing this on purpose implies more effort and due diligence than I'd give them credit for, going by past history.
    That's not to say I wouldn't accept a more nefarious explanation if a bit more evidence in support of it came up.
     
  14. gkar1

    Regular

    Joined:
    Jul 20, 2002
    Messages:
    614
    Likes Received:
    7
    The amount of hypocritical drivel coming from Nvidia and the usual suspects is pretty amusing. Where was the outrage, and the five-paragraph-long posts, when Nvidia did exactly the same?
    http://www.techhum.com/geforce-gtx-680-test-results-with-commercial-versions/

    The TechReport article takes the cake, though, claiming a 10% performance difference when their own numbers only show 3-5%. Hypocrites fishing for page views with an article spurred by Nvidia. One quick glance at Newegg and other retailers tells us why they're doing this: all AMD Tahiti and Hawaii video cards are literally sold out, and demand is trickling down even to the lower-end models :lol:
     
  15. leoneazzurro

    Regular

    Joined:
    Nov 3, 2005
    Messages:
    518
    Likes Received:
    25
    Location:
    Rome, Italy
    And what if the cards supplied by Nvidia were indeed selected as the worst among a bunch of retail cards? Marketing is a dirty business, indeed...
     
  16. Picao84

    Veteran Regular

    Joined:
    Feb 15, 2010
    Messages:
    1,551
    Likes Received:
    695
    Because it's not the same thing. GeForces have a base clock and a boost clock. The boost clock is an average of the boost behavior across samples, and nVIDIA always stated that. They said higher clocks could happen, but they were not guaranteed in any way. What they did guarantee is that 1) the card would not go below the base clock and 2) the card would achieve the advertised boost clock. That is why the increase from base clock to boost clock (the guaranteed one) is mild at best; after all, 50 MHz is nothing to be praised.

    Now look at AMD's situation with the 290X: where is the base clock? Nowhere to be found. There is no base clock. AMD only says "up to 1 GHz", which can and does mean that the card will go further down, as far down as 700-800 MHz, where it is clearly much slower than the press samples.

    It's much worse than nVIDIA: they guaranteed clocks, in the form of minimum clocks, while AMD doesn't guarantee anything. If anything, they only seem to guarantee that the card can't run faster than 1 GHz (which is also a lie :lol:). In the end AMD just put themselves in this horrible mess.
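    To make the difference between the two contracts concrete, here is a small Python sketch; the sampled clocks and the 875 MHz floor are hypothetical values chosen only to illustrate the point:

        def violates_contract(observed_mhz, floor_mhz=None, ceiling_mhz=None):
            """True if a sampled clock breaks the advertised contract."""
            if floor_mhz is not None and observed_mhz < floor_mhz:
                return True  # dropped below a guaranteed base clock
            if ceiling_mhz is not None and observed_mhz > ceiling_mhz:
                return True  # exceeded an "up to" ceiling
            return False

        # Base/boost contract: a guaranteed floor (hypothetical 875 MHz base).
        geforce_samples = [880, 940, 1006]
        print(any(violates_contract(c, floor_mhz=875) for c in geforce_samples))   # False

        # "Up to 1 GHz" contract: only a ceiling, so even a drop to 727 MHz
        # violates nothing, which is exactly the complaint above.
        r290x_samples = [727, 860, 1000]
        print(any(violates_contract(c, ceiling_mhz=1000) for c in r290x_samples))  # False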
     
  17. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
    The variance seems to be lower on average for the 680 in that review, though we haven't seen a comparison for the 780 and the like to be certain.

    Perhaps all of this is in the presentation.
    Nvidia apparently has the savvy to make a show of things by buying its competitor's cards to show how certain it is that they'll not live up to the reviews.

    There are a number of ways this could have been pulled off.
    One is that AMD cherry-picked press samples, and it's really not hard to detect.
    Another is that Nvidia checked out the review scores, found the reviewers whose samples ranged at the top of the curve, and offered to get cards for them. The sheer amount of variability and the sparseness of some reviewers' test methods would make that rather difficult, I think.
    If it's a question of curing time for the thermal compound, Nvidia would be able to give fresher cards, or at least cards AMD might not have been able to burn in.

    Is Newegg in on this conspiracy?
     
  18. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,183
    Likes Received:
    1,840
    Location:
    Finland
    Yes, but the press samples can go down there too, and the 2nd press sample AMD provided to TechReport performed closer to the retail samples than to the 1st press sample, so it's not just "golden press samples".
    Funny, too, how they used the 1st sample's BIOS on the worst-performing card of them all, and not on the others, to see the difference there.

    The MHz difference between the TechReport tests was at most 6.3% (5.5% on average) for the worst card compared to the best.
    The performance difference, however, was at most 6.2% (4.6% on average) for the worst card compared to the best.
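    Dividing the percentages quoted above gives a quick sense of how closely performance tracked clocks (a ratio near 1.0 means performance scaled one-to-one with clock):

        # Max and average deltas quoted above, worst card vs. best.
        print(6.2 / 6.3)  # ~0.98: at the max, performance tracked clocks almost 1:1
        print(4.6 / 5.5)  # ~0.84: on average, slightly sub-linear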
     
  19. Psycho

    Regular

    Joined:
    Jun 7, 2008
    Messages:
    745
    Likes Received:
    39
    Location:
    Copenhagen
    Actual clocks are irrelevant; performance matters. And especially the variance between retail and review cards. Whether the review cards are better than specified (GeForce 680+) or the retail cards are worse (R9 290) doesn't matter much; the problem is the artificially high performance in reviews.


    On another note, Asus custom 290x:
    https://www.facebook.com/media/set/?set=a.10152417414532388.1073741850.405774002387&type=1
     
    #2219 Psycho, Dec 5, 2013
    Last edited by a moderator: Dec 5, 2013
  20. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
    If you'd actually read the article...
     