AMD Polaris Rumors and Discussion

Discussion in 'Architecture and Products' started by gamervivek, Dec 6, 2016.

  1. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,143
    Likes Received:
    1,830
    Location:
    Finland
    It would have if they hadn't cranked the OCs so high. Sapphire's Nitro+ has a milder OC over AMD reference clocks and has better performance per watt than the RX 580.
    An RX 590 at RX 580 clocks would have clearly better perf/watt.
     
    ToTTenTranz and Ryan Smith like this.
  2. yuri

    Newcomer

    Joined:
    Jun 2, 2010
    Messages:
    178
    Likes Received:
    147
    10% above the GTX 1060 with 2× the power (125 W vs 249 W)... This is not even funny anymore.
     
    Geeforcer and pharma like this.
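
    For context, a quick back-of-envelope using only the figures quoted in that post (the ~10% performance gap and the 125 W / 249 W board powers, taken as given rather than measured here):

```python
# Perf/W implied by the numbers quoted above (RX 590 ~10% faster than a
# GTX 1060, 249 W vs 125 W). The inputs are the post's figures, not new data.
gtx1060_perf, gtx1060_power = 1.00, 125.0   # normalized performance, watts
rx590_perf, rx590_power     = 1.10, 249.0

ppw_1060 = gtx1060_perf / gtx1060_power
ppw_590  = rx590_perf / rx590_power

print(f"GTX 1060 perf/W: {ppw_1060:.4f}")
print(f"RX 590   perf/W: {ppw_590:.4f}")
print(f"RX 590 relative efficiency: {ppw_590 / ppw_1060:.0%}")   # ~55%
```

    In other words, at those numbers the RX 590 delivers roughly 55% of the GTX 1060's performance per watt.
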
  3. Geeforcer

    Geeforcer Harmlessly Evil
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,297
    Likes Received:
    464
    Polaris may be 2.5 times brighter today than when Ptolemy observed it in A.D. 169, but it has also been dragged along 2.5 years longer than anyone had anticipated when we observed it in A.D. 2016.
     
    Kej, Garrett Weaving, eloyc and 8 others like this.
  4. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,143
    Likes Received:
    1,830
    Location:
    Finland
  5. homerdog

    homerdog donator of the year
    Legend Veteran Subscriber

    Joined:
    Jul 25, 2008
    Messages:
    6,124
    Likes Received:
    902
    Location:
    still camping with a mauler
    Don't care about perf/$ or perf/watt, Fatboy is the best name ever, would buy. Maybe they'll make a Fat Man, but it probably wouldn't do well in the Japanese market.
     
  6. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,780
    Likes Received:
    4,431
    If it bothers you so much, then it's great that the GTX1060 exists for you.

    People who are stuck with a 300W power supply and/or a tiny SFF case with restricted airflow should not be looking at the RX590.
    People with >450W PSUs and regular cases, who don't have time to play videogames more than ~15 hours/week, will find a better deal in the RX590 than the GTX1060. And the advantage gets significantly larger for those using cheap FreeSync monitors.
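
    To put the ~15 hours/week figure in perspective, here is a rough sketch of the annual electricity cost difference; the ~125 W gaming-power delta is taken from the figures earlier in the thread, and the €0.25/kWh rate is an assumed European price, not a quoted one:

```python
# Rough yearly running-cost difference between the two cards at light use.
# Assumptions: ~125 W higher draw while gaming (from the 125 W vs 249 W figures
# above), 15 hours of gaming per week, and an assumed 0.25 EUR/kWh electricity rate.
power_delta_w  = 249 - 125        # extra watts while gaming
hours_per_week = 15
price_per_kwh  = 0.25             # EUR, assumed

kwh_per_year  = power_delta_w / 1000 * hours_per_week * 52
cost_per_year = kwh_per_year * price_per_kwh
print(f"~{kwh_per_year:.0f} kWh/year extra, about EUR {cost_per_year:.0f}/year")
```

    Whether roughly EUR 25/year matters against the purchase price difference and a cheap FreeSync monitor is the trade-off being argued here.
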
     
  7. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,143
    Likes Received:
    1,830
    Location:
    Finland
    To add to this, our sample was quickly tested and ran fine at 1.1 V (default 1.15 V), which resulted in 50 W lower consumption
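
    As a sanity check on that number, a minimal sketch of how much of the saving pure V² scaling of dynamic power would explain, assuming fixed clocks; the ~232 W board power and the core-rail share are assumptions for illustration:

```python
# How much of the ~50 W saving does V^2 scaling alone account for?
# Assumes dynamic power ~ f * V^2 at fixed clocks. Board power (~232 W) and
# the share of it on the GPU core rail are assumed values, not measurements.
v_default, v_undervolt = 1.15, 1.10
board_power = 232.0            # W, typical RX 590 gaming draw (assumed)
core_share  = 0.75             # assumed fraction of board power on the core rail

dynamic_scale = (v_undervolt / v_default) ** 2            # ~0.915
core_saving = board_power * core_share * (1 - dynamic_scale)
print(f"V^2 scaling alone: ~{core_saving:.0f} W")          # ~15 W
```

    A ~50 W drop is considerably more than that, which is consistent with leakage also falling at the lower voltage, among other effects.
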
     
  8. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,489
    Likes Received:
    907
    Is there that much voltage margin on GeForces?
     
  9. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,780
    Likes Received:
    4,431
    Nope.
    AMD seemingly likes to overvolt everything...
    I have yet to hear of a single case (Hawaii and later AMD cards) where people haven't gained a significant amount of efficiency from undervolting.
     
  10. function

    function None functional
    Legend Veteran

    Joined:
    Mar 27, 2003
    Messages:
    5,135
    Likes Received:
    2,248
    Location:
    Wrong thread
    AMD must be desperate to maximise yields with no regard for efficiency.

    Nvidia have the gimped 3GB 1060 for chips that can't make it as a 'proper' 1060.
     
  11. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,780
    Likes Received:
    4,431
    But even AMD's own cards with gimped chips get significant gains with undervolting...

    I just think whoever at AMD is writing the electrical guidelines on GPUs for AIBs was physically, mentally and spiritually hurt by a professional graphics card undervolter.
     
    #271 ToTTenTranz, Nov 16, 2018
    Last edited: Nov 16, 2018
    Picao84 likes this.
  12. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,489
    Likes Received:
    907
    Assuming that this really is a bigger phenomenon for Radeons than GeForces, there must be a good reason for it.

    Too little volume on Radeons to have enough separate bins? Not enough resources at AMD to fund proper binning? So little power-efficiency compared to NVIDIA that AMD just figured "fuck it, we're going to lose the power-efficiency contest anyway, so let's crank voltage to the moon and make the binning process as cheap as it can be"?
     
    swaaye and DavidGraham like this.
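
    A toy model of the binning trade-off being speculated about here: if one shipped voltage has to cover (nearly) every die instead of each die being calibrated individually, that voltage ends up set by the weak tail of the distribution. All numbers below are invented for illustration:

```python
# Toy model: one voltage for all dice vs. per-die calibration.
# The Vmin distribution, margin and yield target are made-up numbers.
import random, statistics

random.seed(0)
# Hypothetical per-die minimum stable voltage (V) under the vendor's test suite.
vmin = [random.gauss(1.02, 0.04) for _ in range(10_000)]

# One-voltage-fits-all: pick a value that keeps ~99.9% of dice usable.
shipped_v = sorted(vmin)[int(0.999 * len(vmin))]

# Ideal per-die calibration: each die runs at its own vmin plus a small margin.
margin = 0.02
avg_calibrated_v = statistics.mean(v + margin for v in vmin)

print(f"one shipped voltage       : {shipped_v:.3f} V")
print(f"average calibrated voltage: {avg_calibrated_v:.3f} V")
# Dynamic power scales ~V^2, so the blanket voltage costs roughly this much extra:
print(f"extra dynamic power       : {(shipped_v / avg_calibrated_v)**2 - 1:.0%}")
```

    Whether AMD's reality looks anything like this is exactly the open question; the point is only that skipping fine-grained binning pushes a vendor toward a conservative blanket voltage.
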
  13. Rootax

    Veteran Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    1,149
    Likes Received:
    570
    Location:
    France
    A mix of all that I guess.
     
  14. yuri

    Newcomer

    Joined:
    Jun 2, 2010
    Messages:
    178
    Likes Received:
    147
    You seem to take it kinda personally.

    That particular card has average power consumption similar to the R9 290X non-Über (232 W vs 236 W). Considering the recent climate shift in Europe, the power/heat metric has somehow grown in importance for many.
     
  15. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,120
    Likes Received:
    2,867
    Location:
    Well within 3d
    Yield maximization seems plausible. Other trends suggest that AMD's ability to characterize silicon for its various dynamic voltage and frequency measures has been less than perfect on the first try, but that seems like a poor excuse for an architecture as mature as Polaris. I presume that AMD doesn't prioritize those features, or that level of efficiency, for the client SKUs at least, though I haven't seen a systematic comparison with the professional and datacenter ones.

    Another possibility is that the standards applied by undervolters and by the manufacturer's validation suites are very different. Corner cases or functional blocks that are poorly exercised by benchmarks may not flake out, or may only produce relatively uncommon bit errors that don't manifest in significant ways without a testing regime built to find them. Some error types are also recoverable and not typically reported to users, but these would likely throw a red flag in validation. Tweaks to the fabric voltages and clocks for Zen can start causing internal error codes to be thrown even though the chip remains functional, and mechanisms like the CRC retry in GDDR5 and clock-stretching for vdroop can keep the chip running while losing cycles to error recovery. Too many of those incidents would likely not allow a chip to pass, but whether most users are able to tease out that effect is unclear.
     
    Lightman, Bob, Alexko and 3 others like this.
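
    To illustrate the "losing cycles to error recovery" point: recoverable retries (GDDR5 CRC replays, clock stretching on droop) can keep a marginal setting "working" while quietly taxing throughput, at rates a user eyeballing frame rates would never notice but a validation suite would count. The error rates and retry cost below are hypothetical:

```python
# Effective throughput when a fraction of transactions need a recoverable retry
# (e.g. a GDDR5 CRC replay or a stretched clock cycle). Numbers are hypothetical.
def effective_throughput(error_rate: float, retry_cost: float) -> float:
    """error_rate: fraction of transactions that fail and retry,
    retry_cost: cost of one retry, measured in transaction times."""
    return 1.0 / (1.0 + error_rate * retry_cost)

for err in (0.0, 1e-4, 1e-3, 1e-2):
    t = effective_throughput(err, retry_cost=50)
    print(f"error rate {err:g}: {t:.3f}x of error-free throughput")
# At 1e-4 the loss is ~0.5% -- invisible in a game benchmark, but the kind of
# thing internal error counters and a validation regime would flag.
```
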
  16. BRiT

    BRiT (╯°□°)╯
    Moderator Legend Alpha Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    12,285
    Likes Received:
    8,485
    Location:
    Cleveland
  17. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,489
    Likes Received:
    907
    If there are corner cases that require significantly higher voltage than the rest of the chip, wouldn't that point to flaws in the physical design? I mean, ideally, you'd want the entire chip to be optimised to run at exactly the nominal voltage, right? I understand that's probably not possible, but is there reason to believe that AMD's GPUs exhibit more variability on this front than NVIDIA's?
     
  18. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,120
    Likes Received:
    2,867
    Location:
    Well within 3d
    If by nominal voltage you mean what the foundry gives for its process node, that's something like .7-.8 versus the 1.0-1.x volts shown for the load figures. The designs themselves may have a nominal target, although even then there are regions of the chip tuned for their desired voltage, leakage, and performance. Interconnects, memory domain, SRAM, and logic can be targeted for different levels for reliability, speed, and efficiency.
    Different clock and power domains mean signals may need to be buffered or stepped up to different levels, where the requirements to make the crossing depend on how far apart the states are from one side to the other.

    Then with DVFS, where one of the dynamically variable elements is voltage, there's a complex set of changing conditions for the parts of the chip that are operating together. Device variation can affect things as well, influencing the safety margin of a given circuit, and elements of the system can be sensitive to mismatches with other circuits or can suffer from aging effects, as SRAM does.
    Then there are dynamic behaviors that cannot be readily predicted, like spikes in power demand when many neighboring units issue a large number of heavy operations at once, or disruption from events like other regions coming out of gated states. Localized behaviors can take voltages closer to their thresholds, and those thresholds are subject to statistical variation.

    As for whether AMD or Nvidia suffers from this more, I haven't seen a wide-scale attempt to tease it out. I know of at least anecdotal reports of Nvidia chips being undervolted and benefiting. It seems as if some architectural choices could influence how much hardware is running in parallel, and how effectively transients can be mitigated across the clock/voltage range. Perhaps one perverse scenario is that Nvidia's chips have a reduced need to drive voltages higher to overcome wire delay, and already operate closer to the lower limits of the process, where they generally can go no lower or lack sufficient compensation methods to stably undervolt to the same degree.
     
    Lightman and Alexko like this.
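
    A toy Monte Carlo of the margin argument: a voltage that looks fine under steady load can still dip below what a worst-case path needs during transient droop, given device variation. The droop range and the Vmin distribution are invented numbers, not characterization data:

```python
# Toy Monte Carlo: does (supply voltage - transient droop) stay above the
# voltage a marginal path actually needs? Droop range and Vmin spread invented.
import random

random.seed(1)

def marginal_event_rate(supply_v: float, trials: int = 100_000) -> float:
    under = 0
    for _ in range(trials):
        droop = random.uniform(0.00, 0.08)     # transient di/dt + IR droop (V)
        vmin  = random.gauss(1.00, 0.02)       # what this path needs right now (V)
        if supply_v - droop < vmin:
            under += 1
    return under / trials

for v in (1.15, 1.10, 1.05):
    print(f"{v:.2f} V: ~{marginal_event_rate(v):.2%} of transient events under margin")
```

    It's the tail events, not the average, that the guardband (or clock stretching) has to absorb, which is why a blanket undervolt can look solid for hours in one workload and still trip in another.
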
  19. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,720
    Likes Received:
    2,460
    In my experience, undervolted AMD GPUs often exhibit artifacts or lockups in certain workloads or game scenes, but because no one bothers to proof-test these cards across all scenarios, the anecdotal consensus (especially among miners) is that undervolting = a better-performing, more efficient card. Which is a bit naive, considering it's the silicon lottery to begin with, and the card was only really heavily tested in one workload (mining).
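
    The methodological fix for that is boringly simple: only accept an undervolt that passes every workload you care about, not just the one you mine with. A sketch of such a sweep; run_and_check() is a stand-in for whatever stress-and-verify step you actually use, not a real API:

```python
# Undervolt sweep that only accepts a setting if *every* workload passes.
# run_and_check() is a placeholder for a real stress-and-verify step
# (game capture replay, compute kernel with known-good output, etc.).
WORKLOADS = ["mining_kernel", "game_scene_a", "game_scene_b", "compute_fft"]

def run_and_check(workload: str, millivolts: int) -> bool:
    """Hypothetical: run the workload at the given core voltage, verify output."""
    raise NotImplementedError

def lowest_stable_voltage(start_mv: int = 1150, stop_mv: int = 1000,
                          step_mv: int = 25) -> int:
    """Walk the voltage down, keeping the last setting that passed everything."""
    stable = start_mv
    for mv in range(start_mv - step_mv, stop_mv - 1, -step_mv):
        if all(run_and_check(w, mv) for w in WORKLOADS):
            stable = mv
        else:
            break   # first failure: stop and keep the previous setting
    return stable
```

    Even that only tells you about today's workloads and today's temperatures, which is roughly the gap between a hobbyist's "stable" and a vendor's validation.
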
     
  20. del42sa

    Newcomer

    Joined:
    Jun 29, 2017
    Messages:
    164
    Likes Received:
    82
    GCN
    Undervolting


    name more iconic duo :D
     
    Garrett Weaving likes this.