AMD Radeon VII Announcement and Discussion

Discussion in 'Architecture and Products' started by ToTTenTranz, Jan 9, 2019.

  1. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,998
    Likes Received:
    4,571
    Completely agreed.

    Most of AMD's new GPU launches from the past 5 years seem like the result of a monkey's paw wish.

    Hawaii: good performance at launch, stable drivers, but the worst cooler in the world, which overshadowed all the positives.
    Fiji: good performance at launch, stable drivers, good cooling solutions, but too little VRAM, which hurt its long-term performance.
    Polaris 10: good performance, stable drivers, better efficiency, an adequate amount of VRAM, but it pulled too much power from the PCIe slot.
    Vega 20: good performance at a competitive price, finally a high-end solution, buuut the driver implementation for the SMU is broken on Windows, so efficiency and stability go down the toilet.


    AMD's official response to this seems to be terrible at the moment. Apparently they told the guys at Gamers Nexus that everything is final and working as it was supposed to be, so there's no improvement to expect.
     
  2. Geeforcer

    Geeforcer Harmlessly Evil
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,297
    Likes Received:
    464
  3. Frenetic Pony

    Regular Newcomer

    Joined:
    Nov 12, 2011
    Messages:
    332
    Likes Received:
    87
    Maybe that's just what the bin is: an MI50 that's not efficient? Efficiency matters for corporate customers; they're the ones most sensitive to electricity bills. AMD has previously stated that most consumers don't seem to care, and I suppose they'd know best from their sales numbers. Bad efficiency bins are definitely a thing, but I'm just speculating.
     
  4. mczak

    Veteran

    Joined:
    Oct 24, 2002
    Messages:
    3,015
    Likes Received:
    112
    I actually thought efficiency would matter for corporate customers too.
    But clearly that's not the case either. If it actually mattered, AMD would be selling these chips in cards using ~230W, roughly 10% slower but drawing 25% less electricity.
    And yes, I'm somewhat surprised AMD isn't doing exactly that (though indeed, for consumer cards it's not surprising).
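    A quick back-of-the-envelope sketch of that trade-off, assuming dynamic power scales roughly with frequency times voltage squared and that a ~10% clock drop lets the voltage come down by about 8% along the stock curve (the exact figures are assumptions, not measured MI50 numbers):

    [CODE=python]
    # Back-of-the-envelope dynamic power scaling: P ~ f * V^2.
    # The voltage/frequency deltas are illustrative assumptions, not measured values.

    def relative_power(f_scale, v_scale):
        """Dynamic power relative to stock for a given clock and voltage scaling."""
        return f_scale * v_scale ** 2

    stock = relative_power(1.00, 1.00)        # ~300 W class operating point
    lowered = relative_power(0.90, 0.92)      # ~10% lower clock, ~8% lower voltage

    print(f"relative power: {lowered / stock:.2f}")    # ~0.76 -> roughly 25% less power
    print(f"est. board power: {300 * lowered:.0f} W")  # ~229 W, in line with the ~230 W above
    [/CODE]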
     
  5. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,789
    Likes Received:
    2,596
    The pitch was that AMD is competitive with NVIDIA in the pro space, especially against the Tesla V100. Shipping a slower, lower-power bin would undermine that claim.
     
    pharma and digitalwanderer like this.
  6. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,998
    Likes Received:
    4,571
    Apparently the cards can use 230W while keeping performance intact.
    Here's an example of a guy who's running the GPU at 960mV vcore (down from the 1250mV [edit: 1094mV] default):



    I can't find anyone who hasn't been successful at reducing vcore to at least 1V.
    Driver auto-undervolting should have been working on day one. It's absolutely essential for this card.
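    For a rough sense of what that undervolt alone is worth: if core dynamic power scales with V^2 at a fixed clock (a simplification that ignores static leakage and HBM/VRM losses), dropping from 1094mV to 960mV trims roughly a fifth of it:

    [CODE=python]
    # Rough V^2 scaling of core dynamic power at a fixed clock.
    # Ignores static leakage, memory and VRM losses; illustrative only.
    v_stock, v_undervolt = 1.094, 0.960
    savings = 1 - (v_undervolt / v_stock) ** 2
    print(f"approx. core dynamic power saved: {savings:.0%}")  # ~23%
    [/CODE]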
     
    #306 ToTTenTranz, Feb 11, 2019
    Last edited: Feb 11, 2019
    Lightman and BRiT like this.
  7. Rootax

    Veteran Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    1,179
    Likes Received:
    581
    Location:
    France
    Vega 10 had / has massive undervolting potential too. I guess they don't have the tools to do better per-chip voltage tuning (is that the correct word? Like testing GPUs to see what voltage each one needs, case by case, like the Hovis method). Or Vega wasn't designed with that aspect in mind.
     
  8. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,496
    Likes Received:
    910
    Wait, they're pumping 1.25V into 7nm transistors? I'm not surprised that some cards undervolt well!
     
    CaptainGinger likes this.
  9. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,998
    Likes Received:
    4,571
    Sorry I misread. Standard core voltage is 1094mV.
     
    Alexko likes this.
  10. mczak

    Veteran

    Joined:
    Oct 24, 2002
    Messages:
    3,015
    Likes Received:
    112
    Yes, but I'm not talking about undervolting. That might be an interesting discussion as well (could AMD ship those cards with lower voltages while still guaranteeing they work correctly in all conditions?), but it's really a separate one.
    I'm only talking about the fact that AMD is operating these chips way beyond the point where they run efficiently, even on the datacenter-oriented cards. Hence, even with the standard (non-undervolted) voltage/frequency curve, if you drop clocks by 10%, power drops by something like 25%.
    (This wasn't tested with such a card but rather with the Radeon VII: computerbase.de tested with a -20% TDP limit (which is the minimum you can even set...), and it resulted in only a 3% performance drop on average, go figure... Granted, that is biased by the fact that some apps don't actually reach the 300W limit, but even the worst-case performance drop (which definitely went from 300W to 240W) was only 6% - https://www.computerbase.de/2019-02...ns-creed-origins-effizienzvergleich-bei-240-w. On the desktop it sort of makes sense that AMD absolutely wanted to get as much out of the chip as it possibly could by any means, hairdryer or not, but I'd have thought that in the datacenter efficiency would matter a lot more - if you need more performance there, get more cards...)
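    For what it's worth, the perf/W improvement implied by that computerbase result (treating the 300W limit as the baseline draw, which as noted isn't reached in every title, so this is closer to an upper bound):

    [CODE=python]
    # Perf-per-watt implied by the -20% TDP limit result above.
    # Uses the 300 W board limit as the baseline draw, so it's an upper-bound sketch.
    baseline_w, limited_w = 300, 240
    for perf_drop in (0.03, 0.06):  # average and worst-case performance drops
        gain = (1 - perf_drop) * baseline_w / limited_w - 1
        print(f"{perf_drop:.0%} slower -> ~{gain:.0%} better perf/W")
    # roughly: 3% slower -> ~21% better perf/W, 6% slower -> ~17% better perf/W
    [/CODE]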
     
  11. Rootax

    Veteran Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    1,179
    Likes Received:
    581
    Location:
    France
    Maybe datacenter apps don't stress all parts of the GPU at the same time like games do? In my mind, they use the shaders/math a lot, but the TMUs/ROPs? Whereas for gaming everything is active most of the time.
     
  12. Ike Turner

    Veteran Regular

    Joined:
    Jul 30, 2005
    Messages:
    1,884
    Likes Received:
    1,759
    Yup. A GPU running compute tasks at "100% utilization" barely stresses the GPU compared to a game, which uses all parts of the chip.
     
  13. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,930
    Likes Received:
    1,626
    AMD Radeon VII: Benchmarks with current games and (Async) Compute
    February 11, 2019
    https://www.computerbase.de/2019-02/amd-radeon-vii-sonder-test/
     
    Lightman likes this.
  14. bgroovy

    Regular Newcomer

    Joined:
    Oct 15, 2014
    Messages:
    629
    Likes Received:
    493
    Does anyone do these kinds of tests with low-end CPUs? Like, what does a dual-core i3 from a couple of generations back look like with the different APIs? Or a quad-core, first-gen Ryzen?
     
  15. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,998
    Likes Received:
    4,571
    [IMG]

    From here:



    At 986mV the core clock stays above 1790MHz, so performance actually increases while the card stays almost silent (the fans automatically run at ~1550rpm).
    The overclock on his card was achieved with a slight undervolt to 1082mV, which lets the core clock above 1920MHz. He claims a 9-10% performance bump after the GPU and HBM2 overclocks (plus an increased power limit), which in a comparison against the RTX 2080 would be enough to tilt the balance in favor of the VII in most games.


    AMD's "factory overvolts" seem super weird at this point. All the cards I've seen so far can undervolt by at least 100mV.
    The Radeon VII's current reference core voltage hurts everything: temperatures, power consumption, core clocks and noise.
    Why?!





    Who's making the HBM2 stacks? They all seem to clock at 1200MHz with no problems (aside from slightly increased temperatures). Maybe these are Samsung Aquabolt chips?
     
    nnunn and Lightman like this.
  16. Bondrewd

    Regular Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    520
    Likes Received:
    239
    Or newer Hynix HBM.
    Either way these are 2Gbps stacks, probably the very same ones the WX8200 uses.
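    Going by the card's published memory configuration (four stacks on a 4096-bit bus), the bandwidth math for the stock 2Gbps pins and the 1200MHz (2.4Gbps) overclock mentioned above works out as follows:

    [CODE=python]
    # HBM2 bandwidth for the Radeon VII's four-stack, 4096-bit configuration.
    bus_width_bits = 4 * 1024  # four stacks, 1024 bits each

    def bandwidth_gb_s(pin_rate_gbps):
        """Aggregate bandwidth in GB/s for a given per-pin data rate."""
        return bus_width_bits * pin_rate_gbps / 8

    print(f"{bandwidth_gb_s(2.0):.1f} GB/s")  # stock 1000 MHz DDR (2.0 Gbps/pin): 1024.0 GB/s
    print(f"{bandwidth_gb_s(2.4):.1f} GB/s")  # 1200 MHz DDR OC (2.4 Gbps/pin): 1228.8 GB/s
    [/CODE]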
     
  17. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,930
    Likes Received:
    1,626
  18. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,184
    Likes Received:
    1,841
    Location:
    Finland
    pharma, vipa899 and Lightman like this.
  19. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,998
    Likes Received:
    4,571
    It looks like it's not working, and the use cases where Crossfire "works" seem to be where DX12 explicit multiadapter is supported instead.

    But damn, those scaling numbers on explicit multiadapter are ridiculous! Practically 100% scaling on any architecture.
    Here's hoping explicit multiadapter will be broadly supported in the future. If it were right now, two $200 RX 580 8GB cards would deliver performance between the RTX 2080 and the 2080 Ti.
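    To illustrate the arithmetic behind that claim (the relative-performance indices below are rough ballpark assumptions rather than benchmark results, and real multiadapter scaling varies per title):

    [CODE=python]
    # Illustrative scaling math only; the indices are rough assumptions
    # (RTX 2080 = 1.00 baseline), not measured benchmark data.
    rx580_rel = 0.55       # assumed single RX 580 8GB relative to an RTX 2080
    rtx2080ti_rel = 1.25   # assumed RTX 2080 Ti relative to an RTX 2080
    scaling = 0.95         # near-"100%" explicit multiadapter scaling

    two_rx580 = rx580_rel * (1 + scaling)
    print(f"2x RX 580: ~{two_rx580:.2f}x an RTX 2080")         # ~1.07x
    print(f"RTX 2080 Ti for reference: {rtx2080ti_rel:.2f}x")  # so the pair lands in between
    [/CODE]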
     
  20. msia2k75

    Regular Newcomer

    Joined:
    Jul 26, 2005
    Messages:
    326
    Likes Received:
    29

    From the graph, how much exactly is the Radeon VII (at 984mV) consuming?
     