AMD Vega Hardware Reviews

Discussion in 'Architecture and Products' started by ArkeoTP, Jun 30, 2017.

  1. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
    I thought there was some outline of the methods used, but when I tried to find the thread to refresh my memory I did not find posts reporting either a null or an informative outcome.
    I guess the next step is to hope that the open variant of the tool mentioned in the other Vega thread gets made available at some point.
     
  2. roybotnik

    Newcomer

    Joined:
    Jul 12, 2017
    Messages:
    18
    Likes Received:
    14
    It's very easy to accomplish. The power limit needs to be increased by 30-40%, and max fan speed needs to be set to ~3500 RPM. I've done a bunch of benchmarking with mine and have never had a problem getting 1600 MHz and much higher. I don't know why people are having trouble with WattMan; it's been great for me.

    Here's a random firestrike: http://www.3dmark.com/3dm/21034813

    I get anywhere from 29k-31k with my overclocked 1080Ti in the same system, so Vega isn't as hopelessly far away from the Ti as some people think, at least in the ultimate e-peen measuring contest.
     
    RootKit and BRiT like this.
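A rough Linux-side sketch of the adjustment roybotnik describes above, for readers without WattMan: it goes through the amdgpu hwmon sysfs files (power1_cap in microwatts, pwm1 on a 0-255 scale). The card index, the +35% factor, and the fan duty are illustrative assumptions, not values taken from the post.

```python
# Sketch: raise the GPU power cap by ~35% and pin the fan via amdgpu's hwmon sysfs.
# Card index, +35% factor and fan duty are illustrative assumptions; run as root.
from pathlib import Path

def find_hwmon(card: str = "card0") -> Path:
    # amdgpu exposes one hwmon directory per GPU under the device node
    return next(Path(f"/sys/class/drm/{card}/device/hwmon").glob("hwmon*"))

def raise_power_cap(hwmon: Path, factor: float = 1.35) -> float:
    current_uw = int((hwmon / "power1_cap").read_text())   # microwatts
    max_uw = int((hwmon / "power1_cap_max").read_text())
    new_uw = min(int(current_uw * factor), max_uw)          # never exceed the board limit
    (hwmon / "power1_cap").write_text(str(new_uw))
    return new_uw / 1e6                                     # watts, for logging

def set_fan_duty(hwmon: Path, duty_pct: float = 80.0) -> None:
    (hwmon / "pwm1_enable").write_text("1")                 # 1 = manual fan control
    (hwmon / "pwm1").write_text(str(int(255 * duty_pct / 100)))

if __name__ == "__main__":
    hw = find_hwmon()
    print(f"power cap now ~{raise_power_cap(hw):.0f} W")
    set_fan_duty(hw, 80.0)   # check fan1_input afterwards to see the resulting RPM
```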
  4. Cat Merc

    Newcomer

    Joined:
    May 14, 2017
    Messages:
    124
    Likes Received:
    108
    I would consider a card that is 22% slower than a 1080 Ti while using 400W to be pretty hopelessly behind it :neutral:
     
    xpea and homerdog like this.
  5. Rootax

    Veteran Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    1,151
    Likes Received:
    571
    Location:
    France
    The power draw is still crazy with the WC version. I was hoping for better results (because of less leakage?), and for 1600 MHz even within the 300W setting. It's sad.
     
  6. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
    Temperature does have an effect on leakage, but leakage was particularly problematic at 28nm planar. FinFETs do still leak, but it seems like the amount is much less than at the last planar nodes. Those troubles likely played a role in why 20nm planar was skipped by so many.
    The better control over the channel that FinFETs provide has pushed back the level of leakage sensitivity to temperature, and has improved a number of other leakage types, such as static and (within more modest clock ranges) sub-threshold leakage.

    What was rather noticeable for Fury X probably should not be as significant for Vega.
     
    DavidGraham likes this.
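As a rough illustration of the temperature sensitivity discussed above (not a claim about Vega's or Fiji's actual silicon), here is a toy sub-threshold leakage model: off-state current taken as proportional to T^2 * exp(-Vth / (n*k*T)), with the threshold voltage drifting down slightly as the die heats up. All parameter values are assumptions chosen for illustration.

```python
# Toy sub-threshold leakage model: off-state current vs die temperature.
# All parameter values are illustrative assumptions, not measured silicon data.
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def leakage_rel(temp_c: float, vth0: float = 0.35, n: float = 1.4,
                dvth_dt: float = -1.0e-3, t0_c: float = 25.0) -> float:
    """Off-state current up to an arbitrary prefactor (relative units only)."""
    t_k = temp_c + 273.15
    vth = vth0 + dvth_dt * (temp_c - t0_c)      # Vth falls as the die heats up
    return t_k ** 2 * math.exp(-vth / (n * K_B * t_k))

for cold, hot in [(40, 65), (65, 85)]:
    ratio = leakage_rel(hot) / leakage_rel(cold)
    print(f"{cold}C -> {hot}C: off-state leakage grows ~{ratio:.1f}x in this toy model")
```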
  7. Cat Merc

    Newcomer

    Joined:
    May 14, 2017
    Messages:
    124
    Likes Received:
    108
    Clock range does increase by 100 MHz, despite the same 300W limit and power draw. So leakage reduction is still there.
     
  8. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,909
    Likes Received:
    1,607
    So in the PcPer review they don't include water cooled FE out-of-the-box settings, but use overclocked FE settings for all game benchmarks?
     
  9. Cat Merc

    Newcomer

    Joined:
    May 14, 2017
    Messages:
    124
    Likes Received:
    108
    Well, it's out of the box, just after a BIOS switch. Both BIOS(i?) came out of the box. :razz:
     
  10. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,797
    Likes Received:
    2,056
    Location:
    Germany
    Is it?
    Pretty much at the bottom of the page:
    https://www.pcper.com/reviews/Graph...-16GB-Liquid-Cooled-Review/Power-Consumption-

    I am a bit unclear on what they are presenting on the following pages. AC is clear; WC might be misleading because the same clock speeds are given as for AC - typo?
     
  11. T1beriu

    Joined:
    Jun 28, 2017
    Messages:
    7
    Likes Received:
    1
  12. Cat Merc

    Newcomer

    Joined:
    May 14, 2017
    Messages:
    124
    Likes Received:
    108
    It's quite clearly the BIOS switch tested. 350W is the stock setting for the BIOS switch.
     
  13. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,909
    Likes Received:
    1,607
    https://www.pcper.com/reviews/Graph...-16GB-Liquid-Cooled-Review/Power-Consumption-
     
  14. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,797
    Likes Received:
    2,056
    Location:
    Germany
    After reading through the overclocking section, they clearly state that 1712 MHz was the limit they could reach stably - with occasional dips back down to 1637 MHz.
    In the graphs, there are two Vega FE WC entries - one at 1382 MHz and one at 1712 MHz. Since PCPer is giving base clocks for the other cards as well, it seems like they tested the 1382-WC version at stock settings (no mention of BIOS switching) and the 1712-WC version is their manually OC'ed variant, since they do not mention anywhere that clock speeds go up with the 350W BIOS. On the contrary, they state that clocks stay at the same ceiling with the 350W BIOS here:
    https://www.pcper.com/image/view/84108?return=node/68126
     
  15. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
    It would be nice if more factors were controlled for, or measured.
    When Anandtech did a rough check for the temperature-related effect on power consumption, it was between separate runs of the same Crysis workload, one when the CLC was at 40C and one after the coolant loop had reached a steady state of 65C.
    In that modest range, there was a difference of about 18W.

    It's not a precise experiment, but it does keep silicon and board variability confined to one card. The Vega comparison is between two different chips and different treatment of the on-board components in terms of cooling. Not getting to the thermal ceiling that the air-cooled chip gets to may also reduce effects like the HBM2 stacks overheating or localized hot spots causing the GPU to scale back.

    Getting a figure for how much power consumption changes between the cooled loop and the fully warmed loop can give a picture of leakage scaling, with the same caveat as with Fury X that the scaling for the 65-85C range would be unmeasured. It may be a more important omission this time because FinFETs are at least expected to handle this better.

    Going with the 100 MHz difference at an assumed fixed power target, that is something like a 7% upclock. Let's assume the base voltage isn't higher and that the GPU didn't select a higher clock and voltage point (which may be a shaky assumption), and that the step is modest enough that the difference comes down wholly to leakage savings translating into a linear increase in clock. That works out to a roughly 7% clock increase over the 65C-85C range if ASIC power is in the 250W range, similar to the ASIC power Fury X was drawing over the 40C-65C range.
     
    Cat Merc likes this.
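A quick back-of-the-envelope check of the chain of assumptions in the post above, using only the figures already mentioned (an 18W saving between the cool and warm runs, ASIC power in the 250W range, and clock scaling linearly with dynamic power at fixed voltage). The 1440 MHz base clock used to turn the percentage into MHz is an illustrative assumption:

```python
# Back-of-the-envelope: upclock bought by a leakage saving at a fixed power target,
# assuming dynamic power scales linearly with frequency at constant voltage.
def upclock_from_leakage_saving(leakage_saved_w, asic_power_w, base_clock_mhz):
    fraction = leakage_saved_w / asic_power_w   # freed budget spent entirely on frequency
    return fraction, fraction * base_clock_mhz

# 18 W and 250 W come from the discussion above; 1440 MHz is an illustrative base clock.
frac, extra_mhz = upclock_from_leakage_saving(18, 250, 1440)
print(f"~{frac:.1%} headroom -> ~{extra_mhz:.0f} MHz extra at a fixed power target")
# -> ~7.2% headroom -> ~104 MHz, in line with the ~100 MHz / ~7% figure above
```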
  16. roybotnik

    Newcomer

    Joined:
    Jul 12, 2017
    Messages:
    18
    Likes Received:
    14
    It's a 1080Ti with one of the best aftermarket coolers vs a blower with a heatsink from a laptop though. Power draw...yeah it's a problem. But heat isn't and clocks should be much better and more stable with a competent cooler.

    One odd thing I noticed is that setting voltage control to manual in WattMan sets the max voltage to 1.2V by default, which results in better power usage without actually changing anything else. But the card is fully stable at 1.2V even when I overclock it. Also, looking at the drivers, there aren't any new registry settings related to power; I would've expected some new keys or something.
     
  17. BacBeyond

    Newcomer

    Joined:
    Jun 29, 2017
    Messages:
    73
    Likes Received:
    43
  18. homerdog

    homerdog donator of the year
    Legend Veteran Subscriber

    Joined:
    Jul 25, 2008
    Messages:
    6,133
    Likes Received:
    905
    Location:
    still camping with a mauler
    Vega is huge and uses HBM.. why does it suck so badly compared to GP104?
     
  19. Geeforcer

    Geeforcer Harmlessly Evil
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,297
    Likes Received:
    464
    Wiki has the answers:

    "Vega is only about a tenth of the age of the Sun, but since it is 2.1 times as massive its expected lifetime is also one tenth of that of the Sun.
    ...
    Most of the energy produced at Vega's core is generated by the carbon–nitrogen–oxygen cycle (CNO cycle), a nuclear fusion process that combines protons to form helium nuclei through intermediary nuclei of carbon, nitrogen, and oxygen. This process requires a temperature of about 15 million K, which is higher than the core temperature of the Sun, but is less efficient than the Sun's proton-proton chain reaction fusion reaction. "
     
    Kej, Cat Merc, Clukos and 3 others like this.
  20. gamervivek

    Regular Newcomer

    Joined:
    Sep 13, 2008
    Messages:
    715
    Likes Received:
    220
    Location:
    india
    AMD have devoted too much silicon to uses that don't do much for gaming? HBCC and the FP16 doubling. Perhaps the HBM2 controller isn't as small as the one for HBM in Fury X.

    Memory bandwidth is likely the major culprit for performance. The LN2 Fury run was easily 10% ahead of the 1080 while being clocked at 1400 MHz.

    On the power side it looks like AMD need higher voltage for the same clocks. I think Pascal can do 1600 MHz easily at 0.9V, maybe even less, while AMD have to put it at 1.2V to be sure.
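For a rough sense of scale on that voltage gap: dynamic switching power goes roughly as C*V^2*f, so at equal clocks the capacitance and frequency cancel and only the voltage ratio matters. A minimal sketch using just the 0.9V and 1.2V figures quoted above:

```python
# Dynamic power ~ C * V^2 * f, so at equal clocks the ratio reduces to (V_a / V_b)^2.
# 1.2 V and 0.9 V are the figures quoted in the post above.
def dynamic_power_ratio(v_a: float, v_b: float) -> float:
    return (v_a / v_b) ** 2

print(f"1.2 V vs 0.9 V at the same clock -> ~{dynamic_power_ratio(1.2, 0.9):.2f}x dynamic power")  # ~1.78x
```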
     