AMD Vega Hardware Reviews

Discussion in 'Architecture and Products' started by ArkeoTP, Jun 30, 2017.

  1. kalelovil

    Regular

    Joined:
    Sep 8, 2011
    Messages:
    555
    Likes Received:
    93
    GDDR5 and HBM2 have that as part of their specification; HBM1 notably did not. Not all cards implement the full specification, though — for example, the first use of GDDR5, in the HD 4xxx series, didn't.
    http://www.anandtech.com/show/2841/12
    As mentioned above, bandwidth also depends on what clock the Infinity Fabric interconnect is running at. It may not be directly proportional to memory clock.
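    A rough way to picture this (all numbers and bus widths below are illustrative, not measured Vega figures): if the fabric clock doesn't scale with the memory clock, total bandwidth is limited by whichever link is slower.

```python
# Sketch: effective bandwidth limited by the slower of two links.
# Bus widths and clocks here are hypothetical, chosen only to show
# why raising the memory clock alone can stop helping.

def link_bandwidth(clock_mhz, bus_bytes_per_clock):
    """Peak bandwidth in GB/s for a link at a given clock."""
    return clock_mhz * 1e6 * bus_bytes_per_clock / 1e9

def effective_bandwidth(mem_clock_mhz, fabric_clock_mhz,
                        mem_bus_bytes=256, fabric_bus_bytes=512):
    mem_bw = link_bandwidth(mem_clock_mhz, mem_bus_bytes)
    fabric_bw = link_bandwidth(fabric_clock_mhz, fabric_bus_bytes)
    # Once the fabric saturates, further memory overclocking is wasted.
    return min(mem_bw, fabric_bw)
```

    With these toy numbers, doubling the memory clock helps up to the point where the fabric link saturates, after which effective bandwidth is flat.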
     
    #1061 kalelovil, Aug 16, 2017
    Last edited: Aug 16, 2017
    homerdog likes this.
  2. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    7,123
    Likes Received:
    3,183
    Location:
    Pennsylvania
    Infinity Fabric in Vega runs on its own clock; it's not tied to anything else.
     
  3. Mat3

    Newcomer

    Joined:
    Nov 15, 2005
    Messages:
    163
    Likes Received:
    8
  4. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
    From an implementation point of view, how do they differ?

    No matter what they're called, aren't they still just instructions executing on the same shader cores?

    Is it wrong to look at it as coalescing multiple smaller shaders together and getting some optimizations out of it?

    What prevented AMD (and Nvidia) from doing that in the past?
     
    no-X likes this.
  5. 3dcgi

    Veteran Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    2,436
    Likes Received:
    264
    A vertex shader thread doesn't have access to neighboring vertices the way a primitive shader or geometry shader does. Nvidia and AMD have supported a "fast path GS" which gives access to all vertices of a primitive but doesn't support amplification. The primitive shader uses a different data path through the hardware than the VS. The whitepaper says "Primitive shaders will coexist with the standard hardware geometry pipeline rather than replacing it", and it's alluding to this different data path.

    I wouldn't say anything strictly prevented an IHV from doing this before now.

    There's a high level pipeline diagram on page 6 of the whitepaper that shows the concept of reducing the number of shader stages and culling before attribute shading. Coalescing multiple shader stages is part of the Next Generation Geometry Engine and the primitive shader is one of those stages. Technically you can use the GS to implement some of the primitive shader functionality, but the way the GS API is specified is not the best approach. Calling the new stage a primitive shader highlights it combines the functionality of the VS and GS even if the user doesn't specify a GS. A primary feature is giving the shader access to connectivity data without requiring a GS.
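    The "cull before attribute shading" idea from the whitepaper diagram can be sketched in pseudocode-ish Python. All names here are made up; this models the data flow only, not any real API or the actual hardware stage: compute positions cheaply, cull whole primitives (which requires seeing connectivity, like a primitive shader and unlike a plain VS), and only then run the expensive attribute shading for survivors.

```python
# Conceptual sketch of culling before attribute shading.
# All function names are hypothetical illustrations.

def position_only_shade(vertex):
    # Cheap pass: compute only the (here, 2D) position.
    return vertex["position"]

def is_back_facing(p0, p1, p2):
    # Signed-area test on (x, y); non-positive area => back-facing.
    area = (p1[0] - p0[0]) * (p2[1] - p0[1]) \
         - (p2[0] - p0[0]) * (p1[1] - p0[1])
    return area <= 0

def full_attribute_shade(vertex):
    # Expensive pass (normals, UVs, ...), run only for survivors.
    return {**vertex, "shaded": True}

def primitive_shader(vertices, triangles):
    positions = [position_only_shade(v) for v in vertices]
    # The stage sees whole triangles (connectivity), so it can cull
    # primitives before spending work on attribute shading.
    survivors = [t for t in triangles
                 if not is_back_facing(*(positions[i] for i in t))]
    shaded = {i: full_attribute_shade(vertices[i])
              for t in survivors for i in t}
    return survivors, shaded
```

    A classic VS can't do the culling step at all, because each thread sees one vertex in isolation; that's the access-to-connectivity point above.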
     
  6. 3dcgi

    Veteran Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    2,436
    Likes Received:
    264
    Are you referring to memory or engine overclocking? Memory overclocking can lead to errors and more repeated transactions, but NOPs aren't inserted for engine overclocking. Fixed function logic will straight up fail if a signal doesn't complete the path by the next clock edge. Parts of the design with EDC or ECC may be fault tolerant and retry, but most of a design won't consist of this logic.
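    The retry behavior can be modeled roughly: if each transaction fails independently with some probability and gets retransmitted, effective bandwidth falls as the error rate rises, so an aggressive memory overclock can be a net loss. A toy model with hypothetical numbers:

```python
# Toy model of EDC-style retries under memory overclocking.
# With independent per-transaction failure probability p and retry
# until success, expected attempts per good transfer are 1/(1-p),
# so useful bandwidth is raw_bandwidth * (1 - p).

def effective_bandwidth(raw_gbps, error_rate):
    if not 0 <= error_rate < 1:
        raise ValueError("error rate must be in [0, 1)")
    return raw_gbps * (1 - error_rate)

# Hypothetical: +10% clock, but error rate jumps to 15% => net loss.
stock = effective_bandwidth(256.0, 0.0)         # 256.0 GB/s
overclocked = effective_bandwidth(281.6, 0.15)  # ~239.4 GB/s
```

    The engine-clock case is different, as the post says: fixed-function logic has no retry mechanism, so a missed timing path just produces wrong results.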
     
    tinokun likes this.
  7. eastmen

    Legend Subscriber

    Joined:
    Mar 17, 2008
    Messages:
    9,990
    Likes Received:
    1,500
    got a crystal ball ?
     
  8. eastmen

    Legend Subscriber

    Joined:
    Mar 17, 2008
    Messages:
    9,990
    Likes Received:
    1,500

    The 56 is $400 when it releases later this month. The 64 was $500 before selling out, which would put it $40 cheaper than the GeForce 1080 on Amazon.
     
  9. gamervivek

    Regular Newcomer

    Joined:
    Sep 13, 2008
    Messages:
    715
    Likes Received:
    220
    Location:
    india
    Looks like the combination of no MSAA and Ryzen allows Vega 56 to take a big lead over the 1070, even in games where Nvidia cards do better.



    Hardware Unboxed did tests with and without MSAA.

     
    Heinrich4, Dygaza, Scott_Arm and 3 others like this.
  10. Anarchist4000

    Veteran Regular

    Joined:
    May 8, 2004
    Messages:
    1,439
    Likes Received:
    359
    The distinction is just the scope of the shader, as I understand it. It's the same hardware being used, without any special new functions. Maybe that 4SE one in the ISA, but I'm guessing we haven't seen all the instructions yet. A primitive shader would be inclusive of geometry and vertex shaders; ignoring the greater scope, they would be identical. Unless the old way saves AMD some overhead, I'd speculate everything becomes a primitive shader for the purpose of optimization. Coexistence only from the programmer's perspective. From the driver's point of view, the hardware and programmable stages would be the same shader, just with internal functions added to the beginning and end. It's only when rasterization kicks in that another pipeline stage truly begins, as many more threads are spawned. It will be interesting to see where they go with this in regard to dynamic memory allocations in shaders.

    Engine overclocking in a throttled environment: doing nothing as a means to conserve energy in the presence of a thermal or power limit, without adjusting clock speeds. Not as effective as changing the clocks, but situationally useful. Some overclocking tests have resulted in higher core clocks but had no appreciable impact on performance. That's an entirely separate issue from timing constraints.
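    The "idling beats nothing, downclocking beats idling" point can be shown with the standard dynamic-power relation P ≈ C·V²·f (static power ignored; all numbers hypothetical): gating off idle cycles scales power linearly with work, while real DVFS also drops voltage, which scales roughly cubically.

```python
# Toy dynamic-power model: P = C * V^2 * f (leakage ignored).
# "Doing nothing" for a fraction of cycles saves power linearly;
# lowering the clock AND voltage (assumed V ~ f here) saves ~cubically.

def dynamic_power(cap, volts, freq_ghz):
    return cap * volts ** 2 * freq_ghz

def power_with_idling(cap, volts, freq_ghz, idle_fraction):
    # Clock-gated idle cycles: throughput and power both scale
    # by (1 - idle_fraction); voltage stays high.
    return dynamic_power(cap, volts, freq_ghz) * (1 - idle_fraction)

def power_with_dvfs(cap, volts, freq_ghz, scale):
    # Same 'scale' cut in throughput, but voltage drops with frequency.
    return dynamic_power(cap, volts * scale, freq_ghz * scale)
```

    Both give the same 20% throughput cut in the comparison below, but DVFS saves far more power, which is the "not as effective as changing the clocks" observation.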
     
  11. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    3,000
    Likes Received:
    1,687
  12. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    3,000
    Likes Received:
    1,687
    Review: Asus Radeon RX Vega 64 Strix Gaming
    http://hexus.net/tech/reviews/graphics/109078-asus-radeon-rx-vega-64-strix-gaming/
     
  13. gamervivek

    Regular Newcomer

    Joined:
    Sep 13, 2008
    Messages:
    715
    Likes Received:
    220
    Location:
    india
  14. BRiT

    BRiT (╯°□°)╯
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    12,805
    Likes Received:
    9,159
    Location:
    Cleveland
    *AHEM* Please pardon the temporary interruption while we clean up this thread to what it's meant for, HARDWARE REVIEWS...
     
    Cat Merc, Mize, T1beriu and 1 other person like this.
  15. BRiT

    BRiT (╯°□°)╯
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    12,805
    Likes Received:
    9,159
    Location:
    Cleveland
    Kyyla, Cat Merc, Mize and 8 others like this.
  16. DavidC

    Regular

    Joined:
    Sep 26, 2006
    Messages:
    347
    Likes Received:
    24
    I think the drivers are as much a problem for Vega as the hardware.

    Nvidia has always been much better in the driver department. Their features just work.

    Some observations:
    -If you have the iGPU enabled, you can't change clock and voltage settings on Polaris; even with Afterburner or other 3rd-party utilities it's disabled. With Nvidia cards you can.
    -With Nvidia drivers, you don't need to restart the PC. It doesn't ask for anything. I find that quite impressive, since nearly every installation on Windows requires a reboot, even ordinary software!
    -Certain features like AA can be forced on with Nvidia drivers, but work intermittently or not at all in the case of AMD (and Intel). Nvidia drivers really do override application settings. As for the other two vendors, I'm not sure what they are doing.
    -AFAIK Nvidia drivers also use less CPU?
     
  17. seahawk

    Regular

    Joined:
    May 18, 2004
    Messages:
    511
    Likes Received:
    141
    You can add better VRAM management to that list (a 1050 2GB has far fewer problems than an RX 460 2GB).
     
  18. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    3,000
    Likes Received:
    1,687
    AMD Radeon RX Vega 64 review
    http://www.pcgamer.com/amd-radeon-rx-vega-64-review/
     
    xpea and homerdog like this.
  19. Grall

    Grall Invisible Member
    Legend

    Joined:
    Apr 14, 2002
    Messages:
    10,801
    Likes Received:
    2,172
    Location:
    La-la land
    Hm, I don't really know anything about anything really, but from a strictly layperson's perspective it seems likely there's no single cause of Vega's power draw. Would scheduling really account for a hundred-watt increase in dissipation? You'd have to have the world's most overengineered scheduler to hit such numbers, methinks. (Also comparatively the most underperforming one, seeing as Vega, on a much bigger die, doesn't perform a whole lot better than the GTX 1080, or even any better at all in certain titles.)

    My analysis - for what it's worth:
    NV/Jen-Hsun said several years ago now (Kepler era, maybe?) that moving data around a GPU costs more power than doing calculations on that data. He also said around Pascal's release (prolly the reveal conference) that they'd taken a lot of care in laying out the chip and routing data flow, or words to that effect. Spared no expense, most likely (or very little anyhow, as Pascal reportedly cost a billion buckaroos to develop, IIRC.)

    We know AMD doesn't have billions and billions to spend, and what money they do have must be shared with the console SoC and x86 CPU divisions. They've also had issues hiring and retaining highly qualified staff, and from what I've read here and elsewhere, laying out modern microchips is very difficult work, perhaps among the most difficult there is. So don't the chances seem fairly high that Vega isn't laid out nearly as efficiently as it could have been, and that much power is spent/lost just on shuffling bits around the die? The hardware units themselves might also be comparatively inefficiently designed next to NV's chips.

    Anyway, I was quite prepared for Vega being a power-hungry mother. I've been using R9 290X and 390X cards for years already, and they're quite hoggy as it is. I don't really care about power, to be honest; it's not as important to me as raw performance, features, and overall capabilities. On that front, Vega does pretty well. I like that it seems to be the first fully DX12-capable chip ever. It's also a fast chip in absolute terms, even though it is not the fastest (or even consistently faster than the GTX 1080). Price/performance might be hella dodgy, though, if the "stealth" price increase ends up being the new status quo.

    From pre-release numbers of the ASUS Strix board, that will be the new measuring stick for Vega. Forget the default AMD OEM blower cards — they've always sucked really bad. (Well, err...) The Strix is noticeably faster, way quieter, and runs around 10°C cooler, even though it sometimes seems to draw even more power. :p

    Nobody should settle for anything less.
     
    Cat Merc, sonen, homerdog and 3 others like this.
  20. homerdog

    homerdog donator of the year
    Legend Veteran Subscriber

    Joined:
    Jul 25, 2008
    Messages:
    6,179
    Likes Received:
    964
    Location:
    still camping with a mauler
    AMD probably uses way more automated design/layout tools than NVIDIA or Intel. As a result their GPUs appear to be way less efficient at shuffling bits.

    Also aren't modern Intel iGPUs fully DX12 compliant? Not that it really matters; the majority of PC gamers have NVIDIA cards (~64% according to Steam), so devs have to code with that in mind. Kind of a shame but this is reality.

    P.S. does Vega do conservative rasterization? I see a bunch of talk about FP16, but conservative rasterization is much more important (and has been supported by NV since Maxwell).
     
    Grall likes this.

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.