AMD Vega Hardware Reviews

Discussion in 'Architecture and Products' started by ArkeoTP, Jun 30, 2017.

  1. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,607
    Likes Received:
    1,780
    Location:
    Germany
    CAD benchmarks are mostly a reflection of artificial market segmentation; they rarely indicate the inherent performance of a chip.
     
  2. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    8,418
    Likes Received:
    3,060
    The timing of this comparison is interesting, and that conclusion even more so, considering that Raja very recently tweeted the following:

    How many times must AMD officials state that the bulk of Vega's new features were put in to make it more competitive in a larger number of markets, not just gaming?

     
    #1402 ToTTenTranz, Sep 13, 2017
    Last edited: Sep 13, 2017
    no-X likes this.
  3. Picao84

    Regular

    Joined:
    Feb 15, 2010
    Messages:
    975
    Likes Received:
    283
  4. Grall

    Grall Invisible Member
    Legend

    Joined:
    Apr 14, 2002
    Messages:
    10,470
    Likes Received:
    1,801
    Location:
    La-la land
    Where has it been conclusively shown that Vega is competitive across the board with GP102 (running professional drivers), and not just in a few select benchmarks?
     
  5. Picao84

    Regular

    Joined:
    Feb 15, 2010
    Messages:
    975
    Likes Received:
    283
    If you mean professional drivers as in the Quadro P6000, see above. As for selective benchmarks, it's not like there are a lot of them around. They are accepted for the P6000, so why not for Vega FE?
    In any case, compute will always be "selective"; it's not like there is a standard API like DX. There is OpenCL, but its adoption seems too low to be relevant anyway. Most workloads are customised, so it's not easy to compare. NVIDIA has the advantage there with CUDA.
     
  6. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,607
    Likes Received:
    1,780
    Location:
    Germany
    Isn't compute very specialized? I know of very few creative professional companies using their graphics cards for 3D rendering while also running FP64 Monte Carlo simulations while they're out on lunch break.

    Crypto engines seem to be a very strong point of Vega, though.
     
  7. Grall

    Grall Invisible Member
    Legend

    Joined:
    Apr 14, 2002
    Messages:
    10,470
    Likes Received:
    1,801
    Location:
    La-la land
    Yeah, you made your post just as I was making mine. :p I'll go have a look-see.
     
  8. Picao84

    Regular

    Joined:
    Feb 15, 2010
    Messages:
    975
    Likes Received:
    283
    Yes, and that is why I answered Grall that any benchmark will be "selective". For compute, benchmarks can only be illustrative of a chip's strong and weak points. Still, if Vega 64's and the P6000's strong and weak points are not that different from each other, we can at least conclude that they compete with each other.
     
    #1408 Picao84, Sep 13, 2017
    Last edited: Sep 13, 2017
  9. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,918
    Likes Received:
    5,218
    Location:
    Helsinki, Finland
    FP64 compute is only a tiny fraction of all compute workloads. Even professional compute can often be FP32, or even FP16 (neural nets, etc.). All modern AAA games spend a significant fraction of their frame time running compute shaders. Many games are 50% compute, 50% rasterization nowadays, and the share of compute versus rasterization is rising all the time.
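
    Since "2xFP16" keeps coming up in this thread, here's a minimal sketch of what packed fp16 math looks like, written in CUDA for brevity (games would express this in HLSL/GLSL, but the packed-math idea is the same; the kernel name and shape are made up for illustration). One __hfma2 instruction performs a fused multiply-add on two fp16 values at once, which is where the doubled fp16 rate comes from:

        #include <cuda_fp16.h>  // __half2, __hfma2 (needs sm_53 or newer)

        // y[i] = a * x[i] + y[i], two fp16 lanes per instruction.
        __global__ void saxpy_fp16x2(int n, __half2 a, const __half2* x, __half2* y)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n)
                y[i] = __hfma2(a, x[i], y[i]);  // one instruction, two fp16 FMAs
        }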
     
    Heinrich4, Alexko and Lightman like this.
  10. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,607
    Likes Received:
    1,780
    Location:
    Germany
    I mentioned it because that's one of the areas where Vega stands out in that Techgage.com test that was linked. For FP32 compute, apart from crypto/hashing, Vega's performance did not seem particularly outstanding (again: in that linked test).

    What I mean is: in the professional space, you're usually pretty specialized in what you do. If you're in the AI business, you might want high training (FP16) or inferencing (INT8) throughput; if you do seismic exploration, you might want large datasets handled in the first place; if you do fluid dynamics, you want high FP64. If you're in the movie industry, you might want high FP32 throughput and large memory sizes; if you're in the secret service, you might favor fast crypto engines; and if you're in the game business (on the gamer side, not the dev side), you usually care mostly about high framerates and a smooth, hassle-free gaming experience.
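
    To make the inferencing (INT8) case concrete, an illustrative CUDA fragment (the kernel itself is made up; __dp4a is a real intrinsic on sm_61 and newer parts). Each 32-bit int packs four int8 values, and one instruction does a 4-way dot product with 32-bit accumulation, which is why INT8 inferencing throughput gets quoted at up to 4x the FP32 rate:

        // acc[i] += 4-way dot product of the int8 values packed in a[i] and b[i].
        __global__ void dot_int8(int n, const int* a, const int* b, int* acc)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n)
                acc[i] = __dp4a(a[i], b[i], acc[i]);  // four int8 MACs per instruction
        }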
     
    #1410 CarstenS, Sep 13, 2017
    Last edited: Sep 13, 2017
    Kej, Grall and Despoiler like this.
  11. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,673
    Likes Received:
    1,199
    I don't see a vastly different feature set. It's a GPU. It has shader cores that are quite similar. It has texture units and ROPs that are fixed function. It has geometry processing units, timing, large caches, etc.

    The differences are minor: HBCC, 2xFP16, ...

    You can look at facts, such as TFLOPS, texture ops, memory BW, ROPs, etc., and compare those against the competition. And for everything except ROPs, Vega has ballpark the same numbers as, or higher than, a 1080 Ti.

    And that's when you conclude that 1080-level performance is something worthy of heavy criticism.

    There's nothing particularly special about Vega's compute performance: it's exactly where it should be given its raw specs.
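
    For reference, the back-of-envelope raw specs I mean, computed from the commonly quoted boost clocks (approximate, spec-sheet values, not measured). Plain host code, nothing clever:

        #include <cstdio>

        // Theoretical FP32 throughput: ALUs x 2 ops (FMA) x clock.
        static double tflops(int alus, double ghz) { return alus * 2.0 * ghz * 1e-3; }

        int main()
        {
            printf("Vega 64: %.1f TFLOPS, 484 GB/s, 64 ROPs\n", tflops(4096, 1.546));
            printf("1080 Ti: %.1f TFLOPS, 484 GB/s, 88 ROPs\n", tflops(3584, 1.582));
            return 0;
        }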
     
    xpea likes this.
  12. Picao84

    Regular

    Joined:
    Feb 15, 2010
    Messages:
    975
    Likes Received:
    283
    That's looking at it from a very high level and in silos (the whole is not equal to the sum of its parts). If it were as simple as you make it out to be, why did NVIDIA go to the trouble of branching compute products out from graphics products?

    If, as you say, there should be no difference in performance between a part oriented for gaming and one for compute, why did they bother? It's clearly not the case. Seriously aiming for compute with the same chip most likely leads to suboptimal performance in graphics.

    On the other hand, AMD has always had loads of TFLOPS of theoretical compute power without the graphics performance to justify it. It's not anything new.

    Edit: And for all those metrics that are similar to the GTX 1080 Ti... it does compete in compute, which highlights even more that it trades pure gaming performance for compute.

    Edit 2: It would be nice to see Vega going head to head against GP100. Does anyone know of any basis for comparison?
     
    #1412 Picao84, Sep 13, 2017
    Last edited: Sep 13, 2017
  13. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,607
    Likes Received:
    1,780
    Location:
    Germany
    Was that a wise or a necessary decision?
     
  14. Picao84

    Regular

    Joined:
    Feb 15, 2010
    Messages:
    975
    Likes Received:
    283
    I would say necessary. As necessary as Fermi was for NVIDIA. It would be unwise if AMD had the cash to make two chips, but they don't seem to, so there is that. Regardless, I prefer to see a company take a risk for the future than play it safe for the moment.
     
  15. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,673
    Likes Received:
    1,199
    In the case of VEGA, it clearly isn't. And that's an anomaly.

    You ask very hard-hitting questions and I don't have a few hours to research this, so you'll have to make do with just the following bullet points:

    - a gaming GPU has no business having a 1/2 FP64 ratio.
    - a gaming GPU has no business having 4 or 6 high-BW inter-chip links
    - a gaming GPU has no business having ECC everywhere
    - a gaming GPU apparently doesn't need a silly amount of L2 cache, local memory or register files
    - a gaming GPU has no business having 4 HBM interfaces and everything that goes with it

    Another hard-hitting question. Unfortunately, my only answer to this is production cost and thus profitability. I know that's not something AMD is terribly concerned with.

    Clear as mud.

    The disparity has never been as large as with VEGA. Not even close.

    It highlights even more that VEGA has a serious issue with gaming performance. And it's not at all obvious why because previous AMD GPUs had a much more reasonable compute vs gaming performance ratio.

    GP100 would destroy VEGA in FP64 and inter-chip workloads. You know, the stuff that costs area.
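
    To put rough numbers on that last point (public boost-clock FP32 specs times the advertised FP64 ratios; approximate, not measured):

        #include <cstdio>

        int main()
        {
            // {name, FP32 TFLOPS (boost spec), FP64 divisor}
            struct Gpu { const char* name; double fp32; int div; };
            const Gpu gpus[] = {
                {"GP100 (Tesla P100)", 10.6, 2},   // 1/2 rate
                {"Vega 64",            12.7, 16},  // 1/16 rate
                {"GP102 (Titan Xp)",   12.1, 32},  // 1/32 rate
            };
            for (const Gpu& g : gpus)
                printf("%-19s ~%.2f TFLOPS FP64\n", g.name, g.fp32 / g.div);  // 5.30 / 0.79 / 0.38
            return 0;
        }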
     
    Grall likes this.
  16. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,607
    Likes Received:
    1,780
    Location:
    Germany
    So you'd have preferred multiple, more specialized dies?
     
  17. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    1,926
    Likes Received:
    805
    Clearly Vega has a bigger FLOPS count than GP102; any application that depends purely on FLOPS is going to favor Vega, no doubt about that. Cryptography is the same (for obvious reasons), so no surprises there. However, in cases that use mixed workloads it's not going to be the same: in the very review you quoted, Vega trails the Titan Xp by a big margin in 3ds Max, AutoCAD, and the whole SPECviewperf/SPECapc lineup of tests. It's also worth pointing out that Vega is blasted beyond its optimal clock/efficiency curve; in other words, it's pushed beyond its limits just to compete with GP104 in gaming. If Pascal were put under similar conditions, it would pull further ahead.
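
    A rough illustration of what "beyond the efficiency curve" costs, using the usual rule of thumb (dynamic power ~ C * V^2 * f, and near the top of the voltage/frequency curve the voltage rises roughly with frequency, so power grows roughly with f^3). Illustrative arithmetic only, not measured Vega data:

        #include <cstdio>
        #include <cmath>

        int main()
        {
            const double gains[] = {1.05, 1.10, 1.15};  // +5/10/15% clock
            for (double g : gains)
                printf("+%2.0f%% clock -> roughly +%3.0f%% power\n",
                       (g - 1) * 100, (pow(g, 3.0) - 1) * 100);  // ~+16/+33/+52%
            return 0;
        }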
     
  18. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    7,807
    Likes Received:
    2,072
    Location:
    Well within 3d
    When it's claimed that Vega's reduced perf/mm2 is due to the inclusion of high-end features, there are die shots to look at. Are these features added outside of the areas dedicated to core graphics functions?

    AMD's own marketing shot may be decent enough. Take a few nibbles out of the two bottom corners, all of the strip dedicated to the HBC, and some of that un-colored area that might cover functions like solid-state control, and some rough pixel counting gets me maybe ~13%. However, that's potentially generous, since some of those units would be there regardless: the HBCC, for example, performs functions that would be necessary with or without Vega's split focus, and something would be in its place to perform those functions with non-zero area.
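
    The arithmetic behind that range, assuming the commonly quoted ~486 mm^2 for Vega 10 (the ~13% is just my pixel counting above, so take it with salt):

        #include <cstdio>

        int main()
        {
            const double vega10_mm2 = 486.0;  // commonly quoted Vega 10 die size
            const double trimmed    = 0.13;   // rough pixel-count estimate above
            printf("~%.0f mm^2 after trimming\n", vega10_mm2 * (1.0 - trimmed));  // ~423
            return 0;
        }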

    Would a 420-440 mm^2 Vega change much if these unused elements weren't there? There are die shots of GP104 and Polaris if we want to see how much non-graphics silicon there is in a smaller die.
     
  19. Picao84

    Regular

    Joined:
    Feb 15, 2010
    Messages:
    975
    Likes Received:
    283
    Sorry if it wasn't obvious, but the question was rhetorical. It was my rebuttal to your assessment that there isn't much that's different between Vega and GP102. Vega has several of those things, including FP64 at 1/16 vs 1/32 (still incipient performance, but it's there and occupies die space) and a very large L2 cache (you mentioned HBM2 before). The impact these have on Vega's graphics performance is unknown, but it is certainly not zero. They occupy space on the die that could have been used for things that benefit graphics (e.g. more FP32 units).

    That doesn't make sense. If there were no performance difference between graphics-oriented and compute-oriented parts, then it would be more profitable to have a single chip covering both markets (less R&D), as it was before Pascal arrived with GP100 and GP102.

    I don't see that. Vega 64 gets 46% more FP32 performance than Fury X. The performance increase is exactly in line with this.

    Fury X had 22% higher FP32 than the 980 Ti and was 10-15% behind in graphics performance.

    Now Vega 64 is 25-30% behind the 1080 Ti in graphics performance, BUT its advantage in FP32 has shrunk to only 11%. It is quite linear, and reflects the large strides NVIDIA made (a 60% increase!) more than anything inherent to Vega. The deficit ratio is stable; it's just that NVIDIA caught up in FP32, with the corresponding effects on graphics performance.
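
    A quick sanity check on those ratios with commonly quoted spec TFLOPS; everything lands within a couple of points of the figures above. Note the 980 Ti value assumes a typical real-world boost of ~1.25 GHz rather than the reference clock, which is what the ~22% figure implies (all values approximate):

        #include <cstdio>

        int main()
        {
            const double fury_x    = 8.6;   // TFLOPS FP32: 4096 ALUs x 2 x 1.05 GHz
            const double vega_64   = 12.7;  // 4096 ALUs x 2 x ~1.55 GHz boost
            const double gtx980ti  = 7.0;   // 2816 ALUs x 2 x ~1.25 GHz real-world boost
            const double gtx1080ti = 11.3;  // 3584 ALUs x 2 x ~1.58 GHz boost

            printf("Vega 64 vs Fury X:  +%.0f%% FP32\n", (vega_64 / fury_x - 1) * 100);     // ~48%
            printf("Fury X vs 980 Ti:   +%.0f%% FP32\n", (fury_x / gtx980ti - 1) * 100);    // ~23%
            printf("Vega 64 vs 1080 Ti: +%.0f%% FP32\n", (vega_64 / gtx1080ti - 1) * 100);  // ~12%
            printf("1080 Ti vs 980 Ti:  +%.0f%% FP32\n", (gtx1080ti / gtx980ti - 1) * 100); // ~61%
            return 0;
        }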

    I never denied there is an issue. Regarding the ratio, see above. You are mistaken; the ratio is pretty much the same, give or take some margin of error.

    That is my feeling as well, but I would like to see it. Maybe there would be some interesting conclusions.
     
  20. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,205
    Likes Received:
    133
     
    pharma, Grall and Geeforcer like this.
