Why is AMD losing the next gen race to Nvidia?

Discussion in 'Architecture and Products' started by gongo, Aug 18, 2016.

  1. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,930
    Likes Received:
    1,626
    Announcing GPUs for Google Cloud Platform
    Tuesday, November 15, 2016

    Google Cloud will offer AMD FirePro S9300 x2 that supports powerful, GPU-based remote workstations. We'll also offer NVIDIA® Tesla® P100 and K80 GPUs for deep learning, AI and HPC applications that require powerful computation and analysis. GPUs are offered in passthrough mode to provide bare metal performance. Up to 8 GPU dies can be attached per VM instance including custom machine types.
    https://cloudplatform.googleblog.com/2016/11/announcing-GPUs-for-Google-Cloud-Platform.html
     
    Alexko likes this.
  2. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,996
    Likes Received:
    4,570
    HardwareCanucks just revisited the GTX1060 vs. RX480 dispute with quite a large number of titles.

    Shame on AMD for not getting good driver updates on time for the cards' and games' releases, I guess.

    [two benchmark summary charts]
     
  3. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,786
    Likes Received:
    2,585
    While AMD's driver efforts in the latest string of games are commendable, the article would have reached an entirely different conclusion had it also included these new 2016 games: Dishonored 2, Watch_Dogs 2, Shadow Warrior 2, XCOM 2, Civilization 6, Mirror's Edge Catalyst, Quantum Break DX11, and No Man's Sky. It should also have avoided the built-in benchmark of Total War: Warhammer, which gives results that are not representative of actual gameplay.

    The comparison to July 2016 is also flawed: many of the games tested were released after that date, so it is not an apples-to-apples comparison.
     
    I.S.T. and MDolenc like this.
  4. CSI PC

    Veteran Newcomer

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    I do not know why he changed the Quantum Break settings to be so brutal compared to before; that has skewed the DX12 results more towards AMD.
    Original test:
    [benchmark chart]

    Using the latest run as the basis for how things have changed gives a misleading picture. This is from the Dec 5th benchmark:
    [benchmark chart]

    As can be seen, the performance gap with those brutal settings is much larger, and does not necessarily reflect the more real-world settings he used in the past.
    The old test showed a 12% gain for the AMD 480 over the reference 1060; the new test shows a 25.6% gain.
    I do not have the time, but I would want to check whether any of the other games show a performance difference notable enough to indicate a settings change rather than drivers.
    Cheers
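    The size of that shift is easy to sanity-check. Here is a minimal sketch of the relative-gain arithmetic being discussed, using illustrative FPS numbers rather than the review's actual data:

    ```python
    def relative_gain_pct(fps_card_a, fps_card_b):
        """Percentage by which card A outperforms card B."""
        return (fps_card_a / fps_card_b - 1.0) * 100.0

    # Illustrative numbers only: a 12% gap vs. a 25.6% gap
    old_gap = relative_gain_pct(56.0, 50.0)   # RX 480 vs. reference 1060, old settings
    new_gap = relative_gain_pct(62.8, 50.0)   # same cards, "brutal" settings

    print(round(old_gap, 1), round(new_gap, 1))  # → 12.0 25.6
    ```

    The point being that a jump of this size between two runs of the same game usually signals a settings change, not a driver improvement.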
     
  5. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,996
    Likes Received:
    4,570
    OTOH, in Doom's Vulkan test he apparently enabled the AA mode that effectively turns async compute off, which reduces the performance gap between the RX480 and GTX1060.

    I don't know what's different between those tests, but given the massive performance gap in all the results, I'd say some description in those graphs is definitely wrong.
    Looking at other scores, it doesn't look like the first graph is really showing results for 1080p medium settings with scaling off.
     
  6. CSI PC

    Veteran Newcomer

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    Oh man GameGPU.. :)
    Remember HardwareCanucks uses PresentMon on a representative, actually playable area of the game to reflect it as a whole.

    But yeah, needs careful look at the settings overall from past to now.
    The historical settings are fine as long as they continue to be used as it provides a baseline.
    Does Doom still disallow that AA mode in Vulkan?
    Not disagreeing, but I cannot remember the original reason it was like that, or whether it was something they were going to change.
    That said, async compute is nowhere near as impressive as the AMD shader extensions used under Vulkan in Doom, but it would add another 5-8%.
    Cheers
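    For context on the PresentMon methodology mentioned above: the tool logs one row per presented frame, and a capture reduces to a list of present-to-present intervals. A rough sketch of how such a log might be summarized follows; the `MsBetweenPresents` column name matches PresentMon's CSV output, but the rest is an illustration, not HardwareCanucks' actual pipeline:

    ```python
    import csv
    import io
    import statistics

    def summarize_capture(csv_text):
        """Average FPS and a crude 99th-percentile frame time from a frame-time log."""
        rows = csv.DictReader(io.StringIO(csv_text))
        frame_ms = [float(r["MsBetweenPresents"]) for r in rows]
        avg_fps = 1000.0 / statistics.mean(frame_ms)
        # Crude percentile: index into the sorted frame-time list
        p99_ms = sorted(frame_ms)[min(len(frame_ms) - 1, int(0.99 * len(frame_ms)))]
        return avg_fps, p99_ms

    log = "MsBetweenPresents\n" + "\n".join(["16.0"] * 99 + ["33.0"])
    fps, p99 = summarize_capture(log)
    print(round(fps, 1), p99)
    ```

    Summarizing a playable in-game scene this way, rather than a canned benchmark, is exactly why the numbers can differ from built-in benchmark results.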
     
    #226 CSI PC, Dec 6, 2016
    Last edited: Dec 6, 2016
  7. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,786
    Likes Received:
    2,585
    These are the DX11 version scores; at that time, DX11 only boosted fps for NV GPUs. The test video at GameGPU looks identical to HardwareCanucks' description:

    Different CPU or frame logger tool perhaps?

    No, it allows it just fine.


    One month ago, results were consistent:
    1060 3GB Review:
    http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/73737-nvidia-gtx-1060-3gb-review-11.html

    EDITED for glitchy browser.
     
    #227 DavidGraham, Dec 6, 2016
    Last edited: Dec 6, 2016
  8. CSI PC

    Veteran Newcomer

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    Yes, it is the latest comparison article where the changes happened; sorry if I did not make that clear.
    My context regarding the AA is using SMAA with async compute, and whether its status has changed (my assumption is it still does not work).
    Are you saying you can now use SMAA with async compute?
    Cheers
     
    #228 CSI PC, Dec 6, 2016
    Last edited: Dec 6, 2016
  9. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,996
    Likes Received:
    4,570
    id (specifically Tiago Sousa) also promised multi-GPU support for Doom almost 7 months ago and we still have nothing on that.
    Let's just say post-launch development seems to be kind of slow. I guess they're getting ready for a major expansion and that's when they'll introduce new features.
     
    CSI PC likes this.
  10. CSI PC

    Veteran Newcomer

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    I went through more of the article; unfortunately it seems Michael took on way too much for one article and project.
    If you look closely you can see some games are tested in both APIs, such as Hitman (strong for the 480), yet others such as Rise of the Tomb Raider now only have DX12 results, which removes a title that is strong for the 1060 under DX11.
    I would say the data is incomplete. That is not surprising, because this is a massive undertaking, and it probably needs another revisit to clarify a few aspects: testing Doom under Vulkan and validating whether SMAA retains async compute performance, completing the data for games that can run both DX11 and DX12, and, importantly, keeping the game settings consistent with the historical ones (Quantum Break really should use the more realistic settings it did in the past, and at least one other game has what could be a subtle option change as well).

    I like his work, but this needs more time before any conclusion, although that does not stop anyone looking at the individual results to get a feel for what is happening per card.
    Cheers
     
    #230 CSI PC, Dec 6, 2016
    Last edited: Dec 6, 2016
    pharma and DavidGraham like this.
  11. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,930
    Likes Received:
    1,626
    AMD Readies Massive Radeon Software Update with Crimson Relive Drivers – Performance Increases For Radeon GCN Cards Across The Board
    http://wccftech.com/amd-radeon-software-crimson-relive-driver-leak/
     
    RootKit likes this.
  12. homerdog

    homerdog donator of the year
    Legend Veteran Subscriber

    Joined:
    Jul 25, 2008
    Messages:
    6,153
    Likes Received:
    928
    Location:
    still camping with a mauler
    Have they done anything with their compiler?
     
  13. Ethatron

    Regular Subscriber

    Joined:
    Jan 24, 2010
    Messages:
    859
    Likes Received:
    262
    Not his fault; multi-GPU didn't make it into the Vulkan 1.0 specification.
     
    Razor1 and Anarchist4000 like this.
  14. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,786
    Likes Received:
    2,585
    Good point. He also ignored Quantum Break DX11.
    SMAA is still not using async compute to this day; there hasn't been any major update in that regard since the original Vulkan patch.
     
  15. Anarchist4000

    Veteran Regular

    Joined:
    May 8, 2004
    Messages:
    1,439
    Likes Received:
    359
    mGPU seems to be waiting on updates to both DX12 and Vulkan, updates that Sousa likely has access to but no control over the release schedule of.

    Possible, but the release date seems a bit early. I know bridgeman mentioned on another forum that they had the backend for Vega coming in a dev preview for ROCm around Dec 14th, used for the Linux graphics stack as well as Windows. That should be the compiler, but it doesn't seem like any major changes for existing products would come with it. The real changes seem tied to the SM6.0 release and DX12/Vulkan updates, whenever those occur.
     
  16. firstminion

    Newcomer

    Joined:
    Aug 7, 2013
    Messages:
    217
    Likes Received:
    46
    When talking about that DX11 chart he does say "Now the RX 480 basically ties NVIDIA’s card in a world of averages but what you can’t see in the chart above is AMD’s impressive showing in newer titles" in the article.
     
  17. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,996
    Likes Received:
    4,570
    SMAA does disable async compute on AMD cards (not intrinsic functions, though), but it turns out SKYMTL used TSAA instead:
    http://www.hardwarecanucks.com/foru...dated-review-comment-thread-3.html#post836088


    RotR shows better results in DX12 for both cards, so what's the point in including both DX11 and DX12?
    [charts: RotR DX11 vs. DX12]

    In the end, I do think it doesn't make much sense to include both DX11 and DX12 charts. Each game should be tested with the API that better suits the hardware being tested and that's it.
    As long as the IQ is the same, we should be looking at each card's best potential, period. At most, do a little wrap-up on the latest DX12 titles to try and predict future trends, but that's it.
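    The "best API per card" selection proposed above is simple to express. A hypothetical sketch follows; the data structure, game names, and FPS numbers are made up purely for illustration:

    ```python
    # Hypothetical results: results[(game, api)] maps card name -> average FPS
    results = {
        ("RotTR", "DX11"):  {"GTX 1060": 70.0, "RX 480": 62.0},
        ("RotTR", "DX12"):  {"GTX 1060": 74.0, "RX 480": 71.0},
        ("Hitman", "DX11"): {"GTX 1060": 60.0, "RX 480": 63.0},
        ("Hitman", "DX12"): {"GTX 1060": 61.0, "RX 480": 70.0},
    }

    def best_api_per_card(results):
        """For each (game, card) pair, keep the API that produced the highest FPS."""
        best = {}
        for (game, api), scores in results.items():
            for card, fps in scores.items():
                if (game, card) not in best or fps > best[(game, card)][1]:
                    best[(game, card)] = (api, fps)
        return best

    for (game, card), (api, fps) in sorted(best_api_per_card(results).items()):
        print(f"{game} / {card}: {api} at {fps} fps")
    ```

    The trade-off, as the replies below this post argue, is that this answers "which card is faster" but no longer answers "how has each API evolved".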


    It's an ever-increasing and huge possible library and it's impossible to test every single game in every single setting that every single person wants. I for one wonder if 1.5 year-old GTA V is still relevant. Witcher 3 has been updated with DLCs until recently, but GTA V hasn't.
    Regardless, I think the plethora of games is a lot more complete than most reviews out there so his conclusion seems pretty valid to me. I'm counting 6 Gameworks titles and 4 Gaming Evolved titles, out of a total of 13 games. There's little reason to complain about diversity.

    And just as one could complain about not including Tomb Raider's DX11 results, others could complain about not waiting a few days to include AMD's next big driver update, ReLive, which promises substantial performance optimizations.
    But then he'd be waiting another week for a relevant GeForce driver update, and then it'd be Christmas break, and then maybe he should be working on Zen, and then Vega.
    And then he'd be stuck like Anandtech, who have developed a habit of releasing very relevant articles so late that they become a lot less relevant.




    Is there any predicted release date for Vulkan Next?
     
  18. CSI PC

    Veteran Newcomer

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    It is integral to the latest article, which draws a conclusion about both APIs and, importantly, the relative performance between the cards. If you include DX11 for Hitman but exclude DX11 for Rise of the Tomb Raider, where it is better for Nvidia, you skew the end conclusion about DX11.
    By your suggestion it should not be based on comparing relative API performance but on game performance, using whatever works best for each card; however, the scope of the project seems to have been to see how DX11 and DX12 performance has evolved since the launch of these cards.
    We have Quantum Break DX12, but what would happen to the DX11 conclusion if he also included the DX11 Steam version, which runs well?

    A lot of further information is required for a project analysis like this, because the conclusion can be skewed by settings, and by whether one uses options that would never be used in real-world gaming versus those that test features and how optimised they are. We now have features enabled in Quantum Break that radically widen the gap, seem aligned to AMD, and probably would not be used on these cards unless capping performance at 30fps or using FreeSync; yet people would complain if Witcher 3 had HairWorks enabled (which also would not be used on these cards; yes, an extreme example, but it highlights the same factor).
    Some other settings would also need to be carefully considered and validated to confirm they do not do more work on one card than the other. I am not sure anyone has compared AMD and Nvidia in Fallout 4 from the perspective of God Rays with HBAO+ and image quality; in the same way, PureHair does more on an AMD card in Rise of the Tomb Raider, so its impact needs to be exactly defined. Then again, these options are part of the game, and people would use them at certain settings.
    And of course there is the async compute aspect, which needs to be confirmed as working, such as with Doom (2016).

    It is a tough call how to approach such a project and find the right balance; with the scope of what he did, that is pretty difficult, and maybe it was too large.
    Cheers
     
    #238 CSI PC, Dec 7, 2016
    Last edited: Dec 7, 2016
  19. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,786
    Likes Received:
    2,585
    You are comparing two slides with two different sets of visual settings: DX11 was tested at MAX quality with SMAA, while DX12 was tested with no AA and the Very High preset, which reduces the quality of shadows, sun shadows, reflections, and hair simulation. The fact that DX11 managed to get so close to DX12 despite the significantly higher visual quality means it is a good deal faster than DX12.
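    The underlying rule here is that a cross-API delta is only meaningful when the render settings match. A small hypothetical guard illustrating the idea; the field names and numbers are made up, not from the review:

    ```python
    def api_delta(run_a, run_b):
        """FPS delta between two runs, refusing to compare mismatched settings."""
        if run_a["settings"] != run_b["settings"]:
            raise ValueError("settings differ; delta would be meaningless")
        return run_a["fps"] - run_b["fps"]

    # Hypothetical runs mirroring the mismatch described above
    dx11 = {"api": "DX11", "settings": {"preset": "Max", "aa": "SMAA"}, "fps": 68.0}
    dx12 = {"api": "DX12", "settings": {"preset": "Very High", "aa": "Off"}, "fps": 70.0}

    try:
        api_delta(dx12, dx11)
    except ValueError as e:
        print("refused:", e)
    ```

    With a check like this, the Quantum Break DX11-vs-DX12 slides above would simply not produce a comparable number.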
     
    CSI PC, Razor1 and pharma like this.
  20. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,996
    Likes Received:
    4,570


    You're making this comparison out to be a lot more than what it actually is. Here's the author's description of what the article is about:

    A large bunch of games were tested and a large bunch of scores were published. The author claims it's practically a wash between the two cards; his words were that "it'll boil down to whatever is on sale" for customers looking at that price range.
    The author also claims the performance delta of the RX480 got smaller in the same titles it lost when he made the GTX 1060 review 5 months ago. This is undeniable. He also says that, between the two, he would personally go for an RX480 because of the better scores he got with the new API. The importance of that may be subjective, and each reader can take it at face value. If power consumption is the more important matter, the choice would obviously go to the GTX 1060. If getting a FreeSync display is on the horizon, then the choice would obviously go to the RX 480.


    The point of this work was simply to evaluate whether the difference in performance between the RX480 and GTX1060 is the same as when the GTX1060 came out. It's excellent work, at the very least because very few other websites seem to give a crap about incremental driver performance optimizations on cards whose reviews are already out. Many websites even throw several-months-old results into the pile when reviewing new cards.
    That said, I'm not sure I understand why so much scrutiny needs to be applied to what seems to me a fair piece. Is it perfect? No, but no piece ever would be.
    Had he tested 50 games at 4 different resolutions with every possible API, using 4 different cards at 4 different clock levels and 10 different CPUs for each test, some dude on the internet would probably theorize it wasn't "enough data to reach a conclusion" because he had only used DDR4-2400 RAM and there should have been 4 different RAM speeds.
     
    #240 ToTTenTranz, Dec 7, 2016
    Last edited: Dec 7, 2016