No DX12 Software is Suitable for Benchmarking *spawn*

Discussion in 'Architecture and Products' started by trinibwoy, Jun 3, 2016.

  1. CSI PC

    Veteran Newcomer

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    pharma and DavidGraham like this.
  2. DeF

    DeF
    Newcomer

    Joined:
    May 3, 2007
    Messages:
    162
    Likes Received:
    20
    Looking at the results of this test, it seems like the first sentence of the conclusion contradicts what is shown on the charts. Here are some examples:
    [chart: per-game DX12 results, Ryzen vs 7700K with GTX 1060 and RX 480]
    As you can see above, all DX12 games show that Ryzen performs much closer to the 7700K when paired with an RX 480.

    To better illustrate my point, here is a graph with the averages excluding AoTS:E, as it has negative scaling (Ryzen is faster):
    [chart: average 7700K advantage over Ryzen, excluding AoTS:E]

    As you can see from the average, the 7700K is 20.73% faster than Ryzen when paired with a GTX 1060 in DX12 titles, but only 5.89% faster than Ryzen when paired with an RX 480.
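
    For reference, the averages above appear to be simple arithmetic means of the per-game percentage advantage of the 7700K over Ryzen. A minimal C++ sketch of that calculation (the FPS numbers below are placeholders, not the article's data) looks like this:

    #include <iostream>
    #include <utility>
    #include <vector>

    int main() {
        // {fps_7700K, fps_Ryzen} per DX12 title -- placeholder values only
        std::vector<std::pair<double, double>> results = {
            {120.0, 100.0}, {90.0, 80.0}, {75.0, 70.0}
        };
        double sum = 0.0;
        for (const auto& r : results)
            sum += (r.first / r.second - 1.0) * 100.0; // % advantage of the 7700K in this game
        std::cout << "Average 7700K advantage: " << sum / results.size() << "%\n";
        return 0;
    }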

    So looking at the TechReport article's title (Does Ryzen perform better with AMD GPUs?), we can answer like this:
    Ryzen does compare better to the 7700K in DX12 titles when paired with an AMD GPU.

    But instead the author writes in the first paragraph of the conclusion:
    He did not find any evidence? This clearly contradicts the presented data. There is some evidence showing that there are limitations when we pair Ryzen with an Nvidia GPU. It's not decisive, but it's there.
    I am not saying that we should sharpen our pitchforks or start putting our tinfoil hats on. I am saying that reviewers should clearly rethink their testing procedures when reviewing Zen-based cores in the future.
     
    w0lfram, BacBeyond, Malo and 2 others like this.
  3. CSI PC

    Veteran Newcomer

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    Not wrong there.
    Ironically, the video looking to prove Nvidia has issues with DX12 put forward the reason why he went with 720p: his heavily OC'd 480, while performing slightly worse in DX11 at 1080p, was 18% faster than the 1060 in DX11 at 720p in one game, and that was on an Intel CPU, while other games of course did not necessarily behave the same way.
    Cheers
     
    Silent_Buddha likes this.
  4. xEx

    xEx
    Regular Newcomer

    Joined:
    Feb 2, 2012
    Messages:
    939
    Likes Received:
    398
    In the end, reviews are relative and don't necessarily represent the performance that the end user will see unless they play with the same hardware and in the same scenario. It's a guide rather than absolute truth.

    Sent from my HTC One via Tapatalk
     
  5. firstminion

    Newcomer

    Joined:
    Aug 7, 2013
    Messages:
    217
    Likes Received:
    46
    Yes, Adored observed that using Nvidia cards in DX12 benchmarks would negatively impact Ryzen's performance against Core.
     
    BacBeyond likes this.
  6. madyasiwi

    Newcomer

    Joined:
    Oct 7, 2008
    Messages:
    194
    Likes Received:
    32
    Why stop the conclusion at that, though? We can also add this for fairness' sake:
    Ryzen does compare better to the 7700K in DX9, DX11, OpenGL and Vulkan titles when paired with an Nvidia GPU.

    The only definitive conclusion here just reinforces that the Ryzen 1800X is slower than the 7700K in gaming. You can swap Ryzen for any Intel CPU comparably slower than the 7700K @ 4 GHz and the conclusion will still hold true. What's new here? Anybody who has been following the pattern doesn't need hours of analysis, YouTube videos or a bunch of graphs to come up with that.
     
    w0lfram, Razor1 and DavidGraham like this.
  7. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,288
    Location:
    Helsinki, Finland
    I would be interested in seeing GCN2 vs GCN4 CPU utilization benchmarks. AMD has improved their hardware schedulers and command processors in GCN3/GCN4. Most likely they can offload some work to the hardware schedulers. It could simply be that the main render thread load is lower with an AMD GPU. This would help an 8-core CPU more than a 4-core CPU. Of course, before we can draw conclusions like this, we need more benchmarks. Preferably also Intel 8-core vs Intel 4-core, to rule out Ryzen architecture specific differences.

    Just looking at frame times (or fps counters) isn't going to bring enough information. There are tools available that capture and show timelines of CPU activity, including which threads of which application are running on which core. This gives much better insight into multithreaded scalability issues. It is easy to see how well the code is using all the available cores and whether or not there's a single-thread bottleneck.
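
    sebbbi is talking about proper timeline profilers here, but as a very rough outside-the-process check on Windows you can at least enumerate a game's threads and see how the accumulated CPU time is spread across them. A minimal sketch (Toolhelp snapshot plus GetThreadTimes; the process id is a placeholder) might look like this:

    #include <windows.h>
    #include <tlhelp32.h>
    #include <cstdio>

    // Convert a FILETIME (100 ns units) to seconds.
    static double FileTimeToSeconds(const FILETIME& ft) {
        ULARGE_INTEGER v{};
        v.LowPart = ft.dwLowDateTime;
        v.HighPart = ft.dwHighDateTime;
        return v.QuadPart / 1e7;
    }

    int main() {
        const DWORD pid = 1234; // placeholder: the game's process id
        HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
        if (snap == INVALID_HANDLE_VALUE) return 1;

        THREADENTRY32 te{};
        te.dwSize = sizeof(te);
        for (BOOL ok = Thread32First(snap, &te); ok; ok = Thread32Next(snap, &te)) {
            if (te.th32OwnerProcessID != pid) continue;
            HANDLE th = OpenThread(THREAD_QUERY_LIMITED_INFORMATION, FALSE, te.th32ThreadID);
            if (!th) continue;
            FILETIME creationTime, exitTime, kernelTime, userTime;
            if (GetThreadTimes(th, &creationTime, &exitTime, &kernelTime, &userTime)) {
                // Accumulated CPU time is only a coarse proxy for a real scheduling timeline.
                printf("thread %lu: %.2f s CPU\n", te.th32ThreadID,
                       FileTimeToSeconds(kernelTime) + FileTimeToSeconds(userTime));
            }
            CloseHandle(th);
        }
        CloseHandle(snap);
        return 0;
    }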
     
    w0lfram and DavidGraham like this.
  8. CSI PC

    Veteran Newcomer

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    Maybe part of this also comes back to games that had their engines modded from DX11 to DX12; even on 4C/8T and on Intel, those seem to have the most issues for Nvidia in general, compared to, say, AoTS (which runs great these days on Nvidia) and the latest titles such as Sniper Elite 4 or Gears of War 4.

    An aspect that may compound this for some games, and comes back to how the devs handle it, is the GameWorks library/suite, as it has only recently (since January) been updated for DX12 - and to a lesser extent, but still applicable, AMD's library/suite (PureHair, including how it can also be used for the environment, being the best-known example).
    So what do the game devs implement in terms of middleware options and 'bolt-ons' that they carry across from DX11 to DX12 before this latest Nvidia-specific update, and does this also apply, to a lesser extent, to AMD's game-related libraries/suite?
    Some options clearly are not carried over, but some may be; I have never seen any article investigate this tbh, as it means delving pretty deep into the games.
    Cheers
     
    #1008 CSI PC, Apr 7, 2017
    Last edited: Apr 7, 2017
  9. Dygaza

    Newcomer

    Joined:
    Aug 27, 2015
    Messages:
    40
    Likes Received:
    39
    How exactly do applications and games detect a CPU's thread capability? And can we even be sure that application X runs up to 16 threads on both the Intel i7-6900K and Ryzen R7 CPUs? I would check this out personally, but I only have an 8-thread CPU. I do know Ashes runs 16 threads on a 16-thread CPU and 8 threads on an 8-thread CPU, but, for example, Witcher 3 runs 16 threads no matter what CPU you use (most of those threads are very light).

    Would be interested to see this for RotR DX12 especially.
     
    #1009 Dygaza, Apr 7, 2017
    Last edited: Apr 7, 2017
  10. Rodéric

    Rodéric a.k.a. Ingenu
    Moderator Veteran

    Joined:
    Feb 6, 2002
    Messages:
    3,986
    Likes Received:
    847
    Location:
    Planet Earth.
    std::thread::hardware_concurrency() in C++, or GetSystemInfo(...) on Windows, for example.
    (I think there's a more accurate GetLogicalProcessorInformation(...) function, don't have my own code at hand to check how I did it though...)

    -edit-
    GetLogicalProcessorInformation is the only one on Windows that is reliable and even gives you NUMA info, or even better GetLogicalProcessorInformationEx ^^
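
    A minimal sketch of the two calls mentioned above (std::thread::hardware_concurrency() plus GetLogicalProcessorInformationEx with RelationProcessorCore to separate physical cores from logical processors; Windows only, structure layout per the SDK docs):

    #include <windows.h>
    #include <cstdio>
    #include <thread>
    #include <vector>

    int main() {
        // Portable C++: number of logical processors (hardware threads).
        printf("std::thread::hardware_concurrency(): %u\n",
               std::thread::hardware_concurrency());

        // Windows: ask for the required buffer size, then fetch one record per physical core.
        DWORD len = 0;
        GetLogicalProcessorInformationEx(RelationProcessorCore, nullptr, &len);
        std::vector<char> buffer(len);
        if (!GetLogicalProcessorInformationEx(
                RelationProcessorCore,
                reinterpret_cast<PSYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX>(buffer.data()),
                &len))
            return 1;

        int cores = 0, smtCores = 0;
        for (DWORD offset = 0; offset < len;) {
            auto* info = reinterpret_cast<PSYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX>(
                buffer.data() + offset);
            ++cores;
            if (info->Processor.Flags == LTP_PC_SMT) ++smtCores; // this core exposes SMT siblings
            offset += info->Size;
        }
        printf("physical cores: %d (%d SMT-capable)\n", cores, smtCores);
        return 0;
    }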
     
    #1010 Rodéric, Apr 7, 2017
    Last edited: Apr 7, 2017
    Lightman, BRiT, Razor1 and 1 other person like this.
  11. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY

    The problem with that is the contraposition of the argument has to hold true, and if it doesn't then it's all wrong.

    And guess what: if the contraposition held true, nV's cards would hurt AMD Ryzen in all APIs! Nope, it doesn't do that, nor does it do it consistently in DX12. So start making other theories up; we already see BIOS updates giving fewer problems. It could be a platform issue that needs BIOS and microcode fixes....

    There are a ton of things to look at; it's not as simple as running the benchmarks and pointing to nV drivers, because we see so many different things affecting performance on the Ryzen platforms. Not only that, we also need to see how platforms from different manufacturers differ from each other at this point.

    Now does it look like this is something that is going to be easy to do?
     
  12. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,779
    Likes Received:
    2,566
    That's why I stated that Ryzen indeed worked better with AMD GPUs in DX12; however, it worked worse with them in DX11 in several titles. Mind you, this happened with a downclocked 7700K, at 720p and low details (see sebbbi's posts above about different bottlenecks at 720p). A better comparison should be made with high-end GPUs and Ultra details, preferably with an 8-core Intel CPU as well.
     
    pharma, Razor1 and CSI PC like this.
  13. xEx

    xEx
    Regular Newcomer

    Joined:
    Feb 2, 2012
    Messages:
    939
    Likes Received:
    398
    One thing that really surprises me is how users manage to find best-case scenarios for Ryzen that AMD themselves haven't shown. That Tomb Raider bench at 720p would have been a boon for a performance preview prior to launch (I'm talking from the marketing point of view), and some cases would have shown Ryzen in a better light in the initial reviews. AMD really needs to step up the way they launch products and show best-case scenarios to boost sales; they need to find money, and that is an easy way to do it (and let's not talk about the review mobos not handling RAM past 2133...).
     
    pharma and Razor1 like this.
  14. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY

    Well, it's also the pressure they have to deliver in a timely manner; it's been a long time since they have been able to do that, just from a tech/performance point of view. Having so little market share on both the GPU and CPU fronts doesn't do them any favors; devs are just not going to tune for their hardware, firstly without the hardware on the market, and secondly because of the lack of market penetration. Many things go hand in hand, and that is what we are seeing. On the GPU side they did a pretty good job with DX12 and LLAPIs, but that is because nV's focus was still on the older APIs, as most engines and games still have those paths.

    What will the outcome be for AMD in the future? Expect much tougher competition on these new APIs from nV; their entire GDC presentations were on the new APIs. They aren't going to just sit by and watch; it just took them a bit of time to refocus their efforts.
     
  15. DeF

    DeF
    Newcomer

    Joined:
    May 3, 2007
    Messages:
    162
    Likes Received:
    20
     
  16. monstercameron

    Newcomer

    Joined:
    Jan 9, 2013
    Messages:
    127
    Likes Received:
    101
    Why don't you write a program to display this when/if you have free time?
     
  17. DeF

    DeF
    Newcomer

    Joined:
    May 3, 2007
    Messages:
    162
    Likes Received:
    20
    You are correct when we look at DX11: there is a ~2% difference in favour of the Nv GPU, still a far cry from what we can observe in DX12. We do not have data for DX9, OGL and Vulkan, therefore your argument is not valid for that part.
    [chart: DX11 results, Ryzen vs 7700K with GTX 1060 and RX 480]

    [chart]

    If we consider all games and APIs we get this:
    [chart: average across all games and APIs]
    Ryzen does look better vs the 7700K when paired with an AMD GPU. And that's the new and interesting part here.

     
    BacBeyond, Silent_Buddha and Malo like this.
  18. DeF

    DeF
    Newcomer

    Joined:
    May 3, 2007
    Messages:
    162
    Likes Received:
    20
    First of all, I am not making up any theories; I am merely analysing the (limited) data from the article.

    Secondly, if we consider all APIs the contraposition is true: on average, across all tested games and APIs, Ryzen does look better vs the 7700K when paired with an AMD GPU. Obviously the difference is marginal and only DX12 shows big differences, but the point still stands. The author admits that there is something going on by saying:
    You can also point to the DX11 results and claim my assertion is wrong, but I can do the same and point to the DX12 results, hence the "all games and APIs" average in the 3rd graph. And please do not try to move the goalposts by throwing in platforms, BIOSes and other issues. I am talking about the data from the article and the author's conclusion.

    I fully agree with your third paragraph, but I thought I made that clear in my first post (remember the pitchfork/tinfoil hat reference?).

    And lastly, did I mention somewhere that verification of this finding is going to be easy?

    Can someone fix my posts? I was not aware of reply timers and messed them up a bit.
     
    #1018 DeF, Apr 8, 2017
    Last edited: Apr 8, 2017
  19. CSI PC

    Veteran Newcomer

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    You cannot even generalise across all games for a specific API.
    As an example, in a 480 vs 1060 comparison back in December last year, at 1080p with DX12:
    Hitman: the 480 is 12% faster
    Quantum Break: the 480 is 28% faster
    Gears of War 4: the 1060 is 9% faster
    http://www.hardwarecanucks.com/foru...945-gtx-1060-vs-rx-480-updated-review-10.html

    Quantum Break hammers the Nvidia GPU with its global illumination/volumetric lighting, and Nvidia could resolve some of this with their DX11 driver; with Hitman I'm not sure what is going on, but it is based on an older engine crudely forced into being DX12.
    Gears of War 4 is one of the newer-gen DX12 games.
    Sniper Elite 4, another newer-gen DX12 game that AMD uses in their tech demos/events, has the 480 around 5% faster than a reference 1060, with custom models from both closing the gap to 2-3% at 1080p.
    https://www.computerbase.de/2017-02/sniper-elite-4-benchmark/
    http://www.pcgameshardware.de/Sniper-Elite-4-Spiel-56795/Tests/Direct-X-12-Benchmark-1220545/

    DX12 is a mess to try and work out trends for in a general way, as we are still in the early stages of the dev cycle, and it was not helped by crude DX11-to-DX12 engine modifications or by games heavily designed around one optimised architecture, such as Quantum Break (just like Fallout 4, a DX11 engine designed for optimised performance around Nvidia).

    Cheers
     
    #1019 CSI PC, Apr 8, 2017
    Last edited: Apr 8, 2017
  20. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY

    I wasn't saying you were; I'm saying we have to start thinking of other theories, because it doesn't look to be a singular problem.


    There are three contraposition arguments, not just one: one is API, another is platform, and the third is drivers. All have to conclude the same thing for the proposed one to be true.

    He couldn't make an assertion because not all the data fit into what he thought was the original case.

    Well, I was just clarifying; nothing to do with what you stated.
     