No DX12 Software is Suitable for Benchmarking *spawn*

Discussion in 'Architecture and Products' started by trinibwoy, Jun 3, 2016.

  1. monstercameron

    Newcomer

    Joined:
    Jan 9, 2013
    Messages:
    127
    Likes Received:
    101
    I had to log in just to agree with this point!
     
  2. PlanarChaos

    Newcomer

    Joined:
    May 30, 2016
    Messages:
    30
    Likes Received:
    1
    Pineapple and anchovies! :runaway: The horror... the horror...
     
  3. SimBy

    Regular Newcomer

    Joined:
    Jun 21, 2008
    Messages:
    502
    Likes Received:
    135
    AotS supports both DX11 and DX12. You obviously don't wanna use DX11 with AMD hardware.
    It's obvious they don't wanna tell us how much under 150W it is. If they said it's just over 100W, well.

    There are also 4 and 8GB versions. I think I've seen some power consumption numbers for the 4 and 8GB GPUs, and the difference was around 40W. If that's the case, a 4GB version could well be under 100W.

    Can't find those power consumption charts unfortunately, but here's the quote from AT:

    "The current GDDR5 power consumption situation is such that by AMD’s estimate 15-20% of Radeon R9 290X’s (250W TDP) power consumption is for memory."
     
    #43 SimBy, Jun 5, 2016
    Last edited: Jun 5, 2016
  4. xpea

    Regular Newcomer

    Joined:
    Jun 4, 2013
    Messages:
    372
    Likes Received:
    308
    Well, after reading the last pages of this topic, I'm not very happy to see my prediction from last year coming true: DX12 putting so much control in the hands of devs brings a lot of mess, much more than with DX11, where AMD/Nvidia are in charge of a larger part of the pipeline! Devs are under too much schedule pressure from publishers, and it's been ages since we've seen a game released (relatively) bug free. DX12 will amplify the problem, and this is exactly what we see...

    And regarding the AOTS controversy, one interesting comment on reddit by Kontis:
     
  5. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    16,051
    Likes Received:
    5,002
    That comment is only relevant if developers don't bother to take advantage of DX12, which may or may not happen. But considering how closely DX12 resembles the programming done on consoles, I would imagine developers are more likely to adapt to DX12 than to keep shipping PC ports that are radically different from their console counterparts.

    Some developers will transition more quickly than other developers.

    Regards,
    SB
     
  6. OlegSH

    Regular Newcomer

    Joined:
    Jan 10, 2010
    Messages:
    360
    Likes Received:
    252
    It was obvious right after GDC. AOTS's engine does shading in texture space via compute shaders (so no place for delta compression :-(), it does it for two 4096x4096 texture atlases (16.7 million pixels for terrain, 16.7 million pixels for units = tons of overshading), and it uses quite simple shaders for lighting (hence the mediocre look). Such a renderer will always be bandwidth bound (especially with MSAA) and texture bound. There are also no heavy vertex shaders (you don't need skinning and other stuff for simple units), so the geometry pass should be quite light too -- a good use case for async, all in one. It's very, very different from modern deferred shading engines, and it's a perfect fit for Fury with its massive raw bandwidth, texturing speed and async.
    I wonder whether by "FP16 pipe" the developer simply meant an FP16 render target for HDR, since the latest version I've seen looked just like LDR.
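    To put the atlas sizes in perspective, here is the arithmetic spelled out (the 1080p comparison is my own addition for scale, not from the post, and assumes every texel is refreshed every frame):

    # Shading work implied by two full 4096x4096 texture-space atlases,
    # compared with a plain 1080p screen-space pass.
    atlas_texels = 4096 * 4096          # one atlas: ~16.7 million texels
    total_texels = 2 * atlas_texels     # terrain + units atlases
    screen_pixels = 1920 * 1080         # ~2.07 million pixels at 1080p

    print(f"Texture-space texels shaded: {total_texels / 1e6:.1f} M")
    print(f"1080p screen pixels:         {screen_pixels / 1e6:.2f} M")
    print(f"Factor vs 1080p:             {total_texels / screen_pixels:.0f}x")
    # -> ~33.6 M texels vs ~2.1 M pixels, roughly a 16x factor if everything
    #    is reshaded each frame -- hence the bandwidth/texture-bound behaviour.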
     
    xpea and DavidGraham like this.
  7. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    It works too well on Intel IGP, so Nvidia discards it and finds Chapter 2 more relevant...
     
    ToTTenTranz likes this.
  8. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    Aiee, the problem is reviewers think there is one; it uses MLAA for AA, not Nvidia's FXAA (but they have not given numbers without AA or MLAA).
     
  9. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,749
    Likes Received:
    2,516
    It's quite obvious really. AMD's vigorous push to represent their cards in light of the Ashes benchmark will do nothing to improve their position; it will give their products a false image or expectation, which will only serve to hurt them when the benchmark is fully dissected and falls short of those expectations.
    Double standards: Gears of War was discarded because it's broken on AMD hardware. Rise of the Tomb Raider also got discarded for being broken on both vendors.
     
  10. Ryan Smith

    Regular

    Joined:
    Mar 26, 2010
    Messages:
    609
    Likes Received:
    1,036
    Location:
    PCIe x16_1
    Note that the DirectX 12 patch for Warhammer is not out yet. Any current benchmarks are done with a press beta.
     
  11. gamervivek

    Regular Newcomer

    Joined:
    Sep 13, 2008
    Messages:
    715
    Likes Received:
    220
    Location:
    india
    Was? They do include GoW in their "performance leadership in DX12" slide, though a DX9 game rendered in DX12 doesn't look any good.
    [Image: AMD's "performance leadership in DX12" slide]

    kontis sez:

    It's all very really simple indeed!

    The thread needs more leaks.
     
  12. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    16,051
    Likes Received:
    5,002
    Then that would be on Nvidia to bring it back. Why should devs have to make their life more difficult if they don't have to? They have to maintain multiple versions of games anyway. If a DX12 version that very closely matches the console version works, then why wouldn't they use it? Just because Nvidia doesn't want them to, because it makes them look bad?

    People didn't have a problem when developers took advantage of things on Nvidia hardware that made AMD look bad. So they certainly shouldn't have a problem if it's the other way around.

    Regards,
    SB
     
  13. hoom

    Veteran

    Joined:
    Sep 23, 2003
    Messages:
    2,931
    Likes Received:
    485
    Pffft you & your logical consistency :rolleyes: this is GPUs :runaway:
     
    ToTTenTranz likes this.
  14. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,797
    Likes Received:
    2,056
    Location:
    Germany
    Kontis is wrong there on two levels. Fermi's hardware scheduler wasn't anything like an ACE, and it wasn't cut for area reasons but for power reasons.
     
    I.S.T. and RecessionCone like this.
  15. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,907
    Likes Received:
    1,607
    https://forum.beyond3d.com/posts/1919278/
     
  16. CSI PC

    Veteran Newcomer

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    You're really being pedantic on this; did you raise this many arguments regarding Chapter 1 being used? No.
    My context is performance, even if I do not keep reiterating that in every response.
    While the DX12 memory protection affects both, performance for Nvidia has improved a lot relative to AMD.
    Also shown is a performance boost with DX11 for Nvidia over AMD, where this issue does not exist.
    Now, you cannot do a straight comparison between chapters, because one benchmark is inside a large room while the other (Chapter 2) is more open and outside.

    So some may put the case forward that Chapter 2 can be used, as performance is good for Nvidia (both DX12 and DX11); however, to be fair, I said that the game may be broken in Chapter 2 for AMD and so should not be used.
    I say "may" because we do not know how, if at all, AMD's performance is affected by the memory protection or something else, just as we do not know what is tanking Nvidia's performance in Chapter 1.

    Do we really need so many posts discussing this?
    Because it is obvious that in Ch1 AMD has a performance benefit, with results below trend for Nvidia, while in Ch2 Nvidia has a performance benefit that changes the picture a fair bit for AMD.
    Conclusion: maybe we should all accept that using this game as a factual point for comparing DX12 between AMD and Nvidia is not a good idea, because one could present different results and conclusions depending upon which chapter is used.

    Cheers
     
    #56 CSI PC, Jun 5, 2016
    Last edited: Jun 5, 2016
  17. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,797
    Likes Received:
    2,056
    Location:
    Germany
    I didn't really look closely at Hitman, but: Isn't the game using the latest engine iteration for all chapters?
     
  18. CSI PC

    Veteran Newcomer

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    Any chance PCGamesHardware could revisit Chapter 1 then (both DX12 and DX11)?
    Would be great to see if that also has updated performance.
    Problem is nearly all reviews of Chapter 1 were quite a while ago, while only a rare few benchmarked Chapter 2.
    Even PCGamesHardware states a 60% performance boost for Ch2, but did they retest Ch1, and does that boost apply to both chapters since the update or just the latest chapter?
    Thanks
     
    #58 CSI PC, Jun 5, 2016
    Last edited: Jun 5, 2016
  19. PlanarChaos

    Newcomer

    Joined:
    May 30, 2016
    Messages:
    30
    Likes Received:
    1
    Until I read the footnotes, I discount GoW. The rest seems reasonable, although deceptive for Hitman given the issues you guys describe in Chapter 1 versus Chapter 2.
     
  20. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,288
    Location:
    Helsinki, Finland
    Screen space techniques and temporal reprojection do not prevent multi-gpu techniques. AFR doesn't work well with techniques using last frame data, but who would want to use AFR in DX12? AFR adds one extra frame of latency. AFR was fine when developers could not write their own custom load balancing inside a frame. The DX9-11 driver had to automatically split workload between two GPUs and splitting odd/even frames worked best with no extra developer support needed. This was a big compromise.

    I would have included some multi-GPU and async compute thoughts in my Siggraph 2015 presentation if I had had more time. With GPU-driven rendering, it is highly efficient to split the viewport in half and do precise (sub-object) culling for both sides to evenly split the workload among two GPUs. This is much better than AFR.

    Temporal data reuse is a very good technique. Why would anyone want to render every pixel completely from scratch at 60 fps? The change between two sequential images is minimal. Most data can/could be reused to speed up the rendering and/or to improve the quality. It is also a well known (and major) optimization to reuse shadow map data between frames. It's easy to save 50% or more of your shadow map rendering time with it. AFR chokes on this optimization as well. Sure, you can brute force refresh everything every frame if you detect AFR, but this makes no sense. DX12 finally allows developers to use modern optimized techniques and make them work perfectly with multi-GPU. There is no compromise.
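    To make the split-viewport idea concrete, here is a minimal sketch (illustrative only, not sebbbi's actual implementation; it assumes per-object screen-space bounds and per-GPU frame-time queries are available, and the names ObjectBounds, cull_for_half and rebalance are mine):

    from dataclasses import dataclass

    @dataclass
    class ObjectBounds:
        # Conservative screen-space extent of one object, normalized to [0, 1].
        x_min: float
        x_max: float

    def cull_for_half(objects, x_min, x_max):
        """Keep only objects whose screen-space bounds overlap this half."""
        return [o for o in objects if o.x_max >= x_min and o.x_min <= x_max]

    def rebalance(split, left_ms, right_ms, step=0.01):
        """Shrink the slower GPU's half so per-GPU frame times converge."""
        if left_ms > right_ms:
            split -= step
        elif right_ms > left_ms:
            split += step
        return min(max(split, 0.1), 0.9)

    # Per frame (dummy data standing in for real GPU timings and scene objects):
    scene = [ObjectBounds(0.0, 0.3), ObjectBounds(0.25, 0.7), ObjectBounds(0.8, 1.0)]
    split = 0.5
    left_list = cull_for_half(scene, 0.0, split)    # submitted to GPU 0
    right_list = cull_for_half(scene, split, 1.0)   # submitted to GPU 1
    split = rebalance(split, left_ms=8.5, right_ms=7.9)

    Objects straddling the split line are simply drawn by both GPUs, and the split position drifts frame to frame toward whichever side finished faster, which is the load-balancing freedom AFR never gave you.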
     
    dogen, Lightman, Razor1 and 5 others like this.