DX12 Performance Discussion And Analysis Thread

Discussion in 'Rendering Technology and APIs' started by A1xLLcqAgt0qc2RyMz0y, Jul 29, 2015.

  1. lanek

    lanek Veteran


    Sorry for the editing, I was thinking it is a bit too early for this type of question, heh.
     
  2. CSI PC

    CSI PC Veteran

    Good points, my context was more about implementation strategy and controlling performance to some degree that way, rather than competitive performance (Intel are improving here and as you say considerations between integrated and discrete).
    Anyway, AMD and NVIDIA have both been guilty of trying to control performance via implementations that may be said not to be conducive to the other :)
    Having Intel involved is a good thing IMO, although I can see that it is also in their interest: they want to push people onto newer processors and also compete by pushing performance as much as possible for an integrated solution. They should be pushing the eDRAM more IMO, and probably will.

    Cheers
     
  3. CSI PC

    CSI PC Veteran

    More for the developers out there.
    Any idea how much work it would take for NVIDIA to move the core aspects of GameWorks to DX12, and specifically to asynchronous compute/shader related functionality?
    I just wonder if there is more than one possible headache NVIDIA is experiencing in the async compute debate; as far as they are concerned, GameWorks must work going forward (at least several aspects of it, anyway).
    Not saying GameWorks is good/bad here (it does seem cumbersome, to say the least), just wondering if this is also part of the logistics behind NVIDIA being quiet to date on the subject. Kollock of Oxide suggests support for async compute does exist in a driver they have, although it is currently disabled (and again, it is open to interpretation how that support is implemented).
    Mahigan seems to be making assumptions for NVIDIA, so I prefer not to rely on everything he mentions.


    Thanks
     
  4. Razor1

    Razor1 Veteran

    It shouldn't take much time at all. Most of the effects won't take any time, since they have nothing (or very little, e.g. HairWorks) to do with the programmable shader side of things. Things like god rays and AO might take a bit more time; gotta make sure those fences and barriers are in place for their cards ;). But by doing so, that might hurt performance on other IHVs' GPUs, so there would be a need for different paths...
     
  5. CSI PC

    CSI PC Veteran

    Yeah, it was HairWorks, god rays and AO I was thinking of as core.
    I know they would also like certain aspects of PhysX to be core, but I am not sure developers fully buy into it, even when used for specific functions such as fluid/gas.
     
  6. Alessio1989

    Alessio1989 Regular

    Ambient Occlusion morghulis.
     
  7. MistaPi

    MistaPi Regular

    Some say that DX12 saw the light of day because of Mantle (at least as quickly as it did after the Mantle announcement). Is there some proof of when development started on DX12? Wouldn't the feature sets in AMD GCN and Nvidia Kepler and Maxwell, like resource binding and tiled resources, be a strong indication that development started years ago?
     
  8. Alessio1989

    Alessio1989 Regular

    Speaking about public (aka non-NDA) materials:
    Microsoft was planning WDDM 2.x before the Vista launch [WinHEC 2016]. However, the current WDDM 2.0 is different, and the biggest missing feature is page faulting.
    I can also say that Direct3D 12 was subject to various changes between the GDC 2014 announcement and the current version, some of which should be discoverable from public presentations (sorry, but you have to find those changes on your own).
    Finally, some D3D12 rendering capabilities are possible on DirectX 11 via proprietary extensions (https://twitter.com/MyNameIsMJP/status/691460815338098689)
     
  9. CSI PC

    CSI PC Veteran

    I assume Microsoft created a council, with invites to certain people/companies (usually a small group)?
    I know they do this for other technologies/aspects although I do not want to name any, and yeah these are under a tight NDA.
    Cheers
     
  10. Alessio1989

    Alessio1989 Regular

    Well, it is clear that Microsoft never wrote the specifications alone, without involving IHVs and ISVs :p
     
  11. CSI PC

    CSI PC Veteran

    Well, there is a difference between being active in their councils and, say, meetings with IHVs; just saying, because AMD made it sound like Microsoft did not have them involved as much as one would expect with regards to DX12.

    Ironically, if I remember correctly, the Sony team commented that aspects of DX12 are pretty close to their own low-level functionality; although form and function can mean different teams arrive at a very similar solution, or we can just go all-out conspiracy like many do in the comment sections of some other sites :)
    Cheers
     
  12. gamervivek

    gamervivek Regular

    Hitman DX12 benchmarks are here from the two popular German review sites, and it's a repeat of AotS, with NVIDIA cards usually losing performance under DX12 and AMD cards gaining it. The 390X is now besting a 980 Ti. Fury cards struggle to scale over Hawaii again.


    http://www.pcgameshardware.de/Hitman-Spiel-6333/Specials/DirectX-12-Benchmark-Test-1188758/


    http://www.computerbase.de/2016-03/...2/2/#diagramm-hitman-mit-directx-12-3840-2160


    I had speculated that DX12 would help AMD more than NVIDIA, but here it's not even helping NVIDIA. Though PCGH did get a DX12 boost for the 980 Ti.
     
  13. pjbliverpool

    pjbliverpool B3D Scallywag Legend

    AMD's huge performance advantage seems to have more to do with the game itself than with DX12 in this case (although that also helps). Note the 390X is almost as fast as the 980 Ti even under DX11, which is a bit unusual, to say the least.
     
    BRiT and Razor1 like this.
  14. Putas

    Putas Regular

    The strange thing about the PCGH graphs is that the red line does not drop to the stated min-fps value.
    Computerbase's numbers show a solid gain on the FX-3870; maybe Nvidia does not yet know how to put an i7 to good use under DX12.
     
  15. CSI PC

    CSI PC Veteran

    Or maybe it is because many of these games are more finely tuned for the console architecture, which then benefits AMD and affects NVIDIA's architecture revisions differently (albeit subtly).
    This was clearly seen with the alpha version of the latest Doom, which has had no "tuning" for PC, where AMD are well in front in terms of performance.

    I honestly felt NVIDIA really could not afford to let AMD control the console market, but then they cannot offer a GPU-CPU solution, and I doubt Intel would want to help them with a joint venture :)
    We will need to see how well the 12.1 features work for NVIDIA once we are eventually able to compare a game designed for both.
    But then that raises the question of how many developers will implement them.
    Cheers
     
  16. Jawed

    Jawed Legend

    Well, let's hope that maximum D3D 12.1 capabilities are in both Pascal and Polaris...
     
    pharma, Razor1 and pjbliverpool like this.
  17. Ika

    Ika Newcomer Subscriber

  18. 3dilettante

    3dilettante Legend Alpha

    AMD didn't put a *NEW* label on items like the rasterizer and render back-ends, which I think are involved in the Intel implementations of conservative rasterization and rasterizer ordered views. There are other ways to get similar results: playing around a bit with the inputs and outputs of the fixed-function rasterization hardware (rasterizing larger triangles, as patented by Nvidia), or using compute synchronization for ROVs (AMD). Whether the benefits will overcome the negatives with future hardware is unclear. There's evidence that workarounds like AMD's are not acceptable with the hardware as we know it.
     
  19. There are way too many games where a full Fiji can hardly put any distance between itself and a full Hawaii, which suggests that Fiji's substantially higher CU count is mostly standing idle and that the chip may be sitting on a geometry bottleneck.

    What's shocking to me is Pitcairn's performance in that game. It gets half the performance of a Tahiti card, which never used to happen before. Maybe AMD's driver isn't handling the 2GB limitation very well?
     
  20. lanek

    lanek Veteran


    What do consoles have to do with that? ... Do you know how many "console" games run like sh... on AMD PC hardware? What conclusion can we draw from games that run well on consoles (with AMD GPUs and processors) yet run like sh... on AMD PC GPUs?
     