No DX12 Software is Suitable for Benchmarking *spawn*

Discussion in 'Architecture and Products' started by trinibwoy, Jun 3, 2016.

  1. Ryan Smith

    Regular

    Joined:
    Mar 26, 2010
    Messages:
    609
    Likes Received:
    1,036
    Location:
    PCIe x16_1
    Well then if no one minds, here they are as fixed background PNGs.

    [two benchmark charts, originally attached as PNG images]
     
    Kej, Silent_Buddha, Lightman and 4 others like this.
  2. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,780
    Likes Received:
    4,431
    Looks like Crystal Dynamics / Eidos Montréal is using the DX11 path for compatibility purposes only, and they're the first to treat the DX12 path as the primary optimization target in the PC space.

    That's really nice. One would have thought DICE / Frostbite Labs would take the lead on something like this, but it turns out their DX12 path in Battlefield V is still a low-effort result at the moment.

    I wonder if the Guardians of the Galaxy game they're making is using the same Foundation Engine.
     
  3. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    6,952
    Likes Received:
    3,032
    Location:
    Pennsylvania
    There's obviously something broken there, either in the game or in their testing, to get those results.
     
    Ike Turner likes this.
  4. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,718
    Likes Received:
    2,454
    The results are repeated on other sites as well.
    PCGH:
    DX12 is 200% faster than DX11 in their scene
    http://www.pcgameshardware.de/Shado...Shadow-of-the-Tomb-Raider-Benchmarks-1264575/

    PClab:
    DX12 is 200% faster than DX11 in their scene
    https://pclab.pl/art78818-5.html

    ComputerBase:
    DX12 is 12% faster than DX11 in their scene
    https://www.computerbase.de/2018-09...ottr-mit-finalen-treibern-und-patch-1920-1080

    That leaves the DX11 path broken: it can't push fps beyond a certain point. My theory is that the studio had limited resources, which shows in this title. They are also integrating RTX, which requires DX12, so instead of developing two paths they focused on DX12 for RTX. DX11 got shafted; maybe it will improve with a later patch.
     
    pharma, Malo and Lightman like this.
  5. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,883
    Likes Received:
    1,585
    Lightman likes this.
  6. Gelanin

    Newcomer

    Joined:
    Aug 27, 2006
    Messages:
    94
    Likes Received:
    44
    Location:
    Norway
  7. Gelanin

    Newcomer

    Joined:
    Aug 27, 2006
    Messages:
    94
    Likes Received:
    44
    Location:
    Norway
  8. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,780
    Likes Received:
    4,431
    Lightman likes this.
  9. bgroovy

    Regular Newcomer

    Joined:
    Oct 15, 2014
    Messages:
    618
    Likes Received:
    477
    I was able to get a remarkably stable 1080p60 w/4X MSAA on my 4GB RX480 with everything at ultra except textures, shadows and SSAO. I would expect VRAM use to definitely be a problem below that. The benchmark utility in the demo is super nice. It makes it very easy to track performance, make adjustments and see the changes quickly. My only complaint is that benchmark mode overrode the in-game master volume setting for some reason. I had to adjust via the Windows mixer instead...
     
  10. Putas

    Regular Newcomer

    Joined:
    Nov 7, 2004
    Messages:
    387
    Likes Received:
    55
  11. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,414
    Likes Received:
    411
    Location:
    New York
    The 770 falls exactly where expected, close to the 280/X.
     
  12. Putas

    Regular Newcomer

    Joined:
    Nov 7, 2004
    Messages:
    387
    Likes Received:
    55
    True, but compared to newer GeForces (even first-generation Maxwell) they are too far behind.
     
  13. Svensk Viking

    Regular

    Joined:
    Oct 11, 2009
    Messages:
    498
    Likes Received:
    51
    I've seen some users claim that the RTX patch for Battlefield V finally fixed the issues in DX12, though all the big sites seem to be focusing on DXR performance right now.
     
  14. metacore

    Newcomer

    Joined:
    Sep 30, 2011
    Messages:
    107
    Likes Received:
    79
    Jozape, Ext3h, pharma and 2 others like this.
  15. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,718
    Likes Received:
    2,454
    pharma and metacore like this.
  16. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,707
    Likes Received:
    458
    I never understood why these APIs kowtow to multi-user scenarios so hard. I just want to play a single game at a time. Just give the programmer the ability to request GPU memory and let it be his until the process ends, no exceptions; blue-screen immediately if some dumbfuck part of the fucking driver dares to reassign it.
     
  17. Alessio1989

    Regular Newcomer

    Joined:
    Jun 6, 2015
    Messages:
    579
    Likes Received:
    283
    Querying DXGI (or whatever the equivalent is on Linux) once a frame to profile VRAM usage and allocation headroom is not an issue at all. Giving the application total - kernel-level - control of VRAM allocation would mean going back at least to Windows XP and letting the system throw tons of BSODs for tons of reasons: from users who have filled the OS with crapware, to third-party applications (RivaTuner & co., I am speaking mostly about you!), to bad code logic, to plenty of totally legitimate actions and situations that even the best user cannot avoid or control. Moreover, it would totally break OS responsiveness and prevent toys like ReShade from working at all. Finally, you cannot guarantee allocation contiguity anyway. Programmers waited decades for page faulting on GPUs; what they really need and want is just a more efficient and faster page-faulting mechanism.
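
    For illustration, the once-a-frame query described above is a single cheap call on Windows 10 via IDXGIAdapter3::QueryVideoMemoryInfo. A minimal sketch (first adapter only, error handling trimmed, link against dxgi.lib):

    ```cpp
    // Poll the OS-managed VRAM budget for the primary adapter once per frame.
    #include <dxgi1_4.h>
    #include <wrl/client.h>
    #include <cstdio>

    using Microsoft::WRL::ComPtr;

    void PrintVramBudget()
    {
        ComPtr<IDXGIFactory4> factory;
        if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return;

        ComPtr<IDXGIAdapter1> adapter;
        if (factory->EnumAdapters1(0, &adapter) != S_OK) return;

        ComPtr<IDXGIAdapter3> adapter3;
        if (FAILED(adapter.As(&adapter3))) return;

        DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
        if (SUCCEEDED(adapter3->QueryVideoMemoryInfo(
                0,                               // GPU node 0 (single adapter)
                DXGI_MEMORY_SEGMENT_GROUP_LOCAL, // on-board VRAM, not shared RAM
                &info)))
        {
            // Budget is what the OS currently grants this process; it can
            // shrink at any moment, which is exactly the pre-emption point.
            std::printf("VRAM budget: %llu MiB, in use: %llu MiB\n",
                        (unsigned long long)(info.Budget >> 20),
                        (unsigned long long)(info.CurrentUsage >> 20));
        }
    }
    ```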
     
    #1237 Alessio1989, Feb 8, 2019
    Last edited: Feb 8, 2019
  18. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,707
    Likes Received:
    458
    We have fucking gigabytes of memory on GPUs; a game could allocate 95% of it and still leave enough for anything other than games on my system. Nothing on my system needs it except games and trash software I would remove if I knew it was allocating GPU memory.

    It won't BSOD, it will simply say "can't allocate memory, won't start, shut down other programs first". Or you can ask the user to hibernate software tying up the GPU to run the game. There are solutions if you look for them. The only time it would BSOD is if retarded system software reallocated memory when its writers should damn well fucking know they aren't allowed to.

    They can still provide an API for multi-user use cases, but most people simply don't have one. They just want to run a single game and have nothing else GPU-intensive running. Develop for the majority, not the minority.

    PS: game devs would simply have to use two computers and do remote debugging/development, the same as they do on consoles. Not a problem.
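
    To make the proposal concrete, here is roughly what that contract could look like as an API. This is purely a hypothetical sketch; every name below is invented, and nothing like it exists in D3D12 or Vulkan:

    ```cpp
    // Hypothetical "grab it and keep it" VRAM model, as argued above.
    // GpuRegion and gpuReserveExclusive() are invented for illustration.
    #include <cstdint>

    struct GpuRegion { uint64_t gpuAddress; uint64_t bytes; };

    // Ask for an exclusive, non-evictable VRAM region for the process
    // lifetime. Stubbed here; a real backend would live in the driver/OS.
    bool gpuReserveExclusive(uint64_t bytes, GpuRegion* out)
    {
        (void)bytes; (void)out;
        return false; // placeholder: no real implementation exists
    }

    bool StartGame()
    {
        GpuRegion vram;
        const uint64_t want = 6ull << 30; // e.g. 6 GiB of an 8 GiB card

        // The only failure mode is up-front and explicit: no BSOD, no
        // silent eviction later -- just "can't allocate memory, won't
        // start, shut down other programs first".
        if (!gpuReserveExclusive(want, &vram))
            return false;

        // ... the game owns [gpuAddress, gpuAddress + bytes) until exit.
        return true;
    }
    ```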
     
  19. Alessio1989

    Regular Newcomer

    Joined:
    Jun 6, 2015
    Messages:
    579
    Likes Received:
    283
    Games can already allocate 95% of VRAM without issues. OS pre-emption of VRAM is needed because VRAM usage fluctuates for tons of reasons, as I said. Moreover, VRAM usage for the same object/resource varies across architectures and is not guaranteed to be the same on the same architecture across different drivers. Finally, memory fragmentation - external and internal - is impossible to avoid entirely in such applications; you can only try to mitigate it, and APIs like Vulkan and Direct3D 12 already give developers the best tools to do so. All this makes the exact VRAM requirement of a commercial game impossible to compute, and the complexity rises whenever new hardware comes on the market. Giving game developers total and exclusive control of VRAM would be like giving Schettino total control of the USS Enterprise.
    If you want to blame someone, blame the IHVs that provide anything from crappy documentation to no real documentation at all for their drivers and hardware architectures (especially NVIDIA).
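
    On the mitigation tools mentioned above: D3D12 lets an application ask the driver for a resource's exact footprint and then place it at a self-chosen offset inside one large heap, so layout policy lives in the game rather than the driver. A minimal sketch, assuming a valid ID3D12Device* and ID3D12Heap* already exist (the helper name is illustrative):

    ```cpp
    // Explicit suballocation in D3D12: query the real size/alignment,
    // then place the resource at an offset the application controls.
    #include <d3d12.h>

    ID3D12Resource* PlaceBufferInHeap(ID3D12Device* device, ID3D12Heap* heap,
                                      UINT64 heapOffset, UINT64 bufferBytes)
    {
        D3D12_RESOURCE_DESC desc = {};
        desc.Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER;
        desc.Width            = bufferBytes;
        desc.Height           = 1;
        desc.DepthOrArraySize = 1;
        desc.MipLevels        = 1;
        desc.Format           = DXGI_FORMAT_UNKNOWN;
        desc.SampleDesc.Count = 1;
        desc.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

        // The driver reports the actual footprint -- this is the part that
        // varies across architectures and even driver versions.
        D3D12_RESOURCE_ALLOCATION_INFO info =
            device->GetResourceAllocationInfo(0, 1, &desc);

        // The caller's offset must respect the reported alignment.
        if (heapOffset % info.Alignment != 0)
            return nullptr;

        ID3D12Resource* resource = nullptr;
        device->CreatePlacedResource(heap, heapOffset, &desc,
                                     D3D12_RESOURCE_STATE_COMMON,
                                     nullptr, IID_PPV_ARGS(&resource));
        return resource;
    }
    ```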
     
    #1239 Alessio1989, Feb 9, 2019
    Last edited: Feb 9, 2019
    Pixel likes this.
  20. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,707
    Likes Received:
    458
    Tons of reasons which aren't relevant to me. Just expose all the resource pools and the exact storage requirements to the developer at allocation time; make it actually low level. If they can't handle it, they can use your babby API which pulls the rug out from under them at will.

    Fragmentation is unavoidable, but you can let the programmer mitigate it instead of the driver, just like they do in their own process space on the CPU (at least until they start using 64-bit addressing).
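
    For comparison, the CPU-side pattern referenced here - grab one big block, carve it up yourself, fail explicitly when it's full - is a plain bump allocator. A minimal sketch (alignment assumed to be a power of two):

    ```cpp
    // Minimal bump (arena) allocator: application-managed fragmentation
    // over one big CPU block. The same idea maps onto one big GPU heap
    // with placed resources.
    #include <cstdint>
    #include <cstddef>
    #include <cstdlib>

    struct Arena {
        uint8_t* base;     // start of the one big allocation
        size_t   capacity; // total bytes owned
        size_t   offset;   // next free byte
    };

    bool arena_init(Arena* a, size_t bytes)
    {
        a->base = static_cast<uint8_t*>(std::malloc(bytes));
        a->capacity = bytes;
        a->offset = 0;
        return a->base != nullptr;
    }

    // Align-and-bump: the caller decides layout, so "fragmentation" is
    // whatever padding its own allocation order creates.
    void* arena_alloc(Arena* a, size_t bytes, size_t align)
    {
        size_t aligned = (a->offset + align - 1) & ~(align - 1);
        if (aligned + bytes > a->capacity)
            return nullptr; // explicit failure, no eviction behind your back
        a->offset = aligned + bytes;
        return a->base + aligned;
    }

    void arena_release(Arena* a) { std::free(a->base); a->base = nullptr; }
    ```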
     