No DX12 Software is Suitable for Benchmarking *spawn*

Well then if no one minds, here they are as fixed background PNGs.

[Attached benchmark images: xzsY0N0.png, kKFMnAW.png]
 
Looks like Crystal Dynamics / Eidos Montréal is keeping the DX11 path around for compatibility purposes only, and they're the first on PC to treat the DX12 path as the primary one for optimization.

That's really nice. One would have thought DICE / Frostbite Labs would take the lead on something like this, but it turns out their DX12 path in Battlefield V is still a low-effort affair at the moment.

I wonder if the Guardians of the Galaxy game they're making is using the same Foundation Engine.
 
There's obviously something broken there, either in the game or in their testing, to get those results.
 
The results are repeated on other sites as well.
PCGH:
DX12 is 200% faster than DX11 in their scene
http://www.pcgameshardware.de/Shado...Shadow-of-the-Tomb-Raider-Benchmarks-1264575/

PClab:
DX12 is 200% faster than DX11 in their scene
https://pclab.pl/art78818-5.html

ComputerBase:
DX12 is 12% faster than DX11 in their scene
https://www.computerbase.de/2018-09...ottr-mit-finalen-treibern-und-patch-1920-1080

That leaves the DX11 path broken: it can't push fps beyond a certain point. My theory is that the studio had limited resources, which shows in this title. They will also integrate RTX, which requires DX12, so instead of developing two paths they focused on DX12 for RTX, and DX11 got shafted. Maybe it will improve with a later patch.
 
I was able to get a remarkably stable 1080p60 w/4x MSAA on my 4GB RX 480 with everything at ultra except textures, shadows and SSAO. I would expect VRAM use to be a real problem below that. The benchmark utility in the demo is super nice; it makes it very easy to track performance, make adjustments and see the changes quickly. My only complaint is that benchmark mode overrode the in-game master volume setting for some reason, so I had to adjust via the Windows mixer instead...
 
I've seen some users claim that the RTX patch for Battlefield V finally fixed the issues in DX12, though all the big sites seem to be focusing on DXR performance right now.
 
I never understood why these APIs kowtow to multi-user scenarios so hard. I just want to play a single game at a time. Just give the programmer the ability to request GPU memory and let it be his till the process ends, no exceptions; immediately blue screen if some dumbfuck part of the driver dares to reassign it.
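(For what it's worth, the closest thing DXGI currently offers to that is a residency reservation, and it's a hint rather than a hard lock. A minimal sketch, assuming a single-GPU Windows 10 machine; ReserveVram is my name for the helper:

```cpp
#include <dxgi1_4.h>
#include <wrl/client.h>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

// Asks the OS to keep at least `bytes` of dedicated VRAM resident for this
// process. It is a request, not the till-process-exit lock argued for above:
// the OS can still deny or shrink it under memory pressure.
bool ReserveVram(UINT64 bytes)
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return false;

    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter)))  // first GPU
        return false;

    ComPtr<IDXGIAdapter3> adapter3;
    if (FAILED(adapter.As(&adapter3)))                // needs Windows 10
        return false;

    return SUCCEEDED(adapter3->SetVideoMemoryReservation(
        0,                                  // node index (single-GPU)
        DXGI_MEMORY_SEGMENT_GROUP_LOCAL,    // dedicated VRAM segment
        bytes));
}
```

Per the docs the reservation should stay below the AvailableForReservation value reported by QueryVideoMemoryInfo, so even this weaker form has an OS-imposed ceiling.)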
 
Querying DXGI (or whatever the equivalent is on Linux) once a frame to profile VRAM usage and allocation headroom is not an issue at all. Giving the application total, kernel-level control of VRAM allocation means going back at least to Windows XP and letting the system throw tons of BSODs for tons of reasons: from users who have filled the OS with crapware, to third-party applications (RivaTuner & co., I am speaking mostly about you!), to bad code logic, to plenty of totally legitimate actions and situations that even the best user cannot avoid or control. Moreover, it would totally break OS responsiveness and prevent toys like ReShade from working at all.

Finally, you cannot guarantee allocation contiguity anyway. Programmers waited decades for page faulting on GPUs; what they really need and want is just a more efficient and faster page-faulting mechanism.
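For reference, the per-frame budget query on Windows really is a few lines. A minimal sketch using IDXGIAdapter3::QueryVideoMemoryInfo; the adapter pointer is assumed to be obtained at startup, and the 90% trim threshold is an arbitrary example:

```cpp
#include <dxgi1_4.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

// Polls the OS-granted VRAM budget once per frame so the engine can trim
// its streaming pools when usage approaches it, instead of assuming the
// game owns all physical VRAM. `adapter` is an IDXGIAdapter3* obtained
// during initialization.
void PollVramBudget(IDXGIAdapter3* adapter)
{
    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    if (FAILED(adapter->QueryVideoMemoryInfo(
            0,                                // node index (single-GPU)
            DXGI_MEMORY_SEGMENT_GROUP_LOCAL,  // dedicated VRAM segment
            &info)))
        return;

    // Budget is what the OS currently lets this process use; it moves as
    // other applications allocate and free, which is exactly why a fixed
    // till-exit claim on VRAM doesn't fit the driver model.
    if (info.CurrentUsage > info.Budget * 9 / 10)
        std::printf("Near budget: %llu / %llu MiB - trim streaming pools\n",
                    info.CurrentUsage >> 20, info.Budget >> 20);
}
```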
 
We have fucking gigabytes of memory on GPUs; a game could allocate 95% of it and still leave enough for everything else on my system. Nothing on my system needs it except games and trash software I would remove if I knew it was allocating GPU memory.

It won't BSOD, it will simply say "can't allocate memory, won't start, shut down other programs first". Or you can ask the user to hibernate software tying up the GPU before running the game. There are solutions if you look for them. The only time it would BSOD is if some braindead system software reallocated memory, when system software writers should damn well fucking know they aren't allowed to.
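A hypothetical sketch of that fail-fast policy in D3D12 terms; AllocateGameArena is a made-up name, and note the OS can still evict heaps under memory pressure, so this only approximates true exclusive ownership:

```cpp
#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

// Hypothetical fail-fast policy: reserve one big VRAM arena at startup and
// tell the user to close other programs if the allocation fails, instead of
// crashing or stuttering later.
HRESULT AllocateGameArena(ID3D12Device* device, UINT64 bytes,
                          ID3D12Heap** outHeap)
{
    D3D12_HEAP_DESC desc = {};
    desc.SizeInBytes     = bytes;
    desc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT; // device-local VRAM
    desc.Alignment       = 0;                       // 64 KiB default
    desc.Flags           = D3D12_HEAP_FLAG_NONE;    // all resource types;
                                                    // assumes resource heap
                                                    // tier 2 hardware

    HRESULT hr = device->CreateHeap(&desc, IID_PPV_ARGS(outHeap));
    if (hr == E_OUTOFMEMORY)
        std::fprintf(stderr,
            "Can't reserve %llu MiB of VRAM. Shut down other GPU-heavy "
            "programs and restart the game.\n", bytes >> 20);
    return hr;
}
```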

They can still provide an API for multi-user use cases, but most people simply don't have one. They just want to run a single game with nothing else GPU-intensive running. Develop for the majority, not the minority.

PS. game devs would simply have to use two computers and do remote debugging/development, same as they do on consoles. Not a problem.
 
Games can already allocate 95% of VRAM without issues. OS pre-emption of VRAM is needed since VRAM usage fluctuates for tons of reasons, as I said. Moreover, VRAM usage for the same object/resource varies across architectures and is not even guaranteed to be the same on the same architecture with different drivers. Finally, memory fragmentation, external and internal, is impossible to avoid entirely in such an application; you can only try to mitigate it, and APIs like Vulkan and Direct3D 12 already give developers the best tools to do so. All this makes the exact VRAM requirement of a commercial game impossible to compute, and the complexity rises every time new hardware comes on the market. Giving game developers total and exclusive control of VRAM would be like giving Schettino total control of the USS Enterprise.
If you want to blame someone, blame the IHVs, who provide anything from crappy documentation to no real documentation at all of their drivers and hardware architectures (especially NVIDIA).
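The size-variation point is easy to demonstrate against D3D12 itself: you have to ask the driver what a resource will cost, because the answer differs per GPU and per driver version. A minimal sketch, assuming a device created at startup:

```cpp
#include <d3d12.h>
#include <cstdio>

// Shows why an exact VRAM budget can't be precomputed: the driver, not the
// application, decides how much memory a resource really occupies.
// `device` is an ID3D12Device* created during initialization.
void PrintTextureFootprint(ID3D12Device* device)
{
    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension        = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
    desc.Width            = 4096;
    desc.Height           = 4096;
    desc.DepthOrArraySize = 1;
    desc.MipLevels        = 1;
    desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Layout           = D3D12_TEXTURE_LAYOUT_UNKNOWN; // driver picks swizzle

    D3D12_RESOURCE_ALLOCATION_INFO info =
        device->GetResourceAllocationInfo(0 /*node mask*/, 1, &desc);

    // Size and alignment are architecture- and driver-specific: padding,
    // tiling and metadata differ between GPUs and even driver versions.
    std::printf("Driver reports %llu bytes, %llu alignment\n",
                info.SizeInBytes, info.Alignment);
}
```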
 