DX12 Performance Discussion And Analysis Thread

Thanks to DX12 I can soon finally downgrade my screen from 5K to a shiny HDR-capable 1440p or 1080p one, as multi-GPU seems to be dying. This, with a touch of UWP, really does wonders for PC gaming.

It was nice to have you around.
 
In theory, sure; in practice it went downhill fast, with, apart from AOTS, no multi-GPU and not even a word about it. Remedy even said it's too much work:
http://www.overclock.net/t/1597486/g3d-quantum-break-will-not-support-multi-gpus#post_25075720

DX12 GOW, no multi-GPU
DX12 Tomb Raider, no multi-GPU
DX12 Hitman, no multi-GPU
Talos Vulkan, no multi-GPU.
It's the law of physics:
GPU vendors had to push it to sell GPUs;
devs won't take the effort to invest in it.

It's the final blow for mGPU... Thank you, DX12...
 
Talos doesn't have a D3D12 backend, and D3D12 backends have nothing to do with game engine architectures. For most developers, multi-GPU support is a waste of time and money.
 
- Tomb Raider's DX12 implementation is just a little science project and completely useless to the end user IMO. Just use DX11 and you'll lose pretty much nothing.
- Hitman AFAIK is based on the same engine, and unless you're running SLI/Crossfire with an AMD Excavator then you're better off with DX11 too.
- Anything UWP is the laughing stock of the PC industry right now. The thing was obviously not ready for the consumer market and the lack of multi-GPU is the least of its problems. Why someone would buy a UWP game right now is beyond my comprehension. Other than through sheer ignorance, of course.

You think DX12 wouldn't have its teething issues, just like all its ancestors?
AFR is still available in DX12, so down the road, even if developers don't want to waste their time with split-frame rendering, load balancers, etc., they always have dumb AFR to fall back to, which has worked just fine for about a dozen years.

Multi-GPU isn't going anywhere, rest assured.
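For the curious, here's a minimal, hypothetical sketch of what that "dumb AFR" fallback looks like in D3D12's linked-node mode (the driver-linked SLI/CrossFire case, where one device exposes one node per physical GPU). Nothing here is from a shipping engine and error handling is omitted; the gist is simply that frame N submits to node N % nodeCount via the NodeMask fields.

```cpp
// Hypothetical AFR skeleton for a linked-node adapter (sketch only).
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>
using Microsoft::WRL::ComPtr;

struct PerNode {
    ComPtr<ID3D12CommandQueue>     queue;
    ComPtr<ID3D12CommandAllocator> allocator;
};

// One direct queue + allocator per GPU node of the linked adapter.
std::vector<PerNode> CreatePerNodeQueues(ID3D12Device* device)
{
    const UINT nodeCount = device->GetNodeCount();    // 2 for a 2-way SLI/CFX link
    std::vector<PerNode> nodes(nodeCount);
    for (UINT i = 0; i < nodeCount; ++i) {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type     = D3D12_COMMAND_LIST_TYPE_DIRECT;
        desc.NodeMask = 1u << i;                      // bit i selects physical GPU i
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&nodes[i].queue));
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&nodes[i].allocator));
    }
    return nodes;
}

// Per frame: pick the node and record on it with a matching node mask, e.g.
//   device->CreateCommandList(1u << (frame % nodeCount),
//                             D3D12_COMMAND_LIST_TYPE_DIRECT,
//                             allocator, nullptr, IID_PPV_ARGS(&list));
```

The real work (per-node render targets, copying the finished frame to the node that owns the swap chain) is where the driver used to earn its keep; under DX12 that lands on the app.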
 
Lol, you can choose: AFR or PCI-E. One of them must die. And nope, I am not so confident in PCI-E 4.x and NVLink (also remember that even with PCI-E 3.0 most configurations run at x8/x8, sometimes even still on PCI-E 2.x with half the bandwidth), at least if folks want 120/144 Hz, 4K and 10-12 bit HDR. Coupled/shared resources create both access and transfer issues with current technologies. Split-screen techniques only solve the issues of non-trivial rendering paths where AFR fails, not the resource coupling.
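To put a rough number on that worry (my own back-of-the-envelope figures, not measurements, and assuming FP16 RGBA as a stand-in for a 10-bit HDR target): one 4K frame is about 66 MB, so shipping every frame over the link at 120 Hz already wants roughly 8 GB/s, which is about all a PCI-E 3.0 x8 slot can deliver.

```cpp
// Back-of-the-envelope only: link bandwidth needed if a whole rendered frame
// has to cross PCI-E every frame (e.g. AFR copy-back to the display GPU).
#include <cstdio>

int main()
{
    const double width = 3840.0, height = 2160.0;  // 4K
    const double bytes_per_pixel = 8.0;            // assumption: FP16 RGBA for HDR
    const double hz = 120.0;                       // target refresh rate

    const double frame_mb   = width * height * bytes_per_pixel / 1e6;  // ~66 MB
    const double needed_gbs = frame_mb * hz / 1e3;                     // ~8 GB/s

    // Effective PCI-E 3.0 throughput is roughly 1 GB/s per lane.
    std::printf("frame ~%.0f MB, needed ~%.1f GB/s (x8 ~ 7.9 GB/s, x16 ~ 15.8 GB/s)\n",
                frame_mb, needed_gbs);
}
```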
Independent-adapter multi-GPU is the way to go, at least for now: developers can better balance which resources need to be shared and which do not, and it works in both homogeneous and heterogeneous scenarios, with both discrete and UMA devices.
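A hypothetical sketch of what that explicit, independent-adapter route looks like in D3D12 (two separate ID3D12Devices, say a dGPU and an iGPU, with the app deciding which heap is shared). The function name is made up, the extra restrictions on cross-adapter heaps (row-major textures and so on) are glossed over, and error handling is omitted.

```cpp
// Sketch only: share one heap between two independent D3D12 devices.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create a heap on the primary GPU that the secondary GPU can open as well;
// resources placed in it are the app-chosen "shared" set, everything else stays local.
HANDLE ShareHeapAcrossAdapters(ID3D12Device* primary, ID3D12Device* secondary,
                               UINT64 sizeInBytes,
                               ComPtr<ID3D12Heap>& heapOnPrimary,
                               ComPtr<ID3D12Heap>& heapOnSecondary)
{
    D3D12_HEAP_DESC desc = {};
    desc.SizeInBytes     = sizeInBytes;
    desc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
    desc.Flags = D3D12_HEAP_FLAG_SHARED | D3D12_HEAP_FLAG_SHARED_CROSS_ADAPTER;
    primary->CreateHeap(&desc, IID_PPV_ARGS(&heapOnPrimary));

    HANDLE shared = nullptr;
    primary->CreateSharedHandle(heapOnPrimary.Get(), nullptr, GENERIC_ALL,
                                nullptr, &shared);
    secondary->OpenSharedHandle(shared, IID_PPV_ARGS(&heapOnSecondary));
    return shared;
}
```

Synchronisation between the two devices goes through shared fences in much the same way, which is exactly the "developers decide what crosses the link" part.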
 
Lol, you can choose: AFR or PCI-E.
I would choose single-GPU any day until they figure out how not to have me - as a potential whale who bought two expensive graphics cards from one manufacturer - sitting there with every other major release, waiting for a patch, disabling half of my investment, or being told outright "won't work, can't fix".
 
Multi-GPU isn't going anywhere, rest assured.

This way there's always an excuse to be found. It sucks major balls that they don't even think about enabling it in the near future, and we were told DX12 should make mGPU easier to implement.
At least there is DX11 for most of them, for now, but in Quantum Break you can forget that; they won't do it because it's too difficult. Bye-bye playing in 4K or 5K, back to 1080p and be happy.

I dare to bet my hat the situation will stay much worse than at the time of their DX9-11 predecessors, when the good old IHVs could do most of the work to save mGPU. It's all nice in theory, but I don't see devs taking the effort and time to please me playing at 5K; it won't happen. Anybody want to buy a Dell 2715K? It's time to step into the future, play in 1080p and be green.
 
You're being sarcastic, right? Sar-effin'-castic - just forgot the appropriate tags.
No, I'm not. For me, AFR works just fine. I can crank up almost every demanding AAA game at 2560*1600 maxed out with solid 60 FPS V-Synced, which I couldn't do with any single GPU so far (e.g. Witcher 3). And when I get a Freesync monitor, I'll be able to do it at even higher resolutions below 60 FPS and it'll look fantastic nonetheless.
Is it far from the potential of the platform itself? Yes, it is. The PC has always been unoptimized, but save for a few exceptions (within the realm of demanding games) it works nonetheless.

And so what if I have to wait a couple of weeks for the IHV to come up with a multi-gpu profile? It's not like we have a drought of great PC games (or other stuff to do) at the moment.
I just want to play the game as the devs idealized it. I'm not in a hurry to play it before everyone else.

Lol, you can choose: AFR or PCI-E. One of them must die. And nope, I am not so confident in PCI-E and NVLink (also remember that even with PCI-E 3.0 most configurations run at x8/x8, sometimes even still on PCI-E 2.x with half the bandwidth), at least if folks want 120/144 Hz, 4K and 10-12 bit HDR.

My LGA-2011 system actually does x16/x16 PCI-E 3.0, but if I had to choose I'd say goodbye to >75 Hz. I just don't see very high refresh rates as worthwhile at all.
Give me better and more pixels at 50-75 Hz and I'm good. At least for monitors, not VR.


This way there's always an excuse to be found. It sucks major balls that they don't even think about enabling it in the near future, and we were told DX12 should make mGPU easier to implement.
There will always be teams of developers who want to push the platform as much as they can, and others who are more concerned about bringing a game to market, pleasing the publishers and keeping their jobs (which is also very important). It's the same as always.

Regardless, you can expect DICE, CD-Projekt RED, Rockstar and others to make the extra effort to bring explicit multi-adapter to life, eventually.
 
No, I'm not. For me, AFR works just fine. I can crank up almost every demanding AAA game at 2560*1600 maxed out with solid 60 FPS V-Synced, which I couldn't do with any single GPU so far (e.g. Witcher 3).
In that case, I envy you for being unsusceptible to uneven frametimes and the like as well as being much more patient than I am. :)
 
In that case, I envy you for being unsusceptible to uneven frametimes and the like as well as being much more patient than I am. :)

Yeah, same here. I stopped using AFR years ago although I still keep an eye on it to see if any advances to multi-GPU come out. DX12 is the most promising thing on that front since Mantle.

I watch port reports and performance analysis videos, and it's the same almost every time.

SLI doesn't work (not many do port reports or performance analysis videos with AMD cards) or SLI reduces framerates for this game. Wait until Nvidia releases a new driver. It may or may not allow SLI to work correctly. Most times it eventually works, but not always.

Oh, and the other thing often mentioned is that they were able to get SLI to work by using SLI profile "X" in the Nvidia manager; it'll allow SLI but often causes uneven frame times. Hence, they don't recommend it unless you really want SLI to work even if it degrades visual performance.

Regards,
SB
 
AFR is doing fine here for, say, 95 out of the 100 games I have installed. Some need the help of small tricks, or a framerate cap and sometimes triple buffering, but things are running pretty decently here, even at 5K.
Crossfire is great when it works! (For example, I can't believe how smooth Talos DX11 at 5K with 18 custom maximized settings runs, let alone how it looks, especially with some small post-shader injections on top of it. :)

As far as mGPU in the future is concerned, sure, some games will have it, but will it be 30, 50 or 80%? And I see no single GPU in the near future handling 5K, maybe 4K?
Even Epic still hasn't talked about it in their 4.11 build of UE4. At least forced AFR works on most of the DX11 builds used in several games (Kholat, Alone in the Dark, the UT beta, for example), despite some small errors.
 
In that case, I envy you for being unsusceptible to uneven frametimes and the like as well as being much more patient than I am. :)

Please explain how solid V-Synced 60 FPS leads to uneven frametimes.
 
If they're totally and completely vsynced EVERY single image, then yeah, frametimes should be identical. In my experience though, that's not always the case.
That's... what "solid VSynced 60 FPS" means. If I disabled VSync I'd probably get an average over 90 FPS with dips to 70.
 
That's... what "solid VSynced 60 FPS" means. If I disabled VSync I'd probably get an average over 90 FPS with dips to 70.
Right, but the lie of AFR is that you're not actually reducing your latency to the level you'd get with double-buffered "solid 60 vsynced" on a single GPU. You still have an extra frame of lag in there, as if it were only 30. :S So the image on the screen may look smoother (debatable vs. G-Sync though, IMHO), but it doesn't actually help the "feel" much.
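Rough numbers for that point (my own illustration, not a measurement): with 2-way AFR at a vsynced 60, each GPU still spends about two display intervals on its own frame, so the image you're shown is roughly as old as it would be on a single GPU running at 30, even though a new frame arrives every 16.7 ms.

```cpp
// Illustration only: output cadence vs. age of the displayed frame under AFR.
#include <cstdio>

int main()
{
    const double present_ms = 1000.0 / 60.0;   // a new frame shown every ~16.7 ms
    const int    gpus       = 2;

    // Under AFR each GPU works on its frame for roughly `gpus` display intervals,
    // so the presented frame is about that old; a true single-GPU 60 needs only one.
    const double afr_frame_age_ms    = gpus * present_ms;   // ~33 ms
    const double single_frame_age_ms = present_ms;          // ~17 ms

    std::printf("present interval %.1f ms, AFR frame age ~%.0f ms, single-GPU ~%.0f ms\n",
                present_ms, afr_frame_age_ms, single_frame_age_ms);
}
```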
 
What about inter-frame async compute? I wonder how large the additional latency from it is compared with the performance gains.
 