Comparative consideration of DirectX 12 in games *spawn

From a replay I recorded earlier (almost 20 minutes of gameplay): hardware RT is only about an 11% hit compared to software RT with Lumen GI and Reflections on High in Fortnite Chapter 5 Season 1, which is UE 5.4. Epic is actively working to make hardware RT achievable at 60 fps on console. So they are working on it (it has some benefits), and the performance hit is not that big anymore. On my 3080 I get about 80 fps with HW RT at 1440p native TAA.
Path-traced GI @ low (better quality than anything in UE5) in Alan Wake 2 has a 20% hit on Lovelace. And the GPU is still under full load, so there are no pipeline bubbles or processing bottlenecks.
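To put those percentage hits in perspective, it helps to convert fps deltas into frame-time cost, since a fixed-percentage fps hit costs more milliseconds at lower frame rates. A quick sketch using the Fortnite numbers above (the helper name is mine, not from any engine API):

```python
def rt_cost_ms(fps_with_rt: float, hit_fraction: float) -> float:
    """Frame-time cost (ms) of an RT feature, given the fps measured
    with the feature enabled and the fractional fps hit it causes."""
    fps_without_rt = fps_with_rt / (1.0 - hit_fraction)
    return 1000.0 / fps_with_rt - 1000.0 / fps_without_rt

# Fortnite example from above: ~80 fps with HW RT, ~11% hit vs software Lumen.
cost = rt_cost_ms(80.0, 0.11)  # -> 1.375 ms of the 12.5 ms frame budget
```

Roughly 1.4 ms out of a 12.5 ms frame at 80 fps, which is why the hit no longer looks prohibitive for a 60 fps console target.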

There have been exactly ZERO major functionality updates to DXR for OVER 4 years now, and the folks at DF have the audacity to question why developers won't use dead-end APIs, even if Nvidia is the leading PC graphics vendor? Once spring comes around, we'll be at 5 years without any DXR functionality updates ...
Why should there be any? "Flexibility" is the problem here. DXR is good enough for real-time path tracing. Ray tracing is a compute-heavy workload, so making the API more flexible will always cost performance.
But here is the fun fact: even with 100 TFLOPs, Epic can't use a modern GPU to its full potential with their software solution. So I guess we don't need more functions, just a different kind of fixed-function unit.
Here is another fun fact: UE5 runs on Maxwell. Blaming an API from 2018 for an engine targeting 2014 hardware is strange to me.

While DXR may not have progressed much, nVidia has: they provide not only much faster RT hardware (4x triangle intersection tests per RT Core, 6x including higher clock rates since Turing) but also new API functions for their hardware: SER and OMM. And yet none of these advances has been implemented into the main UE5 branch. So no, DXR as an API is not the problem.
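For context on what SER buys: a GPU wavefront loses throughput when neighbouring rays invoke different hit shaders, and Shader Execution Reordering regroups threads by shader before shading so each batch runs coherently. A toy CPU-side sketch of the idea (purely illustrative; real SER happens in hardware via NVAPI HLSL intrinsics, and all names here are mine):

```python
from collections import defaultdict

def reorder_by_shader(rays):
    """Group rays by the hit shader they will invoke, mimicking the
    sorting step SER performs between traversal and shading.
    Each ray is a (ray_id, shader_id) pair."""
    batches = defaultdict(list)
    for ray_id, shader_id in rays:
        batches[shader_id].append(ray_id)
    return dict(batches)

# A divergent wavefront: shaders interleaved ray-to-ray.
rays = [(0, "glass"), (1, "metal"), (2, "glass"), (3, "metal")]
batches = reorder_by_shader(rays)
# -> {"glass": [0, 2], "metal": [1, 3]}
```

After reordering, each batch executes one shader without per-ray divergence, which is exactly the coherence win path tracers with many materials benefit from.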


UE5 supports high fidelity HWRT reflections. I’m sure you know this.
And no UE5 games support it. Robocop would have been a prime example for better reflections.
A missing proper HW RT implementation is only one of the problems. Another is slow performance on nVidia GPUs. Jusant runs at only 60 FPS in 4K on a 4090. Metro Exodus EE, with a better and more compute-heavy GI implementation, gets over 100 FPS. AMD GPUs do not lose much performance going from Metro to Jusant. It is nVidia that gets crippled by an unoptimized application.
 
What if it is optimized... Just not for nVidia hardware.
 
That means it's not optimized ... because ...

If it's optimized for NV hardware but not AMD hardware that means it's optimized ... because ...

:p

Regards,
SB

I think he just meant the ray tracing implementation is designed to have good performance on AMD hardware, not that the code isn't optimized for nvidia.
 
What if it is optimized... Just not for nVidia hardware.
Like DX12? Where nVidia GPUs lost performance because those renderers weren't as optimized as the driver? Sure.

UE5 reminds me of DX12. DXR allows IHVs to do a lot of work behind the scenes; hardware can evolve much faster than the API. This is the way for the PC platform. UE5 has set us back at least five years: now we are going from highly optimized multi-core processing to unoptimized single-core processing. Obviously AMD will lose the least performance, because their hardware RT implementation is the most basic one. Only DXR emulation (like nVidia did for Pascal) is slower.
 