No DX12 Software is Suitable for Benchmarking *spawn*

For what it's worth, I expect RDNA 3 to scale worse than the trendline indicates, simply because register bandwidth restrictions will impact computational throughput.
RDNA3 has lower register bw than RDNA2?
 
"Speed Way uses DirectX Raytracing Tier 1.1 for reflections and global illumination"

I wonder if that's why Navi 21 is substantially faster than 2080Ti, as opposed to normally being around the same performance...
Port Royal uses DXR 1.0, and yet the 6950 XT is 30% faster than the 2080 Ti... it scales with FLOPs, that's all; the RT load in both tests is not that heavy.
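For anyone curious what the tier distinction looks like from the application side, here is a minimal sketch (my own illustration, not from either benchmark or post) of querying which DXR tier a device exposes; `device` is assumed to be an already-created ID3D12Device on a recent SDK:

```cpp
// Minimal sketch: ask the D3D12 device which DXR tier it supports.
#include <windows.h>
#include <d3d12.h>
#include <cstdio>

void PrintRaytracingTier(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &opts5, sizeof(opts5))))
    {
        switch (opts5.RaytracingTier)
        {
        case D3D12_RAYTRACING_TIER_1_1: std::printf("DXR Tier 1.1\n"); break; // what Speed Way uses
        case D3D12_RAYTRACING_TIER_1_0: std::printf("DXR Tier 1.0\n"); break; // enough for Port Royal
        default:                        std::printf("No DXR support\n"); break;
        }
    }
}
```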


Overall it does seem to be strongly correlated with TFLOPS.
Yeah.
 
Per issued instruction, yes...

Everything indicates that AMD is doubling the FP32 SIMD lanes while keeping the register bandwidth unchanged.
You mean those LLVM/AMDGPU commits? They can still change, just as they did around the Navi1 and Navi2 launches.
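If that premise (doubled FP32 lanes, unchanged register bandwidth) does hold, a rough back-of-envelope of my own, assuming FMA-style instructions with three VGPR source operands per lane:

$$
\underbrace{2 \times 3}_{\text{operands needed per lane, dual-issue}} = 6 \;>\; \underbrace{3}_{\text{operands per lane per cycle at unchanged register bandwidth}}
$$

so the second FP32 issue can only be sustained when operands are shared between the two ops or come from outside the VGPR file, which is consistent with scaling worse per issued instruction.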
 
Really makes you wonder how much performance we are potentially losing now that most titles are DX12 only.

Some games run better in DX12 or Vulkan than in DX11. DX12 is not universally slower; it's just slower in the Witcher 3 remaster, or whatever it's called. That's also because it's basically doing D3D11-to-D3D12 translation instead of using the D3D12 API natively.
 
In my case DX12 is pretty much universally slower, with a select few exceptions where the DX11 path is flat-out broken with absurd CPU performance walls. When GPU-limited, DX11 is always faster.
 
So Microsoft lied about the benefits of DX12 over DX11, besides DXR?

They didn't lie about the benefits. They just forgot to mention that they're only achievable by omniscient game developers who know the insides of every CPU and GPU architecture and driver better than the people who build them.

Hopefully it’s made life easier for the driver development teams but it’s probably made it worse for them too.
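To make that concrete, here is a minimal sketch (my own illustration; `cmdList`, `queue`, `fence`, and `texture` are assumed to be valid objects created elsewhere) of the kind of explicit resource-state and synchronization bookkeeping D3D12 hands to the application, work a D3D11 driver did behind your back:

```cpp
// Minimal sketch of explicit D3D12 bookkeeping that D3D11 drivers handled implicitly.
#include <windows.h>
#include <d3d12.h>

void TransitionAndWait(ID3D12GraphicsCommandList* cmdList,
                       ID3D12CommandQueue* queue,
                       ID3D12Fence* fence,
                       UINT64& fenceValue,
                       ID3D12Resource* texture)
{
    // The app, not the driver, must declare that the texture goes from
    // render target to shader input before it is sampled.
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = texture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    cmdList->ResourceBarrier(1, &barrier);
    cmdList->Close();

    // The app also owns submission and CPU/GPU synchronization via fences.
    ID3D12CommandList* lists[] = { cmdList };
    queue->ExecuteCommandLists(1, lists);
    queue->Signal(fence, ++fenceValue);
    if (fence->GetCompletedValue() < fenceValue)
    {
        HANDLE evt = CreateEvent(nullptr, FALSE, FALSE, nullptr);
        fence->SetEventOnCompletion(fenceValue, evt);
        WaitForSingleObject(evt, INFINITE);
        CloseHandle(evt);
    }
}
```

Pick the wrong states, forget a barrier, or synchronize too coarsely and you get either corruption or exactly the kind of performance regressions being discussed here.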
 
How did nVidia lie about DX12? They have written enough blog posts. There is a reason why nVidia optimized Lovelace for energy efficiency. It doesn't make sense to change your architecture for developers who don't care about the PC platform. Let the GPU just run at the lowest power possible. It will help them in the notebook market, too...
 
Nvidia marketed their architectures as designed for DX12 and other low-level APIs during the Maxwell and Pascal era. This is far from the truth, as both are very ill-suited to low-level APIs and their binding model. Nearly every game ever released with user-selectable DX versions shows a performance drop when running DX12 on GPUs from these architectures, save for a few rare cases with absurd CPU bottlenecks where it seems almost no work was even put into the DX11 version. Even the Doom games, which are regarded as having the best implementation of a low-level API, show performance regressions on these Nvidia GPUs when not CPU-limited.

I don't agree that Nvidia shares enough in their blog posts. Can you provide me with the Nvidia equivalent of this for any of their consumer architectures from the last decade-plus? Their "white papers" are glorified marketing pieces.


If Nvidia doesn't gift you engineers, are you just left to guess at what's going on when you hit slow paths?


These guys certainly don't know what's going on. Where are the resources about the architecture for them to learn?
 
Nvidia marketed their architectures as designed for DX12 and other low-level APIs during the Maxwell and Pascal era. This is far from the truth, as both are very ill-suited to low-level APIs and their binding model. Nearly every game ever released with user-selectable DX versions shows a performance drop when running DX12 on GPUs from these architectures.

Does TW3 run faster in DX12 than DX11 on RDNA?
 
Nvidia marketed their architectures as designed for DX12 and other low-level APIs during the Maxwell and Pascal era. This is far from the truth, as both are very ill-suited to low-level APIs and their binding model. Nearly every game ever released with user-selectable DX versions shows a performance drop when running DX12 on GPUs from these architectures, save for a few rare cases with absurd CPU bottlenecks where it seems almost no work was even put into the DX11 version. Even the Doom games, which are regarded as having the best implementation of a low-level API, show performance regressions on these Nvidia GPUs when not CPU-limited.
And how is nVidia responsible for DX12 implementations? Maybe developers should do more. They wanted a low-level API; maybe they should start to care about superior PC hardware instead of porting console games.
 
A hell of a lot of games do run better under low-level APIs on AMD GPUs though. Both RDNA and the older GCNs from the Paxwell era.
We have an entire thread dedicated to how DX12 destroys performance on both AMD and NVIDIA GPUs, from all the archs you listed.

When DX12 is bad, it's bad for both; NVIDIA is just hit worse, and Intel too, by the way. Any GPU that isn't AMD (Mantle-like) will be hit harder. AMD GPUs themselves take a notable hit.

Nvidia marketed their architectures as designed for DX12 and other low-level APIs during the Maxwell and Pascal era.
They were certainly more compatible with the DX12 specs than AMD. They supported Conservative Rasterization and Rasterizer Ordered Views (ROVs); all GCN GPUs lacked such features completely.
Does TW3 run faster in DX12 than DX11 on RDNA?
Nope.
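On the Conservative Rasterization / ROV point above, a minimal sketch (my own illustration; `device` is again an assumed, already-created ID3D12Device) of how an application queries those caps:

```cpp
// Minimal sketch: query the two DX12 features mentioned above.
#include <windows.h>
#include <d3d12.h>
#include <cstdio>

void PrintFeatureCaps(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &opts, sizeof(opts))))
    {
        std::printf("Conservative rasterization tier: %d\n",
                    static_cast<int>(opts.ConservativeRasterizationTier));
        std::printf("Rasterizer ordered views: %s\n",
                    opts.ROVsSupported ? "yes" : "no");
    }
}
```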
 