> It wasn't intentional. Nvidia have confirmed a developer left the unlocked code in a beta driver, which they've since removed.
Yeah, that was a big "whoops" on their part. Not sure how that even happened.

> Yeah, that was a big "whoops" on their part. Not sure how that even happened.
People make mistakes: lack of communication, or poor oversight of critical code forks required for drivers already in development. Shit happens and gamers suffer more.

> It was visible since the first games. DX12 on Nvidia hardware doesn't work right. For example, Hitman 1 with DX12 is slower on my 4C/8T, 2060 notebook than DX11 in a CPU-limited scenario. There is a software (API) overhead involved in getting Nvidia GPUs running, and without proper multi-threading the Nvidia DX11 driver is just superior to DX12...
I do recall this as well.

> Would have been nice if they specified which API each of the games was using. I have no idea what API Planet Zoo uses, for example.
Part of it is the translation required, I'm sure, but the way the data is presented could be clearer. In some of the graphs they even switch the colours between Nvidia and AMD!

> It was visible since the first games. DX12 on Nvidia hardware doesn't work right. For example, Hitman 1 with DX12 is slower on my 4C/8T, 2060 notebook than DX11 in a CPU-limited scenario. There is a software (API) overhead involved in getting Nvidia GPUs running, and without proper multi-threading the Nvidia DX11 driver is just superior to DX12...
There's little point in presenting anecdotal one-off impressions as if they mean anything more than that; I think the thread has established that clearly by now.

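On the multi-threading point: under DX11 the driver can spread a lot of submission work across its own worker threads, while under DX12 the engine has to record command lists on multiple threads itself. Here is a toy C++ sketch of that trade-off, not real D3D code; the draw count, per-draw cost and thread count are all invented:

```cpp
// Toy model (not real D3D code): why single-threaded DX12 command recording
// can lose to a DX11 driver that threads submission work for you.
// The draw count, per-draw cost and thread count below are all invented.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

static void record_draws(int count, std::atomic<long long>& sink) {
    // Stand-in for per-draw CPU work (state setup, command encoding).
    long long acc = 0;
    for (int i = 0; i < count; ++i)
        for (int j = 0; j < 2000; ++j)
            acc += i ^ j;
    sink += acc;  // keep the work observable so it isn't optimized away
}

int main() {
    const int draws = 20000;
    std::atomic<long long> sink{0};

    // One thread records every draw ("single-threaded DX12 renderer").
    auto t0 = std::chrono::steady_clock::now();
    record_draws(draws, sink);
    auto t1 = std::chrono::steady_clock::now();

    // The same work split across worker threads (what the DX11 driver,
    // or a properly threaded DX12 engine, effectively does).
    const int workers = 4;
    std::vector<std::thread> pool;
    for (int w = 0; w < workers; ++w)
        pool.emplace_back(record_draws, draws / workers, std::ref(sink));
    for (auto& t : pool) t.join();
    auto t2 = std::chrono::steady_clock::now();

    auto ms = [](auto a, auto b) {
        return std::chrono::duration<double, std::milli>(b - a).count();
    };
    std::printf("serial recording:   %.2f ms\n", ms(t0, t1));
    std::printf("threaded recording: %.2f ms\n", ms(t1, t2));
    std::printf("(checksum %lld)\n", static_cast<long long>(sink.load()));
}
```

On a 4C/8T part like the notebook mentioned above, the threaded path only helps if the engine actually splits the recording; a single-threaded DX12 renderer gets none of that benefit.
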
> a developer left the unlocked code in a beta driver
So devs get driver source code? And why would NV make a third-party developer's compiled code available to other people?

> So devs get driver source code? And why would NV make a third-party developer's compiled code available to other people?
An Nvidia driver developer, not a game developer. Sorry for the confusion.

> In the 3DMark overhead test, DX12 is 5x+ faster than DX11 MT on an Nvidia GPU. Yet we are here talking about games which are clearly CPU-limited with DX12 and are only 2x faster (best case, in Shadow of the Tomb Raider), or not one frame faster.
> Fact is that most DX12 implementations are still CPU-limited and are nothing more than brute-force implementations to get the Nvidia GPU running.
> Here is an example from Shadow of the Tomb Raider: [screenshot]
> This game shows what is limiting the performance. With DX12 it's clearly not the DX12 driver when the "GPU renders" a 720p image in ~3.7 ms...
> And another from Hitman 2: [screenshot]
> DX11 is faster with less than 50% GPU usage...
Hmm. I may not have followed this argument correctly, so bear with me if I read your post wrong.

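A rough way to read the "~3.7 ms" point (only the GPU time comes from the post; the observed frame time below is hypothetical): 3.7 ms of GPU work puts the GPU-bound ceiling near 270 FPS, so whatever a CPU-limited run loses below that is time spent on the CPU side. A minimal sketch, treating CPU and GPU work as non-overlapping, which is roughly what a hard CPU limit looks like:

```cpp
// Minimal sketch of the CPU-vs-GPU split. gpu_ms (3.7) comes from the post;
// frame_ms is a hypothetical CPU-limited result, not a measurement.
#include <cstdio>

int main() {
    const double gpu_ms   = 3.7;   // GPU render time at 720p (from the post)
    const double frame_ms = 7.4;   // hypothetical observed frame time (~135 FPS)

    const double gpu_ceiling_fps = 1000.0 / gpu_ms;    // ~270 FPS if purely GPU-bound
    const double observed_fps    = 1000.0 / frame_ms;
    const double cpu_side_ms     = frame_ms - gpu_ms;  // game + API + driver work

    std::printf("GPU-bound ceiling: %.0f FPS\n", gpu_ceiling_fps);
    std::printf("observed:          %.0f FPS\n", observed_fps);
    std::printf("CPU-side cost:     %.1f ms per frame\n", cpu_side_ms);
}
```
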
> Hitman 2 looks to run similar across both APIs, assuming still on the 9900K?
Hitman 2 is considerably slower in DX11 here on a 6850K with a 3080. Up to 50% slower, in fact.

> Zen, Zen+ and Zen 2 all had gaming performance issues because of the latency between CCXs and cache latency in general.
A further in-depth analysis on Intel CPUs is required; I don't remember old Intel CPUs showing the same behavior in DX12 as these AMD Zen CPUs.

> A further in-depth analysis on Intel CPUs is required; I don't remember old Intel CPUs showing the same behavior in DX12 as these AMD Zen CPUs.
Maybe because the slightly older Intel CPUs could still hit 5 GHz, whereas Zen/Zen+ had terrible clock speeds.

> In the 3DMark overhead test, DX12 is 5x+ faster than DX11 MT on an Nvidia GPU. Yet we are here talking about games which are clearly CPU-limited with DX12 and are only 2x faster (best case, in Shadow of the Tomb Raider), or not one frame faster. [...]
Years and several architectures later, and developers still face issues. At what point does the blame begin to fall on Nvidia for their hardware design? Devs talking about their experience of working with low-level APIs on each vendor's GPUs would be great.

> I know that in a very CPU-heavy game like COD Warzone, Intel is still king if you're overclocking (not sure about out of the box, with XMP profiles for 3200 RAM, etc.). I don't think it's so much the clock speed as it is the latency of the cache hierarchy and the ease of getting stable RAM overclocks and tight memory timings. I think the 5800X and 5900X are getting close, but they're way more of a pain in the ass to overclock and tweak because there's a whole bunch of different clocks and ratios you have to worry about.
If you're saying that AMD drivers are better optimized for the Zen 1/2 CCX structure, then I don't see why this would be limited only to DX12.

I'm imagining taking the same system and swapping GPUs between AMD and Nvidia. Say the Nvidia driver requests data from memory at an 11:10 ratio compared to the AMD driver. If the cache is getting hit hard and that one extra data request sometimes turns into an L3 cache miss, or a read across CCXs, suddenly you have this extra "overhead" that wouldn't be visible when you were playing something with lots of headroom on the CPU.
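A crude expected-cost model of that idea (every number below except the 11:10 ratio is an invented placeholder): if one driver issues ~10% more memory requests per frame and a small fraction of requests miss L3 or cross a CCX, the extra requests become a fixed per-frame cost that only shows up once the CPU has no headroom left.

```cpp
// Crude expected-cost model of the "11:10 memory request" idea above.
// Every number except the 11:10 ratio is an invented placeholder.
#include <cstdio>

int main() {
    const double requests_amd    = 5.0e5;                       // hypothetical requests per frame (AMD driver)
    const double requests_nvidia = requests_amd * 11.0 / 10.0;  // 11:10 ratio from the post

    const double hit_ns    = 4.0;    // illustrative cost of a cache hit
    const double miss_ns   = 70.0;   // illustrative cost of an L3 miss / cross-CCX read
    const double miss_rate = 0.02;   // illustrative miss rate under cache pressure

    auto cost_ms = [&](double requests) {
        const double avg_ns = (1.0 - miss_rate) * hit_ns + miss_rate * miss_ns;
        return requests * avg_ns / 1.0e6;  // ns -> ms
    };

    std::printf("AMD-driver memory cost:    %.2f ms per frame\n", cost_ms(requests_amd));
    std::printf("Nvidia-driver memory cost: %.2f ms per frame\n", cost_ms(requests_nvidia));
    std::printf("extra \"overhead\":          %.2f ms per frame\n",
                cost_ms(requests_nvidia) - cost_ms(requests_amd));
}
```

Whether any real difference comes from request volume, access patterns, or something else entirely is exactly what the post above is speculating about; the sketch only shows how a small ratio could turn into a visible frame-time cost under cache pressure.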