Nvidia's 3000 Series RTX GPU [3090s with different memory capacity]

I often see people claiming Steam stats are flawed. What I never see is an explanation of why those flaws would favor one product over another in a large, random sample.
Unintentional sampling bias is always possible. However, I agree with you that I have not seen any rigorous examination of these flaws or an evaluation of their statistical significance. Given that Steam's results are largely consistent with other studies, the only three possibilities are:
(1) the flaws are statistically insignificant
(2) the flaws are statistically significant but cancel each other out so perfectly that the results are aligned with other studies
(3) all studies are flawed in statistically significant but correlated ways

Of these, (1) seems to be the most likely explanation to me. The others are also possible (I've personally conducted experiments in which (2) has occurred in hilarious ways), but less likely.
 
HUB looks further into Nvidia's subpar driver performance.

AMD's driver is 20-30% faster when CPU-limited under low-level APIs.

The D3D12 binding model causes some grief on Nvidia HW. Microsoft forgot to include STATIC descriptors in RS 1.0, which then got fixed with RS 1.1, but no developers use RS 1.1, so in the end Nvidia likely have app profiles or game-specific hacks in their drivers. Mismatched descriptor types are technically undefined behaviour in D3D12, but there are now cases in games where shaders are using sampler descriptors in place of UAV descriptors, and somehow it works without crashing! No one has any idea what workaround Nvidia is applying.
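
For context, Root Signature 1.1 is where those STATIC hints live. Below is a minimal C++ sketch (not taken from any game or driver) of declaring a 1.1 descriptor table with an SRV range flagged DATA_STATIC; `device` is assumed to be an existing ID3D12Device* and `MakeRootSig11` is just an illustrative name.

// Minimal sketch, not code from any shipping game or driver: building a Root
// Signature 1.1 descriptor table with an SRV range flagged DATA_STATIC, the
// staticness hint RS 1.0 could not express. Error handling omitted; link d3d12.lib.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12RootSignature> MakeRootSig11(ID3D12Device* device) // illustrative helper
{
    D3D12_DESCRIPTOR_RANGE1 range = {};
    range.RangeType          = D3D12_DESCRIPTOR_RANGE_TYPE_SRV;
    range.NumDescriptors     = 4;   // t0..t3
    range.BaseShaderRegister = 0;
    range.RegisterSpace      = 0;
    // DATA_STATIC promises the data behind these descriptors is written before the
    // table is bound and never changes while it is referenced. In RS 1.1 the
    // descriptors themselves also default to static unless DESCRIPTORS_VOLATILE is set.
    range.Flags = D3D12_DESCRIPTOR_RANGE_FLAG_DATA_STATIC;
    range.OffsetInDescriptorsFromTableStart = D3D12_DESCRIPTOR_RANGE_OFFSET_APPEND;

    D3D12_ROOT_PARAMETER1 table = {};
    table.ParameterType                       = D3D12_ROOT_PARAMETER_TYPE_DESCRIPTOR_TABLE;
    table.DescriptorTable.NumDescriptorRanges = 1;
    table.DescriptorTable.pDescriptorRanges   = &range;
    table.ShaderVisibility                    = D3D12_SHADER_VISIBILITY_PIXEL;

    // Production code should first query D3D12_FEATURE_ROOT_SIGNATURE to confirm
    // the runtime/driver accepts version 1.1.
    D3D12_VERSIONED_ROOT_SIGNATURE_DESC desc = {};
    desc.Version                = D3D_ROOT_SIGNATURE_VERSION_1_1;
    desc.Desc_1_1.NumParameters = 1;
    desc.Desc_1_1.pParameters   = &table;
    desc.Desc_1_1.Flags         = D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT;

    ComPtr<ID3DBlob> blob, error;
    D3D12SerializeVersionedRootSignature(&desc, &blob, &error);

    ComPtr<ID3D12RootSignature> rootSig;
    device->CreateRootSignature(0, blob->GetBufferPointer(), blob->GetBufferSize(),
                                IID_PPV_ARGS(&rootSig));
    return rootSig;
}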
 
Has nothing to do with driver overhead and everything to do with the fact that these "low level APIs" happen to be in AMD sponsored titles. (Haven't watched the video yet.)

On Vulkan, the binding model isn't too bad on Nvidia HW. The validation layers on Vulkan would catch the mismatched descriptor type, preventing a lot of headaches for their driver team compared to D3D12 ...
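
For illustration, here is a minimal C++ sketch of what opting into that safety net looks like on the Vulkan side: enabling the VK_LAYER_KHRONOS_validation layer at instance creation, which is the layer that flags descriptor-type mismatches. `CreateInstanceWithValidation` is just an illustrative name, and real code should enumerate layers first and check the VkResult.

// Minimal sketch: enabling the Khronos validation layer at VkInstance creation.
// With the layer active, binding a descriptor whose type doesn't match what the
// pipeline layout/shader expects is reported as a validation error instead of
// silently becoming undefined behaviour. Real code should enumerate instance
// layers first to confirm the layer is installed; error handling omitted.
#include <vulkan/vulkan.h>

VkInstance CreateInstanceWithValidation() // illustrative helper
{
    const char* layers[] = { "VK_LAYER_KHRONOS_validation" };

    VkApplicationInfo app{};
    app.sType      = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.apiVersion = VK_API_VERSION_1_2;

    VkInstanceCreateInfo info{};
    info.sType               = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    info.pApplicationInfo    = &app;
    info.enabledLayerCount   = 1;
    info.ppEnabledLayerNames = layers;

    VkInstance instance = VK_NULL_HANDLE;
    vkCreateInstance(&info, nullptr, &instance); // check the VkResult in real code
    return instance;
}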
 
Has nothing to do with driver overhead and everything to do with the fact that these "low level APIs" happen to be in AMD sponsored titles. (Haven't watched the video yet.)
Except it happens in Watch Dogs Legion too, and in pretty much all low level API games they have tested.

The D3D12 binding model causes some grief on Nvidia HW. Microsoft forgot to include STATIC descriptors in RS 1.0, which then got fixed with RS 1.1, but no developers use RS 1.1, so in the end Nvidia likely have app profiles or game-specific hacks in their drivers. Mismatched descriptor types are technically undefined behaviour in D3D12, but there are now cases in games where shaders are using sampler descriptors in place of UAV descriptors, and somehow it works without crashing! No one has any idea what workaround Nvidia is applying.

It must be a decade, give or take, that Nvidia has known about the DX12 specification. Whose fault is it at this point that their hardware has issues?
 
Has nothing to do with driver overhead and everything to do with the fact that these "low level APIs" happen to be in AMD sponsored titles. (Haven't watched the video yet.)
Of course, because NVIDIA can never be at fault, right? :rolleyes:
They tested in Horizon Zero Dawn and Watch Dogs Legion, the latter being an NVIDIA-sponsored title.
Yes, you said you haven't watched the video yet, but you still had to go and call it a biased test just in case.

edit: just for the sake of it, it shows an even more dramatic difference in Watch Dogs Legion, where even a 5600 XT beats an RTX 3070 on a Ryzen 5 1600X, 2600X and Core i3-10100 at 1080p medium settings (at 1080p ultra the 5600 XT becomes the bottleneck regardless of CPU)

When I skimmed it they were doing 1080p/1440p...RTX3090 at 1080p is doing it wrong ;)
Not when you're examining CPU load between cards, and they used cards from other performance tiers too.
 
Of course, because NVIDIA can never be at fault, right? :rolleyes:
They tested in Horizon Zero Dawn and Watch Dogs Legion, the latter being an NVIDIA-sponsored title.
Yes, you said you haven't watched the video yet, but you still had to go and call it a biased test just in case.

edit: just for the sake of it, it shows an even more dramatic difference in Watch Dogs Legion, where even a 5600 XT beats an RTX 3070 on a Ryzen 5 1600X, 2600X and Core i3-10100 at 1080p medium settings (at 1080p ultra the 5600 XT becomes the bottleneck regardless of CPU)


Not when you're examining CPU load between cards, and they used cards from other performance tiers too.

So how do the results look at 4K, Ultra settings?
You know, how a person with an RTX 3090 will most likely play the game?
 
So how do the results look at 4K, Ultra settings?
You know, how a person with an RTX 3090 will most likely play the game?
The CPU is unlikely to be the bottleneck in any way at those settings; however, that's completely irrelevant to this discussion. It's not an RTX 3090 issue, not even an Ampere issue, since the RTX 2080 Ti behaved the same way.
 
How does saying it's a software fault preclude there being possible driver issues? How do you rule out the possibility?
The DX12 driver is made precisely so that it won't be an issue. Considering the amount of attention low-level APIs get, and the fact that you need them for RT, I think it's rather unlikely to be a driver issue of such magnitude.

As Lurkmass said above, this is likely a mismatch between h/w capabilities and s/w, which is the result of badly engineered s/w in the first place.
 
The DX12 driver is made precisely so that it won't be an issue. Considering the amount of attention low-level APIs get, and the fact that you need them for RT, I think it's rather unlikely to be a driver issue of such magnitude.

As Lurkmass said above, this is likely a mismatch between h/w capabilities and s/w, which is the result of badly engineered s/w in the first place.
The issues with "badly engineered s/w" aren't present on AMD in this case, so clearly it is NVIDIA specific issue for now (until Intel comes with their cards and we get results for those)
NVIDIA designed their hardware and drivers to support the API knowing it's faults, and AMD isn't suffering from similar issues, which makes this either NVIDIA driver or hardware issue.
 
It must be a decade, give or take, that Nvidia has known about the DX12 specification. Whose fault is it at this point that their hardware has issues?

It could go multiple ways ...

Microsoft chose the binding model which favors AMD HW, so other vendors don't really have a choice. Similarly, Microsoft standardized dumb features like ROVs which run badly on AMD (a quick support check is sketched after this post). Sometimes Microsoft takes a "no compromise" approach to these things, so hardware vendors are bound to find some features difficult to implement ...

Developers can also stop relying on "undefined behavior", because that's what they're doing, so it's not unreasonable to see performance pitfalls or even crashes ...

Maybe Nvidia could very well be at fault, since they keep making hacks in their drivers instead of designing their HW to be more in line with AMD's bindless model ...

Who really knows at this point since the ship has sailed ...
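
For reference, ROVs are an optional feature at the D3D12 API level, so an application is expected to query for them before use. A minimal C++ sketch of that check, assuming `device` is an existing ID3D12Device* and `SupportsROVs` is just an illustrative helper name:

// Minimal sketch: ROVs (Rasterizer Ordered Views) are an optional D3D12 feature,
// so an application is expected to query support before relying on them.
#include <windows.h>
#include <d3d12.h>

bool SupportsROVs(ID3D12Device* device) // illustrative helper
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &options, sizeof(options))))
        return false;
    return options.ROVsSupported != FALSE;
}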
 
The issues with "badly engineered s/w" aren't present on AMD in this case, so clearly it is NVIDIA specific issue for now (until Intel comes with their cards and we get results for those)
Shocking news - AMD GPUs aren't the same as Nvidia's and vice versa.
More shocking news for you - not everything AMD does in GPUs is always great, and thus not everything can or should be copied.
In other words - this means next to nothing.

NVIDIA designed their hardware and drivers to support the API knowing its faults
Every piece of h/w has faults; it's the developers who should be aware of them and avoid them.

and AMD isn't suffering from similar issues, which makes this either an NVIDIA driver or hardware issue.
Nope, it makes it purely a s/w issue, since there are no such faults when using other APIs on the same h/w.
 