Nvidia's 3000 Series RTX GPU [3090s with different memory capacity]

These results are obviously CPU limited since everything between the 5700 XT and the 6800 shows the same fps. The 6800 XT being a bit faster than the rest of the bunch is an interesting artefact, though.

Yah, you're right. Not sure how I didn't notice that.

I wonder if there's some situation where CPU limited and "driver limited" are not the same thing. Like, setting the game to low might decrease the number of draw calls per frame and shift the bottleneck somewhere else in the driver? I don't know.
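To make that concrete, here's a purely illustrative back-of-envelope model (every function, preset and number below is invented): if per-frame CPU cost is roughly game work plus per-draw-call submission plus some fixed driver work, cutting draw calls at low settings shrinks only the middle term, so whatever is left (game code or fixed driver work) can become the new bottleneck.

```python
# Purely illustrative toy model: how per-frame CPU time might split between game code,
# per-draw-call driver submission, and fixed per-frame driver work. All numbers are made up.

def frame_cpu_ms(draw_calls, game_ms=4.0, submit_us_per_call=2.0, fixed_driver_ms=1.5):
    """Hypothetical CPU cost of one frame, in milliseconds."""
    return game_ms + draw_calls * submit_us_per_call / 1000.0 + fixed_driver_ms

for preset, calls in [("ultra", 6000), ("medium", 3500), ("low", 1500)]:
    ms = frame_cpu_ms(calls)
    print(f"{preset:>6}: {calls:5d} draw calls -> ~{ms:.1f} ms CPU/frame (~{1000/ms:.0f} fps cap)")
```

In this made-up example the draw-call term dominates at ultra but is a minor cost at low, which is the kind of shift that could move the limit to a different part of the driver, or out of the driver entirely.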
 
What is there to provide? The results show a CPU limited case from a DX12 game which is known to be very CPU limited. They also don't show anything like what HUB is showing.
I've said that more tests are needed. This is another such test which shows that the issue isn't really universal and may not even be CPU related.
They're using one CPU in that graph though: a 10700K. That's a 5 GHz 8-core, 16-thread CPU, one of the most performant gaming CPUs out at the moment. The top-end CPU HU used was a 5600X, a 6-core, 12-thread CPU.

A driver potentially requiring 'more CPU overhead' doesn't mean the actual % impact on the final framerate will stay the same as you significantly increase CPU power, and especially cores/threads. It's quite possible that whatever supposed increased load some DX12 games put on the CPU when paired with Nvidia GPUs gets spread across threads/cores that aren't particularly saturated by the game itself. HU's focus was on older CPUs with fewer cores; there is little argument that a top-end CPU will make any supposed increased demand from the Nvidia driver far less noticeable, even at lower resolutions.

Yes, more tests are necessary to make HU's click-baity video title actually reflect a consensus (and one they seemed more careful to parse in the actual video), so we are not in disagreement here. I would def like to see Digital Foundry and Gamers Nexus tackle this as well; lord knows Steve from GN is not averse to throwing screens of 100 bar graphs in his videos.
 
I wonder if there's some situation where CPU limited and "driver limited" are not the same thing. Like, setting the game to low might decrease the number of draw calls per frame and shift the bottleneck somewhere else in the driver? I don't know.
It depends entirely on the game, which is why we need more tests. Often, though, a game's 'medium vs. ultra' will put an increased load on both the GPU and the CPU, but yes, that could definitely stress different aspects of the driver differently.
 
They're using one CPU in that graph though: a 10700K. That's a 5 GHz 8-core, 16-thread CPU, one of the most performant gaming CPUs out at the moment. The top-end CPU HU used was a 5600X, a 6-core, 12-thread CPU.

A driver potentially requiring 'more CPU overhead' doesn't mean the actual % impact on the final framerate will stay the same as you significantly increase CPU power, and especially cores/threads. It's quite possible that whatever supposed increased load some DX12 games put on the CPU when paired with Nvidia GPUs gets spread across threads/cores that aren't particularly saturated by the game itself. HU's focus was on older CPUs with fewer cores; there is little argument that a top-end CPU will make any supposed increased demand from the Nvidia driver far less noticeable, even at lower resolutions.

Yes, more tests are necessary to make HU's click-baity video title actually reflect a consensus (and one they seemed more careful to parse in the actual video), so we are not in disagreement here. I would def like to see Digital Foundry and Gamers Nexus tackle this as well; lord knows Steve from GN is not averse to throwing screens of 100 bar graphs in his videos.
It doesn't matter what CPU it is. The results themselves are CPU limited. The fact is, in this case there is no sign of the Nvidia driver doing any worse than the AMD one.

Why? Great question, which HUB should've tried to answer instead of spreading FUD again. This issue needs more investigation before anything can be said with certainty.
 
Shadow of the Tomb Raider can render over 200 fps at 1080p on a 3090, but the DX12 renderer and the game are CPU bound. It doesn't matter which CPU someone uses: at 1080p and lower, a Radeon should be much faster than a 3080/3090. Older Ryzen CPUs had/have problems in Rise and Shadow of the Tomb Raider with DX12 on nVidia hardware. That is a software problem and should have been fixed a long time ago by Nixxes...

/edit: Here is a comparison between a Ryzen 7 3700X and an i7-10700 in Shadow:

That is the most CPU-bound part of the Shadow benchmark. The Intel processor is more than 30% faster. I don't think that Intel has 30% higher IPC than the AMD processor...
There are other DX12 games which show the same behavior, like Valhalla.
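A quick back-of-envelope check (a sketch only; these are spec-sheet boost clocks, and real in-game clocks will be lower and game-dependent) suggests clock speed alone can't account for the gap:

```python
# Rough sanity check on that 30% figure, using spec-sheet boost clocks
# (actual in-game clocks will differ, so treat this as an estimate only).
intel_boost_ghz = 4.8    # i7-10700 max turbo
amd_boost_ghz = 4.4      # Ryzen 7 3700X boost

clock_advantage = intel_boost_ghz / amd_boost_ghz - 1              # ~9%
observed_gap = 0.30                                                # "more than 30% faster"
implied_per_clock_gap = (1 + observed_gap) / (1 + clock_advantage) - 1

print(f"clock advantage: {clock_advantage:.1%}")                   # ~9.1%
print(f"implied per-clock gap: {implied_per_clock_gap:.1%}")       # ~19%
```

An implied per-clock gap of roughly 19% would be far larger than typical Zen 2 vs. Comet Lake differences, so something other than raw IPC or clocks is likely at play here.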
 
@troyan That Tomb Raider benchmark is likely measuring the cache and memory latency differences between Zen 2 and Comet Lake. It probably has nothing to do with IPC or clock speed. Zen 2 is probably stalling while waiting for memory accesses.
 
Personally I think these are really interesting results and well worth highlighting. This would seem to be quite relevant to esports players, and perhaps it helps to explain why Ampere seemingly loses performance to RDNA2.

The problem with these results is that they tested zero esports titles with 300+ fps. To me the potential difference in very high fps cases is the most interesting aspect, but it's purely in the realm of speculation and assumptions now. Concrete data would be welcomed. It would be good to see a top tier CPU being used as well to see how far the issues potentially go.
 
@troyan That Tomb Raider benchmark is likely measuring the cache and memory latency differences between Zen 2 and Comet Lake. It probably has nothing to do with IPC or clock speed. Zen 2 is probably stalling while waiting for memory accesses.

When the GPU is switched, the Ryzen processor performs much better. Paul's Hardware shows this with the Ryzen 3950X.

/edit: DX11 and DX12 comparison with the Shadow of the Tomb Raider benchmark at 720p on an i7-9900 and a 3090:



With DX12 the nVidia path isn't optimized enough to be on par with DX11. I guess nobody has cared because there weren't faster GPUs on the market to even think about >150 FPS...
 
At this point I think Nvidia drivers using more CPU is probably 99% confirmed. This user is going from an R9 390 to a GTX 1660 Ti, which should be a huge upgrade, but instead he gets much worse performance because his CPU maxes out with the Nvidia drivers.


I would consider this a real issue for Nvidia. People assume that no one would pair a new GPU with an old CPU, but people do it all the time. You buy the GPU you can afford at the time and save a CPU/motherboard upgrade for later. I bought an RTX 3080 at the end of 2020 knowing I'd be CPU limited in many of my use cases, but plan on buying a new CPU and motherboard in 2022. Not sure if this driver overhead issue is really affecting me, but it might. For lower-budget or mid-range gamers, this type of situation may actually be very common.
 
@troyan He's using dx11 with future frame rendering on and all settings to low, which is basically the esports setup for battlefield. Not sure why you wouldn't believe that this person is being honest about their video. It's from 2019, well before this issue was highlighted by hardware unboxed to the broader community. It's likely he posted it hoping for real answers. If you spend money on a new gpu and it seemed to run much worse you'd probably be very disappointed.

To be clearer: the user goes from GPU limited to fully CPU limited with much lower performance. The R9 390 is quite a bit faster and bottlenecks on GPU utilization. He switches to the GTX 1660 Ti and performance drops quite a bit because the CPU hits 100% and can't feed the GPU fast enough; GPU utilization is in the 55-60% range.
 
I would consider this a real issue for Nvidia. People assume that no one would pair a new GPU with an old CPU, but people do it all the time. You buy the GPU you can afford at the time and save a CPU/motherboard upgrade for later. I bought an RTX 3080 at the end of 2020 knowing I'd be CPU limited in many of my use cases, but plan on buying a new CPU and motherboard in 2022. Not sure if this driver overhead issue is really affecting me, but it might. For lower-budget or mid-range gamers, this type of situation may actually be very common.
I'm a big stickler for frametime consistency, so in addition to seeing whether it's actually a bottleneck for maximum frames/sec, I'd also like to see tests done with a vsynced frame cap where the GPU isn't at 100% (so it's not the bottleneck), with per-thread CPU usage recorded over time along with frametime stats. I game primarily on a TV without VRR, so yeah, I'm probably not in the majority of PC gamers with VRR monitors, but it would also be a fairly straightforward way, I think, of showing any potential increase in CPU load required to do the same amount of work. It would also be easier to do with configs that more people would likely pair these mid-tier CPUs with (e.g., a 5600 XT vs. a 2060 Super on a 6-core CPU) to negate the "Well yeah, no dummy is gonna use a Zen 2 with a 3090!" reactions. Even if the relative frames/sec are roughly equal, I still want to know if one vendor is leaving me more CPU headroom than another to do the same job.
 
@Flappy Pannus Yah, I think that would be an interesting way of doing it. They could just pick a game with a built-in framerate limiter, go 30 fps, 60 fps, 90 fps, 120 fps, etc., and plot the CPU usage of AMD vs. Nvidia. A lower-end CPU would make differences more visible.
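A minimal sketch of what that sweep could look like once the usage logs exist (the file layout and the "cpu_total" column name are assumptions; adapt them to whatever your logging tool actually writes):

```python
# Minimal sketch of the frame-cap sweep: one CSV of CPU usage samples per GPU per cap,
# averaged and plotted per vendor. File names and column name are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

caps = [30, 60, 90, 120]
vendors = ["amd", "nvidia"]

avg_usage = {v: [] for v in vendors}
for v in vendors:
    for cap in caps:
        run = pd.read_csv(f"logs/{v}_{cap}fps.csv")     # one capped run per file
        avg_usage[v].append(run["cpu_total"].mean())    # average CPU % over the run

for v in vendors:
    plt.plot(caps, avg_usage[v], marker="o", label=v.upper())
plt.xlabel("frame cap (fps)")
plt.ylabel("average total CPU usage (%)")
plt.title("Same scene, same settings, fixed frame rates")
plt.legend()
plt.show()
```

If one vendor's line sits consistently higher at the same cap, that's the driver overhead showing up directly, without needing the GPU to be the limit at all.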

Edit:

You can get some pretty nice performance in BFV with an AMD GPU on an old CPU like the Ryzen 5 1400, and the Nvidia GPUs are considerably behind. I don't think it's unrealistic that someone would pair a Ryzen 5 1400 with an RTX 2060 or a GTX 1660 variant.
 
The best way to test is to just test a lot of different CPUs. Most of the time, GPU benchmarks are about pushing the bottleneck over to the GPU by pairing it with a massive CPU. But it would be interesting to see the reverse and see where the bottlenecks occur on the CPU side of things.
 
@troyan He's using dx11 with future frame rendering on and all settings to low, which is basically the esports setup for battlefield. Not sure why you wouldn't believe that this person is being honest about their video. It's from 2019, well before this issue was highlighted by hardware unboxed to the broader community. It's likely he posted it hoping for real answers. If you spend money on a new gpu and it seemed to run much worse you'd probably be very disappointed.

To be clearer: the user goes from GPU limited to fully CPU limited with much lower performance. The R9 390 is quite a bit faster and bottlenecks on GPU utilization. He switches to the GTX 1660 Ti and performance drops quite a bit because the CPU hits 100% and can't feed the GPU fast enough; GPU utilization is in the 55-60% range.
Just another data point: I can confirm BFV runs absolutely atrociously on my 1080 Ti from a CPU perspective, although it's not great GPU-wise either. Constant stuttering and massive FPS fluctuations. It's so bad that it ruins the experience. I had assumed it was just the poor state of the game, but maybe it has been an Nvidia issue this entire time.
 
Just another data point: I can confirm BFV runs absolutely atrociously on my 1080 Ti from a CPU perspective, although it's not great GPU-wise either. Constant stuttering and massive FPS fluctuations. It's so bad that it ruins the game. I had assumed it was just the poor state of the game, but maybe it has been an Nvidia issue this entire time.

Honestly, I was in the same boat when I had my GTX 1060. Haven't tried it since I bought my RTX 3080. I really don't want to download it :)
 
@troyan He's using dx11 with future frame rendering on and all settings to low, which is basically the esports setup for battlefield. Not sure why you wouldn't believe that this person is being honest about their video. It's from 2019, well before this issue was highlighted by hardware unboxed to the broader community. It's likely he posted it hoping for real answers. If you spend money on a new gpu and it seemed to run much worse you'd probably be very disappointed.

To be clearer: the user goes from GPU limited to fully CPU limited with much lower performance. The R9 390 is quite a bit faster and bottlenecks on GPU utilization. He switches to the GTX 1660 Ti and performance drops quite a bit because the CPU hits 100% and can't feed the GPU fast enough; GPU utilization is in the 55-60% range.
The Techgasm video has a link to a user with 3 videos of DX11 games from 2016 comparing a 480/970/1060 in Witcher 3, Crysis 3 and Black Ops 3, using an i5-3570. All 3 show higher CPU usage with the Nvidia cards to reach very similar performance (it varies in severity as the videos progress; Crysis 3 seems pretty similar, though, unlike Witcher/BO3).

[Screenshots: CPU/GPU usage overlays from the Witcher 3, Crysis 3 and Black Ops 3 comparisons]


This is where a full frametime graph is crucial, though, rather than just average frames/sec. Perhaps the AMD GPU is starved on one thread at some points, leading to stutters, while Nvidia has higher CPU usage overall but spreads it more consistently across all cores and avoids those lows, which would barely show up in an average over 4 minutes but could be felt in gameplay. Lower CPU usage is not always preferable if it can't scale. Or perhaps the opposite: with Nvidia the CPU is constantly at its limit, and when the game's CPU demands spike it suffers?
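Getting past averages doesn't take much; here's a minimal sketch, assuming a PresentMon-style frametime log (the CSV name and the "MsBetweenPresents" column are assumptions; adjust to whatever your capture tool writes):

```python
# Minimal sketch: pull the tail-latency numbers out of a per-frame log instead of
# relying on the average alone. Assumes a PresentMon-style CSV.
import numpy as np
import pandas as pd

frametimes_ms = pd.read_csv("capture.csv")["MsBetweenPresents"].to_numpy()

avg_fps = 1000.0 / frametimes_ms.mean()
p99_ms = np.percentile(frametimes_ms, 99)      # 1% of frames are slower than this
low_1pct_fps = 1000.0 / p99_ms                 # common "1% low" approximation

print(f"average: {avg_fps:.1f} fps")
print(f"99th percentile frametime: {p99_ms:.2f} ms (~{low_1pct_fps:.1f} fps '1% low')")
```

Two setups with identical averages can have very different 99th percentiles, which is exactly the stutter-vs-headroom distinction described above.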
 
@troyan He's using dx11 with future frame rendering on and all settings to low
Well, this still isn't DX12, so the argument is valid: this video shows that BFV in DX11 has a higher CPU load on GeForces, but generalizing that into "At this point I think Nvidia drivers using more CPU is probably 99% confirmed" is an even bigger stretch than what HUB did.
For all we know so far, this may be a combination of a game's renderer doing something and the platform doing or not doing something. Drivers are possibly also at fault, of course.
Note that BFV is a game known to favor AMD GPUs in all instances; even enabling RT in it isn't that big of a performance hit on RDNA2, AFAIK.
 
Note that BFV is a game known to favor AMD GPUs in all instances

Maybe, but here I'm at everything maxed, including DXR, at 4K60 in 64-player matches on a 2080 Ti and a Zen 2 CPU. No idea what the fuss is about; BFV doesn't run badly, and gaming at 1080p on modern high-end hardware isn't all that interesting outside of esports, perhaps.
 
Maybe, but here I'm at everything maxed, including DXR, at 4K60 in 64-player matches on a 2080 Ti and a Zen 2 CPU. No idea what the fuss is about; BFV doesn't run badly, and gaming at 1080p on modern high-end hardware isn't all that interesting outside of esports, perhaps.

High refresh rate monitors are becoming more and more common. If you're a 60 Hz gamer, this probably won't be an issue. If you're a high refresh gamer, this information is going to be pretty valuable. I don't think it's fair to assume that a person with low- to mid-range hardware is necessarily going to play only at 60 Hz. There are options at 1080p low to try to hit 144. I know because that's what I did. Playing low settings in Battlefield 5 multiplayer is probably the most common setup.
 