Nvidia's 3000 Series RTX GPU [3090s with different memory capacity]

Looks fine for both GPUs. 3080 is ~2x as fast with the same CPU utilisation.
So a pure path-traced game isn't affected because it puts basically zero load on the CPU, and it isn't even similar to any other games, even RT ones. Given that this issue comes up in CPU-bound scenarios, what is this contributing to the discussion? Discounting scenarios?
 

Hm, when the discussion is about superstitions like the "DX12 driver" and the "Vulkan driver", I think a comparison in a Vulkan game is on topic...
 
6 RT games are compared here at 1080p and 1440p on a 6-core CPU, and the 3090 comes out on top by a significant margin vs the 6900 XT, despite the toll RT takes on CPU performance.

Yeah, Nvidia is much stronger at RT. These tests also show the higher CPU usage Nvidia causes, sometimes dramatically so; Legion is often double the CPU usage. Frame times are also not great on the 3080 in comparison. Shades of SLI microstutter.

Hm, when the discussion is about superstitions like the "DX12 driver" and the "Vulkan driver", I think a comparison in a Vulkan game is on topic...

HUB benchmarked R6 Siege, which is a Vulkan title, in the link I previously posted. Same result.
 

And when you compare the numbers to a video released in November, his new numbers are down from this one.

Rainbow Six Siege, old vs new (fps):
1600: 291 vs 263 - +10.6%
2600: 325 vs 308 - +5.5%
3600: 415 vs 393 - +5.5%
5600: 506 vs 518 - -2.4%
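
For reference, here is how those percentages appear to be derived (the old run expressed relative to the new one). This is just a throwaway Python sketch using the figures quoted above, and it reproduces the listed values to within rounding:

Code:
# Percent difference of the older (November) Rainbow Six Siege results
# relative to the newer run, per CPU (presumably Ryzen 5 parts).
# Figures copied from the post above.
old_vs_new = {
    "1600": (291, 263),
    "2600": (325, 308),
    "3600": (415, 393),
    "5600": (506, 518),
}

for cpu, (old_fps, new_fps) in old_vs_new.items():
    change = (old_fps / new_fps - 1) * 100  # old relative to new
    print(f"{cpu}: {old_fps} vs {new_fps} fps -> {change:+.1f}%")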
 
Higher fps causes higher CPU usage! nothing out of the ordinary here.
Some 10-15% higher fps does not cause double the CPU usage.


And when you compare the numbers to a video released in November, his new numbers are down from this one.

Rainbow Six Siege, old vs new (fps):
1600: 291 vs 263 - +10.6%
2600: 325 vs 308 - +5.5%
3600: 415 vs 393 - +5.5%
5600: 506 vs 518 - -2.4%

The only guesses I could hazard are changes to the test sequence or to the motherboard/RAM setup. They have more games and tests coming in the next couple of weeks.
 
You can compare the numbers from this video to the one above, too:

A 6800 with an i5-10400F doesn't deliver more performance than a 3090 with the same processor. In Rainbow Six, for example, the 3090 is 66 fps or 17.6% faster.
 

Why would you expect it to? The i5-10400F is an exceptionally fast gaming CPU; the issues are with much slower CPUs (for games) such as Zen and Zen+. Hardware Unboxed didn't see any issues with the Zen 2 based R5 3600, which is slower than the i5-10400F in many games.

I would be interested in seeing results for the 8-core Zen and Zen+ parts. I would assume they would not suffer like the 6-core parts, due to Nvidia's even core loading?
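
If anyone wants to eyeball that, one low-effort way is to log per-core utilisation while the game runs and see how evenly the load is spread across cores. A minimal Python sketch using psutil; the sampling interval and duration are arbitrary choices, not anything HUB or Igor used:

Code:
# Sample per-core CPU utilisation once per second while a game is running,
# to see whether load is spread evenly across cores or piled onto a few.
import psutil

DURATION_S = 60  # arbitrary sampling window

for _ in range(DURATION_S):
    per_core = psutil.cpu_percent(interval=1, percpu=True)  # blocks for ~1 s
    spread = max(per_core) - min(per_core)
    print(" ".join(f"{p:5.1f}" for p in per_core),
          f"| busiest-idlest spread: {spread:5.1f}%")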
 
Perhaps another good reason to normalize a test at a high fps cap that both GPUs are capable of, and see what the CPU usage is?
Along those lines, a whole suite of new benchmarks would have to be designed, because we are no longer necessarily testing just the GPU or the CPU; we are now also testing driver versions.
Typically we use the latest version of the driver to run a game, but that may no longer be desirable given a specific combination of CPU and GPU.
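
As a rough illustration of that kind of normalized test: assuming you have overall CPU-utilisation samples logged (one value per line) for each card while the game runs at the same fps cap, comparing the runs is trivial. The file names and log format here are invented for the example:

Code:
# Compare average and tail CPU utilisation between two GPU/driver setups,
# both captured with the game locked to the same fps cap.
# The log files are hypothetical: one utilisation percentage per line.
import statistics

def load_samples(path):
    with open(path) as f:
        return [float(line) for line in f if line.strip()]

runs = {
    "6900 XT":  load_samples("cpu_usage_6900xt_144fps_cap.txt"),
    "RTX 3090": load_samples("cpu_usage_3090_144fps_cap.txt"),
}

for name, samples in runs.items():
    p95 = statistics.quantiles(samples, n=20)[-1]  # ~95th percentile
    print(f"{name}: mean {statistics.mean(samples):.1f}%, p95 {p95:.1f}%")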
 
Some 10-15% higher fps does not cause double the CPU usage.
What 10-15% fps? The 3090 is anywhere from 24% to 36% faster in the Watch Dogs test, and that's at 1080p, mind you. Every engine handles high fps differently, so it's completely normal to require double the CPU usage for an extra third of the fps.

Again, nothing out of the ordinary here.
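
Back-of-envelope, CPU load is roughly fps multiplied by the CPU time spent per frame, so doubling the CPU usage at ~1/3 more fps implies the per-frame CPU cost itself rose by about 50%; whether that counts as normal engine behaviour or as driver overhead is exactly what's being argued. Illustrative numbers only:

Code:
# Illustrative only: CPU load ~ fps * CPU milliseconds spent per frame.
# Hypothetical card A: 100 fps at 4 ms of CPU work per frame.
# Hypothetical card B: +33% fps, but 50% more CPU work per frame.
fps_a, cpu_ms_a = 100, 4.0
fps_b, cpu_ms_b = 133, 6.0

load_a = fps_a * cpu_ms_a / 10  # percent of one core (1000 ms == 100%)
load_b = fps_b * cpu_ms_b / 10

print(f"A: {load_a:.0f}% of a core, B: {load_b:.0f}% of a core "
      f"({load_b / load_a:.2f}x the CPU usage for {fps_b / fps_a:.2f}x the fps)")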
 
I just listened to a bit of Hardware Unboxed's latest video and, to be honest, I find their perspective a bit weird. I think reviewers are out of touch with how people actually use their hardware. They refer to that BFV video where the guy upgrades from an R9 390 to a 1660 Ti, and say something like, "I doubt he was trying to play at 1080p low, and was just doing that to highlight the issue." Umm, that's how that game is played. Most players play on low to get the highest frame rates and the best visibility. Battlefield V multiplayer should be tested at low settings because that's how people actually play it. It sort of makes me question how many reviewers actually play games enough to know which benchmarks they should be running.

Anyway, it sounds like it'll be another week or two before they have a follow-up video with testing data, but they're working on it.

Edit: Lol ... comment on the youtube video.

[Attached screenshot: upload_2021-3-15_11-46-26.png]
 
6 RT games are compared here at 1080p and 1440p on a 6-core CPU, and the 3090 comes out on top by a significant margin vs the 6900 XT, despite the toll RT takes on CPU performance.
Does CPU usage go up significantly when enabling RT in these titles, compared to RT off?
 
In non-AMD-sponsored games, CPU usage goes up with ray tracing. That makes sense, because the acceleration structures (AS) have to be built and more geometry has to be processed.
 
Igor's Lab doesn't fully agree with HUB's conclusion on driver overhead.

https://www.igorslab.de/en/driver-o...on-rx-6900xt-gaming-x-and-your-own-drivers/7/

I didn’t expect these findings (or maybe I did?) and they even contradict a little bit the benchmarks of the colleagues from Hardware Unboxed, who observed the same phenomenon, but also tested with different platforms and older generations. Indeed, a much weaker card tested for comparison in the form of the MSI Radeon RX 5700XT Gaming X Trio was unable to outperform the GeForce RTX 3090 even with only 2 or 4 cores. So it’s not that extreme.

According to these tests, I wouldn’t attribute the recognizable behavior to a poorly optimized driver and a general overhead, but rather to disadvantages of the NVIDIA cards in asynchronous compute. Depending on the optimization of an engine, this might explain the difference between DirectX 12 games.

https://www.igorslab.de/en/driver-o...on-rx-6900xt-gaming-x-and-your-own-drivers/2/
 

But Igor is using a Zen 3 part. The Zen 3 parts have much better per-core performance than Zen 2, which even with 6 cores doesn't display these issues. Gaming benchmarks have shown that even the Zen 3 based R5 5600X is better in most games than the previous Zen 2 based R9 3950X, due to its faster, more efficient cores and much quicker cache.

I wonder if cache latency is a factor here? Zen 1 parts are by far the worst affected, with Zen+ next, while the 4-core i3 (10th gen, I think) was less of a bottleneck for the Nvidia cards. Something related to distributing driver workloads across multiple cores?
 