GPU Ray Tracing Performance Comparisons [2021-2022]

A fewer-core CPU can absolutely outperform a CPU with a higher core count. The CPU with fewer but higher-IPC cores can get more done per core per clock. It's also the added features that help performance, quite greatly in some cases. That i3 may have half the core count, but it can do roughly double the math on each core, at much higher clocks, with more cache, a more advanced architecture and hardware block assistance.
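As a toy sketch of that trade-off (every number below is invented for illustration, not a benchmark of any real chip): per-thread throughput scales roughly with IPC × clock, so which chip wins depends on how many cores the game actually keeps busy.

```python
# Back-of-the-envelope throughput model. All numbers are made up for
# illustration; they are not benchmark results for any real CPU.
def chip_throughput(cores, ipc, clock_ghz, usable_fraction=1.0):
    # Billions of instructions per second, assuming the workload spreads
    # across 'usable_fraction' of the available cores.
    return cores * usable_fraction * ipc * clock_ghz

# Hypothetical "i3-like" part: 4 fast, high-IPC cores.
i3 = chip_throughput(cores=4, ipc=6.0, clock_ghz=4.3)

# Hypothetical "console-like" part: 8 slower, lower-IPC cores.
console_full = chip_throughput(cores=8, ipc=4.5, clock_ghz=3.5)

# Same console chip, but with a game that only keeps 4 threads busy.
console_4_threads = chip_throughput(cores=8, ipc=4.5, clock_ghz=3.5,
                                    usable_fraction=0.5)

# If the game scales to all 8 cores, the wider chip wins; if it only
# loads half of them, the fewer-but-faster cores come out ahead.
```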

Obviously, if a game is coded to take advantage of an 8-core CPU (or 6 in the PS5's case), then the game will easily perform better on a CPU with more cores, and vice versa.
So no, you can't just compare a native PS5 AAA game, because that game will be optimized for the slow-clocked 6.5 cores the PS5 makes available to games.

However, I'm sure that i3 will hold its own most of the time, if not outperform the PS5 CPU in most multiplatform games (it already does at the moment).

Where did I talk about the PS5 CPU cores? I was talking about the more powerful 3700X. This has nothing to do with exclusive games or multiplatform games, but with the quality of the multithreading code. @Rootax complained about being CPU limited with only two cores being fully used. I explained that it is because the game is not optimised enough. If the game engine were better at multithreading, he would have a much better framerate, maybe 60 fps. The 12100F would go faster too, but I don't think he would care; he has a 3700X.

Id Tech is great and Doom Eternal is multiplatform...
 
3700x isn’t a good gaming cpu. You can’t type your way out of that.

It's equal to the PS5 CPU, so it's the greatest gaming CPU.


The 3700X's cores aren't 'more powerful' in that sense; it's the same architecture and the same number of cores (just fewer available to devs on PS5). The PS5's CPU is lower clocked, has its cache slashed in half, and has some other instruction set reductions and tweaks. Its IPC should be roughly the same, though. I'd bet the 3700X is faster, but that's mainly due to clocks, and secondly more cache.

The discussion was 12100F vs 3700X, where the latter is the worse gaming CPU by definition, today and in the future for most games.

id Tech's Doom Eternal is indeed a great showpiece for hardware benchmarks, and the 12100F destroys the aging Zen 2 3700X in that very game.
 
It's that Chris seems to claim that the 3700X is actually a very good gaming CPU due to its higher core count, which is false.
No, I did not say this. I said that the 3700X's performance in this particular game is very low because the multithreading code is bad. If it were better, the game would run better on the 3700X and the 12100F. The 12100F is not a dual-core CPU... it is a 4-core/8-thread CPU; it too would take advantage of better multithreading.
 

And if a particular game is optimized for the 12100F, it will exploit its much higher IPC, much faster clocks, larger cache, hardware blocks, etc. It would run poorly on the 3700X.

I think you need a reality check in your high end bubble. It's faster than most processors used for gaming.

The 2019 Zen 2 3700X is still capable, more so than the PS5's CPU, but it's nowhere near what more modern CPUs are doing today, at a lower cost to begin with. The i3 ain't high-end, far from it; it's a budget CPU outperforming the 3700X in virtually every game today.
 

This is not how it works; no one optimizes game code for a specific CPU. You have two types of engine: an OOP engine with bad multithreading, and a data-oriented engine built around tasks with good multithreading, like the id Tech engine.

For the same game, developed on an OOP engine versus an ECS engine, it will have a better framerate as an ECS engine, and with current CPUs it will scale from 4 to 16 cores very well.
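A toy sketch of the layout difference being described (not a real engine; the `Entity` class and array names are just illustrative). The OOP style scatters state across objects, while the data-oriented style keeps components in flat parallel arrays, so the update becomes one tight loop that is easy to split into independent tasks across cores:

```python
# OOP style: each entity is an object; the update walks heterogeneous
# objects and calls a method on each one.
class Entity:
    def __init__(self, x, vx):
        self.x, self.vx = x, vx

    def update(self, dt):
        self.x += self.vx * dt

def oop_update(entities, dt):
    for e in entities:
        e.update(dt)

# Data-oriented style: components live in flat parallel arrays ("struct
# of arrays"); the update is one tight loop over contiguous data, which
# is what makes it easy to chunk into independent per-core tasks.
def dod_update(xs, vxs, dt):
    return [x + vx * dt for x, vx in zip(xs, vxs)]

entities = [Entity(float(i), 1.0) for i in range(4)]
oop_update(entities, dt=0.5)
xs = dod_update([float(i) for i in range(4)], [1.0] * 4, dt=0.5)
# Both layouts compute the same positions; only the memory layout and
# the parallelization opportunities differ.
```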
 
I'm not surprised you're CPU limited with that CPU.
Do you have NASA's CPU then? Aaaaaaah, I didn't know.

The 3700X is okay for most cases. Intel CPUs usually perform better at games. However, IMO, the main disadvantage of Ryzen CPUs (at least from my experience with the old Ryzen 1500X and now with the 3700X) is the sudden spikes in temperature and fan RPM. They suddenly get hot and then you get a few seconds of crazy, and loud, CPU fan spinning. In fact, I undervolted my CPU and locked it to 4.0GHz to avoid that. That makes it perform even worse than stock compared to the 12100F.
 
What was the point of this comment?
Is it worth it to achieve 45fps? I mean, why don't you lock it at 30? You have plenty of room. It's like swimming out and drowning close to the shore. As for your question, the point is that the 3700X is an okay CPU.
 
1440p with all RT enabled with DLSS ultra performance mode and my lowly 12100f is GPU limited.

So my 3060ti is limiting my frame rate and not my CPU.
 

Attachment: witcher3_2022_12_18_23_56_22_055.jpg (3.7 MB)
kinda impressive if you ask me. I use DLSS Auto but it doesn't matter. At 4K I enabled FSR 2 ultra performance but it looked very bad, worse than in your screengrab for sure. I am going to enable SMT in the BIOS and see what happens, just out of curiosity.
 
Yeah, I tried FSR 2.0 to try to get to 60fps, but it looked like ass compared to DLSS.
 
But that's not the start of the "argument". I just said it shouldn't tank like that in W3 RT in big cities. More recent CPUs are affected too, since they are very underutilized.

A lot of games definitely don't scale well to eight cores and will have a single thread that bottlenecks if you're trying to push high frame rates. Clock speed still tends to go a long way in a lot of games on PC. I think it can be especially hard on PC to come up with good threading models for games, relative to consoles. Consoles have native threading libraries that are more optimal, or at least PlayStation did.
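The single-thread bottleneck above can be sketched with Amdahl's law; the 20 ms frame and 40% serial fraction below are invented numbers purely for illustration:

```python
# Amdahl's-law sketch of a single-thread bottleneck in a game frame.
# The 20 ms single-core frame time and 40% serial fraction are made-up
# illustrative values, not measurements from any game.
def fps(frame_ms_one_core, serial_fraction, cores):
    # The serial part of the frame runs on one thread no matter what;
    # only the remainder shrinks as core count grows.
    frame_ms = frame_ms_one_core * (serial_fraction +
                                    (1.0 - serial_fraction) / cores)
    return 1000.0 / frame_ms

# With 40% of the CPU work stuck on one thread, the frame rate saturates
# toward a hard ceiling of 1000 / (20 * 0.4) = 125 fps, no matter how
# many cores you add -- extra cores stop helping long before 16.
rates = {n: fps(20.0, 0.4, n) for n in (1, 4, 8, 16)}
```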

I'm basically a 120+ fps gamer, and I run into CPU limitations all the time; it's the main reason I won't use ray tracing. I have a very unbalanced system in that I'm pairing a 3080 with a 5600X (was a 3600X), and it just hits a wall with ray tracing on. I can easily crank every setting to ultra and get 60fps with ray tracing, but if I start mixing ray tracing with other high and medium settings, or with DLSS, I start to see areas in games where I'll hit a hard CPU limit. Like when I posted earlier playing Control, back when I had my 3600X: that main hub area with the inverted pyramid was tremendously limited with ray tracing on, but in other areas of the game I'd easily push very high frame rates. It's just a less consistent experience.

Unfortunately I would have been much better off buying an intel cpu to get the raw clock speed and the easy memory overclocking, but I didn't and here I am lol.
 
In our tuning server, I tested my old 5800X3D vs ADL, and it was very interesting to see that when RT is used, the CPU suffered: L3 misses kicked up heavily, it was back to relying on clock speed/IPC/memory, and ADL would walk away comfortably. The more RT-heavy the app, the wider the gap. It was still better than a vanilla 5800X, since the L3 hit-rate advantage wasn't completely wiped out, but it was easy to see in the data.
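The effect described above can be sketched with the standard average-memory-access-time (AMAT) formula; the latencies and hit rates below are illustrative guesses, not measurements of any specific CPU:

```python
# Average memory access time (AMAT) sketch. The 10 ns L3 and 70 ns DRAM
# latencies, and both hit rates, are invented illustrative values.
def amat_ns(l3_hit_rate, l3_ns=10.0, dram_ns=70.0):
    # Hits are served at L3 speed; misses fall through to DRAM.
    return l3_hit_rate * l3_ns + (1.0 - l3_hit_rate) * dram_ns

raster_like = amat_ns(0.90)  # cache-friendly workload: near-L3 latency
rt_like = amat_ns(0.50)      # RT traversal thrashing the L3: near-DRAM

# As the hit rate collapses, average latency heads toward DRAM latency,
# the big-L3 advantage shrinks, and raw clock/IPC/memory speed dominate.
```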

We will test zen4 x3d when it’s out as well and maybe this time make pretty charts lol
 

We were talking about inside big cities

I just want 30/40 in big cities in W3 RT; that was the whole point at first. Of course you'll hit CPU/code limitations at some point. But we were talking about a precise problem, one even a 3700X shouldn't have.
 