Value of Hardware Unboxed benchmarking *spawn

<snip> Particularly because an added complication is that future games may leverage/stress the different pros/cons of various architectures.
I mean, if we're going to dig to the bottom of that particular pedantry hole, we could just say "futureproofing doesn't exist," because there's literally no way for us to truly know what the next interesting technology will depend upon. The next thing might somehow be using all those int4 operations in NPUs to accelerate, I dunno, game AI seems the most apt, right? So if you didn't buy an NPU, then your fancy-schmancy new CPU is basically half speed now, right?

However, to get back to the real world, we can make inferences about what's likely to keep happening in the near future. And that future is going to continue bringing faster GPUs in both raster and raytracing, which will need to be fed by faster CPUs. Which is actually a great segue to:
Does the 5800X3D increasingly pull ahead of the 5800X in 2024 games with an RTX 4090 at 1440p/4K, as reflected by the 720p/1080p RTX 3090 tests in 2022?
It does! Check out the GN review of the 9800X3D, where they include prior X3D parts and their non-X3D brethren:

I linked specifically to the Stellaris portion of the review, because simulation time actually matters in that game and is independent of GPU power. And in Stellaris, the 5800X3D is 13% faster than the 5800X, despite the X3D's 400MHz base / 200MHz boost clock deficit.
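As a quick aside on the arithmetic: Stellaris results are reported as simulation time, where lower is better, so "13% faster" means the 5800X's time divided by the 5800X3D's time comes out around 1.13. A minimal sketch with placeholder timings (not GN's measured numbers):

```python
# Convert lower-is-better simulation times into a "percent faster" figure.
# These timings are illustrative placeholders, not GN's measurements.
time_5800x = 45.0    # seconds to finish the simulated period (hypothetical)
time_5800x3d = 39.8  # seconds on the X3D part (hypothetical)

speedup = time_5800x / time_5800x3d - 1
print(f"5800X3D is {speedup:.1%} faster")  # ~13.1% with these placeholders
```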

Even at 1440p or 4K, there will be scenes in modern games where even the fastest GPUs are still waiting on the CPU to send data. In those scenarios a faster CPU, such as the X3D line, will matter. It probably isn't going to raise the maximum framerate; instead it will drag up the low end, and the average with it.
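To put "dragging up the low end" in concrete terms, here's a rough sketch of how average and 1%-low FPS fall out of a frametime capture. The captures are invented: mostly GPU-paced 8 ms frames, plus occasional CPU-bound spikes that a faster CPU clears sooner:

```python
import statistics

def summarize(frametimes_ms):
    """Return (average FPS, 1% low FPS) for a list of frametimes in ms."""
    avg_fps = 1000 / statistics.mean(frametimes_ms)
    # 1% lows: the FPS implied by the slowest 1% of frames.
    slowest = sorted(frametimes_ms, reverse=True)[: max(1, len(frametimes_ms) // 100)]
    return avg_fps, 1000 / statistics.mean(slowest)

# Hypothetical captures: same GPU (8 ms frames), different CPU spike severity.
slow_cpu = [8.0] * 990 + [25.0] * 10   # CPU-bound hitches in a busy scene
fast_cpu = [8.0] * 990 + [12.0] * 10   # faster CPU shrinks the same hitches

for name, capture in (("slow CPU", slow_cpu), ("fast CPU", fast_cpu)):
    avg, low = summarize(capture)
    print(f"{name}: avg {avg:.0f} fps, 1% low {low:.0f} fps")
```

With these made-up numbers the peak framerate is identical (the 8 ms GPU frames cap both at 125 fps) and the averages are within a couple of fps, but the 1% lows roughly double, which is exactly the shape of improvement you'd expect from a faster CPU at GPU-bound settings.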
 
<snip> However, to get back to the real world, we can make inferences about what's likely to keep happening in the near future. <snip>

We can just focus on the specific scenario mentioned without going into a broader discussion, though. Is there enough data to conclusively say that running multiplatform, graphics-driven SP games at 720p/1080p today translates to meaningful real-world differences in future games at 1440p/4K?

What specific games of that type, in real-world usage terms, actually play that much better on the 5800X3D versus the 5800X at 1440p/4K?

<snip> It does! Check out the GN review of the 9800X3D, where they include prior X3D parts and their non-X3D brethren. <snip>

I don't think you're referring to the same thing. What I'm referring to was the comment that running the predominantly multiplatform, graphics-driven SP games used in GPU reviews at 720p/1080p would showcase how future similar titles would make use of those faster CPUs at resolutions like 1440p/4K with high (or maxed-out) graphics. That is, the 5800X3D would supposedly play, say, Alan Wake 2 much better than a 5800X because it ran Far Cry 5 (2018) something like 30% faster at 720p in TechPowerUp's 2022 test. Instead, TPU's 2024 9800X3D review showed them at basically the same fps using a 4090 at 1440p.
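The disagreement here maps onto the usual bottleneck model: at any moment the delivered framerate is roughly capped by whichever of the CPU or GPU is slower, which is why a large CPU gap at 720p can collapse to nothing once the GPU becomes the cap at 1440p. A toy sketch with invented throughput numbers (not TPU's data):

```python
# Toy bottleneck model: delivered FPS ~= min(CPU-limited FPS, GPU-limited FPS).
# All figures are invented for illustration, not TechPowerUp's results.
cpu_cap = {"5800X": 130, "5800X3D": 170}  # CPU-limited throughput (hypothetical)
gpu_cap = {"720p": 400, "1440p": 120}     # RTX 4090 GPU-limited throughput (hypothetical)

for res, gpu in gpu_cap.items():
    fps = {cpu: min(cap, gpu) for cpu, cap in cpu_cap.items()}
    gap = fps["5800X3D"] / fps["5800X"] - 1
    print(f"{res}: {fps} -> X3D advantage {gap:.0%}")
```

Under this toy model the X3D shows a ~31% lead at 720p and a 0% lead at 1440p, matching the pattern described above; the open question is how often real 1440p/4K scenes dip below the GPU cap into CPU-limited territory.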

Using Stellaris, if anything, furthers my last point that you can already test real usage scenarios to illustrate CPU differences. Stellaris is an old game at this point. You did not need to bench those multiplatform, graphics-driven SP games at 720p to showcase how CPUs would make a difference in Stellaris in 2024 at 1440p/4K; you could've just tested Stellaris at 1440p/4K from the start. Games like Stellaris exist; you can just use those. Using GPU-driven games from your GPU test suite at 720p as a stand-in instead seems like a strange choice.
 