Current Generation Games Analysis Technical Discussion [2023] [XBSX|S, PS5, PC]

Just locking the game at 60fps without RT enabled is a tough task, and to lock it at 120fps you'd better have the absolute fastest CPU.

High refresh with RT enabled? Forget about it......

No RT

[Image: cpu no rt.png (CPU benchmark chart, RT off)]

With RT

[Image: CPU.png (CPU benchmark chart, RT on)]
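For reference, the budgets involved: a locked framerate means the slowest CPU frame, not the average, has to fit in 1000 ms divided by the target. A trivial C++ sketch of the arithmetic:

Code:
#include <cstdio>

int main() {
    // A locked framerate means the slowest CPU frame, not the average,
    // must fit inside the budget of 1000 ms / target fps.
    for (double fps : {60.0, 120.0}) {
        double budget_ms = 1000.0 / fps;
        printf("%.0f fps lock: every frame's CPU time must stay under %.2f ms\n",
               fps, budget_ms);
    }
    return 0;
}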
 
I've been thinking that, but what puzzles me is that the 6700 XT/7900 XTX are fully loaded whereas the 4090/3060 Ti aren't, at least according to the performance metrics. The 7900 XTX also seems to easily outperform the 4090 as long as you don't turn on RT. A friend of mine has a 5800X3D + 4090 + 32GB of RAM and this game just flies on his computer from what he's told me: 120fps in Hogwarts at Ultra settings/RT off WITHOUT frame generation, which I find hard to believe, but he assures me that's what he's getting. He must have toggled DLSS on; there's no way he's getting that at native 4K.

Without more data points, and just speculating from that anecdotal data, it could be that the workload (with RT off) fits rather "optimally" into the 5800X3D's extra cache.

The RT workload might be better tested on a 13700K/13900K with very fast DDR5 (at least 6400, if not 7200), as the 7700X certainly isn't that much slower otherwise in broader gaming tests.
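That hypothesis is easy to demo in isolation with a pointer-chasing microbenchmark (a minimal C++ sketch; the working-set sizes and access pattern are my own illustration, nothing measured from the game). The 5800X3D has 96 MB of L3 versus 32 MB on the 7700X, so working sets between those sizes are exactly where the V-Cache pays off:

Code:
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

// Time one random-access pointer chase through a working set of `bytes`.
// If the set fits in L3, per-access latency is a few ns; once it spills
// to DRAM it jumps by an order of magnitude.
static double chase_ns_per_access(size_t bytes) {
    size_t n = bytes / sizeof(uint32_t);
    std::vector<uint32_t> next(n);
    std::iota(next.begin(), next.end(), 0u);
    // Sattolo's algorithm builds a single-cycle permutation, so the chase
    // visits every element before repeating (no short loops, no prefetching).
    std::mt19937_64 rng{42};
    for (size_t i = n - 1; i > 0; --i)
        std::swap(next[i], next[std::uniform_int_distribution<size_t>(0, i - 1)(rng)]);

    uint32_t idx = 0;
    const size_t steps = 20'000'000;
    auto t0 = std::chrono::steady_clock::now();
    for (size_t i = 0; i < steps; ++i) idx = next[idx];
    auto t1 = std::chrono::steady_clock::now();
    if (idx == 0xFFFFFFFFu) puts("");  // keep idx live so the loop isn't elided
    return std::chrono::duration<double, std::nano>(t1 - t0).count() / steps;
}

int main() {
    // 16 MB fits both CPUs' L3, 64 MB fits only the X3D's, 256 MB fits neither.
    for (size_t mb : {16, 64, 256})
        printf("%3zu MB working set: %5.1f ns per access\n",
               mb, chase_ns_per_access(mb * 1024ull * 1024ull));
    return 0;
}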
 
Oof, sorry for the long list of bullet points, but the tl;dr is that the 3060 Ti and 4090 struggle to be properly utilized. The 4090 especially has extremely low usage, to the point of being outperformed by a 7900 XTX without RT. The 6700 XT and 7900 XTX exhibit no such problem and consistently outperform their NVIDIA counterparts at similar settings and resolutions.
The low-level renderer is optimized for the consoles, which results in worse performance on modern nVidia GPUs.
And when your renderer is single-core limited, ray tracing will just destroy performance...
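A toy model of why (illustrative millisecond costs, not anything profiled from this game): frame rate is capped by the slowest stage, and if all CPU-side submission lives on one core, RT's extra CPU work (BVH/TLAS maintenance, extra dispatches) stacks straight onto it instead of spreading across cores.

Code:
#include <algorithm>
#include <cstdio>

int main() {
    // Hypothetical per-frame costs in milliseconds (purely illustrative).
    double render_thread_ms = 9.0;  // draw submission pinned to one core
    double gpu_ms = 12.0;           // GPU work including RT
    double rt_cpu_ms = 6.0;         // CPU side of RT: BVH/TLAS updates etc.

    // Frame time is the slowest stage; fps = 1000 / frame time.
    double single_core = std::max(render_thread_ms + rt_cpu_ms, gpu_ms);
    double four_way    = std::max((render_thread_ms + rt_cpu_ms) / 4.0, gpu_ms);

    printf("single-core limited: %.0f fps (CPU-bound)\n", 1000.0 / single_core); // ~67
    printf("spread over 4 cores: %.0f fps (GPU-bound)\n", 1000.0 / four_way);    // ~83
    return 0;
}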
 
Results from GameGPU show otherwise: using a high-end CPU in Hogwarts Legacy at max RT settings at 4K, the 4090 runs twice as fast as the 7900 XTX, while the 4080 is 60% faster and the 3080 Ti is also 20% faster.

4090: 45 fps
4080: 35 fps
3080 Ti: 27 fps
7900 XTX: 22 fps
6900 XT: 15 fps
2080 Ti: 14 fps


Results from PCGH show the 4090 about 2.6x faster than the 7900 XTX in ray tracing, while being 20% faster in rasterization.
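Those percentages line up with the fps list (quick arithmetic against the 7900 XTX baseline):

Code:
#include <cstdio>

int main() {
    // GameGPU 4K max-RT numbers quoted above, normalized to the 7900 XTX.
    struct Result { const char* gpu; double fps; };
    const Result results[] = {
        {"4090", 45}, {"4080", 35}, {"3080 Ti", 27},
        {"7900 XTX", 22}, {"6900 XT", 15}, {"2080 Ti", 14},
    };
    const double baseline = 22.0;  // 7900 XTX
    for (const Result& r : results)
        printf("%-9s %4.0f fps  (%.2fx the 7900 XTX)\n", r.gpu, r.fps, r.fps / baseline);
    return 0;  // prints 2.05x, 1.59x, 1.23x, 1.00x, 0.68x, 0.64x
}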

 
Wow... turning on RT with ultra textures at 1440p increases VRAM use by nearly 2GB. I thought Doom Eternal had a large RT VRAM cost, but this game :eek:

Also, a comparison of all four texture settings... spot the difference?

[Image: Texturen Raten.png (the four texture settings side by side)]

According to PCGH, the VRAM difference between low and ultra is 2GB.
 
So, the promise of DX12 was to reduce the CPU dependency. Works really well on a modern nVidia GPU: https://www.dsogaming.com/news/amd-...r-than-the-nvidia-rtx4090-in-hogwarts-legacy/

This API is worse than DX10. Basically a 1:1 port from the console.

And yet we have benchmarks showing that the 4090 in some cases monsters the 7900 XTX.

But with the old CPU they used in that article, no wonder there was such a difference; people running these $1,100+ GPUs aren't using 4.5-year-old CPUs.
 
With RT in 4K. But without it (and even with it) the whole renderer is broken on nVidia GPUs. That is nothing new; older games have shown the same behaviour. But as with RT, these developers do not seem to learn from the past...
 
In that article they used a 4.5-year-old 9900K... people with 4090s and 7900 XTXs don't use CPUs that old.

With a more modern CPU, the 4090 wins pretty much every test.
 
That shouldn't matter. DX12 is from 2014; the API was designed to reduce CPU bottlenecks.

I went back to the blog post from 2014:
Direct3D 12 enables richer scenes, more objects, and full utilization of modern GPU hardware. And it isn’t just for high-end gaming PCs either – Direct3D 12 works across all the Microsoft devices you care about. From phones and tablets, to laptops and desktops, and, of course, Xbox One, Direct3D 12 is the API you’ve been waiting for.

What makes Direct3D 12 better? First and foremost, it provides a lower level of hardware abstraction than ever before, allowing games to significantly improve multithread scaling and CPU utilization. In addition, games will benefit from reduced GPU overhead via features such as descriptor tables and concise pipeline state objects. And that’s not all – Direct3D 12 also introduces a set of new rendering pipeline features that will dramatically improve the efficiency of algorithms such as order-independent transparency, collision detection, and geometry culling.
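Concretely, the "multithread scaling" part means each worker thread records its own command list against its own allocator, and only submission is serialized. A conceptual C++ sketch of that pattern (plain std::thread and a stand-in CommandBuffer struct instead of real ID3D12GraphicsCommandList objects, since a runnable device setup wouldn't fit in a post):

Code:
#include <cstdio>
#include <thread>
#include <vector>

// Stand-in for ID3D12GraphicsCommandList plus its per-thread allocator.
struct CommandBuffer { std::vector<int> draws; };

int main() {
    const int kThreads = 4, kDrawsPerThread = 1000;
    std::vector<CommandBuffer> lists(kThreads);
    std::vector<std::thread> workers;

    // Recording scales across cores: each thread owns its buffer outright,
    // so there is no shared driver lock (the DX11-era serialization point).
    for (int t = 0; t < kThreads; ++t)
        workers.emplace_back([&lists, t] {
            for (int d = 0; d < kDrawsPerThread; ++d)
                lists[t].draws.push_back(t * kDrawsPerThread + d);
        });
    for (std::thread& w : workers) w.join();

    // Submission is the one serialized step (ExecuteCommandLists in D3D12).
    size_t total = 0;
    for (const CommandBuffer& cb : lists) total += cb.draws.size();
    printf("submitted %zu draws recorded on %d threads\n", total, kThreads);
    return 0;
}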
 
Amazing stuff. Then consider me done with Eurogamer if that's the case.

Thank you for finding this.
If you know where to look, Hogwarts Lewdgacy was also released today, on launch day; imho a game with a much superior "story". This is the one Eurogamer will be covering. :)
 
It seems Hogwarts isn't being covered at all at EG. Check the comments on this unrelated article:

[Attachment 8288: screenshot of the comments under an unrelated Eurogamer article]
As long as it's for individual, principled reasons, I guess that's how it will be. There are plenty of other devs that will check PC performance on a range of cards.

Personally speaking, I'm not gonna shame anyone for whatever they decide to do on this game; it's within their rights.

I don't think any other YouTube channel will get to the same level of detail or as nuanced a perspective on the technicals as DF, though.

I'm much more interested in seeing how they cover Metroid Prime, my favorite GameCube game ever. That has to be a John video.
 
Results from GameGPU show otherwise: using a high-end CPU in Hogwarts Legacy at max RT settings at 4K, the 4090 runs twice as fast as the 7900 XTX, while the 4080 is 60% faster and the 3080 Ti is also 20% faster.

4090: 45 fps
4080: 35 fps
3080 Ti: 27 fps
7900 XTX: 22 fps
6900 XT: 15 fps
2080 Ti: 14 fps


Results from PCGH show the 4090 about 2.6x faster than the 7900 XTX in ray tracing, while being 20% faster in rasterization.

The 4090 is sitting on its laurels, I guess.

Wish Eurogamer also did a technical review. :cry: I am gonna miss that a lot, 'cos I like the comparisons of native vs. pixel reconstruction techniques and so on, and I wanted to know how this game behaves with XeSS and the like.

I have a good ol' 3700X, and I wonder what's with AMD not improving single-thread performance that much on the new 7700X. I also wish there was a demo so I could test performance myself. According to this guy, the game runs perfectly on a CPU very similar to mine (the 3900X) and the A770 (which is included in the recommended specs).

 
I mean, yeah. If a game is in dire need of a technical analysis, it's this one. John said on the DF Patreon that Oliver might do it, because he expressed interest in the game. It's a shame Alex isn't the one doing the coverage; we need his expertise.
 