Current Generation Games Analysis Technical Discussion [2023] [XBSX|S, PS5, PC]

No, it is a DX12 problem because this is enforced by DX12 itself.
Are there any engines that have been built from the ground up for DX12 or are they all DX11 engines that have had DX12 support bolted on?

If it's the latter then it's not DX12's fault.
Callisto Protocol has the same raytracing implementation on the UE4 engine. It was only optimized for consoles and then got ported to PC without any adjustments.

That's a developer issue.
 
It has nothing to do with DX11. Prior to the current generation, every raytracing implementation was optimized for Turing (nVidia). All of these run and scale very well. Since developers started implementing raytracing on consoles too, we are seeing this unoptimized mess. Now raytracing isn't optimized for nVidia anymore, and performance gets lost emulating unoptimized game engines.

Normally the most efficient and optimized approach would be used. But these consoles are so outdated from a GPU perspective that we can't even achieve Turing-level optimization.

I can play Portal RTX just fine - a real path tracing game. You can make real-time CGI movies with path tracing on a 4090. But playing Hogwarts at more than 100 FPS in 1080p with raytracing? Barely possible.
 
The "API" is in the engine. That was one marketing point of low level APIs. The engine is driving the GPU.
So it comes back to the developer being ultimately responsible.

Nvidia says, under their Do list, "Accept the fact that you are responsible for achieving and controlling GPU/CPU parallelism"
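To make that concrete, here's a minimal plain-C++ sketch of the pattern that quote is pointing at (record in parallel, submit in order). None of this is real D3D12 code; the types and function names are invented stand-ins, but the threading shape is exactly what the API leaves to the application rather than the driver:

```cpp
// Conceptual sketch only (plain C++, no real D3D12 calls): the pattern DX12 expects
// the *application* to implement -- record command lists on worker threads in
// parallel, then submit them in a fixed order from one thread. `Command` and
// `record_scene_chunk` are made-up names for illustration.
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

struct Command { std::string debug_name; };    // stand-in for recorded GPU work
using CommandList = std::vector<Command>;

// Each worker records its own command list; no shared mutable state, so no locks.
CommandList record_scene_chunk(int chunk_index) {
    CommandList list;
    list.push_back({"set_pipeline_state (chunk " + std::to_string(chunk_index) + ")"});
    list.push_back({"draw_objects (chunk " + std::to_string(chunk_index) + ")"});
    return list;
}

int main() {
    constexpr int kWorkerCount = 4;
    std::vector<CommandList> lists(kWorkerCount);
    std::vector<std::thread> workers;

    // Parallel record: the CPU-side parallelism the driver no longer does for you.
    for (int i = 0; i < kWorkerCount; ++i)
        workers.emplace_back([&lists, i] { lists[i] = record_scene_chunk(i); });
    for (auto& t : workers) t.join();

    // Ordered submit from a single thread (ExecuteCommandLists would go here in real code).
    for (const auto& list : lists)
        for (const auto& cmd : list)
            std::printf("submit: %s\n", cmd.debug_name.c_str());
}
```

If the engine doesn't do this itself, nothing in DX12 will do it on its behalf, which is the whole point of the "Do" item.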

 
Now raytracing isn't optimized for nVidia anymore, and performance gets lost emulating unoptimized game engines.
That's not the situation in Hogwarts; it scales pretty well when GPU bound. Hell, a 2080 Ti beats all AMD GPUs in RT in this game, which shouldn't be possible. However, the game has a problem when a certain CPU-limited area (Hogsmeade) is tested: NVIDIA GPUs quickly get CPU limited there at anything below 4K. It's a bug, and it should be fixable.
 
No, this is not a bug. This is raytracing left unoptimized for nVidia hardware. RDNA2 and RDNA3 are at pre-Turing level when it comes to raytracing. Callisto Protocol shows the same behaviour. High FPS is not achievable on nVidia GPUs because the renderer can't schedule the workload in an efficient way.
 
It has nothing to do with DX11. Prior to the current generation, every raytracing implementation was optimized for Turing (nVidia).
Which often included low ray counts per pixel and reduced objects within the BVH, this was evidenced and seen in BFV's patches which increased RT performance by decreasing RT quality. A developer optimisation issue, not a DX12 one.
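To put a shape on "reduced objects within the BVH", here's a rough plain-C++ sketch, purely illustrative (the struct fields and the threshold are made-up assumptions, not anything from DICE), of the kind of heuristic that keeps small or distant objects out of the acceleration structure entirely, so rays never pay for them:

```cpp
// Conceptual sketch (plain C++, not engine or DXR code) of culling objects from the
// BVH: anything whose projected size falls under a threshold is simply never added
// to the acceleration structure, so no ray can ever hit it and it costs nothing.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Instance {
    const char* name;
    float bounding_radius;     // world-space size of the object
    float distance_to_camera;  // how far away it currently is
};

// Rough screen coverage estimate: radius over distance (bigger = more visible).
bool include_in_bvh(const Instance& inst, float min_projected_size) {
    float projected = inst.bounding_radius / std::max(inst.distance_to_camera, 0.001f);
    return projected >= min_projected_size;
}

int main() {
    std::vector<Instance> scene = {
        {"building", 20.0f, 50.0f},
        {"soldier",   1.0f, 30.0f},
        {"leaf",      0.05f, 15.0f},   // the kind of object that gets culled
    };
    std::vector<Instance> bvh_contents;
    for (const auto& inst : scene)
        if (include_in_bvh(inst, /*min_projected_size=*/0.01f))
            bvh_contents.push_back(inst);

    for (const auto& inst : bvh_contents)
        std::printf("in BVH: %s\n", inst.name);   // the leaf never shows up
}
```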
All of these run and scale very well. Since developers started implementing raytracing on consoles too, we are seeing this unoptimized mess. Now raytracing isn't optimized for nVidia anymore, and performance gets lost emulating unoptimized game engines.
This is a developer issue, most recently evidenced by The Witcher 3 ray tracing update using higher ray counts per pixel on PC than the console versions. The game has since been updated with a lower ray tracing setting that reduces rays per pixel to the same level as consoles and offers higher performance on PC as a result. So a developer optimisation issue, not a DX12 one.
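And for what a "rays per pixel" setting actually controls, a tiny plain-C++ sketch (preset names and fractions are invented examples, not the actual Witcher 3 or console values): the ray budget is just a fraction of the output resolution, so a console-equivalent preset shoots far fewer rays per frame for the denoiser to work with:

```cpp
// Conceptual sketch (plain C++, illustrative only): a ray-count budget expressed as
// a fraction of the output resolution. Lower presets shoot fewer rays per frame;
// the preset names and fractions below are made-up examples.
#include <cmath>
#include <cstdio>

struct RtPreset {
    const char* name;
    float rays_per_pixel;   // e.g. 0.4 = rays for 40% of the pixels each frame
};

long long rays_per_frame(int width, int height, const RtPreset& preset) {
    return std::llround(static_cast<double>(width) * height * preset.rays_per_pixel);
}

int main() {
    const RtPreset presets[] = {
        {"console_equivalent", 0.25f},
        {"pc_high",            0.40f},
        {"pc_ultra",           1.00f},
    };
    const int width = 1920, height = 1080;
    for (const auto& p : presets)
        std::printf("%-18s -> %lld reflection rays per frame\n",
                    p.name, rays_per_frame(width, height, p));
}
```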
Normally the most efficient and optimized approach would be used. But these consoles are so outdated from a GPU perspective that we can't even achieve Turing-level optimization.
What is this nonsense?
I can play Portal RTX just fine - a real path tracing game.
'Fine' is personal preference.
You can make real-time CGI movies with path tracing on a 4090.
You could also do that on a 2060 Super.
But playing Hogwarts at more than 100 FPS in 1080p with raytracing? Barely possible.
You're really trying to use a game that's busted as shit to prove your point? 🙄
RDNA3 are at pre-Turing level when it comes to raytracing.
You're taking the piss here, right?
 
Can we wait a little?
A patch is confirmed to be coming for Forspoken, and I have little doubt there will be one for Hogwarts too.
It's hard to deny that PC versions are not the focus for the latest releases... but were they at some point? It seems we were misled because of how weak the PS4/XONE were, especially the CPU.
There is also a transition ongoing. Sure it sucks, but things can only improve from here (I hope).
 
Which often included low ray counts per pixel and reduced objects within the BVH, this was evidenced and seen in BFV's patches which increased RT performance by decreasing RT quality. A developer optimisation issue, not a DX12 one.

No, Battlefield 5 did not reduce the quality. In fact, they used rays for up to 40% of the resolution, which is way more than the latest console ports use. And the performance after the first patch was really good on Turing:
[benchmark chart]

 
No, Battlefield 5 did not reduce the quality. In fact, they used rays for up to 40% of the resolution, which is way more than the latest console ports use. And the performance after the first patch was really good on Turing:


Sigh, taken from the article you provided...

However there is a visual quality downgrade that I spotted in this area, this new patch has introduced some artifacting to the way ray traced reflections are handled. Previously, everything that was supposed to be reflected, was reflected, so you got these nice, clean and for the most part supremely accurate reflections that blew away screen space reflections for visual quality. However with the latest patch, some screen space reflection-like artifacts have crept back into ray traced surfaces.

If you look closely at the water surface, at times an object will move across the reflected area like an AI character or a falling leaf, and for a brief moment you'll spot the classic screen space reflection-like streak caused by that object obstructing the reflection's path. My guess here is that DICE have chosen to more aggressively cull rays from objects not in view, which has improved performance significantly, but it's at the cost of the occasional artifact where something that should still be in view is getting culled erroneously. It makes the reflections slightly more ugly but it's still a significant upgrade on basic screen space reflections where this issue is much more widespread.
 
The first implementation didn't use SSR with raytracing. So certain objects like leaves didn't exist in the BVH. With the patch, they use the SSR implementation in combination with raytracing. No downgrade.

/edit: Eurogamer has an interview about the optimization process for DXR in Battlefield 5: https://www.eurogamer.net/digitalfoundry-2018-battlefield-5-rtx-ray-tracing-analysis
That was four years ago. And this is a reason why this game scales just fine with today's hardware.
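For anyone wondering what "SSR in combination with raytracing" looks like, here's a minimal plain-C++ sketch of one plausible shape of that hybrid (all types and functions are invented stand-ins, not Frostbite code): the screen-space result is reused wherever it's valid and a ray is only spent where it isn't, which is also why on-screen movers can produce the SSR-style streaks DF noticed:

```cpp
// Conceptual sketch (plain C++): hybrid reflections that try the cheap screen-space
// reflection first and only trace a ray when screen space has no valid data for
// that pixel. Everything here is an illustrative stand-in.
#include <cstdio>
#include <optional>

struct Color { float r, g, b; };

// Stand-in: SSR only succeeds when the reflected point is visible on screen.
std::optional<Color> sample_screen_space_reflection(bool reflected_point_on_screen) {
    if (reflected_point_on_screen)
        return Color{0.2f, 0.4f, 0.8f};   // reuse already-shaded screen data: cheap
    return std::nullopt;
}

// Stand-in: tracing a ray against the BVH always returns something, but costs more.
Color trace_reflection_ray() {
    return Color{0.3f, 0.5f, 0.9f};
}

Color shade_reflection(bool reflected_point_on_screen, long long& rays_spent) {
    if (auto ssr = sample_screen_space_reflection(reflected_point_on_screen))
        return *ssr;                       // hybrid path: no ray needed for this pixel
    ++rays_spent;
    return trace_reflection_ray();         // fall back to the expensive ray
}

int main() {
    long long rays_spent = 0;
    Color a = shade_reflection(/*on_screen=*/true,  rays_spent);
    Color b = shade_reflection(/*on_screen=*/false, rays_spent);
    std::printf("rays spent for 2 pixels: %lld (ssr.r=%.1f rt.r=%.1f)\n",
                rays_spent, a.r, b.r);
}
```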
 
The first implementation didn't use SSR with raytracing. So certain objects like leaves didn't exist in the BVH. With the patch, they use the SSR implementation in combination with raytracing. No downgrade.
They literally state there's a downgrade.

"However there is a visual quality downgrade"

Eurogamer has an interview about the optimization process for DXR in Battlefield 5: https://www.eurogamer.net/digitalfoundry-2018-battlefield-5-rtx-ray-tracing-analysis
That was four years ago. And this is a reason why this game scales just fine with today's hardware.

Even the DF article talks about reducing ray counts and being smart with how/where they shoot rays.
 
While I'm certain DX12 could be a lot better (this is Microsoft, after all), this is ultimately a developer and Nvidia architecture issue.
 
Different source, with actual benchmark numbers and details instead of anecdotal evidence. This excludes CCX latency as a potential problem on Zen CPUs, since the 7950X is running on a single CCX.

For inline thread reference:
Later, HUB claimed in the video they made on the game that the performance of high-end Intel CPUs is no different from the 7700X, which is why they used the 7700X to bench dozens of GPUs. This new source contradicts that claim.
 
Hogwarts is very CPU limited on Zen 4 CPUs vs Raptor Lake.


The possible bug aside, it's not exactly CPU limited per se; it might be showcasing Raptor Lake's higher achievable memory speeds.


This isn't as apparent from typical reviews because they tend to either pair Raptor Lake with DDR5 normalized to Zen 4's recommended speed (DDR5-6000) or only slightly higher (DDR5-6400), and also because of the relative lack of RT tests due to the perception that RT is GPU limited.

As a related aside, ever since Zen 1 launched I feel what hasn't really been addressed is which memory speed should actually be used when comparing Intel/AMD in reviews, given the dissimilar behaviour between them.
 
The amount of stupid "lol it's running at 540p" comments in that Twitter thread is hurting my head.

Uhhh, he's isolating CPU performance, guys...

Still, it would have been nice to see both benched in their standard configs with the same memory as well. Plus a test with ReBAR on.
 