Current Generation Games Analysis Technical Discussion [2023] [XBSX|S, PS5, PC]

The number of stupid "lol it's running at 540p" comments in that Twitter thread is hurting my head.

Uhhh, he's isolating CPU performance guys....

Still, it would have been nice to see both benched in their standard configs with the same memory as well. Plus a ReBAR-on test.

That CapFrameX is a massive Nvidia and Intel fanboy and has been caught loads of times on Twitter skewing his testing to make AMD look bad.
 
Different source, with actual benchmarks numbers and details instead of anecdotal evidence. This excludes CCX latency as a potential problem on Zen CPUs, since the 7950X is running on a single CCX.

And? It doesn't justify your shit stirring.

Your source is also well known on Twitter for being a huge intel fanboy and has a history of skewing test methods to make AMD look bad.
 

Going to risk opening a whole can of issues but isn't this effectively balancing out in practice if we have an allegedly Intel/Nvidia biased source against an allegedly AMD biased source for data?

For instance, it is interesting that HUB is bringing up and framing Zen 4 performance as a problem (bug) with the game, while choosing to frame the VRAM issue and discrepancy (e.g. the 6650 XT 8GB data) as a problem with the hardware.
 
And all that is on the side of developers, who are in control of all of that.

Just wanting to circle back a bit to the DX12 discussion and respond specifically to this reply, as I feel this issue might be a bit more nuanced/complex.

From an actual technical standpoint, DX12 puts more optimization agency onto the developer, so in that sense it is more of the developer's responsibility.

But from a broader standpoint I've always felt that it's a problematic approach given the realities of the PC market. The IHVs are the ones who primarily benefit financially from showcasing their hardware and iterating on said hardware, so the incentive structure would inherently favor the IHVs bearing more of the onus of making sure their products are showcased properly. Instead we're shifting more of the onus onto the game developers, who are mostly detached from that aspect of the business (well, game developers strictly; engine developers are a bit different), and who effectively don't benefit directly (at least without other incentives) from optimizing for/favoring Nvidia/Intel/AMD or from pushing each company's newer architectures/products over older ones.

In that sense you can say there is a fault in the approach of DX12.

As such, I've always found this dichotomy rather fascinating as a basis for the DX11->DX12 shift, given the broader implications as well as how it factored into the differing wider strategic synergies of the IHVs involved.
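To make the point about optimization agency concrete, here is a minimal sketch (my own illustration, not taken from any particular engine; the function name and states are placeholders) of the kind of bookkeeping D3D12 hands to the application: the explicit resource-state barrier that the D3D11 driver used to track on the developer's behalf.

```cpp
// Minimal D3D12 sketch: transitioning a render target so it can be sampled.
// Under D3D11 the driver tracked this hazard automatically; under D3D12 the
// application has to record the barrier itself, at the right point in the frame.
#include <d3d12.h>

void TransitionForSampling(ID3D12GraphicsCommandList* cmdList,
                           ID3D12Resource* texture)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = texture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    cmdList->ResourceBarrier(1, &barrier);
}
```

Get a transition like this wrong, or issue it too conservatively, and you get corruption or needless stalls; under DX11 that whole class of decision lived inside the IHV's driver, which is exactly the shift in responsibility (and incentives) being discussed.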
 
For instance, it is interesting that HUB is bringing up and framing Zen 4 performance as a problem (bug) with the game, while choosing to frame the VRAM issue and discrepancy (e.g. the 6650 XT 8GB data) as a problem with the hardware.
HUB is dancing around the issue: first they claimed it was a Zen 4 bug, then they backtracked and claimed it's not a Zen 4 issue but a DLSS 3 problem (DLSS 3 was unintentionally activating itself on their Intel system), so they went back and used a 7700X in their review of the game.

We are left wondering whether it's a bug or a CPU limitation on weaker CPUs; pinning down the actual cause is not so easy anymore amongst all that noise.

Your source is also well known on Twitter for being a huge intel fanboy and has a history of skewing test methods to make AMD look bad.
Disagree with that statement. He is a developer and knows what he is doing; he is also keen to pick the most intensive areas for CPU benchmarking while pushing CPU platforms to their limits, unlike other hit-and-run reviewers.
 
No, it is a DX12 problem because this is enforced by DX12 itself.

...
There is so much performance lost emulating this game's renderer on nVidia hardware.

Callisto Protocol has the same raytracing implementation on the UE4 engine. It was only optimized for consoles and then got ported to the PC without any adjustments.
Isn't it usually the other way around?
 
Going to risk opening a whole can of issues but isn't this effectively balancing out in practice if we have an allegedly Intel/Nvidia biased source against an allegedly AMD biased source for data?

There's no such thing as balance when it comes to biased sources.

Best thing is to just not use them or put disclaimers in the posts.
 
Going to risk opening a whole can of issues but isn't this effectively balancing out in practice if we have an allegedly Intel/Nvidia biased source against an allegedly AMD biased source for data?
Yes. So long as enough sources are referenced, the biases should balance out. Consider it like a law court - the two sides are 100% biased. The truth is to be gleaned from hearing them both, knowing each side is skewing as hard as possible.

It'd be nice to have level-headed, fact-seeking sources, but we can work with what we've got nonetheless, so long as no source is taken verbatim in isolation, as seems to be happening here.
 
Just wanting to circle back a bit to the DX12 discussion and respond specifically to this reply, as I feel this issue might be a bit more nuanced/complex.
The concept of DX12 was to help more developers achieve max performance from CPUs and GPUs. That was back in 2015.

Fast forward 8 years and that goal still isn't achieved. On the contrary, it backfired, especially on the CPU front, which in turn affected the GPU front, so we ended up with both operating at way less than 100% utilization, and with more pitfalls and problems than we started with.

A more balanced approach needs to be taken, especially in the current times, as developers themselves seem to be overburdened with supporting multiple consoles, versions, features, and are less focused than ever before.
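For context on what "max performance from CPUs" was supposed to mean: the headline DX12 pitch was cheap, parallel command recording. A rough sketch of the intended pattern (illustrative only; the thread-per-slice split and the names here are made up, not from any shipping engine):

```cpp
// Illustrative DX12 CPU-scaling pattern: each worker thread records its own
// command list, and the main thread submits everything in one call.
#include <d3d12.h>
#include <thread>
#include <vector>

void RecordSlice(ID3D12GraphicsCommandList* cl, ID3D12CommandAllocator* alloc)
{
    cl->Reset(alloc, nullptr);   // begin recording on this thread
    // ... record the draws owned by this thread ...
    cl->Close();
}

void SubmitFrame(ID3D12CommandQueue* queue,
                 std::vector<ID3D12GraphicsCommandList*>& lists,
                 std::vector<ID3D12CommandAllocator*>& allocs)
{
    std::vector<std::thread> workers;
    for (size_t i = 0; i < lists.size(); ++i)
        workers.emplace_back(RecordSlice, lists[i], allocs[i]);
    for (auto& t : workers)
        t.join();

    // One submission of everything that was recorded in parallel.
    queue->ExecuteCommandLists(static_cast<UINT>(lists.size()),
                               reinterpret_cast<ID3D12CommandList* const*>(lists.data()));
}
```

The idea was that per-draw driver overhead is low enough for recording to spread across cores; the complaint above is that, 8 years on, plenty of ports still end up bottlenecked on one or two threads anyway.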
 
I would also like to add that DX12 puts the burden of VRAM management on the developers. DX11 and all the prior DXs had the driver handle all that work, but now that's gone, and developers have to come up with solutions for the varied VRAM configurations on PCs.

This partly explains the exploding VRAM requirements we've seen recently. Developers simply didn't put in the work necessary to properly manage VRAM, RAM, and the storage pagefile.
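To illustrate what "the driver no longer handles it" looks like in practice: a D3D12 application is expected to query its own video memory budget and react to it (stream, drop mips, evict). A hedged sketch, assuming an IDXGIAdapter3 was obtained at device creation and with error handling omitted:

```cpp
// Sketch: asking DXGI how much local (VRAM) budget this process currently has.
// Staying under that budget is the application's job in D3D12, not the driver's.
#include <dxgi1_4.h>

bool NearVramBudget(IDXGIAdapter3* adapter)
{
    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    adapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);

    // If usage approaches the OS-granted budget, the app has to reduce residency
    // itself (drop mip levels, evict heaps, stream less aggressively).
    return info.CurrentUsage > (info.Budget * 9) / 10;
}
```

Under DX11 the equivalent pressure was absorbed by driver-managed residency and paging, which is part of why the older API tended to degrade more gracefully when VRAM ran short.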
 
I don't find this to be a particular issue over the long term. Given the way engines are multiplatform, they will eventually optimize for all 4 or 5 different vendors.

We are seeing growing pains as there is finally a significant and real shift towards DX12. I suspect that after this generation of consoles these issues should largely be behind us, at least for studios that regularly maintain multiplatform engines.
 
Devs really wanted DX12: "we know what the engine should be doing at any given time."
It made the driver thinner, simpler and easier.
So it was seen as a win that, after the initial early hump, would deliver nice gains.

But I've always said I'd like to have seen:
1. Full-fledged engines (Unreal, Unity, etc.) using DX12
2. Teams who aren't as capable but want their own engine on a DX11/DX12 hybrid (though that could probably only be DX11 with RT etc. added)
3. Bigger studios with their own engine on DX12.

This would've meant continued support for DX11, which I'm sure MS and the GPU manufacturers just would not want to do.
 
Most developers just don't have the time, knowledge and ability to create high-performing DX12 code. Particularly on Nvidia hardware, where it seems to be more challenging as they refuse to align their hardware with a modern binding model. I don't think things will ever improve on this front. Faster hardware will just blunt the performance loss.
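For anyone wondering what "binding model" refers to here: D3D12 exposes these hardware differences through resource binding tiers, which an engine has to query and design around. A small, purely illustrative capability check:

```cpp
// Sketch: reading the resource binding tier the device reports. Lower tiers
// constrain how large/flexible descriptor tables can be, which is the kind of
// per-vendor difference that makes one binding strategy awkward on another GPU.
#include <d3d12.h>

D3D12_RESOURCE_BINDING_TIER QueryBindingTier(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &options, sizeof(options));
    return options.ResourceBindingTier;  // TIER_1, TIER_2 or TIER_3
}
```

An engine that assumes the most permissive tier everywhere ends up either maintaining multiple binding paths or leaving performance on the table on some hardware.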
 
The "DX12 is slow" stuff is a gamer fantasy. Ask yourself why you never post the same thing about Vulkan, or about literally every console game.

There is one kernel of truth: offering automatic wrappers around DX11 and gating features behind DX12 was an inducement for devs to make bad renderers; many games that offer DX12 modes have no business having the option.
 
This shift from a DX11-style API model to a DX12 model was always going to have to come at some point. There are growing pains for developers, but it's a necessary step, and it's better we do it now than later.
 
Most developers just don't have the time, knowledge and ability to create high-performing DX12 code. Particularly on Nvidia hardware, where it seems to be more challenging as they refuse to align their hardware with a modern binding model. I don't think things will ever improve on this front. Faster hardware will just blunt the performance loss.
Lovelace is so fast that real-time path tracing is here. The problem is not the hardware, it's the API.
 
Lovelace is so fast that real-time path tracing is here. The problem is not the hardware, it's the API.
No, it's not. Portal RTX consists of small corridors and still needs DLSS 3 and heavy upscaling to be playable at decent framerates on a 4090.

Path Tracing in a title like Red Dead Redemption 2 is still far away.

And don't mention Cyberpunk Overdrive. It barely looks better than the regular RT implementation; it is not in any way real path tracing like what's done in Quake 2 and Portal RTX.
 