Current Generation Games Analysis Technical Discussion [2023] [XBSX|S, PS5, PC]

Funny you say that, since devs back in the day had to write their own engines for the most part. A lot of devs today use 3rd-party engines like Unreal and deliver subpar products.

Finally, complexity is almost never the cause of poor performance. Poor performance usually occurs as a result of poor trade-offs, a lack of technical competence and/or poor craftsmanship. All of these are trademarks of Bethesda.
That's great standup material :D
 
I think people generally remember last-gen games looking better than they actually do, and all of the PS5/Series X remasters add to the confusion. Plus there were mid-gen consoles that brought a lot of visual improvement.

In terms of gameplay, that's a longer-standing problem. We're stuck in a cycle where we're playing genres that were defined in the PS360 era, with a lot of the games being direct sequels to PS360 franchises.
That's all true. But I mostly play on PC now. And on PC, there is usually a bump in graphics fidelity with the launch of new consoles, and system requirements bump up in response. This time, I feel like we got the bump in system requirements, but not really the increase in graphics.
 
Nothing does this generation, if I'm being honest.

I still play lots of last gen games and a few PS3 era games too and I have to agree. The scale and complexity of environments have increased and particles/volumetrics are nice but aside from a few standout titles none of the new stuff seems particularly innovative.
 
I think people generally remember last-gen games looking better than they actually do, and all of the PS5/Series X remasters add to the confusion.

I think this is definitely true, although that does somewhat reinforce the point that games this gen are perhaps not delivering the end-result visual impact you'd expect from the resources demanded. If people are impressed by last-gen games being given a slight uplift in asset quality + higher res/60fps, then that speaks to the base rendering technology not necessarily being the gating factor, at least in the eyes of some.

Like most everyone here, I 'get' the advantages that something like UE5 brings, both in development time and in the feasibility of realtime lighting. But at the moment, if I took a person off the street and wanted to show them what a PS5 could do, I would probably pick Ratchet and Clank, Demon's Souls, maybe Horizon: Forbidden West. All games which came out very early in this gen's lifespan, and one of them is even cross-platform!
 
But at the moment, if I took a person off the street and wanted to show them what a PS5 could do, I would probably pick Ratchet and Clank, Demon's Souls, maybe Horizon: Forbidden West. All games which came out very early in this gen's lifespan, and one of them is even cross-platform!
If you asked me, I'd say the same thing. It's been a bad couple of years for technically cutting-edge games; of course the high-budget AAA console-seller games that got most of their development in before Covid look the best! We have several more years of this generation, and great-looking UE5 games will come out.
 
If you asked me, I'd say the same thing. It's been a bad couple of years for technically cutting-edge games; of course the high-budget AAA console-seller games that got most of their development in before Covid look the best! We have several more years of this generation, and great-looking UE5 games will come out.

I'm sure they will, but my main concern then is image quality. We'll see what the PS5 Pro can bring, but you're just asking too much from reconstruction algorithms when you're starting from 720-900p, and I don't see that really changing with UE5 games on this gen of consoles going forward. You can have great lighting and asset quality, but if there's tons of shimmer/flicker, that's a step back imo.
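Just to put rough numbers on how much those reconstruction algorithms are being asked to invent, here's a quick back-of-the-envelope sketch in Python. It assumes a 4K output target and uses the 720-900p figures above; the numbers are illustrative, not measurements from any particular game:

```python
# Rough scale factors a temporal upscaler has to cover when reconstructing
# a 4K image from the low internal resolutions mentioned above.
internal_res = {"720p": (1280, 720), "900p": (1600, 900), "1440p": (2560, 1440)}
output_w, output_h = 3840, 2160  # assumed 4K output target

for name, (w, h) in internal_res.items():
    per_axis = output_w / w                        # linear scale factor
    pixel_ratio = (output_w * output_h) / (w * h)  # output pixels per rendered pixel
    print(f"{name}: {per_axis:.2f}x per axis, {pixel_ratio:.1f}x more output pixels than rendered pixels")
```

720p to 4K works out to 3x per axis (9x the pixels), whereas the common DLSS/FSR 2 "performance" presets are built around roughly 2x per axis (4x the pixels), so shimmer/flicker at those input resolutions isn't surprising.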
 
I'm sure they will, but my main concern then is image quality. We'll see what the PS5 Pro can bring, but you're just asking too much from reconstruction algorithms when you're starting from 720-900p, and I don't see that really changing with UE5 games on this gen of consoles going forward. You can have great lighting and asset quality, but if there's tons of shimmer/flicker, that's a step back imo.
Yeah… I think your concerns are well-founded, but I think for the average player who isn't *super* sensitive to IQ it's going to be a positive trade-off. I don't think later AAA UE5 (and similar-style) games are going to render at Immortals of Aveum res, but I do expect they'll be lower res than games on other engines.
 
Nothing does this generation, if I'm being honest.
I agree. The problem is that on PC, we've been spoiled.

Personally speaking, demos on PC like the Star Wars ray tracing reflections demo, the Marbles demo, and the RTX Racer demo are all spoilers for what next-gen graphics should be as an overall package.

Then we have actual games with specific next-gen features. For reflections we have games like Control, Watch Dogs: Legion, Marvel's Guardians of the Galaxy, Ratchet and Clank: Rift Apart and Wolfenstein: Youngblood, which did ray-traced reflections solidly. For shadows we have games like Shadow of the Tomb Raider and Call of Duty: Black Ops Cold War, which did ray-traced shadows well. For global illumination we have games like Metro Exodus, Icarus, Fortnite, Warhammer 40K: Darktide, Dying Light 2 and The Witcher 3, which did dynamic lighting excellently. There is none of that in Starfield.

Then we have games that delivered the total dynamic package (shadows + reflections + global illumination), like LEGO Builder's Journey, The Witcher 3, Cyberpunk 2077, Bright Memory Infinite and even Dying Light 2.

Then we have path-traced games that delivered everything except next-gen geometry, like Cyberpunk 2077, Minecraft RTX and Portal RTX. We even have DESORDRE, which combines path tracing with next-gen geometry (Nanite).

Obviously Starfield has none of that.

For next-gen geometry we have many UE5 demos, and we have the total package in the form of the Matrix UE5 demo, which embodies the meaning of unlimited detail and extended level of detail with good dynamic global illumination and reflections. And Starfield lacks all of that.

Starfield doesn't even have the complex rendering of materials or animations that some games excelled at (like The Last of Us Part I), and it lacks complex world simulations (weather, fluids, cloth, physics, AI).

So in summary, Starfield lacks next-gen shadows, global illumination, lighting, reflections (its reflections are even worse than last gen), complex materials, and simulations. Its graphics are inconsistent in the vast majority of areas. Why is it even considered a candidate for a next-gen title?
 
So in summary, Starfield lacks next-gen shadows, global illumination, lighting, reflections (its reflections are even worse than last gen), complex materials, and simulations. Its graphics are inconsistent in the vast majority of areas. Why is it even considered a candidate for a next-gen title?
Yeah, but I think expectations are important in every case with regard to video game graphics. I don't think they ever showed Starfield off in a way that made it look graphically better than it does on real hardware. And your criticisms about Starfield looking last gen were largely true of Fallout 4 at release; in most ways, it looked like a high-res 360 game. BGS has always been behind in pure graphics, and BGS games have never been the most performant titles.

It looks next gen compared to other BGS games.
 

Chips and Cheese breaking down Starfield using profiler data. Just reading now.
Incredible read. Lots of work here. First paragraph of summary below:
IMO, this reads a bit like the result of Xbox ATG heavily optimizing for Xbox and those optimizations carrying over.

In summary there’s no single explanation for RDNA 3’s relative overperformance in Starfield. Higher occupancy and higher L2 bandwidth both play a role, as does RDNA 3’s higher frontend clock. However, there’s really nothing wrong with Nvidia’s performance in this game, as some comments around the internet might suggest. Lower utilization is by design in Nvidia’s architecture. Nvidia SMs have smaller register files and can keep less work in flight. They’re naturally going to have a more difficult time keeping their execution units fed. Cutting register file capacity and scheduler sizes helps Nvidia reduce SM size and implement more of them. Nvidia’s design comes out top with kernels that don’t need a lot of vector registers and enjoy high L1 cache hitrates.
 

Chips and Cheese breaking down Starfield using profiler data. Just reading now.

Fantastic article. I hadn't realized how big RDNA 3's register file was. The 3 shaders profiled all have great SIMD utilization on RDNA 3, which is a sign they're well optimized for that arch. Also interesting (though not surprising) that the 4090 and 3090 profile almost identically, except for the 4090 hitting a hard bandwidth wall on the 3rd shader.
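For anyone who wants a feel for what "hitting a hard bandwidth wall" means, here's a minimal roofline-style sketch. The peak figures are approximate public specs for a 4090; the bound_by helper and the kernel numbers are made up for illustration and aren't taken from the article's profile data:

```python
# Minimal roofline-style check: does a kernel run into the compute limit
# or the DRAM bandwidth limit first? Peaks are approximate RTX 4090 specs.
PEAK_FP32_FLOPS = 82.6e12   # ~82.6 TFLOPS FP32
PEAK_DRAM_BW = 1008e9       # ~1008 GB/s

def bound_by(flops, dram_bytes):
    """Classify a kernel by its arithmetic intensity (FLOP per byte of DRAM traffic)."""
    intensity = flops / dram_bytes
    ridge = PEAK_FP32_FLOPS / PEAK_DRAM_BW  # ~82 FLOP/byte with these peaks
    return ("bandwidth-bound" if intensity < ridge else "compute-bound"), intensity, ridge

# Hypothetical shader: 2 GFLOP of math over 400 MB of unique DRAM traffic per dispatch.
kind, intensity, ridge = bound_by(flops=2e9, dram_bytes=400e6)
print(f"{intensity:.1f} FLOP/byte vs ridge {ridge:.1f} FLOP/byte -> {kind}")
```

A kernel far below the ridge point spends most of its time waiting on memory, and since the 3090 and 4090 have similar DRAM bandwidth despite very different compute throughput, that kind of limit would explain the two cards profiling so similarly on that shader.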
 
Incredible read. Lots of work here. First paragraph of summary below:
IMO, this reads a bit like the result of Xbox ATG heavily optimizing for Xbox and those optimizations carrying over.

In summary there’s no single explanation for RDNA 3’s relative overperformance in Starfield. Higher occupancy and higher L2 bandwidth both play a role, as does RDNA 3’s higher frontend clock. However, there’s really nothing wrong with Nvidia’s performance in this game, as some comments around the internet might suggest. Lower utilization is by design in Nvidia’s architecture. Nvidia SMs have smaller register files and can keep less work in flight. They’re naturally going to have a more difficult time keeping their execution units fed. Cutting register file capacity and scheduler sizes helps Nvidia reduce SM size and implement more of them. Nvidia’s design comes out top with kernels that don’t need a lot of vector registers and enjoy high L1 cache hitrates.
It really does not explain the power usage being dramatically lower on all NV GPUs in Starfield, which I think is indicative of something this article does not even touch upon, and which is far more important than looking at the execution time of individual shaders on a timeline. Why is this game, out of hundreds, showing anomalous power readings on NV as well as anomalous performance vis-à-vis Radeons? Surely two anomalies that might be related... have some relevancy.

A bit of missing the forest for a tree's leaves here in that article conclusion...
 
I think it explains it. The limitation is the register file size, which lowers throughput. The execution units can't do anything, so power consumption goes down.

However, there’s really nothing wrong with Nvidia’s performance in this game, as some comments around the internet might suggest. Lower utilization is by design in Nvidia’s architecture. Nvidia SMs have smaller register files and can keep less work in flight. They’re naturally going to have a more difficult time keeping their execution units fed. Cutting register file capacity and scheduler sizes helps Nvidia reduce SM size and implement more of them. Nvidia’s design comes out top with kernels that don’t need a lot of vector registers and enjoy high L1 cache hitrates.
That doesn't make any sense. With ray tracing and path tracing the workload is massively increased, and a 4090 is 3x+ faster than a 6900 XT; in Starfield, which is a rasterized game, it is only 1.5x.
Even in UE4 games without ray tracing a 4090 easily goes >200 FPS, which means the GPU is being fully used.
 
I think this is definitely true, although that does somewhat reinforce the point that games this gen are perhaps not delivering the end-result visual impact you'd expect from the resources demanded. If people are impressed by last-gen games being given a slight uplift in asset quality + higher res/60fps, then that speaks to the base rendering technology not necessarily being the gating factor, at least in the eyes of some.

Like most everyone here, I 'get' the advantages that something like UE5 brings, both in development time and in the feasibility of realtime lighting. But at the moment, if I took a person off the street and wanted to show them what a PS5 could do, I would probably pick Ratchet and Clank, Demon's Souls, maybe Horizon: Forbidden West. All games which came out very early in this gen's lifespan, and one of them is even cross-platform!
Interestingly, all those games run at 60fps with very good image quality. It shows we are clearly at the point of diminishing returns for what a 30fps-only target buys you, when things are done properly (custom, efficient engines + clean custom reconstruction, no FSR destroying IQ for the sake of resolution).
 
I think it explains it. The limitation is the register file size, which lowers throughput. The execution units can't do anything, so power consumption goes down.


That doesn't make any sense. With ray tracing and path tracing the workload is massively increased, and a 4090 is 3x+ faster than a 6900 XT; in Starfield, which is a rasterized game, it is only 1.5x.
Even in UE4 games without ray tracing a 4090 easily goes >200 FPS, which means the GPU is being fully used.
With respect, they are discussing the SM not the ray tracing accelerators.
 
It really does not explain the power usage being dramatically lower on all NV GPUs in Starfield, which I think is indicative of something this article does not even touch upon, and which is far more important than looking at the execution time of individual shaders on a timeline. Why is this game, out of hundreds, showing anomalous power readings on NV as well as anomalous performance vis-à-vis Radeons? Surely two anomalies that might be related... have some relevancy.

A bit of missing the forest for a tree's leaves here in that article conclusion...
We could ask about that. If I had to guess based on this information, lower SM saturation as a result of fewer resident warps is the reason for the lower power usage. Instead of having 8 wavefronts' worth of work held and kept active, as on AMD, Nvidia cards have a bunch of zero-activity warp slots sitting around due to a lack of register allocation.
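To make that concrete, here's a toy occupancy calculation in the spirit of the article's argument. The register-file sizes and wave/warp caps are the commonly cited figures (roughly 192 KB of VGPRs per RDNA 3 SIMD32 with a 16-wave cap, versus roughly 64 KB of registers per Ada SM sub-partition with a 12-warp cap); the per-thread register counts are hypothetical, not taken from Starfield's actual shaders:

```python
# Toy occupancy model: how many wavefronts/warps can stay resident per scheduler,
# given the vector register file size and a shader's per-thread register count.
# Sizes/caps are commonly cited figures, not measurements from Starfield itself.

def resident_waves(regfile_bytes, hw_wave_cap, regs_per_thread, lanes=32, reg_bytes=4):
    per_wave = regs_per_thread * lanes * reg_bytes  # register bytes one wave/warp needs
    return min(hw_wave_cap, regfile_bytes // per_wave)

for regs in (32, 64, 96, 128):
    rdna3 = resident_waves(192 * 1024, 16, regs)  # ~192 KB VGPRs per RDNA 3 SIMD32, 16-wave cap
    ada = resident_waves(64 * 1024, 12, regs)     # ~64 KB registers per Ada SM sub-partition, 12-warp cap
    print(f"{regs:3d} regs/thread -> RDNA 3: {rdna3:2d} waves, Ada SMSP: {ada:2d} warps")
```

At the higher register counts the Nvidia scheduler is left with only a handful of warps to hide latency with, which would line up with execution units sitting idle and power draw dropping, rather than pointing at a driver problem.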
 
What do GPU clocks in Starfield look like? I noticed Remnant 2 clocks really high compared to other games for me. I'll be basically at my GPU power limit but still getting ~1950 MHz (I think), whereas in most games, when I'm power limited, my GPU clock will drop to 1860 MHz. I could double-check, but it's definitely abnormal.
 
Incredible read. Lots of work here. First paragraph of summary below:
IMO, this reads a bit like the result of Xbox ATG heavily optimizing for Xbox and those optimizations carrying over.

In summary there’s no single explanation for RDNA 3’s relative overperformance in Starfield. Higher occupancy and higher L2 bandwidth both play a role, as does RDNA 3’s higher frontend clock. However, there’s really nothing wrong with Nvidia’s performance in this game, as some comments around the internet might suggest. Lower utilization is by design in Nvidia’s architecture. Nvidia SMs have smaller register files and can keep less work in flight. They’re naturally going to have a more difficult time keeping their execution units fed. Cutting register file capacity and scheduler sizes helps Nvidia reduce SM size and implement more of them. Nvidia’s design comes out top with kernels that don’t need a lot of vector registers and enjoy high L1 cache hitrates.

I read it as Bethesda making choices that heavily favour AMD's architecture, which is probably a result of ATG stepping in and optimising for Xbox consoles (as you noted as well).
Anyway, it was a fantastic read, and as with everything in life, the answer to why it runs the way it runs is complex: there is no single factor that affects performance, but a combination of many factors that each play a role.
I think this could be an indication of how future Xbox first-party games may run on PC if properly optimised on consoles.
 