Digital Foundry Article Technical Discussion [2024]

Hmm... DLSS = FSR in the average customer's view. PC is not better in this regard... because most games don't have native 4K/60FPS, maybe only on a $2000 system...

DLSS is widely accepted as superior to FSR and is certainly an advantage in most scenarios. You can essentially get equal or better image quality at a lower internal resolution, hence a fairly significant performance advantage when equalising image quality. That's why there is so much fuss over PSSR in the Pro.

And the bar for "better" doesn't start at 4k/60. If a game is running at 720p/60 with FSR on a console, then something like 1080p/60 with DLSS is already going to be a significant improvement, before you start to consider core graphics/RT improvements.
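
To put rough numbers on "equal or better image quality at a lower internal resolution": a quick illustrative calculation (not from the article; the scale factors are the commonly cited DLSS preset defaults) of the internal resolution each quality preset renders at for a 4K output.

```cpp
// Illustrative only: internal render resolution for common upscaler quality
// presets at a 4K (3840x2160) output. Scale factors are the commonly cited
// DLSS defaults (Quality ~0.667, Balanced ~0.58, Performance ~0.50).
#include <cstdio>

int main() {
    const int outW = 3840, outH = 2160;
    const struct { const char* name; double scale; } presets[] = {
        {"Quality", 0.667}, {"Balanced", 0.58}, {"Performance", 0.50},
    };
    for (const auto& p : presets) {
        std::printf("%-12s %4d x %4d\n", p.name,
                    (int)(outW * p.scale), (int)(outH * p.scale));
    }
    return 0;
}
```

So a 4K "Performance" output is reconstructed from roughly 1920x1080 internally, which is where the claimed performance advantage at matched image quality comes from.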
 
It's not really comparable though; the 3600 on PC will enjoy higher clocks, lower-latency memory and more bandwidth.

PS5 would have the advantage of more CPU cores to throw at the game, lower-level optimisation and a less intensive OS with no resource-hungry background apps.
Nah. It’s optimization.

Going to give credit to an XboxEra poster here who found this nugget of an old interview with Saber.


Yes, we are planning on using multiple new hardware features in the upcoming games. We are working on global illumination and realtime reflection technologies based on DirectX raytracing. We are planning on unifying our geometry and LOD authoring pipelines with Mesh shaders. Sampler feedback will be used to help optimize sparse textures and variable rate shaders will deliver a performance boost. We are excited about new technologies coming up in this DirectX update and are always looking for ways to optimize our games by taking advantage of the newest technologies.

These features quite frankly don’t exist on PS5.
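
For context on what that interview is referring to: the features Saber lists map onto DirectX 12 Ultimate capability tiers that a PC or Xbox engine can query at startup. A minimal sketch, assuming a valid ID3D12Device (illustrative only, not Saber's code):

```cpp
// Query the DX12 Ultimate feature tiers mentioned in the Saber interview:
// raytracing, variable rate shading, mesh shaders and sampler feedback.
#include <d3d12.h>

void QueryDx12UltimateFeatures(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opts6 = {};
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 opts7 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5));
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6, &opts6, sizeof(opts6));
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7, &opts7, sizeof(opts7));

    const bool hasDxr          = opts5.RaytracingTier          >= D3D12_RAYTRACING_TIER_1_0;
    const bool hasHwVrsTier2   = opts6.VariableShadingRateTier >= D3D12_VARIABLE_SHADING_RATE_TIER_2;
    const bool hasMeshShaders  = opts7.MeshShaderTier          >= D3D12_MESH_SHADER_TIER_1;
    const bool hasSamplerFback = opts7.SamplerFeedbackTier     >= D3D12_SAMPLER_FEEDBACK_TIER_0_9;

    // A renderer would choose its geometry/LOD, streaming and shading-rate
    // paths here; on PS5 the equivalent capabilities sit behind Sony's own API.
    (void)hasDxr; (void)hasHwVrsTier2; (void)hasMeshShaders; (void)hasSamplerFback;
}
```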
 
These features quite frankly don’t exist on PS5.

Mesh shaders (or the PS5 API equivalent) work on PS5 - Alan Wake 2 showed this.

I doubt the transistors for sampler feedback were ripped from PS5's silicon, or that it doesn't have its own version.

There are games on PS5 running VRS; it just doesn't have hardware for it, but it can still be done.
 
What about the AVX units on the CPU? If I remember correctly, it was said that the vector units on the PS5 were cut in half, while the Series consoles have full-rate AVX256.
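
For illustration of what "full-rate AVX256" means in practice, here is a generic sketch (not from the discussion) of the kind of 256-bit SIMD loop that would be affected: on a CPU whose vector datapaths are half-width, each 256-bit op below is split internally and peak throughput roughly halves.

```cpp
// Generic AVX2/FMA example: dst = a * s + b, processed 8 floats per 256-bit op.
// On a half-width vector implementation each 256-bit instruction is cracked
// into two 128-bit ops, so this loop runs at roughly half the peak rate.
#include <immintrin.h>
#include <cstddef>

void scale_add_avx2(float* dst, const float* a, const float* b,
                    float s, std::size_t n) {
    const __m256 vs = _mm256_set1_ps(s);
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(dst + i, _mm256_fmadd_ps(va, vs, vb));
    }
    for (; i < n; ++i) dst[i] = a[i] * s + b[i];  // scalar tail
}
```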
 
Mesh shaders (or the PS5 API equivalent) work on PS5 - Alan Wake 2 showed this.

I doubt the transistors for sampler feedback were ripped from PS5's silicon, or that it doesn't have its own version.

There are games on PS5 running VRS; it just doesn't have hardware for it, but it can still be done.
Primitive shaders don't have amplification shaders, which can be a critical addition here if they are being used. And while amplification shaders can be emulated with compute shaders, that will be significantly slower.

If you are missing way less on a texture fetch with sampler feedback, you have way less wasted bandwidth and time. You can select nearly exactly the textures you need at every angle. So if they are using sparse textures for all deployments, this is a massive consideration. @Lurkmass and @Andrew Lauritzen and others have spoken of its challenges; it's likely to be suboptimal compared to software streaming.

PS5 doesn’t have VRS, so you’re relying on how well optimized their own variant is.

Looks like Alex’s joke oddly came true. lol. His ELI5 is sort of funny here.
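
For readers unfamiliar with the sampler feedback plumbing being debated above, here is a rough D3D12 sketch (an illustration under my own assumptions, not Saber's implementation): pair an opaque MinMip feedback map with the streamed texture, then decode it after rendering so the streaming system can see which mips were actually sampled.

```cpp
// Sketch of D3D12 sampler feedback usage. Assumes the device, command list and
// all three resources already exist; `feedbackMap` is a
// DXGI_FORMAT_SAMPLER_FEEDBACK_MIN_MIP_OPAQUE texture paired with the
// streamed texture, and `decodedTexture` is an R8_UINT decode target.
#include <d3d12.h>

void SetupAndResolveFeedback(ID3D12Device8* device8,
                             ID3D12GraphicsCommandList1* cmdList1,
                             ID3D12Resource* streamedTexture,
                             ID3D12Resource* feedbackMap,
                             ID3D12Resource* decodedTexture,
                             D3D12_CPU_DESCRIPTOR_HANDLE uavSlot) {
    // Pair the feedback map with the texture: shaders that sample through the
    // paired descriptor record which mips/regions were actually requested.
    device8->CreateSamplerFeedbackUnorderedAccessView(streamedTexture, feedbackMap, uavSlot);

    // After rendering: decode the opaque feedback data into readable R8_UINT values.
    cmdList1->ResolveSubresourceRegion(decodedTexture, 0, 0, 0,
                                       feedbackMap, 0, nullptr,
                                       DXGI_FORMAT_R8_UINT,
                                       D3D12_RESOLVE_MODE_DECODE_SAMPLER_FEEDBACK);
    // The texture streamer reads this back and decides which tiles/mips to load or evict.
}
```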
 
Yeah, it's nuts. I think this generation's console technological leap was not as big as in previous gens, while devs are leaning hard on PCs with more powerful GPUs/CPUs to cover that missing console performance, hardware that also supports features such as DLSS, which consoles lack.
It's mostly AMD's fault; both Sony and Microsoft got almost as much as they could into a $500 box in 2020.
 
Primitive shaders don't have amplification shaders, which can be a critical addition here if they are being used. And while amplification shaders can be emulated with compute shaders, that will be significantly slower.

Which didn't make a blind bit of difference in Alan Wake 2 when it came to PS5 vs XSX.

If you are missing way less on a texture fetch with sampler feedback, you have way less wasted bandwidth and time. You can select nearly exactly the textures you need at every angle. So if they are using sparse textures for all deployments, this is a massive consideration.

Do we know exactly what PS5's GPU or API offers for this?

PS5 doesn’t have VRS, so you’re relying on how well optimized their own variant is.

PS5 does have VRS; there are games on the system that use VRS. It just doesn't have the actual VRS hardware (which I've already said above) and requires a software approach, which according to some developers is superior to the hardware approach.
 
Which didn't make a blind bit of difference in Alan Wake 2 when it came to PS5 vs XSX.
They went on record that they skipped amplification shaders, and that they intend to use them for their next major release.
Do we know exactly what PS5's GPU or API offers for this?
It would fall under sparse textures for OGL. Tiled resources under DX.
PS5 does have VRS; there are games on the system that use VRS. It just doesn't have the actual VRS hardware (which I've already said above) and requires a software approach, which according to some developers is superior to the hardware approach.
Right. So if you don't have the hardware for VRS, you must implement it in compute shaders for it to run. You have a chance here to get better, or at least equivalent, performance with more control.

Trying to emulate VRS on the 3D pipeline vs VRS hardware that runs on the 3D pipeline is unlikely to be as performant.

If they made VRS on compute, they likely would be running that version for all deployments.
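
For contrast with the compute-based emulation being discussed, this is roughly what the hardware path looks like on a DX12 Ultimate GPU (a minimal sketch; the command list and the engine-built shading-rate image are assumed):

```cpp
// Hardware VRS on DX12 Ultimate: a per-draw base rate (Tier 1) plus an
// optional screen-space shading-rate image (Tier 2).
#include <d3d12.h>

void ApplyHardwareVrs(ID3D12GraphicsCommandList5* cmdList5, ID3D12Resource* vrsImage) {
    const D3D12_SHADING_RATE_COMBINER combiners[2] = {
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH,  // keep the base rate, ignore per-primitive rates
        D3D12_SHADING_RATE_COMBINER_OVERRIDE      // let the screen-space image take precedence
    };
    cmdList5->RSSetShadingRate(D3D12_SHADING_RATE_2X2, combiners);  // 2x2 coarse shading baseline
    cmdList5->RSSetShadingRateImage(vrsImage);                      // Tier 2: vary rate per screen tile
}
```

Without this hardware, a console or engine has to reproduce the same effect itself (e.g. via compute or the MSAA trick mentioned later), which is the trade-off being argued about here.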
 
They went on record that they skipped amplification shaders, and that they intend to use them for their next major release.

So until then, the real world differences can't be discussed as there's very little to no data for it.

It would fall under sparse textures for OGL. Tiled resources under DX.

Yes it would.

Right. So if you don't have the hardware for VRS, you must implement it in compute shaders for it to run.

Or you use the MSAA hardware that's still contained in every GPU.
 
Maybe they are using all those features on Xbox Series to great effect, which would be one of the few examples (sampler feedback would be a first?) where they're used to great effect, but Series X has never outperformed a 3600 CPU, like ever? And now it's running up to 25% faster. Even if Series X were outperforming the PS5 in GPU tasks by that much, it would show up in resolution, not framerate, as the CPU is the bottleneck here.
 
So until then, the real world differences can't be discussed as there's very little to no data for it.
How will this ever be figured out? I'm confused on this point. No such benchmarks will ever exist.
Or you use the MSAA hardware that's still contained in every GPU.
Sure, but the expectation here is that you can beat, or perform equivalently to, hardware VRS.

What COD did was way back before this generation of consoles was released. We've not had a proper comparison point for how deeply hardware VRS can be plumbed into engines as the consoles have matured.

I feel like UE5.2 is the only one that has managed that, and I'm fairly positive that is compute shader based (though welcome to be corrected).

As far as I'm concerned, PS5 and PC are running a similar code base.
@Dictator can confirm here and ideally investigate on this one!

The following features did not show up on the settings menu:
VRS, TR, SFS, or Mesh Shading

This game does not require a DX12U Card.

Warhammer 40,000: Space Marine 2 minimum requirements

  • Memory: 8 GB
  • Graphics Card: NVIDIA GeForce GTX 1060
  • CPU: Intel Core i5-8600K
  • File Size: 75 GB
  • OS: Windows 10 or higher

Series consoles are the only anomaly in terms of where they sit in performance.
 
I would never have predicted before this console generation that we'd slowly be getting more and more games with lower internal resolutions in performance modes than on PS4 ;d
Is that really surprising? It seems clear to me that most games are still targeting 30 FPS on console. That meant current-gen consoles could achieve 60 FPS with good rendering resolutions on cross-gen games, but now that the cross-gen period is over games are back to 30 FPS only or 30 FPS by default with a 60 FPS mode with serious compromises.
 
Is that really surprising? It seems clear to me that most games are still targeting 30 FPS on console. That meant current-gen consoles could achieve 60 FPS with good rendering resolutions on cross-gen games, but now that the cross-gen period is over games are back to 30 FPS only or 30 FPS by default with a 60 FPS mode with serious compromises.
This.

On consoles, 720p to achieve 60fps is so common now, with visuals that are not state of the art to begin with. We are reaching the limits of rasterization: visual gains are scaling disproportionately with hardware requirements. Just look at the extreme requirements of Unreal Engine 5 games (Hellblade 2, with its slow gameplay, runs at 2304x963 at ~30fps!).

Visually, consoles have depressingly fallen behind PCs in image quality, far worse than at any recent time I would argue. Consoles rely on FSR2 to upscale from low resolutions and achieve horrendous results versus a modern PC equipped with a strong upscaler (DLSS) that can handle low-res upscaling quite capably.
 
This.

On consoles, 720p to achieve 60fps is so common now, with visuals that are not state of the art to begin with. We are reaching the limits of rasterization: visual gains are scaling disproportionately with hardware requirements. Just look at the extreme requirements of Unreal Engine 5 games (Hellblade 2, with its slow gameplay, runs at 2304x963 at ~30fps!).

Visually, consoles have depressingly fallen behind PCs in image quality, far worse than at any recent time I would argue. Consoles rely on FSR2 to upscale from low resolutions and achieve horrendous results versus a modern PC equipped with a strong upscaler (DLSS) that can handle low-res upscaling quite capably.
One could argue that this generation's next-gen features were AI upscaling and robust ray tracing acceleration, and consoles have neither of them.
Outside of graphics, they have fast loading, so at least there is that.
 
Primitive shaders don't have amplification shaders, which can be a critical addition here if they are being used. And while amplification shaders can be emulated with compute shaders, that will be significantly slower.
Personally, I think Work Graphs with the mesh nodes extension may be a closer model to how modern AMD HW works than amplification shaders are. All an amplification shader really triggers on AMD HW is a synchronized gfx/compute queue submission (AKA the gang submit feature), but the concept itself still hides the fact that the shader stage is implemented on top of compute shaders. Work Graphs, just like amplification shaders, are capable of programmable amplification, and a future use case of this involves implementing Nanite Tessellation with them; yet Work Graphs don't expose amplification shaders at all, even with the mesh nodes extension!
If you are missing way less on a texture fetch with sampler feedback, you have way less wasted bandwidth and time. You can select nearly exactly the textures you need at every angle. So if they are using sparse textures for all deployments, this is a massive consideration. @Lurkmass and @Andrew Lauritzen and others have spoken of its challenges; it's likely to be suboptimal compared to software streaming.
The main reason why tiled/reserved resources, and by extension sampler feedback, haven't really taken off for major usage scenarios like texture streaming is down to the slow page table mapping updates on the main graphics hardware vendors (AMD & NV) ...
[Image: SFS flowchart from the DirectX sampler feedback specification]

The flowchart above is taken from the DirectX specifications on sampler feedback. When you look at the bottom step of the circular graph, programmers are virtually forced to use horrendously slow APIs like UpdateTileMappings/vkQueueBindSparse in the hot path (per-frame) for this texture streaming system to work ...
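
To make that concrete, here is a minimal sketch (my illustration, not from the spec) of the per-frame hot path being criticised: every residency change for a reserved/tiled texture has to go through ID3D12CommandQueue::UpdateTileMappings (vkQueueBindSparse is the Vulkan analogue).

```cpp
// Map a single 64 KB tile of a reserved (tiled) texture into a tile-pool heap,
// e.g. because sampler feedback reported that region as needed this frame.
// Many such page-table updates per frame is the slow path described above.
#include <d3d12.h>

void MapOneTile(ID3D12CommandQueue* queue, ID3D12Resource* tiledTexture,
                ID3D12Heap* tilePoolHeap, UINT tileX, UINT tileY,
                UINT subresource, UINT heapTileOffset) {
    D3D12_TILED_RESOURCE_COORDINATE coord = {};
    coord.X = tileX;
    coord.Y = tileY;
    coord.Subresource = subresource;

    D3D12_TILE_REGION_SIZE regionSize = {};
    regionSize.NumTiles = 1;  // one 64 KB tile

    const D3D12_TILE_RANGE_FLAGS rangeFlags = D3D12_TILE_RANGE_FLAG_NONE;
    const UINT rangeTileCount = 1;

    // This updates GPU page tables on the queue timeline.
    queue->UpdateTileMappings(tiledTexture,
                              1, &coord, &regionSize,
                              tilePoolHeap,
                              1, &rangeFlags, &heapTileOffset, &rangeTileCount,
                              D3D12_TILE_MAPPING_FLAG_NONE);
}
```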
 
Is that really surprising? It seems clear to me that most games are still targeting 30 FPS on console. That meant current-gen consoles could achieve 60 FPS with good rendering resolutions on cross-gen games, but now that the cross-gen period is over games are back to 30 FPS only or 30 FPS by default with a 60 FPS mode with serious compromises.
Yes, for me it's surprising. From recollection, the PS4 Pro and One X were apparently aimed at 4K, and now we are back to below 1080p with, to be honest, not that much of a push in graphics.
 
And all this thanks to the high-end, multi-thousand-dollar PC graphics cards used by 1% of the market! And to the developers who can't exercise self-control and take these high-end GPUs as their target platform... Bravo!

How about making 1440p plus FSR (this combination provides 4K image quality) at 60 FPS the console standard? Then, on PC, the extras provided by the more expensive hardware would come on top of that console baseline, e.g. full ray tracing or a stable 120 FPS, etc.

Instead, 90% of gamers can play in crappy 720p-1080p image quality. Bravo!

And don't let anyone tell me that you can't make beautiful and spectacular games in high resolution with graphics optimized for console hardware! Because it can be done.
 