Current Generation Games Analysis Technical Discussion [2024] [XBSX|S, PS5, PC]

This game seems to have very serious problems with screen tearing on console in the full release.

(cued to relevant section)

I assume John's DF video from today only showed the PC version.
Never even heard of this title. Interesting.
The Series S is holding up well for its target; the Series X leaves more to be desired. The PS5 looks like a horrible experience.
 
The PC port of Horizon Forbidden West is another example of how outdated rendering techniques can hold back modern GPUs:

The 4090 is only 66% faster than a 6900 XT at 4K. In Alan Wake 2, 1080p is enough to produce the same gap: https://www.techpowerup.com/review/alan-wake-2-performance-benchmark/6.html
Power consumption of the 4090 is under 350W at 4K, while in Alan Wake 2 the GPU draws 100W more even at 1080p. That's 20%+ of performance that has simply vanished here...

Like with UE5, these games don't scale with lower resolutions on modern Nvidia GPUs, so playing at 4K with DLSS Performance gives nearly the same frame rate with much better image quality.
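A quick back-of-the-envelope sketch of where that "20%+" figure comes from, using only the numbers above and assuming roughly 450W as what the card sustains when fully GPU-limited (power headroom is of course only a loose proxy for lost performance):

```python
# Back-of-the-envelope: how much of the 4090's power budget goes unused in
# Horizon Forbidden West at 4K, per the figures quoted above.
# Assumption: ~450 W sustained draw when fully GPU-limited (Alan Wake 2), and
# power headroom is only a rough proxy for performance left on the table.

observed_draw_w = 350.0    # HFW at 4K
saturated_draw_w = 450.0   # Alan Wake 2 on the same card (100 W more)

unused_fraction = 1.0 - observed_draw_w / saturated_draw_w
print(f"Power budget left unused: {unused_fraction:.0%}")  # ~22%
```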
 
The PC port of Horizon Forbidden West is another example of how outdated rendering techniques can hold back modern GPUs:

The 4090 is only 66% faster than a 6900 XT at 4K. In Alan Wake 2, 1080p is enough to produce the same gap: https://www.techpowerup.com/review/alan-wake-2-performance-benchmark/6.html
Power consumption of the 4090 is under 350W at 4K, while in Alan Wake 2 the GPU draws 100W more even at 1080p. That's 20%+ of performance that has simply vanished here...

Like with UE5, these games don't scale with lower resolutions on modern Nvidia GPUs, so playing at 4K with DLSS Performance gives nearly the same frame rate with much better image quality.
66% is within the norm at 4K. The 4090 is generally 60-70% faster than the 3090, and the 3090 is around 10% faster than the 6900 XT at 4K, so this is in the expected ballpark.
 
The PC port of Horizon Forbidden West is another example of how outdated rendering techniques can hold back modern GPUs:

The 4090 is only 66% faster than a 6900 XT at 4K. In Alan Wake 2, 1080p is enough to produce the same gap: https://www.techpowerup.com/review/alan-wake-2-performance-benchmark/6.html
Power consumption of the 4090 is under 350W at 4K, while in Alan Wake 2 the GPU draws 100W more even at 1080p. That's 20%+ of performance that has simply vanished here...

Like with UE5, these games don't scale with lower resolutions on modern Nvidia GPUs, so playing at 4K with DLSS Performance gives nearly the same frame rate with much better image quality.

TechPowerUp has the 4090 as roughly 80% faster than the 6900 XT.

This 66% result sits somewhat below that average, which is explained by the game running better on AMD GPUs than on Nvidia ones.

So there is nothing wrong with the game or how it's performing, and any point you're trying to make or imply doesn't hold up.
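For what it's worth, here's that comparison as a quick sketch with the 6900 XT as the baseline, using the ~80% average and the 66% HFW result quoted above (the exact shortfall depends on how you slice the ratio):

```python
# Quick sketch: how far below its typical 4K standing the 4090 lands in HFW,
# with the 6900 XT as the 100-point reference. Figures are the ones quoted above.

baseline = 100.0                   # 6900 XT
rtx4090_typical = baseline * 1.80  # ~80% faster on average (TechPowerUp 4K chart)
rtx4090_hfw = baseline * 1.66      # ~66% faster in Horizon Forbidden West at 4K

shortfall = 1.0 - rtx4090_hfw / rtx4090_typical
print(f"4090 in HFW vs its typical 4K result: {shortfall:.1%} below")  # ~7.8%
```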

 
It’s a PS4 game.

Looks quite good for that as a base too.

The sky is stunning, and it's approaching a polycount similar to current-gen-only titles (granted, roughly half the art was designed with that in mind).

I do hope they got someone to work on the animation systems. The climbing makes me appreciate what Ubisoft has been accomplishing for a long time now; the amount of clipping straight through geometry while climbing in Forbidden West is very high. The masses of dense vegetation also highlight how little of it actually animates: there's some vertex-shader character interaction, but it's minimal, so characters just clip straight through masses of knee- or waist-high plants as if they weren't there.
 
I'm VERY impressed with the port of Horizon Forbidden West so far. I think this is the best "out of the gate" port that Nixxes has ever done. There are only some very minor issues I've spotted thus far. Mind you, I'm only a handful of hours in.

Tried it out on the Steam Deck too! It runs very respectably there as well. It can drop into the mid-20s during busy moments, but hell... that's asking a lot of a portable gaming device. It's crazy seeing a current-generation game running and looking this good in the palm of your hands.

Here are some images I downscaled to roughly the physical size of the Steam Deck's screen. This is basically what you're looking at while playing portably. Crazy impressive IMO.

[Attached: Steam Deck comparison screenshots]
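If anyone wants to make similar comparison shots, here's a rough sketch of the downscaling step, assuming the Deck's 7-inch 1280x800 panel and a 27-inch 1440p monitor as the display you'd view the result on (the monitor, the PPI maths, and the file names are all just placeholder assumptions):

```python
# Rough sketch: shrink a screenshot so that, viewed on a desktop monitor, it
# covers roughly the same physical area as the Steam Deck's 7" 1280x800 screen.
# Assumptions: 27" 2560x1440 viewing monitor (~109 PPI); file names are placeholders.
from PIL import Image

DECK_WIDTH_INCHES = 5.94       # horizontal width of a 7" 16:10 panel
MONITOR_PPI = 2560 / 23.5      # ~109 PPI for a 27" 1440p display

img = Image.open("hfw_screenshot.png")
target_w = round(DECK_WIDTH_INCHES * MONITOR_PPI)     # ~648 px wide
target_h = round(target_w * img.height / img.width)   # preserve aspect ratio
img.resize((target_w, target_h), Image.LANCZOS).save("hfw_deck_size.png")
```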
 
Yeah, that person contacted us and we are for sure interested in it. Testing such a thing with all the reconstruction techniques/upscalers is, I think, the big challenge!

I was thinking along the lines of (for example) your recent HFW analysis, where the base PC performance vs. PS5 looked pretty bad if you go by internal pixel counts and assume the PS5 is running at its full 1800p CB (it likely isn't, but we don't really know).

It would be great to learn the real resolution the PS5 is running in a matched sequence to give more context there. Even if there are no sub-60 segments on the PS5, seeing what resolution it's using at the 3060's lows would still provide some great info.

Also, seeing the behavioral differences between PC and console DRS when using equally performing GPUs would be fascinating.
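One crude way to frame that last comparison, purely as a sketch: normalize each machine's frame rate by the internal pixel count it's actually rendering. It assumes performance scales linearly with pixel count (it doesn't exactly, and reconstruction muddies what counts as a rendered pixel), and the resolutions below are made-up examples rather than measured figures:

```python
# Crude sketch: compare GPUs by internal pixels rendered per second at their
# DRS lows. Assumes roughly linear scaling with pixel count, which is only an
# approximation. The resolutions and frame rates below are made-up examples.

def pixel_throughput(width: int, height: int, fps: float) -> float:
    """Internal pixels rendered per second."""
    return width * height * fps

ps5 = pixel_throughput(2560, 1440, 60.0)      # hypothetical PS5 DRS low at 60fps
rtx3060 = pixel_throughput(1920, 1080, 60.0)  # hypothetical 3060 at its lows

print(f"PS5 / 3060 pixel throughput ratio: {ps5 / rtx3060:.2f}x")
```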
 
I was thinking along the lines of (for example) your recent HFW analysis, where the base PC performance vs. PS5 looked pretty bad if you go by internal pixel counts and assume the PS5 is running at its full 1800p CB (it likely isn't, but we don't really know).

It would be great to learn the real resolution the PS5 is running in a matched sequence to give more context there. Even if there are no sub-60 segments on the PS5, seeing what resolution it's using at the 3060's lows would still provide some great info.

Also, seeing the behavioral differences between PC and console DRS when using equally performing GPUs would be fascinating.
Every reconstruction method has a cost, though. For CBR it's sometimes 30%. For instance: native resolution + 30% cost = the reconstructed resolution, with hopefully >30% better results than that native resolution. So comparing with native res would be quite unfair. I think we should compare subjective results, which are pretty good with CBR at 60fps in Horizon.

I mean, DLSS is being compared this way, isn't it?
 
Every reconstruction method has a cost, though. For CBR it's sometimes 30%. For instance: native resolution + 30% cost = the reconstructed resolution, with hopefully >30% better results than that native resolution. So comparing with native res would be quite unfair. I think we should compare subjective results, which are pretty good with CBR at 60fps in Horizon.

I mean, DLSS is being compared this way, isn't it?
How do we even know that CBR sometimes costs 30% of the performance?
 
Trying to understand the maths there. By native, you mean framebuffer res? So for a 1920x1080 output screen res, you'd render 960x1080, say, then add 30%? Which would be 1.3x half the cost: 1.3 x 0.5 = 0.65, so 65% of the native rendering cost, a 35% saving over rendering full 1080p. :?:
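Putting that arithmetic into a trivial sketch (it assumes render time scales linearly with pixel count, and takes the 30% reconstruction overhead at face value):

```python
# Sketch of the CBR cost model being discussed: render half the pixels, then
# pay ~30% of that base cost for the reconstruction pass. Assumes render time
# scales linearly with pixel count, which is only an approximation.

native_cost = 1.0              # rendering full 1920x1080
base_cost = 0.5 * native_cost  # 960x1080, half the pixels
cbr_cost = base_cost * 1.30    # +30% for checkerboard reconstruction

print(f"CBR frame cost vs native: {cbr_cost:.0%}")      # 65%
print(f"Saving over native:       {1 - cbr_cost:.0%}")  # 35%
```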
 
Every reconstruction method has a cost, though. For CBR it's sometimes 30%. For instance: native resolution + 30% cost = the reconstructed resolution, with hopefully >30% better results than that native resolution. So comparing with native res would be quite unfair.

I completely agree with this. But understanding the base resolution is still critical to the overall picture. The additional performance hit from CB is relatively small compared to a drop from, say, 1800p CB to 1500p CB.

I think we should compare subjective results, which are pretty good with CBR at 60fps in Horizon.

I mean, DLSS is being compared this way, isn't it?

I do agree with this too, but by this measure a 3060 is arguably matching the PS5 in HFW, and likely matching or beating it in most other games, despite its base performance disadvantage. So yes, the final end-user experience is absolutely the most important measure, but it would be great to understand the base, pre-upscaling performance to see how the architectures and platforms compare directly.
 
Trying to understand the maths there. By native, you mean framebuffer res? So for a 1920x1080 output screen res, you'd render 960x1080, say, then add 30%? Which would be 1.3x half the cost: 1.3 x 0.5 = 0.65, so 65% of the native rendering cost, a 35% saving over rendering full 1080p. :?:
If only. I was shocked to see how expensive it was (at least in that game): ~30% of the whole frame's rendering time on Pro. Say it costs 10ms to render; CBR will add 3ms, so 13ms in total. Very costly, but it was worth it, as it passed for native in DF's analysis and the textures actually look sharper than the X1X version, dunno why. My guess is that they used some PS4 Pro-exclusive texture hardware tricks that could improve the CBR reconstruction (one of the exclusive hardware features only present on Pro, and likely PS5, less well known than the ID buffer, but I digress).

See my old post about gradient textures (Cerny patent)

I also think they use that trick in the Pro version of Monster Hunter World, as the textures are also noticeably sharper than the native X1X version. CBR is also very good in that game on Pro (and PS5). This is why I'm infuriated when I see devs using shitty FSR2 on PS5 for about the same rendering cost. What a shame, when they could use the specific hardware/API on Pro/PS5 and get much better results.

My thread about Dark Souls Remastered (I hope I remembered the main facts right: 27% cost, not bad memory!)
 
If only. I was shocked to see how expensive it was (at least in that game): ~30% of the whole frame's rendering time on Pro. Say it costs 10ms to render; CBR will add 3ms, so 13ms in total. Very costly, but it was worth it, as it passed for native in DF's analysis and the textures actually look sharper than the X1X version, dunno why.
Isn't that the same as I described? I assume 960x1080 renders in half the time of 1920x1080, thus 13ms versus 20ms at native: 65% of the render time, a 35% render-time saving. If it's not saving you a lot, there's no point going to the effort of implementing CBR.

To me, 30% overhead is expensive but still clearly worth doing, as it's significantly cheaper than native rendering.
 
Isn't that the same as I described? I assume 960x1080 renders in half the time of 1920x1080, thus 13ms versus 20ms at native: 65% of the render time, a 35% render-time saving. If it's not saving you a lot, there's no point going to the effort of implementing CBR.

To me, 30% overhead is expensive but still clearly worth doing, as it's significantly cheaper than native rendering.

Yeah, I'm not knocking what some developers did with checkerboarding, but compared to FSR it's slow. FSR might not always look pretty, but consoles can and do use it at a 2x2 scaling level (Performance mode on PC) or greater.

2160p CBR is reconstructed from half res, so it's always going to cost you >50% of native. 2160p FSR2 Performance from 1/4 res is only going to cost you >25%, and you can scale from any base resolution you choose.

FSR can use multiple previous frames, can be used for AA and frame interpolation, and doesn't use or reject pixels based on an arbitrary coverage pattern. FSR2 quality with 1.3x scaling on each axis is probably the closest thing to CBR performance-wise, and I think any type of CBR would have a tough time matching it in most circumstances.

Same is true for DLSS, except DLSS is even better.
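To make the cost comparison concrete, here's a sketch of just the base pixel fractions involved (the reconstruction/upscaling pass overheads come on top and differ per technique, so they're deliberately left out; the 1.3x-per-axis figure is the one from the post above):

```python
# Sketch: base rendering cost, as a fraction of native 2160p pixels, for the
# modes discussed above. Reconstruction/upscaling overhead comes on top of
# these and differs per technique, so only the base fractions are compared.

modes = {
    "CBR (half res)":                 0.5,             # 2160p checkerboarded
    "FSR2 Performance (2x per axis)": (1 / 2) ** 2,    # 1080p -> 2160p, 25%
    "FSR2 at 1.3x per axis":          (1 / 1.3) ** 2,  # ~59% of native pixels
}

for name, fraction in modes.items():
    print(f"{name:32} base cost ~ {fraction:.0%} of native")
```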
 