Digital Foundry Article Technical Discussion [2023]

It depends on what the FPS is in the "worst case" situation. If the FPS is close to 30 there, then you will need a near 2X boost. I can see FPS being around 60 on the barren planets and then tanking in the cities, like we saw in The Witcher 3 and like Alex showed with Star Citizen.
But a 2X boost to what?

I've yet to see anything demonstrated in the hour of Starfield that they've shown off that looks like the game is stressing the CPU. Fallout 4 could get quite hectic in some missions, particularly the Battle for Bunker Hill and - if you went that path - the Brotherhood ending with a full-on assault of the Institute with Liberty Prime. In the deep dive, they showed off new mechanics but there was nothing obvious that screamed "needs a powerful CPU". For context, Starfield's CPU requirements on Steam are AMD Ryzen 5 2600X (minimum) and AMD Ryzen 5 3600X (recommended). I don't think it's the CPU.

If the GPU is stretched, there are several options to reduce the load on the GPU - remember they have chosen to target native 4K on Series X and 1440p on Series S. A lower resolution on Series X, dynamic resolution, VRR, and adjusting asset quality would make a massive difference, while likely not being appreciable to a lot of people.

Once it's launched, the PC build will provide insight into how much headroom the Xbox Series consoles might have.
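
On the dynamic resolution point, here's a minimal sketch of the kind of feedback loop engines typically use; the thresholds and names are invented and nothing here is specific to Starfield:

```
// Minimal sketch of a dynamic-resolution heuristic (illustrative only;
// thresholds and names are invented, nothing here is Starfield-specific).
// The idea: nudge the render scale down when the measured GPU frame time
// exceeds the budget, and creep back up when there's comfortable headroom.
#include <algorithm>

struct DynamicResolution {
    float scale    = 1.0f;   // fraction of native resolution per axis
    float minScale = 0.7f;   // floor, e.g. ~70% of 2160p
    float targetMs = 16.6f;  // frame budget (33.3f for a 30 fps target)

    void update(float gpuFrameMs) {
        if (gpuFrameMs > targetMs) {
            scale -= 0.05f;                    // over budget: drop resolution
        } else if (gpuFrameMs < targetMs * 0.9f) {
            scale += 0.02f;                    // ~10% headroom: scale back up
        }
        scale = std::clamp(scale, minScale, 1.0f);
    }
};
```

Paired with VRR, the occasional frame that still slips past the budget is far less jarring than it would be on a fixed-refresh display.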
 
But a 2X boost to what?

I've yet to see anything demonstrated in the hour of Starfield that they've shown off that looks like the game is stressing the CPU. Fallout 4 could get quite hectic in some missions, particularly the Battle for Bunker Hill and - if you went that path - the Brotherhood ending with a full-on assault of the Institute with Liberty Prime. In the deep dive, they showed off new mechanics but there was nothing obvious that screamed "needs a powerful CPU". For context, Starfield's CPU requirements on Steam are AMD Ryzen 5 2600X (minimum) and AMD Ryzen 5 3600X (recommended). I don't think it's the CPU.

If the GPU is stretched, there are several options to reduce the load on the GPU - remember they have chosen to target native 4K on Series X and 1440p on Series S. A lower resolution on Series X, dynamic resolution, VRR, and adjusting asset quality would make a massive difference, while likely not being appreciable to a lot of people.

Once it's launched, the PC build will provide insight into how much headroom the Xbox Series consoles might have.
Well, they have procedural generation of content, lighting and weather simulation, and potentially larger NPC counts (New Atlantis being the biggest city they have made with "tons of NPCs" according to IGN).

If those requirements are for 60 FPS, then yes it's likely not the issue, but I don't believe this has been confirmed.
 
For the 'average gamer', is there a significant difference between bilinear upscaling and any form of reconstruction at all? We can hand-wave away a ton of image quality improvements over the years by referring to some hypothetical 'average gamer' who only turns on their console for CoD and Madden.

It's about as handwavey as someone stating that most gamers would easily notice the difference between FSR 2.x and DLSS 2.x. Yes, some can, to a greater or lesser degree.

Likewise, it's about as handwavey as someone stating that DLSS 2.x or FSR 2.x is always an improvement over "native". Again, true for some people, maybe even most, but not true for all. I myself rarely find either one an improvement, due to artifacts and instability when in motion. The only cases where it has been an improvement are when it's better than a game's own "forced" temporal solution. So, as that's likely to increase in usage as time goes on, my view of DLSS/FSR/XeSS will likely change. But currently almost all titles I play do not rely on a temporal solution for the final output render.

When it comes to something like this that is very much an opinion based on personal perception, it behooves everyone to remember that not everyone sees things the same, especially when it comes to the downsides of any particular tech. :p The downsides can often completely negate any upside of a given tech, depending on how sensitive an individual is to them (how easily they see them and how distracting they find them).

For example, I'm fine with chromatic aberration, but many aren't. Many are fine with motion blur and DOF, but those make me physically ill. Motion artifacts from DLSS/FSR/XeSS are another one of those: some are fine with them and don't notice them, but others (like me) constantly notice them to the point where it's a constant barrage of distractions.

BTW - this wasn't directed at you. :) There are just too many people using their own opinions to overgeneralize something.

Regards,
SB
 
Or developers will still aim for 30fps and use the extra power to push the visuals harder.

The problem with 30 FPS is that temporally based rendering techniques become less effective, meaning there's a greater need for "native" rendering in order to avoid the motion artifacting that temporally based solutions show at low framerates.

But that means that while you gain some rendering time by going with a lower framerate, you lose the rendering-time savings that a good temporal solution could bring.

It's not a simple calculation of 30 FPS = more rendering time = better visuals, because the flipside is that 60 FPS means potentially more aggressive use of a temporal solution = fewer resources used per frame = better visuals. Temporal artifacting in motion is less noticeable at higher framerates; the rule of thumb is that 60 FPS is the point where most temporal artifacts are unnoticeable, or at least minimally intrusive, to most people if the temporal solution is well implemented.
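
To put rough numbers on that tradeoff (a back-of-the-envelope sketch with assumed resolutions, not figures from any particular game):

```
// Back-of-the-envelope comparison: native 4K at 30 fps versus a 1440p
// internal resolution reconstructed to 4K at 60 fps (assumed numbers).
#include <cstdio>

int main() {
    const double native4k = 3840.0 * 2160.0;  // ~8.3M shaded pixels per frame
    const double internal = 2560.0 * 1440.0;  // ~3.7M shaded pixels per frame
    const double ms30     = 1000.0 / 30.0;    // ~33.3 ms frame budget
    const double ms60     = 1000.0 / 60.0;    // ~16.7 ms frame budget

    // Time available per shaded pixel in each scenario, in nanoseconds.
    std::printf("native 4K @ 30 fps  : %.2f ns per shaded pixel\n",
                ms30 * 1e6 / native4k);
    std::printf("1440p -> 4K @ 60 fps: %.2f ns per shaded pixel\n",
                ms60 * 1e6 / internal);
    return 0;
}
```

In that made-up comparison the 60 FPS + reconstruction path actually leaves slightly more time per shaded pixel than native 4K at 30 FPS; whether the reconstructed image holds up in motion is the separate, perceptual question.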

Not to mention that you've already halved your motion resolution by going from 60 FPS to 30 FPS, meaning that things will always inherently look worse in motion without resorting to heavy blurring (motion blur, for instance).

Regards,
SB
 
The problem with 30 FPS is that temporally based rendering techniques become less effective, meaning there's a greater need for "native" rendering in order to avoid the motion artifacting that temporally based solutions show at low framerates.

But that means that while you gain some rendering time by going with a lower framerate, you lose the rendering-time savings that a good temporal solution could bring.

It's not a simple calculation of 30 FPS = more rendering time = better visuals, because the flipside is that 60 FPS means potentially more aggressive use of a temporal solution = fewer resources used per frame = better visuals. Temporal artifacting in motion is less noticeable at higher framerates; the rule of thumb is that 60 FPS is the point where most temporal artifacts are unnoticeable, or at least minimally intrusive, to most people if the temporal solution is well implemented.

Not to mention that you've already halved your motion resolution by going from 60 FPS to 30 FPS, meaning that things will always inherently look worse in motion without resorting to heavy blurring (motion blur, for instance).

Regards,
SB
If the average time to get through FF and scene setup is 8ms every frame, that leaves 8ms for post-processing, which is typically what makes the image look good.

At 30fps you get about 24ms of post-processing; it's going to be significantly better, and you don't need to spend the power on higher resolution or leverage a TAA solution. With more time comes more options.
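
Spelling that arithmetic out, taking the 8ms setup figure as given:

```
// Frame-budget arithmetic behind the figures above (the 8 ms setup cost is
// taken as given; the rest follows from the 60 fps and 30 fps frame times).
#include <cstdio>

int main() {
    const double setupMs = 8.0;             // scene setup cost per frame
    const double frame60 = 1000.0 / 60.0;   // ~16.7 ms total at 60 fps
    const double frame30 = 1000.0 / 30.0;   // ~33.3 ms total at 30 fps

    std::printf("60 fps: %.1f ms left for lighting/post\n", frame60 - setupMs);
    std::printf("30 fps: %.1f ms left for lighting/post\n", frame30 - setupMs);
    return 0;
}
```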
 
The problem with 30 FPS is that temporally based rendering techniques become less effective, meaning there's a greater need for "native" rendering in order to avoid the motion artifacting that temporally based solutions show at low framerates.

But that means that while you gain some rendering time by going with a lower framerate, you lose the rendering-time savings that a good temporal solution could bring.

It's not a simple calculation of 30 FPS = more rendering time = better visuals, because the flipside is that 60 FPS means potentially more aggressive use of a temporal solution = fewer resources used per frame = better visuals. Temporal artifacting in motion is less noticeable at higher framerates; the rule of thumb is that 60 FPS is the point where most temporal artifacts are unnoticeable, or at least minimally intrusive, to most people if the temporal solution is well implemented.

Not to mention that you've already halved your motion resolution by going from 60 FPS to 30 FPS, meaning that things will always inherently look worse in motion without resorting to heavy blurring (motion blur, for instance).

Regards,
SB

It was never a problem on PS4 Pro or One X.
 
If the average time to get through FF and scene setup is 8ms every frame, that leaves 8ms for post-processing, which is typically what makes the image look good.

At 30fps you get about 24ms of post-processing; it's going to be significantly better, and you don't need to spend the power on higher resolution or leverage a TAA solution. With more time comes more options.
I would not say so. TAA definitely performs worse at 30fps, and some TAA techniques don't apply well under 30fps. To what extent is another question though.

Camera movement covers around 2x the distance per frame in a 30fps mode compared to 60fps, so image discrepancies are more likely to happen. A good example is CoD's filmic SMAA T2x, which has a temporal component featuring both: 1. a two-frame jitter accumulation; 2. another exponential history-buffer accumulation.

When performing the first, two-frame jitter accumulation, they reject the history sample by comparing the current frame's sample with the sample from two frames ago. This is because they want to compare samples at the same jitter position in the jitter sequence, to avoid the flickering caused by the jitter itself (a common reason why TAA sometimes fails). Now, CoD targets 60fps on console. The problem is, if the framerate drops to 30, the time gap between those frames is 2x larger, and it is more likely for the history sample to fail (i.e. need to be clamped/clipped into the colour bound). This is also why they only perform the 2-frame jitter at 60fps instead of the more common 8-sample Halton jitter sequence (which provides a better supersampling scheme) -- otherwise they would need to trace 9 frames back, which is not only VRAM-inefficient, but the history sample would probably be rejected anyway.

In general I would say TAA does benefit from a higher framerate. Even for a common exponential accumulation implementation, a higher framerate means a lower history-buffer rejection rate because the frames are closer together.
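
For anyone following along, here's a minimal single-pixel sketch of that kind of history accumulation with neighbourhood clamping; it's a generic TAA pattern, not Call of Duty's actual code, and every name in it is invented:

```
// Generic TAA history accumulation for one pixel (illustrative sketch only,
// not Call of Duty's implementation). The key idea: the reprojected history
// colour is clamped to the min/max of the current frame's 3x3 neighbourhood,
// so a stale sample (more likely at 30 fps, where the camera moves ~2x as far
// per frame) gets rejected instead of ghosting.
#include <algorithm>
#include <array>

struct Color { float r, g, b; };

static Color clampToNeighborhood(const Color& history,
                                 const std::array<Color, 9>& neighborhood) {
    Color lo{1e9f, 1e9f, 1e9f}, hi{-1e9f, -1e9f, -1e9f};
    for (const Color& c : neighborhood) {
        lo = {std::min(lo.r, c.r), std::min(lo.g, c.g), std::min(lo.b, c.b)};
        hi = {std::max(hi.r, c.r), std::max(hi.g, c.g), std::max(hi.b, c.b)};
    }
    return {std::clamp(history.r, lo.r, hi.r),
            std::clamp(history.g, lo.g, hi.g),
            std::clamp(history.b, lo.b, hi.b)};
}

// Blend the (clamped) reprojected history with the current jittered sample.
// A typical blend factor keeps ~90% history when it survives the clamp.
static Color resolveTAA(const Color& current,
                        const Color& reprojectedHistory,
                        const std::array<Color, 9>& neighborhood,
                        float historyWeight = 0.9f) {
    const Color h = clampToNeighborhood(reprojectedHistory, neighborhood);
    return {current.r + (h.r - current.r) * historyWeight,
            current.g + (h.g - current.g) * historyWeight,
            current.b + (h.b - current.b) * historyWeight};
}
```

The larger the frame-to-frame motion (roughly twice as large per frame at 30fps), the more often the reprojected history lands outside the neighbourhood bound and gets clamped away, which is exactly the failure mode described above.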
 
I would not say so. TAA definitely performs worse at 30fps, and some TAA techniques don't apply well under 30fps. To what extent is another question though.

Camera movement covers around 2x the distance per frame in a 30fps mode compared to 60fps, so image discrepancies are more likely to happen. A good example is CoD's filmic SMAA T2x, which has a temporal component featuring both: 1. a two-frame jitter accumulation; 2. another exponential history-buffer accumulation.

When performing the first, two-frame jitter accumulation, they reject the history sample by comparing the current frame's sample with the sample from two frames ago. This is because they want to compare samples at the same jitter position in the jitter sequence, to avoid the flickering caused by the jitter itself (a common reason why TAA sometimes fails). Now, CoD targets 60fps on console. The problem is, if the framerate drops to 30, the time gap between those frames is 2x larger, and it is more likely for the history sample to fail (i.e. need to be clamped/clipped into the colour bound). This is also why they only perform the 2-frame jitter at 60fps instead of the more common 8-sample Halton jitter sequence (which provides a better supersampling scheme) -- otherwise they would need to trace 9 frames back, which is not only VRAM-inefficient, but the history sample would probably be rejected anyway.

In general I would say TAA does benefit from a higher framerate. Even for a common exponential accumulation implementation, a higher framerate means a lower history-buffer rejection rate because the frames are closer together.
But you don’t need to use TAA with 30fps titles. You could get away with other AA techniques that don’t require temporal information.

MSAA is still a decent solution; I suppose one could leverage FSR 1.0 as well.
 
But you don’t need to use TAA with 30fps titles. You could get away with other AA techniques that don’t require temporal information.
I personally couldn't agree with this statement. Complex PBR materials bring high-frequency shading aliasing which can only be reduced by some form of supersampling. In addition, many other dithered effects require a temporal resolve to smooth out anyway (alpha-tested hair, for example).

That being said, we still have many 30fps games with high-quality TAA from last gen and this gen, so that's why I say "To what extent is another question though".
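
As a concrete illustration of the "needs a temporal resolve" case, here's a hedged sketch of the dithered/stochastic alpha-test pattern often used for hair: transparency is approximated by a per-pixel, per-frame keep-or-discard threshold, and the result only looks smooth once a temporal pass accumulates several frames. Everything here, hash constants included, is invented for illustration:

```
// Dithered/stochastic alpha test for one fragment (illustrative sketch).
// Instead of alpha blending, the fragment is kept or discarded against a
// noise threshold that shifts every frame; without temporal accumulation
// this flickers visibly, which is why alpha-tested hair leans on TAA.
#include <cstdint>

// Cheap per-pixel, per-frame hash mapped to [0, 1] (constants are arbitrary).
static float hashToUnit(uint32_t x, uint32_t y, uint32_t frame) {
    uint32_t h = x * 374761393u + y * 668265263u + frame * 2246822519u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return static_cast<float>(h & 0xFFFFFFu) / 16777215.0f;
}

// True if the fragment survives the alpha test this frame.
static bool ditheredAlphaTest(float alpha, uint32_t x, uint32_t y, uint32_t frame) {
    return alpha > hashToUnit(x, y, frame);
}
```

Averaged over enough frames the pass-through rate converges to the alpha value; at 30fps there are simply fewer frames per unit of motion to average over.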
 
MSAA is still a decent solution; I suppose one could leverage FSR 1.0 as well.
MSAA is quite heavy in a deferred rendering setup. It's basically doing supersampling on selected pixels. It is also heavily invasive to the rendering pipeline, because you have to write a separate lighting pass that is super complicated. FSR 1.0 itself doesn't apply any form of anti-aliasing (it does quite the opposite, raising the contrast of edges), and AMD's whitepaper clearly states the image needs to be anti-aliased before being passed into the FSR 1.0 pipeline.
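
For reference, my understanding of AMD's FSR 1.0 guidance is that the frame should already be anti-aliased and tonemapped before the two FSR passes (EASU upscale, then RCAS sharpen), with grain and UI composited afterwards. A sketch of that pass ordering, with all function names invented:

```
// Rough FSR 1.0 pass ordering as I understand AMD's guidance (stub functions,
// invented names; a sketch of the ordering, not a real renderer or the FSR API).
static void renderSceneAtLowerResolution() {}  // e.g. 1440p internal
static void applyAntiAliasing()            {}  // FSR 1.0 expects an anti-aliased input
static void toneMap()                      {}  // EASU wants a perceptual-space image
static void fsrEasuUpscale()               {}  // spatial upscale to output resolution
static void fsrRcasSharpen()               {}  // contrast-adaptive sharpening pass
static void compositeGrainAndUI()          {}  // noise and UI go on after the upscale

void renderFrame() {
    renderSceneAtLowerResolution();
    applyAntiAliasing();
    toneMap();
    fsrEasuUpscale();
    fsrRcasSharpen();
    compositeGrainAndUI();
}
```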
 
I personally couldn't agree with this statement. Complex PBR materials bring high-frequency shading aliasing which can only be reduced by some form of supersampling. In addition, many other dithered effects require a temporal resolve to smooth out anyway (alpha-tested hair, for example).

That being said, we still have many 30fps games with high-quality TAA from last gen and this gen, so that's why I say "To what extent is another question though".
Between doubling the frame rate and getting higher image quality through temporal solutions, versus halving the frame rate and spending more compute per pixel while dealing with more high-frequency issues, I believe the latter is more likely to produce a next-generation image than the former.

There's just not enough time available for these consoles to do significant work on improving the image at 16.6ms.
 
But a 2X boost to what?

I've yet to see anything demonstrated in the hour of Starfield that they've shown off that looks like the game is stressing the CPU. Fallout 4 could get quite hectic in some missions, particularly the Battle for Bunker Hill and - if you went that path - the Brotherhood ending with a full-on assault of the Institute with Liberty Prime. In the deep dive, they showed off new mechanics but there was nothing obvious that screamed "needs a powerful CPU". For context, Starfield's CPU requirements on Steam are AMD Ryzen 5 2600X (minimum) and AMD Ryzen 5 3600X (recommended). I don't think it's the CPU.

If the GPU is stretched, there are several options to reduce the load on the GPU - remember they have chosen to target native 4K on Series X and 1440p on Series S. A lower resolution on Series X, dynamic resolution, VRR, and adjusting asset quality would make a massive difference, while likely not being appreciable to a lot of people.

Once it's launched, the PC build will provide insight into how much headroom the Xbox Series consoles might have.
I think DF found 1296p + FSR2 from the gameplay shown running on XSX.
 
MSAA is quite heavy in a deferred rendering setup. It's basically doing supersampling on selected pixels. It is also heavily invasive to the rendering pipeline, because you have to write a separate lighting pass that is super complicated. FSR 1.0 itself doesn't apply any form of anti-aliasing (it does quite the opposite, raising the contrast of edges), and AMD's whitepaper clearly states the image needs to be anti-aliased before being passed into the FSR 1.0 pipeline.
Yea, I forgot FSR was FidelityFX Super Resolution. Yea, you still need to AA it. Have we seen any game with MSAA mixed with FSR?
 
Yea, I forgot FSR was FidelityFX Super Resolution. Yea, you still need to AA it. Have we seen any game with MSAA mixed with FSR?
I don't think so? MSAA is quite uncommon nowadays. The last game I know of that uses MSAA is Forza Horizon 5. But I think they use a forward lighting setup (probably Forward+ with a thin G-buffer), so theoretically MSAA doesn't supersample the lighting calculation anyway, meaning only the geometry edges get anti-aliased.
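
To make the "only geometry edges get anti-aliased" point concrete, here's a CPU-side conceptual sketch (not GPU code): with MSAA the shader runs once per triangle per pixel, that single shaded colour is copied into whichever coverage samples the triangle hits, and the resolve averages the samples, so only pixels straddling a triangle edge end up blended:

```
// Conceptual model of 4x MSAA for a single pixel (CPU-side illustration only).
#include <array>

struct Color { float r, g, b; };

// A triangle covering this pixel shades once, and that single colour is
// written into every coverage sample the triangle hits (bit i = sample i).
static void writeCoverage(std::array<Color, 4>& samples,
                          unsigned coverageMask, const Color& shadedOnce) {
    for (unsigned i = 0; i < 4; ++i)
        if (coverageMask & (1u << i)) samples[i] = shadedOnce;
}

// The resolve averages the samples: interior pixels (one triangle covers all
// four samples) look identical to no AA, edge pixels get a blended colour.
static Color resolveMsaa4x(const std::array<Color, 4>& samples) {
    Color out{0.0f, 0.0f, 0.0f};
    for (const Color& s : samples) {
        out.r += s.r; out.g += s.g; out.b += s.b;
    }
    return {out.r / 4.0f, out.g / 4.0f, out.b / 4.0f};
}
```

Shading aliasing inside a surface (the specular sparkle from complex PBR materials mentioned earlier) never gets touched, which is the limitation being described.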
 
But a 2X boost to what?

I've yet to see anything demonstrated in the hour of Starfield that they've shown off that looks like the game is stressing the CPU. Fallout 4 could get quite hectic in some missions, particularly the Battle for Bunker Hill and - if you went that path - the Brotherhood ending with a full-on assault of the Institute with Liberty Prime. In the deep dive, they showed off new mechanics but there was nothing obvious that screamed "needs a powerful CPU". For context, Starfield's CPU requirements on Steam are AMD Ryzen 5 2600X (minimum) and AMD Ryzen 5 3600X (recommended). I don't think it's the CPU.

If the GPU is stretched, there are several options to reduce the load on the GPU - remember they have chosen to target native 4K on Series X and 1440p on Series S. A lower resolution on Series X, dynamic resolution, VRR, and adjusting asset quality would make a massive difference, while likely not being appreciable to a lot of people.

Once it's launched, the PC build will provide insight into how much headroom the Xbox Series consoles might have.

Fallout 4's AMD CPU requirements are: Min - Phenom II X4 945, Rec - AMD FX-9590 4.7GHz (which, let's just say, was generous if 60 fps consistency was your target). Those are lower than Doom Eternal's (1200X and 1800X respectively), yet I think it's safe to say that Fallout 4 is much more CPU-limited than Doom Eternal on the PC side.

The issue with Fallout 4 also wasn't just game systems/mechanics on the user-facing side; they basically traded performance optimization for content-creation optimization, which I think is likely going to be the case for Starfield as well.
 
In terms of the 30fps/60fps console discussion, my feeling is that what's going to be important going forward is the adoption rate of 120Hz/VRR and, to some extent, OLED TVs. 120Hz/VRR being essentially the target standard would simply give much more flexibility in terms of pushing >30fps without worrying about necessarily hitting 60 fps consistently. OLED adoption might change the general acceptability of 30 fps vs 60 fps.
 
I don't think so? MSAA is quite uncommon nowadays. The last game I know of that uses MSAA is Forza Horizon 5. But I think they use a forward lighting setup (probably Forward+ with a thin G-buffer), so theoretically MSAA doesn't supersample the lighting calculation anyway, meaning only the geometry edges get anti-aliased.
Man, right. Good discussion; it's been a while since I've actually discussed this stuff. It's all coming back to me now. MSAA only has good performance with forward rendering, but forward struggles with large numbers of dynamic lights, thus Forward+.

Yea, very few games are like that; FH5 is probably the last one, and I guess VR titles. I would be curious to see what Fable is using now that we are on that topic.

@Dictator not sure if you have any additional info to add about FH5 we may have overlooked
 