Current Generation Games Analysis Technical Discussion [2023] [XBSX|S, PS5, PC]

I realized perhaps I'm being a bit unfair with my comparisons, since they were at a fairly high resolution which isn't exactly representative of what an Xbox Series X would target. So here's a quick one of Gears 5 at my monitor's native res of 3840x1600: "Off" and "Quality" remain mostly indistinguishable... "Performance" definitely shows a noticeable deterioration in texture quality and IQ. The game also doesn't benefit as greatly from VRS performance-wise in this particular scene.

Off: [screenshot g2.jpg]

Quality: [screenshot g1.jpg]

Performance: [screenshot g3.jpg]



Continuing on with Dirt 5... another game which supports VRS. It was known to be a bit of a mess on Xbox, with a not-so-great VRS implementation: at launch on the Series consoles it resulted in an image which looked decidedly lower detail and lower resolution on Series X than on PS5.

3840x1600 native (the game has 3 different res settings for various buffers), Dynamic Res off, and all settings at Ultra Quality.

VRS Off: [screenshot 20230924124108-1.jpg]

VRS On: [screenshot 20230924124122-1.jpg]


There's noticeable degradation in detail on the track surface. Also, with this game I believe VRS is dynamically adjusting coarseness with the speed of the camera movement: when VRS is engaged and camera movement is settling down, I notice slightly more detail resolve right as it stops compared to just before. It's not a motion blur issue, as I turned motion blur off and the same thing happens (see the sketch after this post).

That said, I love Dirt 5. It performs absolutely beautifully on PC and just looks very nice in motion. You can go over all the settings with a fine-toothed comb to your liking, and at very high resolutions it looks extremely clean.

Any other games out there that we know support VRS and allow users to toggle on and off?
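
Regarding the speed-linked coarseness observation above, here's a toy sketch of how such a heuristic could work: pick a coarser D3D12 per-draw shading rate as camera speed rises, so detail resolves again as the camera settles. This is purely illustrative, not Codemasters' code; the function name and thresholds are invented.

```cpp
// Hypothetical illustration only: coarser shading rate at higher camera speed,
// so detail resolves again as the camera settles. Thresholds are invented.
#include <d3d12.h>

D3D12_SHADING_RATE RateForCameraSpeed(float speedMetersPerSec)
{
    if (speedMetersPerSec < 1.0f)  return D3D12_SHADING_RATE_1X1; // settled: full rate
    if (speedMetersPerSec < 10.0f) return D3D12_SHADING_RATE_2X2; // moving: coarser
    // 4x4 needs D3D12_FEATURE_DATA_D3D12_OPTIONS6::AdditionalShadingRatesSupported.
    return D3D12_SHADING_RATE_4X4;
}
```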
Nice work. On Gears I can easily see a huge degradation (pixelated, blocky textures, complete loss of detail) between native and performance. But what you could do is give us the average performance of the scene. Is it really worth the huge loss of detail?
 
> Nice work. On Gears I can easily see a huge degradation (pixelated, blocky textures, complete loss of detail) between native and performance. But what you could do is give us the average performance of the scene. Is it really worth the huge loss of detail?

You must have bionic eyes. I can see a slight difference in some limited areas of the screen by flicking back and forth between them with my face a few inches away from my 38" monitor, but in actual gameplay these would be invisible differences for around 5-7% more performance. Certainly if you are struggling to keep a locked framerate target then those percentage points could be a huge win for this level of visual compromise IMO.
 
> You must have bionic eyes. I can see a slight difference in some limited areas of the screen by flicking back and forth between them with my face a few inches away from my 38" monitor, but in actual gameplay these would be invisible differences for around 5-7% more performance. Certainly if you are struggling to keep a locked framerate target then those percentage points could be a huge win for this level of visual compromise IMO.
I think it's more a feel of less clarity. You can't necessarily point to an area and highlight the VRS-reduced shading, but the feel is a little bit watered down, like lower-resolution textures. The significant losses are in high-frequency detail, akin to missing a detail texture.

Numerically, VRS must be making some 'significant' downgrades or otherwise the shaders are just really inefficient and could be pared down anyway! If VRS is making no visual difference, swap out the higher complexity shaders with the simpler ones for all-round performance improvements at zero visual impact and don't even use VRS. ;)

Hmmm, I wonder if a reason for little VRS use is precisely that? If the visual difference for a 5% performance improving shader is largely imperceptible, just use the simpler one for an all-round 5% improvement? VRS would only then make sense when the impact on performance is significant, which will be accompanied by a notable reduction in quality that perhaps is felt too strongly and is avoided? There doesn't seem to me to be a logical reason to have a low-visual-impact VRS system.

An analogy might be image compression in an animation stream that dynamically adapts compression. You can have raw RGB and stream every frame at full quality, until you start to hit IO issues. Then you swap to 1% JPEG frames for a few frames for a much lower BW requirement while the visual difference wouldn't be noticeable...so why ever use raw RGB? Encode all the frames at 1% JPEG. If you start to hit massive BW limitations and need to reduce to 50% JPEG compression, now you're getting into visual difference territory such that you might like to optimise elsewhere.
 
> Hmmm, I wonder if a reason for little VRS use is precisely that?
VRS is widely used. The reason for little hardware VRS is the divided ecosystem: half or more of your players are on platforms that don't have it, so why spend dev time on that rather than on something which benefits them too?
 
> Hmmm, I wonder if a reason for little VRS use is precisely that? If the visual difference for a 5% performance improving shader is largely imperceptible, just use the simpler one for an all-round 5% improvement?
VRS only solves compute-bound issues. If your bandwidth is nuked, it's not going to make a difference.

So with VRS you're going to get a lot more return the higher the resolution is, and significantly less noticeable degradation. We are still a long way from seeing massive adoption of VRS, but I find it unlikely people notice it, if I'm being honest. If you all can't see DRS in action, there's no way you can see DRS and VRS together in action.

I.e., going from 1440p down to 1080p is a 43% reduction in pixels (or a 43% reduction in high-frequency detail). There's no way VRS degrades things by 43% if it's being selective about where to degrade. And if you can't see a resolution drop like this, you're not likely to see selective degradation.
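
For what it's worth, that pixel-count figure checks out. A quick back-of-envelope check:

```cpp
#include <cstdio>

int main()
{
    const double p1440 = 2560.0 * 1440.0; // 3,686,400 pixels
    const double p1080 = 1920.0 * 1080.0; // 2,073,600 pixels
    // Prints "reduction: 43.8%", close to the 43% quoted above.
    std::printf("reduction: %.1f%%\n", 100.0 * (1.0 - p1080 / p1440));
    return 0;
}
```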
 
> VRS only solves compute-bound issues. If your bandwidth is nuked, it's not going to make a difference.
>
> So with VRS you're going to get a lot more return the higher the resolution is, and significantly less noticeable degradation. We are still a long way from seeing massive adoption of VRS, but I find it unlikely people notice it, if I'm being honest. If you all can't see DRS in action, there's no way you can see DRS and VRS together in action.

IIRC, higher resolutions make it easier to get a good performance boost whilst having a negligible reduction in IQ. I think this is down to the shaders being busier and changes in luminance between adjacent pixels tending to be lower. I think @Remij's tests show this to be true.

This could mean that the Series S is going to get less of a performance boost from VRS for a given level of IQ, and it might also mean that games that don't effectively utilise the Series X's wide shader arrays will also benefit less.

It's interesting that games like Gears 5 and Starfield, which have been heavily optimised for the Series consoles, have gone in for using VRS.

It will be interesting to see if Forza uses it too, and if it does what kind of tradeoffs it offers there.
 
> IIRC, higher resolutions make it easier to get a good performance boost whilst having a negligible reduction in IQ. I think this is down to the shaders being busier and changes in luminance between adjacent pixels tending to be lower. I think @Remij's tests show this to be true.
>
> This could mean that the Series S is going to get less of a performance boost from VRS for a given level of IQ, and it might also mean that games that don't effectively utilise the Series X's wide shader arrays will also benefit less.
>
> It's interesting that games like Gears 5 and Starfield, which have been heavily optimised for the Series consoles, have gone in for using VRS.
>
> It will be interesting to see if Forza uses it too, and if it does what kind of tradeoffs it offers there.
I think large, heavy shader workloads are going to be an ideal place for it. In Starfield I feel it was used appropriately; we know compute is the bottleneck there and it hits very quickly.

Not that many games should be using it; it's easy to steer away from if you just bake everything, which Gears 5 does. But if we continue down the path of UE5 and Starfield-like titles, yeah, it's going to be necessary.
 
VRS can also vastly reduce bandwidth; it is in no way limited to compute only. Once you're running only 1/4th the computation on a set of pixels, you'll often be fetching only 1/4th the data, depending on what's being run.

VRS is a vastly better option than temporal reconstruction techniques: reconstructing shading is much easier with full-res g/z buffers, and at this point drawing the g/z buffers is becoming an increasingly cheap part of the frametime compared to everything else. VRS is one of the major reasons that Fable teaser looked so jaw-dropping (the other being that it was a pre-built cinematic; games like Star Wars Outlaws and FFXVI show how much difference a good cinematic setup can make even in realtime).

Anyway, for Starfield the major area it's used is particle effects. Just watch a ship land and you'll see VRS go overboard; I think they need a finer-tiled implementation there. But in principle, using VRS to reduce the sudden spikes caused by tons of overlapping transparency is great. Games have struggled with frame-time spikes from tons of particles for as long as they've existed; now there's something of a solution.
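
For context, the PC-side plumbing for that kind of screen-space control is D3D12's Tier 2 (image-based) VRS. A minimal sketch, assuming a compute pass has already written per-tile rates (e.g. coarser over dense particle regions) into an R8_UINT texture; the names here are placeholders, and this illustrates the API rather than Starfield's actual implementation:

```cpp
// Minimal D3D12 Tier 2 (image-based) VRS sketch. Assumes `sriTexture`
// (DXGI_FORMAT_R8_UINT, one texel per screen tile) was already filled with
// per-tile rates by a compute pass. Illustrative only.
#include <d3d12.h>

void BindShadingRateImage(ID3D12GraphicsCommandList5* cmdList,
                          ID3D12Resource* sriTexture)
{
    const D3D12_SHADING_RATE_COMBINER combiners[2] = {
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH, // keep the per-draw rate (1x1)
        D3D12_SHADING_RATE_COMBINER_MAX          // then the coarser of that and the image
    };
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, combiners);
    // The resource must be in D3D12_RESOURCE_STATE_SHADING_RATE_SOURCE when drawing.
    cmdList->RSSetShadingRateImage(sriTexture);
}
```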
 
> VRS can also vastly reduce bandwidth; it is in no way limited to compute only. Once you're running only 1/4th the computation on a set of pixels, you'll often be fetching only 1/4th the data, depending on what's being run.
Interesting. I was under the assumption that if you, say, do 1x4 VRS, you compute the 1 pixel and reapply it for the remaining 3. But you still need to write that value even if you don't compute it, so there wouldn't be bandwidth savings. I guess you save the read and write bandwidth of the computation, though.
 
> Interesting. I was under the assumption that if you, say, do 1x4 VRS, you compute the 1 pixel and reapply it for the remaining 3. But you still need to write that value even if you don't compute it, so there wouldn't be bandwidth savings. I guess you save the read and write bandwidth of the computation, though.

Write and read are different. If you're doing 1/4th-rate SSAO, you're no longer reading the depth buffer for the pixels you skip, and for the output you're just writing the result. Most bandwidth usage, and a lot of compute underutilization, comes from waiting while trawling memory for what you're looking for.
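
As a rough illustration of those read-side savings, a back-of-envelope for quarter-rate SSAO at 3840x1600; the tap count and depth format are assumptions, and cache effects are ignored:

```cpp
#include <cstdio>

int main()
{
    // Assumptions: 16-tap SSAO kernel, 32-bit depth read per tap, no caching.
    const double pixels      = 3840.0 * 1600.0;
    const double tapsPerPx   = 16.0;
    const double bytesPerTap = 4.0;
    const double fullRate  = pixels * tapsPerPx * bytesPerTap; // ~393 MB of reads
    const double coarse2x2 = fullRate / 4.0;                   // ~98 MB: 1 shaded pixel per 2x2 quad
    std::printf("full rate: %.0f MB, 2x2 coarse: %.0f MB\n",
                fullRate / 1e6, coarse2x2 / 1e6);
    return 0;
}
```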
 
> VRS is widely used. The reason for little hardware VRS is the divided ecosystem: half or more of your players are on platforms that don't have it, so why spend dev time on that rather than on something which benefits them too?
Maybe I'm misunderstanding how VRS works. If people aren't able to tell the difference between VRS On and VRS Off, why use it as opposed to using fixed rate shading with a simpler shader?
 
> Maybe I'm misunderstanding how VRS works. If people aren't able to tell the difference between VRS On and VRS Off, why use it as opposed to using fixed rate shading with a simpler shader?
A simpler shader affects all pixels it runs on. You may still want a complex shader but not have enough throughput, so you leverage VRS to select very similar pixels so that coverage is extended and less compute is required.
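
On PC, that "same complex shader, fewer invocations" idea maps directly to D3D12's per-draw (Tier 1) shading rate. A minimal sketch, assuming VRS-capable hardware; the draw call itself is just a placeholder:

```cpp
// Per-draw (Tier 1) sketch: the material keeps its full, complex pixel shader,
// but this one draw is shaded at 2x2, i.e. one invocation per four pixels.
#include <d3d12.h>

void DrawExpensiveMaterialCoarse(ID3D12GraphicsCommandList5* cmdList)
{
    // PASSTHROUGH at both combiner stages keeps the per-draw 2x2 rate,
    // ignoring any per-primitive rate or shading-rate image.
    const D3D12_SHADING_RATE_COMBINER combiners[2] = {
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH,
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH
    };
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, combiners);
    cmdList->DrawInstanced(3, 1, 0, 0); // placeholder draw (e.g. a fullscreen pass)
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr); // restore full rate
}
```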
 
> Maybe I'm misunderstanding how VRS works. If people aren't able to tell the difference between VRS On and VRS Off, why use it as opposed to using fixed rate shading with a simpler shader?
The promise of VRS (in the beginning at least) was to use it in areas of the screen where it's not noticeable to the player. For example: areas in the dark, areas of high-speed motion (like during racing), areas under heavy depth of field or motion blur, areas that are very far away, etc. VRS would reduce shading in these areas and save performance. I don't remember anyone talking about reducing overall detail under normal viewing conditions (well-lit areas, up close, etc.).
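
As a sketch of that heuristic, here's an illustrative CPU-side version that picks a per-tile rate from the cues listed above (darkness, fast motion). Real engines would do this in a compute shader writing the shading-rate image; the names and thresholds here are invented:

```cpp
// Illustrative only: per-tile shading rate chosen from darkness and motion.
#include <cstddef>
#include <cstdint>
#include <vector>

// D3D12 encodes a rate as (log2(x) << 2) | log2(y): 1x1=0x0, 2x2=0x5, 4x4=0xA.
enum : uint8_t { RATE_1X1 = 0x0, RATE_2X2 = 0x5, RATE_4X4 = 0xA };

std::vector<uint8_t> BuildShadingRateImage(const std::vector<float>& tileLuma,
                                           const std::vector<float>& tileMotion)
{
    std::vector<uint8_t> sri(tileLuma.size(), RATE_1X1); // default: full rate
    for (std::size_t i = 0; i < sri.size(); ++i) {
        if (tileMotion[i] > 0.5f)     sri[i] = RATE_4X4; // fast-moving: coarsest
        else if (tileLuma[i] < 0.05f) sri[i] = RATE_2X2; // very dark: mildly coarse
    }
    return sri;
}
```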
 
> Maybe I'm misunderstanding how VRS works. If people aren't able to tell the difference between VRS On and VRS Off, why use it as opposed to using fixed rate shading with a simpler shader?
Like others said: the purpose is to run a complex shader on fewer pixels. Complexity is usually required for your scene to look how you need it to; it's the same idea as why you'd use dynamic resolution scaling (or temporal upscaling) rather than removing something like "lights" or "transparency" from your scene.
> The promise of VRS (in the beginning at least) was to use it in areas of the screen where it's not noticeable to the player.
Not noticeable isn't the right target. The promise is that it's less noticeable than scaling down the whole image (like with DRS).
 
> Nice work. On Gears I can easily see a huge degradation (pixelated, blocky textures, complete loss of detail) between native and performance. But what you could do is give us the average performance of the scene. Is it really worth the huge loss of detail?

What huge loss of detail? Playing video games is significantly different from studying image quality by perusing still frames for academic purposes. Gears' main gameplay loop isn't about destroying enemy AIs disguised as stair rails or garbage cans. Most people playing the game aren't going to miss the loss of detail in those objects because they are not focal points.

As long as the disparity between VRS off and on isn't readily discernible in the overall imagery while gaming, and it saves a tangible amount of performance that can be used to increase framerate or improve other areas of rendering, it's doing its job.

If foveated rendering becomes a major feature in gaming, are we going to focus on how horrible the frame captures will be? Or obsess about how continually moving one's visual focus fast and randomly enough can break the eye-tracking mechanic, even though it functions perfectly well for everybody during normal gameplay?
 
> What huge loss of detail? Playing video games is significantly different from studying image quality by perusing still frames for academic purposes. Gears' main gameplay loop isn't about destroying enemy AIs disguised as stair rails or garbage cans. Most people playing the game aren't going to miss the loss of detail in those objects because they are not focal points.
>
> As long as the disparity between VRS off and on isn't readily discernible in the overall imagery while gaming, and it saves a tangible amount of performance that can be used to increase framerate or improve other areas of rendering, it's doing its job.
>
> If foveated rendering becomes a major feature in gaming, are we going to focus on how horrible the frame captures will be? Or obsess about how continually moving one's visual focus fast and randomly enough can break the eye-tracking mechanic, even though it functions perfectly well for everybody during normal gameplay?

But you could say that still-image comparisons (I have issues with this, so it isn't something I agree with), that kind of essentially academic comparison, are used as part of the marketing for games and are therefore relevant to how a game is received.

While at the same time, if the performance gains are low (say in the 5% range mentioned earlier), then you would argue they too are so small you wouldn't notice them outside of an academic comparison either. And as of now, that 5% performance gain isn't really something marketable.
 
> Should everyone start shipping games at 57fps?

They already ship them at 30fps (with dips)?

5% performance can be clawed back in many others ways, including just relying on dynamic resolution.

At which point we're also going into a discussion of which techniques have the most benefit, not just from a visual-impact-vs-performance standpoint but also an implementation one.

Aside from which, this isn't really where I'm going with this. Only that it doesn't make sense to easily dismiss the visual comparisons as academic if the performance difference can also be argued as such.
 
VRS isn't the type of rendering technology that games would advertise and market, really. The entire idea behind it is that you ideally don't know it exists or is happening. As amazingly good marketing tools as DLSS, XeSS, FSR and other reconstruction techniques are, they are all led by vendors/developers pushing for their adoption in games, and they want you to know what their amazing technology is doing. VRS isn't quite that, IMO. It skirts the line, which is to render smarter, not harder. It may help developers claw back juuuust enough performance to hit their target, but it's less useful overall, and is really scene dependent.

@Dictator I checked Returnal but couldn't find a user-toggleable VRS setting.
 