Digital Foundry Article Technical Discussion [2023]

According to John, VRS's entire point is to degrade the image. It's not there to improve IQ; quite the opposite.

It's there to smartly degrade the image in places people won't notice. But if it's not implemented well, you will very much notice it.

FSR 1 and DLSS 1, though, are a good example of "technology going against its intended function", since they were supposed to be IQ boosters but destroyed IQ in many cases with ghosting.
It is supposed to degrade unnoticeable areas so there are more resources to allocate to noticeable areas, without hindering performance or lowering resolution.
Or, to put it differently, to put more perceived detail out there without hindering performance.
At this point, it has no benefit to performance while destroying overall IQ.
 
I don't know about Gears' implementation, but I guess in that case ignorance is bliss, as people can't actually try comparing with VRS off on their console to see whether the performance improvements, resulting in a very small resolution increase (almost completely hidden by DRS + TAA), are worth it.

This is what I also noticed in all comparisons with hardware VRS on on Xbox and off on PlayStation. The image looks noticeably better to me on PS even when Tier 2 is used. Granted, I like sharp textures, but still. Even John thinks the textures are visibly worse in Doom Eternal on Xbox compared to PS (and the game actually outputs at a lower average resolution on Pro vs X1X!). And he is finally wondering if the whole technique is actually worth it.
 
Last edited:
I don't know about Gears' implementation, but I guess in that case ignorance is bliss, as people can't actually try comparing with VRS off on their console to see whether the performance improvements, resulting in a very small resolution increase (almost completely hidden by DRS + TAA), are worth it.

This is what I also noticed in all comparisons with hardware VRS on on Xbox and off on PlayStation. The image looks noticeably better to me on PS even when Tier 2 is used. Granted, I like sharp textures, but still. Even John thinks the textures are visibly worse in Doom Eternal on Xbox compared to PS (and the game actually outputs at a lower average resolution on Pro vs X1X!). And he is finally wondering if the whole technique is actually worth it.
Doom is overall at a significantly higher resolution on Xbox and maintains much more consistency in resolution than on PlayStation. We have this in the thread where I look at spectrum analysis.

Ultimately we don't play games in screenshots. And in the developer interview John does agree that without 3x zoom it's nigh impossible to notice. We just don't play games like that. Our AA and upscaling algorithms are just the same: trade-offs in which we accept temporal artifacts and edge artifacts in exchange for higher overall resolution, or somewhat better AA.

Native 4K with no temporal component will always look the same as 4K TAAU in a standstill screenshot, but motion is a completely different animal. Alex covers a lot of ghosting artifacts in many videos, caused by occlusion or just looking left and right.

VRS will look worse at a standstill to any trained eye, but in motion the overall higher resolution is assuredly going to be more noticeable.

The challenge is that as we continue to mix all sorts of technologies together, one has to ask if it continues to make sense. Compounding techniques could be an issue that requires additional navigation.

The reality is, this is still one of the few (fewer than 10-ish?) VRS titles ever released, versus the hundreds of games that are using DRS, temporal AA and upscaling. There are more games using FSR and RT than VRS. So the idea that we've seen peak VRS implementation is unlikely. People aren't going to engage with it until they need it, and apparently it's not needed yet, as there are so many other areas in the game engine that need upgrading before freeing up more compute. Wave 3/4 games may require it though; when the limits of these consoles are pushed to the edge, you will want additional tools to smooth out issues.

It's important to note that VRS can also be dynamic, and no developer has implemented such a system yet. Right now it's a constant application, but it doesn't have to be. If VRS ever becomes mainstream, implementations will get better as time goes on.
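To make the "dynamic VRS" idea concrete, here is a minimal sketch (my own illustration, not anything a shipping engine does) of a controller that treats VRS aggressiveness the way a DRS controller treats resolution, stepping it up only when the measured GPU frame time blows the budget:

```python
# Hypothetical sketch of "dynamic VRS": nudge the aggressiveness up only when
# the GPU frame time exceeds the budget, and back it off when there is headroom,
# exactly like a dynamic resolution controller but for shading rate.
def update_vrs_level(level: int, gpu_ms: float, budget_ms: float = 16.6,
                     max_level: int = 3) -> int:
    """level 0 = VRS off, 1 = conservative, 2 = moderate, 3 = aggressive."""
    if gpu_ms > budget_ms * 1.02 and level < max_level:
        return level + 1          # over budget: degrade shading one step
    if gpu_ms < budget_ms * 0.90 and level > 0:
        return level - 1          # plenty of headroom: restore detail
    return level

level = 0
for gpu_ms in [15.0, 17.4, 17.1, 16.0, 14.2, 13.9]:   # measured frame times
    level = update_vrs_level(level, gpu_ms)
    print(f"{gpu_ms:>5.1f} ms -> VRS level {level}")
```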
 
Last edited:
I don't know how reliable it is, but someone on Twitter in that thread said they checked the Gears VRS implementation on PC and said the performance increase they got with it was around 3 percent. Maybe in a very constrained scenario that matters, but these consoles haven't been pushed at all to the level where such bottom-of-the-barrel savings are even necessary.

Some people looked at the raw numbers the consoles were putting out and assumed that was the issue, but I think people underestimate how nice around 1080p with decent reconstructed upscaling can look. I think that leaves good headroom for the consoles going forward in terms of GPU load. And even for Switch 2, if it gets DLSS.
 
I don't know how reliable it is, but someone on Twitter in that thread said they checked the Gears VRS implementation on PC and said the performance increase they got with it was around 3 percent. Maybe in a very constrained scenario that matters, but these consoles haven't been pushed at all to the level where such bottom-of-the-barrel savings are even necessary.

Some people looked at the raw numbers the consoles were putting out and assumed that was the issue, but I think people underestimate how nice around 1080p with decent reconstructed upscaling can look. I think that leaves good headroom for the consoles going forward in terms of GPU load. And even for Switch 2, if it gets DLSS.
It's true. But most people accept ghosting and temporal artifacts nowadays as long as the frame rate is high enough (i.e. 60fps and above).

When we inevitably fall back to 30fps, I want to know how excited people are for temporal artifacts again. Almost no one plays Demon's Souls at 4K@30 over 1440p@60 for a reason: the static resolution does not make up for the massive loss in motion resolution and the amount of ghosting at 30.

Quite frankly, it’s a terrible experience.

In any event, you are right that DRS makes it seem pointless to require VRS. From a technical perspective, DRS relieves all bottlenecks, or most of them at least, whereas VRS largely only relieves compute.

Knowing when to use one or the other is going to be a step up over just using one.
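Purely as a hypothetical sketch of that "use the right tool for the bottleneck" idea, assuming the engine can attribute frame time to pixel-shading work: reach for VRS only when the frame is dominated by pixel shading, and fall back to DRS otherwise.

```python
# Toy decision logic, not from any real engine: VRS only recovers pixel-shading
# cost, so it is only worth reaching for first when that is where the frame
# time is actually going; otherwise a resolution drop (DRS) is what helps.
from dataclasses import dataclass

@dataclass
class FrameStats:
    gpu_ms: float            # total GPU frame time
    pixel_shader_ms: float   # portion attributable to pixel-shading work

def pick_mitigation(stats: FrameStats, budget_ms: float = 16.6) -> str:
    if stats.gpu_ms <= budget_ms:
        return "none"
    ps_fraction = stats.pixel_shader_ms / stats.gpu_ms
    return "vrs" if ps_fraction > 0.5 else "drs"

print(pick_mitigation(FrameStats(gpu_ms=18.0, pixel_shader_ms=11.0)))  # vrs
print(pick_mitigation(FrameStats(gpu_ms=18.0, pixel_shader_ms=5.0)))   # drs
```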

Secondly,
It's very easy to train people to spot VRS degradation with a static screenshot. But most people haven't the slightest clue how to spot resolution dips from DRS, because you need to see a clip in motion, on a repeating loop, next to a clip that does not drop.

And no one does this.

But we can measure it (my tool that Rich has been trialling to compare the graphical fidelity of upscaling algorithms and resolutions). It's been over a year or so and I don't think he's going to bother pursuing this further, so this is how far we got. I hope he doesn't mind me sharing this WIP.


But the point stands: when things are static and side by side, people can spot things. But in motion, your eyes don't stand a chance.
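I can't share the actual tool here, but the general idea behind a spectrum-based metric can be sketched in a few lines: the share of image energy in high spatial frequencies falls when the internal resolution drops, even after upscaling back to the output resolution, so you can track it across a clip instead of trusting your eyes. A toy version of that idea (not the real tool):

```python
# Crude spectrum-based sharpness metric: fraction of spectral energy above a
# cutoff frequency. Lower internal resolution -> less genuine high-frequency
# detail survives, even after upscaling, so this number drops.
import numpy as np

def high_freq_energy_ratio(luma: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of (non-DC) spectral energy above `cutoff` * Nyquist."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(luma))) ** 2
    h, w = luma.shape
    fy = (np.arange(h) - h / 2) / (h / 2)          # -1..1 in units of Nyquist
    fx = (np.arange(w) - w / 2) / (w / 2)
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    total = spec[radius > 0].sum()
    return float(spec[radius > cutoff].sum() / total)

sharp = np.random.rand(540, 960)
blurred = 0.25 * (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
                  + np.roll(np.roll(sharp, 1, 0), 1, 1))   # crude low-pass
print(high_freq_energy_ratio(sharp), high_freq_energy_ratio(blurred))
```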
 
There will inevitably be more and more 30fps games as the gen wears on, but I remain unconvinced that devs' ambitions are currently hampered by 60fps to the point that we'll see a huge shift away from the majority of games being 60. I don't think game design and genres will shift significantly enough to require that.

Although it may seem like it sometimes, it's not as if devs on the whole would go down to capping games at 15fps if they wanted to get more ambitious than 30fps (lol).

I think the CPUs in the current consoles will pose an interesting challenge: how many devs will try to optimize their way to 60fps on console at all costs while expanding their ambitions, as opposed to hard-capping the fps at 30 and calling it a day going forward?

The fact that it's possible at all to have games like Cyberpunk, Dead Space and such running at a solid 60 proves these consoles are performant... even if the GPU has to be treated somewhat kindly to do so 😂
 
Last edited:
In addition to Iroboto's excellent post, VRS is the equivalent of dynamic resolution scaling but for shader detail: dynamic shader scaling. It'll vary from 'native res' to 'pretty blurry' depending on scene complexity and the need to reduce processing time to get the frame out promptly. That only needs to be enough to eliminate a long frame and a bit of judder. If the end result is slightly soft detail sometimes in exchange for a rock-solid 60 fps, it'll be much appreciated. The end result is in theory no worse than all the compression we see in streamed movies, which everyone's happy with.

That said, VRS is kinda tangential to VRR. With VRR you keep detail the same and change the frame pacing, smoothing the output through the display adapting. There's not much point having both, and VRS might never gain traction as it's more limited in target platforms.
 
In testing, the Coalition found VRS increased resolution scaling by ~10% in Gears 5 (83% to 92% of 4K).

See this for an example:

[attached image]
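For a rough sense of what those percentages buy, assuming they are per-axis dynamic-resolution scales of 2160p (which is how Gears 5's DRS figures are usually quoted; the slide doesn't spell it out):

```python
# Back-of-the-envelope: 83% vs 92% per-axis scale of 2160p (assumed, not stated).
base = 2160
without_vrs = 0.83 * base                  # ~1793p
with_vrs = 0.92 * base                     # ~1987p
pixel_gain = (0.92 / 0.83) ** 2 - 1        # extra pixels shaded per frame
print(f"{without_vrs:.0f}p -> {with_vrs:.0f}p, ~{pixel_gain:.0%} more pixels")
```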


I do feel that as a technology it's really immature and new at the moment, and it's something engines need to be built around to truly take advantage of its benefits.

I also recommend this presentation for those who are interested, as it has a lot of data and frame-time comparisons of VRS on vs off.

 
Last edited:
The end result is in theory no worse than all the compression we see in streamed movies, which everyone's happy with.
In theory, but is everyone happy in practice? Sure, if compression is minimal there are no worries, or if we're just watching some YouTube video. But I doubt everyone who wants the movie experience would want to watch a heavily compressed version of Avatar 2 on their 4K TV.
In the case of Dead Space it is very noticeable. Considering that this should be a trade-off of some minuscule sacrifice of detail in unimportant areas in favor of maintaining performance and resolution, it didn't work as intended. The game has a native resolution below 4K at 30fps and at best 1080p at 60fps, in a game with small environments. Fingers crossed the implementation will improve and make more sense in the future.
 
I don't know how reliable it is, but someone on Twitter in that thread said they checked the Gears VRS implementation on PC and said the performance increase they got with it was around 3 percent. Maybe in a very constrained scenario that matters, but these consoles haven't been pushed at all to the level where such bottom-of-the-barrel savings are even necessary.

Some people looked at the raw numbers the consoles were putting out and assumed that was the issue, but I think people underestimate how nice around 1080p with decent reconstructed upscaling can look. I think that leaves good headroom for the consoles going forward in terms of GPU load. And even for Switch 2, if it gets DLSS.


That could be it. We need to keep in mind that:

“VRS is an optimization that reduces the amount of pixel shader invocations. As such, it will only see improvement on games that are GPU bound due to pixel shader work.”

In other words, you may see no gains or minimal gains. I have seen an article about a VRS implementation in a mobile game where the performance boost was more than 50%. When new games arrive that were designed with VRS in mind, we could see different results.
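That quote is also why the headline gains can look small. As a back-of-the-envelope sketch (the coverage numbers below are invented for illustration), the invocation savings are easy to estimate from how much of the screen runs at each rate, but they only turn into frame time if pixel shading is what the GPU is actually waiting on:

```python
# Estimate the fraction of pixel-shader invocations VRS avoids, given how much
# of the screen is shaded at each rate. Coverage numbers are made up.
def invocation_savings(coverage):
    """coverage: {rate: screen fraction}, e.g. {'1x1': 0.7, '2x2': 0.3}."""
    cost = {"1x1": 1.0, "2x1": 0.5, "1x2": 0.5, "2x2": 0.25, "4x4": 0.0625}
    shaded = sum(frac * cost[rate] for rate, frac in coverage.items())
    return 1.0 - shaded   # invocations avoided vs shading everything at 1x1

print(invocation_savings({"1x1": 0.70, "2x1": 0.20, "2x2": 0.10}))  # ~0.175
```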
 
In testing, the Coalition found VRS increased resolution scaling by ~10% in Gears 5 (83% to 92%).

See this for an example:

[attached image]


I do feel that as a technology it's really immature and new at the moment, and it's something engines need to be built around to truly take advantage of its benefits.

I also recommend this presentation for those who are interested, as it has a lot of data and frame-time comparisons of VRS on vs off.


I can only add that his presentation on Variable Rate Compute Shaders is excellent as well.
 
In theory, but is everyone happy in practice? Sure, if compression is minimal there are no worries, or if we're just watching some YouTube video. But I doubt everyone who wants the movie experience would want to watch a heavily compressed version of Avatar 2 on their 4K TV.
In the case of Dead Space it is very noticeable. Considering that this should be a trade-off of some minuscule sacrifice of detail in unimportant areas in favor of maintaining performance and resolution, it didn't work as intended. The game has a native resolution below 4K at 30fps and at best 1080p at 60fps, in a game with small environments. Fingers crossed the implementation will improve and make more sense in the future.
Implementation is always going to be the factor here. No one wants to watch a movie with a low bitrate, as once decompressed it looks like blocky, low-detail garbage.

Most people are willing to take the trade-off, or many more people would own $500+ Blu-ray players and UHD Dolby Vision discs.

I do wonder what the results would be if you increased VRS as motion vectors increase and completely removed VRS when the player comes to a static standstill. Once you correlate where you apply VRS with where the motion vectors are, the areas that need detail and that the player can see statically stay sharp, while the areas with more movement get VRS applied to them more.
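Something like this, purely as a speculative sketch of that idea (tile size and thresholds invented): derive the per-tile rate from the motion-vector magnitude, so a static camera stays at full rate and fast-moving regions, which motion blur will smear anyway, drop to coarser shading.

```python
# Speculative motion-driven VRS markup: coarser shading where pixels move fast.
import numpy as np

def rates_from_motion(motion_px: np.ndarray, tile: int = 8) -> np.ndarray:
    """motion_px: per-pixel motion magnitude in pixels/frame. Returns 1, 2 or 4
    per tile (1 = 1x1 full rate, 2 = 2x2, 4 = 4x4)."""
    h, w = motion_px.shape
    th, tw = h // tile, w // tile
    tiles = motion_px[:th*tile, :tw*tile].reshape(th, tile, tw, tile).mean(axis=(1, 3))
    rates = np.ones((th, tw), dtype=np.uint8)         # default: full rate
    rates[tiles > 4.0] = 2                            # fast tile   -> 2x2
    rates[tiles > 16.0] = 4                           # faster tile -> 4x4
    return rates

motion = np.zeros((1080, 1920), dtype=np.float32)
motion[:, 960:] = 20.0                                # right half pans quickly
print(np.unique(rates_from_motion(motion), return_counts=True))
```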
 
I do wonder what the results would be if you increased VRS as motion vectors increase and completely removed VRS when the player comes to a static standstill.

VRS should naturally increase in effect during movement, as motion blur creates easy targets for reduced shading rates.

So what you describe should be an automatic process where it's able to identify these types of situations.
 
VRS should naturally increase in effect during movement, as motion blur creates easy targets for reduced shading rates.

So what you describe should be an automatic process where it's able to identify these types of situations.
Depends how you plumb it in, i.e. whether you're doing VRS at the very end or running it on selective buffers.
 
I think VRS is inherently a big problem when used with reconstruction technologies, because the artefacts are actually aliasing resulting from VRS downscaling from an already quite low native resolution. The lower the native resolution, the bigger the artefacts. This is similar to a problem the COD developers had with VRS in general: the blocky textures created by VRS (at least 2x2 pixels in size) could not be processed by the TAA, leading to a bad final image, so they had to use their own advanced solutions to de-block the textures (upscaling them back to the previous resolution, sort of) before they were handled by the rest of the pipeline.

So for instance, in the case of PS5 with FSR 2.1 (it would be similar on PC with aggressive DLSS), VRS would maybe downscale some textures from 1080p to, say, 540p effective resolution (4x less). But then those 540p-ish textures would be upscaled with the macro-aliasing intact, because the TAA (or other upscaling / cleaning solutions) would be unable to detect and correct this, resulting in 540p-like aliasing in an otherwise clean 1440p-like image.

The solution would be to take care of the 2x2 or larger macro-blocks before the rest of the pipeline, but here the COD devs could not do it properly using hardware VRS and had to implement their own ingenious solutions. There is a nice thread with a good summary about it on GAF, with a link to the COD slides.
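Just to illustrate the kind of de-blocking being described (this is not the actual COD technique, only a toy stand-in): where a region was shaded at 2x2, you can resample the block centres back to full resolution with bilinear weights, so the TAA/upscaler sees smooth gradients instead of 2-pixel steps it would treat as real edges.

```python
# Toy de-blocking of 2x2-shaded regions before temporal reconstruction.
import numpy as np

def deblock_2x2(color: np.ndarray, coarse_mask: np.ndarray) -> np.ndarray:
    """color: one channel (H, W with H, W even) containing some 2x2-shaded blocks;
    coarse_mask: bool (H, W), True where the coarse rate was used."""
    h, w = color.shape
    centers = color[0::2, 0::2]                 # one representative sample per block
    ch, cw = centers.shape
    # Bilinearly resample the block grid back up to full resolution.
    yy = np.clip((np.arange(h) - 0.5) / 2.0, 0.0, ch - 1.0)
    xx = np.clip((np.arange(w) - 0.5) / 2.0, 0.0, cw - 1.0)
    y0 = np.minimum(yy.astype(int), ch - 2)
    x0 = np.minimum(xx.astype(int), cw - 2)
    fy, fx = (yy - y0)[:, None], (xx - x0)[None, :]
    smooth = ((1 - fy) * (1 - fx) * centers[y0][:, x0]
              + (1 - fy) * fx * centers[y0][:, x0 + 1]
              + fy * (1 - fx) * centers[y0 + 1][:, x0]
              + fy * fx * centers[y0 + 1][:, x0 + 1])
    # Only replace pixels that were coarse-shaded; full-rate pixels keep their values.
    return np.where(coarse_mask, smooth, color)

# Tiny demo: a blocky gradient gets smoothed only inside the marked region.
img = np.kron(np.arange(16, dtype=np.float32).reshape(4, 4), np.ones((2, 2)))
mask = np.zeros_like(img, dtype=bool)
mask[:, 4:] = True                              # pretend the right half was 2x2-shaded
print(deblock_2x2(img, mask))
```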

 
That could be it. We need to keep in mind that:

“VRS is an optimization that reduces the amount of pixel shader invocations. As such, it will only see improvement on games that are GPU bound due to pixel shader work.”

In other words, you may see no gains or minimal gains. I have seen an article about a VRS implementation in a mobile game where the performance boost was more than 50%. When new games arrive that were designed with VRS in mind, we could see different results.
As long as it's applied smartly in places where you as the viewer won't see it, I don't think anyone will mind VRS. DRS already does a similar thing with the entire image when the GPU is stressed (in busy scenarios with a lot of movement), and the number of times people actually notice it is relatively small unless the AA solution is bad, especially now with FSR2 cleaning the image back up.
 
VRS should naturally increase in effect during movement, as motion blur creates easy targets for reduced shading rates.

So what you describe should be an automatic process where it's able to identify these types of situations.
You can see that in the Gears of War demonstration in the DF video. When standing still, there were fewer green parts on the screen, but as soon as the character moves, a great deal of the screen goes green. Motion blur seems to be the perfect use for VRS, IMO. I'm not saying that I'm an expert, though. I'll see if I can snip that portion out and post it here in a bit.
 
Great posts here, glad the quality of discussion about VRS has improved a little since yesterday; sorry about my more knee-jerk post. A couple of key points about VRS and bottlenecks:
  • As some posters here already said, it only benefits you if you are fragment-shader bound. It doesn't help at all with density of geometry, animation, or even compute resources, just rendering shaders. On a high-end PC it's hard to imagine ever being bound on shaders unless you are rendering at like 12K resolution or something, which is why it's ridiculous it was forced on for PC. This also makes some users' offhand Gears 5 test pretty silly; this is mostly a feature for consoles, laptops, etc., which really don't have the hardware to render to the 4K screens they're attached to.
  • You have to provide the VRS system with an image of the screen with areas marked up for the rate you want to shade them at, which means you have to generate an image and run some heuristics to decide which areas to shade effectively (a rough sketch of this follows the list). Depending on your renderer you might not naturally have all of the relevant data on hand at this stage of the render, so this may cost bandwidth to do well, and it probably costs compute to analyze edges, etc. As we see in Dead Space, and will probably see in the future, sometimes it's not practical, either for development-time reasons or for hardware performance reasons, to get a really precise targeting of VRS that makes it almost invisible (as in Gears). So seeing a lot of aggressive VRS can mean one of two things: either the renderer is super pixel-shader bound and needs to do hyper-aggressive VRS everywhere, or the renderer can't easily produce a more precise markup image, so we end up with it being a little more excessive and brute-force than we really need to hit the target framerate.
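A rough sketch of what that markup pass might look like, with an invented tile size and thresholds (real implementations run this as a GPU compute pass over engine-specific inputs): take something cheap like last frame's luminance, run a gradient "is there visible detail here?" heuristic, and emit one rate per tile.

```python
# Toy shading-rate-image generation from a cheap gradient heuristic.
import numpy as np

def shading_rate_image(luma: np.ndarray, tile: int = 8) -> np.ndarray:
    """Emit one rate per tile: 1 = 1x1 (full), 2 = 2x2, 4 = 4x4."""
    gx = np.abs(np.diff(luma, axis=1, prepend=luma[:, :1]))
    gy = np.abs(np.diff(luma, axis=0, prepend=luma[:1, :]))
    edges = gx + gy                                   # crude detail/edge measure
    th, tw = luma.shape[0] // tile, luma.shape[1] // tile
    tiled = edges[:th*tile, :tw*tile].reshape(th, tile, tw, tile).max(axis=(1, 3))
    rates = np.full((th, tw), 2, dtype=np.uint8)      # default: shade at 2x2
    rates[tiled > 0.10] = 1                           # detailed tile: full rate
    rates[tiled < 0.02] = 4                           # very flat tile: 4x4
    return rates

luma = np.zeros((1080, 1920), dtype=np.float32)
luma[:, :960] = np.random.rand(1080, 960)             # busy left half, flat right half
print(np.unique(shading_rate_image(luma), return_counts=True))
```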
For Dead Space I personally speculate it's a little bit of both. On the one hand, there's a lot of very expensive pixel-shader work going on: I'm like ~80% sure there's some very subtle, very high-sample-count parallax mapping on many surfaces, there's definitely tons of very high-resolution fog, there are definitely tons of lights, etc. On the other hand, it seems like direct lighting is the main factor driving the VRS intensity, and we can see highly lit, high-resolution scenes that don't look a lot cheaper to render than the darker rooms with more aggressive VRS, so maybe in a perfect world, if the developers could provide more subtle markup for VRS, it would look a lot more like Gears. Hard to say; I'm just using Dead Space as an example since it's the game we're talking about right now.

(And also, I still think some posters here are delusional; it's easy to remember certain people deciding VRS was the worst feature ever on exactly the day they found out their preferred console didn't have hardware support. This is a feature that's designed to degrade image quality, ideally gracefully; by definition it will always look worse turned on than turned off, but it's (in theory!) better than dialing down the entire screen res just to fix a bottleneck that reducing part of the screen could have fixed instead.)
 