VRS: Variable Rate Shading *spawn*

Unreal Engine uses deferred rendering, CryEngine uses deferred rendering, Unity uses forward rendering by default but a deferred path exists, Frostbite uses deferred rendering, and Rockstar North's games use deferred rendering. The engines at Sony's studios mostly use deferred rendering (Decima Engine, Naughty Dog's engine, Sucker Punch, Sony Santa Monica, Sony San Diego, and if I remember correctly Insomniac's engine too).

https://imgeself.github.io/posts/2020-06-19-graphics-study-rdr2/



EDIT:
https://www.destructoid.com/stories...-engine-made-fallout-4-so-pretty-319051.phtml

id Software uses forward rendering, but Bethesda's Fallout engine uses deferred rendering.

EDIT2:
https://aschrein.github.io/2019/08/11/metro_breakdown.html

Metro Exodus uses deferred rendering.

Unreal Engine has had forward+ rendering support since 2016

Apparently forward+ comes from AMD.

https://takahiroharada.files.wordpress.com/2015/04/forward_plus.pdf
 
Unreal Engine has had forward+ rendering support since 2016
A lot of engines, as stated earlier, have come a long way from pure deferred. And it's impossible to guess where development is headed in the future. Having options is always better than not having them, even if they are largely unused.

I don't really see the point in arguing that not having VRS is a good thing. It was a compromise, and they'll need to settle it either by going with software VRS or by ignoring it entirely.

MS went with VRS and all the other features because it is in their best interest to baseline the graphical feature set across PC, console, and ideally their future cloud position. Shipping with the most up-to-date feature set ensures that developers have a strong baseline to develop their titles against for the long foreseeable future, and those features still provide that functionality even if they aren't better than custom ones, because they're (a) free and (b) not every solution fits every problem. No harm in having more solutions.

It's a PS5 thread. It doesn't have it. That's fine. Let's move on to more interesting topics. I don't see anyone here dangling this as being the death knell for PS5. If PS5 doesn't have a full FPU, that's fine too. If it doesn't have mixed precision support, that's also fine. AMP was a big deal in 2017+, but it has clearly been surpassed by QAT and post-training quantization, which basically convert FP32 into an int8 representation. As long as it supports int8, it's fine. Maybe one day there will be QAT to int4 that is still highly accurate, or some form of partial quantization mixing int4 with other precisions to keep speed and accuracy, I dunno. But overall, I think the PS5 will be fine.
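
For reference, post-training quantization in its simplest symmetric per-tensor form looks roughly like the sketch below (illustrative C++ only, not any particular library's API; the names are made up for this example):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Minimal sketch of symmetric post-training quantization:
// map FP32 values to int8 using a single per-tensor scale.
struct QuantizedTensor {
    std::vector<int8_t> data;
    float scale;  // dequantized value = data[i] * scale
};

QuantizedTensor quantize_int8(const std::vector<float>& weights) {
    float max_abs = 0.0f;
    for (float w : weights) max_abs = std::max(max_abs, std::fabs(w));

    QuantizedTensor q;
    q.scale = (max_abs > 0.0f) ? max_abs / 127.0f : 1.0f;
    q.data.reserve(weights.size());
    for (float w : weights) {
        int v = static_cast<int>(std::lround(w / q.scale));
        q.data.push_back(static_cast<int8_t>(std::clamp(v, -127, 127)));
    }
    return q;
}
```

QAT differs in that the rounding is simulated during training so the network learns to tolerate it, but the storage and compute format is the same int8 idea.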

There's no reason to defend every single compromise Sony had to make with PS5, or MS with Xbox. Both had cost restrictions. Both had different objectives and requirements for their chips. This is the way it rolled out.
 
A lot of engines, as stated earlier, have come a long way from pure deferred. And it's impossible to guess where development is headed in the future. Having options is always better than not having them, even if they are largely unused.

I don't really see the point in arguing that not having VRS is a good thing. It was a compromise, and they'll need to settle it either by going with software VRS or by ignoring it entirely.

MS went with VRS and all the other features because it is in their best interest to baseline the graphical feature set across PC, console, and ideally their future cloud position. Shipping with the most up-to-date feature set ensures that developers have a strong baseline to develop their titles against for the long foreseeable future, and those features still provide that functionality even if they aren't better than custom ones, because they're (a) free and (b) not every solution fits every problem. No harm in having more solutions.

It's a PS5 thread. It doesn't have it. That's fine. Let's move on to more interesting topics. I don't see anyone here dangling this as being the death knell for PS5. If PS5 doesn't have a full FPU, that's fine too. If it doesn't have mixed precision support, that's also fine. AMP was a big deal in 2017+, but it has clearly been surpassed by QAT and post-training quantization, which basically convert FP32 into an int8 representation. As long as it supports int8, it's fine. Maybe one day there will be QAT to int4 that is still highly accurate, or some form of partial quantization mixing int4 with other precisions to keep speed and accuracy, I dunno. But overall, I think the PS5 will be fine.

There's no reason to defend every single decision Sony had to make with PS5, or MS with Xbox. Both had cost restrictions. Both had different objectives and requirements for their chips. This is the way it rolled out.
Who wrote that it's a good thing that there is no HW VRS in PS5? It's just new info, at least for me, that in many cases software VRS is better than HW, and the talk about the significance of HW VRS now seems a little exaggerated. Surprised that many seem to be triggered by this fact.
 
Unreal Engine has had forward+ rendering support since 2016

Apparently forward+ comes from AMD.

https://takahiroharada.files.wordpress.com/2015/04/forward_plus.pdf

Forward+ is a marketing term from AMD, yes. It refers to tiled forward, and is sometimes used to refer to clustered forward, which are the underlying techniques (not invented by AMD engineers). UE4's forward path is used primarily for mobile and VR applications -- it doesn't have full parity with the deferred path, and was not used in Gears 5.
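
For anyone unfamiliar with the technique behind the marketing name, here is a rough CPU-side sketch of the tiled-forward idea: bucket lights into screen-space tiles so each pixel only shades against the lights overlapping its tile. Real engines do this culling in a compute shader; the structures and names below are purely illustrative.

```cpp
#include <algorithm>
#include <vector>

struct LightScreenBounds { float min_x, min_y, max_x, max_y; };  // screen-space AABB of a light

// Returns, for every tile, the indices of the lights that may affect it.
std::vector<std::vector<int>> cull_lights_per_tile(
    const std::vector<LightScreenBounds>& lights,
    int screen_w, int screen_h, int tile_size = 16)
{
    const int tiles_x = (screen_w + tile_size - 1) / tile_size;
    const int tiles_y = (screen_h + tile_size - 1) / tile_size;
    std::vector<std::vector<int>> tile_lights(tiles_x * tiles_y);

    for (int li = 0; li < static_cast<int>(lights.size()); ++li) {
        const auto& b = lights[li];
        const int tx0 = std::max(0, static_cast<int>(b.min_x) / tile_size);
        const int ty0 = std::max(0, static_cast<int>(b.min_y) / tile_size);
        const int tx1 = std::min(tiles_x - 1, static_cast<int>(b.max_x) / tile_size);
        const int ty1 = std::min(tiles_y - 1, static_cast<int>(b.max_y) / tile_size);
        for (int ty = ty0; ty <= ty1; ++ty)
            for (int tx = tx0; tx <= tx1; ++tx)
                tile_lights[ty * tiles_x + tx].push_back(li);
    }
    return tile_lights;  // the forward pass then shades each pixel with only its tile's lights
}
```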
 
Who wrote that it's a good thing that there is no HW VRS in PS5? It's just new info, at least for me, that in many cases software VRS is better than HW, and the talk about the significance of HW VRS now seems a little exaggerated. Surprised that many seem to be triggered by this fact.
Let’s a) not use “triggered” in a tech thread (or in B3D in general) and b) stop making broad snap judgements with incomplete, new info.
 
Who wrote that it's a good thing that there is no HW VRS in PS5? It's just new info, at least for me, that in many cases software VRS is better than HW, and the talk about the significance of HW VRS now seems a little exaggerated. Surprised that many seem to be triggered by this fact.

Some may be triggered because they take issue with an Activision presentation that says "their" software VRS "may" be better than hardware VRS being used to claim that software VRS in general is better than hardware VRS.

Siggraph 2020 was in August, which was before RDNA2 and the consoles were released. The only VRS-enabled GPUs on the shelf were the Turing-based 2000 and 1600 cards. In fact, the Activision presentation states in the notes that "IW7 used software based implementation of what would be VRS Tier1 in DX12". Right in the notes it says "High hopes we can deliver something that works on all platforms and beats HW implementation".

There is nothing in this presentation showing that Activision's software VRS is explicitly faster than what 2020+ hardware VRS (Tier 2) offers.
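
For context on what the tiers mean in DX12 terms: Tier 1 is a single shading rate set per draw, while Tier 2 adds per-primitive rates and a screen-space shading-rate image with combiners. A minimal sketch of the command-list calls is below (device, feature checks and resource setup omitted; cmdList5 and shadingRateImage are assumed to already exist).

```cpp
#include <d3d12.h>

// Illustrative only: shows where the two DX12 VRS tiers plug into a command list.
void apply_vrs(ID3D12GraphicsCommandList5* cmdList5,
               ID3D12Resource* shadingRateImage)
{
    // Tier 1: one shading rate for everything drawn until it is changed again.
    cmdList5->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);

    // Tier 2: a screen-space rate image (one texel per tile) decides the rate per
    // region; the combiners describe how it merges with the per-draw/per-primitive rates.
    D3D12_SHADING_RATE_COMBINER combiners[2] = {
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH,  // per-draw vs. per-primitive rate
        D3D12_SHADING_RATE_COMBINER_OVERRIDE      // screen-space image takes priority
    };
    cmdList5->RSSetShadingRate(D3D12_SHADING_RATE_1X1, combiners);
    cmdList5->RSSetShadingRateImage(shadingRateImage);
}
```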
 
Who wrote that it's a good thing that there is no HW VRS in PS5? It's just new info, at least for me, that in many cases software VRS is better than HW, and the talk about the significance of HW VRS now seems a little exaggerated. Surprised that many seem to be triggered by this fact.

The significance of HW VRS is its place as a baseline feature available for deployment without the need to build something entirely from the ground up. While it's not necessarily better performing than a custom solution, and neither is Tiled Resources, and we can probably say the same about primitive and mesh shaders, at the end of the day not everyone has the budget, time, or talent to create finely tuned solutions. And these baseline features are still available for teams to deploy despite not necessarily being best in class.

I think it's important for someone looking at these features to realize that smaller teams can take on heavier lifts because these features are included with DirectX. They aren't optimal, sure, one-size-fits-all solutions usually never are, but it's still desirable to have them as an option if a team is looking for something that works that they can deploy, and doesn't have a runway of months of rendering budget to figure out the best way to do it custom.

In the same vein, it's like saying RTRT isn't really all that important on consoles because Nvidia cards do it so much better, or that DLSS, if it ever arrives on console, isn't really all that important because Nvidia does it so much better; it's really missing the point of the difference between offering a feature set and the performance of a feature set.

We need the feature set, because when the graphical feature set changes, so can the game design. We haven't seen many games with a lot of destruction, in particular because, on top of being challenging, it's incredibly hard to get good dynamic GI and lighting. Supporting a feature set that can handle this is exactly why we need RTRT, and we can scale performance as we require. And if performance is still missing, VRS can be an option for some games to leverage to make up the performance cost of those heavier features and still hit the frame budget.

On the general note of how tech discussion tends to flow: I think the idea that a console is only being fully maximized when every single feature is maximized is likely a naïve perspective on things. Developers will use the underlying tools that are ultimately available to them only if they require them. It's the job of the designers and platform holders to design a console, development kit, API, and performance debugging tools that can adequately support a wide diversity of game engines and types across studio teams of various skills and sizes.

The tl;dr: features aren't checkbox items where a game using them means it's better, more optimized, or extracting more performance. It really just means the developers needed them and found them useful. And I think this idea is getting lost in the conversation. Try writing your own custom virtual texturing system; it's not easy. Neither is writing a compute shader to generate triangles, and I can't imagine making your own VRS solution is easy either. It's already hard enough to use the features as is.
 
Some may be triggered because they take issue with an Activision presentation that says "their" software VRS "may" be better than hardware VRS being used to claim that software VRS in general is better than hardware VRS.

Siggraph 2020 was in August, which was before RDNA2 and the consoles were released. The only VRS-enabled GPUs on the shelf were the Turing-based 2000 and 1600 cards. In fact, the Activision presentation states in the notes that "IW7 used software based implementation of what would be VRS Tier1 in DX12". Right in the notes it says "High hopes we can deliver something that works on all platforms and beats HW implementation".

There is nothing in this presentation showing that Activision's software VRS is explicitly faster than what 2020+ hardware VRS (Tier 2) offers.
Activision was an example of a forward+ engine, but the general statement about deferred engines and VRS came from Sebastian Aaltonen, a principal engineer at Unity, in yesterday's tweet.
 
The significance of HW VRS is its place as a baseline feature available for deployment without the need to build something entirely from the ground up. While it's not necessarily better performing than a custom solution, and neither is Tiled Resources, and we can probably say the same about primitive and mesh shaders, at the end of the day not everyone has the budget, time, or talent to create finely tuned solutions. And these baseline features are still available for teams to deploy despite not necessarily being best in class.

I think it's important for someone looking at these features to realize that smaller teams can take on heavier lifts because these features are included with DirectX. They aren't optimal, sure, one-size-fits-all solutions usually never are, but it's still desirable to have them as an option if a team is looking for something that works that they can deploy, and doesn't have a runway of months of rendering budget to figure out the best way to do it custom.

In the same vein, it's like saying RTRT isn't really all that important on consoles because Nvidia cards do it so much better, or that DLSS, if it ever arrives on console, isn't really all that important because Nvidia does it so much better; it's really missing the point of the difference between offering a feature set and the performance of a feature set.

We need the feature set, because when the graphical feature set changes, so can the game design. We haven't seen many games with a lot of destruction, in particular because, on top of being challenging, it's incredibly hard to get good dynamic GI and lighting. Supporting a feature set that can handle this is exactly why we need RTRT, and we can scale performance as we require. And if performance is still missing, VRS can be an option for some games to leverage to make up the performance cost of those heavier features and still hit the frame budget.

On the general note of how tech discussion tends to flow: I think the idea that a console is only being fully maximized when every single feature is maximized is likely a naïve perspective on things. Developers will use the underlying tools that are ultimately available to them only if they require them. It's the job of the designers and platform holders to design a console, development kit, API, and performance debugging tools that can adequately support a wide diversity of game engines and types across studio teams of various skills and sizes.

The tl;dr: features aren't checkbox items where a game using them means it's better, more optimized, or extracting more performance. It really just means the developers needed them and found them useful. And I think this idea is getting lost in the conversation. Try writing your own custom virtual texturing system; it's not easy. Neither is writing a compute shader to generate triangles, and I can't imagine making your own VRS solution is easy either. It's already hard enough to use the features as is.
I totally don't see parity between the RTRT and VRS examples. I'll believe it if you show me one example of software RT used by a modern engine that is faster than HW, according to a big AAA studio.
 
I totally don't see parity between the RTRT and VRS examples. I'll believe it if you show me one example of software RT used by a modern engine that is faster than HW, according to a big AAA studio.
You could compare dynamic compute-based voxel lighting solutions, planar reflections, etc., which can all be done traditionally without RTRT.

UE5 is a perfect example of a modern engine doing basically the equivalent of what we use RT for using pure compute.

Bloober is a really tiny studio that put out some great ray-tracing graphics with a relatively small team. That's what API and feature support does.

You can run all sorts of RTRT-based games on Pascal Nvidia cards that don't support RT hardware. Performance leaves much to be desired, but with a strong enough Pascal card, you can make it happen. Run 3x 1080 Ti, flip on RT, and it'll work. It's entirely software.
 
You could compare dynamic compute-based voxel lighting solutions, planar reflections, etc., which can all be done traditionally without RTRT.

UE5 is a perfect example of a modern engine doing basically the equivalent of what we use RT for using pure compute.

Bloober is a really tiny studio that put out some great ray-tracing graphics with a relatively small team. That's what API and feature support does.

You can run all sorts of RTRT-based games on Pascal Nvidia cards that don't support RT hardware. Performance leaves much to be desired, but with a strong enough Pascal card, you can make it happen.
UE5 is a good example, but 1) UE5 still doesn't exist, so we don't really know its capabilities in real games, and 2) it's not the same as triangle RT and has artifacts.
But to be honest, I think the UE5 approach is the way to go for the PS5/XSX generation, as their HW RT capabilities are simply too weak. But as I said, there is no parity between UE5's Lumen and triangle RT. And there is parity between software VRS and HW VRS, because depending on the situation one can be better than the other.
 
UE5 is a good example, but 1) UE5 still doesn't exist, so we don't really know its capabilities in real games, and 2) it's not the same as triangle RT and has artifacts.
But to be honest, I think the UE5 approach is the way to go for the PS5/XSX generation, as their HW RT capabilities are simply too weak.
It just comes down to what the developers want, ultimately. The whole Star Wars demo was presented in Unreal on Volta hardware. There are no RT cores there either. But the API makes it easier for people to do the work.

The whole point of this discussion of haves and have-nots really comes down to understanding the limitations on the developers themselves. We could talk about DX12 and mesh shaders all day long, really we could. But if you asked me to code a game, I'm going to use the traditional render path and DX11, because everything else is just way too difficult and I'll never ship anything. At this point in time, I wouldn't bother with any render coding at all, I'm done with that. I'd go straight to UE or Unity and typically just not bother. I'd be lucky to ship something with a premade engine. It's already that hard to make a game.

And lastly, if I have to make the point clear one final time: it's not always about using VRS to save performance. You can also use it to increase quality, trading performance and accepting longer rendering times to resolve graphical issues.

 
Last post on the matter while I go back to work and stuff.
Read this paper on VRS.
https://www.diva-portal.org/smash/get/diva2:1442799/FULLTEXT01.pdf
Background. The shading cost of a pixel is only getting more expensive with more realistic games. The resolution of games is equally pushed to display all the details in a scene. This makes rendering a frame very expensive. Dynamic Resolution Rendering has been used to uniformly decrease resolution to gain performance, but the new release of image-based shading through Variable Rate Shading could be a new way to gain performance with less impact on image quality.

Objectives. The goal is to see if the adaptive shading possibilities of Variable Rate Shading can show equal or better results, in regards to performance and image quality, compared to the uniform shading of Dynamic Resolution Rendering.

Methods. This investigation is performed by implementing them into the Deferred Lighting pass in a Deferred Renderer. The performance is measured by the render pass time of the Deferred Lighting and the image quality is measured by comparing the final frames of Variable Rate Shading and Dynamic Resolution Rendering against the original resolution through SSIM.

Results. Overall, Variable Rate Shading shows comparable performance results to Dynamic Resolution Rendering, but the image quality is closer to the original resolution.

Conclusions. Using image-based shading on the deferred lighting pass allows the possibility of extracting similar performance gains to dynamic resolution rendering while maintaining higher image quality.
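
(For reference, the SSIM comparison mentioned in the Methods section is the standard structural-similarity index; its textbook definition, not reproduced from the thesis, is:

```latex
\mathrm{SSIM}(x, y) =
  \frac{(2\mu_x \mu_y + c_1)\,(2\sigma_{xy} + c_2)}
       {(\mu_x^2 + \mu_y^2 + c_1)\,(\sigma_x^2 + \sigma_y^2 + c_2)}
```

computed over local windows, where the μ are local means, the σ² local variances, σ_xy the covariance, and c1, c2 small stabilizing constants; values near 1 mean the compared frame is close to the reference.)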

It's worth a read. It points out that VRS can be blocky where blur is actually desired, while the opposite is true with DRS: it can be blurry where sharpness is desired.

Short summary, I'll just paste chapter 7 bullet notes:


Adaptive shading has been and remains a hot research topic in real-time rendering because of hardware limitations and the desire to increase rendering resolution. Current games still mainly work with uniform shading, and investigating more adaptive approaches would prove beneficial for games. With the release of Variable Rate Shading, adaptive shading in hardware through the rasterization pipeline has recently become possible. It is still very early in its life cycle, with only a small portion of games in the market supporting it as of writing. To further investigate the use of Variable Rate Shading and its adaptive capabilities, there are several interesting topics to look at. Possible future work includes:

  • Due to how Variable Rate Shading works, it can cause pixelated results when using 2x and 4x shading rates. Further investigation would be to look at techniques for removing these artifacts to improve image quality. With the help of the shading rate texture it will be possible to determine where to apply post-processing.
  • Variable Rate Shading has mainly been proposed as a technique to increase image quality. Another topic is to investigate its use to enhance the gameplay experience. Regions of interest on the screen would receive higher resolution to clearly display them in detail. In online multiplayer first-person shooters, enemy players are important to notice in order to win the game, and increasing the resolution on them may increase their visibility and therefore make them easier to defeat.
  • Investigate image-based shading's dynamic performance and evaluate whether there is a close enough correlation between image-based shading and dynamic resolution rendering, and evaluate the performance of the 4x shading rate. There could be conflicts when using heuristics based on the scene data. In this case an example could be to change resolution by using a type of foveated rendering as the heuristic, where inside the radius of an ellipse the shading rate would be 1x1 and outside it would be 2x2. When performance is needed the radius could decrease, and then increase if image quality is preferred. The outer bounds could be further developed to include the possibility of one axis using 1x to increase image quality, since it would be less apparent. This could also be applied to less visible post-effects like SSAO or Global Illumination where the results may not be as noticeable.
- Filip Lundbeck
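
As an illustration of the ellipse/foveated heuristic described in that last bullet, here is a minimal CPU-side sketch of filling a Tier 2 shading-rate image (the tile layout, rate constants and function name are all assumptions for the example, not taken from the paper):

```cpp
#include <cstdint>
#include <vector>

// Placeholder rate values; a real implementation would write the graphics
// API's shading-rate encoding into the Tier 2 rate image instead.
constexpr uint8_t RATE_1X1 = 0;  // full shading rate inside the ellipse
constexpr uint8_t RATE_2X2 = 1;  // coarser shading rate outside it

std::vector<uint8_t> build_foveated_rate_image(
    int tiles_x, int tiles_y,
    float focus_x, float focus_y,      // focus point, in tile units
    float radius_x, float radius_y)    // ellipse radii, in tile units
{
    std::vector<uint8_t> rates(tiles_x * tiles_y, RATE_2X2);
    for (int ty = 0; ty < tiles_y; ++ty) {
        for (int tx = 0; tx < tiles_x; ++tx) {
            const float dx = (tx + 0.5f - focus_x) / radius_x;
            const float dy = (ty + 0.5f - focus_y) / radius_y;
            if (dx * dx + dy * dy <= 1.0f)     // tile centre inside the ellipse
                rates[ty * tiles_x + tx] = RATE_1X1;
        }
    }
    return rates;  // shrink the radii when more performance is needed, grow them back otherwise
}
```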


tl;dr: you don't like Dirt 5 because VRS at 1440p looks worse than 1440p native. Well, yeah.
But VRS at 4K would look better than 1440p native visually. That's sort of the comparison that should have been made. But for whatever reason, they chose to combine DRS with VRS and overall got worse results.
 
Unreal Engine uses deferred rendering, CryEngine uses deferred rendering, Unity uses forward rendering by default but a deferred path exists, Frostbite uses deferred rendering, and Rockstar North's games use deferred rendering. The engines at Sony's studios mostly use deferred rendering (Decima Engine, Naughty Dog's engine, Sucker Punch, Sony Santa Monica, Sony San Diego, and if I remember correctly Insomniac's engine too).

https://imgeself.github.io/posts/2020-06-19-graphics-study-rdr2/



EDIT:
https://www.destructoid.com/stories...-engine-made-fallout-4-so-pretty-319051.phtml

id Software uses forward rendering, but Bethesda's Fallout engine uses deferred rendering.

EDIT2:
https://aschrein.github.io/2019/08/11/metro_breakdown.html

Metro Exodus uses deferred rendering.

Worth pointing out, for those who won't read your links, that even some of those deferred engines might have forward passes, e.g. RDR2 for translucency.

And I guess it might be worth considering that even though 4A Games have a working software VRS implementation for GCN and RDNA2 in PS4 & PS5, they're opting for at least some hardware-based VRS for XSX and PC.

Looking at this list, I'm not surprised Sony didn't invest in HW VRS.

It quite possibly wasn't an option. The ROPs in PS5, for example, seem older than those in XSX / Navi 21 and probably couldn't support VRS. Some parts of PS5 might have come from earlier branches of the RDNA 1/2 development timeline, before VRS, while AMD's first VRS implementation in Navi 21 is similar to Xbox's, and MS in their own words had to "wait".

Graphics cards should really be thought of as collections of technology, and semi-custom products have some ability to defy being rigid copies of PC product "generations".


Worth noting that he doesn't explicitly mention "engines" there, probably because some passes can be deferred and others forward even within a mostly "deferred" engine.
 
I still think VRS, when used on any visible textures, will be a divisive tech like motion blur or CA (for different reasons). You will always exchange sharpness for a few frames per second. You'll always lose something, which is not the case (or much less so) with many other techniques. When I look at games using VRS, I always think of games looking somehow blurry (and that was before I knew the reason): Gears 5, Halo Infinite or Dirt 5.

Also, I think it's preferable to use DRS instead of VRS to increase the framerate, because DRS only activates when it's needed while VRS always makes the image blurry, even if the game is capped at 30/60 fps and doesn't need to reduce the resolution of textures.

In a world where VRR (for when the game still drops under the cap) and DRS exist, I think VRS, applied to any visible objects, is not worth it, as it costs too much perceptible resolution for not much gain.

The only advantage, compared to DRS, is that the game will have a higher official resolution than with DRS. So sure, it's great for PR statements and checkbox lists.
 
The only advantage, compared to DRS, is that the game will have a higher official resolution than with DRS. So sure, it's great for PR statements and checkbox lists
DRS and VRS aren't mutually exclusive, and Tier 2 VRS can provide good results together with DRS. I don't get the feature-versus-feature framing here. As with any technology, the implementation is key and developers will get better. It's clear developers think VRS is worth it, otherwise teams like 4A wouldn't go the extra mile to develop software solutions for PS, and when done right, imo most people won't notice the decreased quality of the textures during gameplay. These pictures sum up the idea perfectly.
[Image: vrs_grids_car_002.png]


If I may ask, and please don't Google: which side uses VRS Tier 2?
https://devblogs.microsoft.com/directx/wp-content/uploads/sites/42/2021/01/Header.svg
 
I still think VRS, when used on any visible textures, will be a divisive tech like motion blur or CA (for different reasons). You will always exchange sharpness for a few frames per second. You'll always lose something, which is not the case (or much less so) with many other techniques. When I look at games using VRS, I always think of games looking somehow blurry (and that was before I knew the reason): Gears 5, Halo Infinite or Dirt 5.

Also, I think it's preferable to use DRS instead of VRS to increase the framerate, because DRS only activates when it's needed while VRS always makes the image blurry, even if the game is capped at 30/60 fps and doesn't need to reduce the resolution of textures.

In a world where VRR (for when the game still drops under the cap) and DRS exist, I think VRS, applied to any visible objects, is not worth it, as it costs too much perceptible resolution for not much gain.

The only advantage, compared to DRS, is that the game will have a higher official resolution than with DRS. So sure, it's great for PR statements and checkbox lists.
Read the paper. It's qualitatively proven that VRS is sharper. You can compare the different VRS styles below vs DRR.

What you see with VRS today is just the beginning for it. Today it's being used as a checkbox feature; in the future it will be used correctly and appropriately, and will have larger implications than what you see with the current set of titles.
 
I think it's preferable to use DRS instead of VRS to increase the framerate, because DRS only activates when it's needed while VRS always makes the image blurry, even if the game is capped at 30/60 fps and doesn't need to reduce the resolution of textures.
This isn't true. VRS can be applied dynamically. If the developer chooses to only apply it below a target frame rate they can do that.
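
A trivial sketch of what "only apply it below a target frame rate" could look like in practice (the controller, thresholds and hysteresis are illustrative assumptions, not taken from any shipping engine):

```cpp
// Enable VRS only when the frame budget is missed, with a little hysteresis
// so the rate doesn't flicker on and off every other frame.
struct VrsController {
    double target_ms = 16.6;   // 60 fps budget
    double margin_ms = 1.0;    // hysteresis margin
    bool vrs_enabled = false;

    // Call once per frame with the measured GPU frame time.
    void update(double gpu_frame_ms) {
        if (!vrs_enabled && gpu_frame_ms > target_ms)
            vrs_enabled = true;                        // over budget: trade sharpness for time
        else if (vrs_enabled && gpu_frame_ms < target_ms - margin_ms)
            vrs_enabled = false;                       // comfortably under budget: full rate
    }
};
```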
 
It's a PS5 thread. It doesn't have it. That's fine. Let's move on to more interesting topics. I don't see anyone here dangling this as being the death knell for PS5. If PS5 doesn't have a full FPU, that's fine too. If it doesn't have mixed precision support, that's also fine. AMP was a big deal in 2017+, but it has clearly been surpassed by QAT and post-training quantization, which basically convert FP32 into an int8 representation. As long as it supports int8, it's fine. Maybe one day there will be QAT to int4 that is still highly accurate, or some form of partial quantization mixing int4 with other precisions to keep speed and accuracy, I dunno. But overall, I think the PS5 will be fine.

There's no reason to defend every single compromise Sony had to make with PS5, or MS with Xbox. Both had cost restrictions. Both had different objectives and requirements for their chips. This is the way it rolled out.

I don't see it as the most important thing. It's neat, it's useful to have, but the most efficient high-end engines are replacing a lot of pure rasterization with compute shaders anyway, as you can overlap those, and VRS doesn't support compute shaders. Compute-based VRS definitely seems more flexible overall anyway; the results from Modern Warfare are pretty solid.

Still, less developer work is also very nice. And of course VRS has the potential to be sharper; you can sample and shade thin, one-pixel objects you'd miss with plain upsampling. Though unlike upsampling, the initial geometry sampling phase gets no faster.
 
I don't see it as the most important thing. It's neat, it's useful to have, but the most efficient high-end engines are replacing a lot of pure rasterization with compute shaders anyway, as you can overlap those, and VRS doesn't support compute shaders. Compute-based VRS definitely seems more flexible overall anyway; the results from Modern Warfare are pretty solid.

Still, less developer work is also very nice. And of course VRS has the potential to be sharper; you can sample and shade thin, one-pixel objects you'd miss with plain upsampling. Though unlike upsampling, the initial geometry sampling phase gets no faster.
Shading of pixels, whether done by pixel or compute shaders, is still shading. VRS will work with compute-based shading just as well as with pixel shaders. VRS won't help with compute-based rasterization, but that's a different matter entirely.
 