VRS: Variable Rate Shading *spawn*

There have been some software-based VRS implementations that haven't been horrible. I'm drawing a blank on the titles, but they exist.
 
VRS can also be used for things like variable rate compute, as demoed by Xbox ATG and Intel, where it's used to speed up lighting.

XS has the benefit of being able to do it on compressed textures, if I remember correctly.
 
My only worry about VRS is the lack of (h/w) support for it on PS5. If that's the case, then there won't be as much incentive for developers to find great uses for it, and we will continue getting "cheap" implementations like the ones we've seen so far.
Yeah, it may just be a wash in the long run. VRS only applies to the 3D pipeline, and PS5 has a significant advantage in that particular area. If VRS is used on the Series consoles, it may help offset their imbalance between compute and fixed-function hardware, or just help with more complex areas; the Series consoles will need it, especially the Series S. It's really just room to grow once they've maxed out other areas.

PS5 may be able to make up the loss on the FF side of things, which is why we haven't seen the Series X pull away from PS5. Looking at it, the amount of extra compute is more or less just enough to keep it competitive with PS5, with resolution typically being the only advantage when there is one. But if you want to turn that resolution buff into something else, like bringing in higher-end rendering features, VRS is an ideal candidate for optimizing those areas and layering them in.

I hope it happens. @Dictator is right to say that comparisons are boring when we're only looking at min/max ranges for DRS. Use that power for something else that actually matters; VRS may be able to help do that here.
 
Front end FF is an interesting discussion, and we may not see the results until a few years after cross-gen has passed.

In the long term things are moving towards mesh shaders, but between now and then it seems like there will be a lot of growing pains with it.
So we may not know for a while whether XS front-end FF is a medium to long term shackle; short term it is, a bit.
Even if it is, I wouldn't expect it to hand in huge losses in comparison to PS5 though.

It will also take devs time to fill in async compute work to make use of the added resources that VRS will free up.
I'm not expecting that to result in higher resolution, but other improvements, if devs can be bothered to make use of the Xbox's higher compute.
 
I was actually thinking of the backend; fill rate is exceptionally lower on the Series consoles. I don't know how much of a factor that is.
 
Explain what you mean by this, "game pixel error problem".

Your next statement is also semantically wrong, as VRS is entirely under the direct control of the developers. They could design their implementation so it always applies to a set minimum amount if that was absolutely desired. It generally isn't.

The best systems are the genuinely dynamic and multifaceted ones that use both VRS and DRS. As documented through the blogs and developer interviews linked earlier, in most situations their VRS implementation works well enough to prevent DRS from needing to kick in. This provides better image quality than having to lower the resolution.
Sorry, it's my mistake. I should have called it "pixel degradation", not "pixel error problem"; they are different.
 
And I've always had a question: I've seen that lots of games can use VRS Tier 2 (for example Halo Infinite and Doom Eternal), but why is it that only Xbox Series supports it and the PC version can't? I don't think it would be difficult to do that in DX12.
 
Series consoles are now the baseline for the feature set, so that's why they use VRS Tier 2. They expect that any PC user buying a new GPU today would also have these features as standard.
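For reference, the PC side of this is plain D3D12: the runtime exposes a cap you can query, so whether a PC port uses Tier 2 really is the developer's call. A minimal sketch of the check (assuming an existing `device` pointer; the helper name is made up and error handling is trimmed, so treat it as illustrative rather than production code):

```cpp
#include <d3d12.h>
#include <cstdio>

// Hypothetical helper: report which VRS tier a D3D12 device supports.
void ReportVrsSupport(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 options6 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6,
                                           &options6, sizeof(options6))))
    {
        std::printf("OPTIONS6 not available (old runtime)\n");
        return;
    }

    switch (options6.VariableShadingRateTier)
    {
    case D3D12_VARIABLE_SHADING_RATE_TIER_NOT_SUPPORTED:
        std::printf("VRS not supported\n");
        break;
    case D3D12_VARIABLE_SHADING_RATE_TIER_1:
        std::printf("VRS Tier 1 (per-draw rate only)\n");
        break;
    case D3D12_VARIABLE_SHADING_RATE_TIER_2:
        std::printf("VRS Tier 2, screenspace image tile size %ux%u\n",
                    options6.ShadingRateImageTileSize,
                    options6.ShadingRateImageTileSize);
        break;
    }
}
```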
 
There have been some software-based VRS implementations that haven't been horrible. I'm drawing a blank on the titles, but they exist.

Hardware VRS is super limited as the tile size is way too big. The only reason it even works in Gears 5 is that they're quadrupling the pixel count of a last-gen 60fps game; geometry density in their next title, even at 60fps, is likely to be high enough that hardware VRS becomes almost pointless.

Meanwhile, software VRS like the Call of Duty and Forbidden West implementations offers a better tradeoff between image quality and performance than upscaling solutions do. Any upscaling tech is going to start breaking down in thin-geometry/highly dynamic scenes; e.g. DLSS dilates one-pixel geometry, trading image stability for a loss of detail and correctness. If you aren't updating a pixel this frame, there's only so much you can do to guess what's there. VRS offers the possibility, as shown in Forbidden West, of trading acceptable image quality loss for being able to better track per-pixel and subpixel detail even in thin and dynamic scenes like heavy foliage.

I wouldn't be surprised if we see software VRS usage go up a lot, especially if Unreal and Unity make it a standard and easy to use feature.
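To make that concrete, here's a rough CPU-side sketch of the kind of rate-selection heuristic a software VRS pass could use. To be clear, this isn't any shipping title's actual algorithm; the luminance-contrast test, the threshold, and all the names are made up for illustration, and in a real renderer this logic would live in a compute shader reading the previous frame:

```cpp
#include <vector>
#include <cstdint>
#include <algorithm>
#include <cmath>

// Shading rate chosen per 2x2 pixel block (illustrative enum, not a real API type).
enum class BlockRate : uint8_t { Rate1x1, Rate2x1, Rate1x2, Rate2x2 };

// luma: previous frame's luminance, row-major, width x height (both assumed even).
// Low contrast along an axis means samples can be merged along that axis.
std::vector<BlockRate> BuildRateMask(const std::vector<float>& luma,
                                     int width, int height, float threshold)
{
    std::vector<BlockRate> mask((width / 2) * (height / 2));
    for (int by = 0; by < height / 2; ++by)
    {
        for (int bx = 0; bx < width / 2; ++bx)
        {
            const float a = luma[(by * 2 + 0) * width + bx * 2 + 0];
            const float b = luma[(by * 2 + 0) * width + bx * 2 + 1];
            const float c = luma[(by * 2 + 1) * width + bx * 2 + 0];
            const float d = luma[(by * 2 + 1) * width + bx * 2 + 1];

            // Contrast along each axis of the 2x2 block.
            const float dx = std::max(std::fabs(a - b), std::fabs(c - d));
            const float dy = std::max(std::fabs(a - c), std::fabs(b - d));

            BlockRate rate = BlockRate::Rate1x1;   // detailed area: shade every pixel
            if (dx < threshold && dy < threshold) rate = BlockRate::Rate2x2;
            else if (dx < threshold)              rate = BlockRate::Rate2x1;
            else if (dy < threshold)              rate = BlockRate::Rate1x2;

            mask[by * (width / 2) + bx] = rate;
        }
    }
    return mask;
}
```

The point is just that the decision granularity is a 2x2 (or 2x1/1x2) block, which is what lets fine features like foliage keep full-rate shading while flat areas drop to quarter rate.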
 
Hardware VRS is super limited as the tile size is way too big.

EDIT:
Is this what you're talking about? Taken from https://microsoft.github.io/DirectX-Specs/d3d/VariableRateShading.html

Where it looks like the tiles are 8x8 or 16x16, if I parsed the info correctly:


Tier 2

  • Screen space image tile size is 16x16 or smaller


VRS Tiles

On Tier 2 platforms, there is the notion of a “VRS Tile”. VRS tile size determines
  • The width and height of the render target area described by one texel of the screenspace image; it is likely to affect the size of the screenspace image an application will want to allocate
  • The width and height of the region of a target having uniform shading rate when drawing one primitive
Because of this, “VRS tile size” is synonymous with “shading rate image tile size”. For information on this queriable quantity, see the section “Tile size” under “Tier 2”.

An application may specify shading rates through the screenspace image or per-primitive or any type of combiners thereof, but it is not possible to have more than one shading rate used within the same VRS tile for one primitive.

Tiles and coarse pixels: sizing

VRS tiles are sized such that their dimensions are an even multiple of coarse pixel sizes.

Tiles and coarse pixels: positioning

The coarse pixel grid is locked to the VRS tile grid; that is, coarse pixels cannot straddle VRS tile boundaries.
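Translating that spec text into code terms, here's a bare-bones sketch of Tier 2 usage: the screenspace image has one R8_UINT texel per VRS tile, so its dimensions come from the queried tile size, and the combiners decide how the per-draw, per-primitive and image rates merge. The `device`/`commandList5`/`shadingRateImage` objects are assumed to already exist, and resource creation/state transitions are omitted:

```cpp
#include <d3d12.h>

void BindShadingRateImage(ID3D12Device* device,
                          ID3D12GraphicsCommandList5* commandList5,
                          ID3D12Resource* shadingRateImage,
                          UINT renderWidth, UINT renderHeight)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 options6 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6,
                                &options6, sizeof(options6));
    const UINT tileSize = options6.ShadingRateImageTileSize;   // e.g. 8 or 16

    // One DXGI_FORMAT_R8_UINT texel per tile, rounded up to cover the render target.
    const UINT imageWidth  = (renderWidth  + tileSize - 1) / tileSize;
    const UINT imageHeight = (renderHeight + tileSize - 1) / tileSize;
    (void)imageWidth; (void)imageHeight;   // the dimensions `shadingRateImage` would be created with

    // Base rate 1x1; the two combiners merge it with the per-primitive rate and
    // then with the screenspace image (MAX generally keeps the coarser rate).
    const D3D12_SHADING_RATE_COMBINER combiners[2] = {
        D3D12_SHADING_RATE_COMBINER_MAX,   // pipeline rate vs. per-primitive rate
        D3D12_SHADING_RATE_COMBINER_MAX    // result vs. screenspace image texel
    };
    commandList5->RSSetShadingRate(D3D12_SHADING_RATE_1X1, combiners);
    commandList5->RSSetShadingRateImage(shadingRateImage);
}
```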
 
My only worry about VRS is the lack of (h/w) support for it on PS5. If that's the case, then there won't be as much incentive for developers to find great uses for it, and we will continue getting "cheap" implementations like the ones we've seen so far.

VRS isn't a good fit for modern AAA deferred renderers regardless ...

VRS doesn't work with compute shaders, which are commonly used for the lighting pass in AAA games. GPU-driven pipelines such as Nanite place fewer limits on artist pipelines and on the content itself. If we have to go back to less capable artist pipelines to make an argument in favour of a feature's benefits, do you keep supposing it's going to be the future, or should it be resigned to a footnote in history?
 
Is this what you're talking about? Taken from https://microsoft.github.io/DirectX-Specs/d3d/VariableRateShading.html

Hardware VRS is limited to things like 8x8 tile sizes right now. Tests and shipping games from multiple people working on VRS have shown that the performance benefit quickly becomes minimal when content ends up smaller than these tiles. Meanwhile software VRS can do 2x2 pixel tiles, and cover both compute and pixel shaders. Feature size, say a distant branch or blade of grass, is a lot more likely to fit into a 2x2 tile, or even a 1x2/2x1 tile, which can also be done and still shows some speedup.

One of these tests was even done by a developer in MS's advanced technology group, so they have hardware VRS, but even there, with a relatively simple terrain/sky-only scene, he found software VRS with a 2x2 tile size offered a lot more speedup than hardware VRS.
 
The coarse shading rate for each tile is chosen by the application based on the properties of the content that tile covers. Thus an 8x8 tile can perform shading in 2x1 "strips" if the content needs it. It's not obvious that a smaller tile size would be a significant performance win for such cases - how often do you see a single blade of grass?
 
Hardware VRS is limited to things like 8x8 tile sizes right now.
Yeah, it was a bit confusing because they can do per-primitive rates, which could apply to something smaller than 8x8. However, the tile size limitation comes into play since you only get one shading rate per 8x8 tile from the screenspace image, so any complex tile that wants two different rates has to run at the most precise one.
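To illustrate that collapse with the actual D3D12_SHADING_RATE encoding (a hypothetical helper for illustration, not part of any API): when content inside one tile wants several different rates, the tile's single texel has to fall back to at least the finest requested reduction on each axis:

```cpp
#include <d3d12.h>
#include <vector>
#include <algorithm>

// D3D12_SHADING_RATE packs a log2 reduction per axis into one value
// (1X1 = 0x0, 2X1 = 0x4, 2X2 = 0x5, 4X4 = 0xA, ...), so "finest" means the
// smallest reduction on each axis across everything the tile's content wanted.
D3D12_SHADING_RATE FinestRateForTile(const std::vector<D3D12_SHADING_RATE>& wantedRates)
{
    UINT minX = 2, minY = 2;   // start at the coarsest reduction (4x per axis)
    for (D3D12_SHADING_RATE r : wantedRates)
    {
        minX = std::min(minX, (static_cast<UINT>(r) >> 2) & 0x3u); // horizontal log2 reduction
        minY = std::min(minY,  static_cast<UINT>(r)       & 0x3u); // vertical log2 reduction
    }
    return static_cast<D3D12_SHADING_RATE>((minX << 2) | minY);
}
```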
 
What happened to this technology? Is any game using it?
I think the future is going in a direction that obsoletes HW VRS ...

Hybrid visibility buffer + deferred renderers might start taking off, like we see with UE5 Nanite, which will free artists from being constrained by poly count budgets. A lot of modern deferred renderers also use a compute shader based lighting pass, which is incompatible with HW VRS too ...
 
What happened to this technology? Is any game using it?

The first post starts off with a look at two different games using it, Gears Tactics and Gears. Or do you mean any newer games using it? There have also been a few DF videos since then that mention games using VRS.
 
Gears is made by a Microsoft studio; I meant, why isn't such a smart solution already widespread? Lurkmass explained it clearly. At this point I don't expect major projects outside of Microsoft to implement it.
 