Current Generation Games Analysis Technical Discussion [2020-2021] [XBSX|S, PS5, PC]

VRS is a wrench in the fanboy gears. The XSX's 20% compute advantage should be enough to stay ahead almost always, but so far, when it does, it's almost always by resorting to these "cheats" (is that because it's compensating for the lower pixel fill rate?).

I still don't know what to think about VRS.
It's good to have a feature that improves performance, but at these high resolutions I would prefer to preserve IQ. I'd rather have the so-called "mesh shaders", if they're as magical as some promise them to be.
 
Interesting.

When Xbox titles adopt VRS Tier 2 widely later on and get better results in head-to-heads, they won't matter, since VRS is a cheat.

After witnessing how certain groups of people think and reason from 2016-2020 (especially in 2020), I can understand where you're coming from.
 
I still don't know what to think about VRS.
It's good to have a feature that improves performance, but at these high resolutions I would prefer to preserve IQ. I'd rather have the so-called "mesh shaders", if they're as magical as some promise them to be.

VRS starts breaking down when micropolygon rendering comes into play, such as Nanite geometry. Nvidia mentions that there's virtually no performance benefit once the primitive-to-pixel ratio is close to 1 or greater: coarse shading can only merge pixels covered by the same primitive, so with pixel-sized triangles every triangle still launches its own shading work and nothing is saved. I don't imagine a whole lot of thought went into how these features would interact with each other ...

If you want to render compressed geometry with mesh shading, VRS is practically useless, and there's no known real-time option for compressing your scene representation (BVH) data to a reasonable degree either, so ray tracing becomes difficult in this case as well ...

VRS, as already mentioned, doesn't provide any benefit to micropolygon rendering, but it's not useful for ray tracing either, since you can't use it to exploit cases where there's no locality of information. It's purely a rasterization/screen-space optimization technique. VRS with deferred rendering (most AAA games/engines) isn't a super compelling combination either, because deferred renderers are usually bandwidth or fill-rate bound. Shading/lighting cost is less of a concern with deferred rendering because we split the G-buffer pass from the lighting pass. By storing our material parameters in a G-buffer, we can specialize our shaders and thus avoid ubershaders altogether, along with their problems such as high register pressure/low shader occupancy. Splitting out the lighting pass also means that we don't pay overdraw costs during the lighting pass ...
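To put a rough number on that overdraw point, here's a toy cost model (plain, self-contained C++; the overdraw factor and relative per-pixel costs are illustrative assumptions, not measurements from any real game or engine):

```cpp
// Toy cost model for the G-buffer/lighting split described above.
// All numbers are illustrative assumptions, not measurements.
#include <cstdio>

int main()
{
    const double pixels         = 3840.0 * 2160.0; // 4K frame
    const double overdraw       = 2.5;             // assumed average overdraw factor
    const double gbufferWrite   = 1.0;             // relative cost: write material params
    const double lightingPerPix = 8.0;             // relative cost: evaluate all lights

    // Forward ubershader: the expensive lighting is paid for every rasterized
    // fragment, including the ones later overwritten by closer geometry.
    const double forwardCost  = pixels * overdraw * lightingPerPix;

    // Deferred: overdraw only multiplies the cheap G-buffer writes; the
    // expensive lighting pass then runs exactly once per screen pixel.
    const double deferredCost = pixels * (overdraw * gbufferWrite + lightingPerPix);

    std::printf("forward : %.0f relative units\n", forwardCost);
    std::printf("deferred: %.0f relative units\n", deferredCost);
    return 0;
}
```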

Mesh shading, VRS, and RT are features that don't play all that nicely with each other ...
 
Interesting.

When Xbox titles adopt VRS Tier 2 widely later on and get better results in head-to-heads, they won't matter, since VRS is a cheat.

After witnessing how certain groups of people think and reason from 2016-2020 (especially in 2020), I can understand where you're coming from.

Isn't the whole rendering process basically "cheating"? Culling removes hidden objects that should be there; to me that sounds like cheating! The PS5 can dynamically adjust clock speeds to shift power between CPU and GPU where needed: cheating!
Overcoming boundaries and finding clever solutions to problems can be seen as smart engineering or as cheating, imo.
The end result is what matters: better games, faster loading times, more complex geometry. I say MORE CHEATING!
 
Interesting.

When Xbox titles adopt VRS Tier 2 widely later on and get better results in head-to-heads, they won't matter, since VRS is a cheat.

After witnessing how certain groups of people think and reason from 2016-2020 (especially in 2020), I can understand where you're coming from.

VRS doesn't work with deferred rendering, so very few titles will use it, and VRS doesn't work when there are many triangles, so it won't work with Unreal Engine 5 titles.

And it will be less efficient in next-gen titles with tons of polygons, even the ones not using UE5.

And I'll wait for the Digital Foundry article. Resolution is one thing, but VRS undersamples the image, which is like rendering at a lower resolution. I'm not sure image quality is better on XSX compared to PS5.
 
Power draw won't tell you much about the utilisation of the SoC when you compare XSX and PS5, XSX has significantly lower GPU clocks using the same arch (just more of it) and is more efficient as a result of that (and doesn't need exotic liquid metal cooling lol).

PS5 is closer in clocks to what desktop RDNA2 GPUs run at, which is kind of insane, but I bet it's really inefficient compared to XSX.
 
Isn't the whole rendering process basically "cheating"? Culling removes hidden objects that should be there; to me that sounds like cheating! The PS5 can dynamically adjust clock speeds to shift power between CPU and GPU where needed: cheating!
Overcoming boundaries and finding clever solutions to problems can be seen as smart engineering or as cheating, imo.
The end result is what matters: better games, faster loading times, more complex geometry. I say MORE CHEATING!
Culling and VRS are different: culling should not change the final image, so whether it's off or on it will look the same, whereas VRS does change the final image, so off vs. on look different.
It's a compromise: better FPS for a worse image. It's probably a choice worth making, since better FPS usually matters more than better IQ. Though if my understanding is right that this is something the Xbox can do and the PS5 can't, then it's an extra tool in the Xbox developers' toolkit, so it's an advantage for the Xbox.

edit: Just saw Chris's post; if VRS doesn't work with deferred, then that makes it nearly useless.
 
Any comparisons for when the X is running at a higher resolution, but not at max? That would be more interesting, to see the VRS tradeoffs and the VRS/resolution gains side by side. Currently both consoles are hard-constrained to the max, so the PS5 will show up better.
We only have those two pics that show identical scenes on both machines at the same resolution, unfortunately. But yes, it would be interesting to compare when the XSX runs at a higher resolution, to see if the tradeoffs are worth it.

But what would actually be more interesting is to compare VRS on and off on the XSX, because the Xbox, even without VRS, is already supposed to average a higher resolution than the PS5.
 
Power draw won't tell you much about the utilisation of the SoC when you compare XSX and PS5, XSX has significantly lower GPU clocks using the same arch (just more of it) and is more efficient as a result of that (and doesn't need exotic liquid metal cooling lol).

PS5 is closer in clocks to what desktop RDNA2 GPUs run at, which is kind of insane, but I bet it's really inefficient compared to XSX.
I meant it as an insight into the utilization of either console, not as a comparison between the two. For example, we know that the XSX can draw as much as 200 W; if a game is only drawing 150 W, we know that somewhere in the system we have underutilization.
 
……How did people come to the conclusion that VRS can’t be used for deferred rendering?
Unless Gears 5, Gears Tactics, COD:MW, Metro: Exodus, etc. are all forward renderers in every aspect, I think VRS is and will be a useful addition to any hardware that supports it.
On this topic, we literally have a chart from the D3D team showing how much VRS can save during each pass in Gears 5:
[Image: chart of per-pass VRS savings in Gears 5]

I'd say VRS makes even more sense for deferred rendering than for forward rendering, because you probably don't need pixel-perfect lighting for it to look good. Soooo... what are people smoking?

(Ugh how do I make a picture smaller in a post??)
(Guess next time I'll resize the screenshot before uploading it)
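For what it's worth, here's a minimal sketch (standard D3D12; error handling omitted, and the helper names QueryVrsCaps/RateImageDim are just illustrative) of how a renderer would confirm the Tier 2 support those per-pass savings rely on, and size the screen-space shading-rate image:

```cpp
// Minimal D3D12 VRS Tier 2 capability query sketch (error handling omitted).
#include <d3d12.h>

struct VrsCaps
{
    bool tier2;     // per-draw combiners + screen-space shading-rate image
    UINT tileSize;  // pixels covered by one rate-image texel (8, 16 or 32)
};

VrsCaps QueryVrsCaps(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 options6 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6,
                                &options6, sizeof(options6));

    VrsCaps caps = {};
    caps.tier2    = options6.VariableShadingRateTier >= D3D12_VARIABLE_SHADING_RATE_TIER_2;
    caps.tileSize = options6.ShadingRateImageTileSize;
    return caps;
}

// Each R8_UINT texel of the rate image drives one tileSize x tileSize block of
// screen pixels, so the image dimensions are the render resolution divided by
// the tile size, rounded up.
UINT RateImageDim(UINT renderDim, UINT tileSize)
{
    return (renderDim + tileSize - 1) / tileSize;
}
```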
 
Unless Gears 5, Gears Tactics, COD:MW, Metro: Exodus, etc. are all forward renderers in every aspect, I think VRS is and will be a useful addition to any hardware that supports it.

For Call of Duty, specifically the Modern Warfare reboot, they went back to forward rendering and added tiled light culling, so their gains make perfect sense, but they use a software implementation of variable shading regardless, so no platform is left out of that benefit. On Metro Exodus EE, from this single data point the gains from VRS amount to maybe 1 or 2 extra frames, which ultimately doesn't mean much in the grand scheme of things ...

As for the Gears games, I don't know exactly what the register pressure of their shaders looks like, but if they are struggling there, they could potentially see some benefit ...

As we go further into this new generation, the proposition for VRS becomes a lot weaker, as most developers will likely continue to use deferred rendering. It becomes harder to justify the technique when games are also going to keep increasing geometric density ...
 
No one knows what will happen in the future; it's always better to have more options, regardless of whether it's software or HW VRS.

The industry trend is pretty clearly more deferred with tons of smaller primitives ...

In an alternate reality where forward rendering and larger primitives had stayed the dominant design, variable shading could've been highly useful, but that is likely not the direction we are headed ...
 
Indeed the industry is moving toward deferred rendering (it just makes sense), but that doesn't mean everyone writes their deferred renderer as one giant pass. With small primitives, you simply disable VRS on your base pass and enable it on everything after.

Why did D3D, Vulkan and the vendors make VRS a thing at all? Because it offers fine-grained control that's hard to achieve even with a compute shader. If you just dispatch your compute shader normally and branch away in the shader itself, that lane (thread) is masked but still occupies a hardware lane - you don't save anything. And if you wanted to roll your own dispatch control, how could it ever be simpler than adding a few semantics and calling a few intrinsics in your existing shaders?
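A minimal sketch of that "off for the base pass, on for everything after" idea against the D3D12 API (Tier 2 assumed; the rate-image creation and the actual draw recording are elided, and the function/parameter names are just illustrative):

```cpp
// Sketch: full-rate G-buffer pass, coarse shading only on the later passes.
#include <d3d12.h>

void RecordFrame(ID3D12GraphicsCommandList5* cmdList, ID3D12Resource* rateImage)
{
    // Base/G-buffer pass: tiny triangles, so stay at full 1x1 rate and leave
    // the shading-rate image unbound (null combiners behave as passthrough).
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);
    cmdList->RSSetShadingRateImage(nullptr);
    // ... record G-buffer draws ...

    // Lighting / post passes: coarsen to 2x2, but let the screen-space image
    // pull important tiles back to full rate.
    const D3D12_SHADING_RATE_COMBINER combiners[2] = {
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH, // ignore per-primitive SV_ShadingRate
        D3D12_SHADING_RATE_COMBINER_MIN          // finer of (2x2, image) wins
    };
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, combiners);
    cmdList->RSSetShadingRateImage(rateImage);
    // ... record lighting / post-processing draws ...
}
```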
 
Indeed the industry is moving toward deferred rendering (it just makes sense), but that doesn't mean everyone writes their deferred renderer as one giant pass. With small primitives, you simply disable VRS on your base pass and enable it on everything after.

Why did D3D, Vulkan and the vendors make VRS a thing at all? Because it offers fine-grained control that's hard to achieve even with a compute shader. If you just dispatch your compute shader normally and branch away in the shader itself, that lane (thread) is masked but still occupies a hardware lane - you don't save anything. And if you wanted to roll your own dispatch control, how could it ever be simpler than adding a few semantics and calling a few intrinsics in your existing shaders?

Sometimes the industry standardizes really dubious features like geometry shaders and tessellation, both of which are clearly getting replaced by mesh shading or compressed geometry, so instead of assuming they'll always have perfect foresight, I think it's healthy to exercise some skepticism ...

Sometimes they do get things right, as with compute shaders and ray tracing, but who really knows what's going on in their heads if they can get it wrong with things like tiled/sparse resources or, potentially, variable shading?
 
Indeed the industry is moving toward deferred rendering (it just makes sense), but that doesn't mean everyone writes their deferred renderer as one giant pass. With small primitives, you simply disable VRS on your base pass and enable it on everything after.

Why did D3D, Vulkan and the vendors make VRS a thing at all? Because it offers fine-grained control that's hard to achieve even with a compute shader. If you just dispatch your compute shader normally and branch away in the shader itself, that lane (thread) is masked but still occupies a hardware lane - you don't save anything. And if you wanted to roll your own dispatch control, how could it ever be simpler than adding a few semantics and calling a few intrinsics in your existing shaders?

It wouldn't be the first time a feature failed to win developers' approval.

https://aras-p.info/blog/2018/03/21/Random-Thoughts-on-Raytracing/

On the other hand, as Intern Department quipped, DirectX has a long history of “revolutionary” features that turned out to be duds too. DX7 retained mode, DX8 Matrox tessellation, DX9 ATI tessellation, DX10 geometry shaders & removal of FP16, DX11 shader interfaces, deferred contexts etc.

https://vr.tobii.com/sdk/learn/foveation/rendering/in-game-engines/#forward-rendering

I can look for other sources; I've seen multiple devs say that VRS is only useful for forward rendering. Imo VRS is great for use behind motion blur and depth of field. I'm not so enthusiastic about the other use cases where it drops IQ; dynamic resolution can give the same result.

Forward Rendering
To support applications targeting platforms with hardware foveation (such as Qualcomm 845 or PC with VRS) and using forward (or forward+) rendering, then the answer is probably “yes”:

  • Implementation and maintenance costs are very low, and you will see some performance improvement. The amount of the performance improvement will depend on how expensive your main scene rendering shaders are and the resolution being targeted.
  • Some, but not all, applications may show visual artifacts with naïve implementations. If the artifacts are severe enough modification of some content or pipeline changes can help to eliminate them. Many artifacts can be handled by simple tweaks to content or exclusion of some graphical elements from the foveation; see Artifacts and Mitigation.
Deferred Rendering
To support applications using deferred rendering the answer is “maybe”:

  • For applications that exhibit significant rendering performance issues, G-Buffer warping foveation can produce significant processing savings. However, this approach has large fixed processing overheads and its implementation can be complex; some existing processing and shaders may need to be modified to be aware of the warp. Lastly a solid understanding of mathematics is needed to create the warp and matching filtering.
  • Variable rate shading approaches are generally unsuitable for foveation of deferred renderers. Processing savings are limited, and artifacts can be severe.

EDIT: And it is not useful with micropolygons.
 
The Gears team, with one of the best implementations of VRS, have said that they didn't make full use of it; you can look at their blog post on it for details, etc.
The MS dev, during the VRS presentation, also said he sees it like the MSAA hardware, in that they're looking into other uses for it.
The Hot Chips presentation, if I recall correctly, mentions the very small die space taken up by it.

I would personally rather have higher resolution for the parts that are more important than the bits that aren't, or better fps.
There's a big difference between a technical breakdown and what people will truly notice during gameplay.
I'm all for the 400% zoom on a corner of the screen, but that is not indicative of perceptual quality during most people's gameplay.

VRS, and VRS hardware usage, is still very early days.
 