Current Generation Games Analysis Technical Discussion [2020-2021] [XBSX|S, PS5, PC]

For Call of Duty, specifically the Modern Warfare reboot, they went back to forward rendering and added tiled light culling, so their gains make perfect sense, but they use a software implementation of variable-rate shading regardless, so no platform is left out of that benefit. On Metro Exodus EE, from this single data point the gains from VRS amount to maybe 1 or 2 extra frames, which ultimately doesn't mean much in the grand scheme of things ...

As for the Gears games, I don't know exactly what the register pressure of their shaders looks like, but if they are struggling then they could potentially see some benefits ...

As we go further into this new generation, the proposition for VRS becomes a lot weaker, as most developers will likely continue to use deferred rendering. It becomes harder to justify using the technique when games are also going to increase geometric density in the future ...
You can take a look at the article from The Coalition in the post just above yours. There they detailed the savings and how they used VRS in different passes. They also plan to make deeper and heavier use of it in their next title, where it would be engineered into the game from the start. They also plan to make use of both hardware and software implementation.
 
Sometimes the industry standardizes really dubious features like geometry shaders and tessellation, both of which are clearly being replaced by mesh shading or compressed geometry, so instead of assuming that they'll always have perfect foresight, I think it's healthy to exercise some skepticism ...

Sometimes they do get things right, as with compute shaders and ray tracing, but who really knows what's going on in their heads if they're getting it wrong with things like tiled/sparse resources or potentially variable shading?

It wouldn't be the first time a feature failed to win developers' approval.

https://aras-p.info/blog/2018/03/21/Random-Thoughts-on-Raytracing/



https://vr.tobii.com/sdk/learn/foveation/rendering/in-game-engines/#forward-rendering

I can search for other sources; I saw multiple devs saying VRS is only useful for forward rendering. IMO VRS is great when used behind motion blur and depth of field. I am not so enthusiastic about other use cases where it drops IQ; dynamic resolution can give the same result.



EDIT: And it is not useful with micropolygons.
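To make the motion blur / depth of field point concrete, here's a rough sketch of how a shading-rate image could be filled based on how blurred each screen tile will end up. This is purely illustrative C++ run on the CPU, not any shipping engine's code; the tile stats, thresholds, and the small ShadingRate enum are all assumptions, and a real implementation would do this in a compute shader and write D3D12_SHADING_RATE values into an R8_UINT shading-rate image.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical per-tile inputs; names and units are illustrative only.
struct TileStats {
    float maxMotionPixels; // longest motion vector in the tile (pixels per frame)
    float maxCocRadius;    // largest depth-of-field circle of confusion (pixels)
};

// Simplified stand-in for the hardware D3D12_SHADING_RATE values.
enum class ShadingRate : uint8_t { Rate1x1, Rate2x2, Rate4x4 };

// Pick a coarser rate where motion blur or DoF will hide the lost detail anyway.
ShadingRate PickRate(const TileStats& t)
{
    const float blur = std::max(t.maxMotionPixels, t.maxCocRadius);
    if (blur > 8.0f) return ShadingRate::Rate4x4; // heavily blurred: shade very coarsely
    if (blur > 2.0f) return ShadingRate::Rate2x2; // mildly blurred: half rate per axis
    return ShadingRate::Rate1x1;                  // in focus and static: full rate
}

// Fill a shading-rate image with one entry per screen tile (e.g. 16x16 pixels).
std::vector<ShadingRate> BuildRateImage(const std::vector<TileStats>& tiles)
{
    std::vector<ShadingRate> rates(tiles.size());
    for (size_t i = 0; i < tiles.size(); ++i)
        rates[i] = PickRate(tiles[i]);
    return rates;
}
```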
And? How does arguing that there were previous features which didn't become standard help prove that VRS will be a bust too? Why argue by logical fallacy instead of with technical details? I mentioned in my post why including the feature makes things easier for devs who want to take advantage of it; apparently you didn't read it beyond the first sentence. You've seen multiple devs saying it's not useful for deferred renderers, where and how? List them all so we can all be convinced.

By the way, the article you linked talks about foveated rendering, where they are looking for a huge image quality drop in areas the eyes aren't focused on, in return for a huge performance gain. Of course VRS isn't ideal for that, because its goal is to provide a moderate performance gain with no visible IQ drop.

And yeah, we know it doesn't do much in a base pass when primitives are small, but what about the other passes? Again, I linked an article from a D3D team member literally showing how it can be helpful, and people just conveniently ignore it.
 

I will search for other comments about deferred rendering and VRS without foveated rendering; here they talk about overhead and artefacts. And I am doing exactly what you did: you said the industry chose this path, so it must be the good one.

I did not say it is not useful at all, but it is not useful for deferred rendering and games with a high polycount, or Unreal Engine 5.

And current games mostly use deferred renderers, and future games will probably continue in this direction because having more polygons will be a must.
 
But isn't Gears 5 deferred? Deferred is the standard pipeline in UE4, and The Coalition switched to deferred already in Gears of War 4.

"Mike Rayner: Gears of War: Ultimate Edition was running on a heavily upgraded Unreal 3 engine and was still using a forward rendered path. With Gears 4 we have entirely switched over to Unreal Engine 4's fully deferred rendering engine. "

https://www.eurogamer.net/articles/digitalfoundry-2016-gears-of-war-4-tech-interview
 
Wasn't it already known that you can do a mix of hardware and software VRS, as many engines aren't 100% one or the other?
That in no way downplays hardware VRS at all from what I can see.

Yeah, this is clearly stated in the blog:

"One possibility is to use a hybrid of both techniques, switching between VRS techniques based on the characteristics of the rendering pass."

https://devblogs.microsoft.com/directx/gears-vrs-tier2/

Having access to HW VRS seems to give you the best of both worlds tbh.
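As a rough idea of what that hybrid could look like from the API side, here's a minimal C++ sketch assuming a Tier 2 capable device. The pass split and the compute fallback are placeholders, not The Coalition's actual code, but it does show why the hardware path is attractive: it's basically two command-list calls per pass.

```cpp
#include <d3d12.h>

// Illustrative pass setup assuming a Tier 2 device and an already-built
// R8_UINT shading-rate image (e.g. derived from last frame's luminance or
// motion vectors). Pass structure and the software fallback are placeholders.
void RecordPass(ID3D12GraphicsCommandList5* cmd,
                ID3D12Resource* shadingRateImage,
                bool useHardwareVRS)
{
    if (useHardwareVRS)
    {
        // Hardware path: combine the per-draw base rate with the screen-space
        // rate image, taking the coarser (MAX) of the two.
        const D3D12_SHADING_RATE_COMBINER combiners[2] = {
            D3D12_SHADING_RATE_COMBINER_PASSTHROUGH, // per-primitive rate
            D3D12_SHADING_RATE_COMBINER_MAX          // screen-space rate image
        };
        cmd->RSSetShadingRate(D3D12_SHADING_RATE_1X1, combiners);
        cmd->RSSetShadingRateImage(shadingRateImage);
        // ... record the draws for this pass ...
    }
    else
    {
        // Software path (sketch): run the pass as a compute shader that shades
        // one sample per coarse block and fills in the neighbours, which is
        // roughly what "software VRS" means in this discussion.
        // ... cmd->Dispatch(...) at a reduced shading rate ...
    }

    // Reset so later passes are unaffected (null combiners = passthrough).
    cmd->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);
    cmd->RSSetShadingRateImage(nullptr);
}
```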
 
Wasn't it already known that you can do a mix of hardware and software VRS, as many engines aren't 100% one or the other?
That in no way downplays hardware VRS at all from what I can see.

No engine is 100% deferred, because the transparency pass is forward rendered. But this is far from being a very important feature, and worse, it does not work well with tiny polygons, and many engines will have tons of polygons in the future.

Not being good with deferred engines is one thing; being inefficient with micropolygons means it is not future proof.

Outside of its use for motion blur and depth of field, I am not impressed by VRS; it's not a must-have IMO, not like compute shaders, ray tracing, or ML.
 

He said software VRS is a better choice for deferred rendering than HW VRS, which is designed for forward rendering.
You're right (I mean he's right) this time. Hardware VRS has a granularity of 4x4 pixels, whereas in a compute shader you can make the granularity whatever you want. Depending on how the hardware dispatches pixel shaders, using a pixel shader in a deferred pass may or may not have better performance than compute, due to locality.

Will future game engines forgo fixed function and use compute for everything? If so, then hardware VRS is useless (among many other things).
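To illustrate what "pick your own granularity" means for the software/compute route, here's a toy CPU version of the idea (not HLSL and not any engine's implementation): shade one sample per block and replicate it, where the block size is whatever you choose rather than one of the fixed hardware rates.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Color { float r, g, b; };

// Toy CPU illustration of "software VRS": shade once per block and replicate
// the result. In a real engine this would be a compute shader over the
// lighting or post-process buffers; the point is that blockW/blockH are free
// parameters, not a fixed hardware rate.
template <typename ShadeFn>
void ShadeCoarse(std::vector<Color>& image, std::size_t width, std::size_t height,
                 std::size_t blockW, std::size_t blockH, ShadeFn shade)
{
    for (std::size_t y = 0; y < height; y += blockH)
        for (std::size_t x = 0; x < width; x += blockW)
        {
            const Color c = shade(x, y); // one expensive shade per block
            for (std::size_t by = y; by < std::min(y + blockH, height); ++by)
                for (std::size_t bx = x; bx < std::min(x + blockW, width); ++bx)
                    image[by * width + bx] = c; // cheap replication
        }
}
```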
 
HW VRS has no future because it's missing on one platform. That's the main downside to HW VRS.
That has nothing to do with it. VRS reminds me of PS3 Quincunx (another hardware-based solution), which was also on one platform and hailed by many as a silver bullet, at first, because on paper it sounded perfect. But it was not worth it, and all devs stopped using it because the final image was too blurry. Too many downsides.

Currently VRS has too many downsides IMO. There is the vaseline effect, which is actually noticeable versus the image without VRS (already noticed by DF). Also, it does not mix well with RT in Doom Eternal, as shown by NXGamer, with flickering in some ray traced lights (impacted by VRS) on XSX.
 
HW VRS has no future because it's missing on one platform. That's the main downside to HW VRS.
Isn't that feature even on Switch? Well, VRS 1.0, but it should be there. It shouldn't be that big of a problem to "exclude" one platform from this feature. After all, a software version might still work, and on PC it is widely supported. So there is really no reason developers shouldn't use it. This feature can be used to "downscale" and therefore save performance. On the other hand, it can be used in the other direction: render "shaded" parts at a higher resolution than screen resolution. E.g. Nvidia highlighted its use for something like VR, where you want higher resolution in the parts the eyes look at.
 
That has nothing to do with it. VRS reminds me of PS3 Quincunx (another hardware-based solution), which was also on one platform and hailed by many as a silver bullet, at first, because on paper it sounded perfect. But it was not worth it, and all devs stopped using it because the final image was too blurry. Too many downsides.

Currently VRS has too many downsides IMO. There is the vaseline effect, which is actually noticeable versus the image without VRS (already noticed by DF). Also, it does not mix well with RT in Doom Eternal, as shown by NXGamer, with flickering in some ray traced lights (impacted by VRS) on XSX.
I think people like us vastly overestimate the significance of such downsides in the bigger picture. We get all nitpicky and declare that such and such is 'bad' because it's not perfect, but don't really take into account the more general audience that games are for.

And really, when you step back, 3d graphics have always been using 'compromised' solutions to things and developers themselves are using lots of smoke and mirrors to create the illusion of certain effects or scenery or whatever. It's all been part of the game. This idea that solutions need to be nearly perfect seems to be a very weird recent phenomenon that I think has kind of started from the likes of Digital Foundry getting so popular, where their nitpicking and whatnot has changed how people see things. Not that I'm saying DF is bad, far from it, but I think we can definitely trace a lot of discussion about technical aspects of games back to DF's influence. And I think there have been some downsides to this, like a lack of big picture perspective.

Getting more specifically back to VRS, there's definitely a lack of really top quality examples out there, but looking at the Tier 2 implementation in Gears Tactics, I'd say it's showing serious promise. Have we not learned our lesson here? Don't throw the baby out with the bath water when we're still in early days? Give it some time and let's see how it grows. If we can get notable performance headroom for fairly minimal image quality reduction, I think that's generally going to be an acceptable compromise, especially for something like a console with a fixed performance budget. So even if it's never perfect, that doesn't mean it's not useful.

Doom Eternal is a great example. If you watch that NX Gamer video, you will see these 'downsides' being laboriously pointed out with pixel-peeping analysis. But this is not representative of what even the average gaming enthusiast will notice, much less the average gamer in general. If in a different situation it was the difference between a solid 60fps and 60fps with drops, would you still say it's not worth it? Would dropping VRS, at the cost of some other feature being reduced in quality, really be better?
 
The case of DOOM isn't that good of an example; they have a very robust DRS system in place. So in cases where you'd be fragment bound, the resolution would drop one step lower, and the gain from VRS in that case is mostly visual.
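For context, a DRS loop of that kind usually boils down to something like the generic sketch below (not id Software's actual controller; the scale table and thresholds are made up): measure GPU frame time against the budget, step the internal resolution down when over it and back up when there's headroom, which is why any VRS saving on top of it mostly shows up as image quality rather than frame rate.

```cpp
#include <algorithm>

// Generic dynamic-resolution controller sketch (not DOOM Eternal's logic).
// Resolution moves one step at a time to avoid oscillation.
struct DrsController
{
    float scales[5] = { 0.6f, 0.7f, 0.8f, 0.9f, 1.0f }; // internal resolution scales
    int   step      = 4;                                // start at native

    // gpuMs: measured GPU frame time; budgetMs: e.g. 16.6 for 60 fps.
    float Update(float gpuMs, float budgetMs)
    {
        if (gpuMs > budgetMs * 0.98f)        // over budget (e.g. fragment bound): drop one step
            step = std::max(step - 1, 0);
        else if (gpuMs < budgetMs * 0.85f)   // comfortable headroom: raise one step
            step = std::min(step + 1, 4);
        return scales[step];                 // scale applied to render targets
    }
};
```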
 

He said software VRS is a better choice for deferred rendering than HW VRS, which is designed for forward rendering.

The question becomes: which is easier to implement? A performance advantage can easily be outweighed by cost (cost in terms of implementation skill/time and/or computational cost). Software isn't free to create, and shitty implementations can waste whatever advantages are offered by going the software route.

Some may simply go with hardware VRS and live with the limitations on the deferred side because it's easier to implement (if it is in fact easier to take advantage of hardware VRS).

COD uses a software-based VRS solution, but it required implementation throughout the whole rendering pipeline.
 