AMD FidelityFX on Consoles

There is a small cost, but dynamically reducing resolution has a cost too: you have to be conservative with reductions to make sure you don't tear or drop frames.
True, but I think the cost is less than it seems to be with VRS. Look at it here:
https://overclock3d.net/reviews/sof...c_performance_review_and_optimisation_guide/9
At 1080p it can actually make your game run slower as well as look worse. Then again, perhaps not everyone thinks it looks worse, like thicc_gaf above me (the tree looks better! :LOL: I'm not seeing it, mate; it's like looking at the same game on Switch vs PS5 and choosing the Switch's graphics).

There are two places I think VRS can work. The first is when the image is blurry (DOF, and perhaps, as function mentions, motion blur, though I'm not sure how the motion blur vector is going to play with the sampling pattern).

But the second place is where it can have an absolutely massive benefit: if you could see where the person's pupils are actually looking. That's easy-ish in VR (e.g. https://vr.tobii.com/); not sure about on your TV (and of course it will only work for a single person). I suppose this exists in labs but not yet in home devices?
But with this I can see massive gains from VRS. Forget a couple of percent FPS improvement; I believe you could get hundreds of percent FPS improvement, which would enable much better image quality.
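As a rough illustration of why eye tracking could pay off so much, here's a sketch that builds a per-tile shading-rate mask around a known gaze point and estimates the relative shader cost. The radii, tile size, and rate tiers are all made-up illustrative values, not from any real foveated renderer:

```python
import numpy as np

def foveated_rate_mask(width, height, gaze_x, gaze_y, tile=8):
    """Per-tile shading rate around a known gaze point: 1 = full rate,
    2 = 2x2 coarse, 4 = 4x4 coarse. The falloff radii are made up."""
    tiles_y, tiles_x = height // tile, width // tile
    ys, xs = np.mgrid[0:tiles_y, 0:tiles_x]
    # Distance of each tile centre from the gaze point, in pixels
    dist = np.hypot((xs + 0.5) * tile - gaze_x, (ys + 0.5) * tile - gaze_y)
    mask = np.full((tiles_y, tiles_x), 4, dtype=np.int32)  # periphery: 4x4
    mask[dist < 0.35 * width] = 2                          # mid region: 2x2
    mask[dist < 0.15 * width] = 1                          # fovea: full rate
    return mask

mask = foveated_rate_mask(1920, 1080, gaze_x=960, gaze_y=540)
full = (mask == 1).sum(); half = (mask == 2).sum(); quarter = (mask == 4).sum()
# Pixel-shader invocations relative to shading every pixel at full rate
cost = (full * 1.0 + half * 0.25 + quarter * 0.0625) / mask.size
print(f"relative shading cost: {cost:.2f}")
```

Even with these arbitrary radii, most of the screen ends up coarsely shaded; that's where the "hundreds of percent" intuition comes from, since the bulk of the pixel-shader work concentrates in a small foveal region.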
 
In the presentation about VRS: if you're using something like a Sobel filter (and you can swap in whatever similarity or edge-detection algorithm you want), you'll find most of these filters flag areas with low luminance. Since there is very little light there, the algorithm will likely shade the dark areas very coarsely, and if there happen to be highlights, the filter should be effective enough to keep those sharp.

If you're looking purely for performance benefits from VRS, low lighting, or items covered in dark shadows with few highlights or little direct lighting, are a good win for these types of filters. For Tier 2 performance savings, you select a filter of your choice and use a slider to set the selection thresholds. Where you apply VRS is up to the developer: it can be on lighting, on textures, etc. It doesn't have to be applied to everything. Developers get a great deal of control over how they want to save performance.

Developers are unlikely to create their own smart filters until much later on, when they are looking for a specific effect or have found, after enough research, the best areas to apply coarse shading. Right now they are likely just going to use well-researched filters with thresholds to get the effect they want. I don't think much thought goes into it beyond determining the thresholds.
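A minimal sketch of that filter-plus-threshold selection, assuming a Sobel edge detector over a luminance buffer; the tile size and threshold "slider" value are invented for illustration:

```python
import numpy as np

# Sobel kernels for horizontal and vertical gradients
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def filter3x3(img, k):
    # Naive 3x3 correlation with zero padding (avoids a SciPy dependency)
    out = np.zeros_like(img)
    p = np.pad(img, 1)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def sobel_vrs_mask(luma, tile=16, threshold=0.1):
    """1 = full-rate tile (edges/highlights present), 2 = coarse 2x2 tile
    (flat or dark). `threshold` is the tunable slider from the post."""
    mag = np.hypot(filter3x3(luma, KX), filter3x3(luma, KY))
    th, tw = luma.shape[0] // tile, luma.shape[1] // tile
    per_tile = mag[:th * tile, :tw * tile].reshape(th, tile, tw, tile).max(axis=(1, 3))
    return np.where(per_tile > threshold, 1, 2)

# Flat dark frame with one bright vertical edge: only the tiles touching
# the edge keep full-rate shading.
luma = np.zeros((64, 64)); luma[:, 32] = 1.0
mask = sobel_vrs_mask(luma)
```

On a real frame you'd feed this mask into the Tier 2 shading-rate image rather than printing it, tuning `threshold` exactly like the slider described above.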
 
Blurrier textures are a result of an overly aggressive VRS implementation.
From this site, comparing no VRS to VRS Performance mode in Hitman 3:

PAFgCUi.jpg





And why do you think @Globalisateur has a passion to "make this feature seem useless"?
VRS in Hitman is Tier 1 and just blindly applies it to objects based on categorisations from draw calls. It is not really relevant to the discussion of VRS as found in games like Gears 5 or the Wolfenstein games.
 
True, but I think the cost is less than it seems to be with VRS. Look at it here:
https://overclock3d.net/reviews/sof...c_performance_review_and_optimisation_guide/9
At 1080p it can actually make your game run slower as well as look worse. Then again, perhaps not everyone thinks it looks worse, like thicc_gaf above me (the tree looks better! :LOL: I'm not seeing it, mate; it's like looking at the same game on Switch vs PS5 and choosing the Switch's graphics).

Hitman is using Tier 1 VRS, so that's on a per-draw-call basis. So for things like the tree and the mantelpiece that show a fair bit of degradation, it probably means the entire object has been set to 1x2 / 2x1 in "quality" mode and 2x2 in performance mode.

Something like FidelityFX Variable Shading will allow for a far more optimal use of VRS as it's Tier 2 based. Why use Tier 1? Well, it's an Intel sponsored game and Intel only support Tier 1 at the moment. :oops:
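To make the Tier 1 vs Tier 2 distinction concrete, here's a toy simulation (not any real API): Tier 1 coarse-shades a whole "draw" uniformly, while a Tier 2-style screen-space mask only coarsens flat tiles, so detailed areas keep full rate. The tile size and variance threshold are arbitrary:

```python
import numpy as np

def coarse_shade(img, rate):
    """Shade one sample per rate x rate block and replicate it, mimicking
    2x2 (etc.) coarse pixel shading."""
    h, w = img.shape
    block = img[::rate, ::rate]
    return np.repeat(np.repeat(block, rate, axis=0), rate, axis=1)[:h, :w]

def tier2_shade(img, tile=8, threshold=0.05):
    """Per-tile rate selection: keep full rate where a tile has detail
    (variance above threshold), coarse-shade flat tiles at 2x2."""
    out = img.copy()
    for y in range(0, img.shape[0], tile):
        for x in range(0, img.shape[1], tile):
            t = img[y:y + tile, x:x + tile]
            if t.var() < threshold:
                out[y:y + tile, x:x + tile] = coarse_shade(t, 2)
    return out

# Toy "frame": flat sky in the top half, a detailed checkerboard (stand-in
# for foliage like the tree) in the bottom half
yy, xx = np.mgrid[0:64, 0:64]
img = np.where(yy >= 32, (yy + xx) % 2, 0).astype(float)

tier1 = coarse_shade(img, 2)  # whole draw forced to 2x2, per-draw style
tier2 = tier2_shade(img)      # only the flat tiles drop to 2x2
err1 = np.abs(img - tier1).mean()
err2 = np.abs(img - tier2).mean()
```

Here the uniform per-draw rate wipes out the detailed half of the image while the per-tile mask loses nothing, which is the intuition for why Tier 2 can degrade far less for the same savings.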

There are two places I think VRS can work. The first is when the image is blurry (DOF, and perhaps, as function mentions, motion blur, though I'm not sure how the motion blur vector is going to play with the sampling pattern).

It will be interesting to see how effectively FidelityFX plays with motion. From the AMD presentation I saw on GPU Open, it seems promising in how it can track areas that are moving, and developers could probably roll their own implementation that plays even better with whatever motion blur systems they use. Something like a racing game with large amounts of peripheral blur could probably gain too. The faster you're moving, and the more you need high, responsive frame rates, the better it could help you keep them... I guess time will show us what it can deliver (or not)!

Another couple of areas where VRS has the potential to increase efficiency are with very small polygons (generating fewer redundant pixel shader threads), and also with transparency. I think transparency could be quite significant. In the case of something like Unreal Engine 4, the highest quality transparencies are through a separate forward rendered pass. For something like lit smoke, which uses many layers of transparency, the cost is very high in terms of shader instructions (potentially more so than in terms of bandwidth or fill rate). Being able to use half or even quarter rate shading could potentially reduce or prevent huge dips, even if you needed to run a post process smoothing filter over the fog pass.
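The transparency argument is easy to put numbers on. This is a back-of-the-envelope cost model with invented figures (the pixel coverage and layer count are placeholders), just to show how coarse shading divides the pixel-shader invocation count for heavily layered effects:

```python
# Back-of-the-envelope pixel-shader cost for layered transparency. The
# coverage and layer numbers are invented; the point is the scaling.
def smoke_cost(pixels_covered, layers, rate=1):
    invocations_per_pixel = 1.0 / (rate * rate)  # 2x2 coarse -> 1/4
    return pixels_covered * layers * invocations_per_pixel

full    = smoke_cost(800_000, 12)           # 12 overlapping smoke layers
half    = smoke_cost(800_000, 12, rate=2)   # 2x2 shading
quarter = smoke_cost(800_000, 12, rate=4)   # 4x4 shading
print(full / half, full / quarter)  # -> 4.0 16.0
```

Because the cost multiplies across layers, a 4x or 16x cut in invocations applies to the whole smoke stack at once, which is why the worst dips are where coarse shading has the most headroom to help.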

But the second place is where it can have an absolutely massive benefit: if you could see where the person's pupils are actually looking. That's easy-ish in VR (e.g. https://vr.tobii.com/); not sure about on your TV (and of course it will only work for a single person). I suppose this exists in labs but not yet in home devices?
But with this I can see massive gains from VRS. Forget a couple of percent FPS improvement; I believe you could get hundreds of percent FPS improvement, which would enable much better image quality.

IMO it's a pity MS fumbled Kinect so massively. A Kinect 3 that was calibrated to know which area of the screen you're looking at could have really allowed foveated rendering to move out of the headset space. TVs are huge now, and the proportion of the screen you need full detail on is normally pretty small. Kinect 2 was already adept at tracking the active user and filtering out voices from other people in something like Skype. It could handle multiple players and track them as they move around too; maybe more than one area of high detail on the screen would have been possible...
 
Hitman is using Tier 1 VRS, so that's on a per-draw-call basis. So for things like the tree and the mantelpiece that show a fair bit of degradation, it probably means the entire object has been set to 1x2 / 2x1 in "quality" mode and 2x2 in performance mode.

Something like FidelityFX Variable Shading will allow for a far more optimal use of VRS as it's Tier 2 based. Why use Tier 1? Well, it's an Intel sponsored game and Intel only support Tier 1 at the moment.
Like I asked before: show me some of these Tier 2 games with good comparisons?
Surely they exist. I just had a Google:
https://www.overclock3d.net/news/so...mes_to_3dmark_-_can_you_tell_the_difference/1
Published: 7th December 2019
That's more than a year ago, so surely they exist in games now?

This is apparently Tier 2, with/without:
vrs2.png

Dunno mate, I can still see the difference.

For a laugh I took the right image, reduced it in size by 25% (then blew it back up so it's the same size for ease of comparison).
OK, this isn't a fair example because I'm effectively supersampling, and as I don't have the game I can't render it at a lower resolution to compare, but I wouldn't be surprised if just using a lower resolution made the game look better than VRS.
vrs3.png


What are some games with VRS Tier 2? Is there a list? Maybe I can try one out.

edit: Kinect
IMO it's a pity MS fumbled Kinect so massively. A Kinect 3 that was calibrated to know which area of the screen you're looking at could have really allowed foveated rendering to move out of the headset space.
Though Kinect's problem was not so much the accuracy as the latency. No doubt a Kinect 3 would also improve on that. It's gotta be <15 ms, I think, for it to have a shot at being viable for making games.
 
What are some games with VRS Tier 2? Is there a list? Maybe I can try one out.

You're looking at it from the wrong perspective: you should compare lower-resolution scenes without VRS to higher-resolution scenes with VRS Tier 2. That's the tradeoff being made to hit performance targets such as 60 FPS on consoles.
 
Like I asked before: show me some of these Tier 2 games with good comparisons?
Surely they exist. I just had a Google:
https://www.overclock3d.net/news/so...mes_to_3dmark_-_can_you_tell_the_difference/1
Published: 7th December 2019
That's more than a year ago, so surely they exist in games now?

This is apparently Tier 2, with/without:
vrs2.png

Dunno mate, I can still see the difference.

For a laugh I took the right image, reduced it in size by 25% (then blew it back up so it's the same size for ease of comparison).
OK, this isn't a fair example because I'm effectively supersampling, and as I don't have the game I can't render it at a lower resolution to compare, but I wouldn't be surprised if just using a lower resolution made the game look better than VRS.
vrs3.png


What are some games with VRS Tier 2? Is there a list? Maybe I can try one out.
There are no titles that have fully plumbed Tier 2 VRS into their engines. Everything has mainly been paid-for checkbox drop-ins via Intel, AMD, etc., i.e. quick wins for marketing checkboxes.

I think Gears 5 is the closest because they are a first party developer, but they also admit that it's not fully plumbed into the engine.

I would wait.
 
VRS in Hitman is Tier 1 and just blindly applies it to objects based on categorisations from draw calls. It is not really relevant to the discussion of VRS as found in games like Gears 5 or the Wolfenstein games.

I'm not sure what this has to do with my post.

The context of the post you quoted was to provide an example of what an overly aggressive VRS setting does to the perception of detail in textures.
Tier 2 provides higher granularity in the shading rate, so it's not going to magically solve the fact that textures appear blurred if the shading rate becomes too aggressive (i.e. too coarse).

Chris Wallis from The Coalition describes a number of steps his team needed to take to make sure VRS Tier 2 is applied in a way that has no perceptual impact (in Quality mode).
Tier 2 makes it possible to better control the shading rate; it doesn't mean an aggressive setting will stop being perceptible.



You're looking at it from the wrong perspective: you should compare lower-resolution scenes without VRS to higher-resolution scenes with VRS Tier 2. That's the tradeoff being made to hit performance targets such as 60 FPS on consoles.

I think this would be a good study on the benefits of VRS.

1 - Establish an X framerate / median frametimes for comparison
2 - Set resolution Y with Z settings and VRS Tier 2 on
3 - Given Z settings, see how low the render resolution needs to go down (percentage of Y) to get X framerate
4 - Compare the two.
5 - Repeat the test for Quality, Balanced and Performance modes of VRS.
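Those five steps could be scripted roughly like this. `capture_run` is a hypothetical hook into whatever benchmarking/capture tool you have, and the render scales and mode names are placeholders:

```python
# Sketch of the study above. capture_run is a hypothetical callable that
# runs a capture at given settings / VRS mode / render scale and returns
# the median FPS for that run.
TARGET_FPS = 60                                           # step 1
VRS_MODES = [None, "quality", "balanced", "performance"]  # step 5

def find_scale_for_target(settings, vrs_mode, capture_run):
    """Step 3: walk the render scale down until median FPS hits the target."""
    for scale in (100, 95, 90, 85, 80, 75, 70):
        fps = capture_run(settings, vrs_mode, scale)
        if fps >= TARGET_FPS:
            return scale, fps
    return None, None

def run_study(settings, capture_run):
    # Steps 2-5: find the scale each VRS mode needs, then compare (step 4)
    return {mode: dict(zip(("render_scale", "median_fps"),
                           find_scale_for_target(settings, mode, capture_run)))
            for mode in VRS_MODES}
```

The interesting output is the gap between the render scale needed with no VRS and with each VRS mode: if Performance mode holds 60 FPS at a meaningfully higher scale than no-VRS, the tradeoff described above is a net win.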
 
There are no titles that have fully plumbed Tier 2 VRS into their engines.
OK, thanks (seems a bit weird since it's obviously been around since 2019). So on costs vs benefits, it's all just speculation at the moment.

@BRiT I'm guessing rendering at ~10% higher resolution with lower quality (because of VRS) will still look worse,
but as iroboto says there are no titles that do VRS Tier 2 yet, apart from Gears 5 somewhat.
https://en.wikipedia.org/wiki/Gears_5 seems this game has been out a while; does no one have it? Can't they do a comparison? I just checked and it's not in my library to download.
 
Gears 5 didn't add VRS until just recently, as listed in the blogs linked (December 2020).
In short, if not for the fact that they published a blog/article detailing the use of VRS Tier 2 in Gears 5/Tactics, most people (everybody?) would be none the wiser about its usage.
 
OK, thanks (seems a bit weird since it's obviously been around since 2019). So on costs vs benefits, it's all just speculation at the moment.

@BRiT I'm guessing rendering at ~10% higher resolution with lower quality (because of VRS) will still look worse,
but as iroboto says there are no titles that do VRS Tier 2 yet, apart from Gears 5 somewhat.
https://en.wikipedia.org/wiki/Gears_5 seems this game has been out a while; does no one have it? Can't they do a comparison? I just checked and it's not in my library to download.
heh.. well ;)
DX11 was around for years before the DX11 consoles released, and we didn't really start doing much in games with compute shaders until the tail end of that generation.

With the baseline moving towards DX12U you'll see faster support, but history points to progress still being slow.
 
Like I asked before: show me some of these Tier 2 games with good comparisons?
Surely they exist. I just had a Google:
https://www.overclock3d.net/news/so...mes_to_3dmark_-_can_you_tell_the_difference/1
Published: 7th December 2019
That's more than a year ago, so surely they exist in games now?

The only game I know of off the top of my head is Gears 5, but as The Coalition have said, this is a first-go retrofit, and they expect to be able to do better when they've integrated it more fully.

The benchmark you linked is pre-DX12U, probably using Nvidia-specific extensions. DX12U landed around the middle of last year. Like sampler feedback, it's a new feature and only a tiny proportion of GPUs even support it. Even the PS5 doesn't, so you can't blame anyone for not focusing on it in these cross-gen, Covid-affected times. I think it'll gain more traction over time.

This is apparently Tier 2, with/without:
vrs2.png

Dunno mate, I can still see the difference.

Yeah, you can definitely still see it, especially as it's a very clear shot with no apparent motion blur, no DoF, no contrast-reducing post-process filters like bloom, no upscaling, high-quality texture filtering, etc. Which I guess is what you want a benchmark to do: stress the feature but also show juicy FPS gains. To try and find out how aggressively this might be being used, I Googled around and found these results on WCCFTech:

https://wccftech.com/3dmark-updated...-tier-2-only-available-on-nvidia-turing-gpus/

T2 VRS.PNG

These are gains of 60 to 72.5% depending on the card, so this is seemingly quite an aggressive application. In real use you'd probably opt for higher IQ and smaller gains (OTOH this benchmark is probably intentionally as pixel-shader-bound as possible to stress this particular feature).

For a laugh I took the right image, reduced it in size by 25% (then blew it back up so it's the same size for ease of comparison).
OK, this isn't a fair example because I'm effectively supersampling, and as I don't have the game I can't render it at a lower resolution to compare, but I wouldn't be surprised if just using a lower resolution made the game look better than VRS.
vrs3.png

I don't even have a VRS-capable card, so I'm in an even worse spot than you for checking this out first hand. Ultimately, I think VRS is going to be just another tool in the box. It's going to be hard to use it in such a way that direct freeze-framed comparisons can't reveal it, but then again those reveal most performance-saving techniques.

When there's blur, darkness, flames, smoke, movement etc I think it'll be a lot harder to spot, and hopefully impossible when you're swept up in the game. And it's maybe these stress points where it would be really good to free up performance to spend on the screen filling particles or shadow casting dynamic lights (or ... whatever).

And I guess if a monster is right in your face trying to chew your head off, you could afford to drop the shading rate on that wall in the distance; it's the detail in the monster that's most likely to be the focus of your attention. Though again, this would need developer time to implement. No free lunch and all that, I suppose...
 
I wonder what the performance penalty for using Super Resolution on console will be. Clearly AMD wouldn't bother with it if it wasn't going to do something useful, but at the same time we've seen other techniques to improve image quality come with some performance hit. Even on the PS4 Pro, which had checkerboard support built in, it wasn't used that much.
 
Like I asked before: show me some of these Tier 2 games with good comparisons?
Surely they exist. I just had a Google:
https://www.overclock3d.net/news/so...mes_to_3dmark_-_can_you_tell_the_difference/1
Published: 7th December 2019
That's more than a year ago, so surely they exist in games now?

This is apparently Tier 2, with/without:
vrs2.png

Dunno mate, I can still see the difference.

For a laugh I took the right image, reduced it in size by 25% (then blew it back up so it's the same size for ease of comparison).
OK, this isn't a fair example because I'm effectively supersampling, and as I don't have the game I can't render it at a lower resolution to compare, but I wouldn't be surprised if just using a lower resolution made the game look better than VRS.
vrs3.png


What are some games with VRS Tier 2? Is there a list? Maybe I can try one out.

edit: Kinect

Though Kinect's problem was not so much the accuracy as the latency. No doubt a Kinect 3 would also improve on that. It's gotta be <15 ms, I think, for it to have a shot at being viable for making games.

LOL. Of course you can tell: lower shading rates seem to be applied to every pixel in the frame. VRS is about variable rate shading, not coarse-shading everything to the point where it's just a lower-resolution image.
 
Like I said, the PS5 does not support DX12 Ultimate's VRS, nor will it ever. That topic is moot.

I get that Microsoft's technical marketing department was successful at making a big deal out of the PS5 not supporting specific implementations in Microsoft's own API (well, duh), and Sony not caring about publicly sharing the technical details of their SoC happened to play right into their hands.
Claiming the PS5 doesn't support Microsoft's "patented VRS" implementation is as obvious and meaningless as claiming the Series X doesn't support GNM fragment shaders (well, duh).


As for the performance drops, they seem so rare that I doubt Codemasters would find it worth the development time and cost to implement whatever foveated-rendering technique Sony supports in their hardware. Like all practical VRS implementations I've seen so far, it doesn't look like it did any miracles for the DX12 consoles anyway.
The frame dips on the PS5 version are far from rare.
If the PS5 had VRS or something similar, Codemasters would have used it like they did on the XSX version.
 
The stutter problem on PS5 appeared after a later patch (and still exists); it has nothing to do with the lack of VRS.

I think they're asking "if it had VRS, would the stutters be less or perhaps non-existent?" To answer that at all, we have to know what is causing the stutters. So what causes them on PS5? Can they be attributed to any specific thing or things? Is it pushing the CPU too hard? Too high a resolution? Memory bandwidth? I/O? What is it?
 
I think they're asking "if it had VRS, would the stutters be less or perhaps non-existent?" To answer that at all, we have to know what is causing the stutters. So what causes them on PS5? Can they be attributed to any specific thing or things? Is it pushing the CPU too hard? Too high a resolution? Memory bandwidth? I/O? What is it?
The answer is in the diff between commits before and after the patches ;)
 
The answer is in the diff between commits before and after the patches ;)

:LOL: Great!

Are we certain it didn't happen before the patch? Perhaps it's a lack of experimentation and experience with the various play modes before the patch. After all, there seems to be a high number of possibilities and combinations to go through in such limited time to rule everything out.
 