AMD FidelityFX on Consoles

Thanks for your link. So the whole hardware VRS they are touting as the holy-grail technique only possible on Xbox Series consoles is backed by this one picture? And only one? The same VRS everyone is fighting for or against in multiple forums, with just Microsoft's promise that it's awesome and that picture?

But this is not a comparison. They are showing different parts of the image containing very different details and even different lighting. You can't compare anything with that. It's pointless.

This is how you compare the sharpness of textures: you need to focus on a few textures (the same ones) with not much geometry hiding them, like:
You've gotten VRS wrong on several occasions now, passing poor texture filtering off as VRS. Instead of just posting how you believe VRS works, it may make more sense to watch the video posted here:
https://forum.beyond3d.com/posts/2193565/

VRS is a desirable feature for developers. DRS is a pure resource-saving scaling technique; by and large, its only purpose is to keep frame rates up.
VRS is slower than DRS at reclaiming performance, and it's proven to be slower.
The reason is fairly simple: you're not reducing all the workloads by X% like DRS does. And that's precisely the purpose of VRS - to allow developers to control how much of their visuals they are willing to give up to regain performance. The performance gain of VRS can be based entirely on what developers want you to see. Therefore VRS has actual artistic control where DRS does not.

The way you look at comparisons is that you're looking for places where the clarity of an area is made significantly worse by VRS vs DRS; but done properly, the opposite would also be true. There should be areas in a VRS image that look immaculately sharper than the DRS image, unless they are running at the same resolution. That's sort of the point I think you're missing: how much control developers have over VRS, and how much work is needed to integrate it into the engine for it to work. It's much easier to deploy a DRS solution - you're just rescaling all the buffers by x%.

With VRS, you have to decide when and where you want to use it when you write out to the buffers. You may only use it for lighting, or for post-processing, or for both, or more. The versions of VRS you have seen so far are likely not much more than drop-in features touching a small part of the engine. When the games get very mature, you'll see VRS doing work in numerous parts of the pipeline.
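
To make the "decide when and where" part concrete, here's a minimal sketch of what Tier 1 looks like on the D3D12 side. The pass helpers are hypothetical stand-ins for whatever the engine does; RSSetShadingRate is the real call, and the rate simply stays in effect for the draws recorded after it:

[code]
// Minimal D3D12 Tier 1 VRS sketch: one shading rate chosen per pass/draw.
#include <d3d12.h>

// Hypothetical pass helpers, only here to show where the rate changes.
void RecordScene(ID3D12GraphicsCommandList5* cmd);
void RecordParticles(ID3D12GraphicsCommandList5* cmd);
void RecordUI(ID3D12GraphicsCommandList5* cmd);

void RecordFrame(ID3D12GraphicsCommandList5* cmd)
{
    // Hero geometry: full 1x1 rate, one pixel shader invocation per pixel.
    cmd->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);
    RecordScene(cmd);

    // A pass the developer is happy to coarsen (particles, some post work):
    // one invocation now covers a 2x2 block of pixels.
    cmd->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
    RecordParticles(cmd);

    // Back to full rate for UI and text, which should never be coarsened.
    cmd->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);
    RecordUI(cmd);
}
[/code]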

I think it's great you've got this passion to make this feature seem useless. But you're really comparing the infancy of VRS to the mature technology of DRS and making a judgement call on two features that solve entirely different problems.

[Attachment: blog3.jpg]


You can't do this with DRS. You'll run a depth of field, but if DRS kicks in, the detail in the actual focused area is lost. With VRS, you can maintain the 4K detail on the flower and lose it everywhere else. That's how you should look at VRS. Some will just use standard image filters to find like pixels and make the shading rate coarser to reclaim performance. But as time goes on, developers can certainly develop much smarter ways to select which areas to affect.
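
And the "image filter picks the areas" approach is basically the Tier 2 path: a filter writes per-tile rates into a small screen-space image, and the command list points at it. A rough D3D12 sketch, assuming the R8_UINT rate image has already been created, filled and transitioned elsewhere:

[code]
// D3D12 Tier 2 VRS sketch: bind a screen-space shading-rate image so a
// filter (contrast, DoF circle of confusion, motion, etc.) decides the
// rate per tile instead of the draw call doing it.
#include <d3d12.h>

void BindShadingRateImage(ID3D12GraphicsCommandList5* cmd,
                          ID3D12Resource* rateImage) // assumed R8_UINT,
                                                     // one texel per tile
{
    // combiners[0]: per-draw rate vs per-primitive rate.
    // combiners[1]: that result vs the screen-space image; MAX keeps
    // whichever is coarser.
    const D3D12_SHADING_RATE_COMBINER combiners[2] = {
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH,
        D3D12_SHADING_RATE_COMBINER_MAX
    };
    cmd->RSSetShadingRate(D3D12_SHADING_RATE_1X1, combiners);
    cmd->RSSetShadingRateImage(rateImage);
}
[/code]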
 
The picture on its own doesn't make it particularly clear, but that's demonstrating VRS-based supersampling. As well as being used to reduce shading rates (so saving on shading work), it can be used to increase shading rates above 1:1, effectively supersampling the selected tile. So in the case of the above image, you could use very highly focused supersampling on the area the user is directly looking at, and run at a standard rate elsewhere.
Oh OK, thanks, I get it.
So this will give the opposite of how it's been used in games till now.

Usually in games it's (or so I gather):
VRS = looks worse but goes quicker than no VRS

whereas here it's used as:
VRS = looks better but goes slower than no VRS
 
Yes, companies never make multi-million-dollar stupid decisions [cough]Intel's CPUs from the last 5 years, Kinect, GeForce FX, 3D TVs[/cough]
But seriously, I understand the concept if it's what I think it is, and I agree it can be good in theory, especially for VR if you can see where the person is looking. So I googled it and I see this image on Nvidia's explainer page:
[Attachment: VRSS-Tech-Text-BONEWORKS-final.jpg]

Surely this is backwards? You typically want the important stuff in the middle of the screen at the highest quality, so the blue should be the normal shading rate and the lower quality (i.e. faster shading) should be on the outside?
Maybe I don't understand it? Either that or Nvidia f-ed up their explainer image.
https://developer.nvidia.com/vrworks/graphics/variablerateshading

Also, 0-14% improvement in framerate seems very meagre? I would have guessed 50% minimum, but I see from that MS page that it can easily look bad, so you want to minimize its use.
For eye-tracked VR though, I can see its benefits being massive.

I was being sarcastic. :D One company having a "D'oh!" moment happens all the time; it happening across an entire industry is a much rarer occurrence.

14% is hardly meager. 14% equals a 2.3 ms saving per frame when targeting 60 fps (or 4.6 ms when targeting 30 fps). If VRS is providing that kind of saving consistently, then you can spend that 2.3 ms on other aspects of rendering.
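(Quick arithmetic for reference: 1000 ms / 60 ≈ 16.7 ms per frame, and 16.7 ms × 0.14 ≈ 2.3 ms; at 30 fps the frame budget doubles, so the saving roughly doubles too.)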

Furthermore, VRS is relatively new. It's kind of hard to imagine that VRS's ultimate impact on games is going to be realized at the point where supported hardware is just a small fraction of the total userbase.
 
Blurrier textures are a result of an overly aggressive VRS implementation.
From this site, comparing no VRS to VRS Performance mode in Hitman 3:
VRS Quality mode looks OK, but VRS Performance looks terrible, yet it's giving you bugger-all extra FPS (a very low single-digit % improvement; I assume there are much better ways to improve your framerate that hurt the visuals less - sure, it's nice to have another option, but still).

Certainly not the strongest case for using it.
What's the title it improves the most in independent benchmarks (with screenshots)?
 
Certainly not the strongest case for using it.
What's the title it improves the most in independent benchmarks (with screenshots)?

According to The Coalition's Chris Wallis it's Gears Tactics with a 14% boost in framerate at 4K / Insane settings.
However I can't figure out if they're claiming these are average or "up to" numbers, and I can't find any comparisons in the internets yet.
 
According to The Coalition's Chris Wallis it's Gears Tactics with a 14% boost in framerate at 4K / Insane settings.
However I can't figure out if they're claiming these are average or "up to" numbers, and I can't find any comparisons in the internets yet.

Also, I don't know if that's strictly going from no VRS to VRS Tier 2 or upgrading from VRS Tier 1 to VRS Tier 2, or whether those numbers were from Gears Tactics or purely Gears. The verbiage there is a little vague to me too.

Exact article verbiage:
The team at The Coalition haven’t stopped innovating since bringing Tier 1 VRS to Gears Tactics, and have brought Tier 2 VRS support to both Gears 5 and Gears Tactics.

The team saw similarly large perf gains from VRS Tier 2 – up to 14%! – this time with no noticeable visual impact. See for yourself if you can tell which side of the first image in the blog has VRS enabled to get a perf boost, and which side doesn’t.

Ah, they have a performance breakout, with VRS Quality at 14% on Insane or 8% on Ultra compared to no VRS.

[Attachment: upload_2021-3-11_11-14-15.png]
 
The performance gain of VRS can be based entirely on what developers want you to see. Therefore VRS has actual artistic control where DRS does not.
Blurrier textures are a result of an overly aggressive VRS implementation.
From this site, comparing no VRS to VRS Performance mode in Hitman 3:

[Attachment: PAFgCUi.jpg]

And why do you think @Globalisateur has a passion to "make this feature seem useless"?
Yeah, and that can happen. That's entirely up to them to control how they want it to look. If you look beyond the tree and ground texture, you'll see that the branches are still sharp, though. And that wouldn't be characteristic of DRS, but it is characteristic of VRS.

I think if someone wants to say VRS is bad, they need to look at the full image, as opposed to just picking out where it's done poorly; heck, we should start by identifying what is VRS and what is not, since that is probably more troublesome to begin with. Different filters will select different parts of the image for coarser shading. Though with Hitman 3 they have to do it by draw call (Tier 1), whereas with Tier 2 they can control where and what they want to coarsely shade using a screen-space image.
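
Side note: which tier you get is something the engine can query up front. A minimal D3D12 sketch, not tied to any particular engine:

[code]
// Query which VRS tier the device exposes. Tier 1 = per-draw rates only;
// Tier 2 adds the screen-space shading-rate image.
#include <d3d12.h>

D3D12_VARIABLE_SHADING_RATE_TIER QueryVrsTier(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opts6 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_D3D12_OPTIONS6, &opts6, sizeof(opts6))))
    {
        // For Tier 2, opts6.ShadingRateImageTileSize also reports the
        // tile size each texel of the rate image covers.
        return opts6.VariableShadingRateTier;
    }
    return D3D12_VARIABLE_SHADING_RATE_TIER_NOT_SUPPORTED;
}
[/code]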

To be clear, VRS will never look better than native unless you are super sampling with it. VRS 4K will never look better than 4K native. VRS is about removing detail (or gaining detail for super sampling). VRS 4K could look better than a lower dynamic resolution however, and that's really where the comparisons matter.

I don't really care why he wants it to fail. That's not important to me. I think what's important is that we can have an equal grounding of understanding of what VRS is and how it works. It's okay to be biased against it, provided we are speaking on how it works technically the same.
 
Also, I don't know if that's strictly going from no VRS to VRS Tier 2 or upgrading from VRS Tier 1 to VRS Tier 2, or whether those numbers were from Gears Tactics or purely Gears. The verbiage there is a little vague to me too.

Exact article verbiage:


Ah, they have a performance breakout, with VRS Quality at 14% on Insane or 8% on Ultra compared to no VRS.

[Attachment 5342]


Yes, it's 14% at quality settings vs. VRS Off. It seems to be a measurement taken at a particular frame and not an average taken from a playthrough in a certain zone.

It also seems to be more dependent on GPUs whose bottleneck is in shader throughput, so I'm guessing that e.g. the RX 6800 might have shown larger gains (same number of shader engines, ROPs and memory bandwidth, but 75% of the compute units).
 
Yes, it's 14% at quality settings vs. VRS Off. It seems to be a measurement taken at a particular frame and not an average taken from a playthrough in a certain zone.

It also seems to be more dependent on GPUs whose bottleneck is in shader throughput, so I'm guessing that e.g. the RX 6800 might have shown larger gains (same number of shader engines, ROPs and memory bandwidth, but 75% of the compute units).
Yup, VRS is about freeing up pixel-shader-limited scenarios in terms of gaining performance.
 
Isn't VRS a non-starter at the moment anyway? Don't TVs etc. need to be built with support for it? And until >80% of the TVs plugged into consoles have this, isn't it just a niche feature used primarily for point scoring?
 
Isn't VRS a non-starter at the moment anyway? Don't TVs etc. need to be built with support for it? And until >80% of the TVs plugged into consoles have this, isn't it just a niche feature used primarily for point scoring?
Erm.. I think you might be mistaking VRS (Variable Rate Shading) for VRR (Variable Refresh Rate).
 
Isn't VRS a non-starter at the moment anyway? Don't TVs etc. need to be built with support for it? And until >80% of the TVs plugged into consoles have this, isn't it just a niche feature used primarily for point scoring?
That's VRR.

VRS is for the game engine to take advantage of.
 
Erm.. I think you might be mistaking VRS (Variable Rate Shading) for VRR (Variable Refresh Rate).
....
That's VRR.

VRS is for the game engine to take advantage of.

Ah. OK. Carry on then :D
 
To be clear, VRS will never look better than native unless you are super sampling with it.
As you pointed out before, it's about the whole frame though.
So if you're spending more pixel shading on the more important parts of the frame compared to parts that can take a hit, then the overall frame as a whole could look better than without VRS.

I expect VRS isn't going anywhere, and software VRS will also be used. This is just very early days.
To be clear, I'm not expecting VRS to be night and day; when it is, it will more than likely be due to bad application / being too aggressive.
 
As you pointed out before, it's about the whole frame though.
So if you're spending more pixel shading on the more important parts of the frame compared to parts that can take a hit, then the overall frame as a whole could look better than without VRS.

I expect VRS isn't going anywhere, and software VRS will also be used. This is just very early days.
To be clear, I'm not expecting VRS to be night and day; when it is, it will more than likely be due to bad application / being too aggressive.
If you are less aggressive with VRS, it will start losing its perf gains, and ultimately at some point it's going to be pointless to even use it. I gather using it is probably not totally free of resources? Like CBR: it costs something to even do any kind of CBR, but the benefits are often worth it.
 
According to The Coalition's Chris Wallis it's Gears Tactics with a 14% boost in framerate at 4K / Insane settings.
However I can't figure out if they're claiming these are average or "up to" numbers, and I can't find any comparisons in the internets yet.
They say "up to 14%" at least twice, so that means anywhere between 0% and 14% improvement. It's the old advertising trick - "up to 90% off the price" - people just hear the 90%.

If you look at the screenshots you can see why there's very little FPS improvement: most of the image looks like it's turned off.
E.g. here the clock looks the same in all three, but the reflection looks like ass.

quality VRS, no VRS, performance VRS
[Attachment: vrs.png]


Is this worth a ~5% FPS increase?

I managed to find Gears Tactics with VRS:
https://overclock3d.net/reviews/sof...c_performance_review_and_optimisation_guide/9
It's not pretty. Note this is an older implementation; it's more fine-grained now, but the general technique remains the same.
Performance at 4K does seem better, greater than 14% even, though it looks like ass; performance at 1080p, though, could be worse (plus, you guessed it, it looks like ass).

Though I could see VRS being good if your original image is very blurry (e.g. heavy DoF), as then some extra smudging probably won't be that visible.
Maybe that's the only time they should use it?
 
Note this is an older implementation; it's more fine-grained now, but the general technique remains the same.
There are pretty big differences between Tier 1 and Tier 2.
If you were just talking high-level conceptually, then fine. If you're using the image as an example, not so much.
 
Oh OK, thanks, I get it.
So this will give the opposite of how it's been used in games till now.

Usually in games it's (or so I gather):
VRS = looks worse but goes quicker than no VRS

whereas here it's used as:
VRS = looks better but goes slower than no VRS

Yeah, that's what I've understood from a couple of presentations I've watched online. As you say, in the desktop / PC space pretty much everything has been about saving on pixel shader work to boost fps or resolution. It only helps if you're pixel shader bound though - beyond a point you could hammer the IQ right down and gain almost nothing.

How one decides how to apply it looks to be everything. Luminance seems to be one emerging way - for example in 8x8 blocks where contrast changes are small. I think you touched on that in a later post mentioning DoF, but motion blur could be another good application, as could areas of the screen where lighting differences reduce perceivable detail (either very dark or washed out due to very high brightness).

AMD's FidelityFX implementation uses both luminance and motion vectors, so you can look for areas of low contrast and predict their movement between frames. Motion vectors might also be tuned to work optimally with object motion blur, I think.
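
For anyone curious, a luminance + motion selector boils down to something like the sketch below. This is not AMD's actual FidelityFX VRS code (that ships as a compute shader); it's just a CPU-style illustration of the idea, and the two thresholds are made-up tuning knobs:

[code]
// Conceptual per-tile classifier: flat and/or fast-moving 8x8 tiles get a
// coarser D3D12 shading rate, everything else stays at full detail. The
// returned value would be written into the Tier 2 R8_UINT rate image.
#include <d3d12.h>

D3D12_SHADING_RATE ClassifyTile(const float tileLuma[64], // 8x8 luminance samples
                                float motionPixels,       // avg motion length (px)
                                float varianceThreshold,  // made-up tuning knob
                                float motionThreshold)    // made-up tuning knob
{
    // Luminance variance: low variance means a flat area where coarse
    // shading is hard to notice.
    float mean = 0.0f;
    for (int i = 0; i < 64; ++i) mean += tileLuma[i];
    mean /= 64.0f;

    float variance = 0.0f;
    for (int i = 0; i < 64; ++i) {
        const float d = tileLuma[i] - mean;
        variance += d * d;
    }
    variance /= 64.0f;

    const bool flat   = variance < varianceThreshold;
    const bool moving = motionPixels > motionThreshold;

    // Flat *and* moving can go very coarse (4x4 needs the "additional
    // shading rates" cap); flat *or* moving goes 2x2; the rest stays 1x1.
    if (flat && moving) return D3D12_SHADING_RATE_4X4;
    if (flat || moving) return D3D12_SHADING_RATE_2X2;
    return D3D12_SHADING_RATE_1X1;
}
[/code]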

FidelityFX was made available as an addon to UE4 a couple of weeks back iirc. Maybe that'll kick start more widespread investigation of the technique...

If you are less aggressive with VRS, it will start losing its perf gains, and ultimately at some point it's going to be pointless to even use it. I gather using it is probably not totally free of resources? Like CBR: it costs something to even do any kind of CBR, but the benefits are often worth it.

There is a small cost, but there's a cost to dynamically reducing resolution too, as you have to be conservative with reductions to make sure you don't tear or drop frames. Plus, as you drop resolution, scan conversion creates more suboptimal pixel quads with dead threads at shading time.

As you pointed out before, it's about the whole frame though.
So if you're spending more pixel shading on the more important parts of the frame compared to parts that can take a hit, then the overall frame as a whole could look better than without VRS.

I expect VRS isn't going anywhere, and software VRS will also be used. This is just very early days.
To be clear, I'm not expecting VRS to be night and day; when it is, it will more than likely be due to bad application / being too aggressive.

Yeah, it's just another trick in the bag to balance performance and IQ. If you have a good solution like The Coalition's, it's best to always use it and then use dynamic resolution drops when you're still not making frame time.

And if a user (probably on PC) decides they'd rather take the hit to shading clarity to keep edges sharp and frame rate up (1337 pro gamer pwn) then let them.

Choices like these only seem to be bad to people who don't have them.
 
You've gotten VRS wrong on several occasions now, passing poor texture filtering off as VRS. Instead of just posting how you believe VRS works, it may make more sense to watch the video posted here:
https://forum.beyond3d.com/posts/2193565/

VRS is a desirable feature for developers. DRS is a pure resource-saving scaling technique; by and large, its only purpose is to keep frame rates up.
VRS is slower than DRS at reclaiming performance, and it's proven to be slower.
The reason is fairly simple: you're not reducing all the workloads by X% like DRS does. And that's precisely the purpose of VRS - to allow developers to control how much of their visuals they are willing to give up to regain performance. The performance gain of VRS can be based entirely on what developers want you to see. Therefore VRS has actual artistic control where DRS does not.

The way you look at comparisons is that you're looking for places where the clarity of an area is made significantly worse by VRS vs DRS; but done properly, the opposite would also be true. There should be areas in a VRS image that look immaculately sharper than the DRS image, unless they are running at the same resolution. That's sort of the point I think you're missing: how much control developers have over VRS, and how much work is needed to integrate it into the engine for it to work. It's much easier to deploy a DRS solution - you're just rescaling all the buffers by x%.

With VRS, you have to decide when and where you want to use it when you write out to the buffers. You may only use it for lighting, or for post-processing, or for both, or more. The versions of VRS you have seen so far are likely not much more than drop-in features touching a small part of the engine. When the games get very mature, you'll see VRS doing work in numerous parts of the pipeline.

I think it's great you've got this passion to make this feature seem useless. But you're really comparing the infancy of VRS to the mature technology of DRS and making a judgement call on two features that solve entirely different problems.

[Attachment: blog3.jpg]


You can't do this with DRS. You'll run a depth of field, but if DRS kicks in, the detail in the actual focused area is lost. With VRS, you can maintain the 4K detail on the flower and lose it everywhere else. That's how you should look at VRS. Some will just use standard image filters to find like pixels and make the shading rate coarser to reclaim performance. But as time goes on, developers can certainly develop much smarter ways to select which areas to affect.

VRS can also be used for artistic purposes where the technical ability of the hardware is being stretched, something DRS can't provide. That level of granularity - choosing which parts of the image to selectively shade more coarsely so that other areas can be kept at higher quality - is the key reason for it.

Within a couple of years we'll hopefully start seeing some titles utilizing VRS for resource gains and creative artistic touches.

Blurrier textures are a result of an overly aggressive VRS implementation.
From this site, comparing no VRS to VRS Performance mode in Hitman 3:

[Attachment: PAFgCUi.jpg]

And why do you think @Globalisateur has a passion to "make this feature seem useless"?

In some ways I think the 2nd image actually looks better. Not in the foliage per se, but the tree; I know that their implementation of VRS is nowhere near the pinnacle of fine-tuned usage (it likely isn't even Tier 2), but what's there creates a DoF-like effect on that part of the image, which could potentially be leveraged more precisely in the future for artistic touches.

You don't always need top resolution, or even image clarity, in every part of the frame; sometimes less actually is more. This screenshot isn't the best example of that, but it can serve as a hint of it, and eventually I think even technical analysis outlets will have to take that into account when analyzing output resolution, where appropriate.
 