AMD FidelityFX on Consoles

The misinformation here comes from people conflating DX12 Ultimate's VRS with every other foveated rendering technique out there. All of these are examples of foveated rendering:

VRWorks VRS -> Nvidia's tools for implementing foveated rendering across several APIs, with a focus on virtual reality applications.
DirectX 12 Ultimate VRS -> Microsoft's implementation of foveated rendering with a focus on 2D panels, obviously limited to devices that use DirectX 12, i.e. PCs (Intel, Nvidia, AMD) and Xbox consoles.
XR Adreno Foveation -> Qualcomm's name for supporting foveated rendering, with a focus on VR, in their Adreno GPUs starting with the Snapdragon 845 back in 2018; it supports OpenGL and Vulkan. They seemingly extended the feature with a focus on 2D panels for the Adreno 660 in the Snapdragon 888 (though that could be just marketing and it may be supported on all Adreno 6xx). I doubt Adreno Foveation/VRS is supported in DirectX 12, even if Qualcomm does write drivers for Windows.
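Whatever the API, the underlying idea is the same: shade some screen tiles at a coarser rate than one invocation per pixel. Here's a toy cost model of my own (not any vendor's API; the 8x8 tile size and the "outer half at 2x2" split are arbitrary choices for illustration):

```python
# Toy model of variable rate shading: each 8x8 tile of a frame gets a shading
# rate (1 = one shader invocation per pixel, 2 = one per 2x2 block, etc.).
# Foveated rendering just means choosing coarser rates away from the focal point.

def shading_cost(width, height, tile, rate_for_tile):
    """Total shader invocations for a frame with per-tile rates."""
    invocations = 0
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            r = rate_for_tile(tx, ty)          # 1, 2 or 4 (block edge in pixels)
            invocations += (tile // r) * (tile // r)
    return invocations

# Uniform 1x1 shading vs. shading the outer half of a 1080p screen at 2x2:
full = shading_cost(1920, 1080, 8, lambda tx, ty: 1)
fov  = shading_cost(1920, 1080, 8,
                    lambda tx, ty: 1 if 480 <= tx < 1440 else 2)
print(full, fov, f"{1 - fov / full:.0%} fewer invocations")
```

In this made-up setup the coarse outer band cuts pixel-shader invocations by roughly a third; the whole trick is choosing where coarse shading won't be noticed.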



On whether the PS5 supports some form of foveated rendering or not, I'll just leave these three links to patents registered by Sony (AFAIR there are a bunch more, but these are what I found) and let people draw their own conclusions:

- 2018 (originally filed in 2016): https://pdfaiw.uspto.gov/.aiw?Docid=20190384381
- 2019 (originally filed in 2016): https://pdfaiw.uspto.gov/.aiw?Docid=20190156794
- Filed in 2014 by Mark Cerny: https://patentscope.wipo.int/search/en/detail.jsf?docId=WO2015153167

Has Sony talked about the foveated rendering implementations they have in store for the PS5? No, they haven't, just like they haven't talked about almost anything low-level in their SoC to any tech/gaming outlet or at conferences, unlike Microsoft.
I think if they ever do talk about it, it will be if/when they discuss optimizations for PSVR2.



And the only reason other people keep posting incorrect or weirdly glorified claims about a broadly studied and widely implemented concept like foveated rendering is that one console supports an API-specific implementation and the other does not.
This goes both ways.

As for the "dismissive" part, well yes, I agree that DX12's VRS will be somewhat easy to dismiss as long as real-life results look like this:

[image: IgjPNzR.jpeg]



Perhaps DX12U's Tier 2 implementations will bring more relevant performance boosts, but so far it doesn't look like a game-changing feature.
Yeah, looking at some results I wonder if there would be the same VRS hype if the PS5 also had a hardware implementation. I doubt it.
 
Besides, I'm not expecting any long-lasting gems of optimization genius to emerge during the COVID era.
Well I haven't written code in any significant way in quite a while so I wouldn't know personally, but my friends who do are saying remote work is the best thing that ever happened to their productivity.

Not having to shower, commute or get dressed from the waist down is apparently doing wonders for their ability to stare at the same 50 lines of code for several hours in a row.
 
how about this

https://devblogs.microsoft.com/directx/gears-vrs-tier2/
  1. Tier 2 VRS sees higher performance gains when running at higher resolutions. While actual results will vary based on engine and content, we found that resolutions of 1080p or lower generally saw diminishing returns from Tier 2 VRS.
so with technologies like AMD super resolution we can assume that VRS will be pointless?
 
Techniques and options are never pointless.
 
so with technologies like AMD super resolution we can assume that VRS will be pointless?

I don't think VRS is pointless, since as iroboto mentioned there's probably room for improvement and we may still not know the full extent of its potential.
But at least at the moment it isn't the groundbreaking feature that some people make it out to be.

Back in late 2019 when the 3DMark VRS test was released, many people thought this was going to be revolutionary:


[image: e3C7cJ2.png]

(50% performance boost OMG!!)


Then some actual games started to implement the technique and this benchmark became as representative of gaming performance as their PCIe 4.0 bandwidth test.
 
Yes, but we already have examples of 20% perf gains, with the final words being:

While we were able to implement VRS for all the passes that gave us the biggest bang for the buck, it was not plumbed into the entire engine due to time constraints. A deeper integration would allow VRS to provide even larger GPU savings

So it's not impossible to gain even more. It seems to depend on the game, the team, and time/money.

https://devblogs.microsoft.com/directx/gears-vrs-tier2/
 
14% is 14%. VRS is awesome when used well.

I'm glad that Gears 5 runs at 60 fps instead of 52 fps. :)
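(For reference, the arithmetic connecting those two numbers, taking the 52 fps base from the post above and the 14% GPU saving from the article:)

```python
# A 14% cut in GPU frame time maps to a ~16% fps gain, since fps = 1 / frame_time.
base_fps = 52
frame_time = 1 / base_fps
vrs_fps = 1 / (frame_time * (1 - 0.14))
print(round(vrs_fps, 1))   # ~60.5
```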
Not if it destroys the image in the process by applying a vaseline filter. How would Gears look without VRS? Maybe VRS off plus a 5% resolution reduction would make the game look better than 8% higher resolution with blurrier textures?

The only way to test the pros and cons of the tech is to compare a multiplat game with VRS on and off. We have that game with Dirt 5, and the results IMO are not good, at least in this game.
 
We have writeups done by Digital Foundry on Gears Tactics as well. You keep bringing up Gears, but you have yet to show a single spot where you can pinpoint what you're describing as being attributable to VRS.
 
The linked article even has a direct comparison showing the same scene with VRS on and off.
Thanks for your link. So the whole hardware VRS they are touting as the holy grail technique only possible on Xbox Series consoles is backed by this one picture? And only one? The same VRS everyone is fighting for or against in multiple forums, with just Microsoft's promise that it's awesome and that picture?

But this is not a comparison. They are showing different parts of the image containing very different details and even different lighting. You can't compare anything with that. It's pointless.

[image: ZUjkqI5.png]


This is how you compare texture sharpness: you need to focus on a few textures (the same ones in both shots) without much geometry hiding them, like:

Outriders (caused by texture filtering difference):
[image: hKkt3k3.png]


Mortal Shell (caused by slower I/O streaming):
[image: sZsndzY.png]
 
The blog post has links to other blog posts about VRS, and there you can find more pictures like this one:

https://devblogs.microsoft.com/directx/gears-tactics-vrs/

And even more tech details, which is a bit more than just a promise that it's awesome.
 
One of the technologies offered in the FidelityFX suite is FidelityFX Variable Shading, which uses VRS.

(VRS is just a feature of the hardware; actually using it requires a developer-side implementation of a variable shading system.)
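I don't know AMD's exact heuristic, but the commonly described content-adaptive approach is: measure how much detail each tile actually has, and only shade flat tiles coarsely. A toy version of that idea (my own tile size, threshold, and contrast metric, not AMD's algorithm):

```python
# Toy content-adaptive shading-rate image: tiles with little luminance
# variation get a coarse 2x2 rate, busy tiles keep full rate.

def rate_image(luma, w, h, tile=8, threshold=0.05):
    """luma: row-major list of floats in [0,1]. Returns per-tile rates."""
    rates = []
    for ty in range(0, h, tile):
        row = []
        for tx in range(0, w, tile):
            px = [luma[y * w + x]
                  for y in range(ty, ty + tile)
                  for x in range(tx, tx + tile)]
            contrast = max(px) - min(px)
            # low contrast -> one shade per 2x2 block, else one per pixel
            row.append(2 if contrast < threshold else 1)
        rates.append(row)
    return rates

# A 16x8 frame: flat left half, noisy right half.
w, h = 16, 8
luma = [0.5 if x < 8 else (x * y % 7) / 7 for y in range(h) for x in range(w)]
print(rate_image(luma, w, h))   # [[2, 1]] -> flat tile coarse, busy tile full
```

The point of the developer-side work is exactly this step: deciding, per tile and per pass, where coarse shading is safe.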
 
I’m pondering why VRS which offers so little in performance has garnered hardware support from AMD, Nvidia and Intel.

Hhmmmm. What an awful waste of silicon.

How can these companies be so easily duped while being so ignorant of what better geometry culling offers?

SMH
 
I’m pondering why VRS which offers so little in performance has garnered hardware support from AMD, Nvidia and Intel.

Hhmmmm. What an awful waste of silicon.
Yes, companies never make multi-million-dollar stupid decisions [cough]Intel's CPUs from the last 5 years, Kinect, GeForce FX, 3D TVs[/cough].
But seriously, I understand the concept if it's what I think it is, and I agree it can be good in theory, especially for VR if you can see where the person is looking. So I googled it and saw this image on Nvidia's explainer page:
[image: VRSS-Tech-Text-BONEWORKS-final.jpg]

Surely this is backwards? You typically want the important stuff in the middle of the screen at the highest quality, thus the blue should be the normal shading rate and the lower quality (i.e. faster shading) should be on the outside.
Maybe I don't understand it? Either that or Nvidia f-ed up their explainer image.
https://developer.nvidia.com/vrworks/graphics/variablerateshading

Also, a 0-14% improvement in framerate seems very meagre? I would have guessed 50% minimum, but I see from that MS page that it can easily look bad, so you want to minimize its use.
For eye-tracking VR though, I can see its benefits being massive.
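For what it's worth, the eye-tracked version of the idea is just a rate map centred on wherever the user is looking. A toy sketch (the gaze point, radii, and rate tiers here are all made up for illustration):

```python
# Toy foveated shading-rate map: full rate near the gaze point, coarser rates
# with distance. Coordinates are in tiles, not pixels.
import math

def foveated_rates(tiles_x, tiles_y, gaze_x, gaze_y):
    rates = []
    for ty in range(tiles_y):
        row = []
        for tx in range(tiles_x):
            d = math.hypot(tx - gaze_x, ty - gaze_y)
            if d < 2:
                row.append(1)   # fovea: one shade per pixel
            elif d < 4:
                row.append(2)   # mid: one shade per 2x2 block
            else:
                row.append(4)   # periphery: one per 4x4 block
        rates.append(row)
    return rates

for row in foveated_rates(12, 6, 6, 3):
    print(" ".join(str(r) for r in row))
```

With eye tracking the "1" region follows the gaze every frame; without it (as on a flat panel) you'd either centre it statically or drive the map from content instead.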
 
For eye-tracking VR though, I can see its benefits being massive.
But does eye-tracking VR exist? I know Sony has some patents, but is there a real product with eye tracking yet? Edit: there is the HTC Vive Pro Eye; I will check its reviews.
 
[image: VRSS-Tech-Text-BONEWORKS-final.jpg]


Surely this is backwards? You typically want the important stuff in the middle of the screen at the highest quality, thus the blue should be the normal shading rate and the lower quality (i.e. faster shading) should be on the outside.
Maybe I don't understand it? Either that or Nvidia f-ed up their explainer image.
https://developer.nvidia.com/vrworks/graphics/variablerateshading

The picture on its own doesn't make it particularly clear, but that's demonstrating VRS-based supersampling. As well as being used to reduce shading rates (so saving on shading work), VRS can be used to increase shading rates above 1:1, effectively supersampling the selected tile. So in the case of the above image, you could use very highly focused supersampling on the area the user was directly looking at, and run at a standard rate elsewhere.

I imagine this could be especially good for lower resolution headsets, or for materials that are experiencing a high degree of shader aliasing. Edit: there are probably other good uses for it too (but I'm low on imagination at the moment...)
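To make the ">1:1" direction concrete, here's a toy invocation count per tile; using a rate below 1 to mean supersampling is my own convention for this sketch, not how any real API encodes it:

```python
# Shader invocations per 8x8 tile at various rates. rate=2 means one shade per
# 2x2 block (coarse); rate=0.5 means 2x2 samples per pixel (supersampling).

def invocations_per_tile(tile, rate):
    return int((tile / rate) ** 2)

print(invocations_per_tile(8, 2))     # coarse 2x2 shading: 16
print(invocations_per_tile(8, 1))     # standard 1:1: 64
print(invocations_per_tile(8, 0.5))   # 4x supersampling: 256
```

So the same mechanism spans a 16x range of cost per tile, which is why pointing the expensive end only at the gaze region makes sense for VR.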
 