Sony PlayStation VR2 (PSVR2)

The whole point of foveated rendering is that it's not running at a high resolution (but you don't notice). If indeed the settings are the same, and the image quality is indistinguishable between the two (two big ifs), then this is a testament to foveated rendering and doesn't really say anything specific about the PS5's performance.
Yeah, but it's like comparing an RTX 2060 Ti using DLSS to consoles as a performance comparison. Not the best comparison, but it's still a unique feature of some hardware that isn't on others.
 
Yeah, but it's like comparing an RTX 2060 Ti using DLSS to consoles as a performance comparison.

I agree the tech is awesome and offers huge potential for increasing VR performance.

Not the best comparison, but it's still a unique feature of some hardware that isn't on others.

Not so, PSVR2 is late to the party in this regard. What it will do, though, is bring eye-tracked foveated rendering to the masses, which will be a great thing for everyone. Here are some PC headsets that support eye tracking:

HP Reverb G2 Omnicept
Meta Quest Pro
Varjo Aero, VR-3 and XR-3
Vive Pro Eye

Nvidia even have a driver-level feature to automatically implement eye-tracked foveated rendering in any DX11 forward-rendered game that uses MSAA (i.e. a lot of VR games); no specific game support is required. So you can do this on PC right now.
 
I know there are already other eye tracking solutions (though a "little" more expensive); my comment was versus headsets without this feature.
 
It will be very interesting to see how coarse the resolution at the edges can get.
If it is something like 4x4 pixels or larger, it may save quite a bit of performance (there's a rough estimate sketched after this post).
Well, the actual pixel resolution can be way lower. You probably can't discern 16x16 chunks in the mid periphery. The issue then is detail loss, not in clarity but in temporal artefacting. A thin highlight that isn't sampled from one frame to the next will flicker, whereas in real life it's consistently present even if blurred. If rendering can find a way to smooth/sample that sort of data without rendering full pixels (??), there are insane savings to be made.
I wonder if it's just the main pass, or if post-processing and other passes can easily be handled at low resolution too.
All passes should be processable at different resolutions for them to work with the eyes. The issue is whether the GPU and rendering engines can adapt. What's an unknown for PS5 is the variable render targets, whatever that tech is officially called, and how they fit into the render pipeline and execute. Do they operate automagically, or do they need integrating with various caveats?
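To put rough numbers on the 4x4 / 16x16 question, here is a minimal back-of-the-envelope sketch. Everything in it is an assumption for illustration: a circular full-resolution fovea of 20° inside a square 100° field of view, a periphery rendered as chunk x chunk blocks, and the usual caveat that saved pixels rarely convert one-to-one into saved frame time.

```python
import math

def foveated_pixel_fraction(fovea_deg=20.0, fov_deg=100.0, chunk=16):
    # Fraction of full-resolution pixel work left after foveation, assuming
    # a circular full-res foveal region over a square FOV (illustrative
    # numbers, not PSVR2 figures) and a periphery rendered as chunk x chunk
    # blocks, i.e. at 1/chunk^2 pixel density.
    fovea_area = math.pi * (fovea_deg / 2) ** 2
    total_area = fov_deg ** 2
    periphery = total_area - fovea_area
    return (fovea_area + periphery / chunk ** 2) / total_area

for chunk in (2, 4, 16):
    frac = foveated_pixel_fraction(chunk=chunk)
    print(f"{chunk}x{chunk} periphery: {frac:.1%} of the pixels, "
          f"~{1 / frac:.1f}x on paper")
```

Those on-paper factors are why 10x hopes circulated; the Carmack quotes later in the thread explain why real systems land far lower.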
 
Yup, depending on the method used there should be a nice boost possible.

Eyes are highly sensitive to temporal variations in the outer areas of vision, so proper filtering would be nice.

Though fast tweaks, like a positive mipmap bias, should help within surfaces (see the sketch after this post).

It certainly will be very interesting to hear how the variable render targets work.

If they can be mostly transparent to developers during the rendering, shading and post-processing passes, I'll be very impressed.
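A minimal sketch of the flicker problem from a couple of posts up, and of why an area-averaged (mip-biased) sample helps. The grid size, feature width and frame count are arbitrary illustration values: with coarse point sampling a sub-pixel highlight pops in and out as it drifts, while the filtered sample is dimmer but temporally stable.

```python
import numpy as np

positions = np.linspace(0.0, 8.0, 60)   # a highlight drifting over 60 frames
feature_width = 0.5                      # thin highlight, half a pixel wide
step = 4                                 # periphery sampled every 4th pixel
samples = np.arange(0, 64, step)         # coarse sample positions on one row

def point_sample(pos):
    # Coarse point sampling: the highlight is only seen when a sample
    # happens to land on it, so it appears and disappears frame to frame.
    return float(np.any(np.abs(samples - pos) < feature_width / 2))

def area_sample(pos):
    # Area-averaged (mip-biased) sampling: the sample whose step-wide
    # footprint contains the highlight integrates it, so the highlight is
    # dimmer (feature_width / step) but present every frame.
    owner = samples[np.argmin(np.abs(samples - pos))]
    return feature_width / step if abs(owner - pos) <= step / 2 else 0.0

point = [point_sample(p) for p in positions]
area = [area_sample(p) for p in positions]
print("point sampling: lit on", int(sum(point)), "of", len(point), "frames")
print("area sampling: steady at", min(area), "to", max(area))
```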
 
Can anyone recall what the tech is named? My Google-Fu is weak. I know it's in this thread if I decide to go looking! ;)
 
The eye tracking and foveated rendering?

Does it have a tech name?

In Unreal Engine 5.1 it didn't have a tech name, just OpenXR eye tracking and so on.
 
The adaptive rendering resolution customisation in the GPU. I only recall it being mentioned in a tweet in passing.
 

Interestingly, this patent deals with luminance and chrominance separately.


This is only the theoretical patent and not the execution. I don't know what can be done in hardware, particularly as regards YUV colour-space rendering to different targets, where GPUs and engines work with RGB data and render targets.
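For intuition on why splitting the two might pay off, here is a minimal sketch of the video-style analogy (chroma subsampling), not the patent's actual method: luma kept at full resolution, chroma stored at 1/4 resolution in each axis, using a standard BT.601-style conversion.

```python
import numpy as np

def rgb_to_yuv(rgb):
    # Standard BT.601-style luma/chroma split.
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.147, -0.289,  0.436],
                  [ 0.615, -0.515, -0.100]])
    return rgb @ m.T

def yuv_to_rgb(yuv):
    m = np.array([[1.0,  0.000,  1.140],
                  [1.0, -0.395, -0.581],
                  [1.0,  2.032,  0.000]])
    return yuv @ m.T

x = np.linspace(0.0, 1.0, 64)
r, g = np.meshgrid(x, x)
img = np.dstack([r, g, 1.0 - r * g])    # smooth stand-in for a rendered frame

yuv = rgb_to_yuv(img)
# Keep luma (Y) at full resolution; keep chroma (U, V) at 1/4 x 1/4 in each
# axis and upsample it with nearest-neighbour for the comparison.
chroma = yuv[::4, ::4, 1:].repeat(4, axis=0).repeat(4, axis=1)
recon = yuv_to_rgb(np.dstack([yuv[..., 0], chroma]))

print("mean abs error:", np.abs(recon - img).mean())   # small on smooth data
print("chroma samples kept:", 1 / 16)                  # 15/16 of chroma saved
```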
 

There is a Japanese article about a CEDEC 2022 Capcom presentation where they talk about this. Someone who speaks English and Japanese did a translation. It is using the hardware rasterizer; this is directly something the devs said during a presentation about the development of RE8 on PSVR2.
 

The first is the use of the PS5 GPU's special rasterizer function "Flexible Scale Rasterization" (Flexible SR).

A rasterizer is one of the functional blocks found in any GPU; it plays the role of decomposing polygons into pixels. With the PS5's GPU, the rasterizer can decompose polygons into pixels at an intentionally non-uniform density.
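To illustrate the concept being described (and only the concept; the actual PS5 hardware mapping isn't public), here is a toy 1-D warp in which a central band of the screen receives several times the raster pixel density of the edges. All parameters are made up for illustration.

```python
import numpy as np

def build_warp(n=1024, fovea=0.25, density_ratio=4.0):
    # Desired pixel density across the screen: density_ratio times higher in
    # a central band (the gaze region) than at the edges. Illustrative of
    # the idea only -- not Sony's actual hardware mapping.
    x = np.linspace(0.0, 1.0, n)
    density = np.where(np.abs(x - 0.5) < fovea / 2, density_ratio, 1.0)
    cdf = np.cumsum(density)
    return x, cdf / cdf[-1]   # raster coordinate for each screen coordinate

screen_x, raster_u = build_warp()

def screen_to_raster(xq):
    # Where on the uniform raster target a given screen position lands.
    return np.interp(xq, screen_x, raster_u)

# The central 25% of the screen consumes a disproportionate share of the
# raster pixels, which is the point of a non-uniform rasterizer:
share = screen_to_raster(0.625) - screen_to_raster(0.375)
print(f"central 25% of the screen uses {share:.0%} of the raster pixels")
```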
 
Yes, I know there's a Flexible Hardware Rasterizer in the GPU. What we don't know is how it functions and how it relates to this patent. This patent is the only mention I've found thus far, and it seems to cover ground that I doubt makes it into the GPU. Failing any earlier insights, once we get video captures from PSVR2 we can see whether it has separate luminance and chrominance scales. Until then I'm expecting not, and that it'll be flexible RGB scaling.
 
John Carmack has been playing down the performance increase from foveated rendering for a while now. I can't find the video I saw him talking about it in, but I did find some tweets.

"You can't cut down the number of pixels rendered nearly as aggressively as you might think, because several times a second your eyes will dart to a new position, and the latency from movement through eye tracking, through rendering and displaying a new frame shows a lot of blur."

"You can definitely do things with it, but many people got unrealistic hopes of 10x improvements. You won’t even get 2x versus fixed foveation."


I actually think the eye tracking is worth it just for the automatic IPD adjustment and eye-controlled input/navigation anyway, but I'll be curious how much performance it does bring and how useful it is with Fresnel lenses. I always turn my head to get whatever I'm looking at into the lens sweet spot.
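Carmack's first point can be roughed out numerically. Using the same toy model as the savings sketch earlier in the thread, plus illustrative guesses of 50 ms gaze-to-photon latency and a 300°/s saccade, the full-resolution region has to be padded by everything the eye could have moved before the display catches up, which eats most of the theoretical gain:

```python
import math

def pixel_fraction(fovea_deg, fov_deg=100.0, chunk=4):
    # Toy model: full-res disc of diameter fovea_deg over a square FOV,
    # with a 4x4-coarse periphery (all values illustrative guesses).
    area = math.pi * (fovea_deg / 2) ** 2
    total = fov_deg ** 2
    return (area + (total - area) / chunk ** 2) / total

latency_s = 0.050      # eye moves -> tracked -> rendered -> displayed (guess)
saccade_dps = 300.0    # conservative saccade velocity, degrees per second
pad_deg = latency_s * saccade_dps            # 15 degrees of possible gaze error

ideal = pixel_fraction(20.0)                 # fovea exactly where the gaze is
padded = pixel_fraction(20.0 + 2 * pad_deg)  # fovea padded on every side
print(f"ideal: {1 / ideal:.1f}x, latency-padded: {1 / padded:.1f}x")
```

Real systems narrow that margin with gaze prediction and by exploiting saccadic suppression, but the direction of the argument holds.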
 
"You can definitely do things with it, but many people got unrealistic hopes of 10x improvements. You won’t even get 2x versus fixed foveation."
I think it's a matter of expectations. IMO 50% would be a brilliant achievement when you can't see the difference (as whatever you look at is sharp); 10x would be like getting a new generation, and that's quite an expectation ;d
The first benchmarks are quite promising.
 
Carmack talking some sense into people again. He's a legend.
 