Sony PlayStation VR2 (PSVR2)

This, combined with the GT7 PC rumor — here's hoping GT7 comes to PC with VR support.

My impression is they are testing Steam Link.

So basically like how it is on Quest, then.

Currently PSVR2 uses VirtualLink to carry the display signal, but most PCs don't have VirtualLink support.
Or will Sony do a firmware update and make PSVR2 support regular USB-C DisplayPort Alt Mode?
But then you'd alienate people without USB-C.

So yeah, Steam Link compatibility would probably be better for wider support.

EDIT:
What if... a Steam Link app on PS5? So you'd still need a PS5 to play PCVR games with PSVR2.
 
The developer of cyubeVR did an AMA on Reddit, and I asked what the performance gain from dynamic foveated rendering was. He replied:
Very hyuge! Hard to put a number on it, but at least 2x. Without it, cyubeVR would not have worked at all on PSVR2, it's by far the most important feature of the PSVR2!
 
Fabulous, but also well below the theoretical potential of foveated rendering. Why isn't it producing 10x performance improvements?
 
It seems to be more typically in the realm of a 30-50% boost. I guess if you could massively downsample the periphery you could get more, though.
 
It seems to be more typically in the realm of a 30-50% boost. I guess if you could massively downsample the periphery you could get more, though.
And you should be able to. Visual acuity outside the fovea is pants! I wonder what the limiting factor is. Are the transitions too slow? Did scientists underappreciate the human visual system's reconstruction and its need for data beyond simple acuity?
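Quick back-of-envelope on what downsampling the periphery could theoretically buy. The region sizes and scales below are my own illustrative assumptions, not measured figures from any headset:

```python
# Toy estimate of foveated-rendering pixel savings. Assumptions are mine:
# a small foveal region at full resolution, the rest at a reduced linear
# resolution scale. Real gains will be lower, since fixed per-frame costs
# (vertex work, command submission, etc.) don't shrink with pixel count.

def foveated_speedup(fovea_fraction, periphery_scale):
    """Ratio of full-res pixel cost to foveated pixel cost.

    fovea_fraction:  fraction of screen area kept at full resolution
    periphery_scale: linear resolution scale of the rest
                     (0.5 = half res, i.e. 0.25x the pixels)
    """
    foveated_cost = fovea_fraction + (1.0 - fovea_fraction) * periphery_scale ** 2
    return 1.0 / foveated_cost

# Modest: 10% fovea, periphery at half res -> ~3.1x fewer pixels
print(foveated_speedup(0.10, 0.5))
# Aggressive: 5% fovea, periphery at quarter res -> ~9.1x fewer pixels
print(foveated_speedup(0.05, 0.25))
```

So pixel count alone would support big multipliers; shipping figures landing at ~2x or 30-50% suggests the costs that don't scale with resolution are what's left dominating the frame.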
 
And you should be able to. Visual acuity outside the fovea is pants! I wonder what the limiting factor is. Are the transitions too slow? Did scientists underappreciate the human visual system's reconstruction and its need for data beyond simple acuity?

Carmack has talked about it before, and so has Tobii (whose eye tracking is what PSVR2 uses); they're implying it will be a bigger win the higher HMD display resolutions and FOV get.


 
Carmack has talked about it before, and so has Tobii
Not particularly detailed. :( Also nothing anywhere on the PS5's custom variable dynamic resolution thing. It was very apparent in one of the gameplay videos, but also strangely distributed, in that it wasn't an obvious per-area drop in pixel count or quality.
 
Not particularly detailed. :( Also nothing anywhere on the PS5's custom variable dynamic resolution thing. It was very apparent in one of the gameplay videos, but also strangely distributed, in that it wasn't an obvious per-area drop in pixel count or quality.
Yeah, that wasn't the longer clip I thought it was. He talks about how, with the pancake lenses in the Pro having more clarity edge to edge, you notice the chunky pixels in your periphery, and also that the ~50ms latency can be an issue because your eyes can move a lot in that time, apparently. I've played DCS on an Aero HMD with eye-tracked foveated rendering, and with aggressive settings it gave a performance boost from ~45fps off to ~70fps on. I would notice aliasing in the cockpit panels in my periphery. It wasn't mine, so I didn't have a chance to really play around with it. If PSVR2 does get PC support I might be able to mess about with a lot more stuff; a Quest Pro would maybe be a better thing to experiment with because of the lenses, but I'm not interested in hunting for a second-hand one of those.

Would the VRS mentioned in the Tobii article be comparable to the PS5's dynamic res thing?
 
Yeah, that wasn't the longer clip I thought it was. He talks about how, with the pancake lenses in the Pro having more clarity edge to edge, you notice the chunky pixels in your periphery
That contradicts the theory. Perhaps the science misses something? Although a bit of Gaussian blur might be all that's needed? Or is peripheral acuity actually better than science thinks?
Would the VRS mentioned in the Tobii article be comparable to the PS5's dynamic res thing?
No. VRS just reduces the shading quality of pixels that are still drawn; the PS5 approach is fully scaling the pixel count, though by an arbitrary amount as opposed to the various power-of-two sizes normally seen. VRS + scaling would be better than scaling alone, although by the time you're at 1/16th the pixels it isn't going to make a huge difference.
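Rough cost model of the difference, with my own illustrative numbers (not measurements): VRS keeps the full pixel count but shades coarse blocks, while resolution scaling draws fewer actual pixels.

```python
# Relative-cost sketch, normalised so full-quality rendering = 1.0.
# All the fractions and rates here are illustrative assumptions.

def vrs_cost(coarse_fraction, invocations_per_pixel):
    """Shader-invocation cost with VRS: every pixel is still drawn, but
    the coarse region runs fewer shader invocations per pixel
    (4x4 coarse shading -> 1/16 invocations per pixel)."""
    return (1 - coarse_fraction) + coarse_fraction * invocations_per_pixel

def scaling_cost(scaled_fraction, linear_scale):
    """Pixel cost when the region is rendered at fewer actual pixels."""
    return (1 - scaled_fraction) + scaled_fraction * linear_scale ** 2

# 90% of the screen at 4x4 VRS vs 90% at quarter linear resolution:
print(vrs_cost(0.9, 1 / 16))    # 0.15625 -> ~6.4x fewer invocations
print(scaling_cost(0.9, 0.25))  # 0.15625 -> same saving in raw pixels

# Stacking 2x2 VRS on top of the already-scaled periphery only trims the
# small remaining periphery term, so the extra win is modest:
print(0.1 + 0.9 * 0.25 ** 2 * 0.25)  # ~0.114 vs 0.156
```

Which matches the point above: once the periphery is down to 1/16th the pixels, layering VRS on top barely moves the total.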

The most important saving is being able to render most of the screen at a much lower fidelity, but it sounds like the user can still notice chunky pixels. Reiterating: perhaps a bit of blur is all that's needed to make it look more natural?
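Toy illustration of that blur idea in numpy (entirely my own sketch, not how any headset actually does it): nearest-neighbour upsampling of a low-res periphery gives hard block edges, and a cheap separable blur after upsampling softens them.

```python
import numpy as np

def upsample_nearest(img, factor):
    """Nearest-neighbour upsample: repeat each pixel factor x factor times."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def blur_separable(img, kernel):
    """Apply a 1-D kernel along rows, then columns (separable blur)."""
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)
    return out

rng = np.random.default_rng(0)
low = rng.random((8, 8))              # stand-in low-res periphery tile
chunky = upsample_nearest(low, 4)     # 32x32 with hard 4x4 block edges
kernel = np.array([0.25, 0.5, 0.25])  # cheap Gaussian-ish kernel
soft = blur_separable(chunky, kernel) # block edges smoothed out
```

The blur cost is tiny compared to the rendering it replaces, so if block edges are the visible artefact, this kind of cleanup pass seems plausible.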
 
I can understand the shimmering. If that's the problem (and it was pointed out as a potentially serious issue with low-resolution sampling), that would definitely limit what can be done. The solution then becomes somehow 'supersampling' the sparsely sampled points, which needs magic, as the whole point of the lower-resolution rendering is sampling less.

I feel there's an ML-driven solution that could approximate the periphery based on low-frequency sampling over time...
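A minimal sketch of that temporal idea (my own toy, and it cheats by using a static scene; real content moves between frames, which is exactly where the ML "magic" would be needed): sample a sparse random subset of peripheral pixels each frame and accumulate them into a history buffer.

```python
import numpy as np

rng = np.random.default_rng(42)
truth = rng.random((16, 16))   # stand-in for the "real" peripheral image
estimate = np.zeros_like(truth)
alpha = 0.3                    # blend weight for freshly sampled pixels

for frame in range(50):
    # Each frame, only ~10% of peripheral pixels are actually sampled.
    mask = rng.random(truth.shape) < 0.1
    # Exponential moving average pulls sampled pixels toward the truth.
    estimate[mask] = (1 - alpha) * estimate[mask] + alpha * truth[mask]

err = np.abs(estimate - truth).mean()
print(f"mean abs error after 50 sparse frames: {err:.3f}")
```

A learned model would replace the naive moving average and, crucially, would have to handle motion and disocclusion rather than just converging on a fixed image.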
 