What do you prefer for games: Framerates and Resolutions? [2020]

What would you prioritize?


What makes you think it wasn't enough for MS?
I’m just looking at logical reasons why someone would roll their own solution.
Either it’s not present, it’s not compatible, or its performance isn’t adequate.
Why spend resources otherwise? I can’t see another reason to do this.

MS does not have VRS as a whole patented. They have their own patented version of VRS, as do Nvidia and AMD.
 
But did they roll their own solution? In hardware? Or are they using AMD's hardware to implement their solution?
When Sony started going quiet about VRS, it started to make me think that it wasn’t available for either console, so MS had to make its own version for it to work. Otherwise, the only other reason would be performance.

We do have the patent information for this in an older thread. It is different from AMD's:
  • Variable Rate Shading (VRS): Our patented form of VRS empowers developers to more efficiently utilize the full power of the Xbox Series X. Rather than spending GPU cycles uniformly to every single pixel on the screen, they can prioritize individual effects on specific game characters or important environmental objects. This technique results in more stable frame rates and higher resolution, with no impact on the final image quality.
 
I certainly hope it's in the PS5, even if, for whatever reason, it's an inferior version compared to MS's. Combined with foveated rendering - especially if it's based on eye tracking - it'd be incredibly useful for PSVR2 games.
 
So there's more than one form.
Like I understand why each GPU vendor has their own patents for it. They don’t want to pay licensing fees to another. But the case for Sony and MS is that they are licensing everything already.
So why did MS go make its own?
Seems entirely wasteful from a resource perspective, and they will gain nothing from their own patent, each vendor will continue to evolve their own IP.

MS would have to put it into a bunch of MS devices or something. I dunno, it just doesn’t make sense to me, because they don’t make their own GPUs. They are forever going to license a chip and then toss away the vendor's VRS for their own, I suppose? Beyond daft.
 
Could it be as simple as an implementation that's platform agnostic, leaving them with greater flexibility for future GPU vendors?
APIs are meant to solve that issue. You describe some inputs, you describe how it should behave, and you describe what the outputs and options are.
It shouldn’t matter if they move between vendors; all VRS should work the same as long as it’s the same DX12 command calling it. How it performs is a different story.
 
Like I understand why each GPU vendor has their own patents for it. They don’t want to pay licensing fees to another. But the case for Sony and MS is that they are licensing everything already.
So why did MS go make its own?
I'm not suggesting anything about MS's solution. Only that their patent doesn't stop AMD having VRS, and if VRS is in RDNA2, surely it's in PS5? The argument, "PS5 can't have VRS because MS have patented it," doesn't hold if the IHVs all have their own versions.
 
I'm not suggesting anything about MS's solution. Only that their patent doesn't stop AMD having VRS, and if VRS is in RDNA2, surely it's in PS5? The argument, "PS5 can't have VRS because MS have patented it," doesn't hold if the IHVs all have their own versions.
Yea nothing MS could do would stop Sony from having VRS. They license directly from AMD and they are cleared to make their own as well.

So then the only remaining ideas are:
Sony does have VRS, and MS wanted their own custom solution for it.
Or
Both Sony and MS did not have access to VRS for their SoCs; MS was forced to make its own to obtain the feature for whatever reason.

Both are viable possibilities; I'm just not sure which one is right.
 
You need both VRS and eye tracking to get foveated rendering IIRC.
You don't need VRS. You can just render partial screens at lower res and composite. In fact, that's better than VRS as VRS doesn't change resolution and would be rendering far more pixels than necessary. What you really want is variable rate resolution across the display buffer. VRS can be added on top to simplify the out-of-focus shading.
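As a rough back-of-envelope for why compositing a low-res periphery saves so many pixels compared to shading everything at full resolution (all the numbers below are invented for illustration, not measurements of any real headset):

```python
# Rough pixel-count comparison: uniform full-res rendering vs. a foveated
# composite (full-res inset around the gaze point + quarter-res periphery).
# All figures are illustrative assumptions, not real hardware specs.

FULL_W, FULL_H = 2000, 2040          # per-eye render target
FOVEA_W, FOVEA_H = 600, 600          # full-res region around the gaze point
PERIPHERY_SCALE = 0.25               # periphery rendered at 1/4 res per axis

uniform_pixels = FULL_W * FULL_H

fovea_pixels = FOVEA_W * FOVEA_H
periphery_pixels = int((FULL_W * FULL_H - FOVEA_W * FOVEA_H)
                       * PERIPHERY_SCALE ** 2)
foveated_pixels = fovea_pixels + periphery_pixels

print(f"uniform:  {uniform_pixels:,} pixels shaded")
print(f"foveated: {foveated_pixels:,} pixels shaded "
      f"({foveated_pixels / uniform_pixels:.0%} of uniform)")
```

With these made-up numbers the composite shades roughly 15% of the pixels of the uniform case, which is the point: changing resolution outside the fovea cuts far more work than changing shading rate alone.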
 
You don't need VRS. You can just render partial screens at lower res and composite. In fact, that's better than VRS as VRS doesn't change resolution and would be rendering far more pixels than necessary. What you really want is variable rate resolution across the display buffer. VRS can be added on top to simplify the out-of-focus shading.
But you will need to know where the fovea is pointed in order to render to it, so eye tracking is a given at least.
Still, I wonder if the silence WRT Sony and VRS might have something to do with PSVR2?
 
You need both VRS and eye tracking to get foveated rendering IIRC.

For proper foveated rendering, you certainly need eye tracking. Although a fixed form of it exists on PSVR already: rendering at a higher resolution in the centre of the image, and lower as you move further outward along the radius.

VRS, I think, isn't necessary, but it would be able to benefit from eye tracked foveated rendering, because it can dedicate more shading power to the area with the highest resolution: the area you're looking at.
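A toy sketch of that radius-based falloff, mapping distance from the gaze point to a coarser shading rate. The rate tiers mirror the 1x1/2x2/4x4 coarse rates that VRS hardware typically exposes, but the radius thresholds here are invented purely for illustration:

```python
import math

def shading_rate(px, py, gaze_x, gaze_y):
    """Pick a VRS-style shading rate from a pixel's distance to the gaze
    point, with coordinates in normalized [0, 1] screen units.
    Thresholds are illustrative assumptions, not real hardware values."""
    r = math.hypot(px - gaze_x, py - gaze_y)
    if r < 0.15:
        return "1x1"   # full rate where the eye is pointed
    elif r < 0.40:
        return "2x2"   # one shading result covers 4 pixels
    else:
        return "4x4"   # one shading result covers 16 pixels in the periphery

print(shading_rate(0.5, 0.5, 0.5, 0.5))    # at the gaze centre -> "1x1"
print(shading_rate(0.95, 0.5, 0.5, 0.5))   # far edge of screen -> "4x4"
```

With eye tracking, `gaze_x`/`gaze_y` follow the user; the fixed PSVR-style version just pins them to the lens centre.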
 
VRS, I think, isn't necessary, but it would be able to benefit from eye tracked foveated rendering, because it can dedicate more shading power to the area with the highest resolution: the area you're looking at.
Though true, the very low resolution you can use outside the foveated area is already very lean. If you are spending 90% of your GPU power on the foveated area, VRS nets you a ...20% gain on the 10% outside, or a small percentage in total. Effective eye tracking is realistically the holy grail of efficient graphics rendering.
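Spelling out that arithmetic (the 90/10 split and the 20% figure are the hypothetical numbers from the post, not measurements):

```python
# Hypothetical split: 90% of GPU time goes to the foveated region,
# 10% to the periphery, and VRS shaves ~20% off the periphery's cost.
fovea_share = 0.90
periphery_share = 0.10
vrs_saving_on_periphery = 0.20

# VRS only helps the periphery, so the overall saving is tiny.
total_saving = periphery_share * vrs_saving_on_periphery
print(f"overall frame-time saving from VRS: {total_saving:.0%}")  # 2%
```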
 
Though true, the very low resolution you can use outside the foveated area is already very lean. If you are spending 90% of your GPU power on the foveated area, VRS nets you a ...20% gain on the 10% outside, or a small percentage in total. Effective eye tracking is realistically the holy grail of efficient graphics rendering.

Good point, I didn't think of it that way.

Are we at all near a point where eye tracking is feasible for TV viewing distances? I know there are PC solutions already, such as Tobii, but it seems to be limited to 26" screens and a 35" distance.

Can that be scaled up and still exist as a little monitor-topping attachment? Or does size and distance lead to a point where something more like Google Glass is more practical? E.g. a visor which tracks your eyes, while a TV-topping camera tracks the visor.
 
I recall someone having some patents for something camera-ish. The main limitation so far, AFAIK, has been speed of tracking - it's just been too slow. There are hints it may be present in PSVR2. Which, conceptually, could provide an option that isn't VR but a virtual 2D screen, with eye tracking for optimised graphics. Something like that could concentrate quality on a specific area and give a massive proportional power boost, but it'd be a weird implementation choice for any game. Short of Sony doing that in a 1st party title just as a showcase, I can't see it becoming standard practice until a large part of the gaming populace has eye-tracking headsets. And then devs will have to scale their game from integrated graphics through console and PC gamers to foveated rendering. Yay.
 
I feel that it highly depends on the game.
If it is fast-paced/requires a lot of accuracy/reaction-time based, then faster framerate is usually better.
But in other cases, the visual presentation may do more to make the experience positive than the framerate.
I often feel that if the game I play uses a mouse as input, the framerate automatically becomes 10000 times more important.
 