Current Generation Hardware Speculation with a Technical Spin [post launch 2021] [XBSX, PS5]

If VRS brings a 5-10 fps increase that can simply be clawed back by outputting slightly fewer pixels, or by using software VRS (a la MW), then it makes total sense why Sony would not implement it.
What it really comes down to is how much performance you get per unit of die area. If VRS costs only 1% more die area for 10% more performance, I would imagine that would be a fair trade-off. But that performance gain has to be measured against the software solution, not against having the feature turned off.
 
If VRS brings a 5-10 fps increase that can simply be clawed back by outputting slightly fewer pixels, or by using software VRS (a la MW), then it makes total sense why Sony would not implement it.

It does, but that applies equally to the argument for stripping out the VRS hardware and to the argument for not spending the resources to expose it in the dev kits.

What it really comes down to is how much performance you get per unit of die area. If VRS costs only 1% more die area for 10% more performance, I would imagine that would be a fair trade-off. But that performance gain has to be measured against the software solution, not against having the feature turned off.

True. The capacity for it is present in some form, as it's already supported in software for pseudo-foveated rendering in PSVR games. Maybe VRS hardware is present and can be used for foveated rendering. Maybe it doesn't lend itself as well to foveated rendering, and has therefore been removed in favour of a software solution that works equally well for VR and non-VR. Maybe they have their own equivalent of hardware VRS but it's only a real priority for the release of PSVR2, so exposing it will only become a pressing matter closer to the launch of that hardware.

¯\_(ツ)_/¯
 
Really excited about this. I'm a fan of this game and 4A. That 90% compute number (wow!) is relevant to our discussion a day or two ago; I hope it makes the "renderers trend towards compute" prediction sound a little more concrete.

Compute seems to be becoming an increasingly common reason for games to be considered 'unoptimized' nowadays. Recently, I've noticed that RDR2 and HZD both show this symptom: dropping the resolution as low as 640x360 still leaves the GPU reporting 100% utilization, with little to no improvement over higher resolutions.

I haven't gotten RDR2 working in RenderDoc (though I suspect the water physics is the main cause), but HZD is definitely using compute for a number of effects, including Aloy's hair rendering. Coupled with the increasingly common use of GPU culling via compute shaders, I wonder if compute isn't becoming a bigger bottleneck than many people suspect.
 
Compute seems to be becoming an increasingly common reason for games to be considered 'unoptimized' nowadays. Recently, I've noticed that RDR2 and HZD both show this symptom: dropping the resolution as low as 640x360 still leaves the GPU reporting 100% utilization, with little to no improvement over higher resolutions.

I haven't gotten RDR2 working in RenderDoc (though I suspect the water physics is the main cause), but HZD is definitely using compute for a number of effects, including Aloy's hair rendering. Coupled with the increasingly common use of GPU culling via compute shaders, I wonder if compute isn't becoming a bigger bottleneck than many people suspect.
Doesn't Horizon use GPU compute to calculate placement of foliage, rocks, and tons of other detail objects? A workload like that would be resolution independent.
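That resolution independence is the key point: a placement or culling pass runs once per object or instance, not once per pixel, so dropping the output resolution doesn't make it any cheaper. Here is a rough CPU-side sketch of the kind of work a GPU culling pass does (the structure is hypothetical, not taken from either game), just to show where the cost actually scales:

```cpp
// Toy illustration (not Guerrilla's or Rockstar's code): frustum culling over an
// instance list. The loop count depends on the number of instances and the camera,
// not on the output resolution - which is why dropping to 640x360 doesn't help.
#include <cstddef>
#include <vector>

struct Sphere { float x, y, z, radius; };   // instance bounding sphere
struct Plane  { float nx, ny, nz, d; };     // plane: n.p + d = 0, normal facing inward

bool insideFrustum(const Sphere& s, const Plane (&frustum)[6])
{
    for (const Plane& p : frustum) {
        float dist = p.nx * s.x + p.ny * s.y + p.nz * s.z + p.d;
        if (dist < -s.radius)
            return false;                   // entirely behind one plane -> culled
    }
    return true;
}

// On a GPU this would be one compute thread per instance appending survivors to a
// visible-instance buffer; the dispatch size scales with instances.size().
std::vector<std::size_t> cullInstances(const std::vector<Sphere>& instances,
                                       const Plane (&frustum)[6])
{
    std::vector<std::size_t> visible;
    for (std::size_t i = 0; i < instances.size(); ++i)
        if (insideFrustum(instances[i], frustum))
            visible.push_back(i);
    return visible;
}
```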
 
If VRS brings a 5-10 fps increase that can simply be clawed back by outputting slightly fewer pixels, or by using software VRS (a la MW), then it makes total sense why Sony would not implement it.

It might be required for MS, along with Mesh Shaders and SFS, just so they can unify DX12U across their whole family of devices, but for Sony? I think they would rather have the chip and dev kits ready well in advance and spend that budget somewhere else.

IMO good decisions by Sony, one after the other (exploiting max clock rates, great I/O/SSD, joystick, etc.).
Yes, according to their patent Sony seems to have another strategy for VRS (particularly for VR purposes). Instead of reducing the resolution of the (ideally) less visible parts of the image, they want to increase the resolution of the most visible parts and render only the visible polygons (thanks to their geometry engine).

MS's patented VRS is not a win-win solution like most reconstruction techniques try to be. MS's solution is win-lose: they get more frames, but each frame has a perceptibly lower resolution, all the time.
 
MS's patented VRS is not a win-win solution like most reconstruction techniques try to be. MS's solution is win-lose: they get more frames, but each frame has a perceptibly lower resolution, all the time.
This is not true - the point of VRS is to use a subjectively defined texture map to say which areas should have their shading rate reduced. It is fully up to the developer to decide how much, to what degree, and what "area" of a frame constitutes a threshold of sameness, so that the shading reduction is perceptually invisible. It can be completely impossible to see if the colour across multiple pixels is already similar - or the content is moving fast enough in-camera not to be visible, or is far enough away or behind depth of field so as not to be visible anyway.

The point of VRS is to exploit perceptual similarity to increase performance. It is not about degrading image quality - it only becomes that if a developer chooses to let it. It is not so by design.
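For reference, the "texture map" described above is what D3D12 calls a screen-space shading rate image on Tier 2 VRS hardware: a small R8_UINT texture with one texel per screen tile (the tile size is reported via D3D12_FEATURE_DATA_D3D12_OPTIONS6), where each texel holds the shading rate the developer wants for that tile. A minimal sketch of binding one, assuming the command list and an already-filled rate image exist:

```cpp
// Minimal Tier 2 VRS sketch (D3D12). Assumes `cmdList` and `rateImage` already exist
// and that `rateImage` was filled elsewhere (e.g. by a compute pass) with
// D3D12_SHADING_RATE values, one per screen tile.
#include <d3d12.h>

void enableVrs(ID3D12GraphicsCommandList5* cmdList, ID3D12Resource* rateImage)
{
    // Base per-draw rate stays 1x1; the screen-space image coarsens it only where
    // the developer decided the content is "similar enough".
    D3D12_SHADING_RATE_COMBINER combiners[2] = {
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH, // combine per-draw with per-primitive rate
        D3D12_SHADING_RATE_COMBINER_MAX          // then with the image; coarser rate wins
    };
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, combiners);
    cmdList->RSSetShadingRateImage(rateImage);
    // ...draws issued now shade at e.g. one invocation per 2x2 quad under 2x2 tiles...
}

void disableVrs(ID3D12GraphicsCommandList5* cmdList)
{
    // Restore full rate for passes that must stay sharp (UI, text, etc.).
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);
    cmdList->RSSetShadingRateImage(nullptr);
}
```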
 
Yes, according to their patent Sony seems to have another strategy for VRS (particularly for VR purposes). Instead of reducing the resolution of the (ideally) less visible parts of the image, they want to increase the resolution of the most visible parts and render only the visible polygons (thanks to their geometry engine).

MS's patented VRS is not a win-win solution like most reconstruction techniques try to be. MS's solution is win-lose: they get more frames, but each frame has a perceptibly lower resolution, all the time.
Like @Dictator already wrote, VRS has nothing to do with reducing frame resolution - just the shading resolution, for areas where a reduced rate makes no difference. But it is still up to the developer how to use it.
The first chips to support VRS were Turing GPUs, so it isn't even a really new feature.

If VRS is implemented correctly, you should not see a visible difference, because the area has the same color all over (e.g. the sky), is barely visible (darkness), or is heavily blurred by some post-processing.
Those are very common situations, and culling does not help there, because the details might still be technically visible - culling does not account for things being hidden by darkness or by blur.

In the end, all the new techniques are more or less there to get things done more efficiently.
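As a rough illustration of how those "same colour all over" areas could be picked (a generic sketch, not any particular engine's method), here is a CPU-side version of the tile analysis; a real title would do this in a compute shader, usually against the previous frame's output:

```cpp
// Illustrative only: build a per-tile shading-rate map from luminance contrast.
// Flat tiles (sky, deep shadow, heavily blurred regions) are marked for coarse 2x2
// shading; detailed tiles stay at full 1x1 rate. Tile size and threshold are made up.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

enum class Rate : std::uint8_t { Full1x1, Coarse2x2 };

std::vector<Rate> buildRateMap(const std::vector<float>& luma,   // per-pixel luminance, 0..1
                               int width, int height,
                               int tile = 16, float threshold = 0.02f)
{
    const int tilesX = (width + tile - 1) / tile;
    const int tilesY = (height + tile - 1) / tile;
    std::vector<Rate> rates(static_cast<std::size_t>(tilesX) * tilesY, Rate::Full1x1);

    for (int ty = 0; ty < tilesY; ++ty)
        for (int tx = 0; tx < tilesX; ++tx) {
            float lo = 1.0f, hi = 0.0f;
            for (int y = ty * tile; y < std::min((ty + 1) * tile, height); ++y)
                for (int x = tx * tile; x < std::min((tx + 1) * tile, width); ++x) {
                    const float l = luma[static_cast<std::size_t>(y) * width + x];
                    lo = std::min(lo, l);
                    hi = std::max(hi, l);
                }
            // Low contrast within the tile -> halving the shading rate is hard to notice.
            if (hi - lo < threshold)
                rates[static_cast<std::size_t>(ty) * tilesX + tx] = Rate::Coarse2x2;
        }
    return rates;
}
```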
 
I think the best overview of the effectiveness of VRS is this blog on the implementation in Gears 5. It talks about performance gains but also potential issues. At the end they even mention software-based approaches and the possibility of combining hardware and software VRS.

I think a technique that gives up to a 14% performance gain (with no noticeable impact on visual appearance) is certainly worthwhile.
 
Yes, according to their patent Sony seems to have another strategy for VRS (particularly for VR purposes). Instead of reducing the resolution of the (ideally) less visible parts of the image, they want to increase the resolution of the most visible parts and render only the visible polygons (thanks to their geometry engine).

MS's patented VRS is not a win-win solution like most reconstruction techniques try to be. MS's solution is win-lose: they get more frames, but each frame has a perceptibly lower resolution, all the time.

Pfft. That's just semantics. What's the difference between making the targets in the foreground, or your focus, better, and making the objects in the background or periphery worse?

Absolutely none. Other than to spin one positively and the other negatively.

You end up with varying levels of quality either way.
 
Typically, with semi-custom solutions you can pick and choose which hardware blocks you'd like on your chip, or remove them.
In this particular case, however, AMD doesn't have a working fixed-function tensor-processing design (or patent), IIRC, so it's not something they can just bring in.
Actually AMD does have those on the CDNA side of the fence, called Matrix Core Engines. Not sure whether they could be bolted onto RDNA though.
 
Like @Dictator already wrote, VRS has nothing to do with reducing frame resolution - just the shading resolution, for areas where a reduced rate makes no difference. But it is still up to the developer how to use it.
The first chips to support VRS were Turing GPUs, so it isn't even a really new feature.

If VRS is implemented correctly, you should not see a visible difference, because the area has the same color all over (e.g. the sky), is barely visible (darkness), or is heavily blurred by some post-processing.
Those are very common situations, and culling does not help there, because the details might still be technically visible - culling does not account for things being hidden by darkness or by blur.

In the end, all the new techniques are more or less there to get things done more efficiently.
Maybe, if used very sparingly as in those ideal cases. But in some cases, like Halo Infinite (software VRS) or Dirt 5, the overall vaseline effect is too strong (IMO). VRS in those cases reminds me of the first aggressive implementations of FXAA (or even Quincunx on PS3). Back then many people were saying FXAA was such great tech, a new industry standard, and some were even saying we had to use it at all costs, even though it was obviously destroying the sharpness of the final image. Back then I was already saying it had too many negative effects and should be used differently, very sparingly, or not at all.

About hardware VRS vs the software VRS used by Activision: according to them, software VRS is better overall because hardware VRS has too many side effects.

As opposed to hardware-based solutions which are restricted to a subset of available devices, its software-based implementation makes it possible to achieve higher quality and performance on a wide range of consumer hardware.

https://research.activision.com/pub...able-rate-shading-in-call-of-duty--modern-war
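For what it's worth, the broad idea behind any software approach (heavily simplified here, and not the actual Call of Duty pipeline described in that paper) is to do the tile classification and the coarse shading yourself: where a tile is flagged as low-detail, evaluate the expensive shading once per coarse block and broadcast the result.

```cpp
// Heavily simplified software-VRS idea (illustrative, not Activision's implementation):
// inside a tile flagged as coarse, run the expensive shading function once per 2x2
// block and replicate the result to the whole block. Assumes the tile lies fully
// inside the target.
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

using ShadeFn = std::function<std::uint32_t(int x, int y)>;  // expensive per-pixel shading

void shadeTile(std::vector<std::uint32_t>& target, int width,
               int x0, int y0, int tile, bool coarse, const ShadeFn& shade)
{
    const int step = coarse ? 2 : 1;                 // 2x2 blocks when coarse
    for (int y = y0; y < y0 + tile; y += step)
        for (int x = x0; x < x0 + tile; x += step) {
            const std::uint32_t c = shade(x, y);     // one evaluation per block
            for (int dy = 0; dy < step; ++dy)
                for (int dx = 0; dx < step; ++dx)
                    target[static_cast<std::size_t>(y + dy) * width + (x + dx)] = c;
        }
}
```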
 
Maybe, if used very sparingly as in those ideal cases. But in some cases, like Halo Infinite (software VRS) or Dirt 5, the overall vaseline effect is too strong (IMO). VRS in those cases reminds me of the first aggressive implementations of FXAA (or even Quincunx on PS3). Back then many people were saying FXAA was such great tech, a new industry standard, and some were even saying we had to use it at all costs, even though it was obviously destroying the sharpness of the final image. Back then I was already saying it had too many negative effects and should be used differently, very sparingly, or not at all.

About hardware VRS vs the software VRS used by Activision: according to them, software VRS is better overall because hardware VRS has too many side effects.



https://research.activision.com/pub...able-rate-shading-in-call-of-duty--modern-war

Software VRS is hardware-agnostic... and that's it in terms of advantages over a hardware implementation.
 
Does VRS benefit from higher framerates? Can it, for example, shade an area at higher precision one frame, lower precision the next, rinse and repeat? That's assuming a texture that's not as uniform as the sky, but not as varied as a face - maybe something like a few square feet of pebbles.

I'm just wondering if in that sort of scenario, some artefacting would be perceptible at 30fps, less so at 60, and then barely at all at 120.
 
Software VRS is hardware-agnostic... and that's it in terms of advantages over a hardware implementation.
Not necessarily true.
It depends on the hardware implementation - for example, the grid sizes available.
Hardware has many benefits though - potentially performance, ease of implementation, etc. That alone could easily make it worth the silicon.

Also, hardware VRS is seen as an area of research for MS, a bit like the way MSAA hardware was leveraged for other uses, e.g. checkerboarding.
 
Actually AMD does have those on the CDNA side of the fence, called Matrix Core Engines. Not sure whether they could be bolted onto RDNA though.
Nice, an alternative for me! Hopefully they get some solid library support and I don't actually have to buy Nvidia.
 
Nice, an alternative for me! Hopefully they get some solid library support and I don't actually have to buy Nvidia.

On RDNA, we have lower precision dot product instructions. On CDNA, we have matrix core engines. Both approaches are useful for accelerating machine learning ...
 
Does VRS benefit from higher framerates? Can it, for example, shade an area at higher precision one frame, lower precision the next, rinse and repeat? That's assuming a texture that's not as uniform as the sky, but not as varied as a face - maybe something like a few square feet of pebbles.

I'm just wondering if in that sort of scenario, some artefacting would be perceptible at 30fps, less so at 60, and then barely at all at 120.
I would worry it would come across as texture shimmering everywhere - think of TXAA, but now applied to whole surfaces instead of just edges. Not ideal.
VRS is just a tool with which you can variably change the shading rate.
By design it saves performance because it spreads one calculation over more pixels, but that doesn't mean its purpose is necessarily only to save frame rate.
There are a great many optical techniques, like depth of field etc., that are costly to compute and can be estimated fairly well using something like VRS.

I would say that VRS has a larger and less perceptible impact as the resolution gets higher: the smaller the pixels become, the more likely neighbouring pixels are to be the same colour anyway, since they represent the same things. So save some calculations and spread the result. Once you get into extremely high-fidelity areas, you can still have VRS ignore them and target the areas further back.

Not directed at you:
As developers get a better handle on using VRS, I can see it being applied to scenarios where they typically get poor performance from an algorithm and VRS is a good fit for estimating that effect at a significantly reduced cost.

There is nothing wrong with VRS, just like there isn't anything wrong with the multitude of anti-aliasing techniques. If people only accepted the absolute best quality of AA, we would never have moved on from supersampling. Clearly there is more than enough appetite for techniques that compromise some image quality and claw back significant performance, depending on the job you require them to do.

And not all games support dynamic resolution scaling, so that should be kept in mind. If you're posturing for PS and having nightmares about Hitman 3 being down 44% of the pixels: with VRS it's possible they could have run at 4K. Just an idea to throw out there before people dismiss VRS.

As for hardware vs software VRS: the GE and VRS both sit in the 3D pipeline, so that's something to take note of - it means you'll ultimately still be going through the rasterization step. These are nice customizations, but developers who have both the talent and the resources to roll entirely compute-based solutions will skip over this in favour of their own custom compute methods.
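To put some illustrative numbers on the "spread one calculation over more pixels" point and the DRS comparison (made-up proportions, not measurements from Hitman 3 or any other game):

```cpp
// Back-of-the-envelope pixel-shader invocation counts (illustrative numbers only).
#include <cstdio>

int main()
{
    const double native4K = 3840.0 * 2160.0;                    // ~8.29M invocations

    // Dynamic resolution at ~75% per axis renders fewer pixels everywhere, then upscales:
    const double drs = (3840.0 * 0.75) * (2160.0 * 0.75);       // ~4.67M, ~56% of native

    // VRS at native 4K with half the screen flagged for 2x2 coarse shading:
    const double vrs = native4K * 0.5 + (native4K * 0.5) / 4.0; // ~5.18M, ~62.5% of native

    std::printf("native: %.2fM  drs: %.2fM  vrs: %.2fM\n",
                native4K / 1e6, drs / 1e6, vrs / 1e6);
    return 0;
}
```

The shading savings end up in the same ballpark, but VRS keeps full-resolution geometry and edges while DRS reduces everything; which trade-off looks better depends on the content.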
 