Digital Foundry Article Technical Discussion [2020]

nVidia's tessellation advantage was mitigated by AMD "optimizing" their drivers to cap the maximum tessellation level to about a quarter of what nVidia uses. There is a case to be made that those optimizations are valid because there is little to no visual difference, but as is often the case in PC games, when you start increasing resolutions beyond the scope of what was available at the time, you start seeing things you may not have seen at a lower resolution. I'm curious what some of those games look like with AMD's altered tessellation on and off at 8K. It's a clever trick, but it's also a false equivalency, because both cards aren't doing the same amount of work.

An example of something similar historically is when 3Dfx had a feature called mipmap dithering. It dithered the transitions between mipmaps when bilinear filtering was enabled, giving you less noticeable steps between mip levels without the performance penalty of trilinear. This was fine for plenty of games when I had an older Voodoo card, but when I upgraded to a Voodoo3, that thing supported resolutions much higher than before, and while things weren't what I would call playable at those resolutions, I did mess around with some games at resolutions that were extreme for its time. I don't remember what the maximum 3D resolution was on the Voodoo3, but I do remember that my monitor could handle it. Might have been 1600x1200, or maybe just 1280x1024, but it was much higher than before, and one of the things I noticed was that the mipmap dithering was much more... identifiable, I think, is the right word. It still looked better than plain bilinear, but also clearly not as good as true trilinear, and there was a telltale pattern where the mip transitions were. At lower resolutions, though, trilinear and dithered bilinear had looked nearly identical.

AMD never forced the tessellation optimization; it was manually selectable in the drivers, and benchmarkers never enabled it for their comparisons.
 
When he was explaining what they would do and how they would lower the resolution of several of the layers because of DOF, and how you wouldn't be able to tell, I said to myself "hmm, VRS would make this a lot easier and more straightforward", only to hear him say towards the end that VRS is one of those features he is looking forward to, as their layering system looks like a poor man's version of VRS lol.
For this case VRS may not scale enough.
Apparently on XSX it only goes to a coarseness of 2x2 pixels (4xMSAA ordered grid).
 
Yeah, that is an interesting thing to think about - RDNA 2 does not offer very coarse rates, topping out at 2x2. Turing and Ampere doing 4x4 is quite interesting for such DOF cases... perhaps even 8x8 could have its uses at times!
Yup.
Although one does have to be quite careful so it stays temporally stable; TAA jitter may cause samples to differ quite a bit between frames, etc. (Oh, and blur away the point-sampled look, hopefully with additional data from TAA...)
 
Yeah, that is an interesting thing to think about - RDNA 2 does not offer very coarse rates, topping out at 2x2. Turing and Ampere doing 4x4 is quite interesting for such DOF cases... perhaps even 8x8 could have its uses at times!

I don't understand this exactly but...

It does also support other functions outside per-pixel; it also supports per-8x8-tile rates, which I assume could be a single rate for the whole tile if you wanted coarse?

[attached screenshot: table of VRS coarse shading rates and tile size]

It will be interesting where this goes once developers have time to experiment with this.
 

I don't understand this exactly but...

It does also support other functions outside per-pixel; it also supports per-8x8-tile rates, which I assume could be a single rate for the whole tile if you wanted coarse?

[attached screenshot: table of VRS coarse shading rates and tile size]

It will be interesting where this goes once developers have time to experiment with this.
Take a look at the coarse rates there ^^ those are the levels to which it can degrade shading: 1x2, 2x1, and 2x2.

The tile size is the granularity at which the coarse shading rate is applied.
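
For anyone curious how this maps to the API, here's a minimal sketch of driving Tier 2 VRS in D3D12. The function name and the rate-image resource are placeholders I've invented; the D3D12 types and calls are the real ones:

Code:
#include <d3d12.h>

// Minimal sketch: query VRS support, then bind a screen-space shading
// rate image (R8_UINT, one texel per hardware tile). Untested.
void SetCoarseShading(ID3D12Device* device,
                      ID3D12GraphicsCommandList5* cmdList,
                      ID3D12Resource* rateImage) // placeholder resource
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6,
                                &opts, sizeof(opts));
    if (opts.VariableShadingRateTier != D3D12_VARIABLE_SHADING_RATE_TIER_2)
        return; // Tier 2 is what exposes per-tile rates

    // opts.ShadingRateImageTileSize reports how many pixels each texel
    // of the rate image covers (8, 16 or 32, depending on hardware).

    // Combiners decide how per-draw, per-primitive and per-tile rates
    // merge; MAX keeps the coarsest of the base rate and the image's.
    D3D12_SHADING_RATE_COMBINER combiners[2] = {
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH,
        D3D12_SHADING_RATE_COMBINER_MAX
    };
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, combiners);
    cmdList->RSSetShadingRateImage(rateImage);
}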
 
For this case VRS may not scale enough.
Apparently on XSX it only goes to a coarseness of 2x2 pixels (4xMSAA ordered grid).

You could use a combination of variable resolution and variable rate shading to get the optimum tradeoff of performance and quality. For example, run a layer at a resolution higher than you otherwise would, but reduce the cost back down using VRS.
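
A rough sketch of that tradeoff might look like the following - the struct, function name, and thresholds are all invented for illustration; only the D3D12_SHADING_RATE values are real:

Code:
#include <d3d12.h>

// Hypothetical per-layer setup: pick a render resolution and a base
// shading rate together, instead of only scaling resolution.
struct LayerSetup {
    unsigned width, height;   // layer render-target size
    D3D12_SHADING_RATE rate;  // per-draw base shading rate
};

// blurRadius: circle-of-confusion size (in pixels) for this DOF layer.
LayerSetup ChooseLayerSetup(unsigned fullW, unsigned fullH, float blurRadius)
{
    if (blurRadius < 1.0f)  // in-focus layer: pay full cost
        return { fullW, fullH, D3D12_SHADING_RATE_1X1 };
    if (blurRadius < 4.0f)  // mild blur: keep full-res edges, shade coarsely
        return { fullW, fullH, D3D12_SHADING_RATE_2X2 };
    // heavy blur: drop resolution *and* keep coarse shading
    return { fullW / 2, fullH / 2, D3D12_SHADING_RATE_2X2 };
}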
 
Can you also do reverse VRS?
So that you draw the whole scene at a lower resolution except for the front or focused layer?
I can see that freeing up even more performance than the other way around, as in modern games, with DOF rendering and everything, there is a lot of detail which would be several pixels wide.

(If you freeze-frame even the best 4K movie and walk up close to the screen, there will most probably not be a single identifiable line or detail which is only one pixel wide - and I'm not talking about 4:2:2 chroma subsampling, but actual detail.)

The problem, I think, is relying on prebuilt tech. I am pretty sure that the Xbox VRS artefacts are just a product of using the tech as intended. But if you were to render it to another buffer and maybe smooth it there, it would not look that out of place.
 
Can you also do reverse VRS?
So that you draw the whole scene at a lower resolution except for the front or focused layer?
I can see that freeing up even more performance than the other way around, as in modern games, with DOF rendering and everything, there is a lot of detail which would be several pixels wide.

(If you freeze-frame even the best 4K movie and walk up close to the screen, there will most probably not be a single identifiable line or detail which is only one pixel wide - and I'm not talking about 4:2:2 chroma subsampling, but actual detail.)

The problem, I think, is relying on prebuilt tech. I am pretty sure that the Xbox VRS artefacts are just a product of using the tech as intended. But if you were to render it to another buffer and maybe smooth it there, it would not look that out of place.

VRS is there to save performance on the back end of the game engine, not the front end. If developers want to deliver a certain image quality and performance, then they operate within the boundaries they set. And if VRS is implemented correctly, then the additional performance gains are just icing on the cake.
 
Can you also do reverse VRS?
So that you draw the whole scene at a lower resolution except for the front or focused layer?
I can see that freeing up even more performance than the other way around, as in modern games, with DOF rendering and everything, there is a lot of detail which would be several pixels wide.

(If you freeze-frame even the best 4K movie and walk up close to the screen, there will most probably not be a single identifiable line or detail which is only one pixel wide - and I'm not talking about 4:2:2 chroma subsampling, but actual detail.)

The problem, I think, is relying on prebuilt tech. I am pretty sure that the Xbox VRS artefacts are just a product of using the tech as intended. But if you were to render it to another buffer and maybe smooth it there, it would not look that out of place.
Whether VRS artifacts are noticeable depends on how aggressive the developer wants VRS to be. You're unlikely to notice them if it's done right, and done tastefully, in a way where a developer isn't clutching at it to save performance.

While VRS saves on performance, it is not meant to be a replacement for other performance-saving measures like DRS.

There is a notable difference between variable resolution and variable rate shading, and between what the two accomplish. Having different resolution buffers mixed together doesn't necessarily create the same effects as variable rate shading. If you want to do things like DOF, you can leverage VRS for that, and it makes sense: coarser shading rates cause more pixels to be similar, creating a blur effect, and you can adjust how much coarse shading you want throughout the whole frame. Reducing the resolution won't solve that issue, it just makes details look softer; you'd have to build hundreds of individual target buffers, all with differing resolutions, to attempt to do what VRS does.
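
To make that concrete, here's a hedged sketch of deriving a VRS rate image from a circle-of-confusion (CoC) buffer. In practice this would be a compute shader; a CPU loop is used only to keep the illustration self-contained, and the function name and thresholds are invented:

Code:
#include <d3d12.h>
#include <cstdint>
#include <vector>

// One entry per rate-image texel (i.e. per 8x8 tile on Series X),
// holding the maximum CoC radius (in pixels) found in that tile.
void BuildRateImage(const std::vector<float>& maxCocPerTile,
                    std::vector<uint8_t>& rateImage)
{
    rateImage.resize(maxCocPerTile.size());
    for (size_t i = 0; i < maxCocPerTile.size(); ++i)
    {
        float coc = maxCocPerTile[i];
        if (coc < 1.0f)                      // in focus: full rate
            rateImage[i] = D3D12_SHADING_RATE_1X1;
        else if (coc < 4.0f)                 // slight blur
            rateImage[i] = D3D12_SHADING_RATE_2X1;
        else                                 // heavy blur
            rateImage[i] = D3D12_SHADING_RATE_2X2;
        // 4x4 (D3D12_SHADING_RATE_4X4) would be the next step for very
        // large CoC, but only where AdditionalShadingRatesSupported is
        // set - which, per the discussion above, RDNA 2 doesn't offer.
    }
}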
 
You could use a combination of variable resolution and variable rate shading to get the optimum tradeoff of performance and quality. For example, run a layer at a resolution higher than you otherwise would, but reduce the cost back down using VRS.
This should work nicely for places which have sharp edges and a low-contrast interior.
So a tree in fog: mostly silhouette and a single color.
 
I wonder what it would take for Sony or Microsoft to supply places like Digital Foundry with modded machines... nothing different, of course, but "binned" chips with ideal thermal profiles, coupled with full, hands-on inspection and testing beyond what would be done on an assembly line... and then "tweak" performance.
Especially for PS5, they could skew the clock frequency higher than they'd be comfortable with at large scale, but be okay with in that scenario. Since the clock rate fluctuates anyway, there wouldn't be a lot of risk in anyone testing it directly... and in the case of PS5, even if they did power-draw tests, those would vary anyway, and you'd have to see a wide-scale test of random machines to spot anything too different.
(And put in a kill switch to return to defaults if they decide to drop the risk.)

Note, I'm not assuming they do this, but they "could", technically. A lot of the frame-rate and resolution differences are too small for regular people to actually notice, but they are pretty big in terms of PR. They'd just have to gift them a large number of machines, to help ensure they tested using the supplied units...

*Thinking about it, I suppose there wouldn't be enough of a benefit for the trouble/risk.
 
DF Article @ https://www.eurogamer.net/articles/digitalfoundry-2020-call-of-duty-warzone-next-gen-showdown

Call of Duty: Warzone - what's really happening on PS5 and Xbox Series consoles?
4K60 and 120Hz gaming tested on all three next-gen consoles.

Activision made headlines at the next generation console launch by adding 120fps support to Call of Duty: Warzone on Xbox Series X - an extra that was not mirrored on PlayStation 5, which continues to top out at 60Hz. Confusion followed, but clarification came when Rocket League developer Psyonix revealed that Sony does not allow 120Hz support on legacy PS4 apps running on PlayStation 5. It's a shame not to see it yet, especially since other enhancements have been delivered to PS4 games through back-compat (most notably on Days Gone and Ghost of Tsushima, which both now run at 60fps on PS5). And so, we wondered to what extent Warzone has been improved when running on both next-gen machines. 120fps support is allowed on Series X, but is there any other advantage here? The answer? Absolutely yes.

To get the comparison data we needed, we leaned into Warzone's crossplay functionality, which allows up to four different gamers using any supported console or PC to play together on the same server. Not only that, but we can actually get precise, lined up shots of gameplay from all systems by allowing all members of the squad to perish, then sync capture in spectator mode. Four consoles, four gamers all recording footage of the same player. From there, the analysis can begin proper.

First things first, we can confirm that Warzone is indeed a backwards compatibility title, even though the Xbox Series X and S versions receive 'optimised' status, presumably owing to their 120Hz support. However, running matched feeds of Xbox One X and Series X side-by-side, it's pretty clear that this is effectively the same game - though there are some interesting improvements, mostly delivered by the efforts of the Xbox compatibility team. First of all, the Series X hardware essentially eliminates the dynamic resolution sub-4K rendering seen on Xbox One X, which scales from 1920x2160 all the way up to 3840x2160. In every pixel count we carried out, Series X delivers full ultra HD resolution - even in scenarios where Xbox One X drops well beneath 60 frames per second. In essence, Warzone fully taps out back-compat support, with a 2x resolution multiplier - impressive stuff.


...
 
Can you also do reverse VRS?
So that you draw the whole scene at a lower resolution except for the front or focused layer?
I can see that freeing up even more performance than the other way around, as in modern games, with DOF rendering and everything, there is a lot of detail which would be several pixels wide.

(If you freeze-frame even the best 4K movie and walk up close to the screen, there will most probably not be a single identifiable line or detail which is only one pixel wide - and I'm not talking about 4:2:2 chroma subsampling, but actual detail.)

The problem, I think, is relying on prebuilt tech. I am pretty sure that the Xbox VRS artefacts are just a product of using the tech as intended. But if you were to render it to another buffer and maybe smooth it there, it would not look that out of place.
So I have heard that Series X and S support a supersampling mode for VRS - to give extra shading to an area on demand. But I am not sure how it could or would be used.
 
You're unlikely to notice them if it's done right, and done tastefully, in a way where a developer isn't clutching at it to save performance.
I look forward to the BAFTA award for the most tasteful implementation of VRS on a console. :mrgreen:
 
So I have heard that Series X and S support a supersampling mode for VRS - to give extra shading to an area on demand. But I am not sure how it could or would be used.


I suppose it would work like the 6K downsampling mode in Ori and the Will of the Wisps. In the interview John did with the lead engineer on Ori, Gennadiy Korol, he was talking about how they only really render the character layer at 6K and then downsample that. Could the VRS supersampling be used for things like that? It might also be good for things like fine text in games too.
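
For the layer-downsampling half of that idea, a sketch might look like the box filter below. To be clear, this is not Moon Studios' actual code, just an illustration; Ori's 6K-to-4K is a non-integer 1.5x scale, so the real thing would use a bilinear or better filter, while this sketch assumes an integer scale for simplicity:

Code:
#include <cstdint>
#include <vector>

// Illustrative box downsample of a layer rendered at scale x native res.
// src is (w*scale) x (h*scale) RGBA8 pixels; dst comes out w x h.
void BoxDownsample(const std::vector<uint32_t>& src, int w, int h, int scale,
                   std::vector<uint32_t>& dst)
{
    dst.resize(size_t(w) * h);
    const int srcW = w * scale;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            unsigned r = 0, g = 0, b = 0, a = 0;
            for (int sy = 0; sy < scale; ++sy)      // average the
                for (int sx = 0; sx < scale; ++sx)  // scale x scale block
                {
                    uint32_t p = src[size_t(y * scale + sy) * srcW
                                     + (x * scale + sx)];
                    r += p         & 0xff;
                    g += (p >> 8)  & 0xff;
                    b += (p >> 16) & 0xff;
                    a += (p >> 24) & 0xff;
                }
            const unsigned n = unsigned(scale * scale);
            dst[size_t(y) * w + x] = (r / n)
                                   | ((g / n) << 8)
                                   | ((b / n) << 16)
                                   | ((a / n) << 24);
        }
}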
 
AMD never forced the tessellation optimization; it was manually selectable in the drivers, and benchmarkers never enabled it for their comparisons.
It's enabled by default. I doubt most benchmarks disable it, since my AMD cards performed in line with basically every tech website's benches, within reason/margin of error, and performed worse in games like Crysis 2, where tessellation is used to an excessive degree, when the optimization was disabled.
 
So I have heard that Series X and S support a supersampling mode for VRS - to give extra shading to an area on demand. But I am not sure how it could or would be used.

Maybe higher lighting precision on the main character, or something related to shadows? Also no idea, but it's cool if the developers start thinking outside the...
x-box :D
 
So I have heard that Series X and S support a supersampling mode for VRS - to give extra shading to an area on demand. But I am not sure how it could or would be used.
I'd love to see how the pipeline would work around this, and to see a developer attempt it lol. IMO it's a good use of available power other than:
a) increasing resolution
b) increasing frame rate
c) deciding what to do with available power when you're close up and neither (a), (b), nor draw distance can increase the workload
 