Digital Foundry Article Technical Discussion [2020]

There was also a serious pop-in issue with respect to shadows. So what may have appeared as 'no lighting' or 'flat lighting' may actually have just been shadows delayed for so long that you'd think the fault lay with the values being averaged everywhere.

Check out the infamous no-shadow, no-lighting control stand. Keep your eye fixed on it long enough and, when he's finally beside it, both lighting and shadows suddenly appear on it and on the other items around it.


So I'll forgive anyone for thinking they're in constant shadow, and thus unable to cast shadows within shadows, when in reality the renderer is borked here.

It may also be an issue with their time-of-day rendering, depending on how they implemented it.
Here is the sun at the 3-minute mark:
[screenshot: HZAV0kg.jpg]


And the sun and lighting again at the 6-minute mark:
[screenshot: NsZpTqV.jpg]


So with the sun being obscured, I'm not entirely sure what is happening.

If time of day ends up making a difference to the gameplay e.g. stealth is easier at night, or it's easier to rescue marines and get them out alive, that could be a really cool feature.

Also interested to see what path the sun takes in the sky e.g. is the ring spinning like a wheel, while also orbiting a planet (with a dark side). There could be some really cool atmospheric results that really drive home the outer-spaciness of Halo.

I'm up for more dynamic elements in the game. Hopefully that will extend to the trees and grass in the finished thing.
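The sun-path speculation above can be sketched as a toy model. Everything here is made up for illustration: it assumes the sun lies in the ring's spin plane, ignores orbital motion, ring self-shadowing and atmosphere, and the 24-hour spin period is arbitrary.

```python
import math

def sun_elevation_deg(t, spin_period, sun_dir=(1.0, 0.0)):
    """Apparent sun elevation (degrees) for an observer standing on the
    inner surface of a ring spinning in the plane containing the sun.

    t and spin_period are in the same units (say, hours). Local 'up'
    points from the observer toward the ring's axis, so elevation > 0
    means daytime. A made-up 2D toy, not Halo's actual model."""
    theta = 2.0 * math.pi * t / spin_period        # observer's angle around the ring
    up = (-math.cos(theta), -math.sin(theta))      # inward-pointing local 'up'
    dot = up[0] * sun_dir[0] + up[1] * sun_dir[1]
    return math.degrees(math.asin(max(-1.0, min(1.0, dot))))

# One spin = one full day/night cycle: sun overhead at t = 12,
# below the horizon at t = 0, on the horizon at t = 6.
noon = sun_elevation_deg(12.0, spin_period=24.0)
midnight = sun_elevation_deg(0.0, spin_period=24.0)
```

The interesting wrinkle a real ring adds is that "night" can also come from the ring itself (or the planet) eclipsing the sun, which this sketch deliberately leaves out.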
 
Atmospheric scattering for a ring could be an interesting challenge.
Planetary and ring shadows, colour shifts, etc.
 
Nvidia's work really did pay off. But at the same time, at this point many devs are no longer using this kind of checkerboarding, but more refined TAA solutions that are already superior. From DF's perspective, TLOU2's IQ at 1440p is already perceptually very close to native 4K, and without the drawbacks of checkerboarding.

So even for hardware without DLSS-level reconstruction there are alternatives. Just nothing standardized. At the same time, that latency probably kills any chance of Switch 2 using DLSS as well.
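For anyone wondering what these refined TAA solutions share at their core, here is a minimal sketch of exponential temporal accumulation. It is hugely simplified: real implementations add sub-pixel jitter, motion-vector reprojection, and history rejection on top of this blend.

```python
def taa_accumulate(frames, alpha=0.1):
    """Exponential temporal accumulation, the core idea shared by TAA
    and TAA-upsampling: blend each new frame into a running history so
    per-frame jitter/aliasing noise averages out over time.

    frames: list of frames, each a list of pixel values.
    alpha: blend weight of the newest frame (lower = more smoothing)."""
    history = list(frames[0])
    for frame in frames[1:]:
        history = [(1.0 - alpha) * h + alpha * c for h, c in zip(history, frame)]
    return history

# A pixel whose coverage flickers between 0 and 1 every frame (classic
# edge aliasing) settles near its true average of 0.5:
flicker = [[0.0], [1.0]] * 50
resolved = taa_accumulate(flicker)
```

The trade-off the thread keeps circling around is visible right here: the same averaging that removes aliasing also smears any detail that genuinely changes frame to frame, which is where the blur (and the need for history rejection) comes from.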
 

I think that will depend on how much rendering time you save thanks to DLSS's ability to excel even at lower native framebuffer resolutions. The cost has to be balanced against the saving.

But that, too, has to be balanced against the silicon cost (tensors vs compute).

But ... that too has to be balanced against the power drawn by those operations on the different types of processor (especially on mobile).

Edit: DF actually did mention all that in the video to be fair, now I've finished it.

Edit 2: Hey, @Dictator ! Enjoyed the video, but did some of the labelling get mixed up here?

 
@Inuhanyou Is tlou2 really close to native 4k? I have a feeling it's not, and TAA adds a lot of blur. I don't think the problems with TAA have been solved. That isn't to say it doesn't look very good. Without being able to compare it directly to a native 4k render you can't really know.

 
Hey @Dictator, sorry to ping you twice quickly in succession! There was also something interesting going on earlier, at this point.


On either side of the backpack, on both screens (CB and DLSS, for anyone trying to follow along), you can see big, long, motion-blurred raindrops. But directly between the camera and the backpack some of the rain appears to be missing, and all of the weird 'noise' on the CB version on the left (which I concluded might be simple, single-frame point impacts of rain on the backpack) is missing in DLSS too, as far as I can tell. The rain trails down the backpack sometimes show in DLSS, but not those grey specks that pop into and out of existence.

(single frame single pixel particle effects wouldn't have motion or depth vectors, right?).

So ... does this mean that DLSS is taking into account depth buffers for the free falling rain between the camera and the backpack, and incorrectly filtering data out?

And while the rain running down the backpack is there with DLSS, that peculiar weird pixel of [light grey or dark grey] that I thought might be a low-cost rain impact effect is missing too. Do you think this could be because these (seemingly) single frame, single pixel particle effects won't have either a motion vector, or depth value? Could this be a current weakness of DLSS 2.0?

This was seriously a great video, and it's got me asking questions!

Edit 3: I mean aliasing [strike]basically[/strike] kinda sorta manifests itself as noise (anti-noise?) over time. If you have a mega simple effect of random pixels popping (with no ancillary information like motion, or neighbours), it's going to look a lot like noise to a system trained to look for and eliminate aliasing, right?

Edit 4: I have a thought process going on here. I'm not insane. Probably. An anti-aliasing system is looking for out-of-place pixels that are too different, small on the level of rasterization, and without any kind of persistence in the buffers DLSS uses. They have to look like some Nyquist-theorem fuckery, right?
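To make that concrete, here is a toy sketch of the generic neighborhood-clamp-and-blend logic many temporal methods use. To be clear, DLSS's actual learned filter is not public; this is only an analogy for why a single-frame, single-pixel particle tends to vanish.

```python
def resolve_pixel(history, current, neighborhood, alpha=0.1):
    """Generic TAA-style resolve: clamp the history sample to the
    min/max of the current frame's local neighborhood, then blend it
    with the current sample. Not DLSS -- an illustrative stand-in."""
    lo, hi = min(neighborhood), max(neighborhood)
    clamped = max(lo, min(hi, history))
    return (1.0 - alpha) * clamped + alpha * current

# A rain-impact pixel that exists for exactly one frame only ever shows
# up at its blend weight (0.1 of full brightness here)...
flash = resolve_pixel(history=0.0, current=1.0, neighborhood=[0.0, 1.0, 0.0])
# ...and the frame after, the clamp snaps its history residue back to
# the (now dark) neighborhood, erasing it completely.
gone = resolve_pixel(history=flash, current=0.0, neighborhood=[0.0, 0.0, 0.0])
```

With no motion vector or stable depth to distinguish it from shimmer, a transient pixel really is indistinguishable from aliasing noise to this kind of logic, which fits the Edit 3/4 hypothesis.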
 
No, it's 1440p. But the AA solution used cleans up very well (again, according to DF), to the point where DLSS isn't all that necessary if you have good resource-saving techniques that can eliminate the most glaring visual drawbacks. Insomniac and Naughty Dog both have great IQ solutions, and with the extra hardware power on offer it'll hopefully be most devs.
 
If I may, I really do not think TLOU2 is doing any form of temporal upsampling at all. You can see it rather well in a direct-feed shot, or when you put it next to a real 4K image of any other game.
I am not sure where the idea came from that it uses TAAU like Insomniac's games.

A great example is the edge quality in the Halo video I just made: when it switches to the few TLOU2 shots I have, it looks very noticeably different in terms of aliasing and detail.
 

The problem is not the TAA, it's the film grain. You can see the difference with the same shots without film grain using the photo mode.
 

Er... I didn't mean to imply Naughty Dog uses temporal upsampling like Insomniac. Just that the IQ method used in general looked good enough to clean up well on a 4K TV, like Insomniac's IQ method.

I was saying devs will refine these techniques even further to get good IQ results and performance savings, despite not having a standardized reconstruction method like DLSS.
 
TAA adds blur. It's one of the main drawbacks of TAA. It's why many games with TAA add sharpening.
I don't even know what they are using in TLOU2. My point is with that much blur added because of the strong film grain they are using, it's premature to say that TLOU2 is blurred by TAA. We don't know what kind of TAA (if any) they are using. But we do know (because of the photo mode) that the film grain they are using is very strong.
 

Uncharted 4 used TAA, and I believe The Last of Us 2 does as well. In the video I linked, Uncharted 4's TAA is listed as Xu16, which uses history clipping/clamping, and that comes with some drawbacks. It has a sharpening filter to combat some of the blur introduced by the way they try to minimize ghosting.
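As a rough illustration of that sharpening step, here is a generic 1D unsharp mask, not Naughty Dog's actual filter (which hasn't been published); the shape of the operation is the point.

```python
def unsharp_1d(signal, amount=0.5):
    """1D unsharp mask: out = x + amount * (x - blur(x)).
    TAA passes often end with something of this shape to win back edge
    contrast lost to the temporal blend. Edges are clamped by repeating
    the border sample."""
    out = []
    for i, x in enumerate(signal):
        left = signal[max(i - 1, 0)]
        right = signal[min(i + 1, len(signal) - 1)]
        blurred = (left + x + right) / 3.0
        out.append(x + amount * (x - blurred))
    return out

# On a step edge, the dark side is pushed darker and the bright side
# brighter, restoring apparent contrast (with some overshoot/ringing):
edge = unsharp_1d([0.0, 0.0, 1.0, 1.0])
```

The overshoot is also why aggressive TAA sharpening can produce haloing around high-contrast edges; it's a counter-blur, not recovered detail.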
 
I remember when HZD's upscaling tech was amazing back in 2017. It still is in its own way, but it pales in comparison to DLSS. DLSS is newer tech, but still: things keep improving from gen to gen.
 
It was also created for hardware pushing 4.2 teraflops. I think it looks amazing for the hardware it was designed for. I mean, it took Nvidia until the beginning of this year to have tech like DLSS 2.0 available. Yes, I know DLSS 1.0 was available before, but it certainly wasn't as impressive as 2.0. Maybe @Dictator can do a comparison of the two technologies, as that would be a better comparison IMO.
 
What I found enlightening and interesting was that he said the XSX would take approx. 5 ms to do ML upscaling; not sure what resolution that's for, though.
It would be interesting to know how many extra ms it takes to render 2160p versus, say, 1440-1660p, as I don't remember there being any comparisons apart from to RTX and the 4 Pro.
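That question can be roughed out as back-of-envelope arithmetic. All the numbers here are assumptions for illustration: a 16.6 ms (60 fps) native-4K budget is hypothetical, the ~5 ms figure is the one quoted in the thread, and render time is assumed to scale linearly with pixel count, which not every pass does.

```python
def frame_times_ms(native_ms, pixel_fraction, upscale_cost_ms):
    """Back-of-envelope budget: assumes render time scales linearly
    with pixel count and the upscale pass is a fixed cost.

    Returns (reduced-res render + upscale, native render), both in ms."""
    return native_ms * pixel_fraction + upscale_cost_ms, native_ms

# Rendering at 1440p instead of 2160p (~44% of the 4K pixels), plus the
# ~5 ms ML-upscale figure quoted for XSX, against an assumed 16.6 ms
# native-4K frame:
fraction = (2560 * 1440) / (3840 * 2160)
with_upscale, native = frame_times_ms(16.6, fraction, 5.0)
# with_upscale ~ 12.4 ms vs 16.6 ms native -- a win *if* render cost
# really scaled with pixels; fixed-cost passes shrink the saving.
```

This is exactly the balancing act raised earlier in the thread: the fixed upscale cost only pays off when the resolution drop saves more than it costs, which gets harder at high frame rates where 5 ms is a huge slice of the budget.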
 
On 2018 hardware, it's to be expected that they needed to iron things out in their software. DLSS is the most impressive upscaling tech ever, IMO, and it will likely only improve over time.
12TF+ hardware :)
Agreed, and as Alex said, baked-in hardware with AI components. The PS4 Pro has none of that.
 