Digital Foundry Article Technical Discussion [2021]

I know having a greater FOV can have an impact on overall performance, because more geometry/textures/assets are being rendered on screen at higher FOV settings. And this can impact DRS as well (meaning, the greater the increase in FOV and rendered assets, the greater the chance DRS will drop from the upper-bound resolution setting, in this case 2160p).

So, my question is to Alex or Tom, what FOV settings were used in the analysis? Hopefully the defaults...

Edit: And if increased FOV settings do affect Destiny 2 performance on PS5/XBSX, it might be a great tool for comparing performance at various settings.
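For a rough sense of why a wider FOV costs performance, here's a back-of-the-envelope sketch (purely illustrative, not Destiny 2's actual scaling; the 90° baseline is just an assumption): the visible slice of the world widens roughly with tan(FOV/2), so more assets land inside the frustum and DRS has more reason to back off from 2160p.

```python
import math

def frustum_width_factor(fov_deg: float, baseline_deg: float = 90.0) -> float:
    """Relative width of the view frustum at a fixed distance,
    compared to a baseline horizontal FOV."""
    return math.tan(math.radians(fov_deg) / 2) / math.tan(math.radians(baseline_deg) / 2)

for fov in (90, 100, 105, 110):
    # More width at the same distance means more geometry/assets in view,
    # which pushes frame time up and DRS resolution down.
    print(f"hFOV {fov:>3} deg: ~{frustum_width_factor(fov):.2f}x the baseline width")
```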
Yup. The video, at least, doesn't mention what FOV their framerate numbers were run at, or whether FOV affects framerate to begin with.

That stuttery barren icescape that mr magoo highlighted or any 120fps gameplay would be promising test material.
 
That's a really nice video.

Can we assume that Nvidia's solution is more performant due to the RT software/drivers being more efficient? Otherwise I can't understand why the proportional difference between them recedes with complexity.

I wonder if Nvidia has more approximate shading in step three? Or if AMD and Nvidia are equally colour accurate.
 
One of DF's best videos in a while -- not surprised there's a big gap, but a little surprised it's that big.

I'm suspicious the "diminishing returns" might just be big workloads not being possible to schedule as well (aka noise that's specific to the layout of the hardware). I wonder if the pattern would be the same or different on the weaker cards in the same families. Just speculation though.

One minor nitpick: I wish he'd explained the cost of the bigger acceleration structure a little better. The idea that rays have to go "further" is an OK analogy for what's actually going on (a bigger data structure that you have to descend deeper into, on average, to confirm misses), but I already have a premonition of fanboys saying their platform's bigger, sparser, less complex scenes are proof it can send rays out "faster".
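To put the "further" analogy in more concrete terms, here's a toy calculation (an idealised, perfectly balanced BVH, which no real game actually has): traversal depth grows with the log of the primitive count, and every extra level is more box tests a ray has to perform before a miss can even be confirmed.

```python
import math

def balanced_bvh_depth(num_primitives: int) -> int:
    """Depth of an idealised, perfectly balanced binary BVH."""
    return math.ceil(math.log2(max(num_primitives, 1)))

for prims in (100_000, 1_000_000, 10_000_000):
    # Each extra level is another box test (or several) per ray, so a
    # bigger, sparser scene doesn't mean rays travel "faster".
    print(f"{prims:>10,} primitives -> ~{balanced_bvh_depth(prims)} levels per traversal")
```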
 
Considering how expensive RT global illumination is, you can totally understand why Epic in UE5 have chosen to decouple the rendering of GI from the frame time. Their engine appears to calculate the light bounces over several frames.

...which makes sense considering the cost, and well, that's how observed light actually functions, since it takes time to traverse and bounce around an environment.
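As a minimal sketch of that amortisation idea (the blend factor is an assumption, not Lumen's actual code): blend each frame's cheap, noisy GI sample into a history buffer instead of resolving every bounce inside one frame, and the result converges over time.

```python
def accumulate_gi(history: float, sample: float, blend: float = 0.1) -> float:
    """Exponentially blend this frame's noisy GI sample into the history."""
    return history * (1.0 - blend) + sample * blend

# A light switches on: the accumulated value converges over many frames
# instead of snapping to the correct answer on frame 1.
history = 0.0
for frame in range(1, 31):
    history = accumulate_gi(history, sample=1.0)
    if frame in (1, 5, 10, 30):
        print(f"frame {frame:>2}: accumulated GI = {history:.2f}")
```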
 
Yeah but that all happens at the speed of light.
 

Oh yeah, but even the speed of light has a limit and isn't instant. I think it's a super nice compromise and actually is a better approximation of how light actually works.

Edit: observed light anyway. Even unobserved/unmeasured photons are approximated in modern engines, as they occlude polygons outside of the player's viewpoint.
 

Well under a frame though -- a light millisecond is 300,000 meters. At 120 fps a frame is about 8.3 ms, so light covers roughly 2,500,000 meters per frame -- around 1,550 miles. How many times do you think it's going to (perceptibly) bounce? Compared to Lumen, which takes actual seconds to accumulate.

(Also note, the horizon is about 3 miles away, so even in a wide open field with mountains far away you would get hundreds of bounces within a 'frame' IRL.)
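Quick sanity check on those numbers (assuming an 8.33 ms frame at 120 fps):

```python
SPEED_OF_LIGHT_M_S = 299_792_458          # metres per second
FPS = 120
frame_time_s = 1.0 / FPS                  # ~8.33 ms

metres_per_frame = SPEED_OF_LIGHT_M_S * frame_time_s
miles_per_frame = metres_per_frame / 1609.34
horizon_miles = 3.0                       # rough horizon distance at eye level

print(f"light covers ~{metres_per_frame:,.0f} m (~{miles_per_frame:,.0f} miles) per frame")
print(f"that's ~{miles_per_frame / horizon_miles:.0f} horizon-length hops in a single frame")
```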
 

It's surely a smart move to spread the light bounces across several frames rather than forcing them all into a single frame and blowing the 30/60/120 Hz budget.

Explains why UE5 has nicer-looking GI than some RT implementations that only allow a fixed number of bounces.
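A minimal sketch of the scheduling side of that idea (all of the numbers here are invented): give lighting a fixed slice of the frame and let whatever doesn't fit carry over to the next frame instead of blowing the budget.

```python
def run_lighting_budget(pending_units: int, cost_per_unit_ms: float,
                        budget_ms: float) -> int:
    """Do as many lighting work units as fit in this frame's slice and
    return how many carry over to the next frame."""
    fits = round(budget_ms / cost_per_unit_ms)
    return max(pending_units - fits, 0)

# e.g. 400 probe/bounce updates at 0.02 ms each, with 2 ms of an 8.3 ms
# frame reserved for GI: the backlog drains over a handful of frames.
pending = 400
frame = 0
while pending > 0:
    frame += 1
    pending = run_lighting_budget(pending, cost_per_unit_ms=0.02, budget_ms=2.0)
    print(f"frame {frame}: {pending} updates still pending")
```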
 
I definitely agree with you -- I'm glad we're moving past the point where temporal accumulation (such as TAA) is optional, and I think accumulating GI is the way forward for the near term. I expect we'll also see some hardware-raytraced lighting accumulate extra detail (I wouldn't be surprised if even Lumen uses RT hardware to accelerate some parts when the engine comes out, or in a subsequent update, depending on how it works).

It's wrong to pretend it's not a big artifact, though (especially as slow as Lumen was in the demo) -- it outright precludes many kinds of lighting (fast rotating or flickering lights, for sure) from looking right even to untrained eyes, unless a pre-baked solution is kludged in.
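The flickering-light problem falls straight out of the same accumulation maths (again just a sketch with an assumed blend factor): feed a strobing light into the history buffer and the result smears into a dim, steady value instead of tracking the on/off states.

```python
def accumulate(history: float, sample: float, blend: float = 0.1) -> float:
    """Exponential accumulation, as in a temporal GI/AA history buffer."""
    return history * (1.0 - blend) + sample * blend

history = 0.0
for frame in range(60):
    sample = 1.0 if frame % 2 == 0 else 0.0   # light strobing every frame
    history = accumulate(history, sample)

# The history hovers near the average (~0.5) rather than following the
# flicker -- the artifact described above.
print(f"accumulated value after 60 strobing frames: {history:.2f}")
```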
 

Of special interest would be seeing how Ampere performs in Lumen. That's before RT hardware is being used.
 
Yeah, that's a great video. Feel like I need to watch it at 1/4 speed to take it all in though! Alex doesn't half pack a huge amount of info into a relatively short space! Very educational though, almost feel like I've just sat a university course in RT basics.

I found it really strange that the consoles are using the inferior denoiser 1 setting in Watch Dogs when the 2 setting used on PC both looks better and runs better than the 1 setting on the 6800 XT. I wonder what's behind that?
 

The 6800 XT has almost double the amp over the consoles -- anything to do with that?

Also, I understand the tensor hardware on NV's GPUs actually isn't used yet for denoising in current games?
 
Deep dive RTX vs AMD RT
Well, you could look at this video and think that at some point, with much higher quality, AMD's solution no longer takes the larger performance hit -- but at that point the average framerate is still higher on the RTX card; only the percentage drop might be the same. That shows the AMD cards in a bit better light than they really are right now with RT. But it also shows that as soon as there is more to do than shadows (and that's the only RT "effect" where the new AMD cards shine), the AMD solution is way slower. It might still be enough for an entry into the RT market, so more developers might support it.

But overall, RT games are still very uncommon. I've had an RTX 3070 since last week and I really don't know what I should do with the RT stuff.
Quake 2 is impressive from a technical standpoint, but not so much as a game with current graphics. It's also not great that you can't just deactivate RT to see how it looks without it -- that also deactivates the better texture packs etc.
I tried "Deliver Us the Moon" and did not see much of a difference. The "fakes" are just too good for the difference to really stand out.


I really hope that at some point Nvidia will use the tensor cores for the denoising and free up compute resources for other things, but I doubt they will do that, since it would mean DLSS might not have enough resources to work properly. Btw, I haven't tried DLSS so far, but I don't really have games on my PC that would work well with it.
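On the percentage-drop point above, a purely illustrative example (these are not the video's measurements, and the card names are placeholders): even if both cards lost the same fraction of performance to RT, the one with the higher raster baseline still finishes ahead in absolute framerate.

```python
def fps_with_rt(raster_fps: float, rt_cost_fraction: float) -> float:
    """Framerate after an RT workload eats a given fraction of performance."""
    return raster_fps * (1.0 - rt_cost_fraction)

# Same 40% RT cost on both cards, different raster baselines.
for name, base_fps in (("card A", 100.0), ("card B", 80.0)):
    print(f"{name}: {base_fps:.0f} fps raster -> {fps_with_rt(base_fps, 0.40):.0f} fps with RT")
```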
 
PlayStation 5 owners with PS+ subscriptions get instant access to The Last Guardian via PS4 back-compat, but it's locked at 30fps. The system is capable of so much more - in fact, it can run pretty much locked at full 60fps - but you need an original disc copy of the game to do it. And there are some drawbacks... Tom takes you through it all.
 