Digital Foundry Article Technical Discussion [2022]

Looks about on par with something like Watch Dogs 2. Impressive compared to prior Saints Row titles I suppose.

Looks a lot better than Watch Dogs 2 to me. Lighting has a lot more depth and subtlety. WD2 looks pretty flat in comparison. It's in a whole other league to Saints Row 4 of course, which has never been a graphics showcase.

Interesting. Can RTAO cast actual indirect shadows?

Probably not, since AO doesn't actually cast shadows; it just darkens pixels surrounded by nearby geometry. If those pixels happen to be lit indirectly there would be a similar effect though.
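To make that concrete, here's a minimal sketch (hypothetical names and values, not any engine's actual code) of how an AO term is typically applied: a single scalar that scales the ambient/indirect light, independent of where any particular light actually is.

```python
# Sketch: AO does not cast shadows per light; it is a single scalar in
# [0, 1] that scales whatever ambient/indirect light arrives at a pixel.
def shade(ambient_light, occlusion):
    # occlusion = fraction of nearby directions blocked by geometry
    return ambient_light * (1.0 - occlusion)

# An open surface keeps its full indirect light...
print(shade(0.8, 0.0))   # 0.8
# ...while a creviced surface is darkened, regardless of where lights are.
print(shade(0.8, 0.75))  # 0.2
```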
 
Looks a lot better than Watch Dogs 2 to me. Lighting has a lot more depth and subtlety. WD2 looks pretty flat in comparison. It's in a whole other league to Saints Row 4 of course, which has never been a graphics showcase.
This looks just as flat to me, and the materials have that same cardboard look.

 
Interesting. Can RTAO cast actual indirect shadows?
Kinda. "Ambient occlusion" is a proxy for indirect light visibility (screen-space AO is not very good at capturing this property). It's not going to be accurate per light, but it's basically "how exposed to the environment is this surface?", and lights are typically somewhere else in the environment.

If those pixels happen to be lit indirectly there would be a similar effect though.
^ This is right, but all surfaces (except in a pure black environment, such as deep in a cave) are lit indirectly, so it's a closer analogue than you might think.
 
Interesting. Can RTAO cast actual indirect shadows?
Without directional information, not really.
It is just the occlusion amount over a whole sphere/hemisphere from the sampling point.

Multiply the occlusion percentage with the incoming light and you get darkening near objects, but not in a way that depends on the lighting environment.

If you sample occlusion amounts from different directions and shade accordingly, you can get quite amazing indirect shadows.
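As a toy illustration of that last idea (everything here is hypothetical, not any shipped scheme): store occlusion per direction bucket instead of as one scalar, then attenuate each ambient light by the occlusion of the bucket its direction falls into, so light from different directions casts different "ambient shadows".

```python
# Hypothetical 4-bucket directional AO in 2D, purely illustrative.
directions = {  # unit vectors for 4 horizontal buckets
    "east": (1, 0), "north": (0, 1), "west": (-1, 0), "south": (0, -1),
}
# Pretend the environment is open toward "north" and blocked elsewhere.
occlusion = {"east": 0.9, "north": 0.1, "west": 0.9, "south": 0.9}

def ambient_from(light_dir, intensity):
    # pick the bucket whose axis best matches the light direction
    best = max(directions,
               key=lambda k: light_dir[0] * directions[k][0]
                           + light_dir[1] * directions[k][1])
    return intensity * (1.0 - occlusion[best])

print(ambient_from((0, 1), 1.0))           # 0.9: light from the open side survives
print(round(ambient_from((1, 0), 1.0), 2)) # 0.1: light from a blocked side is mostly lost
```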
 
Without directional information, not really.
It is just the occlusion amount over a whole sphere/hemisphere from the sampling point.

Multiply the occlusion percentage with the incoming light and you get darkening near objects, but not in a way that depends on the lighting environment.

If you sample occlusion amounts from different directions and shade accordingly, you can get quite amazing indirect shadows.
Oh, so it doesn't even take direction into account.
 
Without directional information, not really.
Direction information comes from the lighting setup and environment.
Imagine rays being cast in an enclosed area, such as a room with just a single window.
If a ray hits a wall in the room, it adds darkening via the RTAO, but if it flies away through the window it adds no darkening, and that is where the directionality comes from: the shadows are no longer uniform (though the effect will be hard to notice). In other words, since RTAO is an integral of shadowing coming from all possible directions, the window in an enclosed area will make certain areas less dark, creating directional shadowing; it's just that the rays must be long enough to capture the effect. A ray that hasn't hit anything can then be multiplied by a constant ambient term to simulate some basic form of GI, from the sky dome for example.
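A small Monte Carlo sketch of the room-with-a-window argument (the geometry is faked analytically and every number is made up): rays that escape through the window pick up a constant sky ambient term, so points that can "see" more of the window come out brighter, which is exactly the subtle directionality described above.

```python
import random

random.seed(1)  # fixed seed for reproducibility

def rtao_ambient(window_fraction, sky_ambient=1.0, n_rays=10_000):
    # window_fraction: fraction of hemisphere directions that escape
    # through the window (depends on where in the room the point is).
    # Each escaping ray contributes the constant sky ambient term;
    # rays that hit a wall contribute nothing.
    misses = sum(1 for _ in range(n_rays)
                 if random.random() < window_fraction)
    return (misses / n_rays) * sky_ambient

near_window = rtao_ambient(0.30)  # sees a lot of the window
far_corner  = rtao_ambient(0.05)  # window subtends a small angle
print(near_window > far_corner)   # True: shading varies with position
```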
 
Direction information comes from the lighting setup and environment.
Imagine rays being cast in an enclosed area, such as a room with just a single window.
If a ray hits a wall in the room, it adds darkening via the RTAO, but if it flies away through the window it adds no darkening, and that is where the directionality comes from: the shadows are no longer uniform (though the effect will be hard to notice). In other words, since RTAO is an integral of shadowing coming from all possible directions, the window in an enclosed area will make certain areas less dark, creating directional shadowing; it's just that the rays must be long enough to capture the effect. A ray that hasn't hit anything can then be multiplied by a constant ambient term to simulate some basic form of GI, from the sky dome for example.
As long as you just use a single value, like the percentage of occluded rays, and multiply it with the incoming light, you lose most of the directionality.

If you find the openings and use something like bent normals to sample the ambient light, you start to see different results.

It just came to my mind that bent normals were not used in real-time graphics for a really long time, even though they were presented in the same presentation as AO, for Pearl Harbor.

Some games have stored multidirectional AO, and thus objects could have multiple shadows from ambient light.
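For reference, a bent normal is essentially the normalized average of the unoccluded sample directions, so it leans toward the open part of the environment (the window, in the example above) and can be used in place of the geometric normal when sampling ambient light. A tiny 2D sketch with hand-picked directions, purely illustrative:

```python
import math

# Unoccluded sample directions, all leaning "up" toward an opening.
unoccluded = [(0.0, 1.0), (0.5, 0.5), (-0.5, 0.5)]

def bent_normal(dirs):
    # average the unoccluded directions and normalize the result
    sx = sum(d[0] for d in dirs)
    sy = sum(d[1] for d in dirs)
    length = math.hypot(sx, sy)
    return (sx / length, sy / length)

print(bent_normal(unoccluded))  # points straight up: (0.0, 1.0)
```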
 
As long as you just use a single value, like the percentage of occluded rays, and multiply it with the incoming light, you lose most of the directionality.

If you find the openings and use something like bent normals to sample the ambient light, you start to see different results.

It just came to my mind that bent normals were not used in real-time graphics for a really long time, even though they were presented in the same presentation as AO, for Pearl Harbor.

Some games have stored multidirectional AO, and thus objects could have multiple shadows from ambient light.

Exactly. It's not impossible that their implementation takes directionality into account. Dark Souls Remastered does this in screen space only and already achieves very nice results.

But my guess is that won't really be the case with this game. The simplest way to do this is to cast a bunch of short rays in multiple directions for each pixel (jittered and filtered in some way to improve coverage) and accumulate that into a uniform buffer later used by the deferred renderer to attenuate whatever their indirect lighting solution is. I expect this game to go that route and call it a day.
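That accumulate-and-filter loop might look something like the sketch below, with a plain exponential moving average standing in for a real temporal filter. Every name and constant here is made up for illustration.

```python
import random

random.seed(0)  # fixed seed for reproducibility

def frame_occlusion(true_occlusion, rays_per_pixel=4):
    # Noisy per-frame estimate from a handful of jittered rays:
    # each ray hits geometry with probability equal to the true occlusion.
    hits = sum(1 for _ in range(rays_per_pixel)
               if random.random() < true_occlusion)
    return hits / rays_per_pixel

def accumulate(buffer_value, new_sample, blend=0.1):
    # blend the new noisy sample into the persistent AO buffer
    return buffer_value * (1.0 - blend) + new_sample * blend

ao = 0.0
for _ in range(200):  # buffer converges toward the true occlusion (0.6)
    ao = accumulate(ao, frame_occlusion(0.6))
print(round(ao, 2))
```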
 
As long as you just use a single value, like the percentage of occluded rays, and multiply it with the incoming light, you lose most of the directionality.
This really comes down to how good your sampling is. Bent normals point toward the direction most of the lighting comes from (for uniform lighting), which you can use to calculate self-occlusion with fewer samples (in the example of the room with a window, these would be surface vectors pointing toward the window from inside the room), but the same results can be achieved via importance sampling, caching, more samples, etc. (in the case of RT, at least).
With screen-space techniques, bent normals (as a world-space hint) are, I guess, the only option for adding directionality when, say, the window I mentioned in the previous post is out of view.

If you find the openings and use something like bent normals to sample the ambient light, you start to see different results.
With RT there is no such problem of some stuff (the window) being out of the camera's view, since we trace the 3D scene, so you get the same directionality anyway without needing to prebake anything.
You can still do importance sampling or whatever else to improve quality without baking bent normals for the scene, with all the related complexity.

Unrelated to bent normals, I fully agree that accounting for scene lighting brings another layer of directionality, so RTGI >> RTAO.
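To illustrate the importance-sampling point with a standalone toy (not tied to any engine): estimating the hemisphere irradiance integral of cos(theta), whose exact value is pi, uniform direction sampling gives a noisy estimate, while cosine-weighted sampling (the usual importance-sampling choice for this integrand) is exact per sample.

```python
import math, random

random.seed(2)  # fixed seed for reproducibility

def uniform_estimate(n=2000):
    # Uniform hemisphere sampling: cos(theta) is uniform in [0, 1),
    # pdf = 1/(2*pi), so each sample contributes cos(theta) * 2*pi.
    return sum(random.random() * 2 * math.pi for _ in range(n)) / n

def cosine_weighted_estimate(n=2000):
    # Cosine-weighted sampling: pdf = cos(theta)/pi, so each sample
    # contributes cos(theta)/pdf = pi exactly (zero variance here).
    return sum(math.pi for _ in range(n)) / n

print(abs(uniform_estimate() - math.pi) < 0.3)           # True: close to pi, but noisy
print(abs(cosine_weighted_estimate() - math.pi) < 1e-9)  # True: exact for this integrand
```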
 