Next gen lighting technologies - voxelised, traced, and everything else *spawn*

In this screenshot from the devs, I can see all the kitchenware is affected by RT

Is this from a gamer playing the game, or was it specially prepared by the devs to make you believe it?

You can see that the shadows are present in both screenshots. The cast shadows coincide perfectly in the two screenshots, so the shadows are rasterized, while the AO is added later by RT means (the devs say it is RT, but those are just words; could be lies, could be truth).

....

So in the scene with the two characters, I guess they didn't want to use one rasterized point-light shadow with six depth buffers. They used two rasterized point lights with one depth buffer each: one depth buffer pointing toward the guy with the guitar, the other pointing at the woman, with the two point lights placed at the same light source. It was (for some reason) too expensive for them to use even a single point light with a cube depth map to rasterize the shadows. RT has nothing to do with the cast shadows in that scene. It is a surprising decision, taking into account that the scene is very small and definitely any modern GPU can rasterize a single point light with six depth buffers.
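For reference, the six depth buffers of a cube shadow map correspond to the axis-aligned cube faces; here is a tiny sketch (my own illustrative convention, nothing from the game) of how the light-to-fragment direction's dominant axis picks the face to sample:

```python
def cube_face(d):
    """Pick which of the six cube-map faces a direction vector falls on.

    A point-light shadow cube map rasterizes six depth buffers, one per
    axis-aligned face; at lookup time the dominant axis of the
    light-to-fragment direction selects the face.
    Faces: 0 +X, 1 -X, 2 +Y, 3 -Y, 4 +Z, 5 -Z (illustrative convention).
    """
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return 0 if x > 0 else 1
    if ay >= az:
        return 2 if y > 0 else 3
    return 4 if z > 0 else 5
```

Two single-face "point lights" as described above would simply rasterize two of these six views and skip the rest.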


What I did not like in that image is the "Realistic Shadowing" label... Just because it uses "shaders" to compute the color doesn't mean it has anything to do with shadows. To be precise, the place the "Realistic Shadowing" label points to is not dark because it is in shadow; it would be the very same color without casting any shadows of any kind. It is dark because of its angle to the light: it is backfacing the light.
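The point can be demonstrated with nothing but the Lambert diffuse term; this sketch (assuming simple N·L shading, not the game's actual shader) shows a backfacing surface going black with no shadowing involved at all:

```python
def lambert(normal, to_light):
    # Diffuse intensity from the angle between the surface normal and
    # the light direction alone -- no shadow map or shadow ray involved.
    d = sum(n * l for n, l in zip(normal, to_light))
    return max(0.0, d)

# A surface facing the light is fully lit; one facing away is black,
# even though nothing anywhere is casting a shadow onto it.
```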
 
It's even crazier that I get attacked for exposing such obvious tricks...
By and large, whenever you point out what's wrong with some graphics, people who like the game/tech/company can get defensive. It's interesting seeing two shots side by side where one person feels there's a big difference and another feels it's not much. It's all subjectivity and expectations. My pointing out things wrong with Uncharted 4 was met with a fair bit of resistance! ;)

Just keep going. B3D is about discussing tech. It also doesn't matter if you're wrong or not as long as it's fair discussion.

As for the voxelighting, there's lots of bleed through on the back of the couch, and the cushions are lit but not occluded, so it's obvious that the light volumes lack fidelity. But then how far can the tech be pushed to improve that?
 
By and large, whenever you point out what's wrong with some graphics, people who like the game/tech/company can get defensive. It's interesting seeing two shots side by side where one person feels there's a big difference and another feels it's not much. It's all subjectivity and expectations. My pointing out things wrong with Uncharted 4 was met with a fair bit of resistance! ;)

Just keep going. B3D is about discussing tech. It also doesn't matter if you're wrong or not as long as it's fair discussion.

As for the voxelighting, there's lots of bleed through on the back of the couch, and the cushions are lit but not occluded, so it's obvious that the light volumes lack fidelity. But then how far can the tech be pushed to improve that?
You'd have to increase the voxel resolution but the memory requirements would grow exponentially.
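Strictly speaking, a dense grid grows cubically with resolution (each doubling costs 8x, which is exponential in the number of doublings); a back-of-envelope sketch assuming one byte per voxel:

```python
def dense_grid_bytes(res, bytes_per_voxel=1):
    # A dense res^3 grid: doubling the resolution multiplies memory by 8.
    return res ** 3 * bytes_per_voxel

for res in (128, 256, 512, 1024):
    print(res, dense_grid_bytes(res) // 2**20, "MiB")
```

At one byte per voxel, 512^3 is already 128 MiB and 1024^3 a full GiB, which is why brute-force resolution increases stop being an option quickly.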
 
This is why I don't reverse engineer screenshots any more.
I agree with your points, and I have to say I do not have the game, so the shots and videos are all I have; no idea if it has improved with patches, how many objects are still excluded, and whether the reasons are artistic or technical.
There is just the obvious difference between the close-up shot above, where even the fork casts AO, and the shot with the couple, where nothing on the table casts AO.
I do not criticize this; I plan to work on 'receiver only' objects myself too. Personally I think the game looks quite fine, and I tried to defend the devs myself in a discussion about the Epic store elsewhere.

As for the voxelighting, there's lots of bleed through on the back of the couch, and the cushions are lit but not occluded, so it's obvious that the light volumes lack fidelity. But then how far can the tech be pushed to improve that?
I don't know anything about it and did not try to understand how it works. Running the demo again, I see you're right: there is light bleeding through the cushions.
Obviously the voxelization is used for occlusion, and for the thin cushions the voxelization has holes. (Even with conservative rasterization hardware it is very complicated to get robust voxelizations; mostly people use some compromise that's good enough.)
There exist alternatives to rasterization-based voxelization. For example, if you have UVs for each object with uniform texel area, you can just add each texel with an atomic OR, and there is no need for rasterization. It would be similar to the splatting in Dreams.
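A minimal sketch of that texel-splatting idea, assuming each texel already carries a world-space position and using a Python set in place of a GPU atomic OR into a bitfield:

```python
def splat_voxelize(texel_positions, voxel_size):
    """Voxelize a surface from its texels instead of rasterizing it.

    Each texel of a uniform-area UV layout carries a world-space
    position; we mark the voxel containing it as occupied. On a GPU the
    mark would be an atomic OR into a bit grid; a set stands in here.
    """
    occupied = set()
    for x, y, z in texel_positions:
        occupied.add((int(x // voxel_size),
                      int(y // voxel_size),
                      int(z // voxel_size)))
    return occupied
```

Note the method only needs the texels to sample the surface densely enough relative to the voxel size, otherwise it produces the same kind of holes as thin-geometry rasterization.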
Voxels are also interesting if they are used only for occlusion. In this case they need only one bit, and there are also many compression options beyond octrees (utilizing the fact that many bricks of voxels are equal in shape to their neighbors). Efficient large worlds, streaming, etc...
The lighting can be stored in reflective shadow maps in this case, as CryEngine seems to do.
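The brick-sharing compression mentioned above might look like this sketch: a one-bit occupancy set grouped into bricks, with identical bit patterns stored once in a palette and referenced by index (illustrative only, not any shipping format):

```python
from collections import defaultdict

def compress_bricks(occupied, brick=4):
    """One-bit occupancy grid compressed by sharing identical bricks.

    Voxels are grouped into brick^3 blocks; blocks with identical bit
    patterns (flat floors, repeated walls, ...) are stored once and
    referenced by index -- compression beyond plain octrees.
    `occupied` is a set of (x, y, z) voxel coordinates.
    """
    blocks = defaultdict(set)
    for x, y, z in occupied:
        key = (x // brick, y // brick, z // brick)
        blocks[key].add((x % brick, y % brick, z % brick))
    palette = {}   # bit pattern -> palette index
    index = {}     # brick position -> palette index
    for key, bits in blocks.items():
        pattern = frozenset(bits)
        index[key] = palette.setdefault(pattern, len(palette))
    return palette, index
```

Two bricks with the same local occupancy collapse to one palette entry, so repetitive architecture compresses heavily while the index stays tiny.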

The common argument for regular grids / voxels is linear memory access: with ray traversal you check neighbouring voxels one after another, so it's a good access pattern. In contrast, traditional RT with a tree does incoherent access, but less of it.
With multi-level grids (mip maps) you can also take larger and therefore fewer steps through empty space, which is key to the success of voxel and SDF techniques. And of course you can put regular triangles inside your voxels, and at some distance you can fall back to the voxels as an approximation of the triangles. (This works well for secondary rays.)
With increasing memory amounts, voxels become more interesting. Even RTX could use regular grids instead of a BVH under the hood.
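A one-level, 1D sketch of the empty-space skipping idea: where the coarser mip cell is empty, the ray jumps over the whole cell instead of stepping voxel by voxel (the real thing would be 3D with several mip levels):

```python
def march(fine, mip, start=0):
    """Ray-march a 1D binary grid, skipping empty space via one mip level.

    fine: occupancy bits; mip[i] covers fine cells 2i and 2i+1. When a
    coarse cell is empty, the ray jumps past the whole cell in one step,
    which is the essence of multi-level grid / SDF empty-space skipping.
    """
    i, steps = start, 0
    while i < len(fine):
        steps += 1
        if mip[i // 2] == 0:
            i = (i // 2 + 1) * 2   # whole coarse cell empty: skip it
        elif fine[i]:
            return i, steps        # hit
        else:
            i += 1                 # near geometry: single fine step
    return None, steps

fine = [0] * 8 + [1] + [0] * 7
mip = [1 if fine[2 * i] or fine[2 * i + 1] else 0
       for i in range(len(fine) // 2)]
```

Here the hit at cell 8 is found in 5 steps instead of the 9 a plain fine-grid march would take; with more mip levels the saving compounds.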

Personally I like neither voxels nor RTX - but I could be wrong about both :)
 
You'd have to increase the voxel resolution but the memory requirements would grow exponentially.
Not necessarily; e.g. in this paper: http://jcgt.org/published/0006/02/01/paper-lowres.pdf
They get Sponza at 16,000^3 voxels down to 42 MB.
Of course you likely want to decompress this partially, but you could do that even within a compute shader: decompress to LDS and process all potentially intersecting rays. Rigid transforms are still possible.
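A quick sanity check of what that figure implies, assuming a 1-bit-per-voxel dense grid as the uncompressed baseline:

```python
res = 16_000
raw_bits = res ** 3                    # one occupancy bit per voxel
raw_mib = raw_bits / 8 / 2**20         # uncompressed size in MiB
ratio = raw_mib / 42                   # vs the paper's 42 MB figure
print(f"raw: {raw_mib:,.0f} MiB, compression vs 42 MB: {ratio:,.0f}x")
```

Roughly 477 GiB of raw occupancy bits squeezed to 42 MB is a compression ratio on the order of 10,000x, which only works because almost all of that volume is empty or repetitive.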
 
You'd have to increase the voxel resolution but the memory requirements would grow exponentially.
Only if you brute-force the solution. There's plenty of opportunity to find clever solutions, the same as finding clever ways to accelerate limited RT performance, e.g. sampling noisy images and denoising. Before denoising, you'd have said that to get good lighting quality without noise, ray counts would need to increase exponentially. But by adding a denoising process, ray counts can be kept down to something workable. The more devs play with voxel lights, the more workarounds they'll find.

One thing I'm wondering, not being involved in any of this research whatsoever, is whether surface normals can be used to determine when a surface is facing away from a voxel light and occlude it? But there are far, far cleverer and more knowledgeable people than me actually experimenting with this stuff. We can only hope that all solutions get the necessary attention so we get the best possible in our games.
 
One thing I'm wondering, not being involved in any of this research whatsoever, is whether surface normals can be used to determine when a surface is facing away from a voxel light and occlude it?
Not sure if I get you, but there is the idea of 'anisotropic' voxels, which store material properties for each of their six faces and so allow for back-face culling, for example. NV's initial voxel GI solution had this feature, IIRC.
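A sketch of such an anisotropic-voxel lookup, assuming one value per face stored in (+x, -x, +y, -y, +z, -z) order: each axis selects the face oriented toward the ray, so faces pointing away contribute nothing, giving back-face culling for free:

```python
def sample_aniso(faces, ray_dir):
    # faces: per-cube-face values in (+x, -x, +y, -y, +z, -z) order.
    # A ray travelling toward +x sees the voxel's -x face, so each axis
    # picks the face opposite the ray direction's sign and weights it by
    # that direction component; faces pointing away contribute 0.
    x, y, z = ray_dir
    return (abs(x) * (faces[1] if x > 0 else faces[0]) +
            abs(y) * (faces[3] if y > 0 else faces[2]) +
            abs(z) * (faces[5] if z > 0 else faces[4]))
```

With a bright +x face and dark everything else, a ray approaching from +x sees full intensity while a ray approaching from -x sees none.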
 
Not necessarily; e.g. in this paper: http://jcgt.org/published/0006/02/01/paper-lowres.pdf
They get Sponza at 16,000^3 voxels down to 42 MB.
Of course you likely want to decompress this partially, but you could do that even within a compute shader: decompress to LDS and process all potentially intersecting rays. Rigid transforms are still possible.
Looks good, but even at such a high resolution the scene is still blocky.

Only if you brute-force the solution. There's plenty of opportunity to find clever solutions, the same as finding clever ways to accelerate limited RT performance, e.g. sampling noisy images and denoising. Before denoising, you'd have said that to get good lighting quality without noise, ray counts would need to increase exponentially. But by adding a denoising process, ray counts can be kept down to something workable. The more devs play with voxel lights, the more workarounds they'll find.

One thing I'm wondering, not being involved in any of this research whatsoever, is whether surface normals can be used to determine when a surface is facing away from a voxel light and occlude it? But there are far, far cleverer and more knowledgeable people than me actually experimenting with this stuff. We can only hope that all solutions get the necessary attention so we get the best possible in our games.
Even high-quality voxel representations are limited. They have their place, but they're no replacement for RT.

Then again, it's not like people couldn't come up with optimizations in their usage of fixed-function hardware. It was only relatively recently that devs discovered post-processing effects could run significantly faster by having a giant triangle with skewed UVs cover the screen instead of a quad.
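That trick generates the triangle's clip-space positions from the vertex index alone; a sketch of the standard construction (written in Python for illustration rather than as shader code):

```python
def fullscreen_triangle_vertex(vertex_id):
    # Clip-space position generated from the vertex index alone (the
    # shader equivalent uses gl_VertexID with no vertex buffer bound).
    # The three vertices (-1,-1), (3,-1), (-1,3) form one oversized
    # triangle whose interior covers the whole [-1, 1] screen square,
    # avoiding the wasted helper lanes along a quad's diagonal seam.
    x = -1.0 + 4.0 * (vertex_id & 1)
    y = -1.0 + 4.0 * ((vertex_id >> 1) & 1)
    return x, y
```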
 
Not necessarily; e.g. in this paper: http://jcgt.org/published/0006/02/01/paper-lowres.pdf
They get Sponza at 16,000^3 voxels down to 42 MB.
Of course you likely want to decompress this partially, but you could do that even within a compute shader: decompress to LDS and process all potentially intersecting rays. Rigid transforms are still possible.

Even this level of compression is nowhere near adequate. Sponza is a single room; game developers want San Andreas. There's a massive scalability gulf here.
 
Even this level of compression is nowhere near adequate. Sponza is a single room; game developers want San Andreas. There's a massive scalability gulf here.

In BFV, objects that are a little too far away are ignored for reflections, so I don't think RTX can render a whole San Andreas either. Taking into account the tweet of those devs saying "reflections are for free", I suspect RTX cannot render AO much farther than a Sponza-sized room.

A good developer could mix various techniques to achieve better results.

This guy says DXR doesn't support multiple RTX cards. As he explains, two RTX cards can be used for rasterization, compute, etc., but only one of the two can use its RT cores for ray tracing. I don't know what the current situation is. Does anybody know?

(time offset in the link)
 
This guy says DXR doesn't support multiple RTX cards. As he explains, two RTX cards can be used for rasterization, compute, etc., but only one of the two can use its RT cores for ray tracing. I don't know what the current situation is. Does anybody know?

(time offset in the link)
Actually no, he's not saying that. He's saying that the game doesn't support SLI with DirectX 12 at all, nothing to do with DXR (and quick googling reveals this to be true, with people using driver hacks to enable some sort of SLI support).
 
Even this level of compression is nowhere near adequate. Sponza is a single room; game developers want San Andreas. There's a massive scalability gulf here.
At 16K^3 the voxel resolution is higher than the texture resolution, I guess. You don't need that much. Check the paper for the numbers on the very complex power plant scene. For something like GTA you would reduce resolution with distance.
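One way to see why distance-based resolution keeps this tractable: a clipmap-style cascade of fixed-size rings whose voxel size doubles per ring covers an exponentially larger world for linearly more memory (ring_res and the base extent are made-up numbers for illustration):

```python
def clipmap_voxels(rings, ring_res=64):
    # Each ring is a fixed ring_res^3 grid, so total voxel count (and
    # memory) grows linearly in the number of rings.
    return rings * ring_res ** 3

def covered_extent(rings, base=16.0):
    # World radius covered by the outermost ring, doubling per ring
    # (base metres assumed for the innermost): exponential reach.
    return base * 2 ** (rings - 1)

for r in range(1, 6):
    print(r, "rings:", clipmap_voxels(r), "voxels,",
          covered_extent(r), "m radius")
```

Five rings of 64^3 reach 256 m at roughly 1.3 million voxels, where a single dense grid at the innermost resolution over the same extent would need billions.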

An improved version of the OptiX RTRT demo posted a few pages ago:
Some artist could become famous by finally giving us a new model with proper PBR textures, haha :)
 
Actually no, he's not saying that. He's saying that the game doesn't support SLI with DirectX 12 at all, nothing to do with DXR (and quick googling reveals this to be true, with people using driver hacks to enable some sort of SLI support).

He says exactly what I said - DX12 doesn't support SLI, but DXR requires DX12, so the limitation is imposed not by the game but by the API.

I don't think RTX works for mGPU. Googling it is not giving me a yes for an answer -

https://forums.geforce.com/default/topic/1098263/sli/sli-with-metro-exodus-/7/
 
He says exactly what I said - DX12 doesn't support SLI, but DXR requires DX12, so the limitation is imposed not by the game but by the API.
Semantically, it's very different. What you said implies rasterisation and compute in DX12 work in SLI. It's not RTX that doesn't work in SLI, but DirectX 12.
 
Semantically, it's very different. What you said implies rasterisation and compute in DX12 work in SLI. It's not RTX that doesn't work in SLI, but DirectX 12.

I wanted to point out that the hardware-accelerated RT part of two RTX cards is mutilated. The second RTX card is intentionally beheaded. If the same game uses DX11, the two cards can use everything, compute and graphics, except the RT cores. So we pay for the 20 series but get the 10 series.

This is my new and updated point of view - mGPU hardware-accelerated RT is mutilated by the API, not by the game.
 
I wanted to point out that the hardware-accelerated RT part of two RTX cards is mutilated. The second RTX card is intentionally beheaded. If the same game uses DX11, the two cards can use everything, compute and graphics, except the RT cores. So we pay for the 20 series but get the 10 series.

This is my new and updated point of view - mGPU hardware-accelerated RT is mutilated by the API, not by the game.
Mutilated and beheaded are really strong and negative words. You mean 'mGPU is currently broken on DX12'? Presumably it always has been, so really this is nothing new, nor pertinent to RTX.

Whatever the reality is, it needs to be discussed with clarity and reference to good sources, and without hyperbolic language.
 
I wanted to point out that the hardware-accelerated RT part of two RTX cards is mutilated. The second RTX card is intentionally beheaded. If the same game uses DX11, the two cards can use everything, compute and graphics, except the RT cores. So we pay for the 20 series but get the 10 series.

This is my new and updated point of view - mGPU hardware-accelerated RT is mutilated by the API, not by the game.
mGPU in DX12 requires game developers to integrate it. It can no longer be something provided by Nvidia/AMD in drivers the way SLI and Crossfire were. You're claiming RTX on mGPU is broken because mGPU in DX12 is broken. Well, of course it is when these games can't even address the second GPU in DX12. But the RT cores are not intentionally beheaded.

We have no idea if RT cores can be utilized on both GPUs in an mGPU configuration on DX12, because there are fewer than a handful of games supporting DX12 mGPU, and they're definitely not BFV or Metro Exodus. So unless you have some reliable information that RT cores are definitely not supported in mGPU on DX12, we simply don't know yet. If it is possible, then it's likely something both Nvidia and the game developers need to code in for it to function.
 
mGPU in DX12 requires game developers to integrate it. It can no longer be something provided by Nvidia/AMD in drivers the way SLI and Crossfire were. You're claiming RTX on mGPU is broken because mGPU in DX12 is broken. Well, of course it is when these games can't even address the second GPU in DX12. But the RT cores are not intentionally beheaded.

Just a quick correction, in case it was just incorrectly worded.

DX12 can access all GPUs and, in theory, all resources on all GPUs.

It's just that driver-forced multi-GPU isn't a thing in DX12; multi-GPU support has to be implemented at the application level, by the application.

So in theory, if a game developer wished to access RTX on both cards, they could do so, unless something in DXR or the hardware driver for the RTX cards prevents it.

But yes, there's no toggling on mGPU in the drivers for DX12 or DXR and just having it auto-magically work.

Regards,
SB
 