Next gen lighting technologies - voxelised, traced, and everything else *spawn*

What's the possibility of a hybrid approach to reflections? When building and terrain geometry is static, is it feasible to apply a high-quality cube map to a window, for example, then overlay ray-traced dynamic objects like animated models on top of that? I imagine it would be hard to get it to look right; you would need the ability for rays to ignore certain pixels based on geometry, and I have no idea if the performance gain would be significant enough to be worth it.

For planar surfaces like reflective floors, mirrors or pools, there is a method that gives perfect reflections, as good as raytraced ones, but using rasterization.
AFAIK this method, 'planar reflections', was used long before SSR was invented.
It makes use of the stencil buffer and requires rasterizing the whole scene again from another point of view, but limited to the pixels of the reflecting surface.
So it is more expensive than SSR, but cheaper than raytracing and looks just as good (though it is less general than both SSR and raytracing).
More details in, for example, the Unreal SDK.
Why BFV does not use this high-quality reflection technique is a mystery. (Or maybe it does indoors?)
It was already used in 3DMark 2001 (yes, 17 years ago).
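For illustration, a minimal sketch of the mirrored-camera math this technique relies on (my own example, not code from 3DMark or Unreal): build a reflection matrix for the mirror plane, mark the reflector's pixels in the stencil buffer, then re-render the scene with the mirrored view wherever the stencil test passes, flipping triangle winding because the reflection inverts handedness.

#include <glm/glm.hpp>

// Reflection about the plane dot(n, p) + d = 0 (n must be unit length):
// R = I - 2*n*n^T for the linear part, plus a -2*d*n translation.
// glm::mat4 is column-major; the 3x3 part is symmetric, and the last
// four scalars form the translation column.
glm::mat4 planeReflectionMatrix(const glm::vec3& n, float d)
{
    return glm::mat4(
        1.0f - 2.0f*n.x*n.x,       -2.0f*n.x*n.y,       -2.0f*n.x*n.z, 0.0f,
              -2.0f*n.y*n.x, 1.0f - 2.0f*n.y*n.y,       -2.0f*n.y*n.z, 0.0f,
              -2.0f*n.z*n.x,       -2.0f*n.z*n.y, 1.0f - 2.0f*n.z*n.z, 0.0f,
              -2.0f*d*n.x,         -2.0f*d*n.y,         -2.0f*d*n.z,   1.0f);
}

// Usage: glm::mat4 mirroredView = view * planeReflectionMatrix(n, d);
// then draw the scene with mirroredView, clipped to the mirror plane and
// restricted to the stencil-marked reflector pixels.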
 
What's the possibility of a hybrid approach to reflections? When building and terrain geometry is static, is it feasible to apply a high-quality cube map to a window, for example, then overlay ray-traced dynamic objects like animated models on top of that? I imagine it would be hard to get it to look right; you would need the ability for rays to ignore certain pixels based on geometry, and I have no idea if the performance gain would be significant enough to be worth it.

In addition to Voxilla's response, planar reflections require rendering the scene once for each reflection plane, so if you have two windows side by side, but both at a slightly different angle, you'd need to render once for each window.
See the recent Hitman game - it implements this method for planar reflections (the windows share the same plane). Half-Life 2 used it for water (just one plane); I guess it has been used even before that.

So, for perfect reflections on a sphere, you would need to rasterize the whole scene for each pixel of the sphere, because the normals differ everywhere.
Of course you could just render a cube map, so 'only' six renders of the scene, but then, although you can fetch along the correct normal direction, the ray origin is not at the surface. That's hard to notice on a convex object like a sphere, but on a torus it would fail pretty badly (at best it would miss the torus's self-reflections).
Current games 'solve' this with projection tricks, often requiring manual artist work, like placing a box together with the reflection probe; the box is used to 'fix' the projection a bit.
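As a rough sketch of that box trick (hypothetical names; it assumes the shading point sits inside the artist-placed proxy box): intersect the reflection vector with the box, then fetch the cube map along the direction from the probe center to the hit point, which approximately corrects the parallax error described above.

#include <glm/glm.hpp>

// Parallax-corrected cube map lookup direction via a proxy box.
// Assumes 'pos' lies inside the box and 'reflDir' has no zero components.
glm::vec3 boxProjectedDir(const glm::vec3& pos, const glm::vec3& reflDir,
                          const glm::vec3& boxMin, const glm::vec3& boxMax,
                          const glm::vec3& probePos)
{
    // Slab method: distances along reflDir to each pair of box planes,
    // then the nearest exit distance over the three axes.
    glm::vec3 t1 = (boxMax - pos) / reflDir;
    glm::vec3 t2 = (boxMin - pos) / reflDir;
    glm::vec3 tFar = glm::max(t1, t2);
    float t = glm::min(glm::min(tFar.x, tFar.y), tFar.z);
    glm::vec3 hit = pos + t * reflDir;   // intersection with the proxy box
    return hit - probePos;               // corrected cube map fetch direction
}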

So, cases where planar reflections can work at all are rare, and ray tracing quickly becomes faster if geometry is complex.


Your idea with the cube map is not bad, however. It is possible to store low-resolution cube maps (or another data structure covering just the visible hemisphere) at dense locations on the surface. Usually this is precomputed, and the low resolution limits the use case to rough materials. See the PS4 game The Order. (Personally, I work on doing this in realtime.)
This approach is good enough for most real-life materials! The exceptions are mainly water and man-made materials that show sharp reflections.

For sharp reflections, you would need to calculate one 4K cube map for every pixel on screen just to fetch a single texel :) (This data would also allow calculating full GI, of course, but we do not need such a high resolution for that.)
At this point it becomes clear that, whether we like it or not, ray tracing is absolutely necessary if we want progress. I've already criticized the way NV handles the problem with a black-box approach, but that's another story - we need to trace rays because it is the most efficient way to solve these problems correctly.

But RT is good for high-frequency reflections and slow for low-frequency diffuse reflections (because the latter require many rays and/or temporal filtering). So I agree about a hybrid approach being the way to go, but it's less about static vs. dynamic geometry and more about sharp vs. diffuse reflections.

BFV really is about sharp reflections only, so it's exactly what RTX is good for, and I hoped for better performance here. (It's not that bad either, but... will there be an RTX 2060 at all? It seems to make no sense.)
I'm pretty sure we can gain a lot with texture-space lighting, reusing results over multiple frames, but it will take years until we see this in game engines.
For now my hope goes towards reducing material complexity or using a lower LOD for raytracing, but I do not know if this is a bottleneck at all.
 
In addition to Voxilla's response, planar reflections require rendering the scene once for each reflection plane, so if you have two windows side by side, but both at a slightly different angle, you'd need to render once for each window.

Why would that be the case? It's the floor that is reflecting, not the windows.
(You can have reflections in windows too, which is probably what you mean.)
 
Why would that be the case? It's the floor that is reflecting, not the windows.
(You can have reflections in windows too, which is probably what you mean.)

Yeah, I was not talking about your video, just about the limitations of planar reflections and why they are no general solution. (They are only practical in 'all floors or the water lie on the same plane' or 'all windows on the house front share the same plane' situations.)
 
BFV really is about sharp reflections only, so it's exactly what RTX is good for, and I hoped for better performance here.
It's definitely not being applied intelligently. Ultra RTX adds reflections to the cables, which end up just glowing. They look better without reflections, and you could fake the shiny-shine in such cases. If the tracing were more localised, it'd perform better.

Does anyone know if the traced reflections are native res? Dropping to half res wouldn't be at all apparent and would save a lot of time.
 
...adding to the limitations of planar reflections: they would already break if you added a normal map to the floor.

It's definitely not being applied intelligently. Ultra RTX adds reflections to the cables, which end up just glowing. They look better without reflections, and you could fake the shiny-shine in such cases. If the tracing were more localised, it'd perform better.

Does anyone know if the traced reflections are native res? Dropping to half res wouldn't be at all apparent and would save a lot of time.

Yes, I saw reflections on the FPS gun and they look off - it seems they need to work on better integration with PBR (e.g. a reflection on a rough material looks too sharp).
Maybe the problem here is: the gun has a cylindrical shape, and the cables are thin. As a result, denoising becomes hard because there are few pixels with similar normals in the neighborhood.
Denoising must have issues when normals vary strongly within a small neighborhood - I'm no expert, but I assume it is one major limitation of the 'denoise instead of many rays' approach.
But I do not think removing reflections from some cables would help with performance; it still has to work for the whole screen in the worst case.

I've read they wanted to implement half-resolution reflections by release, but I don't know if they did yet. I assume so, however.
 
I wonder what types of games would benefit most from current ray tracing capabilities? I would be keen to see something like a new NHL game with reflections + GI, or possibly a car racing game with proper lighting, including day/night cycles and weather simulation. A car game could be a good candidate for an excellent DLSS implementation.
 
Yes, I saw reflections on the FPS gun and they look off - it seems they need to work on better integration with PBR (e.g. a reflection on a rough material looks too sharp).
Yeah. Deferred rendering should composite nicely. I don't know where the tracing fits into the pipeline. Someone linked to something from nVidia about it, I think, but I haven't had time to check it out.

Maybe the problem here is: the gun has a cylindrical shape, and the cables are thin. As a result, denoising becomes hard because there are few pixels with similar normals in the neighborhood.
Denoising must have issues when normals vary strongly within a small neighborhood - I'm no expert, but I assume it is one major limitation of the 'denoise instead of many rays' approach.
There shouldn't be denoising on the reflections.

But I do not think removing reflections from some cables would help with performance; it still has to work for the whole screen in the worst case.
I thought there was a clear correlation between the amount of reflections and framerate? Every reflected pixel needs a trace, so fewer of them will mean less tracing. In offline renderers, it's certainly the case that every reflective surface increases render times as you evaluate those secondary rays. There's no reason for it to be any different in a game; it's the same as evaluating a complex surface shader - the more pixels filling the screen with that shader, the bigger the performance hit.
 
BFV really is about sharp reflections only, so it's exactly what RTX is good for, and I hoped for better performance here. (It's not that bad either, but... will there be an RTX 2060 at all? It seems to make no sense.)
I'm pretty sure we can gain a lot with texture-space lighting, reusing results over multiple frames, but it will take years until we see this in game engines.
For now my hope goes towards reducing material complexity or using a lower LOD for raytracing, but I do not know if this is a bottleneck at all.

Texture space will not do much for reflections, unfortunately.
Sharp reflections are the easy case, and it's hard to see why they have such a massive impact on performance.
Nvidia really needs to rethink their GPU architecture if they are serious about raytracing in games.
It will be interesting to see what Intel comes up with, given their previous Larrabee raytracing experience.
 
There shouldn't be denoising on the reflections.
You still need it, because even sharp reflections require a narrow cone, not a single ray, which has zero area. Even for a perfect mirror reflection you'd need multiple rays, if only for AA, but for any rougher material you need many rays, or you accumulate temporally with denoising.
(Denoising thus gives a similar benefit to rendering at half resolution and upscaling. In fact both are somewhat the same idea; denoising is just much more advanced.)
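To make the 'narrow cone' point concrete, here is a hedged sketch (a simple uniform cone sample with an ad-hoc roughness-to-angle mapping, not a proper GGX importance sample, and not necessarily what BFV does): jitter the mirror direction inside a roughness-dependent cone, then average several such rays, or let temporal accumulation and denoising stand in for them.

#include <glm/glm.hpp>
#include <cmath>

// Jitter the mirror direction inside a cone whose half-angle grows with
// roughness. u1 and u2 are uniform random numbers in [0, 1).
glm::vec3 sampleReflectionCone(const glm::vec3& mirrorDir, float roughness,
                               float u1, float u2)
{
    float coneAngle = roughness * roughness * 0.5f;   // ad-hoc mapping
    float cosTheta = 1.0f - u1 * (1.0f - std::cos(coneAngle));
    float sinTheta = std::sqrt(1.0f - cosTheta * cosTheta);
    float phi = 6.2831853f * u2;
    // Orthonormal basis around the mirror direction.
    glm::vec3 up = std::fabs(mirrorDir.z) < 0.999f ? glm::vec3(0, 0, 1)
                                                   : glm::vec3(1, 0, 0);
    glm::vec3 t = glm::normalize(glm::cross(up, mirrorDir));
    glm::vec3 b = glm::cross(mirrorDir, t);
    return glm::normalize(sinTheta * std::cos(phi) * t +
                          sinTheta * std::sin(phi) * b +
                          cosTheta * mirrorDir);
}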

I thought there was a clear correlation between the amount of reflections and framerate? Every reflected pixel needs a trace, so fewer of them will mean less tracing. In offline renderers, it's certainly the case that every reflective surface increases render times as you evaluate those secondary rays. There's no reason for it to be any different in a game; it's the same as evaluating a complex surface shader - the more pixels filling the screen with that shader, the bigger the performance hit.
Yes, but there are a lot of different cases here, and it's difficult to differentiate them if we do not know exactly how things work.

For example, for an offline renderer, specular reflections are 'cheap' because they require a small number of rays, all going in a similar direction (small divergence). Diffuse rays are much more expensive because we need many more of them, in all directions. Both cases are just 'reflections', so it's easy to confuse the two.

For a game like BFV, specular reflection is 'expensive' because it's the only case where rays are used at all. But in the long run it makes no sense to say we can improve performance by using fewer materials with sharp reflections, and if we aim for fully ray-traced lighting like a path tracer does, it's actually the opposite. (Remember the impressive Brigade demo with the city scene and robot - all sharp reflections - super impressive for gamers because it was new, but with path tracing a diffuse scene would have been more expensive / caused more noise.)

Texture space will not do much for reflections, unfortunately.

No. Texture space can give a speedup comparable to that of hardware vs. software raytracing.
If you can reuse a result over ten frames, that's a 10x speedup. Additionally, you decouple shading rate from screen resolution and refresh rate, so there's no need to choose between 'frame rate' and 'quality' - it all becomes a question of acceptable temporal lag.
Sharp reflections are the hardest case for this, because cached reflections will be wrong on rotating objects. Reprojection trickery becomes necessary, but mostly, even sharp reflections can be cached, I think. (I can only confirm it works for my low-res rough materials, so I'm pretty unsure about that.)
For diffuse reflections it just works, and that's the real performance problem. Being able to cache and reuse irradiance at the surface - not in incomplete screenspace but in worldspace - means something like path tracing becomes just raytracing, because you no longer need paths; all the necessary information can be fetched from the first hit.
So this is the real upcoming revolution - more important than RT, but the engineering effort is huge. There are many problems to solve for this. (Curious if NV quickly extends its new Texture Space Shading feature to handle caching; currently it lasts only one frame.)

For now, one could reproject reflections from older frames in screenspace instead of retracing them each frame - but with denoising this is already happening to some degree.
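A minimal sketch of that screenspace reprojection step (assumed names, and it glosses over the extra care reflections need because they do not move with the surface): project the pixel's world position with last frame's view-projection matrix and reuse the old result if the history sample lands on screen.

#include <glm/glm.hpp>

// Map a world-space position to last frame's screen UV. Returns false on a
// history miss (behind or outside the old view), where we must retrace.
bool reprojectToPrevUV(const glm::vec3& worldPos,
                       const glm::mat4& prevViewProj,
                       glm::vec2& prevUV)
{
    glm::vec4 clip = prevViewProj * glm::vec4(worldPos, 1.0f);
    if (clip.w <= 0.0f)
        return false;                            // behind the old camera
    glm::vec2 ndc = glm::vec2(clip) / clip.w;    // perspective divide
    prevUV = ndc * 0.5f + 0.5f;                  // NDC [-1,1] -> UV [0,1]
    return prevUV.x >= 0.0f && prevUV.x <= 1.0f &&
           prevUV.y >= 0.0f && prevUV.y <= 1.0f;
}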
 
No. Texture space can give a speedup comparable to that of hardware vs. software raytracing.
If you can reuse a result over ten frames, that's a 10x speedup. Additionally, you decouple shading rate from screen resolution and refresh rate, so there's no need to choose between 'frame rate' and 'quality' - it all becomes a question of acceptable temporal lag.

Now I lost you. Reflections depend on the camera position, right? Just as much as normally projected scenery depends on the camera position.
In games we want a high framerate, as the camera moves all the time, usually at high speed.
If, for example, the floor acts as a perfect mirror, you want the floor and the scene above it rendered at the same framerate.
 
You still need it, because even sharp reflections require a narrow cone, not a single ray, which has zero area. Even for a perfect mirror reflection you'd need multiple rays, if only for AA, but for any rougher material you need many rays, or you accumulate temporally with denoising.
For optimal rendering, AA can be a post effect. Machine-learnt AA might be very capable. For rough materials I'd just apply a blur to the perfect one-ray reflections. It won't have 'DOF'-like variable clarity for near objects versus far, but the hardware's clearly not fast enough for that yet. An alternative to a blur, when you've got good denoising, might be a scatter effect, jittering near pixels before denoising. That should allow good-quality variable blur, and work for screen DOF too.
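A hedged sketch of that roughness-scaled blur idea (hypothetical names; 'fetch' stands in for sampling the one-ray reflection buffer): gather a few jittered taps around the pixel, with the gather radius scaled by roughness, so rougher surfaces get a wider, softer reflection.

#include <glm/glm.hpp>

// fetch(uv) samples the sharp one-ray reflection buffer; 'jitter' holds
// precomputed offsets in [-1,1]^2; the radius grows with surface roughness.
template <typename Fetch>
glm::vec3 roughnessBlur(Fetch fetch, glm::vec2 uv, float roughness,
                        const glm::vec2* jitter, int taps, float maxRadius)
{
    glm::vec3 sum(0.0f);
    float radius = roughness * maxRadius;
    for (int i = 0; i < taps; ++i)
        sum += fetch(uv + jitter[i] * radius);   // jittered gather
    return sum / float(taps);
}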

For example, for an offline renderer, specular reflections are 'cheap' because they require a small number of rays, all going in a similar direction (small divergence). Diffuse rays are much more expensive because we need many more of them, in all directions. Both cases are just 'reflections', so it's easy to confuse the two.
Indeed - all the different aspects people associate with shading don't really exist; there are just reflections off rough or smooth surfaces. 'Specular highlights' are bright reflections of area lights. We've just always simulated them with hacks, so people have come to associate them with a discrete feature.
 
Nvidia really needs to rethink their GPU architecture if they are serious about raytracing in games.
Unfortunately, I think the exact opposite will happen. Now they force the OTHER vendors AND game devs to follow their path, which might be right or wrong.

Now I lost you. Reflections depend on the camera position, right? Just as much as normally projected scenery depends on the camera position.
In games we want a high framerate, as the camera moves all the time, usually at high speed.
If, for example, the floor acts as a perfect mirror, you want the floor and the scene above it rendered at the same framerate.

Just think of baking everything to lightmaps as a background process, but updating them only 10 times per second while the scene display still runs at 60 Hz.
As most lighting does not change much over short periods of time, you can improve this by updating changing stuff more frequently than the rest, and using some blending to hide hard transitions. You would hardly notice a difference. (Offscreen data is already there if you rotate the camera quickly - you need that for proper GI anyway.)
Relighting every pixel every frame, as current games do, is nothing more than wasteful, terrible brute force.
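As a toy sketch of that schedule (hypothetical tile structure and relight function; the numbers are just the 10 Hz / 60 Hz example above): refresh one sixth of the lightmap tiles per 60 Hz frame, so every tile is relit at roughly 10 Hz, while tiles flagged as changing are relit every frame.

#include <algorithm>
#include <cstddef>
#include <vector>

struct Tile { bool dirty = false; /* a block of lightmap texels */ };

void relightTile(Tile& tile)   // hypothetical: retrace lighting for one tile
{
    tile.dirty = false;        // real code would dispatch rays / shade texels
}

void updateLightmap(std::vector<Tile>& tiles, std::size_t frameIndex)
{
    // Round-robin slice: 1/6 of the tiles per frame -> full refresh at ~10 Hz.
    std::size_t slice = (tiles.size() + 5) / 6;
    std::size_t begin = (frameIndex % 6) * slice;
    std::size_t end = std::min(begin + slice, tiles.size());
    for (std::size_t i = begin; i < end; ++i)
        relightTile(tiles[i]);
    // Fast path: tiles near moving lights or objects update every frame.
    for (Tile& tile : tiles)
        if (tile.dirty)
            relightTile(tile);
}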

For optimal rendering, AA can be a post effect. Machine-learnt AA might be very capable. For rough materials I'd just apply a blur to the perfect one-ray reflections. It won't have 'DOF'-like variable clarity for near objects versus far, but the hardware's clearly not fast enough for that yet. An alternative to a blur, when you've got good denoising, might be a scatter effect, jittering near pixels before denoising. That should allow good-quality variable blur, and work for screen DOF too.

Yes, that's what denoising is doing - accumulating additional samples from the screen neighborhood and previous frames, similar to TAA.
But there must be a catch, right? That was my question when I came to this forum - what's the catch? Does it really enable realtime path tracing, as the denoising papers seemingly claim? Can one ray per pixel be enough to replace 4000 of them?
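For reference, the basic accumulation step both TAA and these denoisers build on (a generic sketch, not any specific paper's filter): blend each new one-sample result into the reprojected history with an exponential moving average - the history weight is exactly the noise-vs-lag trade-off discussed below.

#include <glm/glm.hpp>

// Exponential moving average over frames. alpha = 0.9 behaves roughly like
// averaging the last ~10 frames, which is why a 4000-sample reference
// cannot be matched without accepting a huge temporal lag.
glm::vec3 accumulate(const glm::vec3& history, const glm::vec3& newSample,
                     float alpha = 0.9f)
{
    return alpha * history + (1.0f - alpha) * newSample;
}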

Now I think the main limitation is this:
If the normals and positions near a pixel vary, the neighbor information becomes very inaccurate or even useless. All we have is some history of previous frames, but we can't maintain 4000 frames, and approximating that with a running average means accepting a huge temporal lag.
While those methods work for flat diffuse surfaces, geometric detail and rich materials (requiring full hemisphere information per pixel) will never be solved by denoising alone, although those guys still achieve impressive progress. Even something as simple as normal mapping is a problem. Small cables are another.
(The comparison with rendering at lower resolution and upscaling also holds, because it has the same limitations.)

So, no photorealistic path tracing yet - we're still far from it. The V-Ray guys showed something using realtime RTX, but it does not look better than current games do.

However, denoising will also work much better in worldspace, because there is always a proper neighborhood at least - just another minor argument for why texture space is the future. (The real argument is that infinite bounces come for free.)
 
Just think of baking everything to lightmaps as a background process, but updating them only 10 times per second while the scene display still runs at 60 Hz.
As most lighting does not change much over short periods of time, you can improve this by updating changing stuff more frequently than the rest, and using some blending to hide hard transitions. You would hardly notice a difference. (Offscreen data is already there if you rotate the camera quickly - you need that for proper GI anyway.)
Relighting every pixel every frame, as current games do, is nothing more than wasteful, terrible brute force.

I perfectly understand the concept of texture space, and for baking lightmaps with slowly moving lights it works fine.
You can bake reflections at 10 Hz, but that will look terrible; it will be completely out of sync with the 60 Hz scenery that is reflected.
And I agree texture space as a cache is a great concept; it was already used in the Quake software renderer a long time ago.
 
I also think texture-space lighting with temporal accumulation and caching is the way to go, but for highly specular, mirror-like surfaces, I think games will unavoidably have to re-render them every frame. The best way to reuse past-frame data in those situations is screen-space reprojection, as done in the PICA PICA demo.
 
Yes, that's what denoising is doing - accumulating additional samples from the screen neighborhood and previous frames, similar to TAA.
But there must be a catch, right? That was my question when I came to this forum - what's the catch? Does it really enable realtime path tracing, as the denoising papers seemingly claim? Can one ray per pixel be enough to replace 4000 of them?
It can only go so far. Where it works well is with general lighting, which can be soft and inaccurate and still look good. So for me, I'd trace just the lighting and use it in deferred rendering, with bump and texture detail overlaid. You can use simplified geometry and far simpler shaders - basically colour and luminance. It won't have the fidelity of full-on tracing, but it'll be very fast for games.
 
You can bake reflections at 10 Hz, but that will look terrible; it will be completely out of sync with the 60 Hz scenery that is reflected.
but for highly specular, mirror-like surfaces, I think games will unavoidably have to re-render them every frame. The best way to reuse past-frame data in those situations is screen-space reprojection, as done in the PICA PICA demo.

You may be right, guys, but we do not have the power to do more if BFV's reflections already eat up a high-end GPU, considering specular reflections are the 'easiest' thing after shadows.
So what about things at a distance, or near fast-moving objects with heavy motion blur - isn't there a chance that stochastic updates at lower frequency would be mostly acceptable? And we can still reduce resolutions temporarily, reproject, and use a bag of tricks.
We can store stuff in different buffers - specular irradiance, diffuse irradiance, material textures - and mix it all at 60 Hz, so if some data lags, it might be more acceptable than an inconsistent frame rate.
Also, think of how acceptable compression artifacts are in movies streamed at bad quality - we want something like this for games too.

I'm really not sure myself, and it's not on my realistic todo list... :)
 
Nice. All the artifacts I can spot are due to messed-up geometry or missing screenspace info, but I do not notice any wrong lighting.

Both shadows and reflections stutter a lot. It's arguable whether it's an acceptable trade-off, but I can't say it doesn't bother me at all. They are interpolating every other frame initially, and 3 out of every 4 in the later part of the video. So it could be considered comparable to doing lighting at 30 Hz for 60 Hz rendering at first, and lighting at 15 Hz later. 10 Hz would be even more annoying.
But I imagine that if you separated specular reflections from diffuse ones, lowered the reflection resolution, and used screen-space reprojection to do temporal AA, the reflection stutters would be much less noticeable at a possibly similar performance cost.
For the shadow problem, it might also be possible to separate direct lighting from bounced lighting, so that direct shadows can be updated faster as well.
 