Next gen lighting technologies - voxelised, traced, and everything else *spawn*

RT can be used for dynamic objects and assets that have to be updated frequently or every frame. Having your whole scene and all assets fully ray traced every frame is quite ridiculous.
Totally disagree. Even just opening a door changes the lighting in both rooms. It's subtle, but enough that you instantly realize it's not real if it doesn't happen. We need dynamic GI even for mostly static scenes.
 
Totally disagree. Even just opening a door changes the lighting in both rooms. It's subtle, but enough that you instantly realize it's not real if it doesn't happen. We need dynamic GI even for mostly static scenes.
you do realise that this has been solved by using light probes..
It won't be perfect but it's not as if baking/pre-computing is trash...

Neither of those games approaches the quality you get with ray tracing.

Feel free to show me that "Quality you get with ray tracing" in a similar setting with the same performance
 
Quality costs performance.
Which is why RT isn't the solution yet, no? This is exactly what I'm saying. Devs aren't going: "Oh shit, RT cores! Let's scrap all baked lights and fully dynamically light our whole scenes and assets using RT! Something which isn't even done in offline rendering, where baking is still used to save rendering time." We will always have baked lights around in most games, especially for static objects.
 
you do realise that this has been solved by using light probes


Feel free to show me that "Quality you get with ray tracing" in a similar setting with the same performance.
Raytracing gives far better results at a far higher performance cost. It remains to be seen how well light probes etc. scale. I already argued for alternative solutions like light volumes (I referenced the SEGI demos), but in real games they hit a performance wall that the demos didn't show.

Where you mention studios don't gain cost reduction, but instead gain a quality improvement through RT, that's kinda the same thing. By using more and more hacks, they could get closer to RT quality using non-RT solutions (Toy Story put loads of light sources everywhere to simulate GI), but the cost and complexity snowballs. The same could be said of non-RT solutions in games, but eventually, when you want 'real' quality, RT offers the ideal solution, even beyond automated fakes like building lightmaps. If RT is attainable, it should be sought. At the moment we're still not getting decent comparisons of different strategies, mostly due to a lack of investment.
 
Raytracing gives far better results at a far higher performance cost. It remains to be seen how well light probes etc. scale. I already argued for alternative solutions like light volumes (I referenced the SEGI demos), but in real games they hit a performance wall that the demos didn't show.

Where you mention studios don't gain cost reduction, but instead gain a quality improvement through RT, that's kinda the same thing. By using more and more hacks, they could get closer to RT quality using non-RT solutions (Toy Story put loads of light sources everywhere to simulate GI), but the cost and complexity snowballs. The same could be said of non-RT solutions in games, but eventually, when you want 'real' quality, RT offers the ideal solution, even beyond automated fakes like building lightmaps. If RT is attainable, it should be sought. At the moment we're still not getting decent comparisons of different strategies, mostly due to a lack of investment.
The HW just isn't there yet (in real-time or offline). Sure, having RTRT everywhere, updated every frame, is the goal, but in the meantime (and for several years to come) hacks will still be commonplace (and in some cases a better solution, as they free up resources for other compute-intensive features like physics). It's all about balance (in VFX and games).
 
you do realise that this has been solved by using light probes
How do you mean this? Light probes are static, and even if they have dynamic updates they cannot capture something as simple as an opening door well enough to look like real life.

A huge move towards realism will happen, soon, but RT is just a visibility test. It always sounds a bit polarizing when you guys differentiate between a 'pre-' and 'post-raytracing era'. I don't think that's justified. RT alone is not a revolution, just a piece of progress. Any tool has its use...
 
Which is why RT isn't the solution yet, no? This is exactly what I'm saying. Devs aren't going: "Oh shit, RT cores! Let's scrap all baked lights and fully dynamically light our whole scenes and assets using RT! Something which isn't even done in offline rendering, where baking is still used to save rendering time." We will always have baked lights around in most games, especially for static objects.
In offline rendering you have control over exactly what happens in each shot, which is why you can use baking. In a dynamic game world you don't, which is why realtime computation is needed.

The HW just isn't there yet (in real-time or offline). Sure, having RTRT everywhere, updated every frame, is the goal, but in the meantime (and for several years to come) hacks will still be commonplace (and in some cases a better solution, as they free up resources for other compute-intensive features like physics). It's all about balance (in VFX and games).
Using ray tracing now doesn't mean discarding all rasterization techniques...

How do you mean this? Light probes are static, and even if they have dynamic updates they cannot capture something as simple as an opening door well enough to look like real life.

A huge move towards realism will happen, soon, but RT is just a visibility test. It always sounds a bit polarizing when you guys differentiate between a 'pre-' and 'post-raytracing era'. I don't think that's justified. RT alone is not a revolution, just a piece of progress. Any tool has its use...
We're in the hybrid raster+rt era.
 
We're in the hybrid raster+rt era.

I wouldn't even go that far. I'd say we only just started flirting with it for real. Whether it will turn into a healthy relationship in the near future, we can't be sure just yet.
 
We're in the hybrid raster+rt era.
But that's a term NV brought up to explain their intentions with current hardware. Personally I refuse to be classified by such an almost offtopic simplification.
Maybe some other people feel a bit offended by this too, because Turing has zero influence on their work. (This applies to all the work shown in recent topics here - one approach does not even use visibility at all, so neither raster nor RT.)
If we discuss realtime GI, there are many interesting topics: how to represent geometry and LOD, irradiance cache yes or no and where, reflect light or diffuse it... How to determine visibility is just one of many, and often the least interesting. Diffusion and antiradiosity approaches avoid it completely.
The term RT is so overhyped now for what it is.
 
I wouldn't even go that far. I'd say we only just started flirting with it for real. Whether it will turn into a healthy relationship in the near future, we can't be sure just yet.
It's the very beginning but I think it will work out very nicely.

But that's a term NV brought up to explain their intentions with current hardware. Personally I refuse to be classified by such an almost offtopic simplification.
Maybe some other people feel a bit offended by this too, because Turing has zero influence on their work. (This applies to all the work shown in recent topics here - one approach does not even use visibility at all, so neither raster nor RT.)
If we discuss realtime GI, there are many interesting topics: how to represent geometry and LOD, irradiance cache yes or no and where, reflect light or diffuse it... How to determine visibility is just one of many, and often the least interesting. Diffusion and antiradiosity approaches avoid it completely.
The term RT is so overhyped now for what it is.
We won't see full path tracing in big games for a while but I'm pretty sure RT will become a standard feature in one way or another.
 
I'm pretty sure RT will become a standard feature in one way or another.
'Standard feature' sounds good to me :)

We have seen things in this thread that completely destroy AAA graphics just because of better lighting, IMO.
How long it takes until RT becomes widespread depends mostly on the next-gen consoles. If they have an RT API, it will make things easy. If they have RT fixed-function HW as well, even easier. If they have neither, they surely have enough compute power to do it anyway (alternate geometry -> BIG speedup, at the cost of reduced detail, which rarely matters).
It seems to be coming much faster than I thought.

We won't see full path tracing in big games for a while
We may not even want this; all we want is quality that matches path tracing.

The Minecraft video is perfect to explain why I say no to 'classic' path tracing. I first got triggered because the guy mentioned 'path tracing with infinite bounces', which is impossible. But after reading his blog I saw he only sticks to the term path tracing because that was his starting point.
What he does is cache irradiance in the voxels. So he no longer needs to trace paths, just rays. At the first hit point of each secondary ray he gets the radiance by fetching the cached irradiance multiplied by the material. No need to trace any further rays for the diffuse term.

Classical path tracing needs to trace paths, and each ray segment adds one bounce. So for 5 bounces (the minimum for realistic interiors) you have five times the cost of the cached approach.

So the guy no longer does path tracing at all; he implemented the 'radiosity method' instead, which is a bit older than the path tracing algorithm. I do the same. It's faster but requires more memory for the caching.

(I omitted mentioning light sampling here because it's not relevant for the difference.)
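A minimal sketch of that difference (the 1D voxel "scene" and all names here are invented for illustration; this is not the Minecraft author's code): with a per-voxel irradiance cache, the diffuse term at the first secondary hit is a single fetch, while a classical path tracer keeps extending the path, one ray segment per bounce.

```python
import random

# Toy stand-ins, not a real renderer: a 1D "scene" of voxels,
# each holding precomputed (cached) irradiance.
CACHED_IRRADIANCE = {0: 0.2, 1: 0.5, 2: 0.9, 3: 0.4}
ALBEDO = 0.7  # diffuse material reflectance

def diffuse_via_cache(hit_voxel):
    """Radiosity-style gather: one secondary ray already found hit_voxel,
    so the diffuse term is just cached irradiance times the material albedo."""
    value = CACHED_IRRADIANCE[hit_voxel] * ALBEDO
    return value, 1  # (radiance estimate, ray segments traced)

def diffuse_via_path_tracing(start_voxel, bounces=5):
    """Classical path tracing: each bounce extends the path by one ray segment."""
    rays = 0
    throughput = 1.0
    radiance = 0.0
    voxel = start_voxel
    for _ in range(bounces):
        rays += 1                     # one more ray segment per bounce
        throughput *= ALBEDO
        voxel = random.choice(list(CACHED_IRRADIANCE))  # fake "next hit"
        radiance += throughput * 0.1  # fake direct light picked up at the hit
    return radiance, rays

_, rays_cached = diffuse_via_cache(2)
_, rays_path = diffuse_via_path_tracing(2)
print(rays_cached, rays_path)  # 1 vs 5: five times the tracing cost per sample
```

The point of the toy is only the ray count per sample: one fetch-terminated ray versus one segment per bounce.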


Now the likely question is: why isn't this used in offline rendering if it's so much faster?
Well, it is used this way, but the algorithm is much more complex because of the caching. Caching requires a global parametrization, which alone took me more than a year of work. So if you don't mind spending money on hardware, slow path tracing is easier, more flexible and more accurate.
Why more accurate? Because any caching can be done only at discrete sampling locations.
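To illustrate why discrete caching costs accuracy (a made-up 1D irradiance field, not real renderer data): any feature narrower than the cache spacing, say light leaking through a door gap, gets blurred or missed between sample locations.

```python
import math

# Toy 1D irradiance field with a narrow bright feature at x = 0.37
# (think light through a door gap). Purely illustrative numbers.
def true_irradiance(x):
    return math.exp(-50.0 * (x - 0.37) ** 2)

CACHE_SPACING = 0.25  # irradiance is only cached at multiples of this

def cached_irradiance(x):
    """Nearest-sample lookup: the cache only knows values at grid points."""
    nearest = round(x / CACHE_SPACING) * CACHE_SPACING
    return true_irradiance(nearest)

err = abs(true_irradiance(0.37) - cached_irradiance(0.37))
print(round(err, 2))  # large error: the narrow feature falls between cache points
```

Subdividing the cache (smaller spacing) shrinks this error at the cost of more memory, which is exactly the trade-off described above.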

I do not know if he subdivides the voxels for the caching, but if not, the results show that using the radiosity method as a fallback at the first bounce already delivers very high quality.
This is something I planned to do in the future; he already does it now, and I may eventually use RTX just for reflections instead.
I targeted 10cm samples for current-gen consoles, on arbitrary geometry, so no voxel limitations. Performance is not much worse than traditional deferred shading with shadow maps.
Others are working on similar things. A leap will happen with next gen, RT HW or not.

Back on the topic of RTX: notice that caching also reduces noise, for two reasons: 1. the information you get from cached irradiance is complete, not just a zero-area ray; 2. you can trace more rays instead of one long path.
So a denoiser like the one in the Quake demo has much better input. Likely you can reduce the kernel size and capture important high-frequency details like contact shadows.
You can also invest more performance in the specular term, which is becoming the main future challenge. Diffuse is more important, but I consider it solved already.
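Point 2 can be shown with a pure statistics toy (the ray budget is an assumed number; no renderer code): under a fixed ray budget, one-segment cached samples give five times as many independent estimates as five-segment paths, so the pixel estimate is noticeably less noisy.

```python
import random
import statistics

random.seed(42)

RAY_BUDGET = 5000  # total ray segments affordable per pixel (made-up number)

def noisy_sample():
    """One independent estimate of the signal (stand-in for one light path)."""
    return random.uniform(0.0, 1.0)

def estimate(num_samples):
    """Average num_samples independent estimates into one pixel value."""
    return statistics.fmean(noisy_sample() for _ in range(num_samples))

def pixel_noise(segments_per_sample, trials=200):
    """Std-dev of the pixel estimate when each sample costs that many segments."""
    samples = RAY_BUDGET // segments_per_sample
    return statistics.stdev(estimate(samples) for _ in range(trials))

noise_paths = pixel_noise(5)  # path tracing: 5 segments/sample -> 1000 samples
noise_cache = pixel_noise(1)  # cached gather: 1 segment/sample -> 5000 samples
print(noise_cache < noise_paths)  # True: more samples per budget, less noise
```

The expected improvement is roughly sqrt(5), which is why a denoiser downstream gets much cleaner input.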
 
Have you seen the way Quantum Break runs? It's a performance hog even with the scaling technique.
I wonder why so many complained about performance. I played it after release, on an ancient i7-930 CPU with a Fury X GPU. It was smooth as butter at 1080p, although I'm sensitive to low FPS. But I did not mess with the settings; I liked the smooth upscaling.
I remember the lab scenes the most: white and orange stuff, time-frozen scientists in hazard suits. The most photorealistic stuff I'd seen in a game at that time. But the outdoors looked very average to me, even a bit bad.
 
I wonder why so many complained about performance. I played it after release, on an ancient i7-930 CPU with a Fury X GPU. It was smooth as butter at 1080p, although I'm sensitive to low FPS. But I did not mess with the settings; I liked the smooth upscaling.
Without scaling fps tanks massively.
 
you do realise that this has been solved by using light probes..
Ah ok, now I see the video - it wasn't there before. It's nice, but this can never become realistic. Fakery and tricks; this can no longer progress much further. We can do better, but it requires heavy changes, so it will take time of course.
Are you more impressed by this (or any other AAA state of the art), or by the Minecraft video? It's surely subjective, and a matter of assets as well, but soon all of this will look like PacMan :)
 
Of all the uses of RT, shadows seem the most promising in the short term to me. They might not be much more demanding than the already costly shadow maps we use today, or in the best case, actually cheaper for similar visual quality.
 