Last year, NVIDIA proposed RTXGI [MGNM19], a new approach that exploits its ray-tracing hardware by discretizing the spatial distribution of the irradiance function. Compared with common probe-based GI [McA15], its main contribution is the additional use of depth information and Variance Shadow Maps (VSM) [DL06] to prevent the light-leaking artifacts that arise from discretizing irradiance. However, its reconstruction of fine detail at real-time frame rates is not optimal. Moreover, light leaking can still appear around very thin objects, and the method depends heavily on RTX-accelerated hardware, which limits its applicability.
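The VSM-style depth test that probe-based GI uses to suppress leaking can be sketched as follows. This is an illustrative reconstruction, not code from the paper: each probe stores the mean and mean-squared distance to geometry per direction, and a one-sided Chebyshev bound attenuates the contribution of probes that are likely occluded.

```python
# Hedged sketch of a VSM-style (Chebyshev) visibility weight used to
# suppress light leaking in probe-based GI. Names are illustrative.

def chebyshev_visibility(mean_dist, mean_sq_dist, sample_dist):
    """Upper bound on the probability that the probe sees the shaded point.

    mean_dist:    E[r]   stored in the probe's depth data
    mean_sq_dist: E[r^2] stored alongside it
    sample_dist:  distance from the probe to the shaded point
    """
    if sample_dist <= mean_dist:
        return 1.0  # closer than the average occluder: treat as fully visible
    variance = max(mean_sq_dist - mean_dist * mean_dist, 1e-6)
    d = sample_dist - mean_dist
    return variance / (variance + d * d)  # one-sided Chebyshev bound
```

A point closer than the mean occluder distance gets full weight; beyond it, the weight falls off quickly when the stored variance is small, which is what starves leaked light from probes behind walls.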
On that basis, we realised that SDF [Har96] can be used to simplify the scene representation for low-frequency global illumination such as diffuse GI. An SDF is a scalar field over space that gives the distance from a point to the nearest surface in the scene: the value is positive if the point lies outside the nearest surface and negative if it lies inside, producing a compact representation of the scene's geometry.
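The sign convention above can be made concrete with the simplest primitive, a sphere, and the standard trick of combining primitives by taking the minimum (set union). This is a generic SDF sketch, not the paper's implementation:

```python
import math

def sphere_sdf(p, center, radius):
    """Signed distance from point p to a sphere: positive outside,
    negative inside, zero exactly on the surface."""
    dx, dy, dz = (p[0] - center[0], p[1] - center[1], p[2] - center[2])
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius

def scene_sdf(p):
    # A scene SDF is the min over its primitives (union of shapes).
    return min(sphere_sdf(p, (0.0, 0.0, 0.0), 1.0),
               sphere_sdf(p, (3.0, 0.0, 0.0), 0.5))
```

For example, `sphere_sdf((2, 0, 0), (0, 0, 0), 1.0)` returns `1.0` (one unit outside the surface), while the sphere's center returns `-1.0`.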
Inspired by RTXGI, we propose a novel approach, SDFDDGI, which overcomes the aforementioned limitations and has the following advantages:
• It does not need any precomputations.
• It can manage both dynamic geometry and dynamic lighting, as well as animations and skylight.
• It provides interframe stability and low delay response for dynamic changes.
• It completely eradicates light leaking problems.
• Our technique is not limited to specific hardware; it can also run on lower-end hardware.
Our approach runs in less than 5 ms per frame on a GTX 970M at the lowest acceptable quality, and within 1 ms per frame on an RTX 2080Ti.
However, our method still has room for improvement. For example, because only surfaces whose normals face the camera are visible, we could importance-sample around the camera direction, according to the relative placement of the camera and the probes, in order to further stabilize global illumination. Our research also focuses on dynamic GI, so for specular GI we still have to rely on a hybrid approach using other methods such as SSR or ray tracing, though layering SSR on top adds no extra cost to achieve a diffuse-specular path. Lastly, our approach represents the scene with simplified SDF primitives, which until now we have authored manually, and this requires an enormous amount of work for large and complex scenes. In the future, it would be necessary to research automating this process.
Budget
Finally, the budget… I’ll tackle the “production” budget below, but hope this one is self-explanatory. If a new technique needs some memory or CPU/GPU cycles, they need to be allocated.
One common misconception is that 30 ms counts as a "real-time technique". It can, but only if it is literally the only thing rendered, and only in a 30 fps game.
For VR you might have to deal with budgets of sub-10 ms to render two eye views for the whole frame! This means that a lighting technique that costs 1 ms might already be way too expensive!
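The arithmetic behind those claims is worth spelling out. A quick sketch (the percentages are just the post's numbers worked through):

```python
def frame_budget_ms(fps):
    """Total wall-clock time available per frame at a given frame rate."""
    return 1000.0 / fps

# A 30 fps game has ~33.3 ms per frame for EVERYTHING: game logic,
# physics, audio, and all of rendering. A 90 Hz VR title has ~11.1 ms,
# and must render two eye views inside that window, so a single 1 ms
# lighting pass already eats roughly 9% of the whole frame.
budget_30fps = frame_budget_ms(30)   # ~33.33 ms
budget_90hz = frame_budget_ms(90)    # ~11.11 ms
lighting_share = 1.0 / budget_90hz   # ~0.09, i.e. ~9% of a VR frame
```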
Similarly with RAM – all the costs need to be considered holistically, and usually games already use all available memory. Thus any new component means sacrificing something else. Is feature X worth reducing the texture-streaming budget and risking texture-streaming glitches?
Finally, another “silent” budget is disk space. This isn’t discussed very often, but with modern video game pipelines we are at a point where games hardly fit on Blu-ray discs (on God of War, this was a bigger challenge than fitting in memory or hitting performance!).
Similarly with patches that take 30 GB – it’s kind of ridiculous, and usually a sign of an asset-packaging system that didn’t consider patching properly… But anyway, a new precomputed GI solution that uses “only” maybe 50 MB per level chunk “suddenly” scales to 15 GB on disk for a whole 30 h game, and that most likely will not be acceptable!
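The scale-up is easy to reproduce. Assuming (my number, chosen to match the post's total) roughly ten level chunks per hour of gameplay:

```python
def gi_footprint_gb(mb_per_chunk, chunks_per_hour, hours):
    """Total on-disk footprint of per-chunk precomputed GI data, in GB.

    chunks_per_hour is a hypothetical pacing assumption, not a figure
    from the post.
    """
    return mb_per_chunk * chunks_per_hour * hours / 1024.0

# "only" 50 MB per chunk, ~10 chunks per hour, 30 h of game:
total = gi_footprint_gb(50, 10, 30)  # ~14.6 GB on disk
```

The per-chunk cost looks harmless in isolation; multiplied across a whole game it dominates the disk budget.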
Raytracing will only -add- complexity at the top end. It might make certain problems simpler, perhaps (note - right now people seem to underestimate how hard it is to make good RT shadows, or even worse RT reflections, which are truly hard...), but it will also make the overall effort to produce a AAA frame bigger, not smaller - like all technologies before it.
We'll see incredible hybrid techniques, and if we have today dozens of ways of doing shadows and combining signals to solve the rendering equation in real-time, we'll only grow these more complex - and wonderful, in the future.
Nice responses in the comment section as well: http://c0de517e.blogspot.com/2020/12/why-raytracing-wont-simplify-aaa-real.html
Another interesting blog post
There is no question that RT is the ultimate solution to the rendering equation, I think we have decades of research proving it.
My article is not against RT nor against progress - it is only dispelling an imho naive idea that RT will make our engines simpler, as most of the complexity has nothing to do with this or that technology.
Also, we could go into the technical reasons why RT stuff is not easy at all - not even "simple" things like pure RT shadows are actually simple (see the recent presentation on RT shadows in COD:CW for example - research.activision.com) - but this would be beside the point.
Great one, and a great reference to the RT shadows in COD - he was director of rendering R&D there until this year. And RT shadows are not simple, and it's worse for RT reflections, because real time is a matter of compromise and performance.
The comments provided much needed context in my opinion. The blog post is well written but came across as unfocused. The author’s key point seems to be that games will continue to get more complex because of the never ending drive to create bigger, more detailed and more beautiful game worlds. That much is a given.
The point about technology advancements and raytracing in particular didn’t hit home for me. It seems the author is saying that RT enabled engines will be more complex but is that due to the technology itself or simply due to the fact that future engines will be more complex anyway with or without RT?
I would have preferred if he separated the two concerns. i.e. all else equal, would the “same” game implemented using RT hacks instead of the old school lighting hacks be more or less complex from an engineering and art perspective?
High quality shadow mapping isn’t easy either. The relevant question is whether RT ultimately offers a simpler solution for the same result.
Like he said, engines continue to be complex even in offline rendering, and at least for the next decade, or more probably the next two decades, engines will continue to be very complex. Raytracing is simple in offline rendering, not in realtime, because of performance.
We are far from high-quality pathtracing, or something like Manuka (WETA Digital's offline rendering engine) with spectral raytracing.
Sure but my point was that RT doesn’t exist in a vacuum. When we say it’s complex/simple it should be framed in context of competing real-time methods.
I agree with the first part, the transition bit, but not the second one - the film industry has shown that production does indeed get simpler with raytracing. Artists now spend more time doing beautiful art than working with (or against?) the renderer. You can argue perhaps that film had already reached peak fidelity before raytracing (ILM's Pirates, WETA's apes), so there wasn't much margin for growth on that axis, while games still have plenty to improve before they look good. Still, I'm convinced raytracing gives a lot of time back to artists.
Yes, it is costly, and devs are already choosing what features to implement based on rendering hardware. Nothing different from past rendering technologies where tradeoffs had to be made. Raytracing is a costly feature, and other parts of realtime rendering did not peak at all; devs need to choose their fights.
That's a horrible graphic. I don't know why you keep showing it in threads.
I think it’s difficult to parse right now because developers are still ramping up support for RT in their engines and workflows. The COD shadows presentation spent quite a bit of time on BVH construction and required changes to the art pipeline.
Once they settle in it’ll be interesting to see the choices made for mid and late generation games. Having to layer multiple shadow mapping techniques just to end up with an inferior result certainly sounds like a less ideal option on paper.
In the case of Spiderman and other cross-gen titles that have to run on previous consoles, I suspect RT reflections are one of the least intrusive ways to harness the new consoles' performance while providing more image improvement than a simple resolution increase. They may prove a staple for the generation, as there is no fallback of fairly decent quality, unlike shadows, but it's also possible that developers decide that the rendering budget is better spent elsewhere. In the end it comes down to the quality that devs are shooting for. Insomniac didn't have to choose RT. They could've stuck with lower-quality or missing reflections and spent the performance elsewhere. Same goes for shadows. Some devs will decide that RT is required for the quality they want to achieve and sacrifice performance elsewhere. Either way it's almost certain that RT performance will improve dramatically throughout the generation as devs learn their way around the tech, same as they always do.
They could model foliage with capsule skeletons with a stochastic soft falloff. Good enough for shadows or GI, fine for reflections of distant plants. I doubt we will see RT in a Horizon game.
Some full-res foliage close around the camera to please those who look for reflections of reflections of reflections, or like to detect material mismatches, some missing particles, or other proofs of lazy devs and Jensen being wrong.
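The capsule-proxy idea mentioned above can be sketched quickly: a capsule is just a line segment with a radius, and its signed distance is cheap to evaluate. The soft-falloff mapping here is my own illustrative stand-in for the "stochastic soft fall off", not a known implementation.

```python
import math

def capsule_sdf(p, a, b, radius):
    """Signed distance from p to a capsule: segment a-b swept by radius.
    Assumes a != b."""
    ap = [p[i] - a[i] for i in range(3)]
    ab = [b[i] - a[i] for i in range(3)]
    # Project p onto the segment and clamp to its endpoints.
    t = sum(ap[i] * ab[i] for i in range(3)) / sum(ab[i] * ab[i] for i in range(3))
    t = max(0.0, min(1.0, t))
    closest = [a[i] + t * ab[i] for i in range(3)]
    d = math.sqrt(sum((p[i] - closest[i]) ** 2 for i in range(3)))
    return d - radius

def soft_occlusion(dist, falloff):
    """Map a signed distance to a [0,1] shadow/occlusion term with a
    soft edge of width `falloff` (illustrative falloff model)."""
    return max(0.0, min(1.0, dist / falloff))
```

A skeleton of a few dozen such capsules per plant is far cheaper to trace or sample than the full triangle mesh, which is why it can be "good enough" for shadows and GI on distant foliage.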
Some will not use RT at all (UE5, Demon's Souls).