Game development presentations - a useful reference

https://80.lv/articles/how-naughty-dog-created-the-immersive-world-of-the-last-of-us-part-ii/

EDIT: Some high-level two-page presentations on TLOU2 from SIGGRAPH 2020:
https://dl.acm.org/doi/10.1145/3388767.3407393

Volumetric Effects of The Last of Us: Part Two

https://dl.acm.org/doi/10.1145/3388767.3407358

Lighting Technology of The Last of Us Part II

https://dl.acm.org/doi/10.1145/3388767.3407359

Low-level Optimizations in The Last of Us Part II

https://dl.acm.org/doi/10.1145/3388767.3407349

GPU Driven Effects of The Last of Us: Part Two



 
https://arxiv.org/pdf/2007.14394

Signed Distance Fields Dynamic Diffuse Global Illumination (SDFDDGI), another alternative GI method. Some advantages: it is noise-free, with no light leaking at all, and it can be used in large and complex scenes or open worlds too. I like the contact GI inspired by Ground Truth Ambient Occlusion.

Last year, NVIDIA proposed a new approach RTXGI [MGNM19] using its ray tracing accelerated hardware, by means of discretizing the spatial distribution of the irradiance function. Compared to common probe-based GI [McA15], its main contribution is the use of depth information and Variance Shadow Maps (VSM) [DL06] as well, in order to prevent light leaking artifacts that arise from the discretization of irradiance. However, its effect on GI of details at real-time frame rates is not optimal. Besides, light leaking artifacts can also appear with very thin objects and it depends severely on RTX-accelerated hardware, which affects its use extent.
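
Not from the paper, but to make the leak-prevention idea concrete: a DDGI/RTXGI-style probe stores mean and mean-squared distance per direction, and a VSM-style Chebyshev test attenuates probes that are occluded from the shaded point. A minimal sketch, with illustrative names:

```cpp
// Sketch of a VSM-style Chebyshev visibility test, as used by
// DDGI/RTXGI-class probes to attenuate contributions from probes that
// are occluded with respect to the shaded point. Illustrative names,
// not the paper's code.
float chebyshevVisibility(float meanDepth,   // E[d]   stored by the probe
                          float meanDepthSq, // E[d^2] stored by the probe
                          float d)           // probe-to-shaded-point distance
{
    if (d <= meanDepth)
        return 1.0f; // closer than the average occluder: treat as visible
    float variance = meanDepthSq - meanDepth * meanDepth;
    float delta    = d - meanDepth;
    // Chebyshev's inequality gives an upper bound on P(depth >= d).
    return variance / (variance + delta * delta);
}
```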

On that basis, we realised that SDF [Har96] can be used to simplify the scene representation for low-frequency global illumination like diffuse GI. SDF is a scalar field in the space domain, which represents the distance from a point in space to the nearest surface in the scene. A positive value is assigned if the point is in the outer region of the nearest surface and negative if it is inside, thus producing a compact representation of the geometry information of a scene.
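
To make that definition concrete, here is a minimal sketch (mine, not the paper's code) of a sphere SDF and a scene built as the union, i.e. the min, of primitive distances:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float length3(const Vec3& v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

// Signed distance to a sphere: positive outside, negative inside, zero on the surface.
static float sdSphere(const Vec3& p, const Vec3& c, float r) {
    Vec3 d{p.x - c.x, p.y - c.y, p.z - c.z};
    return length3(d) - r;
}

// A scene of simplified primitives is the union (min) of their distances.
static float sdScene(const Vec3& p) {
    float d = sdSphere(p, {0.0f, 1.0f, 0.0f}, 1.0f); // a unit sphere resting on...
    d = std::min(d, p.y);                            // ...a ground plane at y = 0
    return d;
}
```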

Inspired by RTXGI, we proposed a novel approach, SDFDDGI, which overcomes the aforementioned limitations and has the following advantages:
• It does not need any precomputations.
• It can manage both dynamic geometry and dynamic lighting, as well as animations and skylight.
• It provides interframe stability and low delay response for dynamic changes.
• It completely eradicates light leaking problems.
• Our technique is not limited to specific hardware; it can also be used on lower-end hardware.

This is looking promising: less than 5 ms on the Sponza scene on a 970M and less than 1 ms on a 2080 Ti at 1080p.

Our approach achieved less than 5 ms per frame on GTX 970M hardware with the lowest acceptable quality, while on RTX 2080Ti even achieved a performance within 1 ms per frame.


One big disadvantage:

The creation of the SDF structure is a manual process; another improvement could come from the use of importance sampling. But on the other hand, the probes need no manual placement.

However, our method still has room for improvement. For example, according to the relative placement of the camera and the probes, we could use importance sampling around this direction in order to further stabilize global illumination, because only the normals facing the camera can be seen by the camera. Our research interest also focuses on dynamic GI, so for specular GI we still have to rely on a mixed approach using other methods such as SSR or ray tracing, but using SSR on top does not add any extra cost to achieve the diffuse-specular path. Last of all, our approach uses simplified SDF primitives to represent the scene; until now we manually provide the simplified SDF representation, which requires an enormous amount of work for large and complex scenes. For the future, it would be necessary to research the automation of this process.
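
The paper only hints at what that importance sampling would look like. As one standard flavor, and purely my assumption of the direction they mean, cosine-weighted hemisphere sampling (Malley's method) concentrates samples around a chosen axis; rotating that axis onto the camera-facing direction would give the bias they describe:

```cpp
#include <cmath>

struct Dir3 { float x, y, z; };

// Cosine-weighted sample on the hemisphere around the local +z axis
// (Malley's method). u1, u2 are uniform random numbers in [0, 1).
// Rotating +z onto the camera-facing normal would bias samples the way
// the authors describe.
Dir3 cosineSampleHemisphere(float u1, float u2) {
    const float kTwoPi = 6.28318530718f;
    float r   = std::sqrt(u1);
    float phi = kTwoPi * u2;
    return { r * std::cos(phi), r * std::sin(phi), std::sqrt(1.0f - u1) };
}
```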
 
https://bartwronski.com/2020/12/27/...allenge-productionizing-rendering-algorithms/

An interesting and great blog post about rendering constraints.

Budget

Finally, the budget… I’ll tackle the “production” budget below, but hope this one is self-explanatory. If a new technique needs some memory or CPU/GPU cycles, they need to be allocated.

One common misconception is that 30ms is a "real-time technique". It can be, but only if this is literally the only thing rendered, and only in a 30fps game.

For VR you might have to deal with budgets of sub 10ms to render 2 eye views for the whole frame! This means that a lighting technique that costs 1ms might already be way too expensive!

Similarly with RAM memory – all the costs need to be considered holistically and usually games already use all available memory. Thus any new component means sacrificing something else. Is a feature X worth reducing texture streaming budget and risking texture streaming glitches?

Finally, another “silent” budget is disk space. This isn’t discussed very often, but with modern video game pipelines we are at a point where games hardly fit on Blu Ray disks (it was a bigger challenge on God of War than fitting in memory or the performance!).

Similarly, patches that take 30GBs – it’s kind of ridiculous and usually a sign of an asset packaging system that didn’t consider patching properly… But anyway, a new precomputed GI solution that uses “only” maybe 50MB per level chunk “suddenly” scales to 15GB on disk for the whole 30h game and that most likely will not be acceptable!


http://c0de517e.blogspot.com/2020/12/why-raytracing-wont-simplify-aaa-real.html

Another interesting blog post

Raytracing will only -add- complexity at the top end. It might make certain problems simpler, perhaps (note - right now people seem to underestimate how hard it is to make good RT-shadows or, even worse, RT-reflections, which are truly hard...), but it will also make the overall effort to produce a AAA frame bigger, not smaller - like all technologies before it.
We'll see incredible hybrid techniques, and if we have today dozens of ways of doing shadows and combining signals to solve the rendering equation in real-time, we'll only grow these more complex - and wonderful, in the future.
 
Nice responses in the comment section as well.

A great one, and a great reference to the RT shadows in COD; he was director of rendering R&D there until this year. And RT shadows are not simple, and it is worse for RT reflections, because realtime is a matter of compromise and performance.

There is no question that RT is the ultimate solution to the rendering equation, I think we have decades of research proving it.

My article is not against RT nor against progress - it is only dispelling an imho naive idea that RT will make our engines simpler - as most of the complexity has nothing to do with anything related to this or that technology.

Also, we could go into technical reasons why RT stuff is not easy at all, not even "simple" things like pure RT shadows are actually simple (see the recent presentation on RT shadows in COD:CW for example - research.activision.com) - but this would be besides the point.

And in the blog post he said RT reflections are hard too, harder than RT shadows.

EDIT: The Twitter thread is interesting.


This is terrific too.


Always the same problem: performance...

EDIT: I did not know that Rise of the Planet of the Apes did not use raytracing but Pixar's RenderMan REYES, probably with some custom GI, like Pirates of the Caribbean.


 
Nice responses in the comment section as well.

The comments provided much needed context in my opinion. The blog post is well written but came across as unfocused. The author’s key point seems to be that games will continue to get more complex because of the never ending drive to create bigger, more detailed and more beautiful game worlds. That much is a given.

The point about technology advancements and raytracing in particular didn’t hit home for me. It seems the author is saying that RT enabled engines will be more complex but is that due to the technology itself or simply due to the fact that future engines will be more complex anyway with or without RT?

I would have preferred if he separated the two concerns. i.e. all else equal, would the “same” game implemented using RT hacks instead of the old school lighting hacks be more or less complex from an engineering and art perspective?

A great one, and a great reference to the RT shadows in COD; he was director of rendering R&D there until this year. And RT shadows are not simple, and it is worse for RT reflections, because realtime is a matter of compromise and performance.

High quality shadow mapping isn’t easy either. The relevant question is whether RT ultimately offers a simpler solution for the same result.
 
The comments provided much needed context in my opinion. The blog post is well written but came across as unfocused. The author’s key point seems to be that games will continue to get more complex because of the never ending drive to create bigger, more detailed and more beautiful game worlds. That much is a given.

The point about technology advancements and raytracing in particular didn’t hit home for me. It seems the author is saying that RT enabled engines will be more complex but is that due to the technology itself or simply due to the fact that future engines will be more complex anyway with or without RT?

I would have preferred if he separated the two concerns. i.e. all else equal, would the “same” game implemented using RT hacks instead of the old school lighting hacks be more or less complex from an engineering and art perspective?



High quality shadow mapping isn’t easy either. The relevant question is whether RT ultimately offers a simpler solution for the same result.

Like he said, engines continue to be complex even in offline rendering, and he said that at least for the next decade, or more probably the next two decades, engines will continue to be very complex. Raytracing is simple in offline rendering, not in realtime, because of performance.

We are far from high-quality pathtracing, or spectral raytracing like Manuka (WETA Digital's offline rendering engine).
 
Like he said, engines continue to be complex even in offline rendering, and he said that at least for the next decade, or more probably the next two decades, engines will continue to be very complex. Raytracing is simple in offline rendering, not in realtime, because of performance.

We are far from high-quality pathtracing, or spectral raytracing like Manuka (WETA Digital's offline rendering engine).

Sure, but my point was that RT doesn't exist in a vacuum. When we say it's complex/simple it should be framed in the context of competing real-time methods.
 
Sure, but my point was that RT doesn't exist in a vacuum. When we say it's complex/simple it should be framed in the context of competing real-time methods.

Complexity is always a matter of performance and of the hacks needed to make RT sustainable for realtime rendering. When sebbbi said that for Dreams, Teardown or his new prototype virtual shadow mapping is probably better than raytraced shadows, it was on the performance side. Naive shadow mapping is slower than RT shadows, but now teams can combine virtual shadow maps, screen-space shadows and capsule shadows for a good compromise: quality is lower but performance is probably much better (a sketch of the idea follows). GI likewise has some good compromises with much better performance: voxel cone tracing like in CryEngine, SDFGI, PBGI, froxel GI, Lumen and so on. And in GI you can combine with some RT too.
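
A minimal sketch of what that combination can look like (my illustration, not any specific engine's code): each technique outputs a visibility factor in [0, 1], and taking the minimum keeps the strongest occlusion any of them found:

```cpp
#include <algorithm>

// Composite several shadow techniques: each returns visibility in [0, 1]
// (0 = fully shadowed). The minimum is a cheap, conservative combination:
// whichever technique sees the blocker wins.
float compositeShadow(float virtualShadowMap,   // large-scale occluders
                      float screenSpaceShadow,  // short-range contact detail
                      float capsuleShadow)      // analytic character proxies
{
    return std::min({virtualShadowMap, screenSpaceShadow, capsuleShadow});
}
```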

The only case where there is no satisfying compromise is RT reflections. After that, it depends on the game: RT reflections are good in a city, but RT is heavier with vegetation, and the rendering budget is precious. The best use of SSR + cubemaps + planar reflections is The Last of Us Part II, but the artists crunched like crazy to reach that level, and the game is wide-linear, not open world; you can't take care of every water or mirror surface in an open-world game, and it still has the screen-space artifacts.
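
For reference, a sketch of the kind of fallback chain being discussed, with hypothetical names, not TLOU2's actual implementation: each pixel uses the first technique that can produce a valid result, and the artifacts come from the transitions:

```cpp
enum class ReflectionSource { SSR, Planar, Cubemap };

// Typical fallback chain for non-RT reflections: screen-space rays are
// the most accurate but fail when the reflected geometry is off-screen;
// hand-placed planar reflections cover special surfaces (mirrors, calm
// water); a prefiltered cubemap is the coarse catch-all. The screen-space
// artifacts discussed above come from the SSR -> cubemap transition.
ReflectionSource pickReflection(bool ssrRayHit, bool planarAvailable) {
    if (ssrRayHit)       return ReflectionSource::SSR;
    if (planarAvailable) return ReflectionSource::Planar;
    return ReflectionSource::Cubemap;
}
```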

And this answer from Inigo Quilez is good.

I agree with the first part, the transition bit. But not the second one - the film industry has shown that indeed production does get simpler with raytracing. Artists now spend more time doing beautiful art than working with (or against?) the renderer. You can argue perhaps that film had already reached peak fidelity before raytracing (ILM's pirates, WETA's apes) so there wasn't much margin for growth in that axis, while games still have plenty to improve before they look good. Still I'm convinced raytracing gives back lots of time to artists

Raytracing is a costly feature, and the other parts of realtime rendering have not peaked at all. Devs need to choose their fights.

[image: still from Rise of the Planet of the Apes]




There is no raytracing, and it shits on realtime rendering from a fidelity point of view. And pathtracing improves fidelity too: the last two Apes movies look much better from a lighting point of view. Other parts of the rendering improved too, but less dramatically.
 
That's a horrible graphic. I don't know why you keep showing it in threads.

Raytracing is a costly feature, and the other parts of realtime rendering have not peaked at all. Devs need to choose their fights.
Yes, it is costly and devs are already choosing what features to implement based on rendering hardware. Nothing different from past rendering technologies where tradeoffs had to be made.
 
That's a horrible graphic. I don't know why you keep showing it in threads.

Because it looks much better than what we currently have in realtime rendering in asset quality, image quality, motion blur, depth of field and scene complexity. We have better shading because realtime rendering introduced PBR. The rendering looks old compared to more recent movies, but games are far from this. I prefer Rise of the Apes' rendering to Pirates of the Caribbean's too, but one is a 2003 movie and the other a 2011 movie.

And both movies' rendering looks outdated.

[image: still from War for the Planet of the Apes]


This, from War for the Planet of the Apes, looks much better for sure, and in offline rendering PBR and raytracing arrived at the same time (motion blur and depth of field improved with raytracing too), but from an asset-complexity or image-quality point of view they were already peaking.

Yes, it is costly and devs are already choosing what features to implement based on rendering hardware. Nothing different from past rendering technologies where tradeoffs had to be made.

Like in Unreal Engine 5: I think he wants to say RT is only a tool in the rendering pipeline in the short and mid term. Some will not use it at all, others will use it selectively, and others will do a full hybrid rendering as a high-end option for PC. Long term, raytracing is the future of realtime rendering.
 
I think it’s difficult to parse right now because developers are still ramping up support for RT in their engines and workflows. The COD shadows presentation spent quite a bit of time on BVH construction and required changes to the art pipeline.

Once they settle in it’ll be interesting to see the choices made for mid and late generation games. Having to layer multiple shadow mapping techniques just to end up with an inferior result certainly sounds like a less ideal option on paper.
 
I think it’s difficult to parse right now because developers are still ramping up support for RT in their engines and workflows. The COD shadows presentation spent quite a bit of time on BVH construction and required changes to the art pipeline.

Once they settle in it’ll be interesting to see the choices made for mid and late generation games. Having to layer multiple shadow mapping techniques just to end up with an inferior result certainly sounds like a less ideal option on paper.

If the performance is better, it is useful. Again, performance matters; it means you can spend the GPU cycles on other things. In Unreal Engine 5 and Demon's Souls it means performance is good enough that every light casts shadows, everything casts shadows, and all the dense geometry is useful. Can you do the same with RT shadows? Maybe you would need to cast shadows for only some parts of the scene and not others, since RT performance scales with geometry complexity. I prefer a shadow of lower quality to no shadow at all for part of the scene. And it is not as if the shadow quality in the UE5 demo or Demon's Souls is weak like in many last-generation titles; it is not COD Black Ops Cold War RT shadow quality, but the shadow map quality is good.

RT reflections cost more, but if someone has enough GPU cycles to choose between RT shadows and RT reflections but can't do the two together, imo RT reflections are the better choice, because I don't think cubemaps + SSR + special-case planar reflections are a good compromise. And it is such a complicated, hard process during production that in an open world people don't have time to care about cubemap quality for every puddle. It impacted the Insomniac devs, who were accused of a downgrade after the E3 2017 demo where they did extra work for the reflection in a puddle, something they can't do across a full open world. The first thing Insomniac did after the puddlegate was to implement RT reflections on PS5.

[image: Spider-Man puddle reflection comparison]


This image perfectly shows the problem of RT reflections or not: it is all or nothing.

EDIT: And even if it were humanly possible to create a personalized cubemap for every puddle, reflective material and window in Spiderman, for every fixed weather and day/night condition, I think storage would be a problem for so many cubemaps. A rough calculation is sketched below. :mrgreen:
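
A back-of-the-envelope sketch of that storage problem, with entirely made-up numbers:

```cpp
#include <cstdio>

// Back-of-the-envelope, with made-up numbers: per-location cubemaps for
// every weather/time-of-day combination add up fast.
int main() {
    const double bytesPerCubemap = 6.0 * 256 * 256 * 4 * 1.33; // 6 faces, 256^2, RGBA8, +mips
    const int locations  = 1000; // puddles, windows, reflective props...
    const int conditions = 8;    // weather x day/night variants
    const double totalGiB = bytesPerCubemap * locations * conditions
                          / (1024.0 * 1024.0 * 1024.0);
    std::printf("~%.1f GiB of cubemaps\n", totalGiB); // roughly 15 GiB
}
```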
 
In the end it comes down to the quality that devs are shooting for. Insomniac didn’t have to choose RT. They could’ve stuck with lower quality or missing reflections and spent the performance elsewhere. Same goes for shadows. Some devs will decide that RT is required for the quality they want to achieve and sacrifice performance elsewhere. Either way it’s almost certain that RT performance will improve dramatically throughout the generation as devs learn their way around the tech same as they always do.
 
In the end it comes down to the quality that devs are shooting for. Insomniac didn’t have to choose RT. They could’ve stuck with lower quality or missing reflections and spent the performance elsewhere. Same goes for shadows. Some devs will decide that RT is required for the quality they want to achieve and sacrifice performance elsewhere. Either way it’s almost certain that RT performance will improve dramatically throughout the generation as devs learn their way around the tech same as they always do.
In the case of Spiderman and other cross-gen titles that have to run on previous consoles, I suspect RT reflections are one of the least intrusive ways to harness the new consoles' performance while providing more image improvement than a simple resolution increase. They may prove a staple for the generation since, unlike shadows, there is no fallback of fairly decent quality, but it's also possible that developers decide the rendering budget is better spent elsewhere.
 
In the end it comes down to the quality that devs are shooting for. Insomniac didn’t have to choose RT. They could’ve stuck with lower quality or missing reflections and spent the performance elsewhere. Same goes for shadows. Some devs will decide that RT is required for the quality they want to achieve and sacrifice performance elsewhere. Either way it’s almost certain that RT performance will improve dramatically throughout the generation as devs learn their way around the tech same as they always do.

You don't know about the puddlegate. The community manager stopped posting on social media for a while because of it. When they learned the game would release on PS5, this was probably the first feature they decided to investigate. First, Spiderman MM is a cross-gen game; RT reflections (or RT shadows) are among the easiest things to do, and it is a game in a city made of many big buildings that Spiderman can climb. The implementation is also great and logical because of the verticality of the game. The announcement of the feature was made by the community manager like a revenge.

https://gamerant.com/spider-man-ps4-puddle-controversy/

The first PS4 Spiderman game has an easter egg about this; they were shocked at the stupidity of people.


After that, it does not mean everyone will use RT reflections; like I said, it depends on whether they have the rendering budget. Some will not use RT at all (UE5, Demon's Souls), some will use it selectively (Spiderman MM, COD Black Ops Cold War, Watch Dogs: Legion...), and some will offer a high-end mode for PC with all the RT effects enabled inside a hybrid engine (Cyberpunk 2077). RT is just another tool in the toolbox available to devs.

I just said that if a dev has the rendering budget available for one of the two effects, I think RT reflections are the better choice imo. For example, RT effects are probably very difficult to do in a game with tons of vegetation because of the performance problem. It will depend on the developer's choices and the type of game. I expect RT reflections to be a must for games set in a city.

EDIT:

It was from a GDC 2019 presentation on raytracing in Unreal Engine 4.

[image: slide from the GDC 2019 Unreal Engine 4 raytracing presentation]


Spiderman has little vegetation, but the environment is ideal for RT. I doubt we will see RT in a Horizon game.


https://auzaiffe.wordpress.com/2019...acing-worth-it/amp/?__twitter_impression=true
 
I doubt we will see RT in a Horizon game.
They could model foliage with capsule skeletons with a stochastic soft falloff. Good enough for shadows or GI, fine for reflections of distant plants.
Some full-res foliage close around the camera to please those who look for reflections of reflections of reflections, or like to detect material mismatches, missing particles, or other proofs of lazy devs and Jensen being wrong :D
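
A rough sketch of that capsule idea, assuming the standard point-to-segment distance and a simple linear falloff in place of the stochastic one (illustrative, not Guerrilla's code):

```cpp
#include <algorithm>
#include <cmath>

struct V3 { float x, y, z; };

static float dot3(const V3& a, const V3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Distance from point p to a capsule with segment endpoints a, b and radius r.
static float capsuleDistance(const V3& p, const V3& a, const V3& b, float r) {
    V3 pa{p.x - a.x, p.y - a.y, p.z - a.z};
    V3 ba{b.x - a.x, b.y - a.y, b.z - a.z};
    float h = std::clamp(dot3(pa, ba) / dot3(ba, ba), 0.0f, 1.0f); // project onto segment
    V3 d{pa.x - ba.x * h, pa.y - ba.y * h, pa.z - ba.z * h};
    return std::sqrt(dot3(d, d)) - r;
}

// Soft falloff: full occlusion at the surface, fading to none over 'softness'.
static float capsuleOcclusion(float dist, float softness) {
    return std::clamp(1.0f - dist / softness, 0.0f, 1.0f);
}
```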
 
They could model foliage with capsule skeletons with a stochastic soft falloff. Good enough for shadows or GI, fine for reflections of distant plants.
Some full-res foliage close around the camera to please those who look for reflections of reflections of reflections, or like to detect material mismatches, missing particles, or other proofs of lazy devs and Jensen being wrong :D

This is possible too. I don't think the current way to do vegetation will survive this gen.
 