Game development presentations - a useful reference

id Software moved away from virtualized textures because they took up too much disk space relative to the resolution they delivered, and because streaming from HDDs caused too much pop-in.

Of course now there are NVMe SSDs and enough compute to just do runtime blending, which is what I expect UE5 users to do. Combined with procedural texturing, eh, it's fine. Slap decals everywhere too and the whole thing can look unique enough in an artist-friendly manner.
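The runtime blending mentioned here is often done with per-layer height maps rather than a plain lerp, so the "taller" texels of one layer poke through the other first. A minimal sketch of that idea; the colors, heights, and the `depth` transition-band parameter are all made-up toy values, not any engine's actual implementation:

```python
# Height-based blending of two tiling layers: instead of linearly mixing
# the two colors by the painted weight t, each layer's height map biases
# the transition so whichever layer is locally "taller" wins first.

def height_blend(color_a, height_a, color_b, height_b, t, depth=0.2):
    """Blend two texels; t in [0,1] is the artist-painted blend weight.
    depth controls how wide the transition band is."""
    # Effective heights: painted layer weight added to the stored height.
    ha = height_a + (1.0 - t)
    hb = height_b + t
    # Only texels within `depth` of the local maximum contribute.
    top = max(ha, hb)
    wa = max(ha - (top - depth), 0.0)
    wb = max(hb - (top - depth), 0.0)
    return tuple((ca * wa + cb * wb) / (wa + wb)
                 for ca, cb in zip(color_a, color_b))

# Equal blend weight, but layer A's taller height wins the contested band:
print(height_blend((1.0, 0.0, 0.0), 0.8, (0.0, 1.0, 0.0), 0.2, 0.5))
# → (1.0, 0.0, 0.0)
```

The point of the height term is that the transition between two tiling materials follows the texture content (mortar lines, gravel gaps) instead of a smooth painted gradient, which hides repetition cheaply.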

As for texture space shading, eh. If it's co-planar then you can prefilter well enough; Ghost of Tsushima managed this remarkably efficiently: https://blog.selfshadow.com/publications/

The challenges for artist workflow seem more along the lines of mesh pain. High-res mesh, remeshing, LODs, UV mapping, normal bake, and only then can you paint it. I'm wondering what exactly Epic meant by "you can just import it". No LODs, obviously, and presumably no normal bake. What else can you skip?

As for rendering: GI, animations, and model detail feel like much of the remaining challenge, aside from image quality. But texture space shading isn't some panacea for image quality. You need enough raster samples to avoid moire. I was playing around with GTA V, and even 8x MSAA, 1.5x SSAA (from 1440p), and FXAA combined weren't enough to get rid of moire, and texture space shading would not help at all.
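For intuition on why more samples only push moire further out rather than eliminating it, the classic frequency-folding formula can be sketched in a few lines. The stripe densities and sample rates below are illustrative numbers, not measurements from GTA V:

```python
def alias_freq(f, fs):
    """Apparent frequency after sampling a pure stripe pattern of
    frequency f at sample rate fs (the standard folding formula)."""
    return abs(f - fs * round(f / fs))

# A guard-rail-like stripe pattern at 700 stripes per unit, sampled at
# 1440 samples per unit: under Nyquist (720), so it resolves correctly.
print(alias_freq(700, 1440))   # → 700, barely resolved
# Shrink the rail with distance so it hits 1000 stripes per unit:
print(alias_freq(1000, 1440))  # → 440, a false low-frequency moire band
# 2x supersampling only moves the failure point further out:
print(alias_freq(2500, 2880))  # → 380, still aliases
```

Any fixed sample count just moves the distance at which a regular pattern folds into a visible low-frequency band, which is why MSAA/SSAA stacking never fully kills moire on repeating geometry.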
 
Combined with procedural texturing, eh, it's fine. Slap decals everywhere too and the whole thing can look unique enough in an artist-friendly manner.
Decals can only add variation to the texture of objects, not to the geometry.
Even if you manage to hide pattern repetition successfully (which is mostly the case), you are still left with the task of arranging your modules so the result looks natural. But this does not work well for things like terrain.
We can see in every game how the artists have placed some rocks here and there, perhaps with a procedural system adding some additional smaller rocks. But the result looks wrong. It misses the flow: how everything is connected and affects everything else through large-scale dynamics like erosion.
We could solve this with large-scale simulations or real-world scans, but then every piece of geometry becomes unique and we are back at the storage problem.
The easiest solution is to use procedural placement to arrange our modules and texture tiles so they resemble such large-scale template data. I expect we will see increasing eye candy from there, but this is restricted and can't do everything well. The more high-frequency detail we add, the more discontinuities pop up at lower frequencies.
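The "flow" point can be illustrated with the simplest possible erosion model. A toy 1D thermal-erosion pass (all parameters invented) shows how material above a talus angle slides downslope, coupling every cell to its neighbours, which is exactly the large-scale interaction that independent module placement cannot reproduce:

```python
# Minimal thermal-erosion sketch on a 1D heightfield: wherever the slope
# between neighbours exceeds the talus angle, material slides downhill.
# Toy parameters throughout; real terrain tools use 2D grids and add
# hydraulic erosion on top.

def erode(heights, talus=0.1, rate=0.5, iterations=100):
    h = list(heights)
    for _ in range(iterations):
        for i in range(len(h) - 1):
            diff = h[i] - h[i + 1]
            if diff > talus:          # too steep downhill: move material
                move = rate * (diff - talus) / 2
                h[i] -= move
                h[i + 1] += move
            elif -diff > talus:       # too steep the other way
                move = rate * (-diff - talus) / 2
                h[i] += move
                h[i + 1] -= move
    return h

cliff = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
print([round(x, 2) for x in erode(cliff)])
# The sharp cliff relaxes into a talus-limited slope; total mass (the sum
# of heights) is conserved, so every cell's final value depends on the
# whole profile, not just its local neighbourhood.
```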
The challenges for artist workflow seem more along the lines of mesh pain.
Which is also a result of the above limitations. It's not just that the split geometry/texture workflow is tedious by itself; artists are also constrained by the task of making things modular while hiding repetition at the same time.
It's hard to be an artist if you have to find that narrow path to something that, at best, finally looks like smoke and mirrors. It's also much harder in games, where our facade can be seen from any angle and distance, unlike in movies.
But texture space shading isn't some panacea for image quality. You need enough raster samples to avoid moire.
Moire is mostly a result of regular patterns, as required by tiling and other repetitive/modular practices. It's caused much more by content-creation limitations than by rendering.

So, if we find a way to get rid of tiling, things should become a lot easier. That's a much higher goal than just getting faster rendering from something like texture space shading.
At the moment, the state of procedural content creation and compression in games is quite poor. Automatic placement and smaller zip files barely address the real problem we have to solve.
 
Decals can only add variation to the texture of objects, not to the geometry.

Good art just takes care of that. Speaking of which, GTA V and Red Dead Redemption 2 both look amazing in this respect. Even with the heavy limitations of the PS3/360, Rockstar did a great job on GTA V's map: you can find water gulleys from erosion, and tiling is relatively minimized compared to what they had to work with. Heck, Hollywood doesn't need some crazy single megatexture, and they can do whatever they want. And we have crazy huge megatextures if necessary, but I don't see it.

There's a ton of research at conferences focused on simulating terrain weathering, and none of it requires texture space shading or totally unique texturing. Hell, it's a favorite paper topic of a certain subset of researchers; you can get as detailed as you want with just normal meshes and a limited set of texture rules. You can scan through Ke-Sen Huang's helpful lists, though of course the field is huge and those are only some of the papers: kesen.realtimerendering.com/

Which is also a result of the above limitations.

Moire is mostly a result of regular patterns, as required by tiling and other repetitive/modular practices. It's caused much more by content-creation limitations than by rendering.

So, if we find a way to get rid of tiling, things should become a lot easier.

I don't see how giving artists tools that create MORE work for them is going to be helpful. The procedural stuff is as much a timesaver for them as it is about conserving memory. I finished Jedi: Fallen Order recently, and you can see from the artists' ArtStation posts just how helpful tiling and procedural texturing are for introducing variation: https://www.artstation.com/artwork/mqRGr8

In relatively short order a single asset can look wildly different, click click click. More procedural tools might be helpful, but I still don't see how fundamentally changing the way materials are stored and presented would help with that. Unity has a few cool recent tricks where brushes use physically simulated particles for vertex painting. Which is brilliant; more of that, I say. Heck, make a button labeled "simulate rain" that runs a Navier-Stokes sim over the whole scene, accumulating whatever dirt and grime pattern you want. But that doesn't necessarily require a single giant megatexture to work either. You don't need each individual pebble, grain of sand, or speck of dirt to be unique; a general tiling texture of that works fine. You just want the distribution of those textures to be unique, which is a great idea, but is also something that could be done with today's pipeline.
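The "unique distribution over tiling textures" idea can be sketched in a few lines: derive a per-cell dirt weight from local concavity, then use that weight to blend an ordinary tiling dirt texture over the base material. The heightfield, the `strength` constant, and the Laplacian-based mask are all illustrative choices, not any engine's actual method:

```python
# Per-cell dirt weight from local concavity on a toy 1D heightfield:
# concave cells (local dips, where the discrete Laplacian is positive)
# collect dirt; flat and convex cells stay clean.

def dirt_mask(heights, strength=4.0):
    mask = [0.0] * len(heights)
    for i in range(1, len(heights) - 1):
        laplacian = heights[i - 1] + heights[i + 1] - 2 * heights[i]
        mask[i] = min(max(strength * laplacian, 0.0), 1.0)  # clamp to [0,1]
    return mask

valley = [1.0, 0.6, 0.2, 0.0, 0.2, 0.6, 1.0]
print([round(w, 2) for w in dirt_mask(valley)])
# → [0.0, 0.0, 0.8, 1.0, 0.8, 0.0, 0.0]
# Dirt piles up at the valley floor and stays off the straight slopes.
# The mask then weights an ordinary tiling dirt albedo, e.g.
#   final = lerp(base_tile(uv), dirt_tile(uv), mask)
# so only the *placement* of the dirt is unique, not its texels.
```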

And I'm not seeing moire on tiling textures so much as on straight-edged geometry, which certainly isn't going away. And even if you got rid of normal tiling problems, a lot of things are tiled in real life; clothing is full of repeating weaves. Moire appears in cameras; it happens due to undersampling in real life, so unless you choose an art style specifically to avoid it, the only thing you can do is get more samples somehow. Now, as stated, texture space shading helps with one part of the problem there. But save those cycles and prefilter, I say, and get more geometry samples, which can moire just as easily. In fact, the moire in GTA V was often from road guard rails, which are obviously fairly straight-edged repeating patterns in real life.
 
Good art just takes care of that.
But at what cost? Why invest so much manpower to model a background? And still, the results can be pretty bad, depending on the scale you choose to look at:
[image: cTXSpe3AGpX5MD67ET3o4h-970-80.png]

Great!
[image: RDR-2-Fundorte-der-Saurierknochen-658x370-9953145ff99f73c4.jpg]

Bad. The rocks do not interact with the flow of soil.

Most games do it the other way around:
[image: fly-like-an-albatross-1.png]

Total patchwork. All disconnected.
[image: deas8wa-a39dc270-15af-437e-a688-0ec80d73b886.png]


Nice. Though that's cherry-picked. While playing you can see hard seams between different biomes, slopes near the streets, etc.

How does it look on something that does not cost a billion to make?
[image: upload_2021-3-2_10-49-12.png]
Copy-pasted rocks with seams.
This is the geometrical form of 'tiling'. It's irregular, not in a grid like Super Mario, but the repetition is still visible and illusion-breaking.

Heck, Hollywood doesn't need some crazy single megatexture, and they can do whatever they want.
They have precise control over the viewport, and they have no performance or storage limitations. It's easy for them to tweak.
We cannot learn from the movie industry how to do this; they just don't have this problem.

I don't see how giving artists tools that create MORE work for them is going to be helpful.
I propose to give them a tool that generates the whole background of an open world with a button click. After that, when adding stuff like buildings or tunnels, local re-simulation can easily fix the breaks. Terrain should be dynamic in production, because it can be part of level design.
So nothing I said would add work for artists. Instead I think we can reduce it by an order of magnitude or more, while achieving higher-quality results. Artists actually spend too much time fixing and tweaking around technical limitations. We want to automate this further, of course.

you can see from the artists' ArtStation posts just how helpful tiling and procedural texturing are for introducing variation: https://www.artstation.com/artwork/mqRGr8
Simple procedural variation is better than nothing, but it cannot react to its environment. It cannot fix seams. Blending can't either. Substance Designer is already outdated, if you ask me.
You know, Perlin noise does not know what it is doing, so it is made to be uniform to avoid accidental spots where we don't want them. Because it is uniform, we can mix octaves, etc., and have control over the frequencies of detail. We can work with that.
But it's very limited. It does not react to the geometry, and helping out by detecting edges to drive the appearance of random scratches does not solve this.
Some iterations of simulation can fix this, and I guess ML will become useful here too. But to do this, we need to treat each surface individually. This is where I see the relation to things like megatexture, megamesh... anything which does not build upon repetition of static building blocks.
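The octave mixing described above can be sketched with a hash-based value noise standing in for Perlin noise; the hash constants and normalization are arbitrary choices for illustration, not a reference implementation:

```python
import math

def hash_noise(i):
    """Deterministic pseudo-random value in [0,1) for lattice point i."""
    i = (i * 2654435761) & 0xFFFFFFFF   # Knuth-style integer scramble
    i ^= i >> 16
    return (i & 0xFFFF) / 65536.0

def value_noise(x):
    """Smoothly interpolated 1D value noise in [0,1)."""
    i, f = int(math.floor(x)), x - math.floor(x)
    f = f * f * (3 - 2 * f)             # smoothstep between lattice values
    return hash_noise(i) * (1 - f) + hash_noise(i + 1) * f

def fbm(x, octaves=4, lacunarity=2.0, gain=0.5):
    """Fractal sum: each octave adds detail at double the frequency and
    half the amplitude, giving per-frequency control over the result."""
    total, amp, freq = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amp * value_noise(x * freq)
        amp *= gain
        freq *= lacunarity
    # Normalize by the geometric series of amplitudes to stay in [0,1].
    return total * (1 - gain) / (1 - gain ** octaves)
```

This is exactly the "uniform, so we can mix octaves" property: the base noise carries no meaning, but `lacunarity` and `gain` give the artist dials over which frequency bands dominate. What it cannot do, as the post says, is react to the geometry it is applied to.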

More procedural tools might be helpful, but I still don't see how fundamentally changing the way materials are stored and presented would help with that.
It would help by making content creation less manual work, and thus much cheaper. Procedural creation cannot take off as long as we are restricted to modules, instances, and other forms of tiling. It's already hard for humans. It makes more sense to ignore limitations during creation; only after that do we care about compressing it down, introducing forms of repetition for (then lossy) compression purposes.

And I'm not seeing moire on tiling textures so much as on straight-edged geometry.
Agreed, and I meant the same, i.e. the geometrical meaning of tiling, which comes from the need to have modules. Because all those modules are equal, we get moire (or just too much regularity) from geometry.
 
Naughty Dog SIGGRAPH 2020 presentations.

  • GPU-driven effects of The Last of Us Part II - Artem Kovalovs
  • Low-level optimizations in The Last of Us Part II - Parikshit Saraswat
  • Volumetric effects of The Last of Us Part II - Artem Kovalovs
  • Lighting technology of The Last of Us Part II - Hawar Doghramachi
  • The technical art of The Last of Us Part II - Waylon Brinck, Qingzhou "Steven" Tang
 
Tried to find the first paper describing ambient occlusion, from ILM: "Production-Ready Global Illumination" (SIGGRAPH 2002).
The thing that really escaped many, especially on the gaming side, is that it also introduced bent normals.

I find the connection of this paper (2006)
https://developer.amd.com/wordpress/media/2012/10/Oat-AmbientApetureLighting.pdf

with directional AO in general very interesting. They use bent normals (direction + aperture angle), and they also calculate solid-angle overlap (the same way as GTAO: https://www.activision.com/cdn/research/PracticalRealtimeStrategiesTRfinal.pdf). Great stuff at the time (and maybe still).
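A toy version of the bent-normal + aperture idea: average the unoccluded hemisphere directions to get the bent normal, and convert the visible fraction into the half-angle of an equivalent visibility cone via the spherical-cap solid-angle formula (a cap of half-angle t covers 2π(1 − cos t), so a visible fraction f of the hemisphere maps to t = acos(1 − f)). The `occluded` callback is a hypothetical stand-in for a real ray cast:

```python
import math

def bent_normal_and_aperture(directions, occluded):
    """directions: unit vectors over the hemisphere; occluded: predicate
    standing in for a ray cast. Returns (bent normal, cone half-angle)."""
    open_dirs = [d for d in directions if not occluded(d)]
    f = len(open_dirs) / len(directions)        # visible fraction (the AO term)
    # Bent normal: mean of the unoccluded directions, normalized.
    sx = sum(d[0] for d in open_dirs)
    sy = sum(d[1] for d in open_dirs)
    sz = sum(d[2] for d in open_dirs)
    length = math.sqrt(sx * sx + sy * sy + sz * sz) or 1.0
    bent = (sx / length, sy / length, sz / length)
    aperture = math.acos(1.0 - f)               # equivalent cone half-angle
    return bent, aperture

# A wall occluding the x>0 half of the hemisphere around normal (0,0,1):
dirs = [(math.cos(a), 0.0, math.sin(a))
        for a in (i * math.pi / 16 for i in range(1, 16))]
bent, ap = bent_normal_and_aperture(dirs, lambda d: d[0] > 0.0)
# The bent normal tilts away from the wall (negative x); the aperture
# shrinks to roughly a 58-degree half-angle instead of the full 90.
```

That (direction, half-angle) pair is what aperture lighting intersects against light cones, and what GTAO-style methods compute analytically from horizon angles.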
 
https://graphics.tudelft.nl/Publications-new/2021/VSE21/SDAO.pdf

Fascinating how, even after so many years, better and simpler solutions come up. :)
It opens up new approaches for shadow-map-based soft shadows too, I think.

It's simple and neat. OK, it still has all the "offscreen" problems, but as a simple extension it's relatively efficient. Throw in multi-sampled albedo for some color-bleed-like tricks, and you've got something pretty nifty.
 
OK, it still has all the "offscreen" problems, but as a simple extension it's relatively efficient.
Depth peeling is an old and kind of obvious concept with obvious shortcomings. I remember it being applied for the first time in Nvidia's HBAO+ in Assassin's Creed Syndicate, where they generated two separate depth buffer layers for static and dynamic geometry; the effect was quite pleasant, since dynamic geometry is the main contributor to screen-space artifacts. I see in this paper they go a little further: instead of generating two depth buffers in separate passes, they use MSAA or something along those lines to generate 4x depth buffers in a single pass. That's a good tradeoff, since depth-only rasterization runs at 4x speed. It's still only an approximation, since approaching ray-tracing quality would require an infinite number of layers. Screen-space methods still sample small screen areas to avoid thrashing caches and stay performant, so every time I look at the results of such techniques I wish there were ground-truth RT reference shots, since that dark dirt in room corners is not something RT AO with a much larger sampling area would produce (it usually yields evenly lit corners).
I guess this method is a swan song of screen-space AO, but it still suffers from being screen space.
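The thin-geometry failure that extra depth layers address can be shown with a toy occlusion test using made-up view-space depths: a single depth value per pixel classifies all the air behind a thin fence as solid, while depth intervals (as depth peeling provides) do not:

```python
# With one depth value per pixel, any AO sample behind the front surface
# counts as "inside geometry". With layered depth, front/back pairs form
# intervals, and a sample behind a thin object is correctly empty air.

def occluded_single_layer(sample_z, front_z):
    return sample_z > front_z        # everything behind the front is solid

def occluded_layered(sample_z, intervals):
    return any(enter <= sample_z <= exit for enter, exit in intervals)

# A 0.1-thick fence at z=5 in front of a wall starting at z=20:
intervals = [(5.0, 5.1), (20.0, 100.0)]
sample_behind_fence = 8.0            # open air between fence and wall

print(occluded_single_layer(sample_behind_fence, 5.0))   # → True (wrong)
print(occluded_layered(sample_behind_fence, intervals))  # → False (right)
```

This is why the static/dynamic split already helped so much: characters are exactly the thin, moving occluders that poison a single depth layer.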
 
I've become more confident about SS methods in recent months. Impressed by Path of Exile (shadows, GI) and CP2077 (reflections).
But those are far-range effects. Interestingly, artifacts seem more acceptable there because it's easier to fade them out.
I hope we can achieve the same improvements for small scales and high-frequency details, because I think that remains interesting even with the rise of RT.
Not sure if the stochastic depth samples idea is worth it for that. Showing SSAO in isolation always looks shitty, especially those dark bands in corners. An RT reference would not look much better to me.
I want it more subtle, and I'd make the gradient more logarithmic than linear. This could help more than getting rid of artifacts.
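One concrete reading of "more logarithmic than linear" is remapping the raw visibility term through a log curve that lifts the dark end, so corners darken gently instead of clamping into near-black bands. The `k` parameter is a made-up user slider, like the one wished for here:

```python
import math

def remap_ao(visibility, k=4.0):
    """visibility in [0,1]; higher k = gentler, more log-like falloff.
    Endpoints are preserved: 0 stays 0 and 1 stays 1."""
    return math.log1p(k * visibility) / math.log1p(k)

for v in (0.0, 0.25, 0.5, 1.0):
    print(round(remap_ao(v), 3))
# → 0.0, 0.431, 0.683, 1.0 — mid-tones lifted, deep corners less crushed
```

Because the curve only reshapes the existing AO term, it could sit behind a user slider without touching the sampling code at all.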

But maybe this has been done already and I just don't know, because there is not much to complain about then. For example, I always wanted to see how much it helps to separate characters from the static world, and I was unaware this has been used for years already, just not in every game. :)
 
I want it more subtle, and I'd make the gradient more logarithmic than linear. This could help more than getting rid of artifacts.
That's how GTAO looks, and it was calibrated against a reference RT implementation. There is a large parameter space with AO, so it really has to be tuned against reference implementations to look good.
 
That's how GTAO looks, and it was calibrated against a reference RT implementation. There is a large parameter space with AO, so it really has to be tuned against reference implementations to look good.
Still, I want a slider in game to tune it further down, because... I just hate it. :) Usually the only things I do in gfx options are vsync on and AO off.
Surely not an option for far-range AO as in Exodus, but for most games it would be.

I had used ray-traced AO for artworks. Decades ago it felt like something new and beautiful. But after getting used to it, it started to feel 'technical'. I did not understand how it's meant to approximate GI, but I saw it's no silver bullet.
The same happened with SSAO: Crysis was awesome, but after some years I started to turn it off in most games. Improved SSAO methods did not help much.

Though I think I'm a rare exception here. Most people seemingly have no problem with it.
 