Game development presentations - a useful reference

You mean like placing probes, fill lights, even portals manually, while keeping draw distance limited, watching triangle counts, shader counts, RAM usage, etc., etc., etc...

The difference between them is that we are seeing fixes or improvements to those problems by introducing features such as ray tracing or more powerful hardware ...

Even with more powerful hardware, the fundamental issues with TAA still remain, and the problem isn't getting better but worse as TAA keeps proliferating throughout the industry ...

As far as limitations on content authoring go, you now have to worry about how well your content will play with TAA, as you yourself hinted at with TAA artifacts ...
 
The difference between them is that we are seeing fixes or improvements to those problems by introducing features such as ray tracing or more powerful hardware ...
Not sure. They have been there for decades, much longer than TAA. Progress happens in any field, ofc.
Even with more powerful hardware, the fundamental issues with TAA still remain, and the problem isn't getting better but worse as TAA keeps proliferating throughout the industry ...
Disagree. Yes, we get faster HW (and hopefully some improvements on the software side as well). So we can do new things like RT (which comes with its own temporal side effects, which are more noticeable than TAA's), and we can do more things like 1M chickens in Battle Simulator 2.
But we don't have to. We could also decide to ditch all that, and use the power for perfect AA instead. (Which would probably still mean using multiple frames and combining them to get AA and motion blur at the same time, except we'd render all those frames within one visible frame.)
And that's the whole problem: it sells better to have improved lighting, epic detail, more hair, more rigid bodies, etc., than to have better AA. The industry is not invaded by some cancer beginning with a T; they make this choice freely because they conclude it's the best option for most games. This implies that limiting artifacts, even with compromised content here and there, is ok for them? But forget about it. IDK why i end up defending TAA so often, although i'm not really interested in AA at all, and i have no experience with implementing it either.

There seems to be one special thing about TAA: everybody agrees SSR or SSAO artifacts suck, because they really stand out. But TAA is subjective. Some people stare happily at the awesome quality of a TAA still image (like me), others stare frustrated at a very subtle motion trail of a character walking over a noisy bush. Why is this? I mean, of all those SS hacks, TAA surely works the best. Can we somehow classify the cases where TAA artifacts affect sensitive people the most? When does it happen, and how does it hinder the gaming experience, personally?

As said, i really have to search for artifacts - i never notice them while playing.
I do notice ghosting artifacts, e.g. recently in CP, where walking characters leave trails of different noise over glossy surfaces. That's very visible, and yep it hurts immersion. Agreed.
But by my definition this is not a TAA artifact. The goal there is to solve reflections, not to limit aliasing. It uses temporal accumulation too, but it's not related to AA. (And even with those artifacts, their stuff looks great, and those SS hacks work much better nowadays than i would have believed some years ago.)
 
A related story...
Once, my father asked me: 'Hey! Where is my Atari? Didn't you borrow it as a child? You lost it?'
I said it was stolen (which is true!)
Then he said, what he liked the most were the motion trails those bright squares made on the CRT screen.

So maybe my liking of TAA is a side effect of being born in the seventies, hahaha :)
Totally hip!
 
We have no viable alternative to TAA. All other alternatives are either prohibitively more expensive or have far more glaring drawbacks.
 
Not sure if that's an argument, because real life is a gradual process of smooth change too. Would you call a sunset hysteresis, just because it happens over some period of time?
Would you call any physics simulation we do wrong too, because it works by taking a previous state and integrating the change over a timestep? Maybe even improving cached contact forces over multiple frames? Surely not.

So why is TAA different here? Why is it bad or wrong, even though it works the same way?
The only answer can be subjective perception of error. But the success of TAA implies only a minority is affected. Still, that's a problem, so what would you propose as an alternative?



From the paper:

Fast Convergence Heuristics
We further accelerate convergence with new heuristic based on per-texel thresholding for irradiance data. Our lower threshold detects changes with magnitude above 25% of maximum value and lowers the hysteresis by 0.15f. Our higher threshold detects changes with magnitude above 80% and lowers the hysteresis to 0.0f—we assume in this case that the distribution the probe is sampling has changed completely. These thresholds are active only for irradiance updates—we found them to be too unstable when updating visibility

In the presence of TAA, the latter won't work as intended.

It seems its (TAA's) real use is to create a haphazard stop band, to lessen symptoms of undersampling.
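Spelled out, the heuristic quoted above amounts to something like the following minimal sketch. This is my own illustration, not the paper's code: the per-texel values are reduced to a scalar, the function names are made up, and maxValue is assumed to be the maximum representable irradiance of the texel format.

```cpp
#include <algorithm>
#include <cmath>

// Hysteresis is the weight of the old value in the temporal blend:
// result = hysteresis * old + (1 - hysteresis) * new.
float adjustHysteresis(float baseHysteresis, float oldIrradiance,
                       float newSample, float maxValue)
{
    const float change = std::fabs(newSample - oldIrradiance);

    if (change > 0.80f * maxValue)
        return 0.0f;  // assume the sampled distribution changed completely: drop the history
    if (change > 0.25f * maxValue)
        return std::max(0.0f, baseHysteresis - 0.15f);  // converge faster
    return baseHysteresis;
}

float blendIrradiance(float oldIrradiance, float newSample, float hysteresis)
{
    return hysteresis * oldIrradiance + (1.0f - hysteresis) * newSample;
}
```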
 
In the presence of TAA, the latter won't work as intended.
It seems they describe using a temporal exponential average to smooth irradiance in GI probes, which is a similar practice to accumulating multiple samples per pixel in any raytracing method.
They also mention this does not work for them with visibility. I guess they mean the average distance per probe texel, which they use to 'solve' the leakage problem we know from VXGI, for example.

But how is any of this related to TAA? Notice that an error in mapping GI from probes can affect large areas of space. TAA works on a cone with a tiny angle; errors affect only the neighborhood of a single pixel - this is why it works so well, better than SSR, SSAO, SSGI...
Also, DDGI will work equally well (or badly) no matter what AA method you use. It's a world space method and not related to any image reconstruction method. Even if both methods cause lag, the lag is not coupled.
I remember NV's early research presentation of DDGI had really bad smearing effects and motion trails, but this is not related to DDGI - it was only because they did not spend time improving their very basic TAA implementation. (In case you are confusing that.)

It seems its (TAA's) real use is to create a haphazard stop band, to lessen symptoms of undersampling.
What's a haphazard? All this sounds like some kind of disease :)
TAA solves undersampling, yes. It's not about symptoms - it completely fixes undersampling in practice (AFAIK it's common to use 256 samples before the sequence repeats, but we could use infinite samples as well).
When does it fail? And when does it become wrong for real? I'll answer myself:
* All samples except the current one are outdated. If lighting changes rapidly, this error becomes visible, but it is bounded. (No problem in practice.)
* Getting the samples involves reprojection of the previous frame(s). If this fails (it will, e.g. in the presence of transparency, or with inaccurate motion vectors due to skinning and other non-rigid transformations), unbounded error creeps in and stays for a long time. This is the root of all problems.
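To make the accumulation itself concrete, here is a minimal single-pixel sketch of the exponential history blend most implementations use. It's scalar and the names are my own; real resolves work on full color, take jitter from a low-discrepancy sequence, and usually try to clamp or reject bad history.

```cpp
// Exponential blend of the current jittered sample into the accumulated
// history. 'alpha' is typically small (~0.1), so most of the output comes
// from previous frames.
float taaResolve(float currentSample,       // this frame's jittered sample
                 float reprojectedHistory,  // history fetched at (uv - motionVector)
                 float alpha = 0.1f)
{
    // If the reprojection was wrong (transparency, bad motion vectors from
    // skinning or other non-rigid motion), 'reprojectedHistory' belongs to a
    // different surface, and that error keeps getting carried forward for
    // many frames -- the unbounded failure case described above.
    return alpha * currentSample + (1.0f - alpha) * reprojectedHistory;
}
```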

Can this be fixed? Yes. We could just store a history of N frames for N samples. Then reprojection errors could be rejected robustly.
The reason we likely don't see this happening is the same reason we have no alternative at all: finite performance. It would cost too much bandwidth. Nobody is to blame here.
However, now with RT there can be an additional reason to keep at least a small number of frames, like 4. Maybe we'll get better AA along the way as well.
The other hope is ML, which has a better chance of detecting and resolving smearing and ghosting, eventually.
 
Wouldn't it be better to just move shading to texture space?
With DX12 Ultimate feature set hardware, we can query texel visibility with sampler feedback/image_footprint methods and shade only visible texels instead of shading several 4K texture atlases like in Ashes of the Singularity.
Then we can simply resample already shaded parts of textures in following frames unless the lighting state has changed. Sounds way more natural (no need for error-prone motion vectors) and less error prone than screen space TAA resampling, but would probably require a lot of video memory to store shaded textures per object.
Texture space shading can also work well with RT: there's no need to store a g-buffer or v-buffer, RT and the BVH can be used for primary visibility, RT should be way faster for micropolygons and should enable all kinds of flexible analytical/regular screen sampling, and texture space shading would eliminate shading aliasing.
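A rough sketch of that per-frame flow, just to pin down the idea: everything here is made-up C++ pseudocode, not the actual D3D12 sampler feedback API, and the feedback read-back is abstracted into a requestedThisFrame flag.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical tile state for a texture-space shading atlas.
struct Tile {
    bool requestedThisFrame = false;  // visible according to feedback
    bool shaded = false;              // already has valid shading in the atlas
    bool lightingDirty = true;        // lighting state changed since last shade
};

// Stub for illustration: run material + lighting for one atlas tile.
void shadeTileInTextureSpace(size_t /*tileIndex*/) {}

// Stub for illustration: final pass just samples the pre-shaded atlas.
void rasterizeSceneSamplingShadedAtlas() {}

void textureSpaceFrame(std::vector<Tile>& tiles)
{
    for (size_t i = 0; i < tiles.size(); ++i)
    {
        Tile& t = tiles[i];
        if (!t.requestedThisFrame)
            continue;                       // not visible: spend no shading on it
        if (!t.shaded || t.lightingDirty)   // shade only new or invalidated tiles
        {
            shadeTileInTextureSpace(i);
            t.shaded = true;
            t.lightingDirty = false;
        }
        // otherwise: reuse last frame's shading, no motion vectors needed
    }
    rasterizeSceneSamplingShadedAtlas();
}
```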
 
Wouldn't it be better to just move shading to texture space?
With DX12 Ultimate feature set hardware, we can query texel visibility with sampler feedback/image_footprint methods and shade only visible texels instead of shading several 4K texture atlases like in Ashes of the Singularity.
The promise is really huge, but there are so many problems:
We need unique texture space for every triangle we render. This also breaks instancing - we now need to manage memory for each instance on its own.
We need global UV parametrization for everything, and UV seams become more of a visual problem than they already are, because they now also affect lighting.
The memory management for all those fragments and mip levels alone seems challenging.
A big advantage would be to use it for irradiance caching to get multiple bounces for free, or eventually to do denoising in world space. But then we would also need to keep stuff which is not visible on screen, at lower res, increasing complexity even further.

Sampler feedback does not help much with any of this - it only helps with efficiency. I'm surprised you propose it. Likely it would require building a new gfx engine from scratch, and for an AAA FPS it becomes really hard i think. Ashes is just baby steps in comparison to that. Even UE5 would seem not very impressive once somebody gets this to work.

Personally i always consider it an option for myself, because i work on some alternative geometry thingy (a bit like PTEX - no texture seams). The goal would be to reuse my GI LOD and BVH system also for patches of visible geometry and texture, and Texture Space might just be an extension to that system and relatively easy to add.
But currently it looks like my geometry stuff can only work for terrain, and coarse architecture if i'm lucky. Dynamic objects like characters, vehicles, foliage, or small-scale human-made details (furniture, guns...) remain traditional triangle meshes. Because of that, TS just seems like too much pain, although i'm a big fan of the idea.

So what do you think? Could it already be a serious option for the industry, seriously?
I expected it to happen during the next-gen era, but the longer i think about it, the less doable it seems :) ... the success of SS techniques (VRS, denoising) speaks against it too.
 
So what do you think? Could it already be a serious option for the industry, seriously?
It might be doable for certain things at least, though not for view-dependent stuff such as reflections.
It looks very attractive and promising in my imagination :D, but I agree, it would require a hell of a lot of reengineering and research, that's for sure.
I guess we should see at least some hybrid implementations first where TSS makes the most sense.
 
I guess we should see at least some hybrid implementations first where TSS makes the most sense.
Yeah... probably the only way to get going. Otherwise the risk is high that the company no longer exists, or that everybody has moved on to some combined geometry-and-texture something, by the time the new engine is done.
But who knows... waiting for Artjom's lone adventure in Texture Space :D
 
Yeah... probably the only way to get going. Otherwise the risk is high that the company no longer exists, or that everybody has moved on to some combined geometry-and-texture something, by the time the new engine is done.
But who knows... waiting for Artjom's lone adventure in Texture Space :D

I agree that implementing texture space lighting would require a very unique engine. But once one has the robust parametrisation/atlasing/virtualization systems working, a bunch of cool techniques open up. Diffuse lighting caching/decoupling from raster fps is just one of them.

A robust universal virtual atlas system like that can allow unlimited decals everywhere, for example. Layered materials also become cheaper. One could implement a run-time material generation system akin to Quixel for infinite texture variety with very little storage.

I bet VFX people can come up with some pretty cool effects that would not otherwise be as viable, too.

One can hitch a ride on the system to do some persistent texture space effects or simulations that are not view dependent.

If you have some form of heightmap displacement system that you can also use universally for all geo, be it through geometry tessellation or some form of POM, one can layer and blend many different heightmaps dynamically to deform geometry very easily (e.g. every bullet-hole decal creates an actual dent or crack in any object in the scene).
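For illustration, the bullet-hole case could be as simple as splatting a small dent heightmap into the object's height texture in texture space. This is a toy sketch; the names and the additive blend are my own choices, not anyone's engine.

```cpp
#include <cstddef>
#include <vector>

struct HeightMap {
    int width = 0, height = 0;
    std::vector<float> texels;  // displacement in world units, row-major
};

// Blend a small dent heightmap into the target at texel position (dstX, dstY).
// Additive blend here; min()/max() blends are other reasonable options.
void splatDent(HeightMap& target, const HeightMap& dent, int dstX, int dstY)
{
    for (int y = 0; y < dent.height; ++y)
        for (int x = 0; x < dent.width; ++x)
        {
            const int tx = dstX + x;
            const int ty = dstY + y;
            if (tx < 0 || ty < 0 || tx >= target.width || ty >= target.height)
                continue;  // clip at the texture border
            target.texels[size_t(ty) * target.width + tx]
                += dent.texels[size_t(y) * dent.width + x];
        }
}
```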

Whoever first builds an engine that can take advantage of stuff like that will have a game that feels a generation apart from everything else.
 
But once one has the robust parametrisation/atlasing/virtualization systems working, a bunch of cool techniques open up.
There are two options to do this:
1. Keep current workflows and data formats, but reserve unique texture space for each instance of a model. Downside: intersecting geometry (think of the 20+ onion layers of Quixel stuff in the UE5 demo) will waste a lot of texture space which is not visible.
2. Resolve those problems offline, ending up with e.g. remeshing the static world to minimize surface and texture area. Downside: breaks instancing, because each copy of a model now becomes unique. Neither geometry nor textures remain shared.
It's very difficult to find a practical compromise here. It affects the offline tools and editor as much as it does the runtime engine. It's a revolution, and we don't want revolutions but a smooth transition.

One can hitch a ride on the system to do some persistent texture space effects or simulations
Yeah, but this point is also tied to content creation, because the promise is much bigger on that side. It's nice to have dynamic displacement at runtime for decals, but it's much nicer to do such simulations (also) offline to generate a natural environment with little manual work. (Epic's proposal to build worlds with a limited set of repetitive Quixel models will be very restrictive here, i guess.)
But to make this practical with limited storage, we then need to find other ways to utilize 'instancing'.
Personally i currently think it might work to have a large (volume) texture of rock, and then project parts of that onto small patches of geometry. We need some blending to avoid seams, and we need an offline synthesis system which generates natural results on any geometry, e.g. by looking at curvature.
Then we can get high detail everywhere from little data, eventually. And things like SDF bricks, point cloud splats, etc. seem attractive for rendering such stuff. How to do LOD is related too.
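One standard way to do the "project a big rock texture onto patches and blend away the seams" part is triplanar projection; a minimal sketch of the general idea, not necessarily what's meant above. sampleRock() is a stand-in for the volume/tiled texture fetch, and the normal is assumed to be normalized.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Stand-in for fetching the (tiled) rock texture; here just a procedural pattern.
float sampleRock(float u, float v)
{
    return 0.5f + 0.5f * std::sin(u * 12.9898f + v * 78.233f);
}

// Sample the rock texture three times along the world axes and blend the
// results by how well the surface normal aligns with each axis; the blend
// hides the projection seams.
float triplanarRock(const Vec3& worldPos, const Vec3& normal, float tileScale)
{
    float wx = std::fabs(normal.x), wy = std::fabs(normal.y), wz = std::fabs(normal.z);
    const float sum = wx + wy + wz;
    wx /= sum; wy /= sum; wz /= sum;

    const float sx = sampleRock(worldPos.y * tileScale, worldPos.z * tileScale);  // X projection
    const float sy = sampleRock(worldPos.x * tileScale, worldPos.z * tileScale);  // Y projection
    const float sz = sampleRock(worldPos.x * tileScale, worldPos.y * tileScale);  // Z projection
    return wx * sx + wy * sy + wz * sz;
}
```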
However, I think the concepts of duplicating models and using tiled textures start to feel artificial and uncanny. The split workflow of geometry, UVs and texturing feels horribly inefficient too.

It's a lot of things, all connected and depending on each other. Doing a smooth transition might not work, and any revolution is very likely to fail, so it's very risky. Still i'm crazy enough to keep working on this crap. :)
 
There are two options to do this:
1. Keep current workflows and data formats, but reserve unique texture space for each instance of a model. Downside: intersecting geometry (think of the 20+ onion layers of Quixel stuff in the UE5 demo) will waste a lot of texture space which is not visible.
2. Resolve those problems offline, ending up with e.g. remeshing the static world to minimize surface and texture area. Downside: breaks instancing, because each copy of a model now becomes unique. Neither geometry nor textures remain shared.
It's very difficult to find a practical compromise here. It affects the offline tools and editor as much as it does the runtime engine. It's a revolution, and we don't want revolutions but a smooth transition.

How did the games with virtual texturing that already shipped do it? The Id Tech games and Trials?
 
How did the games with virtual texturing that already shipped do it? The Id Tech games and Trials?
I know only Rage, and back then i was not yet aware of the 'breaking instancing' problem of option 2, but there was at least the claim of unique texture everywhere. It would be interesting to play it again and look for repeated geometry or intersections.
I also wonder why id moved away from it. Low texture resolution? Preprocessing times? IDK. I always expected id would have it easier than others to switch to TS, but they didn't. They have also shown little interest in realtime GI so far.

Trials has an ingame editor and is all about instancing, so it must be more like option 1.

Both games 'only' focus on the current frustum. It's not about caching persistent things like indirect bounces or decals.
If we want caching and construction of multi-layer materials at runtime as a background task, we want to extend this to the whole environment with LOD.
(This is also why many devs sound much more optimistic about TS than i do - people have different goals in mind.)

Some games also use a lot of impostors, e.g. Metro Exodus. I assume they relight them together with the rest of the gBuffer every frame, but this would be an interesting option to implement TS and get some of its promises with little effort.
The idea would be to divide large models into small patches; each patch becomes an impostor which can update its lighting only every Nth frame. Then rasterize using reprojection (holes / inaccuracy would be a problem - think about a rotating sphere made of many impostors).
We would not only get the TS advantage, but also solve the 'brute force problem' of games, which is rendering and relighting the same pixels again and again, spending lots of processing power on things that barely change.
Though we now need to generate a LOD hierarchy of patches for everything, so the effort on the tools side is still large.
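The "relight only every Nth frame" scheduling could be as simple as the toy sketch below (all names made up; a real scheduler would also weight patches by screen size, distance, and how much the lighting changed).

```cpp
#include <cstddef>
#include <cstdint>

// Stub for illustration: refresh the cached shading of one impostor patch.
void relightImpostor(size_t /*patchIndex*/) {}

// Only ~1/N of the patches refresh their lighting per frame, staggered so the
// per-frame cost stays roughly constant; the rest reuse their cached shading.
void updateImpostors(size_t patchCount, uint64_t frameIndex, uint32_t N)
{
    for (size_t i = 0; i < patchCount; ++i)
    {
        if ((frameIndex + i) % N == 0)
            relightImpostor(i);
    }
}
```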
 
Instancing is about rendering the same *geometry* many times, with whichever texture(s) you want thanks to bindless...
With MegaTextures you give a unique set of texture coordinates to each geometry, but you could still store offsets per instance to access other parts of the MegaTexture.
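A tiny sketch of that per-instance indirection (names are illustrative): the mesh keeps one shared UV set, and each instance remaps it into its own region of the large texture.

```cpp
struct Float2 { float x, y; };

struct InstanceData {
    Float2 atlasOffset;  // where this instance's texture region starts
    Float2 atlasScale;   // size of that region in atlas UV space
};

// Shared mesh UVs in [0,1] are remapped into the instance's own atlas region.
Float2 instanceUV(Float2 meshUV, const InstanceData& inst)
{
    return { inst.atlasOffset.x + meshUV.x * inst.atlasScale.x,
             inst.atlasOffset.y + meshUV.y * inst.atlasScale.y };
}
```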
As for using virtual textures, WDDM is sooooo horribly slow that it's impractical, even though it would be the best solution. (Virtually allocate all of your resources and only commit the pages you need on-demand through very fast streaming.)
 
Instancing is about rendering the same *geometry* many times, with whichever texture(s) you want thanks to bindless...
Yeah, i use the term 'instancing' to mean any form of reusing stuff multiple times, both geometry and textures. I don't mean the specific instancing offered by gfx APIs. Confusing, but i don't know what other term i should use. 'Repetition' would sound like criticizing art direction.
As for using virtual textures, WDDM is sooooo horribly slow that it's impractical
To me, the idea of rendering stuff first, then seeing what's visible, then uploading textures as needed is not that attractive at all. Even if latency issues were no big problem.
Using only the camera position to prepare the surroundings, so everything is already there when the player turns the camera, feels more promising. Even if it needs more memory, it starts to make sense once we take decompression, cached lighting, compositing complex materials, etc. into account.
 
To me, the idea of rendering stuff first, then seeing what's visible, then uploading textures as needed is not that attractive at all. Even if latency issues were no big problem.
Using only the camera position to prepare the surroundings, so everything is already there when the player turns the camera, feels more promising. Even if it needs more memory, it starts to make sense once we take decompression, cached lighting, compositing complex materials, etc. into account.

With a fixed or computed world-unit/texel ratio you could also compute the tiles/LODs of surrounding objects; alternatively you could render multiple low-resolution points of view (like a cubemap) to know what's needed around the player...
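As a sketch of the first option: with a known world-units-per-texel ratio, the required mip/tile level of a surrounding object can be picked purely from camera distance, no render pass or feedback needed. Parameters and names here are illustrative.

```cpp
#include <algorithm>
#include <cmath>

int selectMip(float distance,            // camera to object, world units
              float worldUnitsPerTexel,  // texel density of the object's texture at mip 0
              float verticalFovRadians,
              float screenHeightPixels,
              int   mipCount)
{
    // World-space size covered by one screen pixel at this distance.
    const float worldPerPixel = distance * 2.0f * std::tan(verticalFovRadians * 0.5f)
                              / screenHeightPixels;
    // How many mip-0 texels fall into one pixel; each mip level halves that.
    const float texelsPerPixel = worldPerPixel / worldUnitsPerTexel;
    const int mip = int(std::floor(std::log2(std::max(texelsPerPixel, 1.0f))));
    return std::clamp(mip, 0, mipCount - 1);
}
```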
 