Next gen lighting technologies - voxelised, traced, and everything else *spawn*

The inability to edit after 10 minutes makes me sad.
There are quite a few cvars left over from Q2PRO that are ignored by Q2VKPT. In particular, it ignores the anisotropic filtering cvar. It seems I accidentally disabled it at some point, though :( (see src/refresh/vkpt/textures.c (579))
This is one I am thinking about. Isn't the usual concept of texture filtering something entirely different here with this path-traced Q2, since everything is path traced? Isn't the textured surface and its entire look reconstructed/composed during the denoise step? That is its "filtering", right?
 
^^ I think the RTX Mode ON comparison (10 rays per pixel, crazy mirror bounces on a simple surface) shows how trivial the ray/triangle intersection tests are on the RT core (it runs really well!) versus how much more expensive shading of the results is with "RTX off" and fewer rays per pixel. The performance difference in context aligns with what we have been seeing across BFV and the Northlight presentations.
No, it should be much better. In BFV the shading cost is high, probably because they have many shaders and thus divergence; they also need to iterate over shadow maps to shade hit points.
Here no shaders are used at all. Instead one random light is chosen, and a single shadow ray is needed to determine light visibility, plus a single texture fetch to get albedo.
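That one-random-light scheme can be sketched in a few lines. This is a minimal toy sketch with hypothetical helpers (shade_hit, visible), not the actual Q2VKPT code, which lives in C/GLSL — the point is just the 1/pdf weighting that keeps picking a single light unbiased:

```python
import random

def shade_hit(hit_point, lights, visible, albedo):
    """One-light next-event estimation: pick a single random light,
    test visibility with one shadow ray, weight by 1/pdf."""
    if not lights:
        return 0.0
    light = random.choice(lights)       # pick probability = 1 / len(lights)
    if not visible(hit_point, light):   # the single shadow ray
        return 0.0
    # weight by the inverse pick probability so the estimator stays
    # unbiased even though only one light is sampled per hit
    return albedo * light["intensity"] * len(lights)
```

Averaged over many hits, the len(lights) factor exactly compensates for only ever looking at one light per sample.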

So, if 10 mirror reflections work with good performance, you could use 5 indirect bounces instead (one ray plus one shadow ray each). Performance should be very similar. (In a modern game with complex geometry perf would drop, but nonetheless this is really good.)
However, I assume the best IQ would come from using just 2 bounces but 3 samples. The extra samples might prevent the filter from washing out important high-frequency details like contact shadows.
In any case the game could look a lot better with a bit of tuning.

This is one I am thinking about. Isn't the usual concept of texture filtering something entirely different here with this path-traced Q2, since everything is path traced? Isn't the textured surface and its entire look reconstructed/composed during the denoise step? That is its "filtering", right?
No. The filter works only on incoming light. The textures are composited later with this result, similar to deferred shading (see earlier posts).
So the only spot where textures affect the filter is their contribution to indirect lighting, which still gives a smooth signal in the filtered result.
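That split can be shown with a toy sketch (my names, and 1-D lists standing in for GPU images): divide the noisy radiance by albedo so the filter only ever sees smooth lighting, filter, then multiply the texture back:

```python
def composite(noisy_radiance, albedo, filter_fn):
    """Demodulate albedo, filter only the irradiance, remodulate.
    Toy 1-D version of the albedo-demodulation trick used by
    denoisers; filter_fn is any smoothing filter."""
    eps = 1e-4
    # demodulate: strip texture detail so the filter sees smooth lighting
    irradiance = [r / max(a, eps) for r, a in zip(noisy_radiance, albedo)]
    filtered = filter_fn(irradiance)
    # remodulate: crisp texture detail is reapplied after filtering
    return [f * a for f, a in zip(filtered, albedo)]
```

With e.g. a simple box filter as filter_fn, the lighting gets smoothed while the texture pattern survives untouched, which is why texture filtering is largely orthogonal to the denoiser here.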
 
... We can say that in this case textures are not very interesting. This would change only with more complex shading like PBS. But even then, the most interesting part of path tracing is how to apply lighting.
There are several options (though I'm not sure of the proper terminology):


Random sampling: This is the only method that needs no knowledge about the lights in the scene. It traces rays in random directions and with luck hits a light source. Works well for large area lights / a cloudy day.
This is basically the method that works in any case, but it requires many more samples and is very noisy. I doubt the filters presented here would work with it.

Light sampling: At each hit, trace an additional ray towards one or several known light sources. This is what Q2 is doing here, but having to manage light sources and track where they will probably affect the scene is a huge limitation. (We have the same limitation in all current games.)
Because the method captures direct lighting pretty well, it is much less noisy than the above, but restricted.

Bidirectional: A mix of both, aiming to improve specular lighting and caustics. Same limitations, but less specular noise than the above.


All methods can converge to ground truth (one must only be careful to relate weighting to probability correctly).
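That "relate weighting to probability" caveat is the whole trick. A toy 1-D integral (my own example, not from the thread) shows it: a uniform estimator and an importance-sampled one both converge to the same answer, as long as every sample is weighted by 1/pdf of the distribution it was drawn from:

```python
import math
import random

def estimate_uniform(f, n):
    # pdf(x) = 1 on [0, 1]; weight = f(x) / 1
    return sum(f(random.random()) for _ in range(n)) / n

def estimate_importance(f, n):
    # pdf(x) = 2x; sample via inverse CDF x = sqrt(u); weight = f(x) / pdf(x)
    total = 0.0
    for _ in range(n):
        x = math.sqrt(max(random.random(), 1e-12))  # guard against x == 0
        total += f(x) / (2.0 * x)
    return total / n

f = lambda x: 2.0 * x  # its integral over [0, 1] is exactly 1
```

Because the importance pdf here matches f perfectly, every single sample evaluates to exactly 1 — zero variance — while the uniform estimator needs thousands of samples to get close. Light sampling vs. random sampling is the same story, just in path space.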
 
It has to include light sampling. Random sampling will realistically only ever hit large enough lights. Great for a spherical environment lightmap, but useless for lighting from a small spotlight or a handheld match in a horror game, let alone an infinitely small head-shadow point light. Ray directions shouldn't be randomised but statistically selected within a meaningful range based on light source size, so the rays aren't wasted on constant misses. Point lights would thus be traced from light origin to surface exactly, every ray, whereas a large rectangular source like a window would be sampled over a larger range, and a sky map sampled randomly.
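The "meaningful range based on light source size" idea is essentially cone sampling. A sketch under the assumption of a cone around +Z (a real renderer would rotate the result towards the light's direction):

```python
import math
import random

def sample_cone(cos_theta_max):
    """Uniformly sample a direction inside a cone around +Z whose
    half-angle has cosine cos_theta_max.
    cos_theta_max = 1  -> degenerates to one exact ray (point light)
    cos_theta_max < 1  -> widens the cone (area light / window)
    cos_theta_max = -1 -> the whole sphere (sky map)."""
    u1, u2 = random.random(), random.random()
    cos_t = 1.0 - u1 * (1.0 - cos_theta_max)   # uniform in [cos_theta_max, 1]
    sin_t = math.sqrt(max(0.0, 1.0 - cos_t * cos_t))
    phi = 2.0 * math.pi * u2
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)
```

The same function covers all three cases from the post just by shrinking or growing the cone angle, which is exactly why no shadow ray needs to be wasted on a guaranteed miss.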
 
It has to include light sampling.
Yes, but in the long run we have to get rid of the need to treat lights specially.
I have only recently realized this is a big problem - my own work builds on the radiosity method, which does not have this problem, but we need to combine both to solve all limitations.
But that's really thinking very far ahead; for now light sampling is fine.
 
So it goes from big pile of browns to big pile of reds...
 
What puzzles me is that in the area starting at 1:05 RTX looks pretty bad. There is a higher pathway, and under its platform RTX shows greenish light (specular?), while the baked lighting is properly dark.
There seems to be no light source that could explain this. Overall the baked lighting appears much more correct to me, but the number of bounces cannot be the reason.
 
What puzzles me is that in the area starting at 1:05 RTX looks pretty bad. There is a higher pathway, and under its platform RTX shows greenish light (specular?), while the baked lighting is properly dark.
There seems to be no light source that could explain this. Overall the baked lighting appears much more correct to me, but the number of bounces cannot be the reason.
Maybe it is an artistic thing? Like the strip area lights are just set to be really bright...
 
Yeah, agree. I guess it's a reflection of those lights at the floor; they are brighter than in the original game. Still, I miss some occlusion under the platform, but some artist work on the lighting could do wonders.
 

So raytracing reflection 1 is RT and raytracing reflection 0 is SSR. (1 = on, 0 = off)
Nice comparison of both methods.
RT doesn't always look better though, e.g.:
SSR shows a nice stable image.
RT shows disturbing twinkling pixels while the camera is in a static position.
(Best rewind a bit and wait until streaming settles on 4K quality, otherwise you might see a blurry mess.)
RT twinkling is even stronger in the semi-sphere mirror, although at least it reflects, while SSR doesn't.
As far as I can see, RT runs at 1/4 resolution: 80 FPS versus 120 FPS with SSR.
If this 2080 Ti could run at 1 practical Gigaray/s instead of 10 marketing Gigarays/s, it already might look a bit cleaner :)
 
Sadly Ubi titles are now AMD sponsored :(
Sadly? At least it pushes their games towards open technologies and standards, something that couldn't be said when they were sponsored by NVIDIA.
Also, AMD sponsoring them doesn't suggest in any way that they wouldn't include DXR support in future titles.
 
Ah. A fantastic glimpse of The Division 3.

RT goodness!

At around 1:30 he switches between RT AO on and off. Going by the GPU time, RT AO costs around 2.5 ms. It is really time we got more information about RT shadows and AO.
 
What does that have to do with a future Division 3 title? Because the scene looks like something from The Division?
In The Division a big part of the game is the subways. It's modelled pretty similarly here to how they modelled it in The Division.

That's all; I'm not saying it's actually linked, more like thinking about what The Division could look like with RT on.
 