DirectX Ray-Tracing [DXR]

Suddenly I got flashbacks to the good old "Doom 3 flashlight shadows are not proper shadows" discussions.. :)
Baking / texel shading could be a decent fit for the tech currently; I do not know how feasible it is without a huge amount of shading reuse.
Well, actually I'm sure developers will find funny little tricks and structures they can use it for.
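
For what it's worth, here is a minimal Python sketch of the kind of texel-space shading reuse I mean: expensive per-texel shading results live in a cache and only a fraction of texels are re-shaded each frame. The atlas size, refresh rate and the expensive_shade() stand-in are all made up for illustration, not taken from any real engine.

```python
import numpy as np

TEXELS = 1 << 16                  # pretend object-space shading atlas size
REFRESH_PER_FRAME = TEXELS // 8   # re-shade 1/8 of the atlas per frame

cache = np.zeros(TEXELS, dtype=np.float32)   # cached shading results

def expensive_shade(texel_ids, frame):
    # Stand-in for per-texel ray-traced shading; cost scales with len(texel_ids).
    return np.sin(texel_ids * 0.001 + frame * 0.05).astype(np.float32)

for frame in range(8):
    # Round-robin refresh: per-frame shading cost stays flat, everything else
    # is reused from previous frames; rasterization would just sample `cache`.
    start = (frame * REFRESH_PER_FRAME) % TEXELS
    texel_ids = (np.arange(REFRESH_PER_FRAME) + start) % TEXELS
    cache[texel_ids] = expensive_shade(texel_ids, frame)
    print(f"frame {frame}: re-shaded {len(texel_ids)} texels, reused {TEXELS - len(texel_ids)}")
```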
 
So I was watching epic's star wars demo.

I got the distinct impression that it was just a rasterized scene with reflections and shadows traced into some kind of modest-resolution voxel grid? The reflections seem to have a slight blur and a boxy nature.

Things like the depth of field still looked like typical screen space to my eye too.

Is there anything solid on what's actually going on?

FWIW I think the future is very much rasterized geometry mixed with voxelized indirect effects (cone tracing or whatnot). I don't see the use of true, precise triangle-level ray tracing or primary rays.

[Edit]

^^ Watching the Nvidia video above, I really feel this is the wrong approach to the problem. Being so heavily reliant on noise and what appears to be temporal surface denoising is really concerning (quite a bit of bleed is introduced which didn't look to be screen space.. not sure).
High-frequency sparse sampling with lots of noise, plus a blur down to a lower frequency to hide the noise, feels like the wrong approach here, because it's being super precise in the first step and then trying to blur out the resulting high-frequency inaccuracy.. it feels especially wrong because most things are very low frequency. It feels like everything they showed there would be better with something like cone tracing, where your samples can be low frequency to begin with.

I'd much rather have low-frequency imprecision (e.g. looking lower res in the blurry parts of a shadow) than high-frequency noise and temporal changes. And it feels like it'd be much more efficient too.
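
To make the trade-off I mean concrete, here's a throwaway 1D toy in Python; nothing in it comes from the demos, the penumbra, ray count and blur radius are all invented. A few binary shadow-ray samples per pixel plus a blur ("denoise") still ends up with low-frequency error after the blur, while a single prefiltered, cone-style lookup starts out smooth at the cost of a little bias.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 256)

def box_blur(v, r):
    # Simple box filter with edge clamping, radius r.
    padded = np.pad(v, r, mode="edge")
    return np.convolve(padded, np.ones(2 * r + 1) / (2 * r + 1), mode="valid")

# Ground-truth visibility of an area light: a smooth, low-frequency penumbra.
truth = np.clip((x - 0.4) / 0.2, 0.0, 1.0)

# (a) Sparse ray-traced soft shadow: a few binary visibility samples per pixel
#     (each shadow ray either hits the occluder or not), then a blur standing
#     in for the denoiser.
rays_per_pixel = 4
noisy = rng.binomial(rays_per_pixel, truth) / rays_per_pixel
denoised = box_blur(noisy, 4)

# (b) "Cone-like" prefiltered lookup: one sample of an already-filtered signal,
#     so no noise and no temporal flicker, only a little low-frequency bias.
prefiltered = box_blur(truth, 4)

for name, img in [("raw rays", noisy), ("rays + blur", denoised), ("prefiltered", prefiltered)]:
    err = np.abs(img - truth)
    print(f"{name:12s} mean error {err.mean():.3f}  max error {err.max():.3f}")
```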
 
So I was watching epic's star wars demo. [...] Is there anything solid on what's actually going on?

In the video description it says:

Next-generation rendering features shown in today’s demo include:
● Textured area lights
● Ray-traced area light shadows
● Ray-traced reflections
● Ray-traced ambient occlusion
● Cinematic depth of field (DOF)
● NVIDIA GameWorks ray tracing denoising


Something I see missing in all these demos is transparencies. I wonder why...
 
Suddenly I got flashbacks to the good old "Doom 3 flashlight shadows are not proper shadows" discussions.. :)
Ah yes, that, and the reflections of the cars and the puddles; we have been talking about this since PGR3. :LOL:
 
Quake 2 Realtime GPU Pathtracing
https://r.tapatalk.com/shareLink?share_fid=30086&share_tid=60290&url=https://forum.beyond3d.com/index.php?threads/Quake-2-Realtime-GPU-Pathtracing.60290/&share_type=t

Real-time path tracing is certainly feasible within limitations.

IIRC this was done in OpenGL 3, and I think the results looked very promising.
Could this be considered path tracing?
[Image: direct light only, the 3D rendering looks unnaturally dark]

[Image: direct light + GI, photorealistic well-lit room]


Btw, here's a video from the link you shared. It looks better to me than the Quake 3 and Quake 4 videos, but it is also more recent.
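
The difference between the two screenshots above is basically the difference between the two estimators in this toy Python sketch. The scene is an imaginary closed "gray furnace" (every surface emits Le and reflects diffusely with albedo rho, and every point sees identical surroundings), chosen only because it has an exact answer; it is not the scene in the images, and the numbers are made up.

```python
import numpy as np

Le, rho = 1.0, 0.7            # made-up emission and albedo
rng = np.random.default_rng(1)

def direct_only():
    # Emission plus a single reflection of the directly visible emitters.
    return Le + rho * Le

def path_traced(p_continue=0.8):
    # One path: keep bouncing with Russian roulette, accumulating emission
    # weighted by the path throughput. Unbiased estimate of Le / (1 - rho).
    L, throughput = Le, 1.0
    while rng.random() < p_continue:
        throughput *= rho / p_continue
        L += throughput * Le
    return L

samples = [path_traced() for _ in range(100_000)]
print("direct only      :", direct_only())        # ~1.70, looks dark
print("path traced (MC) :", np.mean(samples))     # ~3.33, plus some noise
print("analytic GI      :", Le / (1.0 - rho))     # 3.33...
```

Path tracing keeps extending the path and summing the indirect bounces, which is where the extra brightness in the second image comes from; stopping after direct lighting is what makes the first image look so dark.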

 
That demo is quite terrible, IMHO. Aside from the poor showcasing of real-time ray-traced lighting, the physics (reaction to wind) is totally wrong. So many little things just make this demo off-putting.

I think the demo is okay but I am not convinced by the raytracing in this case...

The best demo is the Seed one imo.

Seeing the performance requirements, I am not sure it will be the solution anytime soon...
 
Are these demos running at 4K? Can we assume performance requirements would be lower at 1080p or 720p?

What about the AI denoising, what's the word on performance gains and required performance? Can we assume it benefits from half precision, as I've heard half precision is sufficient for some aspects of DNN software? What about quarter precision?
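
On the half-precision question, a rough numpy sketch: the 3x3 filter below is just a stand-in for one denoiser layer, not NVIDIA's actual network, and the image and weights are random. Running the same filter in fp16 vs fp32 typically changes the output by something on the order of 1e-3, well below what is visible in an 8-bit image, which is why fp16 is generally considered sufficient for inference. Quarter precision (int8) usually needs calibration or retraining, so as far as I know it isn't the same kind of drop-in.

```python
import numpy as np

rng = np.random.default_rng(2)
image = rng.random((1, 64, 64)).astype(np.float32)   # fake noisy AO/shadow buffer
kernel = rng.random((3, 3)).astype(np.float32)
kernel /= kernel.sum()

def conv3x3(img, k):
    # Naive 3x3 convolution with edge clamping (illustration only).
    pad = np.pad(img, ((0, 0), (1, 1), (1, 1)), mode="edge")
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * pad[:, dy:dy + img.shape[1], dx:dx + img.shape[2]]
    return out

full = conv3x3(image, kernel)
half = conv3x3(image.astype(np.float16), kernel.astype(np.float16)).astype(np.float32)

# Typically around 1e-3 or less for values in [0, 1].
print("max abs difference fp16 vs fp32:", np.abs(full - half).max())
```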
 
[...] Being so heavily reliant on noise and what appears to be temporal surface denoising is really concerning [...]

They say AI denoising; I assume recurrent NN denoising, which would be more advanced than that. Something based on the following research. Also likely to improve over time.
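
As I understand it, the recurrent idea boils down to the denoiser seeing its own previous output along with the current noisy frame. A bare-bones Python sketch, with the learned network replaced by a fixed exponential blend just to show the data flow; the frame count, noise level and blend factor are made up.

```python
import numpy as np

rng = np.random.default_rng(3)
truth = np.full((64, 64), 0.5, dtype=np.float32)   # static scene, flat value

def denoise_recurrent(noisy, previous, alpha=0.1):
    # A real recurrent denoiser would be a trained network taking both
    # inputs; this stand-in just blends them with a fixed weight.
    return alpha * noisy + (1.0 - alpha) * previous

state = None
for frame in range(60):
    noisy = truth + rng.normal(0.0, 0.2, truth.shape).astype(np.float32)
    state = noisy if state is None else denoise_recurrent(noisy, state)

print("per-pixel noise std, raw frame :", 0.2)
print("per-pixel noise std, recurrent :", float((state - truth).std()))
# The accumulated output is far less noisy than any single frame; a learned
# recurrent network additionally decides how much history to trust per pixel,
# which is what should let it improve over time.
```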

 