Suddenly I got flashbacks from the good old "Doom 3 flashlight shadows are not proper shadows" discussions..
Baking / texel shading could be a decent fit for the tech currently, though I don't know how feasible it is without a huge amount of shading reuse.
So I was watching Epic's Star Wars demo.
I got the distinct impression that it was just a rasterized scene with reflections and shadows traced into some kind of modest-resolution voxel grid? The reflections seem to have a slight blur and a boxy nature.
Things like the depth of field still looked like typical screen space to my eye too.
Is there anything solid on what's actually going on?
FWIW I think the future is very much rasterized geometry mixed with voxelized indirect effects (cone tracing or whatnot). I don't see the use of true, precise triangle-level ray tracing or primary rays.
[Edit]
^^ watching the Nvidia video above I really feel this is the wrong approach to the problem. Being so heavily reliant on noise and what appears to be temporal surface denoising is really concerning (quite a bit of bleed introduced which didn't look to be screen space.. not sure).
High-frequency sparse sampling with lots of noise + blurring down to a lower frequency to hide the noise feels like the wrong approach here, because it's being super precise in the first step and then trying to blur out the resulting high-frequency inaccuracy.. it feels especially wrong because most things are very low frequency. It feels like everything they showed there would be better with something like cone tracing, where your samples can be low frequency to begin with (rough sketch of what I mean below).
I'd much rather have low-frequency imprecision (e.g. looking lower-res in the blurry parts of a shadow) than high-frequency noise and temporal changes. And it feels like it'd be much more efficient too.
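For illustration, a minimal CPU-side sketch of the cone-tracing idea: each sample reads a pre-filtered (mip-mapped) voxel volume at a level matched to the cone's current footprint, so the result is low frequency by construction and needs no stochastic sampling or denoiser. The grid contents, cone angle and step rule are made-up illustration values, not anything from Epic's or Nvidia's demos.

```cpp
// Toy voxel cone tracing sketch: accumulate pre-filtered radiance along a cone,
// picking the mip level that matches the cone's radius at each step.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

// A toy volume with mip levels; level 0 is 32^3, each level halves the size.
struct VoxelMips {
    static constexpr int kBase = 32;
    std::vector<std::vector<float>> levels;   // radiance per voxel, per mip level

    VoxelMips() {
        for (int size = kBase; size >= 1; size /= 2)
            levels.emplace_back(size * size * size, 0.1f);  // flat placeholder value
    }
    int sizeAt(int level) const { return kBase >> level; }

    // Nearest-voxel fetch (real implementations use trilinear/quadrilinear filtering).
    float sample(Vec3 p, float level) const {
        int l = std::min((int)level, (int)levels.size() - 1);
        int s = sizeAt(l);
        int xi = std::min(std::max((int)(p.x * s), 0), s - 1);
        int yi = std::min(std::max((int)(p.y * s), 0), s - 1);
        int zi = std::min(std::max((int)(p.z * s), 0), s - 1);
        return levels[l][(zi * s + yi) * s + xi];
    }
};

// March a cone through the volume, accumulating pre-filtered radiance front to back.
float coneTrace(const VoxelMips& vol, Vec3 origin, Vec3 dir, float halfAngleTan) {
    float radiance = 0.0f, occlusion = 0.0f;
    float t = 1.0f / VoxelMips::kBase;              // start one voxel away
    while (t < 1.0f && occlusion < 0.95f) {
        float radius = t * halfAngleTan;            // cone footprint grows with distance
        float level  = std::log2(std::max(radius * VoxelMips::kBase, 1.0f));
        Vec3 p { origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t };
        float s = vol.sample(p, level);
        radiance  += (1.0f - occlusion) * s;        // front-to-back accumulation
        occlusion += (1.0f - occlusion) * s;
        t += radius;                                // step proportional to footprint
    }
    return radiance;
}

int main() {
    VoxelMips vol;
    float r = coneTrace(vol, {0.5f, 0.1f, 0.5f}, {0.0f, 1.0f, 0.0f}, 0.3f);
    std::printf("accumulated radiance along cone: %f\n", r);
}
```

The one sample per step is already an area average over the cone's footprint, which is exactly the "low frequency to begin with" behaviour described above, as opposed to averaging many precise-but-noisy rays after the fact.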
Ah yes, that, and the reflections of the cars and the puddles; we have been talking about this since PGR3.
Could this be considered path tracing?
Quake 2 Realtime GPU Pathtracing
https://r.tapatalk.com/shareLink?share_fid=30086&share_tid=60290&url=https://forum.beyond3d.com/index.php?threads/Quake-2-Realtime-GPU-Pathtracing.60290/&share_type=t
Real-time path tracing is certainly feasible within limitations.
IIRC this was done in OpenGL 3, and I think the results looked very promising.
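For context, the core loop such a path tracer runs per pixel boils down to something like this toy sketch. Geometry is stripped out entirely (every hit is assumed to be a diffuse surface that both emits and reflects), and the constants are made-up illustration values, not anything from the Quake 2 project:

```cpp
// Toy sketch of the core path-tracing loop: accumulate emission along a random
// path, attenuate the throughput at each diffuse bounce, and terminate paths
// with Russian roulette.  With every hit emitting `emission` and reflecting
// `albedo`, the exact answer is emission / (1 - albedo), so the estimate can
// be checked against it.
#include <cstdio>
#include <random>

int main() {
    const double albedo   = 0.5;   // diffuse reflectance at every bounce
    const double emission = 0.5;   // radiance emitted by every surface
    const int    numPaths = 200000;

    std::mt19937 rng(1234);
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    double sum = 0.0;
    for (int p = 0; p < numPaths; ++p) {
        double throughput = 1.0;   // how much this path still contributes
        double radiance   = 0.0;   // accumulated along this one path
        while (true) {
            radiance   += throughput * emission;   // light picked up at this hit
            throughput *= albedo;                  // energy kept after a diffuse bounce
            // Russian roulette: continue with probability `throughput`, and boost
            // survivors back to weight 1 so the estimator stays unbiased.
            if (uni(rng) >= throughput) break;
            throughput = 1.0;
        }
        sum += radiance;
    }
    std::printf("Monte Carlo estimate: %f (exact: %f)\n",
                sum / numPaths, emission / (1.0 - albedo));
}
```

The "within limitations" part is visible even here: the estimate only converges as the number of paths grows, which is why real-time versions lean on low sample counts plus aggressive filtering.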
Recording of yesterday's full presentation on the Star Wars demo. Starts at 1:42:46.
They used four Nvidia Tesla V100s (around 10,000€ each). In terms of price vs. performance this is not as good as this link (https://arstechnica.com/gaming/2018...just-how-great-real-time-raytracing-can-look/) suggested.
That demo is quite terrible, IMHO. Besides the poor showcasing of real-time ray-traced lighting, the physics (reaction to wind) is totally wrong. So many little things just made this demo off-putting.