Next gen lighting technologies - voxelised, traced, and everything else *spawn*

Ehh, I'd take PCF hacks over the residual aliasing and temporal smearing ... with exact hard shadows they can make it look quite nice.
 
Ehh, I'd take PCF hacks over the residual aliasing and temporal smearing ... with exact hard shadows they can make it look quite nice.
One thing to take note of is that the denoiser in Shadow of the Tomb Raider is not temporal at all. It has zero ghosting. Purely spatial.
So you can have your cake and eat it, I guess, considering that game has area light shadows.
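For what it's worth, "purely spatial" presumably means something like a single edge-aware filter pass over the noisy shadow mask each frame. A minimal GLSL sketch of that idea (texture names and weight constants are my assumptions, not SotTR's actual pipeline):

```glsl
// Minimal sketch of a purely spatial shadow denoise pass: one edge-aware
// blur over a noisy 1-spp ray traced shadow mask, no temporal history at all.
// Texture names and weight constants are assumptions for illustration.
uniform sampler2D uShadowMask; // noisy ray traced shadow term
uniform sampler2D uDepth;      // linear view-space depth
uniform sampler2D uNormal;     // world-space normal, encoded to [0,1]

float denoiseShadow(vec2 uv, vec2 texelSize)
{
    float centerZ = texture(uDepth, uv).r;
    vec3  centerN = texture(uNormal, uv).rgb * 2.0 - 1.0;
    float sum = 0.0, wsum = 0.0;
    for (int y = -3; y <= 3; ++y)
    for (int x = -3; x <= 3; ++x)
    {
        vec2  suv = uv + vec2(x, y) * texelSize;
        float z = texture(uDepth, suv).r;
        vec3  n = texture(uNormal, suv).rgb * 2.0 - 1.0;
        // Edge-stopping: reject samples across depth or normal discontinuities,
        // which is what keeps the blur from leaking over silhouettes.
        float w = exp(-abs(z - centerZ) / (0.02 * centerZ + 1e-4))
                * pow(max(dot(n, centerN), 0.0), 8.0);
        sum  += texture(uShadowMask, suv).r * w;
        wsum += w;
    }
    return sum / max(wsum, 1e-4);
}
```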
 
One thing to take note of is that the denoiser in Shadow of the Tomb Raider is not temporal at all. It has zero ghosting. Purely spatial.
Interesting.

One could argue TR still needs a fallback to SM in the distance, but that's what I hope stochastic LOD could solve. (Gradually removing alpha planes and lots of vegetation would work without popping.)
Also, TR is not designed for area lights, which would help a lot against the gamey look caused by ugly artificial point lights and occlusion-lacking probes.
But that's only possible on a platform with guaranteed RT support, so I expect Sony will show this first.
Though, there was some offscreen footage from Cyberpunk. Low-quality recording, but it looked like soft shadows everywhere, and very good.
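To illustrate the stochastic LOD idea: fade instances by randomly discarding their fragments, so coverage drops gradually instead of popping. A toy GLSL sketch (the per-instance lodFade input is my assumption):

```glsl
// Toy sketch of stochastic LOD for alpha-card vegetation: each instance gets a
// fade factor, and fragments are randomly discarded as it drops, so the card
// thins out instead of popping. lodFade as a per-instance input is assumed.
float hash12(vec2 p) // cheap screen-space hash (Dave Hoskins style)
{
    vec3 p3 = fract(vec3(p.xyx) * 0.1031);
    p3 += dot(p3, p3.yzx + 33.33);
    return fract((p3.x + p3.y) * p3.z);
}

void applyStochasticLod(float lodFade, vec2 fragCoord)
{
    // lodFade: 1.0 = fully present, 0.0 = fully removed.
    if (hash12(fragCoord) > lodFade)
        discard;
}
```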
 
Interesting.

One could argue TR still needs a fallback to SM in the distance, but that's what I hope stochastic LOD could solve. (Gradually removing alpha planes and lots of vegetation would work without popping.)
Also, TR is not designed for area lights, which would help a lot against the gamey look caused by ugly artificial point lights and occlusion-lacking probes.
But that's only possible on a platform with guaranteed RT support, so I expect Sony will show this first.
Though, there was some offscreen footage from Cyberpunk. Low-quality recording, but it looked like soft shadows everywhere, and very good.

Thought I read somewhere that Cyberpunk only uses skydome occlusion and emissive stuff.
 
Thought I read somewhere that Cyberpunk only uses skydome occlusion and emissive stuff.
From what I read, I also thought they would do something similar to Exodus. I remember some indoor scenes with nice area shadows from the video, also visible here:
[Image: cyberpunk2077-rtx-support-2-900x506.jpg]

A pretty good example of what I mean. It looks much more CGI than gamey.

Saying it 'uses only emissive stuff' sounds a bit like playing it down, as if analytical lights would be better for some reason.
That's not justified: emissive surfaces usually have no restrictions on shape or number, while analytical lights are more likely limited in both, and need special handling.
However, that's rhetorical. We won't get rid of managing lights in their own data structures so we can sample them efficiently.
(Ironically, we may soon end up traversing a software BVH of lights on each ray hit :) I'm still requesting that the BVH be exposed for flexible general-purpose use!)
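As a sketch of what 'sampling lights from their own data structure' boils down to: pick one emissive triangle per shading point, sample a point on it, and cast a shadow ray toward it. A GLSL sketch where a naive uniform pick stands in for what a light BVH would do with importance-driven traversal (buffer layout is assumed):

```glsl
// Sketch: direct lighting from arbitrary emissive triangles, one sample per
// shading point. The uniform random pick below is the naive stand-in for a
// light BVH's importance-driven traversal. Buffer layout is an assumption.
struct EmissiveTri { vec3 p0; vec3 p1; vec3 p2; vec3 radiance; };
layout(std430, binding = 0) buffer Lights { EmissiveTri tris[]; };
uniform int uNumTris;

// Returns a point on one emissive triangle plus the area-measure pdf of
// having picked it; the caller casts a shadow ray toward the point.
vec3 pickEmissivePoint(vec3 rnd, out float pdfArea)
{
    int i = min(int(rnd.x * float(uNumTris)), uNumTris - 1);
    EmissiveTri t = tris[i];
    float su = sqrt(rnd.y); // uniform area sampling of a triangle
    vec3 p = t.p0 * (1.0 - su)
           + t.p1 * (su * (1.0 - rnd.z))
           + t.p2 * (su * rnd.z);
    float area = 0.5 * length(cross(t.p1 - t.p0, t.p2 - t.p0));
    pdfArea = 1.0 / (float(uNumTris) * max(area, 1e-6)); // uniform pick x uniform point
    return p;
}
```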

Interestingly, in the above screenshot the reflection still assumes a point light.
There was some interesting work from Eric Heitz about polygonal area-light reflections and decoupling light contribution from occlusion, implemented in this up-to-date demo: http://hd-prg.com/areaLightShadows.html
I could not run it because it requires the DXR fallback layer, but I guess that's as far as SM-based tech can go. I wonder what the limitations are here.

Stochastic RT really seems easier. We'll see what wins for next gen...
 
So how many rays per second does RTX get in ye average game scene?

If the game engine has done all the work to get a BVH for each frame, then replacing the code for primary visibility and shadows is comparatively little work. The ray tracing works in parallel to the shaders/tensor cores, and less than a single billion rays per second is plenty for primary visibility and shadow rays for the major light sources (hard shadows, but tricks to soften them like hybrid frustum tracing work just as well with ray tracing ... and there are the stochastic approaches, though I think they'll have disadvantages). You can cut out part of the occlusion culling too. Seems like a plain win to just drop rasterization at that point.
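For the shadow-ray part, the per-pixel query really is small once a TLAS exists. A hedged GLSL sketch using Vulkan's GL_EXT_ray_query (the normal-offset epsilon is a common but ad-hoc self-intersection fix, not a robust solution):

```glsl
// Hedged sketch of a single opaque shadow ray via Vulkan's GL_EXT_ray_query,
// assuming the engine already maintains a TLAS.
#extension GL_EXT_ray_query : require
layout(binding = 0) uniform accelerationStructureEXT uTlas;

bool inShadow(vec3 p, vec3 n, vec3 toLight, float distToLight)
{
    rayQueryEXT rq;
    rayQueryInitializeEXT(rq, uTlas,
        gl_RayFlagsTerminateOnFirstHitEXT | gl_RayFlagsOpaqueEXT,
        0xFF,                  // instance cull mask: everything
        p + n * 1e-3,          // offset origin off the surface
        0.0, toLight, distToLight - 1e-3);
    while (rayQueryProceedEXT(rq)) { } // opaque-only: nothing to confirm
    return rayQueryGetIntersectionTypeEXT(rq, true)
        != gl_RayQueryCommittedIntersectionNoneEXT;
}
```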

When primary and shadow ray visibility stops being a significant part of the work and/or most triangles go subpixel, it's time to say goodbye to rasterization. Are we there yet, or are the RTX rays-per-second numbers a bit overhyped?
 
So how many rays per second does RTX get in ye average game scene?
I think the metric that Nvidia provides makes it very difficult for us to gauge RT performance. I'm not sure if it is an all-encompassing metric or just for primary rays, etc.
 
Even if it's just primary and shadow rays, my argument remains the same: with less than a billion per second you can get rid of a lot of the headaches of traditional rendering (fine-grained occlusion culling, worrying about the efficiency impact of triangle size, etc).
 
Even if it's just primary and shadow rays, my argument remains the same: with less than a billion per second you can get rid of a lot of the headaches of traditional rendering (fine-grained occlusion culling, worrying about the efficiency impact of triangle size, etc).
Triangle size still has an impact: if you trace against a tree that's only 10px high on screen, the ray still has to descend down to a tiny subpixel leaf. LOD is still necessary.
Tracing primary visibility is likely too wasteful considering the high cost RT has shown to have, and as long as there is raster HW we will use it. But of course this does not answer your question.

AFAIK the 10 GRays/s number from NV comes from tracing primary visibility against a single but detailed model and displaying the normal at the hit point. No materials or lighting, only perfectly coherent rays. This is what I got after asking the same question here a year ago, so I'm not sure about it.

The best resource for practical numbers still seems to be the early presentation from Remedy. They gave numbers for both Volta and Turing, and for different kinds of rays (AO, GI, probably shadow rays, but no primary rays).
I do not remember the numbers in detail, but my personal rule-of-thumb conclusion was 4-8 rays per pixel. In BFV it was less than one.
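(Back-of-envelope on that: 1080p at 60 fps is about 1920 x 1080 x 60 ≈ 124 Mpixels/s, so a net 1 GRay/s buys roughly 8 rays per pixel; 1440p/60 is about 221 Mpixels/s, or roughly 4.5 rays per pixel. That lines up with the 4-8 range if the achieved rate in real scenes is around a GRay/s.)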

Additionally, I remember these projects that used RT for primary visibility:
Quake 2 RTX,
and this:
Unfortunately, both do too many other things to give us a clue about primary ray costs.


But I think it will take at least 5 years until the first GPUs appear that only emulate raster HW. Not sure, though. Moore's Law may just stop, and we might never get rid of it.
 
Triangle size still has an impact: if you trace against a tree that's only 10px high on screen, the ray still has to descend down to a tiny subpixel leaf. LOD is still necessary.
Tracing primary visibility is likely too wasteful considering the high cost RT has shown to have, and as long as there is raster HW we will use it. But of course this does not answer your question.

AFAIK the 10 GRays/s number from NV comes from tracing primary visibility against a single but detailed model and displaying the normal at the hit point. No materials or lighting, only perfectly coherent rays. This is what I got after asking the same question here a year ago, so I'm not sure about it.

The best resource for practical numbers still seems to be the early presentation from Remedy. They gave numbers for both Volta and Turing, and for different kinds of rays (AO, GI, probably shadow rays, but no primary rays).
I do not remember the numbers in detail, but my personal rule-of-thumb conclusion was 4-8 rays per pixel. In BFV it was less than one.

Additionally, I remember these projects that used RT for primary visibility:
Quake 2 RTX,
and this:
Unfortunately, both do too many other things to give us a clue about primary ray costs.


But I think it will take at least 5 years until the first GPUs appear that only emulate raster HW. Not sure, though. Moore's Law may just stop, and we might never get rid of it.
You can force Quake 2 VKPT and Quake 2 RTX to run just primary rays, I'm sure, with some console commands.
 
Yeah, it's all about painting it... Even though there are some geometric approaches for wood, now that you mention pine. Let me see if I can find it.
EDIT:

Not exactly the example you mentioned, but still...
They're pretty good. I was trying to do something similar to make marble, but couldn't get anything to work. Wood grows in cylinders, so that's geometrically reproducible, whereas marble is fractal patterns. Granite could be made from loads of geometry pieces, but the blending and mixing of marble... well, if it can be done, it's not straightforward. That contrasts with super-easy textures or even 3D procedural shaders. Is it possible to include 3D shaders in an SDF engine?
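For reference, the classic cheap way to fake those fractal patterns is "sine over turbulence" marble. A Shadertoy-style GLSL sketch (the hash/noise here are throwaway stand-ins, not production noise):

```glsl
// Minimal procedural marble: sine bands distorted by fBm "turbulence",
// the classic Perlin-style trick. Constants are eyeballed, not tuned.
float hash31(vec3 p)
{
    p = fract(p * 0.3183099 + 0.1);
    p *= 17.0;
    return fract(p.x * p.y * p.z * (p.x + p.y + p.z));
}

float noise3(vec3 p) // value noise with trilinear interpolation
{
    vec3 i = floor(p), f = fract(p);
    f = f * f * (3.0 - 2.0 * f);
    return mix(mix(mix(hash31(i),                  hash31(i + vec3(1,0,0)), f.x),
                   mix(hash31(i + vec3(0,1,0)),    hash31(i + vec3(1,1,0)), f.x), f.y),
               mix(mix(hash31(i + vec3(0,0,1)),    hash31(i + vec3(1,0,1)), f.x),
                   mix(hash31(i + vec3(0,1,1)),    hash31(i + vec3(1,1,1)), f.x), f.y), f.z);
}

float turbulence(vec3 p) // fBm: the "fractal patterns" part
{
    float t = 0.0, a = 0.5;
    for (int i = 0; i < 5; ++i) { t += a * noise3(p); p *= 2.0; a *= 0.5; }
    return t;
}

vec3 marble(vec3 p)
{
    float veins = sin(p.x * 4.0 + 6.0 * turbulence(p)); // distorted bands
    float v = smoothstep(-1.0, 1.0, veins);
    return mix(vec3(0.25, 0.25, 0.3), vec3(0.95), v);   // dark veins in pale stone
}
```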
 
They're pretty good. I was trying to do something similar to make marble, but couldn't get anything to work. Wood grows in cylinders, so that's geometrically reproducible, whereas marble is fractal patterns. Granite could be made from loads of geometry pieces, but the blending and mixing of marble... well, if it can be done, it's not straightforward. That contrasts with super-easy textures or even 3D procedural shaders.
I've seen some painted marble stuff, but I can't remember in what dream... Funnily enough, I was just thinking about creating something with marble these days.

Is it possible to include 3D shaders in an SDF engine?
I don't know. Maybe we could ask @sebbbi. Where is he? o_O
And maybe these posts could be moved to https://forum.beyond3d.com/threads/...-our-geometry-be-made-of-in-the-future.59982/, since we're derailing this thread. :) Sorry!
EDIT:
Hey, I found this:

I think it's an SDF snail, and it has a texture. There's even the page with the code running in real time: https://www.shadertoy.com/view/ld3Gz2
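On the 3D-shaders-in-an-SDF-engine question: nothing seems to prevent it, since the raymarcher hands you a 3D hit position you can feed straight into any procedural texture, no UVs needed. A minimal GLSL sketch (the sphere SDF is a placeholder):

```glsl
// Sketch of "3D shaders in an SDF engine": raymarch an SDF, then evaluate a
// procedural 3D texture directly at the hit position. No UVs involved.
float sdfScene(vec3 p) { return length(p) - 1.0; } // placeholder SDF: unit sphere

vec3 sdfNormal(vec3 p) // central-difference gradient of the SDF
{
    vec2 e = vec2(1e-3, 0.0);
    return normalize(vec3(sdfScene(p + e.xyy) - sdfScene(p - e.xyy),
                          sdfScene(p + e.yxy) - sdfScene(p - e.yxy),
                          sdfScene(p + e.yyx) - sdfScene(p - e.yyx)));
}

vec3 shade(vec3 ro, vec3 rd)
{
    float t = 0.0;
    for (int i = 0; i < 128; ++i) // sphere tracing
    {
        float d = sdfScene(ro + rd * t);
        if (d < 1e-3) break;
        t += d;
        if (t > 20.0) return vec3(0.0); // miss: background
    }
    vec3 p = ro + rd * t;
    // "3D shader": albedo from the hit position itself; swap in something
    // like the marble() sketch above for a fancier material.
    vec3 albedo = mix(vec3(0.2), vec3(0.9), 0.5 + 0.5 * sin(10.0 * p.y));
    float diff = max(dot(sdfNormal(p), normalize(vec3(1.0))), 0.0);
    return albedo * diff;
}
```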
 
Ehh, I'd take PCF hacks over the residual aliasing and temporal smearing ... with exact hard shadows they can make it look quite nice.

I'd take pixel-perfect shadows (given enough rays, of course) over grainy low-res shadow maps. Given the incredible engineering effort that has gone into making better shadow maps over the years, I would think people would be happy for an easy button (given enough rays, of course).
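The "given enough rays" version of soft shadows is straightforward in principle: average several stochastic shadow rays toward points on the area light and the penumbra falls out, with no PCF-style filtering. A GLSL sketch (traceShadowRay and rand are assumed to be defined elsewhere, e.g. via a ray query):

```glsl
// Sketch of stochastic area-light shadows: N rays toward random points on a
// disk light, averaged into fractional visibility.
bool  traceShadowRay(vec3 origin, vec3 dir, float tMax); // prototype, assumed
float rand(inout uint state);                            // prototype, assumed

float diskLightVisibility(vec3 p, vec3 lightPos, vec3 lightN,
                          float lightRadius, int numRays, inout uint rng)
{
    // Orthonormal basis on the light's disk
    vec3 up = abs(lightN.y) < 0.9 ? vec3(0, 1, 0) : vec3(1, 0, 0);
    vec3 t0 = normalize(cross(lightN, up));
    vec3 t1 = cross(lightN, t0);
    float visible = 0.0;
    for (int i = 0; i < numRays; ++i)
    {
        float r = lightRadius * sqrt(rand(rng)); // uniform point on the disk
        float a = 6.2831853 * rand(rng);
        vec3 q = lightPos + r * (cos(a) * t0 + sin(a) * t1);
        vec3 d = q - p;
        float dist = length(d);
        if (!traceShadowRay(p, d / dist, dist - 1e-3))
            visible += 1.0;
    }
    return visible / float(numRays); // fractional visibility = penumbra
}
```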
 
I'd take pixel-perfect shadows (given enough rays, of course) over grainy low-res shadow maps.
Hybrid Frustum Tracing is pixel-perfect shadows plus a PCSS hack (I used the wrong acronym before). Ray tracing would simplify the algorithm, but it gives the same result as irregular z-buffering.
 
Hybrid Frustum Tracing is pixel-perfect shadows plus a PCSS hack (I used the wrong acronym before). Ray tracing would simplify the algorithm, but it gives the same result as irregular z-buffering.

Yeah, HFTS looked pretty good in The Division. It was dropped for The Division 2, though. Wonder why.
 