Next gen lighting technologies - voxelised, traced, and everything else *spawn*


From my side it's a lot more than being excited or not or being a fanboy of something.

The problem is, with RTX the way RT has to happen in games is carved in stone from now on.
Limitations have been discussed in older threads: RTX is an implementation of classical raytracing of triangle meshes. The BVH is blackboxed. Potential optimizations tailored to a specific need are impossible due to fixed function. (I mean real optimizations, not the minor things discussed here.)

Let's make a stupid example with rasterization: Imagine a guy in the 90s works on an FPS and a software rasterizer that can draw curved triangles.
He can draw a full 360 degree environment by pinching the player's backside to the screen edges, which allows cool gameplay.
He can also draw an environment map in just one pass, which allows cool reflections.
He can also rasterize voxel landscapes and bump maps. Awesome and so cool.
He can do a lot more things he never told us.

Then 3Dfx comes up with HW rasterization, but because it can not draw curved triangles the guy's work is incompatible. 3Dfx can't do bump maps or voxels either at that time.
The guy is frustrated because he can't catch up with high resolution. He has the greatest game of that time, but all people just play lame Quake on their 3Dfx GPUs. He jumps out of the window and all the greatness is lost. Haha :)


With raytracing the damage is much larger because unlike rasterization there are infinite ways to optimize it. The restriction to triangles is also a big one.
That's why i personally wanted improved compute instead. Classic RT would be slower than with fixed function HW, but with clever optimization this could be compensated a lot, if not completely, already, and in the long run it would be even better.
Now most people say: "It will be opened up with time, there will be programmable features, fixed function is necessary for perf, blah, blah..." - but still, we can not draw curved triangles. ;)
But where are all those alternative implementations of RT? The fact is that without Microsoft and NVIDIA leading the charge, discussion on real-time ray tracing would be pretty much nonexistent, much less applied to popular commercial games.
 
But where are all those alternative implementations of RT? The fact is that without Microsoft and NVIDIA leading the charge, discussion on real-time ray tracing would be pretty much nonexistent, much less applied to popular commercial games.

This isn't really true. Screen-space rays are still rays. Fortnite does some kind of SDF tracing for AO, maybe. Other games trace voxels for some effects, I think. Then you have standard depth-buffer ray-marching. They're all forms of ray tracing. DXR is just the first step in making world-space rays a reality.

One basic conclusion from Battlefield V and developer talks is that a full path tracer is a long way off, so hybrid raster and ray is still the way to go. Battlefield V is an interesting case. I'm curious why they picked reflections vs soft shadows or AO, which sound like they might be easier wins.

I do think the ultra setting looks very good, and rocks and other diffuse surfaces look more natural. I think performance is starting to look good, and quality is pretty much best in class compared to any other reflection method.

I'd be curious to see how their reflections would compare to ray traced AO in presenting more natural and grounded surfaces. It'll be good if DICE does a presentation at GDC or something. Guessing is too hard. Remedy says they can cast 1 ray per pixel at 1080p in less than 1ms, and that was before the big NVIDIA driver improvements. I'd be curious to see what the frame time budget is for rays in BFV. Assuming ultra is 40% of screen resolution, is that just primary rays or also the secondary reflected rays? Assuming it's just primary, that's still 80% of 1080p, so it should be well under 1ms now. It means the real costs are shading and updating the BVH.
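
Roughly, the back-of-envelope math above, using only the assumptions stated in the post (Remedy's ~1 ray per pixel at 1080p in under 1ms, and rays for ~40% of pixels at Ultra - none of these are measured BFV numbers):

```cpp
#include <cstdio>

int main() {
    const double pixels1080p    = 1920.0 * 1080.0;   // ~2.07M pixels
    const double raysPerMs      = pixels1080p / 1.0;  // Remedy: ~1 ray/pixel at 1080p in <1ms (assumed rate)
    const double bfvRayFraction = 0.40;               // BFV Ultra: rays for ~40% of pixels (assumed)

    const double bfvRays = pixels1080p * bfvRayFraction; // ~0.83M rays per frame
    const double traceMs = bfvRays / raysPerMs;           // ~0.4ms if tracing scales linearly

    std::printf("rays: %.0f, estimated trace time: %.2f ms\n", bfvRays, traceMs);
    // Anything beyond this in the RT budget would then be material shading,
    // denoising and BVH updates rather than traversal itself.
    return 0;
}
```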
 
From my side it's a lot more than being excited or not or being a fanboy of something.

The problem is, with RTX the way RT has to happen in games is carved in stone from now on.
Limitations have been discussed in older threads: RTX is an implementation of classical raytracing of triangle meshes. The BVH is blackboxed. Potential optimizations tailored to a specific need are impossible due to fixed function. (I mean real optimizations, not the minor things discussed here.)
...

But this is where API versions come into play. For now it's limited, but I would imagine that, like with D3D, OpenGL and Vulkan, more and more programmable features will be exposed through new DXR API versions, like what happened from Direct3D 2 to Direct3D 12. The question is how long it takes to get there, and who decides what is and isn't programmable in each API update. I'm guessing they'll pick the best of the intrinsics offered by NVIDIA, AMD and Intel.

Overall, I think looking at the results, and the performance they were able to get in Battlefield 5, now is actually the right time to get some kind of RT api started. Battlefield V looked bad at launch, and the first major patch yielded bigger improvements than I was expecting. I'm curious if all of the big wins are taken care of, or if we'll see more large improvements. Metro and Control should be more interesting test cases.
 

I expect a bit less from Tomb Raider because the game was released quite a while ago. I'm not sure they'll invest as much in it as in games like Metro and Control, which are both under active development and seem to be making more extensive use of ray tracing. That may be a totally unfair assumption about Tomb Raider, so we'll see when it's out.
 
This isn't really true. Screen-space rays are still rays. Fortnite does some kind of SDF tracing for AO, maybe. Other games trace voxels for some effects, I think. Then you have standard depth-buffer ray-marching. They're all forms of ray tracing. DXR is just the first step in making world-space rays a reality.

One basic conclusion from Battlefield V and developer talks is that a full path tracer is a long way off, so hybrid raster and ray is still the way to go. Battlefield V is an interesting case. I'm curious why they picked reflections vs soft shadows or AO, which sound like they might be easier wins.

I do think the ultra setting looks very good, and rocks and other diffuse surfaces look more natural. I think performance is starting to look good, and quality is pretty much best in class compared to any other reflection method.

I'd be curious to see how their reflections would compare to ray traced AO in presenting more natural and grounded surfaces. It'll be good if DICE does a presentation at GDC or something. Guessing is too hard. Remedy says they can cast 1 ray per pixel at 1080p in less than 1ms, and that was before the big NVIDIA driver improvements. I'd be curious to see what the frame time budget is for rays in BFV. Assuming ultra is 40% of screen resolution, is that just primary rays or also the secondary reflected rays? Assuming it's just primary, that's still 80% of 1080p, so it should be well under 1ms now. It means the real costs are shading and updating the BVH.
Fortnite also does shadows after the first cascade with SDF tracing.

For AO, why don't games use bent normal / cone AO?
AO is a really ugly hack and directionality would make it better.

It would need some more rays to get the final cone though.
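
Roughly what building a bent normal / visibility cone from the same AO rays could look like - a minimal sketch with made-up names; the cone-angle mapping at the end is an assumption, not a reference implementation:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  add(Vec3 a, Vec3 b)   { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  scale(Vec3 a, float s){ return {a.x * s, a.y * s, a.z * s}; }
static float length(Vec3 a)        { return std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }

struct BentCone {
    Vec3  direction;    // average unoccluded direction ("bent normal")
    float cosHalfAngle; // cone width derived from the visible fraction
    float visibility;   // classic scalar AO term (1 = fully open)
};

// 'sampleDirs' are hemisphere directions around the surface normal,
// 'occluded' is the per-ray hit result from whatever tracer is used
// (screen space, SDF, voxels, or DXR).
BentCone bentConeFromAORays(const std::vector<Vec3>& sampleDirs,
                            const std::vector<bool>& occluded,
                            Vec3 geometricNormal)
{
    Vec3 sum = {0, 0, 0};
    int unoccludedCount = 0;
    for (size_t i = 0; i < sampleDirs.size(); ++i) {
        if (!occluded[i]) {
            sum = add(sum, sampleDirs[i]);
            ++unoccludedCount;
        }
    }
    BentCone result;
    result.visibility = sampleDirs.empty() ? 1.0f
                      : float(unoccludedCount) / float(sampleDirs.size());
    const float len = length(sum);
    result.direction = (len > 1e-5f) ? scale(sum, 1.0f / len) : geometricNormal;
    // Map the visible fraction to a cone half-angle: a fully open hemisphere
    // when everything is visible, collapsing as occlusion increases.
    result.cosHalfAngle = std::cos(result.visibility * 1.57079632f);
    return result;
}
```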
 
But where are all those alternative implementations of RT? The fact is that without Microsoft and NVIDIA leading the charge, discussion on real-time ray tracing would be pretty much nonexistent, much less applied to popular commercial games.

Agree.
We have things like:
Screen space raymarching (SSR, contact shadows, AO, even GI) - but i don't want to count this.
NV's earlier 'raytraced shadowmaps' (they used some other term) - true RT, used in some games, but expensive and still limited to point lights.
Voxel Cone Tracing: Full GI, can even trace cones, used in games quite a lot. But no triangles, and voxel LOD causes probably unsolvable problems like light leaking.

But there was a constant and inevitable move towards raytracing in games before RTX. Some examples:
https://www.gamedev.net/projects/380-real-time-hybrid-rasterization-raytracing-engine/ This guy achieved a lot with a denoising approach in screen space, but very limited DX11 tracing performance. He aimed to improve this using DX12 compute.
https://gamejolt.com/games/half-dead-2-tech-demo/372732 Realtime RT reflections in a game you can try yourself. (Too bad they did it only on a floor plane - it could even be faked, but performance-wise this is totally possible with such a low poly env.)
Myself, using compute RT for visibility in realtime GI, claiming similar perf to RTX but mad about the incompatibility.

My prediction would have been: move to texture space lighting, RT for triangles eventually, with surfels / voxels or something as a fallback at some greater distance where we want blurry results anyway for anything except perfect mirrors. I expected this to happen during next gen consoles. I planned to do this myself for sharp reflections and still consider it, because next gen consoles likely have no RT HW.

However, asked 'Wouldn't the 3 examples you gave benefit from RTX as it is? Why are you against technical progress bringing RT to games now?', the answer is yes for all of them, of course. So why do i still whine about it?
It's because i thought the dark ages of fixed function would be over. Seeing the rise of fixed function again is shocking to me. This is where i say it is a push in progress in the short run, but serious damage in the long run, and it also makes it more difficult to saturate GPUs.
We would have seen raytraced games in any case. Making compute flexible enough to do RT more efficiently would have been possible for sure (just not as profitable for NV).

And finally: The real reason we have RT now is progress in denoising - you know this best, i guess. This is an achievement of software development, and it still runs in software. Fixed function PREVENTS such achievements!
 
But this is where API versions come into play. For now it's limited, but I would imagine that, like with D3D, OpenGL and Vulkan, more and more programmable features will be exposed through new DXR API versions, like what happened from Direct3D 2 to Direct3D 12. The question is how long it takes to get there, and who decides what is and isn't programmable in each API update. I'm guessing they'll pick the best of the intrinsics offered by NVIDIA, AMD and Intel.

Overall, I think looking at the results, and the performance they were able to get in Battlefield 5, now is actually the right time to get some kind of RT api started. Battlefield V looked bad at launch, and the first major patch yielded bigger improvements than I was expecting. I'm curious if all of the big wins are taken care of, or if we'll see more large improvements. Metro and Control should be more interesting test cases.

I hope you're right about improvement and more flexibility over time, but my experience is that it takes far too many years and the major obstacles never fall.

With the results i am very happy. Considering there is a lot of room for improvement, performance is great. BFV reflections are exactly what i need for my work. And it is very easy to use.


I hope my seemingly contradictory opinions about RTX have been explained well enough once more. :)
I should just resist mentioning my doubts again - it makes me sound like a grumpy old man hindering progress, i guess...
 
Just theoretically, would it be an option to do the following? (A rough host-side sketch follows the list.)
  • Use a generic hit-shader which is solely responsible for re-dispatch, and records the actual hit, sorted by the actual shader, in the history with hit vector, list of dispatched vectors, normal and UV. Only distinguishing flags for specular, diffuse and translucent dispatch, at most based on vertex/triangle attributes. Maybe even try a pessimistic energy conservation estimate for early cut-off. Need to form buckets to avoid global atomics.
  • Delayed evaluation of all hit types, sorted by specialized shader, annotating each recorded hit as a 1x3 fp16 self contribution plus a list of 3x3 fp16 color twist matrices. Use as many different shaders as you want, just iterate over all of them.
  • Delayed reduction of all ray trees based on computed PBR, depth first. A single shader again, pure random access + arithmetic, hopefully with very little register pressure, because it needs all the latency masking it can get.
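
For what it's worth, here is a rough host-side C++ sketch of the bucketing and depth-first reduction idea above - purely illustrative: the struct layout, the fp32 stand-ins for fp16, the single twist per parent (instead of one per dispatched ray), and the child-after-parent ordering are all assumptions, not anything DXR or a driver actually does:

```cpp
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };
struct Mat3 { float m[9]; };   // stand-in for the 3x3 fp16 "color twist" matrix

// One record written by the generic hit shader: which specialized shader will
// evaluate it later, where it hit, and which parent hit spawned it.
struct HitRecord {
    uint32_t materialId;       // bucket / specialized shader index
    uint32_t parent;           // index of the parent hit in the ray tree (UINT32_MAX = camera ray)
    Vec3     normal;
    float    u, v;
    Vec3     selfContribution; // filled by the deferred material pass, accumulates children
    Mat3     childTwist;       // transform applied to a child's color before adding it here
};

// Pass 1: bucket hits by material so each specialized shader later runs over a
// coherent batch instead of being interleaved inside traversal.
std::vector<std::vector<uint32_t>> bucketByMaterial(const std::vector<HitRecord>& hits,
                                                    uint32_t materialCount)
{
    std::vector<std::vector<uint32_t>> buckets(materialCount);
    for (size_t i = 0; i < hits.size(); ++i)
        buckets[hits[i].materialId].push_back(uint32_t(i));
    return buckets;
}

// Pass 2 (not shown) iterates over the buckets and runs each material shader,
// filling selfContribution and childTwist.
// Pass 3: reduce each ray tree, folding children into their parents.
void reduceRayTrees(std::vector<HitRecord>& hits, std::vector<Vec3>& pixelColors)
{
    // Assumes children are stored after their parents, so a reverse walk
    // always sees a child before the parent it feeds into.
    for (size_t i = hits.size(); i-- > 0;) {
        const Vec3 c = hits[i].selfContribution;    // own term + already-folded children
        if (hits[i].parent == UINT32_MAX) {
            pixelColors.push_back(c);               // root of a tree: final color
        } else {
            HitRecord&  p = hits[hits[i].parent];
            const Mat3& t = p.childTwist;
            p.selfContribution.x += t.m[0]*c.x + t.m[1]*c.y + t.m[2]*c.z;
            p.selfContribution.y += t.m[3]*c.x + t.m[4]*c.y + t.m[5]*c.z;
            p.selfContribution.z += t.m[6]*c.x + t.m[7]*c.y + t.m[8]*c.z;
        }
    }
}
```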

I assume they do something like this in hardware:

Halt ray generating shaders after rays have been dispatched.
Sort rays by direction and origin.
Trace batches of rays, gathering hit results.
Sort hit points by material shader and hit position.
Process material shader if enough results are ready.
Resort generation shader thread lanes if enough similar rays are ready to continue recursive processing.

Just as a simple example - there are infinite options (a rough sketch of the ray-binning step follows below). We really want to know more details, but it's NV :)
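
As a purely speculative illustration of the 'sort rays by direction and origin' step - the key layout below is just a guess, nothing NVIDIA has documented:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Ray { float ox, oy, oz; float dx, dy, dz; uint32_t pixel; };

// Build a coarse sort key: 3 bits for the direction octant, plus a few bits
// per quantized origin coordinate, so rays that start near each other and
// point roughly the same way end up in the same batch.
static uint32_t binKey(const Ray& r, float sceneScale)
{
    const uint32_t octant = (r.dx > 0 ? 1u : 0u) | (r.dy > 0 ? 2u : 0u) | (r.dz > 0 ? 4u : 0u);
    auto quant = [&](float v) {
        const float n = std::fmin(std::fmax(v / sceneScale + 0.5f, 0.0f), 1.0f);
        return uint32_t(n * 31.0f);                 // 5 bits per axis
    };
    return (octant << 15) | (quant(r.ox) << 10) | (quant(r.oy) << 5) | quant(r.oz);
}

// Sorting by this key before traversal groups coherent rays so nearby BVH
// nodes stay in cache; hits could then be re-binned by material the same way.
void sortRaysForCoherence(std::vector<Ray>& rays, float sceneScale)
{
    std::sort(rays.begin(), rays.end(),
              [&](const Ray& a, const Ray& b) { return binKey(a, sceneScale) < binKey(b, sceneScale); });
}
```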

Earlier i made a comment assuming it is not so complex, because a BFV dev made a statement about binning rays for better cache utilization. But i got that wrong! He was talking about SSR there.

The more complex (and thus faster) their implementation is, the harder it is for other vendors to catch up.
 
However, asked 'Wouldn't the 3 examples you gave benefit from RTX as it is? Why are you against technical progress bringing RT to games now?', the answer is yes for all of them, of course. So why do i still whine about it?
It's because i thought the dark ages of fixed function would be over. Seeing the rise of fixed function again is shocking to me. This is where i say it is a push in progress in the short run, but serious damage in the long run, and it also makes it more difficult to saturate GPUs.
We would have seen raytraced games in any case. Making compute flexible enough to do RT more efficiently would have been possible for sure (just not as profitable for NV).

And finally: The real reason we have RT now is progress in denoising - you know this best, i guess. This is an achievement of software development, and it still runs in software. Fixed function PREVENTS such achievements!

I'm on the opposite side here, as i believe fixed function is the way to go. I know that devs don't want to hear such a thing, but Moore's law is slowly ending. In a few years a doubling of compute power will take 4-5 years. It's not like Turing isn't also strong in compute, so devs can try new software solutions. But it will be necessary to have devs/researchers working on new software solutions, and also on ways to put these into fixed function. The easy times, in which some software devs could just sit and wait for the GPUs to get faster and hardware devs could just count on the next node to add more shaders, are over.
 
It's because i thought the dark ages of fixed function would be over. Seeing the rise of fixed function again is shocking to me. This is where i say it is a push in progress in the short run, but serious damage in the long run, and it also makes it more difficult to saturate GPUs.
Pardon me, I mean no disrespect, but I think this is way overblown. The way I see it, RT is going down the same path as rasterization: fixed function at first, then general function later on when the hardware reaches a certain level of maturation. This cycle has repeated itself many times already in the field of graphics.
 
For AO, why don't games use bent normal / cone AO?
AO is a really ugly hack and directionality would make it better.

Crysis 2 is one I know does. I don't know about later Crytek/CryEngine titles, but I assume others may use it as well.
I remember a blog post where someone dissected a Crysis 2 frame, and there is a point where the renderer generates a screen-space bent normals buffer by filtering the regular normal buffer. That step does not come for free, but I bet that once you have it, it can make all lights look more natural, not just the AO pass. But you still need a typical normal buffer for specular; bent normals make the highlights look as if surfaces are wavy.
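
A minimal sketch of what feeding the bent normal into the diffuse term (instead of a flat AO multiply) might look like - the blend formula here is an illustration, not what CryEngine actually does:

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Diffuse term driven by the bent normal (average unoccluded direction),
// so light arriving from the occluded side of the hemisphere is attenuated.
// Specular would still use the regular shading normal, as noted above.
// 'visibility' is the scalar AO term (1 = fully open).
float diffuseWithBentNormal(Vec3 bentNormal, Vec3 shadingNormal, Vec3 lightDir, float visibility)
{
    const float nDotL    = std::max(dot(shadingNormal, lightDir), 0.0f); // classic Lambert
    const float bentDotL = std::max(dot(bentNormal,   lightDir), 0.0f);  // alignment with the open directions
    // Crude directional occlusion: scale the AO term by how well the light
    // direction lines up with the bent normal, so lights shining into the
    // occluded side get darkened more than lights shining into the open side.
    const float dirOcclusion = visibility * (0.5f + 0.5f * bentDotL);
    return nDotL * dirOcclusion;
}
```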
 
I expected this to happen during next gen consoles.

Agree - see GoW, HZD, Spiderman on PS4, or any Sony game: reflections look subpar (DF mentions it in their analyses - cube maps etc.). RT could do much more for those titles.
The saving grace is artwork, but that perhaps won't be enough for next gen.
 
This isn't really true. Screen-space rays are still rays. Fortnite does some kind of SDF tracing for AO, maybe. Other games trace voxels for some effects, I think. Then you have standard depth-buffer ray-marching. They're all forms of ray tracing. DXR is just the first step in making world-space rays a reality.

One basic conclusion from Battlefield V and developer talks is that a full path tracer is a long way off, so hybrid raster and ray is still the way to go. Battlefield V is an interesting case. I'm curious why they picked reflections vs soft shadows or AO, which sound like they might be easier wins.

I do think the ultra setting looks very good, and rocks and other diffuse surfaces look more natural. I think performance is starting to look good, and quality is pretty much best in class compared to any other reflection method.

I'd be curious to see how their reflections would compare to ray traced AO in presenting more natural and grounded surfaces. It'll be good if DICE does a presentation at GDC or something. Guessing is too hard. Remedy says they can cast 1 ray per pixel at 1080p in less than 1ms, and that was before the big NVIDIA driver improvements. I'd be curious to see what the frame time budget is for rays in BFV. Assuming ultra is 40% of screen resolution, is that just primary rays or also the secondary reflected rays? Assuming it's just primary, that's still 80% of 1080p, so it should be well under 1ms now. It means the real costs are shading and updating the BVH.
Like I said, context matters. Special cases of ray tracing indeed exist, but the ray tracing that everybody thinks of as the "holy grail" is only being considered (and applied) in current and near-future games thanks to DXR and RTX.

Small/indie games could actually be the best showcases for RT because they can have low polycounts and minimalist art styles (in which lighting is king), which means fast BVH updates, minimal cache thrashing and very fast shading.

Agree.
We have things like:
Screen space raymarching (SSR, contact shadows, AO, even GI) - but i don't want to count this.
NV's earlier 'raytraced shadowmaps' (they used some other term) - true RT, used in some games, but expensive and still limited to point lights.
Voxel Cone Tracing: Full GI, can even trace cones, used in games quite a lot. But no triangles, and voxel LOD causes probably unsolvable problems like light leaking.

But there was a constant and inevitable move towards raytracing in games before RTX. Some examples:
https://www.gamedev.net/projects/380-real-time-hybrid-rasterization-raytracing-engine/ This guy achieved a lot with a denoising approach in screen space, but very limited DX11 tracing performance. He aimed to improve this using DX12 compute.
https://gamejolt.com/games/half-dead-2-tech-demo/372732 Realtime RT reflections in a game you can try yourself. (Too bad they did it only on a floor plane - it could even be faked, but performance-wise this is totally possible with such a low poly env.)
Myself, using compute RT for visibility in realtime GI, claiming similar perf to RTX but mad about the incompatibility.

My prediction would have been: move to texture space lighting, RT for triangles eventually, with surfels / voxels or something as a fallback at some greater distance where we want blurry results anyway for anything except perfect mirrors. I expected this to happen during next gen consoles. I planned to do this myself for sharp reflections and still consider it, because next gen consoles likely have no RT HW.

However, asked 'Wouldn't the 3 examples you gave benefit from RTX as it is? Why are you against technical progress bringing RT to games now?', the answer is yes for all of them, of course. So why do i still whine about it?
It's because i thought the dark ages of fixed function would be over. Seeing the rise of fixed function again is shocking to me. This is where i say it is a push in progress in the short run, but serious damage in the long run, and it also makes it more difficult to saturate GPUs.
We would have seen raytraced games in any case. Making compute flexible enough to do RT more efficiently would have been possible for sure (just not as profitable for NV).

And finally: The real reason we have RT now is progress in denoising - you know this best, i guess. This is an achievement of software development, and it still runs in software. Fixed function PREVENTS such achievements!
When it comes to GPUs, speed comes first, flexibility later. Imagine if GPUs were fully programmable. Lots of possibilities but the best graphics would look like 360 games due to speed limitations.
 
I'm on the opposite side here, as i believe fixed function is the way to go. I know that devs don't want to hear such a thing, but Moore's law is slowly ending. In a few years a doubling of compute power will take 4-5 years. It's not like Turing isn't also strong in compute, so devs can try new software solutions. But it will be necessary to have devs/researchers working on new software solutions, and also on ways to put these into fixed function. The easy times, in which some software devs could just sit and wait for the GPUs to get faster and hardware devs could just count on the next node to add more shaders, are over.
Phew... a valid argument against my point. My feeling turns from 'fighting' towards proper argumentation... just thanks for that, haha!
But i really see it differently. Whatever i do, hardware is always fast enough. It's my software development which is too slow, not the hardware. At some point, if hardware progress became impossible, this would just turn around, and implementing software progress as fixed function could not catch up fast enough - ASICs would be the proper solution then, i think.
Making something fixed function implies you are sure you have already found the best algorithm, and no more development on the algorithm is necessary outside chip design. I'm certain we are very far from that point with RT.

Pardon me, I mean no disrespect, but I think this is way overblown. The way I see it, RT is going down the same path as rasterization: fixed function at first, then general function later on when the hardware reaches a certain level of maturation. This cycle has repeated itself many times already in the field of graphics.
Sure, that's how things used to be, and it works. But at the same time GPGPU brought the real success of GPUs beyond gaming, with applications like AI, which came pretty unexpectedly. Fixed function could likely be a step back seen from that angle?
I understand this appears overblown, but i claim to have solved most problems RTX addresses before, on old mid-range GPUs. If i could demonstrate and explain this now, you would understand why i see so many limitations here. I know i have to hurry...

The saving grail is artwork; but that perhaps wont be enough for next gen.
Offtopic, but yeah, whether a game currently looks good or bad depends mostly on artwork and very little on tech, i think.
I have shown the RTX dancing robot video to regular gamers - they were not more impressed than by any other good looking game, they do not spot a difference, and they don't know about RTX either. (Still, they'll buy the 2060 like crazy, i guess.)
Hard to say how much interest the game industry has in improved tech in general. Likely they will soon have larger problems to address, like a stagnating market but growing gamer boredom, with still growing costs.
Personally i think gfx and tech progress can remain a key factor to sell games, but i'm biased and may be wrong. You hear 'We want better games, not just better gfx' more and more often.
 
When it comes to GPUs, speed comes first, flexibility later. Imagine if GPUs were fully programmable. Lots of possibilities but the best graphics would look like 360 games due to speed limitations.
I do not think Dreams looks like a 360 game, and it uses zero fixed function.
 
Seems Battlefield V is confirmed to be receiving DLSS support. According to VC, NVIDIA has some marketing materials claiming the RTX 2060 is capable of Medium DXR @ 1080p60, and more with DLSS on.

https://videocardz.com/79505/nvidia-geforce-rtx-2060-pricing-and-performance-leaked

Right now the 2070 is around 76 fps at 1080p with RT on Medium. I wonder if this suggests another performance improvement to come. A 10fps gap between the 2060 and 2070 with RT at Medium seems small. It's kind of a shame the 2060 is only a 6GB card, because the performance looks to be about that of a 1080 in a bunch of titles. Even Rainbow Six Siege needs 8GB for the ultra textures.
 
It's kind of a shame the 2060 is only a 6GB card, because the performance looks to be about that of a 1080 in a bunch of titles. Even Rainbow Six Siege needs 8GB for the ultra textures.
Agreed, 6GB looks even worse future-wise: new consoles are on the horizon, and those will raise the bar for VRAM consumption significantly.
A 10fps gap between the 2060 and 2070 with RT at Medium seems small.
The difference spec-wise between a 2070 and a 2060 is rather small; the 2060 also compensates with slightly higher clocks.
 
Fortnite also does shadows after the first cascade with SDF tracing.

For AO, why don't games use bent normal / cone AO?
AO is a really ugly hack and directionality would make it better.

It would need some more rays to get the final cone though.
Ryse does that, and also applies the AO in a different way to fake multi-bounce GI.

I do not think Dreams looks like a 360 game, and it uses zero fixed function.
Ok, I exaggerated. Still, it doesn't look anywhere near as good as the top-tier titles of this gen.
 