Next gen lighting technologies - voxelised, traced, and everything else *spawn*

Ray-traced distance field shadows (RTDFS), which are already fully supported in UE4 (they work on all DX11+ GPUs and don't have the performance cost of direct RT area lights), have a vastly better chance of being implemented in the near future (though the current implementation in UE doesn't support skeletal meshes).
Looks very good, and no noise. Combine this with analytical capsule shadows for characters and you have a good area-light solution with no need for RTX.
I agree this is more practical now.
Area lights make a huge difference. They're not as much of a marketing instrument as BFV's reflections because the effect is more subtle, but I think they matter more for overall image quality.

Games take years to make, and there's backwards compatibility... so it will all take time, of course. (From a developer's perspective it looks different.)
 
Looks very good, and no noise. Combine this with analytical capsule shadows for characters and you have a good area-light solution with no need for RTX.
Yes, capsule shadows + Distance Field Ambient Occlusion (DFAO) is the current workaround for skeletal meshes.
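To illustrate the "analytical" part, here's a toy single-sphere occluder sketch in C++ (my own simplification of the general solid-angle idea, not Epic's actual capsule-shadow math): a few spheres or capsules bound to bones each darken a receiving point analytically, with no distance field or triangle data needed.

```cpp
// Toy sketch of an analytic occluder (assumption: a single-sphere
// simplification of the idea behind capsule shadows, not Epic's code).
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Approximate occlusion of a surface point p (normal n) by a sphere occluder:
// the sphere's solid-angle term scaled by how much it faces the normal.
static float sphereOcclusion(Vec3 p, Vec3 n, Vec3 center, float radius) {
    Vec3 d = sub(center, p);
    float dist2 = dot(d, d);
    float dist = std::sqrt(dist2);
    float facing = std::max(dot(n, {d.x / dist, d.y / dist, d.z / dist}), 0.0f);
    return std::min(facing * radius * radius / dist2, 1.0f);
}

int main() {
    Vec3 ground = {0.0f, 0.0f, 0.0f};   // receiving point on the floor
    Vec3 up = {0.0f, 1.0f, 0.0f};       // its surface normal
    // A "torso" sphere hovering one unit above the receiving point.
    float occ = sphereOcclusion(ground, up, {0.0f, 1.0f, 0.0f}, 0.4f);
    std::printf("occlusion %.3f, shadow factor %.3f\n", occ, 1.0f - occ);
    return 0;
}
```

A real capsule shadow sweeps this kind of test along a bone-aligned segment and softens it by the light's source angle, but the appeal is the same: a handful of cheap analytic primitives stand in for the skinned mesh.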
 
Ray-traced distance field shadows, which are already fully supported in UE4 (they work on all DX11+ GPUs and don't have the performance cost of direct RT area lights), have a vastly better chance of being implemented in the near future (though the current implementation in UE doesn't support skeletal meshes).
Yeah, this is a very cool solution too, and I believe Fortnite already used it on high settings for distant shadows.
I think it's unlikely they will ever support skeletal meshes. How do you skin an SDF? Unless they voxelize and generate a new SDF for each skinned mesh every frame... Well, if we ever do that for voxel GI, we might as well do it for shadows like this too...
The advantage of RT is that there's no need to store all the SDFs (or generate them): you just ray trace the same mesh you already use for rendering. Also, for very hard shadows, SDFs start to show their imprecision; RT might be preferable there too.
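For anyone wondering where the soft, area-light-ish look of distance field shadows comes from, here's a minimal sphere-tracing sketch (my own toy against a single hard-coded sphere SDF, not UE4's implementation): march from the shaded point towards the light, and the closest miss relative to the distance travelled gives the penumbra term.

```cpp
// Minimal sketch of distance-field soft shadows (assumption: a toy scene of
// one analytic sphere stands in for a mesh's baked distance field).
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Signed distance to a sphere of radius 0.5 centred at (0, 1, 0).
static float sceneSDF(Vec3 p) {
    Vec3 d = {p.x - 0.0f, p.y - 1.0f, p.z - 0.0f};
    return std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z) - 0.5f;
}

// Sphere-trace towards the light; the smallest (k * distance-to-surface /
// distance-travelled) ratio along the ray approximates a penumbra.
// Larger k = harder shadow, which is also where SDF imprecision shows up.
static float softShadow(Vec3 origin, Vec3 lightDir, float k) {
    float shadow = 1.0f;
    float t = 0.05f;                          // offset to avoid self-shadowing
    for (int i = 0; i < 64 && t < 20.0f; ++i) {
        float d = sceneSDF(add(origin, mul(lightDir, t)));
        if (d < 1e-3f) return 0.0f;           // hit the occluder: full shadow
        shadow = std::min(shadow, k * d / t);
        t += d;                               // safe step: nothing is closer than d
    }
    return shadow;                            // 0 = umbra, 1 = fully lit
}

int main() {
    Vec3 ground = {0.6f, 0.0f, 0.0f};         // point on the floor, off to the side
    Vec3 toLight = {0.0f, 1.0f, 0.0f};        // light straight overhead
    std::printf("shadow factor: %.3f\n", softShadow(ground, toLight, 4.0f));
    return 0;
}
```

Skinned meshes are the sticking point precisely because sceneSDF would have to be regenerated (or re-voxelized) every frame as the character deforms.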
 
Ray-traced distance field shadows, which are already fully supported in UE4 (they work on all DX11+ GPUs and don't have the performance cost of direct RT area lights), have a vastly better chance of being implemented in the near future (though the current implementation in UE doesn't support skeletal meshes).
Ray-traced shadows do what you can't do with SDFs, in this case animated meshes. Capsule shadows are nice, but the quality is too low compared to RT and they require extra work from artists.
 
Pure RT shadows/reflections/GI/whatever can do anything better than X, Y, Z; nobody's questioning this. But the hardware still isn't powerful enough to reach the same IQ with the same compute budget yet (still waiting on a real-time 30/60 fps denoising tech that works without temporal artifacts). So artists, devs, etc. are still going to find ways and solutions to reach the desired look on the vast majority of available devices.
 
Pure RT shadows/reflections/GI/whatever can do anything better than X, Y, Z; nobody's questioning this. But the hardware still isn't powerful enough to reach the same IQ with the same compute budget yet (still waiting on a real-time 30/60 fps denoising tech that works without temporal artifacts). So artists, devs, etc. are still going to find ways and solutions to reach the desired look on the vast majority of available devices.
Well, you were fine with Quantum Break, and that game has some really awful temporal artifacts. Also, they don't need to be constrained by the same computation budget as previous techniques; if that were the case we would still be using per-vertex lighting, since per-pixel is more expensive.
 
Well, you were fine with Quantum Break, and that game has some really awful temporal artifacts. Also, they don't need to be constrained by the same computation budget as previous techniques; if that were the case we would still be using per-vertex lighting, since per-pixel is more expensive.
I'm fine with Quantum Break because that was the best that could be done at the time on the available hardware (Xbox One; or you can run it on PC without temporal upscaling), in response to folks wanting purely RT-lit scenes from top to bottom, which is still totally ridiculous (and wouldn't even perform or look better than QB today, in 2019, on a 2080 Ti).

I don't really see where the argument is going here besides "Oh, full-blown ray tracing is awesome! So shiny, blah blah, let's use it everywhere. It just works!" OK, I guess... If that makes people happy... Welcome to the VFX world 19 years ago. It didn't make artists' and designers' lives easier, or even speed up render times; we just got physically correct content when and where we wanted it.
 
I'm fine with Quantum Break because that was the best that could be done at the time on the available hardware (Xbox One; or you can run it on PC without temporal upscaling), in response to folks wanting purely RT-lit scenes from top to bottom, which is still totally ridiculous (and wouldn't even perform or look better than QB today, in 2019, on a 2080 Ti).

I don't really see where the argument is going here besides "Oh, full-blown ray tracing is awesome! So shiny, blah blah, let's use it everywhere. It just works!" OK, I guess... If that makes people happy... Welcome to the VFX world 19 years ago. It didn't make artists' and designers' lives easier, or even speed up render times; we just got physically correct content when and where we wanted it.
Games aren't VFX. Games are interactive. Baking and other static solutions limit that interactivity. Techniques like RTRT allow far better quality AND interactivity at the same time.

--------------------------------

More videos:


The denoiser for GI looks broken.
 
Games aren't VFX. Games are interactive. Baking and other static solutions limit that interactivity. Techniques like RTRT allow far better quality AND interactivity at the same time.

This doesn't make any sense whatsoever. Without baking or any of those "nasty" "static" solutions (including all other forms of ray tracing that aren't pure RT, right?) you wouldn't have any "interactive" game to play right now. But then again, who am I to judge? "It just works!", right? There comes a time when some topics just become tiresome.
 
I don't really see where the argument is going here besides "Oh, full-blown ray tracing is awesome! So shiny, blah blah, let's use it everywhere. It just works!" OK, I guess...
Nobody wants a full-blown ray tracing experience now; we are still using a combination of rasterization and ray tracing. That seems like a good way to go until the hardware is powerful enough.
I'm fine with Quantum Break because that was the best that could be done at the time on the available hardware
The QB model is being repeated here, just with better visuals: QB needed a temporal scaling solution to make its visuals work on weak hardware; they even upscaled it from 720p and ran it at 30 fps on Medium settings. Same thing with RT: a combination of DLSS and RT effects is enough to provide good fps on current RT hardware at Ultra graphics, with the added unique and much-enhanced visual flair.
 
Nobody wants a full-blown ray tracing experience now; we are still using a combination of rasterization and ray tracing. That seems like a good way to go until the hardware is powerful enough.

The QB model is being repeated here, just with better visuals: QB needed a temporal scaling solution to make its visuals work on weak hardware; they even upscaled it from 720p and ran it at 30 fps on Medium settings. Same thing with RT: a combination of DLSS and RT effects is enough to provide good fps on current RT hardware at Ultra graphics, with the added unique and much-enhanced visual flair.

I'm all for that, but some folks seem to be hell-bent on "RTRT for everything! It's easier!" It is not; it's just one more tool in a giant toolbox that's already fully packed with sometimes-great solutions, so you don't have to re-invent the wheel every time. If it were so simple, where the hell is the DXR update for Tomb Raider, which was released 5 months ago (and playable even before that)? I mean, it's just shadows... Anyway...
Oh, and my posting about QB was in response to the "you can't do great pre-baked interior GI, RTRT is needed" claim.
 
I'm all for that, but some folks seem to be hell-bent on "RTRT for everything! It's easier!" It is not; it's just one more tool in a giant toolbox that's already fully packed with sometimes-great solutions, so you don't have to re-invent the wheel every time. If it were so simple, where the hell is the DXR update for Tomb Raider, which was released 5 months ago (and playable even before that)? I mean, it's just shadows... Anyway...
Oh, and my posting about QB was in response to the "you can't do great pre-baked interior GI, RTRT is needed" claim.
There are probably a few leaps and connections you're making that you shouldn't be.
a) There are hundreds of games released each year, and only a fraction of them can ship with graphical fidelity that even comes close to ray-traced AO/GI/shadows. So if we're seeing mods for games, put together by a single person, that output better lighting and shadows, then the argument that it's easier would be justified at least with respect to the costs, time, labour, and talent required to produce those visuals.

b) With respect to your "if DXR was so simple, why doesn't Tomb Raider have it yet": that's a loaded question, and there can be a variety of reasons why that patch hasn't shipped yet. For something like BFV, perhaps the priority was there because sales for the title were charting as weak, so anything they could do to get more sales might have been the plan. The case for Tomb Raider is long done: if they already reached their targets, where does the incentive to dedicate a lot of resources to this come from? SOTR is coming to Xbox Game Pass; that's a pretty good sign of where it needs to go to continue to obtain marginal profits.

c) QB is just one example. We could look at a variety of curated titles and see a lot of great baked GI. I think someone in your position can, without a doubt, appreciate the desire in graphics to have things look good from every single angle and distance versus constrained camera angles.

So I want to be clear, I'm not saying you're wrong. But a lot of techniques in rasterization/compute/SDF/voxels/VPL are there to get closer and closer to real-time RT. Some come at heavy computational cost, others with heavy limitations, and others with heavy drawbacks; DXR is likely to have some of those problems too. But we're at the point now where effort, computational power, and costs are converging such that hybrid RT is a viable strategy for games to tackle should they wish to, especially if hardware can accelerate that process.

I do agree that getting to the 'baseline' of the user base being big enough for developers to switch towards an RT-based engine will take a long while, and that the solutions you've put forward today are going to see use while we work through an eventual transition to RT.
 
Because in the case of RT, I don't see how they can achieve similar performance without dedicated hardware, with this generation of shader units.
Maybe they won't call them "RT cores", but they will have to do something on the hardware side to get good performance vs. Nvidia.
Volta (without RT cores) offers the same (or even higher) performance as Turing (with RT cores), at least in Battlefield: https://www.guru3d.com/news-story/n...er-perf-(but-does-not-have-any-rt-cores).html
 
Volta (without RT cores) offers the same (or even higher) performance as Turing (with RT cores), at least in Battlefield: https://www.guru3d.com/news-story/n...er-perf-(but-does-not-have-any-rt-cores).html
That was debunked by proper benchmarks: the 2080 Ti maintained a consistent 50% uplift over the Titan V in Battlefield V (the game is mostly shading bound). In the Quake 2 path tracing and Port Royal benchmarks that advantage grew to 300%. The 2060 was on par with or faster than the Titan V.
 
There are probably a few leaps and connections you're making that you shouldn't be.
a) There are hundreds of games released each year, and only a fraction of them can ship with graphical fidelity that even comes close to ray-traced AO/GI/shadows. So if we're seeing mods for games, put together by a single person, that output better lighting and shadows, then the argument that it's easier would be justified at least with respect to the costs, time, labour, and talent required to produce those visuals.

b) With respect to your "if DXR was so simple, why doesn't Tomb Raider have it yet": that's a loaded question, and there can be a variety of reasons why that patch hasn't shipped yet. For something like BFV, perhaps the priority was there because sales for the title were charting as weak, so anything they could do to get more sales might have been the plan. The case for Tomb Raider is long done: if they already reached their targets, where does the incentive to dedicate a lot of resources to this come from? SOTR is coming to Xbox Game Pass; that's a pretty good sign of where it needs to go to continue to obtain marginal profits.

c) QB is just one example. We could look at a variety of curated titles and see great baked GI. But those titles still pale in comparison to a complete solution. I get that you're from the VFX industry and that there are a lot of parallels between the two industries. But the biggest difference between the two is that in video games, things have to look good from every single angle and distance. In VFX, the camera is constrained; you can do all sorts of things to make the lighting right before you bake it for that one camera setup. And when you're trying to solve for 'every single angle and distance', even with the technology we have today, we still fall short of making everything look good everywhere. There's a very specific reason why curated adventure titles tend to look better than open-world ones, and it largely comes down to constraint.

So I want to be clear, I'm not saying you're wrong. But a lot of techniques in rasterization/compute/SDF/voxels/VPL are there to get closer and closer to real-time RT. Some come at heavy computational cost, others with heavy limitations, and others with heavy drawbacks; DXR is likely to have some of those problems too. But we're at the point now where effort, computational power, and costs are converging such that hybrid RT is a viable strategy for games to tackle should they wish to, especially if hardware can accelerate that process.

Placing probes is not a pleasure for lighting artists.

I think ray tracing can save lighting artists much more time in real-time rendering than it does in offline rendering.
 
Thankfully, light probes don't have to be placed manually one by one:

https://docs.unrealengine.com/en-US/Engine/Rendering/LightingAndShadows/VolumetricLightmaps

https://forum.unity.com/threads/light-probes-placement-interior-affects-exterior-lighting.546195/
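
To make the "not one by one" point concrete, here's a rough sketch of the grid idea (my assumptions, not UE4's Volumetric Lightmap or Unity's actual code): probes get dropped automatically on a regular 3D grid over the level bounds during the bake, and at runtime a shading point just interpolates the eight probes around it. Real systems additionally refine the grid near geometry, which is where the remaining manual work tends to come from.

```cpp
// Rough sketch of a volumetric probe grid (assumption: uniform grid +
// trilinear interpolation; real engines refine density near geometry).
#include <algorithm>
#include <cstdio>
#include <vector>

struct RGB { float r, g, b; };

struct ProbeGrid {
    int nx, ny, nz;                 // probes per axis
    float minX, minY, minZ;         // min corner of the level bounds
    float cell;                     // spacing between probes
    std::vector<RGB> irradiance;    // nx*ny*nz values, filled by the bake

    const RGB& at(int x, int y, int z) const {
        return irradiance[(z * ny + y) * nx + x];
    }

    // Trilinear interpolation of the eight probes surrounding position p.
    RGB sample(float px, float py, float pz) const {
        float fx = std::clamp((px - minX) / cell, 0.0f, nx - 1.001f);
        float fy = std::clamp((py - minY) / cell, 0.0f, ny - 1.001f);
        float fz = std::clamp((pz - minZ) / cell, 0.0f, nz - 1.001f);
        int x0 = int(fx), y0 = int(fy), z0 = int(fz);
        float tx = fx - x0, ty = fy - y0, tz = fz - z0;
        RGB out = {0.0f, 0.0f, 0.0f};
        for (int dz = 0; dz <= 1; ++dz)
            for (int dy = 0; dy <= 1; ++dy)
                for (int dx = 0; dx <= 1; ++dx) {
                    float w = (dx ? tx : 1.0f - tx) * (dy ? ty : 1.0f - ty) * (dz ? tz : 1.0f - tz);
                    const RGB& pr = at(x0 + dx, y0 + dy, z0 + dz);
                    out.r += w * pr.r; out.g += w * pr.g; out.b += w * pr.b;
                }
        return out;
    }
};

int main() {
    // Toy "bake": a 4x4x4 grid where indirect light simply gets brighter with height.
    ProbeGrid grid{4, 4, 4, 0.0f, 0.0f, 0.0f, 1.0f, {}};
    grid.irradiance.resize(4 * 4 * 4);
    for (int z = 0; z < 4; ++z)
        for (int y = 0; y < 4; ++y)
            for (int x = 0; x < 4; ++x)
                grid.irradiance[(z * 4 + y) * 4 + x] = {0.1f, 0.1f, 0.25f * y};

    RGB c = grid.sample(1.5f, 2.25f, 0.5f);
    std::printf("interpolated irradiance: %.3f %.3f %.3f\n", c.r, c.g, c.b);
    return 0;
}
```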

It's not fully manual, but it still requires some work; sometimes it works fine on one scene but needs extra work on others.

Edit: I think that if one day character and environment artists no longer need to go from a high-polygon model to a low-polygon one plus normal maps, it will help productivity, maybe even more than ray tracing.
 
https://forum.unity.com/threads/light-probes-placement-interior-affects-exterior-lighting.546195/

It's not fully manual, but it still requires some work; sometimes it works fine on one scene but needs extra work on others.
Unity's LPPVs are, as of today, still inferior to UE4's Volumetric Lightmaps. Unity also has Occlusion Probes (which were developed for Book of the Dead). UE4's lighting stack is still vastly superior to Unity's (aside from the progressive lightmapper, which is a plus for Unity compared to UE4)... but that's a topic for another thread.
 