Requirements to declare a game fully Path-Traced

Alright graphics nerds! Let's define actual conditions in the rendering path that would constitute a game being fully path-traced, regardless of noise. I started this thread so that we can all agree on the requirements to declare this. In truth, none of the games I've seen so far implement 100% of the requirements to define FPT. I also haven't played Portal, Quake, or Minecraft, so I can't speak to those. I just want us on the same page on each of the conditions.

In the overall path-traced rendering path in VFX, we use the BRDF with PDFs (probability density functions) and CDFs (cumulative distribution functions) driven by importance sampling for each of the terms in the equation in order to compute a final color for a pixel.
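To make the importance-sampling bit concrete, here is a minimal sketch in toy Python (made-up function names, not from any shipping renderer) of estimating the Lambertian diffuse term with cosine-weighted sampling; dividing by the PDF is what keeps the estimator unbiased:

```python
import math, random

def sample_cosine_hemisphere():
    """Cosine-weighted direction around the +Z normal (pdf = cos(theta) / pi)."""
    u1, u2 = random.random(), random.random()
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    x, y = r * math.cos(phi), r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - u1))      # cos(theta)
    return (x, y, z), z / math.pi           # direction, pdf

def estimate_diffuse(albedo, incoming_radiance, samples=64):
    """Monte Carlo estimate of the Lambertian term: integral of (albedo/pi) * Li * cos(theta)."""
    total = 0.0
    for _ in range(samples):
        wi, pdf = sample_cosine_hemisphere()
        li = incoming_radiance(wi)           # in a real renderer this traces a ray
        brdf = albedo / math.pi              # Lambert
        total += brdf * li * wi[2] / pdf     # wi[2] is cos(theta)
    return total / samples

# Toy usage: a uniform white sky of radiance 1.0 converges to the albedo (0.5).
print(estimate_diffuse(0.5, lambda wi: 1.0))
```

The same build-a-CDF-and-invert-it idea is what drives picking lights by power or sampling an HDR sky by luminance, which comes up again further down.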

What are they?

1) Direct Diffuse - This is the main component of the rendering equation, and we usually see Lambert and Oren-Nayar. We all know that if you do direct path-traced lighting, then every light source will:
a) Cast a soft shadow with penumbra behavior.
b) Have its area-light shape taken into account (see the sampling sketch after this list).

2) Indirect Diffuse - This is what we all see as the main signal of RTGI. It gathers luminance from neighboring objects, so color 'bleeds' onto the surface and objects pick up tint. It also produces ambient occlusion naturally, simply from sampling the hemisphere above the point and averaging what it sees: directions blocked by another object contribute darkening at that location.

3) Direct Specular - This manifests as the specular highlight. In PT, it must pick up the exact shape of the light source, spread according to the roughness of the material.

4) Indirect Specular - This is reflections, and they behave very similarly to direct diffuse in that the reflection is sharp near the source and spreads out with distance from it. Reflections also happen on materials that aren't glass (i.e. whose roughness isn't < 0.1). We can also put refractions in this category: light rays that have been bent after traveling through a medium, showing objects distorted through the material.

5) Emissive - These are objects that are light sources themselves. Think of a black-body material or something that glows due to heat.
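To make 1a/1b above concrete (the sketch referenced in the list), here is a minimal toy-Python example of sampling a rectangular area light for direct lighting; the `visible()` call is a hypothetical stand-in for tracing a shadow ray. The penumbra is nothing more than the average of many light-point samples that are partly blocked and partly not:

```python
import math, random

def sample_rect_light(corner, edge_u, edge_v, area):
    """Uniformly pick a point on a rectangular area light; the pdf is 1/area."""
    u, v = random.random(), random.random()
    p = [corner[i] + u * edge_u[i] + v * edge_v[i] for i in range(3)]
    return p, 1.0 / area

def direct_diffuse_from_area_light(x, normal, albedo, light, visible, samples=16):
    """Next-event estimation of direct diffuse lighting from one area light.

    `light` is (corner, edge_u, edge_v, emitted_radiance, light_normal, area);
    `visible(a, b)` stands in for tracing a shadow ray between two points."""
    corner, eu, ev, emit, light_n, area = light
    total = 0.0
    for _ in range(samples):
        p, pdf = sample_rect_light(corner, eu, ev, area)
        to_light = [p[i] - x[i] for i in range(3)]
        dist2 = sum(d * d for d in to_light)
        wi = [d / math.sqrt(dist2) for d in to_light]
        cos_surf = max(0.0, sum(wi[i] * normal[i] for i in range(3)))
        cos_light = max(0.0, -sum(wi[i] * light_n[i] for i in range(3)))
        if cos_surf > 0.0 and cos_light > 0.0 and visible(x, p):
            g = cos_surf * cos_light / dist2            # geometry term
            total += (albedo / math.pi) * emit * g / pdf
    return total / samples
```

Because the samples cover the whole emitter, the light's shape shows up in the shadow penumbra for free; there is no separate "soft shadow" hack.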

What are the subsets?

1) Environment lighting - This is the sky. Mostly done using cube maps in rasterization. A PT environment literally samples the sky texture, importance-sampling rays toward the highest-luminance regions of the texture (i.e. the Sun). This also allows materials that are outdoors but in the shadow of the Sun to pick up the blue hue of the sky.

2) Subsurface Scattering - This is a very difficult pass, as it computes several bounces inside a material that only exit somewhere else on it. The light scatters inside the body, causing soft shadowing on skin before the skin's tone is reflected back out. Also used for plants, wine bottles, etc.

3) Hair - Another difficult pass. If done right, this is the most complicated PT feature, as it takes every single strand of hair into account and runs a full BRDF computation on it with literally four components (primary, secondary, backscattering, and interreflections). It shows up as great self-shadowing, and even when the hair is blocked from the Sun it still shows texture. Only two games I have seen do this: Indy and DD2's PT mode.

4) Participating media - Clouds, smoke, and fog. This is probably by far the most expensive: not only do you run all the common computations, but the ray marching (à la Batman AK) destroys performance.
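For the participating-media point, a minimal sketch of the ray march that makes it so expensive (toy Python; `sigma_t` is a hypothetical extinction/density lookup, e.g. a fog or smoke field). Every such march happens on top of all the usual path-tracing work:

```python
import math

def transmittance(sigma_t, start, end, step=0.1):
    """Beer-Lambert transmittance along a segment through a heterogeneous medium."""
    length = math.dist(start, end)
    direction = [(end[i] - start[i]) / length for i in range(3)]
    optical_depth, t = 0.0, 0.0
    while t < length:
        dt = min(step, length - t)
        mid = [start[i] + (t + 0.5 * dt) * direction[i] for i in range(3)]
        optical_depth += sigma_t(mid) * dt   # the march: one density lookup per step
        t += dt
    return math.exp(-optical_depth)

# Toy usage: uniform fog with extinction 0.5 over a 4-unit segment -> exp(-2), about 0.135.
print(transmittance(lambda p: 0.5, (0.0, 0.0, 0.0), (4.0, 0.0, 0.0)))
```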

If anyone cares to mention anything I may have missed, it would be appreciated.

If we have this list, we can then determine which games come close to implementing ALL of these features. If there are none, that tells you how far we have to go with the hardware, on top of solving noise, to achieve truly film-quality games.
 
I don't really think we'll get there in any meaningful timeframe. AFAICS, "Path Tracing" is used as a marketing term: "Not only does this game do Ray Tracing, which is reflections and stuff, but it even does Path Tracing, which is better." It seems devoid of any technical merit. Path Tracing is RT with at least a bounce/iteration. ¯\_(ツ)_/¯

It's an odd situation. We never had previous graphics terms grouped under a quality umbrella. Rasterising. Rastering+. Uberising. Ultrasterising+ Xtreme. "This game features the latest Ultrasterising+ Xtreme, with SSAO, screen-space reflections, light-probe illumination, day-and-night cycle, PBR materials, cubic reflection maps, antialiasing, triangle drawing and 16 million colours."

In terms of full raytracing in a game, the holy grail, I think a lot of your targets aren't needed. A suitable fake for SSS would suffice. The main point is unified lighting, so area lights with natural, accurate lighting and shadowing, zero hacks. I'd argue temporal accumulation should be a no-no as that's not how light works, and when I flick a light on, I want the room illuminated then and there and not half a second later! ;) Taken to its most extreme, you could ray-trace the whole scene and just accumulate the samples over 10 seconds. It'd look amazing... in stills. Perhaps we'd need to cap that to a certain perceptible timeframe? Not sure. It seems 120 fps isn't really needed, but let's say we are tracing light at 30 fps in a 120 fps game. Would that be noticeable? How low can we go?
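Just to put a rough number on the "flick a light" complaint, a back-of-the-envelope sketch (the blend factor is made up) of how long an exponentially accumulated image takes to catch up to a light that just switched on:

```python
# Exponential temporal accumulation: new = history + alpha * (current - history).
# With alpha = 0.05 per frame at 60 fps, how long until a newly lit room reaches
# about 95% of its final brightness in the accumulated image?
alpha, target, value, frames = 0.05, 1.0, 0.0, 0
while value < 0.95 * target:
    value += alpha * (target - value)
    frames += 1
print(frames, "frames, about", round(frames / 60.0, 2), "seconds")   # ~59 frames, ~1 second
```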
 
Would a game that doesn't implement some of these techniques because it doesn't need to (e.g. no artificial light sources, no hair/fur on characters) count as fully RT if it meets all of the other criteria? Would a fully ray-traced Tetris clone count?
 
Would a game that doesn't implement some of these techniques because it doesn't need to (e.g. no artificial light sources, no hair/fur on characters) count as fully RT if it meets all of the other criteria? Would a fully ray-traced Tetris clone count?

I think it still counts.

I don’t agree with Indy being called fully raytraced. That’s obviously not true as it’s still using SSR and shadow maps. How can that be called fully raytraced?
 
In terms of full raytracing in a game, the holy grail, I think a lot of your targets aren't needed. A suitable fake for SSS would suffice.
It's hard to fake it inside a full PT pipeline. We did the inverse for leaves, trees, etc., where the diffuse contribution would be computed on both sides. But SSS proper needed to run under the PT pipeline.
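For what it's worth, the two-sided diffuse trick being described is tiny in code. This is purely my guess at the gist (toy Python), not their actual pipeline:

```python
def two_sided_cosine(normal, wi):
    """Thin surfaces such as leaves: light the back side as if it were the front.

    Taking the absolute value of the cosine is equivalent to flipping the normal
    whenever the light arrives from the underside."""
    cos_theta = sum(n * w for n, w in zip(normal, wi))
    return abs(cos_theta)
```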

The main point is unified lighting, so area lights with natural, accurate lighting and shadowing, zero hacks. I'd argue temporal accumulation should be a no-no as that's not how light works, and when I flick a light on, I want the room illuminated then and there and not half a second later! ;) Taken to its most extreme, you could ray-trace the whole scene and just accumulate the samples over 10 seconds. It'd look amazing... in stills. Perhaps we'd need to cap that to a certain perceptible timeframe? Not sure. It seems 120 fps isn't really needed, but let's say we are tracing light at 30 fps in a 120 fps game. Would that be noticeable? How low can we go?
Updating the lighting at 30 fps inside a 120 fps output would look weird. Similar to the cutscenes in Indy.
 
I think it still counts.

I don’t agree with Indy being called fully raytraced. That’s obviously not true as it’s still using SSR and shadow maps. How can that be called fully raytraced?
Agreed. IMHO a fully path-traced game is Quake 2 RTX. That game is superb in that regard, and I managed to run it at a stable 60 fps on my A770. The experience is amazing. Too bad my favourite Quake game is the original; I wish it received the fully path-traced treatment.

4K - even with DLSS, XeSS, etc. - with full Path Tracing should be the norm for mid-range GPUs in the next gen of GPUs. I mean, the ones that most people buy.
 
4K - even with DLSS, XeSS, etc. - with full Path Tracing should be the norm for mid-range GPUs in the next gen of GPUs. I mean, the ones that most people buy.
Not even close. DLSS is good, but it's not THAT good. The day we see full-PT GPUs will be the day they can make a movie with them. ;) Like I mentioned in the OP, there is no way we are even close to path tracing all those elements of the rendering equation, even at 1 FPS, WITH DLSS.
 
@Shifty Geezer - Talked with my friend at Blizzard animation.

Me: "Tell me about film. How many hours is it taking to render? Are you guys still using render farms?"
Him: "Depends. Anywhere from a minute to 40 hours.:ROFLMAO:"
Me: "Are you guys rendering Octane or some other CPU based renderer?"
Him: "We are on PRMan and Redshift"
Him: "Render farms are still a thing.."

There you go!
 
To me, unbiased bidirectional path tracing is full or true path tracing. As a reference implementation you can look at Maxwell or Indigo.
Does something like that exist in a modern videogame? Just curious... I haven't seen anything like that; maybe some videos of Cyberpunk 2077 and Quake 2 RTX would be the closest thing.
 
Does something like that exist in a modern videogame? Just curious... I haven't seen anything like that; maybe some videos of Cyberpunk 2077 and Quake 2 RTX would be the closest thing.
I don't think so. There are two problems:
a) Bidirectionality means you also start tracing from the light sources. If you have 20 light sources, you already have 20× the rays to trace, you don't know which paths connect initially, and there are easily thousands of lights on screen nowadays.
b) Unbiased means you can't use something like ReSTIR (which is akin to a limited-size cache), and you can't put emphasis (bias) on the dense range of the BRDF (where it reflects a lot of light), because even the very dim parts of the BRDF could be lit up by a million lux (tracing from the camera, which is not a light, you just don't know).

I'd prefer the renderer to also do spectral rendering: not tracing a spectrum/distribution of light the way an RGB value represents one, but rays with individual wavelengths. Since different wavelengths have different refraction angles, you get the actual naturalistic light behaviour. Maxwell does this. But it's really a complement to, rather than a requirement for, path tracing.
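A tiny illustration of why per-wavelength rays matter (toy Python; the Cauchy coefficients below are roughly those quoted for BK7 crown glass, used purely as an example). The same incoming ray refracts to a slightly different angle per wavelength, which is exactly the dispersion an RGB-only tracer cannot produce:

```python
import math

def cauchy_ior(wavelength_um, a=1.5046, b=0.00420):
    """Cauchy's equation: index of refraction as a function of wavelength (micrometres)."""
    return a + b / wavelength_um ** 2

def refracted_angle_deg(incidence_deg, ior):
    """Snell's law for a ray entering the medium from air (n = 1)."""
    return math.degrees(math.asin(math.sin(math.radians(incidence_deg)) / ior))

# Red, green and blue exit at slightly different angles: dispersion.
for name, wl in [("red", 0.65), ("green", 0.55), ("blue", 0.45)]:
    print(name, round(refracted_angle_deg(45.0, cauchy_ior(wl)), 2), "degrees")
```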

Edit: the interesting thing with bidirectional paths is that you can store/keep them for more than one frame, and when you change the light you don't need to trace again, only recalculate the contribution. Sure, it's tricky with dynamic geometry, but manageable.
Maxwell has this nice tool where you can adjust any of the light sources in the final rendered image a posteriori, because they store the paths in the file. This leads to apparently weird stuff we sometimes do, where we render out all the night lights as well as the day lights or dynamic lights, and then "mix" whatever we want on the final image, like tracks in an audio composition.
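That a-posteriori re-lighting works because light transport is linear in the emitters. A toy sketch of the "tracks in an audio composition" idea, with per-light pixel buffers standing in for Maxwell's stored paths (which I am only guessing at):

```python
# Each light's contribution is traced once and stored separately; re-lighting the
# shot is then just a weighted per-pixel sum, with no re-tracing needed.
def remix(light_buffers, gains):
    """light_buffers: {light name: list of pixel values}; gains: {light name: multiplier}."""
    width = len(next(iter(light_buffers.values())))
    out = [0.0] * width
    for name, buf in light_buffers.items():
        g = gains.get(name, 0.0)
        for i, v in enumerate(buf):
            out[i] += g * v
    return out

# Toy usage: turn the sun off and double a practical lamp, on a 3-pixel "image".
buffers = {"sun": [0.8, 0.6, 0.1], "lamp": [0.1, 0.2, 0.5]}
print(remix(buffers, {"sun": 0.0, "lamp": 2.0}))   # [0.2, 0.4, 1.0]
```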
 
Does something like that exist in a modern videogame? Just curious... I haven't seen anything like that; maybe some videos of Cyberpunk 2077 and Quake 2 RTX would be the closest thing.
I don't understand the hype around Q2 RTX. I loaded it up yesterday and it's far from matching the AAA games that use RT today. It might have the full diffuse direct/indirect contribution to the scene and use its light sources for RT shadows, but other than that it's nowhere near showing off full PT in all its glory. The non-PBR materials are a non-starter, for one.
 
I don't think so. There are two problems:
a) Bidirectionality means you also start tracing from the light sources. If you have 20 light sources, you already have 20× the rays to trace, you don't know which paths connect initially, and there are easily thousands of lights on screen nowadays.
b) Unbiased means you can't use something like ReSTIR (which is akin to a limited-size cache), and you can't put emphasis (bias) on the dense range of the BRDF (where it reflects a lot of light), because even the very dim parts of the BRDF could be lit up by a million lux (tracing from the camera, which is not a light, you just don't know).

I'd prefer the renderer to also do spectral rendering: not tracing a spectrum/distribution of light the way an RGB value represents one, but rays with individual wavelengths. Since different wavelengths have different refraction angles, you get the actual naturalistic light behaviour. Maxwell does this. But it's really a complement to, rather than a requirement for, path tracing.

Edit: the interesting thing with bidirectional paths is that you can store/keep them for more than one frame, and when you change the light you don't need to trace again, only recalculate the contribution. Sure, it's tricky with dynamic geometry, but manageable.
Maxwell has this nice tool where you can adjust any of the light sources in the final rendered image a posteriori, because they store the paths in the file. This leads to apparently weird stuff we sometimes do, where we render out all the night lights as well as the day lights or dynamic lights, and then "mix" whatever we want on the final image, like tracks in an audio composition.
many thanks for the very detailed explanation. Just guessing here, but I don't think even the best nVidia GPUs of the 5000 series can run the kind of raytracing you are describing at a decent framerate, maybe at 5fps, perhaps less.

I guess you use Maxwell then? I can't help but wonder which tools people used in the past when presenting raytraced images that took days to render at SIGGRAPH and similar events. They were similar to this image, just at a lower resolution.

[Image: checkerboard-design.jpg]


I don't understand the hype around Q2 RTX. I loaded it up yesterday and it's far from matching the AAA games that use RT today. It might have the full diffuse direct/indirect contribution to the scene and use its light sources for RT shadows, but other than that it's nowhere near showing off full PT in all its glory. The non-PBR materials are a non-starter, for one.
Well, Quake 2 RTX is just the most realistic game, lighting-wise, that I've played to date. Along with Quake 1 RT now.

You mentioned in the very first post of this thread that no game meets your criteria to be considered a full PT game. So which game do you consider the closest to meeting those criteria? 'Cos nothing comes to mind outside of Quake 1 and 2, and I haven't seen anything that remotely resembles a Maxwell or Indigo render in a videogame either.
 
Portal RTX can be tweaked to make it look very good and is perhaps one of the best examples.

It just runs like a dog, and frame gen (which always felt horrendous in it) is a 100% requirement for pretty much every GPU.
 
many thanks for the very detailed explanation. Just guessing here, but I don't think even the best nVidia GPUs of the 5000 series can run the kind of raytracing you are describing at a decent framerate, maybe at 5fps, perhaps less.
The nasty problem with ray tracing is that you have to deal with an exponentially growing number of rays. Leaving unbiasedness aside, the idea that you can somehow, correctly and quickly, almost magically, find one path from the pixel to a light (this would be strictly 1 spp) and then have a correct result is just fantasy.

Of course, there's a continuum from not-at-all through super crap all the way to physically correct. What is currently in use and being developed for real-time ray tracing is all even more hardcore pseudo-perceptive approximation stuff than upscaling. You get a plausible result, but it's not correct.

My personal opinion is that because of the inherent complexity of light transport, we will never get correct realtime raytracing. And the position of real-time raytracing on that continuum will move towards the correct end in smaller and smaller increments. Just as an analogy (not the real formula): to make your image twice as good you need to trace something like the square of the number of rays; we are maybe a factor of 2000 short on spp compared to what a converging render needs, so we have something like a 2,000,000× deficit in tracing performance. We won't make the hardware 2 million times faster anytime soon.
The bottom line is, real-time raytracing is okay, it's an approximation, and it will always miss out on some effects that will eventually emerge in offline tracing. It's very coarse.
If you watch the progress of an unbiased raytracer you will be amazed how subtle but important the accumulation after, say, 4 minutes is. You sit there and think it looks awesome, it's what you wanted/expected, and that maybe you could abort it now and call it a day. But you keep observing, and then these tiny shadows and highlights and indirect refractive effects appear, and you think damn, this is awesome (and the earlier 4-minute stop was actually really bad). Then after a couple of hours you look again and it has become a photo; it's effectively real. And you cannot attribute this to any one particular thing, it's very holistic.
It's not just that you can leave the uncanny valley behind; you actually end up with an image you couldn't know you wanted. The mind can only expect/imagine so much realism or correctness, and these images surpass your expectation/imagination. I don't see real-time even leaving the uncanny valley.
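For reference, the textbook relation behind that back-of-the-envelope analogy (a rough model that ignores ReSTIR-style reuse and denoising):

```latex
% Monte Carlo RMS error falls with the square root of the sample count,
% so cutting the noise by a factor k costs k^2 times the samples.
\sigma_N \;\propto\; \frac{1}{\sqrt{N}}
\qquad\Longrightarrow\qquad
N \;\to\; k^{2}\,N \quad \text{for noise reduced by a factor } k
```

So halving the noise costs 4x the samples and a 10x cleaner image costs 100x, which is why brute force alone does not close the gap.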

I guess you use Maxwell then? I can't help but wonder which tools people used in the past when presenting raytraced images that took days to render at SIGGRAPH and similar events. They were similar to this image, just at a lower resolution.
Yes, I've done professional Arch-Viz with Maxwell for a couple of years, about 20 years ago (not saying I'm a particularly good designer, I'm an engineer, but I managed to eat and pay rent). I've been interested in 3D since the end of the 80s. Went through Imagine, Real3D, Lightwave, Povray, Radiance, Cinema4D, Brazil, VRay, Maxwell, and many more. Going through standard raytracing, radiosity (remember Half-Life?), photon mapping, biased path tracing and so on. As an engineer I looked at what the algorithms did, and it actually helps a lot if you do lighting-artist work, or compose a shot in such a way that the flaws are not that glaring.

Without diminishing the achievements in real-time raytracing, in comparison to that high bar it's producing comic-book stuff. The same way you get accustomed to the quality of new games, can't unsee the improvement, and once-cherished games suddenly look really weird, you can't unsee what unleashed raytracing can do. If you give Maxwell enough time, it makes you literal photos; it's that correct.

People who presented at shows most often used software with source-code access. So sometimes it was just Povray plus the extra code; as you only advance a specific part of the algorithm, you don't want to reimplement the entire infrastructure. Sometimes it was just a custom toy project of 4k lines. The algorithm isn't that complicated, it just takes time.
 
Did you ever use Realsoft3D? That was a pure ray tracer, nothing fancy, no forced shortcuts. As a result, if you eschewed the shader-based lighting and just leant on raw tracing, no light-sources were needed. You could create an object, give it an inherent brightness, and then just sample a gazillion rays with microscopic surface perturbations to add roughness, just brute forcing the light transport. For specular highlights, you needn't use a specular term (you could add one in a shader) but could actually sample real surface roughness! Hence soft reflections, soft shadows, etc. Beautiful renders for the time, but too slow for anyone to use it. Eventually it was superseded by proper light-energy tracers with various acceleration hacks that got good enough results in a much shorter timeframe.
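That brute-force roughness idea is simple enough to sketch. This is toy Python and my guess at the gist rather than Realsoft3D's actual code: jitter the normal per sample in proportion to roughness and average the traced reflections, and glossy highlights and soft reflections fall out for free:

```python
import math, random

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return [x / length for x in v]

def reflect(wi, n):
    """Mirror the incoming direction wi about the normal n."""
    d = sum(a * b for a, b in zip(wi, n))
    return [wi[i] - 2.0 * d * n[i] for i in range(3)]

def rough_reflection(wi, normal, roughness, trace, samples=64):
    """Average many reflections off a randomly perturbed normal.

    `trace(direction)` is a hypothetical stand-in for following the reflected ray
    and returning the radiance it finds; larger `roughness` jitters the normal
    more, which blurs both the reflection and the highlight."""
    total = 0.0
    for _ in range(samples):
        jitter = [random.gauss(0.0, roughness) for _ in range(3)]
        n = normalize([normal[i] + jitter[i] for i in range(3)])
        total += trace(reflect(wi, n))
    return total / samples
```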
 
Did you ever use Realsoft3D?
Yeah, that is Real3D. I was still in school, so I just played around with it; not for the raytracer, but because I was fascinated by the parametric geometry. Basically this image hooked me:
[image attachment]

To some degree it's a precursor to Rhino3D and similar CAM-related programs.

Well, the CPU I ran it on was a 68040 at 25 MHz; for a single person with a single CPU it's tricky to get a lot of results out of it, it's just too slow. :eek: But it was very educational.
 