Requirements to declare a game fully Path-Traced

The nasty problem with ray tracing is that you have to deal with an exponentially growing number of rays. Disregarding unbiased rendering for a moment: the idea that you can find one path from the pixel to a light (which would be strictly 1 spp) somehow correctly and quickly, almost magically, and then have a correct result is just fantasy.
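Just to illustrate the 1 spp point, here's a toy Monte Carlo demo (not a renderer; the integrand is a stand-in for the light-transport integral, and the numbers are made up):

```cpp
// Toy Monte Carlo demo: estimate a "pixel" integral with 1 sample vs. many.
// The integrand stands in for incoming radiance; the point is the variance of
// a 1-spp estimate, not the rendering itself.
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    // "Radiance" with a sharp highlight: most samples miss it, a few are huge.
    auto radiance = [](double x) { return x < 0.01 ? 100.0 : 0.5; };
    // True integral over [0,1]: 0.01 * 100 + 0.99 * 0.5 = 1.495

    auto estimate = [&](int spp) {
        double sum = 0.0;
        for (int i = 0; i < spp; ++i) sum += radiance(u(rng));
        return sum / spp;
    };

    int spps[] = {1, 16, 1024, 65536};
    for (int spp : spps)
        std::printf("spp=%6d  estimate=%8.4f  (true value 1.495)\n",
                    spp, estimate(spp));
}
```

With 1 spp the "pixel" is either 0.5 or 100.0, i.e. wildly wrong either way; only with lots of samples does it crawl towards the true value.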

Of course, there's a continuum from not at all, to super crap, all the way to physically correct. What is currently in use and being developed for real-time ray tracing is even more hardcore pseudo-perceptual approximation than upscaling. You get a plausible result, but it's not correct.

My personal opinion is that, because of the inherent complexity of light transport, we will never get correct real-time ray tracing. And the position of real-time ray tracing on that continuum will move towards the correct end in smaller and smaller increments. Just as an analogy (not the real formula): to make your image twice as good you need roughly the square of the number of rays; we are maybe a factor of 2000 short in spp compared to what a converging render needs, so we have something like a 2,000,000x deficit in tracing performance. We won't make the hardware two million times faster anytime soon.
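For reference, the real relation underneath that analogy is just the standard Monte Carlo convergence rate (textbook stuff, nothing renderer-specific):

```latex
% Noise (standard deviation) of a Monte Carlo pixel estimate with N samples:
\sigma_N \propto \frac{1}{\sqrt{N}}
% so cutting the noise by a factor k costs roughly k^2 as many samples:
N_{\text{needed}} \approx k^2 \, N_{\text{current}}
```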
The bottom line is: real-time ray tracing is okay, it's an approximation, and it will always miss out on some effects that eventually emerge in offline tracing. It's very coarse.
If you watch the progress of an unbiased ray tracer, you will be amazed how subtle but important the accumulation after, say, 4 minutes is. You sit there and think it looks awesome, it's what you wanted/expected, and that maybe you could abort it now and call it a day. But you keep observing, and then these tiny shadows and highlights and indirect refractive effects appear, and you think, damn, this is awesome (and the earlier 4-minute stop was actually really bad). Then after a couple of hours you look again and it has become a photo; it's effectively real. And you cannot attribute this to any one particular thing; it's very holistic.
It's not just that you can leave the uncanny valley behind, you actually end up with an image you couldn't have known you wanted. The mind can only expect/imagine so much realism or correctness; these images surpass your expectation/imagination. I don't see real-time even leaving the uncanny valley.


Yes, I've done professional arch-viz with Maxwell for a couple of years, about 20 years ago (not saying I'm a particularly good designer, I'm an engineer, but I managed to eat and pay rent). I've been interested in 3D since the end of the '80s. Went through Imagine, Real3D, Lightwave, Povray, Radiance, Cinema4D, Brazil, VRay, Maxwell, and many more. Going through standard ray tracing, radiosity (remember Half-Life?), photon mapping, biased path tracing and so on. As an engineer I looked at what the algorithms did, and it actually helps a lot when you do lighting-artist work, or compose a shot in such a way that the flaws are not that glaring.

Without diminishing the achievements in real-time ray tracing: compared to that high bar, it's producing comic-book stuff. The same way you get accustomed to the quality in new games and can't unsee the improvement, and once-cherished games suddenly look really weird, you can't unsee what unleashed ray tracing can do. If you give Maxwell enough time, it makes you literal photos; it's that correct.

People who presented at shows most often used software they had source code access to. So sometimes it was just Povray plus the extra code: since you only advance a specific part of the algorithm, you don't want to reimplement the entire infrastructure. Sometimes it was just a custom toy project of 4k lines. The algorithm isn't that complicated, it just takes time.
Yes! Now I remember, the program they used to generate those renders for graphics conventions like Siggraph and so on was Povray. I had the source files on a disc that came with PC magazines in the late '90s. They showed the final image in the printed magazine, but the source files were included on the disc. I didn't understand the code, but I loved to imagine how they generated it. Nowadays I see it more like what they call Folding (as in Folding@home) for medical research, but for 3D rendering. It took quite a few days to render.

Thanks again for the thorough explanation. Reading that I can understand why current RT is so mediocre overall.

Even at a very basic level, though, you can see its potential, and some games gain a lot of graphical fidelity by adding it, but in quite a few games the performance cost doesn't justify the fidelity gains imho.

Resident Evil games come to mind, for instance. Elden Ring, Doom Eternal, and many others.
 
If you look at Blender, they talk about 1000 rays per pixel, 12 secondary rays, and 4 light sources, giving 398,131,200,000 rays per frame at 4K.
That is a far cry from, e.g., CyberPunk with 1 ray per pixel, 4 secondary rays + a radiance cache.
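Roughly where those numbers come from, taking the quoted figures at face value (just the multiplication; the radiance cache contribution isn't counted):

```cpp
// Back-of-the-envelope ray counts from the figures above (taken at face value).
#include <cstdio>
#include <cstdint>

int main() {
    const std::uint64_t pixels4k = 3840ull * 2160ull;          // 8,294,400 pixels

    // Offline-style settings quoted for Blender:
    // 1000 spp x 12 secondary rays x 4 light sources.
    const std::uint64_t blender = pixels4k * 1000ull * 12ull * 4ull;

    // Real-time settings quoted for CyberPunk, same multiplication scheme:
    // 1 spp x 4 secondary rays (radiance cache not counted).
    const std::uint64_t cyberpunk = pixels4k * 1ull * 4ull;

    std::printf("Blender-style:   %llu rays/frame\n",
                (unsigned long long)blender);    // 398,131,200,000
    std::printf("CyberPunk-style: %llu rays/frame\n",
                (unsigned long long)cyberpunk);  // 33,177,600
    std::printf("ratio:           ~%llux\n",
                (unsigned long long)(blender / cyberpunk));  // ~12,000x
}
```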

The question is when we reach "enough" at a minimum of 60 FPS, where the difference is hard to discern, and whether the answer will be "brute force ray tracing" or AI rendering.
 
Yeah, that is Real3D.
I'm speaking specifically about their rewrite, technically numbered v4. Real3D was on the Amiga and used CSG for geometry, making it very fast. Realsoft 3D was a completely rewritten raytracer, on Windows and forgoing the Amiga. It had some great tech, like pure HOS tracing with JIT tessellation, so perfect curves, even for SDS. Shaders were written in their semi-visual language VSL, based on Forth. The complete render pipeline was open to plug into, so you could add your own custom code at pretty much every step, from ray intersection to surface illumination to light resolution. Very flexible, but just so incredibly slow as a result!
 
I don't think so. There are two problems:
a) Bidirectionality means you also start tracing from the light sources: if you have 20 light sources, you already have 20x the rays to trace, you don't know up front which paths will connect (there's a toy sketch of this below), and there are easily thousands of lights in a scene nowadays.
b) Unbiased means you can't use something like ReSTIR (which is akin to a limited-size cache), and you can't put all the emphasis (bias) on the dense range of the BRDF (where it reflects a lot of light) and neglect the dim parts, because even the very dim parts of the BRDF could be lit up by a million lux (tracing from the camera, which is not a light, you just don't know).
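The sketch for point a), with made-up types, just to show how the light count multiplies into the per-pixel work (a real BDPT would also weight every connection via multiple importance sampling):

```cpp
// Sketch of why bidirectional tracing multiplies work with the light count.
#include <cstdio>
#include <vector>

struct Vertex { /* position, normal, BSDF sample, throughput, ... */ };

// Stub visibility test: in a real tracer this is a shadow ray.
bool visible(const Vertex&, const Vertex&) { return true; }

int countConnections(const std::vector<Vertex>& cameraPath,
                     const std::vector<std::vector<Vertex>>& lightPaths) {
    int shadowRays = 0;
    // Every camera vertex is tentatively connected to every vertex of every
    // light subpath -- one shadow ray per pair, before any shading happens.
    for (const Vertex& c : cameraPath)
        for (const auto& lp : lightPaths)
            for (const Vertex& l : lp)
                if (visible(c, l)) ++shadowRays;
    return shadowRays;
}

int main() {
    std::vector<Vertex> cameraPath(4);                 // 4-bounce camera subpath
    std::vector<std::vector<Vertex>> lightPaths(
        20, std::vector<Vertex>(3));                   // 20 lights x 3 bounces each
    std::printf("connections per pixel sample: %d\n",
                countConnections(cameraPath, lightPaths));  // 4 * 20 * 3 = 240
}
```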

I'd prefer the renderer also do spectral rendering: not tracing a spectrum/distribution of light, like an RGB value represents, but rays with individual wavelengths. As different wavelengths have different refraction angles, you get the actual naturalistic light behaviour. Maxwell does this. But it's really a complement to, rather than a requirement for, path tracing.
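A minimal sketch of that per-wavelength refraction (dispersion). The Cauchy coefficients below are rough, BK7-glass-ballpark values picked purely for illustration; a real spectral renderer would also integrate against the CIE curves to get back to RGB:

```cpp
// Each ray carries one sampled wavelength, so each bends slightly differently.
#include <cstdio>
#include <cmath>
#include <random>

const double PI = 3.14159265358979323846;

// Cauchy's equation: index of refraction as a function of wavelength (micrometers).
double ior(double lambda_um) {
    const double A = 1.5046, B = 0.00420;   // rough BK7-like coefficients
    return A + B / (lambda_um * lambda_um);
}

// Refraction angle from Snell's law, air -> glass, for a given incidence angle.
double refractedAngle(double theta_in, double n) {
    return std::asin(std::sin(theta_in) / n);
}

int main() {
    std::mt19937 rng(7);
    std::uniform_real_distribution<double> lambda(0.38, 0.78);  // visible range, um
    const double theta_in = 45.0 * PI / 180.0;

    for (int i = 0; i < 5; ++i) {
        double l = lambda(rng);
        double n = ior(l);
        std::printf("lambda=%.0f nm  n=%.4f  refracted=%.3f deg\n",
                    l * 1000.0, n, refractedAngle(theta_in, n) * 180.0 / PI);
    }
}
```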

Edit: the interesting thing with bidirectional paths is that you can store/keep them for more than one frame, and when you change the light you don't need to trace again, only recalculate the contribution. Sure, it's tricky with dynamic geometry, but manageable.
Maxwell has this nice tool where you can adjust any of the light sources in the final rendered image a posteriori, because they store the paths in the file. This leads to seemingly weird stuff we sometimes do, where we render out all the night lights as well as the day lights or dynamic lights, and then "mix" whatever we want in the final image, like tracks in an audio composition.
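What that "mixing like audio tracks" amounts to is that light transport is linear in the emitters, so per-light buffers can be rescaled and summed after the render. A minimal sketch of the idea (nothing Maxwell-specific; the types are made up):

```cpp
// Relighting by mixing per-light buffers: because transport is linear in the
// emitters, a stored per-light image can be reweighted and summed after rendering.
#include <cstdio>
#include <vector>

struct Image {
    int w, h;
    std::vector<float> rgb;                              // w * h * 3
    Image(int w_, int h_) : w(w_), h(h_), rgb(w_ * h_ * 3, 0.0f) {}
};

// final = sum_i weight[i] * perLight[i]  -- the "mix" step, no re-render needed.
Image mixLights(const std::vector<Image>& perLight,
                const std::vector<float>& weight) {
    Image out(perLight[0].w, perLight[0].h);
    for (size_t i = 0; i < perLight.size(); ++i)
        for (size_t p = 0; p < out.rgb.size(); ++p)
            out.rgb[p] += weight[i] * perLight[i].rgb[p];
    return out;
}

int main() {
    // e.g. buffer 0 = daylight, buffer 1 = interior lights, rendered once.
    std::vector<Image> perLight = { Image(4, 4), Image(4, 4) };
    // "Night" mix: kill the daylight, boost the interior lights.
    Image night = mixLights(perLight, {0.0f, 2.5f});
    std::printf("mixed %dx%d image from %zu light buffers\n",
                night.w, night.h, perLight.size());
}
```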
True but I'm not going to label that as a requirement for PT since most VFX houses don't even use their renderers that way.
 
Well, Quake 2 RTX is just the most realistic game, lighting-wise, I've played to date. Along with Quake 1 RT now.
But it's not, lighting-wise, because it doesn't have physically based materials, which are needed for path tracing.
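In the simplest case, "physically based" at least means the BRDF conserves energy, otherwise the bounces blow up or die off unrealistically. A toy check with a Lambertian BRDF (nothing specific to Quake's materials; the albedo is just an example value):

```cpp
// Energy conservation for the simplest physically based BRDF (Lambertian):
// f_r = albedo / pi, and integrating f_r * cos(theta) over the hemisphere
// must not exceed 1. Toy Monte Carlo check of that integral.
#include <cstdio>
#include <random>

const double PI = 3.14159265358979323846;

int main() {
    std::mt19937 rng(1);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    const double albedo = 0.8;                 // fraction of light reflected
    const double f_r = albedo / PI;            // Lambertian BRDF value

    // Monte Carlo integral of f_r * cos(theta) over the hemisphere, sampling
    // directions uniformly (pdf = 1/(2*pi); cos(theta) is then uniform in [0,1]).
    const int N = 1000000;
    double sum = 0.0;
    for (int i = 0; i < N; ++i) {
        double cosTheta = u(rng);
        sum += f_r * cosTheta * (2.0 * PI);    // sample / pdf
    }
    std::printf("reflected fraction: %.4f (should converge to albedo = %.1f)\n",
                sum / N, albedo);
}
```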

You mentioned in the very first post of this thread that no game meets your criteria to be considered a full PT game. So which game do you consider the closest to meeting those criteria? 'Cos nothing comes to mind outside of Quake 1 and 2, and I haven't seen anything that remotely resembles a Maxwell or Indigo render in a videogame either.
For coming close to PT, I have to go with Dragon's Dogma 2 in PT mode. There is literally nothing about it that takes shortcuts anywhere in the lighting pipeline. The only thing it's missing is SSS. Everything else is there, and there are no hacks (that I can see) that switch from PT mode to rasterization mode like in the other games. Hair uses PT also. And every light source casts a shadow.
 
It just runs like a dog, and frame gen (which always felt horrendous in it) is a 100% requirement for pretty much every GPU.
This, 100%. It can't be emphasized enough that these GPUs are too slow to do RT without some kind of frame generation inserted into the frame sequence in order to get good FPS.

I seriously doubt that even the next gen of consoles (i.e. PS6, XSX2) will be as fast as a 4090 is today, so they will be underpowered yet again in three years.
 
Have there been any talks on how games handle shading in RT?

RT/PT puts a huge strain on shading all the hits; I would guess all sorts of caching and deferred methods could be quite usable in real-time RT.

I know Manuka (the renderer made by Weta) shades the non-directional parts of shaders to the vertices of a tessellated surface before tracing.
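A sketch of that "shade before hit" idea: bake the view-independent part of the shader to tessellated vertices up front, then only interpolate it at ray-hit time. The types and the checker pattern are made up for illustration; this is the general idea, not Manuka's actual code:

```cpp
#include <cstdio>
#include <vector>
#include <cmath>

struct Vec3 { float x, y, z; };

// Expensive, view-independent shading (texture lookups, procedurals, ...).
// Runs once per vertex at bake time instead of once per ray hit.
Vec3 bakeViewIndependent(const Vec3& p) {
    float checker =
        (std::fmod(std::floor(p.x) + std::floor(p.z), 2.0f) == 0.0f) ? 0.9f : 0.1f;
    return {checker, checker, checker};
}

struct BakedVertex { Vec3 position; Vec3 baked; };

// At hit time the baked part is just interpolated with the hit's barycentrics;
// only the view-dependent part of the shader would still need evaluating.
Vec3 shadeHit(const BakedVertex& a, const BakedVertex& b, const BakedVertex& c,
              float u, float v) {
    float w = 1.0f - u - v;
    return { w * a.baked.x + u * b.baked.x + v * c.baked.x,
             w * a.baked.y + u * b.baked.y + v * c.baked.y,
             w * a.baked.z + u * b.baked.z + v * c.baked.z };
}

int main() {
    std::vector<BakedVertex> verts = {
        {{0.f, 0.f, 0.f}, {}}, {{1.f, 0.f, 0.f}, {}}, {{0.f, 0.f, 1.f}, {}} };
    for (auto& v : verts) v.baked = bakeViewIndependent(v.position);   // bake pass

    Vec3 c = shadeHit(verts[0], verts[1], verts[2], 0.3f, 0.3f);       // hit pass
    std::printf("interpolated baked shading at hit: %.2f %.2f %.2f\n", c.x, c.y, c.z);
}
```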

Funnily enough, flat-colored polygons were used in the RT of the Avatar game.
 