Yes! Now I remember, the program they used to generate those renders for graphics conventions like SIGGRAPH was POV-Ray. I had the source files on a disc that was included with PC magazines in the late 90s. They showed the final image in the printed magazine, but the source files were on the disc. I didn't understand the code, but I loved to imagine how they generated that. Nowadays I see it more like what Folding@home does for medical research, but for 3D rendering. It took quite a few days to render that.

The nasty problem with ray tracing is that you need to deal with an exponentially growing number of rays. Setting aside unbiased rendering, the idea that you can find one path from the pixel to a light (this would be strictly 1 spp) somehow correctly and quickly, almost magically, and then have a correct result is just fantasy.
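To make that growth concrete: in classic Whitted-style ray tracing, every surface hit can spawn both a reflection ray and a refraction ray, so the ray count doubles with each bounce. A tiny counting sketch (hypothetical, just to show the blow-up):

```python
# Hypothetical counting sketch, not a renderer: in Whitted-style ray
# tracing, each hit spawns a reflection ray AND a refraction ray.
def rays_traced(depth):
    if depth == 0:
        return 1                      # just the one ray, no bounces left
    # this ray, plus the two child rays it spawns, traced recursively
    return 1 + 2 * rays_traced(depth - 1)

for d in range(6):
    print(d, rays_traced(d))          # grows as 2^(depth+1) - 1
```

Path tracing sidesteps this blow-up by following a single randomly chosen continuation per bounce, which is exactly why one path (1 spp) is so noisy and you need many samples per pixel to converge.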
Of course, there's a continuum from nothing at all, through super crap, all the way to physically correct. What is currently in use and being developed for real-time ray tracing is all stuff that is even more hardcore pseudo-perceptive approximation than upscaling. You get a plausible result, but it's not correct.
My personal opinion is that because of the inherent complexity of light transport, we will never get correct real-time raytracing, and real-time's position on that continuum will move towards the correct end in smaller and smaller increments. As an analogy (not the real formula): Monte Carlo noise only halves when you trace four times as many rays, so if we are maybe a factor of 2000 away from what a converging render needs, that works out to something like a 2000^2 = 4,000,000x deficit in tracing performance. We won't make the hardware millions of times faster anytime soon.
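The real relation behind that analogy is the Monte Carlo convergence law: the error of an n-sample estimate shrinks like 1/sqrt(n), so quadrupling the samples only halves the noise. A minimal sketch with a made-up "pixel" integrand (the function and sample counts are arbitrary choices for illustration):

```python
import random

def estimate(n, rng):
    # n-sample Monte Carlo estimate of a toy "pixel" integral:
    # the mean of u^2 for u uniform in [0,1), whose true value is 1/3
    return sum(rng.random() ** 2 for _ in range(n)) / n

def rmse(n, trials=400):
    rng = random.Random(42)           # fixed seed for reproducibility
    errs = [(estimate(n, rng) - 1 / 3) ** 2 for _ in range(trials)]
    return (sum(errs) / trials) ** 0.5

# 4x the samples gives roughly half the error (the 1/sqrt(n) law)
print(rmse(64), rmse(256))
```

This is why the last bit of quality is so expensive: each additional halving of the noise costs four times the rays of the previous one.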
The bottom line is, real-time raytracing is okay as an approximation, but it will always miss out on some effects that would eventually emerge in offline tracing. It's very coarse.
If you watch the progress of an unbiased raytracer, you will be amazed how subtle but important the accumulation beyond, say, 4 minutes is. You sit there and think it looks awesome, it's what you wanted/expected, and that maybe you could abort it now and call it a day. But you keep watching, and then these tiny shadows and highlights and indirect refractive effects appear, and you think: damn, this is awesome (and the 4-minute version was actually really bad). Then after a couple of hours you look again and it has become a photo; it's effectively real. And you cannot attribute this to any one particular thing, it's very holistic.
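A toy model of why those late-appearing details matter: some light paths (caustic-like ones, say) are rare but carry a lot of energy, so a short run barely samples them and the image quietly keeps shifting long after it "looks done". All numbers here are invented for illustration:

```python
import random
rng = random.Random(7)

# made-up numbers: 1 in 1000 paths is a rare caustic-like path that
# carries 500x the energy of an ordinary path
def sample_pixel():
    return 500.0 if rng.random() < 0.001 else 1.0

true_mean = 0.999 * 1.0 + 0.001 * 500.0    # = 1.499

running = 0.0
for i in range(1, 100_001):
    running += (sample_pixel() - running) / i   # progressive accumulation
    if i in (1_000, 10_000, 100_000):
        print(i, round(running, 3))   # creeps toward 1.499 as rare paths land
```

A third of this pixel's energy comes from 0.1% of the paths, so an early stop doesn't just look noisier, it is systematically missing light.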
It's not just that you leave the uncanny valley behind, you actually end up with an image you couldn't have known you wanted. The mind can only expect/imagine so much realism or correctness; these images surpass your expectation/imagination. I don't see real-time even leaving the uncanny valley.
Yes, I did professional arch-viz with Maxwell for a couple of years, about 20 years ago (not saying I'm a particularly good designer, I'm an engineer, but I managed to pay for food and rent). I've been interested in 3D since the end of the 80s. Went through Imagine, Real3D, Lightwave, Povray, Radiance, Cinema4D, Brazil, VRay, Maxwell, and many more. Went through standard raytracing, radiosity (remember Half-Life?), photon mapping, biased path tracing and so on. As an engineer I looked at what the algorithms did, and that actually helps a lot when you do lighting-artist work, or compose a shot in such a way that the flaws aren't that glaring.
Without diminishing the achievements in real-time raytracing: compared to that high bar, it's producing cartoonish stuff. The same way you get accustomed to the quality in new games and can't unsee the improvement, and once-cherished games suddenly look really weird, you can't unsee what unleashed raytracing can do. If you give Maxwell enough time, it makes you literal photos; it's that correct.
People who presented at shows most often used software with source-code access. So sometimes it was just Povray plus the extra code; since you only advance one specific part of the algorithm, you don't want to reimplement the entire infrastructure. Sometimes it was just a custom toy project of 4k lines. The algorithm isn't that complicated, it just takes time.
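As a taste of how small the core really is, here's a sketch of the path-tracing loop reduced to its skeleton: carry a throughput, attenuate it per bounce, and terminate when the path escapes to the light. The scene is abstracted away into two assumed constants (no geometry, no BRDF sampling), so the estimate can be checked against a closed-form geometric series:

```python
import random
random.seed(3)

ALBEDO = 0.7   # assumed surface reflectance, applied at every bounce
P_SKY  = 0.5   # assumed chance that a bounced ray escapes to the sky
L_SKY  = 1.0   # radiance of the uniform sky

def trace_path():
    throughput = 1.0
    while True:
        throughput *= ALBEDO              # the bounce attenuates the path
        if random.random() < P_SKY:       # path escapes and sees the light
            return throughput * L_SKY
        # otherwise it hit another surface and keeps bouncing

def render(spp):
    return sum(trace_path() for _ in range(spp)) / spp

# closed form: sum over k>=1 of ALBEDO^k * (1-P_SKY)^(k-1) * P_SKY * L_SKY
exact = ALBEDO * P_SKY * L_SKY / (1 - ALBEDO * (1 - P_SKY))
print(render(200_000), exact)
```

A real tracer replaces the two constants with ray-scene intersection and BRDF sampling, which is where the 4k lines (and the rendering hours) go; the estimator structure stays this simple.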
Thanks again for the thorough explanation. Reading that, I can understand why current RT is so mediocre overall.
Even at a very basic level, though, you can see its potential, and some games gain a lot of graphical fidelity by adding it, but in quite a few games the performance cost doesn't justify the fidelity gains imho.
The Resident Evil games come to mind, for instance. Elden Ring, Doom Eternal, and many others.