Real-Time Ray Tracing: Holy Grail or Fool's Errand? *Partial Reconstruction*

Kinda depends on how you look at it. Refraction is a pure wave effect, based on the change of the speed of light in a medium and Huygens' principle(*). Of course, you would have to be mad to implement it that way.

Now that I think about it - and after checking my office - that's not a good counter example, as visible refraction is really not that common in real life. But neither are chrome spheres. ;)

Yes, I'm splitting hairs. Sorry about that. I think it's safe to say that we agree. :)

(* If I remember my physics lectures correctly)

Nah, refraction isn't a pure wave effect. Interference and diffraction are.
 
I'm talking about the behaviour of the lighting, shadows and self-shadowing. It is exactly as it is in real life; you can't code that in. The best you can do is exactly what you see in today's games.
Too bad they didn't have real-world shadows there then. You know, the effects that global illumination and soft shadows try to accomplish?

The chip itself seems interesting though. I believe it's meant to trace Bézier patches instead of triangle soup. I'm sure bigger car companies are very interested in a system that can render their car designs in real-time without having to preprocess them too much. Proper headlight simulations are also quite important for them.
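For reference, evaluating a point on a bicubic Bézier patch is just de Casteljau's algorithm applied in both parameter directions; a patch tracer would then intersect rays with that surface directly (via Newton iteration or recursive subdivision) instead of against a pre-tessellated triangle soup. Here's a minimal Python sketch of the evaluation step, purely to illustrate the idea (I have no idea how the chip actually implements it):

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve given its control points at parameter t."""
    pts = list(points)
    while len(pts) > 1:
        # Repeatedly lerp between consecutive control points.
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

def bezier_patch_point(ctrl, u, v):
    """Evaluate a bicubic Bezier patch (a 4x4 grid of 3D control points) at (u, v):
    run de Casteljau along each row in u, then once more down the resulting column in v."""
    row_points = [de_casteljau(row, u) for row in ctrl]
    return de_casteljau(row_points, v)

# Sanity check: a flat 4x4 patch in the z = 0 plane, evaluated at the centre.
ctrl = [[(float(x), float(y), 0.0) for x in range(4)] for y in range(4)]
print(bezier_patch_point(ctrl, 0.5, 0.5))   # ~ (1.5, 1.5, 0.0)
```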
 
Nah, refraction isn't a pure wave effect. Interference and diffraction are.

What about refraction through a prism for example?
The 'rainbow' effect is a result of different wavelengths having different refraction angles, right?
But I agree that most raytracers don't bother to model light this way. They tend to model refraction the same way as reflection, which isn't correct.
This is generally a case where you use a photon mapping approach again. You can emit photons with various wavelengths, and use the wavelength in the calculation of the refraction angle.
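To make that concrete, here's a rough Python sketch: the index of refraction becomes a function of wavelength (Cauchy's approximation, with coefficients roughly in the ballpark of glass, picked purely for illustration), and Snell's law is applied per photon using that wavelength-dependent IOR:

```python
import math

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def cauchy_ior(wavelength_nm, A=1.458, B=3540.0):
    # Cauchy's approximation: n(lambda) = A + B / lambda^2, lambda in nm.
    # A and B are illustrative glass-like values, not from any particular renderer.
    return A + B / (wavelength_nm * wavelength_nm)

def refract(direction, normal, n1, n2):
    """Refract a unit direction through a surface with unit outward normal.
    Returns None on total internal reflection."""
    cos_i = -dot(direction, normal)
    eta = n1 / n2
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection
    return tuple(eta * d + (eta * cos_i - math.sqrt(k)) * n
                 for d, n in zip(direction, normal))

# A red and a violet photon hitting the same glass surface bend by different amounts:
incoming = (0.7071, -0.7071, 0.0)   # 45 degrees onto a flat surface
normal   = (0.0, 1.0, 0.0)
for wavelength in (700.0, 400.0):   # nm
    n = cauchy_ior(wavelength)
    print(wavelength, "nm -> IOR", round(n, 4), "->", refract(incoming, normal, 1.0, n))
```

Running it shows the 400 nm photon bending more than the 700 nm one, which is exactly the prism 'rainbow' effect.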
 
What about refraction through a prism for example?
The 'rainbow' effect is a result of different wavelengths having different refraction angles, right?

Refraction can be explained using the particle model for light if you give your particles somewhat funny properties.
 
FromTheArticle said:
It's impossible to have a ray that partially intersects an object.

Why?

Couldn't you have the ray check the angle of whatever surface it hits, and if it's close to being parallel to the ray, then keep going to the next intersection, then combine the 2 colors using a weight based on how close to parallel the first object was?

Couldn't you check the adjacent "pixels" of where the ray hits, like super sampling? If the "pixel" wasn't on the same object then cast a new ray and again average all the colors?

(pixels in quotes cause it's probably the wrong term to be using, but close enough to what I'm trying to say)
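Something along these lines is what I have in mind for the second idea. Rough Python sketch only: the trace(x, y) function and what it returns are made up here, a real tracer would hand back a colour plus some object/primitive ID for whatever the primary ray hit:

```python
import random

def adaptive_render(width, height, trace, extra_samples=8):
    """trace(x, y) is assumed to return (colour, hit_id) for a ray shot through
    pixel coordinates (x, y); colour is an (r, g, b) tuple, hit_id identifies
    whatever the ray hit (None for a miss)."""
    # First pass: one ray through the centre of every pixel, remembering the hit.
    base = [[trace(x + 0.5, y + 0.5) for x in range(width)] for y in range(height)]
    image = [[base[y][x][0] for x in range(width)] for y in range(height)]

    # Second pass: wherever a neighbouring pixel hit a different object,
    # fire a few extra jittered rays through the pixel and average them in.
    for y in range(height):
        for x in range(width):
            hit_id = base[y][x][1]
            neighbours = ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
            on_edge = any(0 <= nx < width and 0 <= ny < height
                          and base[ny][nx][1] != hit_id
                          for nx, ny in neighbours)
            if on_edge:
                samples = [image[y][x]]
                for _ in range(extra_samples):
                    colour, _ = trace(x + random.random(), y + random.random())
                    samples.append(colour)
                image[y][x] = tuple(sum(channel) / len(samples)
                                    for channel in zip(*samples))
    return image
```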
 
Seeing how this thread's link always pops up on the front page and most of the posts were made 4 years ago, I thought I'd post a few interesting bits of information from the offline CG world.

Raytracing has pretty much become a standard requirement for nearly all movie VFX and CG animation production. This was made possible because of the following milestones:

- Seriously faster computer hardware.
We can now put four quad-core CPUs into a single unit-height rack box and install a 64 bit OS with as much memory as we want, which makes render nodes a LOT faster compared to 2007. It's also quite cheap and allows you to utilize every render software license to the max (they usually charge per install and not per CPU).

- Introduction of physically correct, energy-preserving shaders and a fully linear lighting pipeline.
This has been pioneered by both Mental Images and ILM and has already been adopted in game rendering engines(!) by nearly everyone. I don't want to get into details, here's ILM's presentation: http://renderwonk.com/publications/s2010-shading-course/snow/sigg2010_physhadcourse_ILM.pdf
The point is that this approach offers less control for shading and lighting, but everything behaves more realistically. Lighting and shading work is significantly simplified and iterations are far, far faster. Granted, it doesn't require raytracing - but complements it very well.
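To illustrate what "energy preserving plus fully linear" means in practice, here's a hedged little sketch (not ILM's or mental ray's actual code, just the principle): textures get decoded from sRGB to linear before any lighting, Lambert diffuse is divided by pi, the Blinn-Phong lobe gets the usual (n+8)/(8*pi) normalization so a tighter highlight gets correspondingly brighter instead of just losing energy, and the result is encoded back to sRGB only at the very end:

```python
import math

def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def shade(albedo_srgb, n_dot_l, n_dot_h, spec_exponent, light_intensity):
    """Energy-conserving diffuse + normalized Blinn-Phong, all computed in linear space."""
    albedo = [srgb_to_linear(c) for c in albedo_srgb]        # decode the texture
    diffuse = [a / math.pi for a in albedo]                  # Lambert, energy conserving
    norm = (spec_exponent + 8.0) / (8.0 * math.pi)           # Blinn-Phong normalization
    specular = norm * max(n_dot_h, 0.0) ** spec_exponent     # white specular for simplicity
    lit = [light_intensity * max(n_dot_l, 0.0) * (d + specular) for d in diffuse]
    return [linear_to_srgb(min(c, 1.0)) for c in lit]        # encode only for display

print(shade((0.5, 0.5, 0.5), n_dot_l=0.8, n_dot_h=0.95, spec_exponent=64, light_intensity=3.0))
```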

- Significant software advances in existing renderers.
Pixar has added point cloud rendering to PRMan. See, the problem used to be that RenderMan works with displaced micropolygons and it's incredibly expensive to calculate secondary rays against these. Now they convert the scene to coarse point cloud data, which is an approximation of course, but it's perfectly fine for stuff like glossy reflection, light bounces for global illumination, and translucency (subsurface scattering) shaders. It does add an insane amount of overhead though: combined with pre-rendering shadow maps, artists sometimes have to wait days to precalculate all their data before they can start the actual rendering.
Still, the actual rendering is a lot faster compared to evaluating secondary rays against polygons and using traditional acceleration structures like octrees.
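A very rough sketch of the point-based idea (nothing like PRMan's actual implementation, just the principle): the scene gets baked into surfels — position, normal, area, outgoing radiosity — and a shading point gathers indirect light by looping over those surfels, weighting each one by its projected solid angle and the two cosine terms, instead of firing secondary rays against micropolygons:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def gather_indirect(point, normal, surfels):
    """surfels: list of dicts with 'pos', 'normal', 'area', 'radiosity' (r, g, b).
    Returns an approximate indirect irradiance at `point`. No occlusion test is
    done here, which a real implementation would of course need."""
    result = [0.0, 0.0, 0.0]
    for s in surfels:
        to_surfel = sub(s['pos'], point)
        dist2 = dot(to_surfel, to_surfel)
        if dist2 < 1e-8:
            continue
        dist = math.sqrt(dist2)
        direction = tuple(c / dist for c in to_surfel)
        cos_receiver = dot(normal, direction)                          # receiver facing the surfel?
        cos_emitter = dot(s['normal'], tuple(-c for c in direction))   # surfel facing the receiver?
        if cos_receiver <= 0.0 or cos_emitter <= 0.0:
            continue
        # Differential form factor of a small patch of area A at distance r.
        weight = cos_receiver * cos_emitter * s['area'] / (math.pi * dist2)
        for i in range(3):
            result[i] += weight * s['radiosity'][i]
    return tuple(result)
```

A real implementation obviously also needs visibility between the point and each surfel, plus a hierarchy over the surfels so you don't loop over millions of them per shading point.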

- Introduction of raytracing focused renderers.
Maxwell has been mentioned before, although it hasn't really picked up (there are still some very cool uses for it: http://www.cgsociety.org/index.php/CGSFeatures/CGSFeatureSpecial/pirate), but Vray has been gaining ground and rendered most of Tron: Legacy.
The new big player, however, is Arnold, which has been in development for a very long time and has some pretty interesting stuff going on. It has a lot of optimizations and has been used on Sony's animated features and on their VFX work for Alice in Wonderland and 2012.
The main point here is that it's always cheaper to buy more processing power for the render farm than it is to hire additional artists to do all the scene management: precalculating shadow maps and point clouds, tweaking shaders and lights, and such. With a full raytracer, you set a few dials, push the render button, and it all turns out looking pretty good on its own. I don't have any links at hand right now but might update the post later.
Incidentally, we've been beta testers for this for more than a year ;)


All in all, nearly every VFX production and animated feature today is using raytracing in some form or another, and there's a huge amount of R&D invested in making it even more efficient. I'm fully convinced that all of this is going to be beneficial to realtime rendering as well, once it eventually starts introducing raytracing in the coming years.

Nevertheless, there's still a very important barrier: as soon as you start using secondary rays, the best and worst case scenarios start to diverge significantly. As rays start to bounce all around, scene complexity becomes a more complicated issue than just the number of polygons and the size of textures, so rendering times start to fluctuate all over the place as well. It's not such a big problem offline, but dropping to 5fps in a game just because the camera gets into the wrong place is a serious problem.
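Just to put a number on that divergence, some back-of-the-envelope arithmetic (the branching factors are invented purely for illustration): if shaders spawn b secondary rays per hit up to depth d, a full ray tree costs about (b^(d+1) - 1)/(b - 1) rays per pixel, so the same scene can swing from a handful of rays to thousands depending on what the camera happens to be looking at:

```python
def rays_per_pixel(branching, depth):
    """Total rays in a full ray tree: 1 + b + b^2 + ... + b^depth."""
    if branching == 1:
        return depth + 1
    return (branching ** (depth + 1) - 1) // (branching - 1)

# Invented scenarios -- same geometry, wildly different per-pixel cost:
scenarios = (("primary rays only",                     1, 0),
             ("perfect mirror chain, depth 2",         1, 2),
             ("glossy, 4 secondary rays/hit, depth 3", 4, 3),
             ("glossy, 8 secondary rays/hit, depth 4", 8, 4))
for name, b, d in scenarios:
    print(f"{name}: ~{rays_per_pixel(b, d)} rays per pixel")
```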
 