Real-Time Ray Tracing: Holy Grail or Fools’ Errand? *Partial Reconstruction*

Natural lighting isn't what's usually desired. Look at the average live-action movie crew. The big lighting trucks contain a few lamps and a LOT of equipment (and people) who are there to PREVENT natural light. Believable, yes. Natural, no.
Interactive games are fundamentally different because the viewpoint tends to be arbitrary.
 
Natural lighting isn't what's usually desired. Look at the average live-action movie crew. The big lighting trucks contain a few lamps and a LOT of equipment (and people) who are there to PREVENT natural light. Believable, yes. Natural, no.

Exactly. I read a nice paper from Dreamworks a few years ago about the lighting used in Shrek, and how they like to control the lighting accurately with simple point lights or spotlights, rather than going for GI/'physically accurate' solutions.
 
Really limited ray tracing on an extremely inflexible architecture ... what a throwback.

14,400 FP multipliers in a 150-million-gate ASIC is impressive, but the amount of memory available on chip is incredibly small ... I just don't see how you are ever going to implement, say, GI algorithms on that thing.
 
800TFLOPS Multicore IC for Realtime Ray Tracing

800 TFLOPS and we have real-time Ray Tracing. :D

I can see such a technology making its way down to a $20,000 arcade game box in 3-5 years' time.

Imagine a Virtua Fighter 6 with the following effects in realtime @ 1080p 60FPS -

http://www.youtube.com/watch?v=5-Vq...9FB15750&playnext=1&playnext_from=PL&index=16

You'll never be able to approximate that with rasterization. Not now, not in 50 years.

That's why the move to ray-tracing is a must.
 
Why not? Unless I'm missing something, the average Pixar movie looks way better than this.

Oh, I'm sure you can get things to look as good or better.

I'm talking about the behaviour of the lighting, shadows and self-shadowing. It is exactly as it is in real life; you can't code that in. The best you can do is exactly what you see in today's games.

Manmade approximations cannot rival reality.
 
Oh, I'm sure you can get things to look as good or better.

I'm talking about the behaviour of the lighting, shadows and self-shadowing. It is exactly as it is in real life; you can't code that in. The best you can do is exactly what you see in today's games.

Manmade approximations cannot rival reality.

So you're saying the lighting, shadows and self-shadowing in Pixar movies are not as good as this? Then we disagree.
I think we also disagree on raytracing being reality. In my opinion, raytracing is a manmade approximation as well. Just a less efficient one.
 
I think we also disagree on raytracing being reality. In my opinion, raytracing is a manmade approximation as well. Just a less efficient one.

That's a pretty important point. Raytracing is not a full physical simulation. Think about how you would render a prism in a raytracer. Or atmospheric scattering. Or fog. In each case you have two options: Throw in a fantastic number of rays, or fake it.
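
To make the "fake it" option concrete for the fog case: instead of tracing scattering events, you can just attenuate whatever the ray hits with an exponential, Beer-Lambert style falloff and blend towards a constant fog colour. A minimal sketch (fogDensity and fogColor are made-up illustrative parameters, nothing more):

#include <cmath>

struct Color { float r, g, b; };

// "Fake" fog: blend the shaded colour towards a constant fog colour using
// Beer-Lambert style exponential attenuation over the ray distance, instead
// of simulating individual scattering events in the medium.
Color applyFog(const Color& shaded, float rayDistance,
               float fogDensity, const Color& fogColor)
{
    // Fraction of light that survives the trip through the fog.
    float transmittance = std::exp(-fogDensity * rayDistance);

    return Color{
        shaded.r * transmittance + fogColor.r * (1.0f - transmittance),
        shaded.g * transmittance + fogColor.g * (1.0f - transmittance),
        shaded.b * transmittance + fogColor.b * (1.0f - transmittance)
    };
}

One exp() and a lerp per ray, versus thousands of scattered rays for the "honest" version.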

Just so that we're clear, full physical simulation - the famous "Once we do that, all problems in graphics are solved!" point - is a pipe dream. It's really up there with every kid's grand game design: "Let's simulate a whole world!".

Faking things is good. It is the Right Thing(tm) to do. Why would I simulate the path of a photon bouncing through the sun for a thousand years when I can simply approximate it as a white ball of light (or even a yellow one and fake the scattering as well)?

/rant
 
That's a pretty important point. Raytracing is not a full physical simulation. Think about how you would render a prism in a raytracer. Or atmospheric scattering. Or fog. In each case you have two options: Throw in a fantastic number of rays, or fake it.

Yea... A few years ago, a fellow student and I wrote a photon-mapping raytracer based on HW Jensen's work...
Photon-mapping seems to be THE way to handle that sort of effect in a raytracer... Thing is, it's not raytracing in itself. Instead of the classic Whitted raytracing method of tracing rays of light from the eye back to the source, you are tracing photons from the source to ... whatever your criteria are for storing their information in the photonmaps.

I wouldn't really call photonmapping raytracing in the first place. It's related, but not quite the same. Besides, you only use the traced photons to create photonmaps. These photonmaps don't necessarily have to be evaluated by a Whitted raytracer; you could just as easily evaluate them from within a rasterizer. After all, in essence a photonmap is just a 2D or 3D texture. That is, the photons are stored in texture space, you just don't evaluate it as a bitmap. It's more like a procedural texture.

But there are already two obvious approximations going on there:
1) You generally won't base the number of photons you trace on the actual number of photons that would theoretically be emitted from your lightsource. You just take a much smaller subset, and give each photon a correspondingly higher amount of energy to compensate. So they're not really photons in a physical sense.
2) During filtering there's another approximation going on. You will be estimating the photon density and luminance in an area, based on the number of photons you've simulated.
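
To make approximation 2 a bit more concrete, here's roughly what the density estimate looks like in code. This is a simplified sketch, not HW Jensen's exact formulation: the kd-tree query that gathers the nearest photons is assumed and not shown, and the BRDF weighting of each photon is left out.

#include <vector>

struct Photon {
    float position[3];
    // Scaled power: (total light power) / (photons emitted), so each stored
    // "photon" stands in for a huge number of physical ones (approximation 1).
    float power[3];
};

// Approximation 2: given the k nearest photons around a shading point and the
// radius of the sphere that encloses them, sum their power and divide by the
// disc area they cover to estimate the incoming flux density.
void estimateRadiance(const std::vector<Photon>& nearestPhotons,
                      float gatherRadius, float outRadiance[3])
{
    const float kPi = 3.14159265f;
    float sum[3] = {0.0f, 0.0f, 0.0f};

    for (const Photon& p : nearestPhotons) {
        sum[0] += p.power[0];
        sum[1] += p.power[1];
        sum[2] += p.power[2];
    }

    float area = kPi * gatherRadius * gatherRadius;
    for (int i = 0; i < 3; ++i)
        outRadiance[i] = sum[i] / area;
}

The quality/noise trade-off lives almost entirely in how many photons you emit and how many you gather per estimate.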

And this solution is far more efficient and delivers far better quality than solutions based on conventional eye-ray tracing with Monte Carlo-based path tracing and all that.

Oh well...
 
It seems to me such a massive misinvestment of capital and expertise. Why not an 800 TFLOP accelerator for, say, progressive photonmapping ... that would be so much more interesting.
 
And this solution is far more efficient and delivers far better quality than solutions based on conventional eye-ray tracing with Monte Carlo-based path tracing and all that.
A tidbit from the progressive photon mapping paper to accentuate this:
The reference image rendered using path tracing still exhibits noise even after using 51,500 samples per pixel and 91 hours of rendering time.
 
Scali and MfA, yeah progressive photonmapping is the way to go!

Anyone want to toss in a guess at current GPUs' effective TFLOPs when you add in both the FLOPs from the shader ALUs and the dedicated FF hardware (TEX, ROP, raster)?

We are comparing 88 TFLOPs per chip of raytracing (9 chips = 792 TFLOPs) to ~1-2 TFLOPs of shader ALU plus FF TFLOPs per chip on top-end GPUs?

It sure does give a clue as to how many more ALU TFLOPs will be needed to software-raytrace at 1080p in real time on current GPUs (including LRB), however...
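
Purely as a back-of-envelope, with every constant below being a guess picked for illustration rather than a measured number:

#include <cstdio>

// Back-of-envelope ray budget for 1080p at 60 fps. All constants are assumed
// purely for illustration; real workloads vary wildly.
int main()
{
    const double pixelsPerSecond = 1920.0 * 1080.0 * 60.0; // ~1.24e8
    const double raysPerPixel    = 10.0;  // primary + shadow + secondary (guess)
    const double testsPerRay     = 50.0;  // BVH node + primitive tests (guess)
    const double flopsPerTest    = 50.0;  // cost of one intersection test (guess)

    const double flops = pixelsPerSecond * raysPerPixel * testsPerRay * flopsPerTest;
    std::printf("~%.1f TFLOPs just for traversal and intersection\n", flops / 1e12);
    return 0;
}

With those guesses you land around 3 TFLOPs for traversal and intersection alone, before shading, so the gap to ~1-2 TFLOPs of general-purpose ALU is real.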
 
Most of their FLOPs are going to go into the intersection testing though. Bezier clipping is only efficient with a very loose definition of the word ...
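
For a feel for what a single intersection test costs in the simplest possible case (a triangle, not the Bezier patches this chip actually clips against), here's the standard Moller-Trumbore ray/triangle test; even this is a few dozen multiplies and adds plus a divide, and a ray typically runs many of them per frame:

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Moller-Trumbore ray/triangle intersection: returns true on a hit and writes
// the ray parameter t of the hit point.
bool intersectTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& tOut)
{
    const float eps = 1e-6f;
    Vec3 e1 = sub(v1, v0);
    Vec3 e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return false;   // ray parallel to the triangle plane

    float invDet = 1.0f / det;
    Vec3 t = sub(orig, v0);
    float u = dot(t, p) * invDet;
    if (u < 0.0f || u > 1.0f) return false;   // outside the first barycentric bound

    Vec3 q = cross(t, e1);
    float v = dot(dir, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) return false;

    tOut = dot(e2, q) * invDet;
    return tOut > eps;                        // hit must be in front of the ray origin
}

Ray/Bezier-patch intersection via clipping is considerably more work per test than this, which is presumably where most of those multipliers go.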
 
A few years ago, a fellow student and I wrote a photon-mapping raytracer

Yup, photon mapping surely improves on classical raytracing, but it still fakes a lot of things and takes many shortcuts. Which is good.
Not sure anyone ever tried to mix in wave behavior, which is the cause of most interesting optical effects. :)
 
Interesting perhaps, common not ... interference, just like, say, spectral dispersion, isn't a big priority for rendering.
 
Interesting perhaps, common not.
Kinda depends on how you look at it. Refraction is a pure wave effect, based on the change of the speed of light in a medium and Huygens' principle(*). Of course, you would have to be mad to implement it that way.
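
The non-mad way, of course, is plain geometric optics: apply Snell's law to the ray direction at the interface and spawn a new ray. A minimal sketch in vector form (eta is the ratio of refractive indices, incident and normal assumed normalized, with the normal facing the incoming ray):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b)    { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Snell's law in vector form: eta = n_incident / n_transmitted.
// Returns false on total internal reflection (no refracted ray exists).
bool refract(Vec3 incident, Vec3 normal, float eta, Vec3& refracted)
{
    float cosI  = -dot(incident, normal);
    float sinT2 = eta * eta * (1.0f - cosI * cosI);
    if (sinT2 > 1.0f) return false;           // total internal reflection

    float cosT = std::sqrt(1.0f - sinT2);
    refracted  = add(scale(incident, eta), scale(normal, eta * cosI - cosT));
    return true;
}

No wavefronts anywhere in sight; the wave behaviour is baked into the one number eta.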

Now that I think about it - and after checking my office - that's not a good counterexample, as visible refraction is really not that common in real life. But neither are chrome spheres. ;)

Yes, I'm splitting hairs. Sorry about that. I think it's safe to say that we agree. :)

(* If I remember my physics lectures correctly)
 