Realtime Ray-tracing Processors.

Real-time raytracing processors: the next big thing?


Yes, Metropolis light transport gives much higher quality than more naive path tracing implementations. It's still slow compared to photon mapping, though: it requires LOOOOTS of ray intersections.
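For the curious, here's a toy sketch of the Metropolis-Hastings acceptance rule that MLT applies to whole light paths. Everything here is a stand-in for illustration: the "path" is just a number x and f() a placeholder for a path's image contribution; real MLT mutates complete light paths and handles non-symmetric mutations.

```cpp
// Toy 1-D Metropolis sampler illustrating the acceptance rule MLT uses.
#include <cmath>
#include <iostream>
#include <random>

double f(double x) { return std::exp(-x * x); }  // stand-in for a path's image contribution

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    std::normal_distribution<double> perturb(0.0, 0.5);

    double x = 0.0;
    for (int i = 0; i < 10; ++i) {
        double proposed = x + perturb(rng);   // symmetric mutation of the current sample
        double a = f(proposed) / f(x);        // acceptance ratio min(1, f'/f)
        if (u(rng) < a) x = proposed;         // accept, or keep the current sample
        std::cout << x << '\n';               // samples end up distributed ~ f
    }
}
```

The payoff is that samples concentrate where the contribution is large, which is why MLT spends its rays where they matter most.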

The all-frequency stuff actually isn't constrained by computation so much as by memory consumption: a couple of gigabytes even for fairly simple scenes, if I remember my quick glance through the paper correctly. Still a nice technique though.
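As a back-of-envelope illustration of that memory problem (the numbers below are assumptions for illustration, not figures from the paper):

```cpp
// Rough memory estimate for per-vertex precomputed transfer data of the kind
// all-frequency lighting techniques store. All counts here are assumed.
#include <iostream>

int main() {
    long long vertices      = 100000;  // a fairly simple scene (assumed)
    long long coeffsKept    = 1000;    // wavelet coefficients kept per vertex (assumed)
    long long bytesPerCoeff = 4;       // one float each

    long long bytes = vertices * coeffsKept * bytesPerCoeff;
    std::cout << bytes / (1024.0 * 1024.0 * 1024.0) << " GiB\n";  // ~0.37 GiB
    // Without aggressive coefficient truncation, the raw transport matrices
    // (vertices x environment-map texels) run to many gigabytes.
}
```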
 
That's the point, isn't it? With a clever algorithm you can improve things. The current methods are just fast, not efficient. And that goes for most rendering implementations.
 
Well, yes, when rendering complex worlds with path tracing becomes feasible, Metropolis Light Transport will rock. We're a long way off from having that kind of computational power at our disposal though.

If I find an incredible rendering algorithm that doesn't require any sort of spatial sorting for the scene and has O(log(n)) asymptotic time complexity but takes 10^1000 years to run when rendering a single triangle, it won't be very useful.
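To put numbers on that, here's a quick toy comparison; the per-step costs are made-up values chosen only to show how a huge constant factor buries a nice asymptotic bound.

```cpp
// An O(log n) method with an astronomical per-step cost loses to a dumb
// O(n) scan until n is impossibly large. Both cost constants are assumed.
#include <cmath>
#include <iostream>

int main() {
    double cheapPerItem = 1.0;    // cost of one brute-force intersection test (assumed)
    double hugePerStep  = 1e12;   // cost of one step of the "clever" method (assumed)

    for (double n : {1e3, 1e9, 1e15, 1e18}) {
        double bruteForce = cheapPerItem * n;
        double clever     = hugePerStep * std::log2(n);
        std::cout << "n=" << n << "  brute=" << bruteForce
                  << "  clever=" << clever << '\n';
    }
    // The clever method only wins once n > 1e12 * log2(n), i.e. around
    // n ~ 5e13 here; crank the constant up further and it never wins in practice.
}
```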
 
GameCat said:
Well, yes, when rendering complex worlds with path tracing becomes feasible, Metropolis Light Transport will rock. We're a long way off from having that kind of computational power at our disposal though.

If I find an incredible rendering algorithm that doesn't require any sort of spatial sorting for the scene and has O(log(n)) asymptotic time complexity but takes 10^1000 years to run when rendering a single triangle, it won't be very useful.

True. But the brute force method uses the simplest model there is: just render everything as fast as possible, as it comes along. The largest improvements have been in culling invisible objects as early as possible and in the ability to get nicer lighting effects by using shaders.

You could say that the brute force model is the opposite of using a model, as it tries to do away with anything that isn't essential or doesn't improve speed. While that has its own merit, it means that imposing any model on such hardware will reduce the speed.

When you design a chip that does use a specific model, you gain the best results when you stick to it. And yes, any model is a series of clever hacks to remove unneeded actions and improve the speed of the needed ones.
 
I'm curious: what are these problems that can't be solved?

Pure ray tracing isn't particularly good at ambient diffuse lighting for example...
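A sketch of why, under the usual Monte Carlo approach: every diffuse hit needs secondary rays spread over the hemisphere, and each of those spawns more. The routine below assumes the surface normal is +Z in local space; the frame construction and the tracer around it are omitted.

```cpp
// Cosine-weighted hemisphere sampling, the building block for diffuse
// interreflection in a ray tracer. Local frame (normal = +Z) is assumed.
#include <cmath>
#include <random>

const double kPi = 3.14159265358979323846;

struct Vec3 { double x, y, z; };

Vec3 cosineSampleHemisphere(std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double r   = std::sqrt(u(rng));      // sqrt of uniform -> cosine weighting
    double phi = 2.0 * kPi * u(rng);
    return { r * std::cos(phi),
             r * std::sin(phi),
             std::sqrt(1.0 - r * r) };   // z = cos(theta)
}
// With, say, 64 such rays per hit and a few bounces, one primary ray explodes
// into 64^depth secondary rays; that's the cost a rasterizer skips entirely
// by faking the ambient term.
```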

Thanks... was the 'Luxo Jr.' demo fully raytraced, unlike Pixar's movies?

No...

I take it you mean 'all' other effects can be achieved by 'cheaper' alternatives of the same quality, except shadows? What about GI, reflections, refractions, etc.?

Of course! What do you think films have been doing for the past few years? GI doesn't get a whole lotta use in film (it probably gets more use in commercials than in features).

This is a very simple-looking image of Luxo. As I mentioned earlier, two of the biggest factors, good lighting and animation, can bring an image to life without the need for 50-layer textures and millions of polygons... I'm still amazed by this nearly 20-year-old video of Luxo.

Well, look at the surfaces; of course there's no need for a ton of texture layers. But not all maps are strictly for visible textures either... There are shadow maps (used in Luxo, for example), displacement maps, occlusion maps, maps storing BRDF data (assuming you don't want to compute it procedurally)... And actually, Luxo probably *is* constructed of "millions" of micropolygons...
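For reference, a minimal sketch of the shadow-map test mentioned above: render depth from the light's point of view, then compare each shaded point's light-space depth against it. The matrix layout, map format, and bias value below are assumptions for illustration.

```cpp
// Basic shadow-map lookup: project a world-space point into the light's clip
// space and compare depths against the stored depth map.
#include <algorithm>
#include <array>
#include <vector>

using Mat4 = std::array<double, 16>;  // row-major light view-projection (assumed)

// Transform a world-space point into the light's clip space.
std::array<double, 4> lightClip(const Mat4& m, double x, double y, double z) {
    std::array<double, 4> r{};
    for (int i = 0; i < 4; ++i)
        r[i] = m[i*4+0]*x + m[i*4+1]*y + m[i*4+2]*z + m[i*4+3];
    return r;
}

// In shadow if the light's depth map recorded something nearer at this texel.
bool inShadow(const std::vector<double>& shadowMap, int size, const Mat4& lightVP,
              double wx, double wy, double wz, double bias = 0.002) {
    auto p = lightClip(lightVP, wx, wy, wz);
    double u = (p[0] / p[3]) * 0.5 + 0.5;   // perspective divide, [-1,1] -> [0,1]
    double v = (p[1] / p[3]) * 0.5 + 0.5;
    double d = (p[2] / p[3]) * 0.5 + 0.5;
    int xi = std::clamp(int(u * size), 0, size - 1);
    int yi = std::clamp(int(v * size), 0, size - 1);
    return shadowMap[yi * size + xi] + bias < d;
}
```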

Actually, I find it amusing that folks consider a scan-line z-buffer system a "brute-force" method and raytracing "elegant"...

It may be, as DiGuru mentions, that RT isn't the best solution yet because the right tools for artists don't exist?

What would make the *right* tool? I mean, environment maps, for example: while not as physically accurate as raytraced reflections, they give the artist a lot more explicit control over the behavior of the reflection with something as simple as an image editor...
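Something like the following is all the shading side needs; the cubemap contents are just images, which is exactly where the artist's explicit control comes in. This is face selection only; filtering and the actual texture fetch are omitted.

```cpp
// Environment-mapped reflection: reflect the view vector about the normal,
// then pick the cubemap face the reflected direction points into.
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 reflect(const Vec3& v, const Vec3& n) {   // n assumed unit length
    double d = 2.0 * (v.x*n.x + v.y*n.y + v.z*n.z);
    return { v.x - d*n.x, v.y - d*n.y, v.z - d*n.z };
}

int cubemapFace(const Vec3& r) {
    double ax = std::abs(r.x), ay = std::abs(r.y), az = std::abs(r.z);
    if (ax >= ay && ax >= az) return r.x > 0 ? 0 : 1;  // +X / -X
    if (ay >= az)             return r.y > 0 ? 2 : 3;  // +Y / -Y
    return r.z > 0 ? 4 : 5;                            // +Z / -Z
}
```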

Besides, offline renderers today are pretty much hybrids that leverage several rendering technologies and give the artist the tools necessary to use what the artist feels is appropriate to use to render a scene.

I should also point out that these stills aren't that big of a deal. It's relatively trivial these days to do nice convincing stills. Where renderers start falling down is with motion and complex cases (e.g. hair, motion blur, etc.)...
 
archie4oz said:
I should also point out that these stills aren't that big of a deal. It's relatively trivial these days to do nice convincing stills. Where renderers start falling down is with motion and complex cases (e.g. hair, motion blur, etc.)...

Very true. Most of the acceleration techniques that make these nice stills fast to render (-> undersampling...) fall apart into tiny little pieces when used in an animated scene: flickerflickerflickernoise.

I say it again, fast displacement would add a whole lot more detail and eye candy to realtime 3D than any fancy raytracing...
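For concreteness, the core of displacement mapping is tiny; the toy heightfield and scale below are assumptions, and a real-time version would tessellate first so there's geometry to push around (ideally in hardware, which is the "fast" part).

```cpp
// Displacement mapping in its simplest form: push each vertex along its
// (unit) normal by a height sampled from a map.
#include <cmath>
#include <vector>

struct Vertex { double px, py, pz; double nx, ny, nz; double u, v; };

// Toy procedural heightfield standing in for a displacement-map lookup.
double height(double u, double v) {
    return std::sin(12.0 * u) * std::cos(12.0 * v);
}

void displace(std::vector<Vertex>& mesh, double scale) {
    for (Vertex& vert : mesh) {
        double h = scale * height(vert.u, vert.v);
        vert.px += h * vert.nx;
        vert.py += h * vert.ny;
        vert.pz += h * vert.nz;
    }
}
```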
 