Real-Time Ray Tracing: Holy Grail or Fool's Errand? *Partial Reconstruction*

The obvious area is in films – much of the Pixar film work is ray traced, as are most of the final high-quality visual effects done in Hollywood.

Prior to Cars, Pixar used virtually no raytracing at all, and neither did most others in Hollywood (many of them use Pixar's RenderMan software). And even in Cars, the majority is still rasterized; the raytracing accounts for only about 20-25% of the scenes.
So most of Pixar's film work is not raytraced.
I can't stand this fallacy that has been used to promote raytracing for ages.
Dreamworks has published a nice paper on lighting in Shrek arguing that they don't even WANT to use realistic light models, because they are hard for an artist to control. Relatively simple light sources are much easier to control than a completely accurate radiosity solution and all that. You simply place light sources where you want the light, just like in regular movies.
 
Oh, so they finally admitted they had no silicon, I see.

My opinion ... if they had a world-class RenderMan engine they could sell immediately and then accelerate, they'd have a product; an accelerator for software which doesn't exist will be hard to sell.
 
Oh, so they finally admitted they had no silicon, I see.

My opinion ... if they had a world-class RenderMan engine they could sell immediately and then accelerate, they'd have a product; an accelerator for software which doesn't exist will be hard to sell.

Eh? What?
 
Oh, so they finally admitted they had no silicon, I see.

Does an FPGA not count?

My opinion ... if they had a world-class RenderMan engine they could sell immediately and then accelerate, they'd have a product; an accelerator for software which doesn't exist will be hard to sell.

PC Perspective said:
Caustic is already working with companies like Cinema4D, Autodesk, Blender Render and others to begin implementing support for the CausticGL software into future versions of these rendering applications. If they can achieve a near-global adoption of their software then selling the acceleration hardware to design companies would be an easy sell. Caustic did purchase a company called “Splutterfish” that created the Brazil Rendering System in order to better understand how to work with software ISVs to integrate CausticRT and develop tools to utilize the hardware and software capabilities.

So I guess they're working on that. It sounds like they aren't trying to make inroads into interactive graphics until a few years down the road. The closest they expect to get to gaming in the meantime is in building "pre-baked" assets with their ray tracer.
 
Does an FPGA not count?
No, it does not ... hell, it wasn't for nothing that they initially tried to hide the fact that they were using FPGAs. For the cost of a couple of those huge FPGAs you could build a rendering cluster of COTS hardware which would blow it away. That will be a significant problem for them even after they do have their own silicon (due to low volumes). Maybe they can sell it on power consumption ...
 
It's not a prototype, they are selling it ... and they shouldn't have allowed their PR staff to try to hide that it was an FPGA.
 
This article gives some insight in how these cards are used:
http://www.extremetech.com/article2/0,2845,2345640,00.asp

Apparently the chips are used to accelerate some portions of the raytracing process, but the actual shading is done on the CPU (and in the future can be done on the GPU).
It seems that their hardware mainly reorders rays so that they can be processed efficiently in parallel by conventional CPUs and GPUs. They use the word 'scheduler'.
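For anyone wondering what that 'scheduler' buys you, here's a minimal sketch of the idea (my own illustration, not Caustic's actual pipeline; the Ray struct and its fields are invented): bin incoherent secondary rays by material, so each batch runs the same shader and stays coherent on a SIMD CPU or GPU.

[code]
#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical ray record; field names are invented for illustration.
struct Ray {
    float origin[3];
    float dir[3];
    std::uint32_t materialId;  // shader the hit point will need
};

// Group incoherent rays by material so each bucket can be shaded as one
// tight, SIMD-friendly batch on the host CPU/GPU.
std::unordered_map<std::uint32_t, std::vector<Ray>>
scheduleRays(const std::vector<Ray>& incoherent) {
    std::unordered_map<std::uint32_t, std::vector<Ray>> buckets;
    for (const Ray& r : incoherent)
        buckets[r.materialId].push_back(r);
    return buckets;
}

// The host then shades one coherent bucket at a time, e.g.:
//   for (auto& [mat, batch] : scheduleRays(rays))
//       shadeBatch(mat, batch);  // hypothetical shading entry point
[/code]

Grouping like this is what would let the actual shading on the CPU/GPU stay efficient even when the rays themselves bounce incoherently around the scene.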
 
This article gives some insight in how these cards are used:
http://www.extremetech.com/article2/0,2845,2345640,00.asp

Apparently the chips are used to accelerate some portions of the raytracing process, but the actual shading is done on the CPU (and in the future can be done on the GPU).
It seems that their hardware mainly reorders rays so that they can be processed efficiently in parallel by conventional CPUs and GPUs. They use the word 'scheduler'.
Thanks for posting this article.

Finally, one of the first steps toward real-time RT... I'm hoping that some day we'll see mature hardware (or a hybrid with rasterization...) doing true RT (60 to 200 rays/pixel).

(What level did the OpenRT/SaarCOR guys reach with their RT engine? Does it still exist today?)
 
A similar article on Arstechnica:
http://arstechnica.com/hardware/new...s-launches-real-time-ray-tracing-platform.ars

Also, the business plan is apparently to start with these FPGA-based cards (Caustic One), because they're cheap to build... then build an ASIC card (Caustic Two) if there is enough demand.
Because the chip doesn't really rely on tons of bandwidth and tons of parallel SIMD processing power (those things are basically delegated to the CPU or GPU performing the actual shading and other tasks), it doesn't have to be a very big, cutting-edge chip. They want to build the ASIC on 90 nm, running at about 350 MHz, which should be more than 10 times as fast as the current FPGA.
 
A curious business case in that article about game level design: Secret Level made a GPU-based level lighter. Big win. Otavio Good showed it at XNA Game Fest a couple of years ago: http://www.secretlevel.com/downloads/Natural Outdoor Lighting in Games.zip

What's not clear here is why a raytracer would be a better choice... unless it was used slowly for AO or something like that. And you'd still have to write the software and integrate it into your pipeline.

Or just buy Beast.

Natural lighting isn't what's usually desired. Look at the average live-action movie crew: the big lighting trucks contain a few lamps and a LOT of equipment (and people) there to PREVENT natural light. Believable, yes. Natural, no.
 
What's not clear here is why a raytracer would be a better choice...

Some effects are just easier with a tracer.

Reflections and refractions are pretty trivial in a tracer and are much, much harder to get right in a triangle engine.

Cubemap reflections, for instance, don't typically have an accurate point of view for all camera positions, as the cubemap is rendered from one POV (see the sketch below).

And who knows, it might even affect gameplay if done accurately in a tracer.
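To make that concrete, here's a minimal Whitted-style sketch (my own illustration, not any particular engine's code; trace() and the variable names are hypothetical) of why a tracer gets reflections right for free:

[code]
struct Vec3 { float x, y, z; };

Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3  operator*(float s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Perfect mirror reflection of incoming direction d about unit normal n.
Vec3 reflect(Vec3 d, Vec3 n) { return d - 2.0f * dot(d, n) * n; }

// In a tracer the reflection is just recursion from the real hit point,
// so it is correct for every camera position:
//   Vec3 bounceDir = reflect(viewDir, normal);
//   color = color + trace(hitPoint, bounceDir, depth + 1);  // hypothetical
// A cubemap, by contrast, is rendered once from a single point of view,
// so its reflections are only exact for cameras at (or near) that point.
[/code]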
 