Real-Time Ray Tracing : Holy Grail or Fools’ Errand? *Partial Reconstruction*

I can also create PowerPoint slides with gradient shading and cute little arrows.
I sure hope Intel doesn't buy me out as well.

Where's the details section on this site?
 
It depends...

[Attached image: 43544.strip.gif]
 
FYI:
http://forums.cgsociety.org/showpost.php?p=5735847&postcount=44

Thank you for all your posts resulting from the launch of Caustic Graphics on Monday. There's been great interest with some really good feedback, and naturally lots of questions. We're just sorry that it's taken a few days to chime in and answer. We hope the following will provide clarity on some of the questions/comments that have resonated so far:

1. Splutterfish is now a part of Caustic Graphics, Inc., so we took the liberty of using some of the images rendered in Brazil from the Splutterfish gallery, crediting the artists. In retrospect, to avoid any confusion, we should have been clearer that these images were not rendered using the CausticRT platform. We'll be updating our site shortly to avoid any further confusion.

2. The first generation of CausticRT is targeted toward developers. When we launch our next generation in early 2010, our goal is to have several commercial rendering packages available that support our technology. This will be the first opportunity for individual artists to leverage our platform, depending on what rendering software you use.

3. Several people have suggested that we post a movie of our live demo, so that they can better understand the performance gains from the first generation of CausticRT. Rest assured this work is underway, and it won't be long before we post a movie on our site.

Hope this helps, and thanks again for all your feedback. We'll be sure to continue to monitor this forum and respond to any questions or comments as appropriate.

The Caustic Team
 
Some interesting industry folks behind the project. The question is: if a small start-up can do as they say, why haven't Intel, NV, ATI, etc. researched this area?
 
Maybe the big IHVs have, which is likely, and unlike Caustic, their conclusions in that area so far aren't as rosy as what Caustic is trying to portray.
 
http://www.caustic.com/dev_intro.php

[Attached image: platform_chart.gif (Caustic platform chart)]


The interesting bit is that this accelerator enhances the capabilities of AMD, Intel, and Nvidia's existing hardware. I think they have a winner with this one! I just hope Intel doesn't buy them and once again thwart competition.

By the looks of it, they are resetting/rejiggling/reorganizing the data in their card to enable existing CPUs/GPUs to perform the computation. That may not have required a 576mm2 die for their ASIC. (*wink*)
 
I must be missing something Deep in your statement that goes beyond my comprehension. Can you elaborate?
 
Look at the pic from Caustic.

"Enables CPU/GPU to shade at rasterization like efficiency"
"...high cache coherence made possible by Caustic ...."

Obviously they can't compete with anyone in making raw compute monsters, so to me this seems the most likely route. Also look at the chip size of the thing: it's small by GPU standards and the bus interface is only 4x (apparently). No fans, so definitely not much logic there...
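
Reading those two quotes together, one plausible interpretation (purely a sketch of my own; the names and structure below are hypothetical, not anything Caustic has disclosed) is that the card hands back hits already grouped so the host CPU/GPU can shade one material at a time, the way a rasterizer shades one state at a time:

```cpp
// Hypothetical sketch: group ray hits by material before shading so each
// shader runs over a contiguous, cache-coherent batch of hits, similar to
// how a rasterizer shades one material/state at a time.
#include <algorithm>
#include <cstdint>
#include <vector>

struct Hit {
    std::uint32_t rayId;      // which ray produced this hit
    std::uint32_t materialId; // which shader to run for this hit
    float t;                  // distance along the ray
};

// Sort hits so all hits sharing a material are adjacent in memory; shading
// then touches each material's textures/constants once per batch instead of
// hopping to a random material for every ray.
void shadeCoherently(std::vector<Hit>& hits) {
    std::sort(hits.begin(), hits.end(),
              [](const Hit& a, const Hit& b) { return a.materialId < b.materialId; });

    for (std::size_t start = 0; start < hits.size();) {
        std::size_t end = start;
        while (end < hits.size() && hits[end].materialId == hits[start].materialId)
            ++end;
        // runShader(hits[start].materialId, &hits[start], end - start); // hypothetical call
        start = end;
    }
}
```

Sorting by material keeps each shader's textures and constants hot in the cache for a whole batch, which is roughly what "shade at rasterization-like efficiency" would have to mean.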
 
One would need to shade to compute the ray bounce direction, so if this chip only does ray-triangle intersections, it would be doing a full-screen set of rays to the next intersection per batch, with each batch incurring a host<->device round trip, and then another host<->device round trip if the shading is done on the GPU. It seems like one would even want to wait until after knowing the next intersection to decide what shadow rays to shoot as well (think lots of small lights all over). It doesn't seem all that exciting to have ray intersection and shading on different devices... however, perhaps I'm all wrong here, and if it really is exciting, it would be nice to see that from their website.
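
To make that concern concrete, here is a rough host-side sketch of the loop being described, with intersection and shading on separate devices. Every name here is made up for illustration, and the device transfers are only hinted at in comments:

```cpp
// Hypothetical host loop: a dedicated card only returns ray/triangle
// intersections, and a separate CPU/GPU does the shading. Every bounce
// then costs two host<->device round trips for the whole batch of rays.
#include <vector>

struct Ray { float origin[3]; float dir[3]; };
struct Hit { int triangle; float t, u, v; };

// Stand-ins for the two devices; real versions would upload/download buffers.
std::vector<Hit> intersectOnAccelerator(const std::vector<Ray>& rays) {
    return std::vector<Hit>(rays.size());   // pretend every ray found a hit
}
std::vector<Ray> shadeOnGpu(const std::vector<Ray>& rays, const std::vector<Hit>& hits) {
    (void)rays; (void)hits;
    return {};                              // pretend all paths terminated
}

void traceFrame(std::vector<Ray> rays, int maxBounces) {
    for (int bounce = 0; bounce < maxBounces && !rays.empty(); ++bounce) {
        // Round trip 1: ship the batch of rays to the intersection card.
        std::vector<Hit> hits = intersectOnAccelerator(rays);
        // Round trip 2: ship the hits to the shading device, which only now
        // knows where each ray landed and can decide bounce and shadow rays.
        rays = shadeOnGpu(rays, hits);
    }
}
```

Two full-batch transfers per bounce is exactly the round-trip cost the post above is worried about.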
 
Or perhaps it could just be batching together incoherent rays into coherent batches for which shading efficiency is higher...

But yeah, it leaves a lot of questions open, à la LucidLogix Hydra.
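
If that guess is right, the simplest version of "batching incoherent rays into coherent batches" is just a bucket sort on ray directions. The sketch below is illustrative only, not anything Caustic has published: it bins rays by direction octant so each bin walks the acceleration structure in a similar order.

```cpp
// Illustrative only: bucket rays by the sign pattern (octant) of their
// direction so rays that will traverse the acceleration structure in a
// similar order end up in the same batch.
#include <array>
#include <vector>

struct Ray { float ox, oy, oz; float dx, dy, dz; };

std::array<std::vector<Ray>, 8> binRaysByOctant(const std::vector<Ray>& rays) {
    std::array<std::vector<Ray>, 8> bins;
    for (const Ray& r : rays) {
        int octant = (r.dx < 0.0f ? 1 : 0)
                   | (r.dy < 0.0f ? 2 : 0)
                   | (r.dz < 0.0f ? 4 : 0);
        bins[octant].push_back(r);
    }
    return bins; // each bin can now be traced as one (more) coherent batch
}
```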
 
How do you know they haven't researched the area?

That was what I was trying to imply. All three are invested in graphics and have looked into the general issue, so it would be surprising if a start-up constructed a cheap solution that escaped the collective grasp of the competition.
 
This is essentially what Lucid claims to have done.
 
Yeah, well, if you put 1000 average composers together to make music, they can't do what Mozart did alone, so if they are lucky enough to have a couple of Mozarts in their ranks, good for them ;) I'm not really a big fan of all the real-time ray tracing stuff myself, or rather I don't see it as the next big thing, but it's possible they have come up with some ingenious ideas, so I think it's a bit hasty to dismiss them based purely on the size and capacity of the company. Sure, they are fighting against the odds, but innovations and inventions are what drive this industry, and they aren't tied to company size.
 
I am surprised that information about NVIRT hasn't surfaced on these forums yet.

NVIRT stands for NVIDIA Interactive Ray Tracing API and will be released this spring.
It is not a rendering API, but it can be used for rendering, and also for collision detection, for example.
As one would have guessed, it runs on CUDA.

Instead of me writing up what it is all about, I suggest you read the following links, as that gives me less of a chance to mess things up:
http://www.realtimerendering.com/blog/nvirt-a-mini-blog-and-creating-games/
http://www.realtimerendering.com/blog/nvirt-slide/

The second link points to a PDF with slides from NVIDIA.
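
The collision-detection angle makes sense because the core query of any ray tracing API is just "where does this ray first hit the scene". As a plain illustration (this is ordinary Möller-Trumbore in C++, not NVIRT code, just the kind of query such an API accelerates on CUDA), the same test serves both rendering and collision checks:

```cpp
// Generic Moller-Trumbore ray/triangle intersection in plain C++. This is
// NOT NVIRT code; it only illustrates why one ray query works for rendering
// (find the visible surface) and for collision detection (is the hit closer
// than the distance an object wants to move).
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true and writes the hit distance t if the ray (orig, dir) crosses
// the triangle (v0, v1, v2).
bool rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t) {
    const float eps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return false;   // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > eps;                            // hit in front of the origin
}
```

For a collision check you cast a ray along an object's motion and ask whether the returned t is shorter than the distance it wants to move; for rendering you keep the smallest t over all triangles.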
 