Ray Tracing Versus Rasterization, And Why Billions Of Dollars Is At Stake

The way it's pictured, it should be comparable to ATI's "ring bus", kind of. At least on the protocol level. I'd say this even looks technically doable, though I think it'll be much too expensive for some time yet.
 
The only real fundamental problem with the technology AFAICS is the use of very thin wafers ... those things are fragile enough to begin with.
 
Ray tracing has to appear at some point; it's a mathematical inevitability and it's sort of the logical next step.

OTOH, when that happens could conceivably be 10+ years away. Work needs to be done on both the algorithm side and the hardware side; there is no obvious way to make it efficient for mass-market realtime apps. By contrast, it was sort of obvious to experts that traditional rasterizers were doable in principle, long before 3dfx and so forth.
 
Ray tracing has to appear at some point; it's a mathematical inevitability and it's sort of the logical next step.

It will appear, but I do not expect it to become the single, does-it-all paradigm.

Let's look at the movie CG industry, where going by the numbers everything should be ray-traced and other algorithms useless. In the end the future might be compositing together different approaches, each attacking the problem from a certain angle: in a REYES renderer (Pixar's RenderMan is based on the REYES idea, started by those same guys once they were inside Lucasfilm, IIRC) you can fit in a ray-tracing scheme to calculate, for example, lighting through radiosity or photon mapping (which is ray-tracing based).
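For what it's worth, here's a minimal C++-style sketch of what that kind of compositing can look like at a single shading point: the visible surface comes from the REYES/rasterization side, while the shadow and indirect-lighting terms are delegated to a ray-tracing-based estimator. The traceShadowRay and photonMapEstimate calls are hypothetical stand-ins, not any real renderer's API.

Code:
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Hypothetical hooks into a ray-tracing subsystem; placeholder stubs for illustration only.
Vec3 photonMapEstimate(const Vec3&, const Vec3&) { return {0.05f, 0.05f, 0.05f}; }  // indirect light
bool traceShadowRay(const Vec3&, const Vec3&)    { return false; }                  // occlusion test

// Shade one REYES-style shading point: the visible surface was found by dicing/rasterization,
// but the shadow and indirect terms are computed by tracing rays.
Vec3 shadePoint(const Vec3& p, const Vec3& n,
                const Vec3& lightPos, const Vec3& lightColor, const Vec3& albedo)
{
    Vec3 d  = { lightPos.x - p.x, lightPos.y - p.y, lightPos.z - p.z };
    float r = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    Vec3 wi = { d.x/r, d.y/r, d.z/r };

    float ndotl   = std::max(0.0f, n.x*wi.x + n.y*wi.y + n.z*wi.z);
    float visible = traceShadowRay(p, wi) ? 0.0f : 1.0f;   // ray-traced shadow term
    Vec3 gi       = photonMapEstimate(p, n);                // ray-tracing-based GI term

    return { albedo.x * (visible * ndotl * lightColor.x + gi.x),
             albedo.y * (visible * ndotl * lightColor.y + gi.y),
             albedo.z * (visible * ndotl * lightColor.z + gi.z) };
}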
 
Ray tracing has to appear at some point; it's a mathematical inevitability and it's sort of the logical next step.

OTOH, when that happens could conceivably be 10+ years away. Work needs to be done on both the algorithm side and the hardware side; there is no obvious way to make it efficient for mass-market realtime apps. By contrast, it was sort of obvious to experts that traditional rasterizers were doable in principle, long before 3dfx and so forth.

The problem is that raytracing will always be computationally more expensive than "faking it" as most rasterisers do today. Who's going to use raytracing when you can take that same computational power and do several times more work? Or get something that looks just as good (or in some cases even more artistically pleasing) and have more power left over to do other things (physics, AI, etc.)?

As long as content creators can always get more for their computational money using traditional rasterisers, I think that's the way the industry will tend to go, as far as real-time entertainment goes.

Other fields may see raytracing as useful, but I think they may stay as minority/academic/business usage rather than entertainment. For instance, I used to work on software that used raytracing for calculating microcell signal propagation for urban mobile phone networks. This was much, much slower than the "trick" techniques we used for larger-scale calculations, and was pretty much unusable even on big Sun workstations if you wanted to do more than a few blocks of mapping data. And that was an application that did not have to be anywhere near pretty to look at.

Of course, when you move to offline rendering, raytracing techniques can become more useful (as we've seen in special effects and movies), but rendering that isn't realtime isn't much use for interactive computer games.
 
Yes, MiddleMan strikes again! :)

John Carmack to Reverend via email said:
I believe that there is very interesting potential in ray casting through a collapsed sparse octree representation, possibly punting the final calculation to a general fragment program. A tracing solution like this cries out for some simple hardware, rather than a general purpose processor.

John Carmack
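
Purely as a sketch of the general idea (not Carmack's actual scheme), ray casting through an octree can be written as a recursive descent over the cells the ray enters; the simple hardware or fragment program he alludes to would do something equivalent with an explicit stack instead of recursion. All of the types and names below are made up for illustration.

Code:
#include <array>
#include <algorithm>
#include <memory>
#include <optional>

struct Vec3 { float x, y, z; };

struct OctreeNode {
    Vec3 lo, hi;                                       // axis-aligned bounds of this cell
    bool solid = false;                                // leaf flag: does this cell contain geometry?
    std::array<std::unique_ptr<OctreeNode>, 8> child;  // null for empty children
};

// Slab test: entry distance along the ray if the box is hit, otherwise nothing.
// 'inv' holds the per-axis reciprocal of the ray direction (components assumed nonzero here).
std::optional<float> rayBox(const Vec3& o, const Vec3& inv, const Vec3& lo, const Vec3& hi)
{
    float t0 = 0.0f, t1 = 1e30f;
    const float os[3] = { o.x, o.y, o.z },   is[3] = { inv.x, inv.y, inv.z };
    const float ls[3] = { lo.x, lo.y, lo.z }, hs[3] = { hi.x, hi.y, hi.z };
    for (int a = 0; a < 3; ++a) {
        float ta = (ls[a] - os[a]) * is[a];
        float tb = (hs[a] - os[a]) * is[a];
        if (ta > tb) std::swap(ta, tb);
        t0 = std::max(t0, ta);
        t1 = std::min(t1, tb);
        if (t0 > t1) return std::nullopt;
    }
    return t0;
}

// Recursive descent: nearest entry distance into a solid leaf, if any. A real tracer
// would visit children strictly front-to-back and stop at the first hit; this sketch
// just takes the minimum over all intersected children.
std::optional<float> castRay(const OctreeNode& n, const Vec3& o, const Vec3& inv)
{
    auto t = rayBox(o, inv, n.lo, n.hi);
    if (!t) return std::nullopt;
    if (n.solid) return t;

    std::optional<float> best;
    for (const auto& c : n.child)
        if (c)
            if (auto tc = castRay(*c, o, inv))
                if (!best || *tc < *best) best = tc;
    return best;
}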
 
Proponents of ray tracing often miss one critical fact. They say that raytracers are O(log n), while rasterizers are O(n). True, but only if the raytracer has a "decent" acceleration structure, like an octree, kd-tree or whatever. The problem is, these are all above O(n) to construct... And in games, you generally don't want any restriction on how dynamic your geometry can be.
 
True, but only if the raytracer has a "decent" acceleration structure, like an octree, kd-tree or whatever. The problem is, these are all above O(n) to construct... And in games, you generally don't want any restriction on how dynamic your geometry can be.
Sure, but not O(n) to update. kd-trees are often liked because the ray searches and neighbour searches are fast, but they suck when it comes to updating them for dynamic geometry. Octrees aren't so bad in this regard.
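
A toy C++ sketch of that difference, with hypothetical names (not any particular engine's API): re-filing one moved object in an octree costs a descent of O(tree depth), independent of scene size, whereas a kd-tree tuned for fast ray queries typically has no cheap incremental update and gets rebuilt at O(n log n).

Code:
#include <cstdint>
#include <unordered_map>
#include <vector>

struct AABB { float lo[3], hi[3]; };

struct Octree {
    // objectId -> node the object currently lives in (tree layout elided).
    std::unordered_map<uint32_t, uint32_t> location;

    uint32_t findEnclosingNode(const AABB&) { return 0; }  // placeholder: O(depth) descent from root
    void     unlink(uint32_t) {}                            // placeholder: O(1) removal from old node
    void     link(uint32_t, uint32_t) {}                    // placeholder: O(1) insertion into new node

    // Per-frame update for one moving object: O(depth), independent of scene size n.
    void moveObject(uint32_t objectId, const AABB& newBounds)
    {
        unlink(objectId);
        uint32_t node = findEnclosingNode(newBounds);
        link(objectId, node);
        location[objectId] = node;
    }
};

struct KdTree {
    // A kd-tree tuned for fast ray queries usually has no cheap incremental update;
    // the common answer to "the geometry moved" is a full O(n log n) rebuild.
    void rebuild(const std::vector<AABB>& /*allPrimitives*/) {}
};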

The biggest thing that comes up, though, is that you lose the ability to do immediate-mode rendering, as raytracing requires the whole scene to be available. Well, you can technically raycast only to sample, the way Renderman does, and get your immediate-mode rendering that way.

The problem is that raytracing will always be computationally more expensive than "faking it" as most rasterisers do today. Who's going to use raytracing when you can take that same computational power and do several times more work?
I think the question of computational power for raytracing is a bit overblown, because people naturally associate the word "raytracing" with an exhaustive solution, which it doesn't really have to be. The difference between simply raycasting to sample data and running each sample through regular old fragment shaders, versus going through a complete Monte Carlo simulation, is pretty enormous.

The raw computational power necessary to get 60 fps at 1080p with 4xAA on a raycast scene (i.e. no recursion, no shadow testing, just going through everything the same way you would with a rasterizer) is significantly less than what current GPUs can do. The main weakness is memory architectures, and those are so inexcusably pitiful anyway that it's a weakness no matter what type of rendering we're talking about. The end result would be only as expressive as any other GPU, other than the fact that non-affine cameras are possible, you get per-pixel perspective projection, and you don't have as many issues with sub-pixel geometry... but it's more of a stepping-stone architecture than a goal.

Now the moment you start getting into depths of recursion, Monte Carlo sampling, etc., that's where computational power demands explode.
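
Rough back-of-the-envelope numbers for that claim, using the 1080p / 4xAA / 60 fps figures above; the branching factor and recursion depth below are made-up illustrative values, not measurements.

Code:
#include <cmath>
#include <cstdio>

int main()
{
    const double pixels    = 1920.0 * 1080.0;
    const double aaSamples = 4.0;
    const double fps       = 60.0;

    // Raycast-only: one primary ray per sample, no shadow or secondary rays.
    const double primaryPerSec = pixels * aaSamples * fps;              // ~498 million rays/s
    std::printf("primary-only: %.0f Mrays/s\n", primaryPerSec / 1e6);

    // Recursive tracing: assume each hit spawns 'branching' new rays (shadow,
    // reflection, refraction...) down to 'depth' bounces -- a geometric series.
    const double branching = 4.0;   // assumed, not measured
    const double depth     = 3.0;   // assumed, not measured
    const double raysPerSample = (std::pow(branching, depth + 1.0) - 1.0) / (branching - 1.0);
    std::printf("recursive:    %.0f Mrays/s (~%.0fx more)\n",
                primaryPerSec * raysPerSample / 1e6, raysPerSample);
    return 0;
}

With those (arbitrary) assumptions the recursive case needs roughly 85x the rays of the raycast-only case, and Monte Carlo sampling multiplies that again by the number of samples per bounce.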
 
Proponents of ray tracing often miss one critical fact. They say that raytracers are O(log n), while rasterizers are O(n). True, but only if the raytracer has a "decent" acceleration structure, like an octree, kd-tree or whatever. The problem is, these are all above O(n) to construct... And in games, you generally don't want any restriction on how dynamic your geometry can be.
Did you see my earlier post?

http://www.beyond3d.com/forum/showpost.php?p=824857&postcount=14

Jawed
 
Given the fact that the two best offline renderers out there try to rasterize as much as possible (mental ray even supports OpenGL for the first ray, while PRMan always uses a rasterizer for the first ray), I don't see a point in implementing a 100% pure raytracer in hardware. If we suppose that both know what they're doing, they would probably have switched to an all-raytracing implementation if it had given them any advantage (performance-wise or otherwise). But if even those players optimize by rasterizing where possible (first ray, shadow maps), I highly doubt that rasterizers in graphics cards are going anywhere soon.

I tend to believe, though, that we will see a "trace" call appear sooner or later in the shading languages ... after all, hardware is going through the same evolutionary steps the offline renderers already did.
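
Something like this, perhaps: a C++ mock-up of what a "trace" call exposed to a fragment shader might look like, with the first hit still coming from the rasteriser and only the reflection falling back to ray tracing. The trace signature and everything around it are hypothetical, not a real shading-language API.

Code:
#include <optional>

struct Vec3 { float x, y, z; };
struct Hit  { Vec3 position, normal, color; };

// Placeholder stub standing in for the runtime: fire a ray into the scene from a shader.
std::optional<Hit> trace(const Vec3& /*origin*/, const Vec3& /*direction*/) { return std::nullopt; }

// "Fragment shader" for a surface whose first hit came from the rasteriser;
// only the mirror reflection falls back to ray tracing.
Vec3 shadeFragment(const Vec3& position, const Vec3& normal,
                   const Vec3& viewDir, const Vec3& baseColor)
{
    // r = reflect(viewDir, normal)
    float d = viewDir.x*normal.x + viewDir.y*normal.y + viewDir.z*normal.z;
    Vec3 r  = { viewDir.x - 2*d*normal.x, viewDir.y - 2*d*normal.y, viewDir.z - 2*d*normal.z };

    Vec3 reflected = { 0, 0, 0 };
    if (auto hit = trace(position, r))        // the hypothetical "trace" call
        reflected = hit->color;

    return { 0.8f*baseColor.x + 0.2f*reflected.x,
             0.8f*baseColor.y + 0.2f*reflected.y,
             0.8f*baseColor.z + 0.2f*reflected.z };
}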
 
Now the moment you start getting into depths of recursion, Monte Carlo sampling, etc., that's where computational power demands explode.

But that's what most people consider to be true raytracing. The shortcuts you mention earlier in your post are simply other ways of faking raytracing-style results, without actually tracing all the rays that are needed to generate a scene.

In that respect, your "fast raytracing" is just another one of the trick techniques that people are (and will be) using to avoid having to actually calculate every ray, because it's computationally cheaper to cheat than to fully raytrace.

The cost of computation is always going to be critical where realtime games are concerned, because the workload to create gameworlds will always push the finite capacity of graphics cards and CPUs to the maximum.
 
Re the bus: wireless means receivers and transmitters, or transceivers, and so on, all of which add their own latency. Not to mention the EMI. What about a light bus: optical fibre? Each device could respond to a range of frequencies or a single frequency, huge bandwidth is available, and no EMI at all.
 
Fibre is not an option; having to make all the connections with a physical bridge is the problem in the first place ... you could use face-to-face flip-chip bonding, I guess, but you would need to be able to implement surface-emitting laser diodes (or at least something close to a laser, so you can get decent bandwidth) on silicon first.
 
Fibre is not an option; having to make all the connections with a physical bridge is the problem in the first place ... you could use face-to-face flip-chip bonding, I guess, but you would need to be able to implement surface-emitting laser diodes (or at least something close to a laser, so you can get decent bandwidth) on silicon first.

Like Intel's new laser-FSB tech?
http://www.intel.com/pressroom/archive/releases/20060918corp.htm
 