Real time raytracing to go mainstream?

Hannibal at Ars Technica has an article (http://arstechnica.com/news.ars/post/20060805-7430.html) reporting on recent work towards massive acceleration of ray tracing algorithms. He goes so far as to suggest that next year's quad-core desktop processors will be sufficiently powerful to deliver ray traced scenes at useful speeds.

How desirable is ray-tracing compared to "traditional" rasterization, from the programmers' perspective? The idea of perfect global scene illumination certainly sounds nice from a user's POV...
 
Even with raytracing, you still need radiosity to do soft shadows and coloured shadow bleeding effects.
I don't see GI viable until the next decade.
 
Maybe I'm nuts and my memory sucks, but haven't we been hearing that "within a year or two, some raytracing techniques will be fast enough for real time use" for the past five years or thereabouts?
 
Even with raytracing, you still need radiosity to do soft shadows and coloured shadow bleeding effects.
Soft shadows can be done with raytracing just fine if you take multiple shadow samples per light. As long as the light has size, this makes sense. Costly, yes, but there's nothing stopping you.
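Roughly, the idea looks like this in C++ (a minimal sketch; occluded() here is just a stand-in for whatever ray/scene intersection query a real tracer would use):

```cpp
// Minimal sketch: soft shadows by taking several shadow samples per light.
// occluded() is a placeholder for a real ray/scene intersection query.
#include <cstdlib>

struct Vec3 { float x, y, z; };

Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Stub: a real tracer would shoot a shadow ray from p towards q here.
bool occluded(Vec3 /*p*/, Vec3 /*q*/) { return false; }

// Fraction of the (parallelogram-shaped) light visible from the shading
// point: 0 = fully in shadow, 1 = fully lit, in between = soft penumbra.
float lightVisibility(Vec3 shadingPoint, Vec3 lightCenter,
                      Vec3 lightU, Vec3 lightV, int samples)
{
    int unblocked = 0;
    for (int i = 0; i < samples; ++i) {
        // Jitter a sample point across the light's surface.
        float u = std::rand() / float(RAND_MAX) - 0.5f;
        float v = std::rand() / float(RAND_MAX) - 0.5f;
        Vec3 lightSample = lightCenter + lightU * u + lightV * v;
        if (!occluded(shadingPoint, lightSample))
            ++unblocked;
    }
    return unblocked / float(samples);
}
```

More samples per light means softer, less noisy penumbras at a proportional cost, which is exactly the trade-off described above.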

I don't see GI viable until the next decade.
The raw power necessary to do it in the "correct" fashion is way, way off. But you can stitch together various cheap tricks just as easily with raytracing as you can with rasterization.

Main thing I can say is that fundamentally, everything we try to achieve with pixel shading techniques is an attempt to get closer to what can be borne out of raytracing. In the algorithmic sense, it's as generic a render scheme as you can have, because it's the equivalent of an exhaustive search. Certainly the fact that geometry is in the inner loop and pixels are in the outer loop means that things like per-pixel geometry displacement are quite easy and aren't so much a matter of tricking the device or creating any extra data. It also means that implicit and algebraically computable surfaces can be faster than the status quo high-polygon models. LODing to get data visible will be less of a problem, too, because you're not actually rendering any meshes -- you're sampling them.
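A bare-bones C++ sketch of that loop structure, just to illustrate the "pixels outside, geometry inside" point (a single hard-coded sphere stands in for anything you can intersect a ray with):

```cpp
// Pixels in the outer loop, geometry in the inner loop.
#include <cmath>
#include <cstdio>
#include <vector>

struct Ray    { float ox, oy, oz, dx, dy, dz; };
struct Sphere { float cx, cy, cz, r; };

// Analytic ray/sphere test; returns hit distance, or a negative value on miss.
float intersect(const Ray& ray, const Sphere& s)
{
    float lx = s.cx - ray.ox, ly = s.cy - ray.oy, lz = s.cz - ray.oz;
    float b  = lx * ray.dx + ly * ray.dy + lz * ray.dz;
    float c  = lx * lx + ly * ly + lz * lz - s.r * s.r;
    float d  = b * b - c;                  // discriminant of the quadratic
    return d < 0.0f ? -1.0f : b - std::sqrt(d);
}

int main()
{
    const int width = 32, height = 32;
    std::vector<Sphere> scene = { {0.0f, 0.0f, 5.0f, 1.0f} };

    for (int y = 0; y < height; ++y) {         // outer loop: pixels
        for (int x = 0; x < width; ++x) {
            float px = (x + 0.5f) / width  * 2.0f - 1.0f;
            float py = (y + 0.5f) / height * 2.0f - 1.0f;
            float len = std::sqrt(px * px + py * py + 1.0f);
            Ray ray = {0, 0, 0, px / len, py / len, 1.0f / len};

            float nearest = 1e30f;
            for (const Sphere& s : scene) {    // inner loop: geometry
                float t = intersect(ray, s);
                if (t > 0.0f && t < nearest) nearest = t;
            }
            std::putchar(nearest < 1e30f ? '#' : '.');
        }
        std::putchar('\n');
    }
}
```

Note the sphere is intersected analytically, which is the point about implicit surfaces: there's no tessellated mesh anywhere, the surface is just sampled wherever a ray asks about it.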

Maybe I'm nuts and my memory sucks, but haven't we been hearing that "within a year or two, some raytracing techniques will be fast enough for real time use" for the past five years or thereabouts?
Academics have said it for years, and if anybody had actually put some money into raytracing hardware research, we'd have had it 5 years ago. It's another thing entirely to expect that nVidia/ATI will be happy to let some new paradigm take over the market they've been pouring millions of dollars into for a decade.
 
ShootMyMonkey said:
Academics have said it for years, and if anybody had actually put some money into raytracing hardware research, we'd have had it 5 years ago. It's another thing entirely to expect that nVidia/ATI will be happy to let some new paradigm take over the market they've been pouring millions of dollars into for a decade.
Which is one pretty major problem--how would raytracing hardware run all of the 3D applications that already exist? Maybe that's a stupid assumption, but users always take backwards compatibility over superior hardware...
 
The Baron said:
Maybe I'm nuts and my memory sucks, but haven't we been hearing that "within a year or two, some raytracing techniques will be fast enough for real time use" for the past five years or thereabouts?
You're not nuts, this topic comes up every once in a while. It is getting closer to reality, but it is still a few years away.

epic
 
ShootMyMonkey said:
The raw power necessary to do it in the "correct" fashion is way, way off. But you can stitch together various cheap tricks just as easily with raytracing as you can with rasterization.
Since it's just as easy and just as hacky, it all comes down to speed in the end ... and raytracing has always had problems there, at the algorithmic level, where new hardware just won't help it.
Certainly the fact that geometry is in the inner loop and pixels are in the outer loop means that things like per-pixel geometry displacement is quite easy and isn't so much a matter of tricking the device or creating any extra data.
Displacement mapping is essentially a form of geometry compression; raytracing can get away with decompressing less of the scene, but then that is essentially the same advantage it has always had over rasterization ... less overdraw (not zero: the extra intersections which get discarded for being occluded are overdraw). Occlusion culling has always been sufficient to more than close the gap in the end.
It also means that implicit and algebraically computable surfaces can be faster than the status quo high-polygon models.
As I say every time this comes up, the intersections with interesting higher-order surfaces can only be determined with iterative methods (which are almost identical to simply tessellating them).

In the end most of this is about Intel propaganda ... raytracing is better suited to CPUs at the moment, and they don't want to lose more ground.
 
Yeah, a bit more fluff to add with the "Zillion core chip by 2010" or whatever that was. The point about scalability with threads and then cores was interesting, though. Isn't that pretty much the same deal with a contemporary GPU?

Did the ray tracing mod - if that's what it was - for Quake 3 ever run as a playable game, or was it reduced to offline render mode? The pics looked good, but I thought it was never played.
 
The Baron said:
Maybe I'm nuts and my memory sucks, but haven't we been hearing that "within a year or two, some raytracing techniques will be fast enough for real time use" for the past five years or thereabouts?

Twenty-five years.
 
IgnorancePersonified said:
Did the ray tracing mod - if that's what it was - for Quake 3 ever run as a playable game, or was it reduced to offline render mode? The pics looked good, but I thought it was never played.


It ran on a cluster of 20 Athlon XP 1800s.
 
This decade is about the R&D and experiments towards realtime raytracing, radiosity, global illumination, etc.

The next decade will probably bring in enough experience, perfected algorithms and enough raw performance to bring these things into mainstream PC and console (Xbox3, PS4) games.
 
Megadrive1988 said:
This decade is about the R&D and experiments towards realtime raytracing, radiosity, global illumination, etc.

The next decade will probably bring in enough experience, perfected algorithms and enough raw performance to bring these things into mainstream PC and console (Xbox3, PS4) games.

...but with the graphical complexity of Quake 1.
I highly doubt anything more complex than Quake 1 at possibly 1024x768 would run realistically.
 
and raytracing has always had problems there, at the algorithmic level where new hardware just won't help it.
Again, mainly because it is an exhaustive search. Hacks and image-based trickery are all about making it less exhaustive. But in practice, even with raw power, the main wall you'll hit is random memory access. Particularly when scenes reach the point of complexity that bounding hierarchies are obvious wins.
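For what it's worth, the per-node test in such a hierarchy is only a handful of arithmetic ops; the pain is the dependent, scattered memory reads while walking the tree. A rough C++ sketch of the usual ray/AABB slab test (names are only for illustration):

```cpp
// Minimal ray vs. axis-aligned bounding box "slab" test, the primitive that
// bounding hierarchies are built from. The arithmetic is trivial; the cost
// in a real tracer is chasing the tree through memory, i.e. the random
// access mentioned above.
#include <algorithm>
#include <utility>

struct Ray  { float o[3]; float invDir[3]; };   // invDir = 1 / direction
struct AABB { float lo[3]; float hi[3]; };

bool hitsBox(const Ray& ray, const AABB& box, float tMin, float tMax)
{
    for (int axis = 0; axis < 3; ++axis) {
        // Distances to the two planes bounding this axis ("slab").
        float t0 = (box.lo[axis] - ray.o[axis]) * ray.invDir[axis];
        float t1 = (box.hi[axis] - ray.o[axis]) * ray.invDir[axis];
        if (t0 > t1) std::swap(t0, t1);
        tMin = std::max(tMin, t0);
        tMax = std::min(tMax, t1);
        if (tMax < tMin) return false;          // slab intervals don't overlap
    }
    return true;
}
```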

As I say every time this comes up, the intersections with interesting higher-order surfaces can only be determined with iterative methods (which are almost identical to simply tessellating them).
That isn't quite true of all varieties, because many of the common ones tend towards an infinite limit which can be calculated algebraically quite easily (if not always stably). Certainly, things like PN triangles and Loop subdivision surfaces are another story.

But even otherwise, the thing that costs you, whether raytracing or rasterizing higher-order surfaces post-tessellation, is having to render or sample against so many small elements (which is exactly what makes raytracing high-polycount meshes expensive in the first place). Whereas being able to test only against the much simpler control mesh and solve within the space of that is often cheaper and decreases memory access costs (though this is obviously problematic on x86, which effectively has no registers ;)).
 
ShootMyMonkey said:
Again, mainly because it is an exhaustive search. Hacks and image-based trickery are all about making it less exhaustive. But in practice, even with raw power, the main wall you'll hit is random memory access. Particularly when scenes reach the point of complexity that bounding hierarchies are obvious wins.
Indeed, coherency is an almost insurmountable disadvantage for raytracing on first hits (and to a lesser extent shadow rays). This isn't going to change until you are Monte Carlo sampling geometry with detail way below the pixel level, at which point the ability of raytracing to only sample a subset of the visible geometry finally becomes relevant.

I thought you were trying to come up with reasons why raytracing would replace rasterization though? :)
But even otherwise, the thing that costs you, whether raytracing or rasterizing higher-order surfaces post-tessellation, is having to render or sample against so many small elements (which is exactly what makes raytracing high-polycount meshes expensive in the first place). Whereas being able to test only against the much simpler control mesh and solve within the space of that is often cheaper and decreases memory access costs (though this is obviously problematic on x86, which effectively has no registers ;)).
Occlusion culling uses simplified representations too.
 
ogl suggestion

Some time ago I posted this in the OpenGL forums:

http://www.opengl.org/discussion_boards/ubb/ultimatebb.php?ubb=get_topic;f=7;t=000565#000014

Basically I wanted a function in the fragment (pixel) shader to do a simple raycast. When you create a VBO, the driver just stores the vertices in VRAM plus some kind of acceleration structure (for example an octree). Then with the GLSL/HLSL rayCast function the silicon could find the triangles the ray collides with (like the AGEIA PhysX does, btw). That could also be done using the new Geometry Shaders, knowing the ray origin/direction and performing a ray-triangle test for each triangle... but that would be slower than using an acceleration structure like the octree.
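Whatever feeds it the candidate triangles (brute force over the VBO or a walk through an octree), such a rayCast call ultimately boils down to a per-triangle test like the standard Möller-Trumbore intersection; a rough C++ sketch of that test:

```cpp
// Standard Möller-Trumbore ray/triangle intersection -- the per-candidate
// test a rayCast() would end up running, whether the candidates come from
// brute force or from an acceleration structure like an octree.
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub  (Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y,
                                             a.z * b.x - a.x * b.z,
                                             a.x * b.y - a.y * b.x}; }
static float dot  (Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns true and fills t/u/v if the ray (orig, dir) hits triangle (v0,v1,v2).
bool rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2,
                 float& t, float& u, float& v)
{
    const float eps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return false;     // ray parallel to triangle
    float invDet = 1.0f / det;

    Vec3 s = sub(orig, v0);
    u = dot(s, p) * invDet;
    if (u < 0.0f || u > 1.0f) return false;

    Vec3 q = cross(s, e1);
    v = dot(dir, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) return false;

    t = dot(e2, q) * invDet;                    // hit distance along the ray
    return t > eps;
}
```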
 
How can rasterization be replaced by raytracing?
You still have to draw the initial textures and geometry. We won't have physics that's so advanced that colours are purely determined by material properties and light.

How will a ray tracer replace rasterized geometry?
 
How can rasterization be replaced by raytracing?
You still have to draw the initial textures and geometry.
I don't follow your logic here. You don't have to *draw* anything before tracing rays. You still have to create actual textures and models, but that doesn't have anything to do with having to draw something. In a lot of realtime raytracers, they do use the GPU to dump a Z-Buffer to accelerate the first hit, but that's not a requirement by any means.
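For reference, that Z-buffer trick amounts to unprojecting each pixel's depth back into world space and using the result as the first-hit point that secondary rays start from. A rough sketch, assuming an OpenGL-style [0,1] depth value and a column-major inverse view-projection matrix from whatever camera the raster pass used:

```cpp
// Reconstruct the world-space first hit for a pixel from a rasterized depth
// buffer, so a raytracer can skip primary rays and start with secondaries.
struct Vec3 { float x, y, z; };

// invViewProj: column-major 4x4 inverse view-projection matrix.
Vec3 firstHitFromDepth(int px, int py, int width, int height,
                       float depth01, const float invViewProj[16])
{
    // Pixel + depth -> normalized device coordinates in [-1, 1]
    // (assumes the GL convention of depth stored in [0, 1]).
    float ndc[4] = { (px + 0.5f) / width  * 2.0f - 1.0f,
                     (py + 0.5f) / height * 2.0f - 1.0f,
                     depth01 * 2.0f - 1.0f,
                     1.0f };

    // Multiply by the inverse view-projection matrix (column-major layout).
    float world[4] = {0.0f, 0.0f, 0.0f, 0.0f};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            world[row] += invViewProj[col * 4 + row] * ndc[col];

    // Perspective divide gives the world-space position of the first hit.
    return { world[0] / world[3], world[1] / world[3], world[2] / world[3] };
}
```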

I thought you were trying to come up with reasons why raytracing would replace rasterization though? :)
So I could still come up with plenty of those. For the most part, on the hardware side, it means needing a lot of memory ports, a lot of cache, and a lot of TLP -- the SaarCOR and Freon 2/7 samples do some pretty interesting stuff for their level, but because they're not nVidia or ATI, it's probably not going to go anywhere. If SaarCOR's team could scale their designs up to 300 million transistors running at 500 MHz, you'd basically be seeing some rather interesting results... will it actually happen? I doubt it.

Certainly, there are ultimately things you can't do without at least raycasting (your example of sub-pixel geometry), e.g. non-linear and time-dependent cameras and other similar things -- aside from the fact that you actually get per-pixel perspective. The fact that anything per-pixel is inherently straightforward in a raytracer, as opposed to per-vertex (pixels in the outer loop), is actually more powerful than you let on. Of course, the fact that it is, by nature, the quintessential "embarrassingly parallel" problem doesn't hurt from a hardware design standpoint.

My big problem is that there's nothing you can achieve effectively by mutating and stripping down theory in order to fit it onto something in an obfuscated and otherwise unsuitable way. Making something needlessly complex and limited in scope is pretty much the ticket for anything that involves world-level sampling on rasterizers. Rasterizers are designed with certain limitations in mind, and achieving higher-order rendering techniques with them means finding ways to circumvent that in a highly impractical way. None of the crazy dreams you see out there will probably ever make it into a real product. Real products always tend to stay well within the hardware's designed limits -- try to push past that, and you are invariably asking for trouble. With raytracing, it's all simply a matter of power (on all fronts, that is). Simple stuff works, complex stuff always breaks down, and raytracing variants are really simple.

Certainly, the first viable place in my mind to introduce raytracing hardware would be a game console or something similar which isn't weighed down by legacy nonsense. On a PC, you have to worry about the past history of games, and the fact that the user basically doesn't understand or give a damn about how things get the job done -- just that it always works. The only problem is that the people who could afford to do this are the same ones who would sooner shoot themselves than admit the fact that raytracing is innately superior ;).
 
Take a breath, raytracing hasn't even made it into offline rendering up until now... the latest RenderMan now supports it, but performance is limited, so Pixar recommends using it with caution. Out of the whole of Stuart Little, only 16 (!!!) frames were raytraced; the rest of the movie was rendered as usual. There are hardware-accelerated raytracing cards, ART-VPS Pure for example, but they are currently far from being realtime, at 1/120 fps for a basic scene despite having 16 DSPs onboard... they basically stack up to 4 cards in a rack and then stack up these racks, and even that is still far from reaching 0.1 fps...

We'll switch to procedural textures, free stochastic antialiasing and HDTV+ screen resolutions, and maybe then raytracing will be approximated in some very limited parts of the scene...
 