D. Kirk and Prof. Slusallek discuss real-time raytracing

Dedicated hardware is always faster given similar development time and money, similar number of transistors and clock speed, etc.

It's not necessarily going to be faster if you can't spend as much time and money, and therefore can't get the number of transistors and the clock speed nearly as high.
 
I haven't read this thread from front to back, so you'll have to excuse me if some of these links have been posted before:

http://www.beyond3d.com/forum/viewtopic.php?t=2961&highlight=ray+tracing

Carmack:

Now there's two directions to go about this. There's companies that have made ray-tracing acceleration chips, that do ray-tracing in a fairly conventional way, where it's object/ray intersections. There's some benefit there... it's going to be a little bit of a hard sell. It might show up in some specialized markets. That type of thing might happen when ray-tracers are getting their asses kicked so thoroughly by rasterizers using real-time hardware that that type of hardware may wind up getting some small market there... but I don't see it really as effective enough for the real-time gaming market. You've still got all of the scene data management issues.

What I do think is still a potential interesting thing is, and this is something I've thought... there can be some opportunity to maybe make a go at, is ray-tracing into voxels basically, where you make an obtuse, surface-skin voxelization of the world. That's something that's so ideally suited for hardware rasterization on there... it's one of those things where it's tantalizingly close to the type of thing that I tend to do, to like... try and write a hardware rasterizer for voxel tracing. I'm surprised there hasn't been a university project or something like that, because that's something which could be done really, really fast and would give you effectively unlimited geometric detail. There's a bunch of trade-offs there in how you wind up exactly getting some of the specular effects that you're used to getting, where you're not sure about surface geometry, but there's interesting potential directions for a lot of the bi-directional ... [*obscured*] ... at a voxel level.

SA:

Ray tracing is primarily useful for specular and refractive touchups IMO.

Transformed geometry is much more efficient for first hit computation and first order diffuse and specular reflection. Soft shadows with area lights approximated by stochastic multiple sources are handled much more efficiently using antialiased shadow maps than with distributed ray tracing. The situations that transformed geometry can handle cover the vast majority of real world scenes with essentially the same amount of realism as ray tracing.

That said, a hardware ray tracer will be very useful for touching up those parts of a scene where refraction and specular reflection require it and the sooner such a capability is available the better.

From my rather simplistic POV I'd say that selective ray tracing for only specific parts of a scene shouldn't be such a bad idea after all.

Interesting link from the first link above (a project that had been sponsored by multiple IHVs):

http://citeseer.ist.psu.edu/cache/p...perszSzrtongfxzSzrtongfx.pdf/purcell02ray.pdf
 
I'm having a hard time seeing how raytracing portions of the screen would be amenable to realtime graphics.
 
bloodbob said:
So you're seriously telling me that you think dedicated hardware can't do ray tracing faster than a generic chip???
No, I said that a GPU, in the near future, will be able to be very fast at RT, just not more efficient than a custom solution.
The money is in GPU development, not in exotic custom hw.

If the dedicated hardware is designed with the same effort as generic hardware, it's ALWAYS FASTER
dreams

ciao,
Marco
 
Laa-Yosh said:
Procedurally generated content will never get good enough. That's the result of many years of research... You need human input, models and textures to build content that you can then use to create variations procedurally. Like the orc builder plugin for Maya that Weta has created to build the Massive armies.
But the content itself will be pretty similar to what we have today, not some looks-almost-like-a-tree procedural thingies. Anything else just won't cut it, and will be too expensive to render anyway, especially in realtime.
We're basically on the same page, but I think much more weight will lie on the procedural side, and quite a lot of it will be runtime-instanced, not pre-generated. Of course you need human input, but a lot of it will be at an architect level, not an artist level.
As for "almost-like-a-tree" thingies... um, http://www.idvinc.com/

I'd rather expect content libraries to appear as a solution. Just as you can make a movie with the same kind of cars, buildings, clothes, vegetation etc. as hundreds of other movies, you will be able to do this with games as well. The artistic choice will be what you put in the environments, how you light them, and so on... Thus there'll be companies specialized in building content to license, and we may even get to see virtual 'actors' for hire as well. Of course otherworld settings will require specialized content, but the same goes for movies as well...
Content libraries are already appearing. Some of them are procedural, like the aforementioned SpeedTree and Darkling Simulations' DarkTree (procedural materials). Whether we call them middleware or content libraries depends somewhat on how much of it works at runtime.
Pure prebuilt content licensing will obviously also have a part to play.

What I'm getting at is that the hand-tuned artistic tricks used to make a particular scene look good in RT 3D environments will have to decrease, because the amount of content will go up and worlds will become increasingly bigger. There simply will not be enough artist resources to make everything look good. And that's IMO where raytracing will have an edge, because it doesn't have to rely on special tricks for a natural look.
You can afford those tricks in offline rendering, because the amount of displayed scenes is finite in the end; not so in Galaxies of Starcraft or whatever the next big ubermassively multiplayer thingie is going to be.
 
Procedural content doesn't have to be generated at runtime..
In a console one would prefer to stream content from DVD.
 
no_way said:
There simply will not be enough artist resources to make everything look good. And that's IMO where raytracing will have an edge, because it doesn't have to rely on special tricks for a natural look.

Yeah, but "natural" and "good" aren't necessarily the same thing.

"Physically correct" is not always a good thing. Indeed in many instances, especially in the entertainment industry which relies heavily on the suspension of disbelief, physically correct is absolutely the wrong thing.
 
Re: HW raytracing

GeLeTo said:
The algorithm is really simple. At each BSP node you have one plane and two pointers to child nodes. You start testing the ray segment against the root node. There are 2 cases - either the ray intersects the plane (in which case it is split in two segments which are tested against the two child nodes) or the ray is completely in front of or behind the plane ...

You have to be careful here, as there are two different levels of recursion in raytracing. The first is in the algorithm for BSP traversal. This one can easily be avoided by reformulating the algorithm in an iterative way and using a manual stack. It's just a cheap implementation trick, but it helps to get rid of ~80% of the recursion overhead.
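The manual-stack trick can be sketched like this. This is a minimal illustration, not anyone's actual implementation - the node layout (axis-aligned split planes, negative children encoding leaf ids) is an assumption made for brevity:

```cpp
#include <vector>
#include <cassert>

// Hypothetical node layout: axis-aligned split planes; a negative child
// value encodes leaf id as -(id+1).
struct BspNode { int axis; float split; int child[2]; };
struct Seg { int node; float tMin, tMax; };

// Iterative BSP traversal with a manual stack: instead of recursing on the
// two sub-segments, the far segment is pushed and the near one is descended
// directly. Returns the leaves pierced by the ray in front-to-back order.
std::vector<int> leavesAlongRay(const std::vector<BspNode>& nodes,
                                const float orig[3], const float dir[3],
                                float tMin, float tMax)
{
    std::vector<int> leaves;
    std::vector<Seg> stack;              // the manual stack
    stack.push_back({0, tMin, tMax});    // root covers the whole segment
    while (!stack.empty()) {
        Seg s = stack.back(); stack.pop_back();
        int node = s.node;
        while (node >= 0) {              // descend until a leaf is reached
            const BspNode& n = nodes[node];
            float d = dir[n.axis];
            int nearSide = (orig[n.axis] < n.split) ? 0 : 1;
            if (d == 0.0f) { node = n.child[nearSide]; continue; } // parallel
            float t = (n.split - orig[n.axis]) / d;                // plane hit
            if (t >= s.tMax || t < 0.0f) {
                node = n.child[nearSide];        // segment entirely near side
            } else if (t <= s.tMin) {
                node = n.child[1 - nearSide];    // segment entirely far side
            } else {
                // Ray crosses the plane: defer the far part, shrink to near.
                stack.push_back({n.child[1 - nearSide], t, s.tMax});
                s.tMax = t;
                node = n.child[nearSide];
            }
        }
        leaves.push_back(-node - 1);     // decode leaf id
    }
    return leaves;
}
```

The only state carried between steps is the explicit `stack` array - exactly the read/write storage a shader would need to pull off the same trick.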

The interesting recursion, on the other hand, is at the rendering level, and it's kind of an interleaved recursion. You trace a ray to find out what shader lies in that direction, then execute the shader. The shader itself spawns secondary rays, which finally gives you what we usually call a shading tree. This tree is inherently recursive, and it is both the reason why raytracing is efficient and why it maps so poorly to rasterisation hw.
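The interleaving can be shown in a few lines. Everything here is a toy (the names and constants are invented, every hit is pretended to be the same mirror shader), but it shows how tracing and shading call back into each other, with the shading tree living implicitly on the call stack:

```cpp
#include <cassert>

struct Ray { int depth; };

float shadeMirror(Ray r);                  // forward declaration

// "Trace a ray to find out what shader lies in that direction, then execute
// the shader." Here the lookup is skipped and every hit is a mirror.
float trace(Ray r) {
    if (r.depth > 3) return 0.0f;          // cut the shading tree at depth 4
    return shadeMirror(r);
}

// The shader itself spawns a secondary (reflection) ray: the recursion
// re-enters trace(), interleaving tracing and shading.
float shadeMirror(Ray r) {
    float emitted = 0.1f;                             // local shading term
    return emitted + 0.5f * trace(Ray{r.depth + 1});  // mix in the reflection
}
```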
 
nAo said:
Procedural content doesn't have to be generated at runtime..
In a console one would prefer to stream content from DVD.
Both runtime generation and stored/cached content have their uses. Runtime generation has the advantage of a basically unlimited level of detail, plus no limit on the amount of generated content. After all, Elite had entire galaxies in a couple of tens of kilobytes.
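The "galaxies in kilobytes" point boils down to seeding a deterministic generator per object. The sketch below is not Elite's actual algorithm - the names and formulas are invented - but it shows the principle: the same content regenerates on demand, so nothing but the seed needs to be stored:

```cpp
#include <cstdint>
#include <cassert>

// Invented per-system payload for illustration.
struct SystemData { uint32_t planets; uint32_t hasStation; };

// Tiny xorshift PRNG; any deterministic generator would do.
static uint32_t next(uint32_t& s) {
    s ^= s << 13; s ^= s >> 17; s ^= s << 5;
    return s;
}

SystemData generateSystem(uint32_t galaxySeed, uint32_t systemIndex) {
    // Derive a per-system seed; the +1 keeps xorshift away from zero.
    uint32_t s = galaxySeed * 2654435761u + systemIndex + 1u;
    SystemData d;
    d.planets    = 1u + next(s) % 8u;    // 1..8 planets
    d.hasStation = next(s) % 2u;         // coin flip
    return d;
}
```

Calling `generateSystem` with the same seed and index always yields the same system, so an "unlimited" galaxy costs only the generator code plus one seed.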
 
Re: HW raytracing

morfiel said:
The shader itself spawns secondary rays, which finally gives you what we usually call a shading tree. This tree is inherently recursive, and it is both the reason why raytracing is efficient and why it maps so poorly to rasterisation hw.
Is your research group going to write an RT implementation on a GPU that has support for very long shaders, unlimited dependent texture reads and dynamic branching?
Tim Purcell's work is very interesting, but he had to split the basic RT algorithm into a lot of computing kernels... and his GPU implementation is actually bandwidth-limited.
 
Re: HW raytracing

morfiel said:
GeLeTo said:
The algorithm is really simple. At each BSP node you have one plane and two pointers to child nodes. You start testing the ray segment against the root node. There are 2 cases - either the ray intersects the plane (in which case it is split in two segments which are tested against the two child nodes) or the ray is completely in front of or behind the plane ...

You have to be careful here, as there are two different levels of recursion in raytracing. The first is in the algorithm for BSP traversal. This one can easily be avoided by reformulating the algorithm in an iterative way and using a manual stack. It's just a cheap implementation trick, but it helps to get rid of ~80% of the recursion overhead.
Yup. And this can be implemented in future rasterization hardware without much effort. The only thing missing is the ability to both read and write values to an array, as required for the manual stack.

morfiel said:
The interesting recursion, on the other hand, is at the rendering level, and it's kind of an interleaved recursion. You trace a ray to find out what shader lies in that direction, then execute the shader. The shader itself spawns secondary rays, which finally gives you what we usually call a shading tree. This tree is inherently recursive, and it is both the reason why raytracing is efficient and why it maps so poorly to rasterisation hw.
Yes, but as I already said, you are not going to need more than 2-4 bounces. How many rays a shader spawns will not affect the recursion depth. I agree that you can't get away with a manual stack here (at least not in a transparent way - you can still use a manual stack). If you really want more than that, you will need the ability for the shader to push/pop its state on a stack (which need not be bigger than the number of bounces). Given that something relatively similar is already implemented in current hardware (the F-buffer), I don't see any obstacle to having this in future rasterization hw. BTW, doesn't ps3.0 hardware already support recursion depths <= 4? That should be enough for 4 bounces.

I expect that fast shader switching depending on the intersected surface will be more problematic than implementing recursion (unless you use some unified do-it-all shader for all pixels).
 
Re: HW raytracing

morfiel said:
The interesting recursion, on the other hand, is at the rendering level, and it's kind of an interleaved recursion. You trace a ray to find out what shader lies in that direction, then execute the shader. The shader itself spawns secondary rays, which finally gives you what we usually call a shading tree. This tree is inherently recursive, and it is both the reason why raytracing is efficient and why it maps so poorly to rasterisation hw.
As I said earlier, you don't need a stack nor a recursive language to do that.
 
Re: HW raytracing

Simon F said:
As I said earlier, you don't need a stack nor a recursive language to do that.
One can 'unroll' the tracing loop to support a maximum number of bounces.. it would not need recursion, but a small amount of extra local storage would probably be required (one could view that extra memory space as a stack).
 
By the way, why is there no texture filtering in the demo? Would it slow things down because of the extra calculation/bandwidth required?
 
Re: HW raytracing

nAo said:
Is your research group going to write an RT implementation on a GPU that has support for very long shaders, unlimited dependent texture reads and dynamic branching?
Tim Purcell's work is very interesting, but he had to split the basic RT algorithm into a lot of computing kernels... and his GPU implementation is actually bandwidth-limited.

No, we are not working on RT on GPUs at all. We were close to starting a project about a year ago, but then two things happened simultaneously: Tim Purcell came up with his solution, which got the credit for proving it's possible at all, and Jörg Schmittler almost finished his special purpose hardware, providing deeper insight into what kind of HW you need for RT. The first event meant that it's no longer interesting to do RT on a GPU unless you can do it efficiently enough to be of practical use; the second discouraged us from believing this will be possible on anything more or less related to today's GPUs.

By now we are working on a pure software raytracer which runs on a cluster. This solution is extremely important for research, as you can test things very quickly. I did some research myself in the last few weeks on methods to speed up scenes with many light sources, and in SW you can test approaches in about a weekend. The second solution (which is focused more on getting something like a mass market product) is Jörg Schmittler's group with their special purpose HW. This is by now at the FPGA stage and will hopefully be implemented as an ASIC soon.
 
Re: HW raytracing

Simon F said:
As I said earlier, you don't need a stack nor a recursive language to do that.
? How are you going to do recursion without a stack ?

You have to calculate some intermediate results, store them somewhere and then trace secondary rays, doing some calculations as soon as the results from those rays are available. As those secondary rays will probably spawn more rays themselves, the problem is inherently recursive by definition.

I know that you can try to avoid true recursion by making it tail-recursive, that is, by giving the secondary rays the information on how to mix their results with the intermediate results from the primary ray, but then you have only reformulated the recursion and need a queue instead of a stack. So you didn't really gain a lot.
 
Re: HW raytracing

GeLeTo said:
I expect that fast shader switching depending on the intersected surface will be more problematic than implementing recursion (unless you use some unified do-it-all shader for all pixels).

I'm not sure about your first statement. I did some applications where we needed as many as 128 bounces, though only on a few pixels, to get reasonable results. That application was a prototype visualisation, not a game, though. The point is: if you have >100 bounces (or even >10) on all pixels on the screen, you are dead. But if it's only a few pixels, it's not that bad. Say you have a glass on a desk. You need 1 ray where you don't hit the glass. You need depth 5 or so where you hit the glass (enter front layer, leave front layer, enter back layer, leave back layer, hit desk = 5). But then you get those few rays that get trapped in the glass because they hit the surface at a shallow angle, and you get a fibre-like effect. Here you may need 50 or 100 bounces, which doesn't make your scene too slow because only a few rays are concerned, but it makes a big difference to the overall appearance.

But you are definitely right about your second statement. It is a problem. I don't dare say which is the bigger problem, recursion or context switching, but neither is nice.
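The "few deep rays" argument can be made concrete with a weight cutoff: terminate a ray once its accumulated contribution falls below a threshold, instead of at a fixed depth. The numbers below are illustrative, not from the thread:

```cpp
#include <cassert>

// Count how many bounces a ray survives before its accumulated weight (the
// fraction it still contributes to the pixel) drops below the cutoff.
int bouncesUntilCutoff(float reflectance, float cutoff) {
    float weight = 1.0f;
    int bounces = 0;
    while (weight >= cutoff) {        // most rays fall below cutoff quickly
        weight *= reflectance;        // each bounce attenuates the ray
        ++bounces;
    }
    return bounces;
}
```

A dull surface (reflectance 0.5) is done after 7 bounces at a 1% cutoff, while a ray trapped in highly reflective glass (0.95) survives for about 90 - deep, but rare, so the scene as a whole stays fast.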
 
Re: HW raytracing

morfiel said:
Simon F said:
As I said earlier, you don't need a stack nor a recursive language to do that.
? How are you going to do recursion without a stack ?
FWIW my first job was developing a ray tracer (based on Amanatides' "Ray Tracing with Cones") in Occam, a language with no recursion.

I think you basically summed it up - each ray can be treated independently, as long as it is told how to add its contribution to the resulting pixel.
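That stackless formulation can be sketched as a flat work queue (toy numbers, invented names, no real geometry): each ray carries the pixel it belongs to and the weight with which its result is mixed in, so rays are independent and no recursive call stack is needed:

```cpp
#include <queue>
#include <cassert>

// A ray that knows how to add its own contribution to the resulting pixel.
struct WeightedRay { int pixel; float weight; int depth; };

void traceAll(std::queue<WeightedRay>& rays, float* framebuffer) {
    const int maxDepth = 3;
    while (!rays.empty()) {
        WeightedRay r = rays.front(); rays.pop();
        float local = 0.1f;                        // pretend shading result
        framebuffer[r.pixel] += r.weight * local;  // add own contribution
        if (r.depth < maxDepth)                    // spawn a secondary ray
            rays.push({r.pixel, r.weight * 0.5f, r.depth + 1});
    }
}
```

Rays in the queue can be processed in any order (or in parallel), since each one adds its contribution independently.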
 
I don't get why people get so excited about raytracing when you can just go ahead and take quantum electrodynamics as a lighting model. It's just as calculation-intensive as RT (if not less), includes EVERY optical effect known to man, and although its actual calculations are mathematically and physically very complex, you can get the point of how it works very quickly.

so the advantages are clear:
you have one global lighting model (one shader) which is used for every pixel on your screen, and this model considers every aspect of light as it behaves naturally. So now you don't have to worry about shadows, reflections, refractions, diffractions or whatever; you can be sure that all these effects will be covered by your global lighting model.

the disadvantages are that you may need to study mathematics and physics for a long, long time before you can actually implement anything based upon this.
but I will show my implementation as soon as it's working 100% correctly and graphics hardware is fast enough :)
 