D. Kirk and Prof. Slusallek discuss real-time raytracing

Chalnoth said:
davepermen said:
just think of this: every ray hits a surface, like a drop hitting the floor, and then the drop scatters in all directions. in the same way, the ray scatters away from the surface (shaded by the surface, of course), and each of those rays has to be followed again.
I don't see why you couldn't abstract the incoming ray as a vertex/triangle, and the outgoing rays as a render target (texture) that could be read into the next pass of rendering.

The real question is efficiency: if you don't do enough work each pass, performance could get pretty poor.

and this is exactly the point: while trying to map raytracing onto current hw, you have to fiddle around in a way that simply kills performance.

you can install linux on your gpu if you want. for sure you can. but don't expect it to be fast. but theoretically, the hw can do it.


and this is what i think about kirk's statement: the power is there, but it's simply NOT USABLE.
 
Er, what you should keep in mind is that raytracing won't replace good and complex shaders - you'd still need those, because they are more important than shadow or reflection quality. It also won't replace the heaps of textures and zillions of polygons that you must have to represent a scene with a high amount of detail.

To artists, texture detail, displacement and shading complexity are the most important things. If you can render it with raytracing without a speed penalty, then go for it - but similar results can be reached with a skilled lighter and some preparation work for the reflection/refraction stuff (except for some cases... but you don't absolutely need to trace an ocean, for example).

I also don't see that many problems here... shader complexity and high-quality displacement mapping seem to be getting the research they need, and texture detail should be helped by virtual memory (it's not reasonable to try to fit the whole scene into local memory).

Also, Doom3's shadowing tech is IMHO an intermediate step. At the level of hw acceleration available in the design phase, there wasn't really any other option for Carmack, but current hardware offers many other possibilities. UE3 is one example of what will be possible, and to me at least, it only needs to lose the "virtual" word from its displacement and increase the shader complexity (standard Phong is not enough for organic stuff)...
 
the problem with shaders is their non-modularity. you often have to go back to very different logical parts of your engine to manage a certain shader.

in raytracing, there is just one shader: the brdf. every shader works the same way in the end, all directly pluggable. this leads to powerful, complex scenes without glitches, and overall performance that is easy to estimate.

the issue is simple: a raytracer can use rasterizing to get some work done, or approximated, if needed/appropriate. the rest it can do the way it should be done.

a rasterizer does not have this option. it's the sub-part. and it can never scale up to full-fledged scenes with brilliant lighting and all, in a fully generic way. especially gi will never really be done without restrictions.

there is no future in precalculation if we want dynamic geometry. be it geomod, be it rigid bodies, be it deformable bodies, be it fluids, etc.

they all cause a lot of rendering issues today. in the static days of quake, lightmaps solved about all of this. and for quake-style levels, today you could use spherical harmonics to encode the whole lighting info.

but if you want dynamic stuff, you get huge issues today; it all gets very complex to combine. shaders are a MINOR issue in 3d graphics on the programming side.

with raytracing, they would be about the only side.



of course, this is only partially true. but knowledge about raytracing is still very limited. raytracers allow full monte carlo integration.



why try to enhance a part that will never be able to do the full thing? instead, try to get the full thing working, and take out what you don't need.

that way, we can target the highest end, with NO restrictions. but we can scale back, wherever we want, whenever we want, for the sake of speed.

get everything and take what you need, instead of getting something and trying to fit everything you need into it.


i just can't see how people can stagnate on an old, wrong habit. rasterizing has its purpose. but not in 3d rendering. it's the wrong algo, and it always will be.

has no one ever had project management in his life? if a project is heading in the wrong direction and will definitely fail, you should drop it immediately, instead of trying to get the most out of it at all costs.

we're exactly at that point. but there is one thing we should not forget: as long as people believe we will one day find the holy grail of rasterizing, they will buy every next generation of hw in the belief that they'll get cinematic gaming.
 
Chalnoth said:
Except rasterizers are getting more flexible at the same time, and due to the R&D that has gone into them, may well outstrip hardware raytracers, if they don't already (Well, if the 6800 doesn't already).

And once rasterizers are faster at raytracing than raytracing hardware (which will be much more expensive to produce, given the limited distribution), it's just a matter of software developers starting to say, "I want to do that," and hardware manufacturers saying, "Okay, sure. We'll optimize for it in our next architecture."

I haven't yet seen any reason why raytracing and rasterizing are mutually exclusive in hardware acceleration, though it would be nice, if we go the raytracing route, to have some different API interfaces for raytracing to make it easier to program for.
If rasterizers ever get faster at raytracing than raytracing hardware, fire the design engineers of the raytracing hardware. Matching its speed should be the most a rasterizer can hope for.
 
What I'm saying is that the current developers of ray tracing hardware don't have the resources to build a 200+ million transistor ray tracer. Rasterizers, on the other hand, are mass-market products, and therefore vastly more money can go into their development and manufacture. This means that even if they aren't as efficient at ray tracing, they can certainly be faster.
 
Chalnoth said:
What I'm saying is that the current developers of ray tracing hardware don't have the resources to build a 200+ million transistor ray tracer. Rasterizers, on the other hand, are mass-market products, and therefore vastly more money can go into their development and manufacture. This means that even if they aren't as efficient at ray tracing, they can certainly be faster.

Yeah, just look at the FPS for the FPGA chip. It would have nowhere near 200 million gates if it were ported to an ASIC, and it was only running at 90 MHz.
 
davepermen said:
raytracing is an inherently recursive algorithm; just google around for tutorials and look at the pictures to get a general idea.
shader hw of today, and possibly tomorrow, is so far not designed for any recursion. a max call depth of 4 is defined, i think, and a function cannot call itself either (but i would need to read that up again).
All you need is storage space for a 'job queue' - you don't explicitly need "recursion". My first job was writing a ray tracer in Occam, a language "blessed" with a lack of recursion (and structs and while loops and...)
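In the spirit of Simon F's point, here is a minimal sketch (C++, with hypothetical Intersect/Shade/ReflectedRay helpers standing in for a real tracer) of how the recursion can be replaced by an explicit job stack: secondary rays become queued entries rather than nested calls, so the only per-pixel state is that small queue.

```cpp
#include <stack>

// Hypothetical scene types, for illustration only.
struct Vec3 { float x = 0, y = 0, z = 0; };
struct Ray  { Vec3 origin, dir; };
struct Hit  { Vec3 point, normal; float reflectivity = 0; };

// Assumed helpers a real tracer would provide.
bool Intersect(const Ray& r, Hit* out);          // nearest hit in the scene
Vec3 Shade(const Hit& h);                        // local lighting at the hit
Ray  ReflectedRay(const Ray& r, const Hit& h);   // mirror bounce

struct RayJob { Ray ray; float weight; int depth; };

// Trace one pixel with no recursive calls: secondary rays are pushed
// onto an explicit job stack instead of being traced by a nested call.
Vec3 TracePixel(const Ray& primary, int maxDepth)
{
    Vec3 colour{};
    std::stack<RayJob> jobs;
    jobs.push({primary, 1.0f, 0});

    while (!jobs.empty()) {
        RayJob job = jobs.top();
        jobs.pop();

        Hit hit;
        if (!Intersect(job.ray, &hit))
            continue;

        Vec3 local = Shade(hit);
        colour.x += job.weight * local.x;
        colour.y += job.weight * local.y;
        colour.z += job.weight * local.z;

        if (job.depth + 1 < maxDepth && hit.reflectivity > 0)
            jobs.push({ReflectedRay(job.ray, hit),
                       job.weight * hit.reflectivity, job.depth + 1});
    }
    return colour;
}
```

The maxDepth cap plays exactly the role a recursion limit would, and because each pixel's jobs fit into a small, bounded buffer, this shape maps onto hardware with fixed storage rather than an unbounded call stack.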
 
Re: HW raytracing

davepermen said:
hm. rasterizing was never programmable...
i'm talking about having a raytracing logic part here instead. the rest can still be shaders (and is designed for that). it's just replacing the fixed rasterizing logic with fixed raytracing logic.
I don't propose programmable rasterization - but rather using the pixel shader logic and the traditional rasterization to do raytracing.

Chalnoth said:
I guess my real question is: How much calculation is there between the levels of recursion? The more there is to do, the better for the efficiency of modern hardware.
The algorithm is really simple. At each BSP node you have one plane and two pointers to child nodes. You start by testing the ray segment against the root node. There are two cases: either the ray intersects the plane (in which case it is split into two segments which are tested against the two child nodes), or the ray is completely in front of or behind the plane (then you don't need recursion - just continue testing the same ray against the 'front' or 'back' child node). When a leaf node is reached, the triangles referenced in that node are tested. Only the first case requires recursion (and only for one of the two segments), so the recursion depth will be smaller than the BSP tree depth.
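A rough sketch of that traversal (C++; ClassifySegment, StartsInFront and TestLeafTriangles are hypothetical helpers, and Ray/Hit are the same placeholder types as in the earlier job-queue sketch): the front-only and back-only cases stay inside the loop, and only a spanning segment recurses, and only for its near half.

```cpp
// Placeholder types; Ray and Hit as in the earlier job-queue sketch.
struct Ray;
struct Hit;
struct Plane { float nx, ny, nz, d; };   // splitting plane: normal + distance

struct BspNode {
    bool     isLeaf;
    Plane    plane;     // valid for interior nodes
    BspNode* front;     // child on the positive side of the plane
    BspNode* back;      // child on the negative side
    // a leaf would reference its triangle list here
};

// Hypothetical helpers a real implementation would provide.
enum class Side { Front, Back, Spanning };
Side ClassifySegment(const Plane& p, const Ray& r, float tMin, float tMax, float* tSplit);
bool StartsInFront(const Plane& p, const Ray& r, float tMin);
bool TestLeafTriangles(const BspNode* leaf, const Ray& r, float tMin, float tMax, Hit* out);

// Walk the ray segment [tMin, tMax] down the tree. The "completely in front"
// and "completely behind" cases are handled by the loop with no recursion;
// only a spanning segment recurses, and only for its near half.
bool TraverseBsp(const BspNode* node, const Ray& r, float tMin, float tMax, Hit* out)
{
    while (node != nullptr && !node->isLeaf) {
        float tSplit;
        Side side = ClassifySegment(node->plane, r, tMin, tMax, &tSplit);
        if (side == Side::Front) { node = node->front; continue; }   // no recursion
        if (side == Side::Back)  { node = node->back;  continue; }   // no recursion

        // Spanning: descend into the near half first; if it misses,
        // carry on iteratively with the far half of the segment.
        const BspNode* nearChild = StartsInFront(node->plane, r, tMin) ? node->front : node->back;
        const BspNode* farChild  = (nearChild == node->front) ? node->back : node->front;
        if (TraverseBsp(nearChild, r, tMin, tSplit, out))
            return true;
        node = farChild;
        tMin = tSplit;
    }
    return node != nullptr && TestLeafTriangles(node, r, tMin, tMax, out);
}
```

The worst-case stack depth is therefore bounded by the tree depth, and a balanced tree of depth 20 already addresses about a million leaves.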

davepermen said:
modern hw can not do recursion really...
Yes, modern hardware can't do recursion, but there is no reason why future hardware couldn't handle it.

davepermen said:
... and if you want full gi solutions, the recursion depths and the resulting stack sizes per pixel would be HUGE.
Why? The BSP tree depths are not exactly huge (a perfectly balanced tree can have 2^tree_depth leaf nodes) and you don't need recursion at each node (see above). It's also unlikely that you will need more than 2-3 bounces.
 
davepermen said:
learn how graphics work, and you would be. even kirk is! :D
Out of curiosity, was that a jab at Dr David Kirk? If so, I was wondering how much graphics research you have done, because, according to the ACM, he has published quite a number of papers. A small selection includes

  • Accurate and precise computation using analog VLSI, with applications to computer graphics and neural networks
  • Curved surfaces in solid modeling: New hardware improves the view
  • A survey of ray tracing acceleration techniques
  • Accurate Rendering by Subpixel Addressing
  • Fast ray tracing by ray classification
  • The rendering architecture of the DN10000VS
  ...
Note the ray tracing and HW references.... perhaps he does know what he's talking about??
 
davepermen said:
in raytracing, there is just one shader: the brdf. every shader works the same way in the end, all directly pluggable. this leads to powerful, complex scenes without glitches, and overall performance that is easy to estimate.

I'm sorry, but I don't get this one... By different shaders I mean various shading models (Blinn, Cook-Torrance, wrapped diffuse, Oren-Nayar, anisotropic, etc.), variations in specular (there's more than Phong/Blinn here too), volumetric stuff, SSS and so on - there's more than enough material on this one. The common requirement is that they need floating-point math; SSS and some other things might also need raytracing.
So how would a raytracer replace all this with "just one shader"?

a rasterizer does not have this option. it's the sub-part. and it can never scale up to full-fledged scenes with brilliant lighting and all, in a fully generic way. especially gi will never really be done without restrictions.

Brilliant lighting is achieved at Pixar, using sometimes as many as a few hundred point + spot lights per scene... GI will get you realistic results, but an art director might prefer something else.

there is no future in precalculation if we want dynamic geometry. be it geomod, be it rigid bodies, be it deformable bodies, be it fluids, etc.

Actually I'm not sure that no precalc is the future... in an ideal system it might be, but there'll always be performance limits in practice.

with raytracing, they would be about the only side.

Er, there might be other ways to light and render a scene than raytracing, that can still be fully dynamic and maybe even unified, too. Why are you so sure that there is a Holy Grail to reach for?

why try to enhance a part that will never be able to do the full thing?

Now that's the point - why do we need the full thing, if we can get what looks like what we want without it?

has no one ever had project management in his life? if a project is heading in the wrong direction and will definitely fail, you should drop it immediately, instead of trying to get the most out of it at all costs.

But this approach has worked in CGI for more than a decade, and we have yet to see a reason why it wouldn't in the future...
 
Simon F said:
Out of curiosity, was that a jab at Dr David Kirk? If so, I was wondering how much graphics research you have done, because, according to the ACM, he has published quite a number of papers. A small selection includes

  • Accurate and precise computation using analog VLSI, with applications to computer graphics and neural networks
  • Curved surfaces in solid modeling: New hardware improves the view
  • A survey of ray tracing acceleration techniques
  • Accurate Rendering by Subpixel Addressing
  • Fast ray tracing by ray classification
  • The rendering architecture of the DN10000VS
  ...
Note the ray tracing and HW references.... perhaps he does know what he's talking about??

Have to point out that this was the same David Kirk who just a couple of months ago was publicly scratching his head over fp24, declaring it more or less to be contrary to nature as he couldn't fathom how things could get done in triplets instead of pairs...;) I'm quite certain you have to remember this (I doubt I'll ever forget it.) It's also the same Kirk who last year was spouting the benefits of fp32 even at a time when the nV3x drivers were exposing nothing but fp16; the same Kirk who also later in the year decided that fp16 (64-bits) was "good enough" after all, after several months of telling the world that "96-bits is not enough", but only when it comes from ATi, of course...;) The list of absurdities, propaganda, and slanted hogwash coming from Kirk in public over the years is far too long to reprint here, and much longer than your little list above. People are far too easily impressed these days, seems to me. This is not so much a "jab" at Kirk as it is simply a recounting of history as I've watched it unfold.
 
Re: HW raytracing

GeLeTo said:
I don't propose programmable rasterization - but rather using the pixel shader logic and the traditional rasterization to do raytracing.
and this is very inefficient and unlikely to ever work well. better to have real raytracing logic, fixed, and rasterizer logic, fixed, and tons of shading units, programmable, which can be used by both. saarcor already shows the fixed raytracing part. the gf6 shows the fixed rasterizing and the programmable shading parts.

together, that would be a hell of a beast (yes, you could share the shading units between both).
 
nAo said:
C) You don't know what you're talking about.

Okay, I thought Kirk was saying we don't need special-purpose hardware for ray tracing.

Kirk:
Now who’s talking company politics? I see no need for special purpose ray tracing hardware if general purpose programmable GPUs can run the same algorithms faster.

Now I know that there is special-purpose hardware that can go faster than the NV40 or the X800 XT at raytracing.

So I'm guessing, since I don't know what I'm talking about, that the first statement must be wrong. Thank you for pointing out that we do need special-purpose ray tracing hardware.
 
Laa-Yosh said:
I'm sorry, but I don't get this one... By different shaders I mean various shading models (Blinn, Cook-Torrance, wrapped diffuse, Oren-Nayar, anisotropic, etc.), variations in specular (there's more than Phong/Blinn here too), volumetric stuff, SSS and so on - there's more than enough material on this one. The common requirement is that they need floating-point math; SSS and some other things might also need raytracing.
So how would a raytracer replace all this with "just one shader"?
shaders are still programmable. but there is just one way to do a shader, namely by describing its brdf (okay, there is more than just the brdf, hehe, but for a start that would be nice).
you can have a blinn, cook-torrance, wrapped diffuse, oren-nayar, anisotropic, etc. brdf describing the surface. those are all different shaders. but they all work with the same input data and produce the same output data.

this is different when you implement shaders on a gpu today. depending on what you want to do, you have to have special data here, special algorithms there (outside of the shader!), you have to prerender certain textures, etc. lots of shader-specific additional overhead. raytracing doesn't need this. renderman shows the way.
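To illustrate what davepermen means (a sketch with made-up names, not how OpenRT or RenderMan actually spell it): every material, however exotic its model, implements the same evaluation signature, so a tracer can call any of them without material-specific render passes or pre-baked textures.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// The single contract every material satisfies: given incoming and outgoing
// directions and the surface normal, return the reflectance. (A full renderer
// would also pass surface parameters, texture coordinates, and so on.)
struct Brdf {
    virtual Vec3 Eval(const Vec3& wi, const Vec3& wo, const Vec3& n) const = 0;
    virtual ~Brdf() = default;
};

// Two very different shading models, both pluggable behind the same interface.
struct LambertBrdf : Brdf {
    Vec3 albedo;
    Vec3 Eval(const Vec3&, const Vec3&, const Vec3&) const override {
        const float invPi = 1.0f / 3.14159265f;
        return { albedo.x * invPi, albedo.y * invPi, albedo.z * invPi };
    }
};

struct BlinnPhongBrdf : Brdf {
    Vec3 specColour;
    float exponent;
    Vec3 Eval(const Vec3& wi, const Vec3& wo, const Vec3& n) const override {
        // Half-vector specular lobe; energy normalisation omitted for brevity.
        Vec3 h{ wi.x + wo.x, wi.y + wo.y, wi.z + wo.z };
        float len = std::sqrt(h.x * h.x + h.y * h.y + h.z * h.z);
        float ndoth = (n.x * h.x + n.y * h.y + n.z * h.z) / (len > 0 ? len : 1.0f);
        float s = std::pow(ndoth > 0 ? ndoth : 0.0f, exponent);
        return { specColour.x * s, specColour.y * s, specColour.z * s };
    }
};
```

The tracer's shading loop only ever sees the Brdf interface, so swapping, say, Oren-Nayar in for Lambert is a change in the material description, not a new render path.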

Brilliant lighting is achieved at Pixar, using sometimes as many as a few hundred point + spot lights per scene... GI will get you realistic results, but an art director might prefer something else.
if you can't get realism, then your surrealism is only the bugs of your faked realism. that means you're limited by the tools, not by your imagination. possibly pixar doesn't want realism. but hollywood does, for special effects. and gamers do, for realistic-style games.

Actually I'm not sure that no precalc is the future... in an ideal system it might be, but there'll always be performance limits in practice.
of course, tradeoffs for performance can always be made. but you should not have any other limits.

Er, there might be other ways to light and render a scene than raytracing, that can still be fully dynamic and maybe even unified, too. Why are you so sure that there is a Holy Grail to reach for?
because decades of experience show it always comes down to this.

Now that's the point - why do we need the full thing, if we can get what looks like what we want without it?
some require it, and they would like to benefit from hw, too.

and why should we do tons of hard work, tons of approximations here and there, if we could get the full thing instead? performance is NOT the reason. the only reason is that you can sell and sell and sell, as people don't know what they would get if they supported the other side.

But this approach has worked in CGI for more than a decade, and we have yet to see a reason why it wouldn't in the future...

it has only worked insofar as you could always keep selling it, and let people do the hard work to get nice approximate results with it.
 
A hybrid scanline rasterizer / raytracer approach would be the best IMHO as well, but it'd still need a lot of fine tuning and care so that render times wouldn't fluctuate too much.

I'd also precalculate/cache as much as I could, like the DreamWorks guys did, as described in their SIGGRAPH 2004 paper linked on the previous page. You generate the radiance maps with raytracing, but do not trace for the actual lighting... You could then add some simple rules or trickery to hide the extra calculations for the cases where you move/toggle a light... say, make sure that it takes at least 0.5 seconds for a light to turn off or on, or generate 2-3 sets of radiance maps, and so on. So you'd basically trace in the background on simplified scene data, and decouple the actual rendering/rasterizing from it to stabilize framerates. This could work in a realtime environment, although with a lot of problems to solve.

As for reflection/refraction... use maps as often as possible, generate cube maps from the scene that you use for the tracing itself, render the reflection/refraction into a texture as well... And so on.
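One way that decoupling could look in code (a sketch only; BakeRadianceWithRaytracing and RasterizeFrame are hypothetical stand-ins for the slow background trace on simplified scene data and the fast per-frame rasterization): the rasterizer always reads the last complete radiance map, while a background thread bakes the next one and publishes it atomically.

```cpp
#include <atomic>
#include <memory>
#include <vector>

// Placeholder for a baked lighting solution (e.g. per-surface radiance texels).
struct RadianceMap { std::vector<float> texels; };

// Assumed helpers: a slow, full-quality bake on simplified scene data,
// and a fast rasterization pass that merely samples the supplied map.
RadianceMap BakeRadianceWithRaytracing();
void RasterizeFrame(const RadianceMap& lighting);

// The most recently completed map; never touched mid-bake.
std::shared_ptr<const RadianceMap> g_current = std::make_shared<RadianceMap>();

// Runs on a background thread: keeps re-tracing the lighting and swaps the
// finished result in without ever stalling the render loop.
void BackgroundTraceLoop(const std::atomic<bool>& running)
{
    while (running) {
        auto fresh = std::make_shared<const RadianceMap>(BakeRadianceWithRaytracing());
        std::atomic_store(&g_current, std::move(fresh));
    }
}

// Runs every frame: frame time depends only on rasterization, not on the trace.
void RenderLoop(const std::atomic<bool>& running)
{
    while (running) {
        auto lighting = std::atomic_load(&g_current);
        RasterizeFrame(*lighting);
    }
}
```

A toggled light simply means the map lags a bake or two behind, which is where the 0.5-second fade rule or the extra pre-baked map sets mentioned above would come in.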
 
davepermen said:
this is different when you implement shaders on a gpu today. depending on what you want to do, you have to have special data here, special algorithms there (outside of the shader!), you have to prerender certain textures, etc. lots of shader-specific additional overhead. raytracing doesn't need this. renderman shows the way.

You're mixing things up here, IMHO. The reason why video cards do all this magic is SPEED, not the fact that they're not raytracing. As you've said, calculating the shading itself is a similar process, but for raytracing you have to travel around the scene... with simple scanline rendering, you just rasterize triangles/micropolygons.

And RenderMan is not a raytracer at its core; it's a scanline rasterizer (REYES architecture) and so on.

if you can't get realism, then your surrealism is only the bugs of your faked realism. that means you're limited by the tools, not by your imagination. possibly pixar doesn't want realism. but hollywood does, for special effects. and gamers do, for realistic-style games.

I repeat: Hollywood movies are not real either. They're almost as stylized as a Pixar movie, only using different elements. But there are many extra artificial lights, bounce cards, etc. even in a simple outdoor daytime scene in any movie. The director wants the lighters to show and hide things as he sees fit, not as physics would dictate.
Same goes for VFX: you light the scene to separate the main character from the background, hide the ugly monster, give mood and so on - and not as a 'simple' and realistic GI calculation would make it.

because decades of experience show it always comes down to this.

It does not.
Prove it, by the way; I've expanded on how movie VFX neither requires nor prefers raytracing, now it's your turn to tell me why it does not work for us... :)

and why should we do tons of hard work, tons of approximations here and there, if we could get the full thing instead? performance is NOT the reason.

The reason is that it's usually better to let an artist run free. He wants fast and flexible tools, and raytracing is not such a thing.

it has only worked insofar as you could always keep selling it, and let people do the hard work to get nice approximate results with it.

We could raytrace as much as we want - we don't use hw acceleration for rasterizing either. It just happens to be faster and easier to control.

Keep in mind that there is a place for raytracing as well, but only to act as a part of the big toolbox, and not to replace it.
 
2002 ACM SIGGRAPH Awards Computer Graphics Achievement Award
David Kirk

Today, computer graphics is a field with cultural and societal importance beyond the dreams of the early SIGGRAPH pioneers. Indeed, much SIGGRAPH research over the years has made the journey from esoteric laboratories to the everyday lives of millions of people. This has become possible because advanced high-performance graphics systems, once costing millions of dollars and the province of flight simulators and a few national centers for research, are now available to anyone with a personal computer. SIGGRAPH is pleased to award Dr. David B. Kirk the 2002 SIGGRAPH Computer Graphics Achievement Award for his key technical role in bringing high performance computer graphics systems to the mass market.

Dave has been involved in graphics hardware and algorithm research for almost two decades. After training at MIT in Mechanical Engineering, receiving his BS and MS degrees there in 1982 and 1984, he joined Raster Technologies, working on the Raster Tech Model 1, Model 1/25, and Model 1/80, which offered z-buffering and shading in firmware. In 1984 he joined Apollo Computer. Along with Doug Voorhies and Olin Lathrop he co-architected one of the outstanding graphics workstations of the day: the Apollo DN10000VS, the first workstation to offer hardware texture mapping.

Dave has also published extensively with collaborator James Arvo, researching algorithms for ray tracing acceleration, object oriented ray tracing, and global illumination. He has also edited Graphics Gems III.

Well... obviously he doesn't know what he's talking about... I mean, B3D is full of self-proclaimed 3D graphics geniuses who have published papers about RT and won SIGGRAPH Awards...
I think one should think twice before saying that someone like Mr. Kirk is a liar or faked his research.

ciao,
Marco
 
i do understand your position as an artist. but you don't seem to really know the difference between how you have to implement effects (lighting, shading, shadows, reflections, and all that stuff) with hw shaders on a gpu today, compared to how simply you would do it in raytracing.

this is a big difference, hidden from most artists by a lot of work. but i'm a programmer, i see this difference. getting art done in rasterizing, to look at least similar to what simple raytracing can accomplish, is tons of management work deciding when to do what, and how, on the gpu. it's not at all automagic, it's complicated, and the path is filled with tons of mines that can nuke your performance. the network you build up to make the material shaders and all the global effects work together is inherently fragile. the individual effect is not the problem. the problem is getting them to work together.

with raytracing, this is all a non-issue. it "just works". and it can work fast, too. and this is where a lot of the research is going.

kirk stated his gpu can do better. he sends a 500mhz, 16-pipeline beast into the duel against a 90mhz, 1-pipeline baby. i'm not surprised if it can outperform saarcor. but is this the right way?
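Taking davepermen's own figures at face value, the raw mismatch he is complaining about is roughly:

(500 MHz × 16 pipelines) / (90 MHz × 1 pipeline) ≈ 89× more raw per-clock resources on the GPU's side, before any question of architectural efficiency even comes up.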

do i really need an xbox or a ps2 to play game boy tetris? i don't think so. and kirk fails to ever react to this fact. understandable, as he wants to hype his gf6 as the best position. but it's wrong.

and all the hybrid things. do you really think that will ever happen? it's even doable with saarcor right now. openrt and opengl can work together, that's a non-issue. you can use saarcor + gpu + cpu if you want (but don't try to share data, of course :D).

the problem is: as long as there is no real raytracing hw, how can we create mixed hw?


what i don't understand is how people want the full thing right now, and it has to beat everything, or else they'll never accept that it could one day work. this is just a small research team. they do as much as they can, and the results are impressive (think david against goliath). they can't kill goliath yet, and they don't plan to. but they try to impress him, so they can work together.

if they get real funding, sponsorship, and support from one of the bigger players, they can get big enough to kill goliath.

and two things remain:

for the best image quality, where every sort of effect, every sort of lighting, and everything imaginable should be doable, you need raytracing (or beam tracing, hehe, to get rid of all the filtering issues for ever).

asymptotically, raytracing wins on big datasets: with an acceleration structure, the cost per ray grows only roughly logarithmically with scene size, while a rasterizer has to touch every triangle. and datasets get bigger and bigger.


and a third thing: the software renderers used in all sorts of high-end stuff are NOT comparable in any way to today's gpus. otherwise, they would render everything on the gpu right now.
 