D. Kirk and Prof. Slusallek discuss real-time raytracing

Evildeus said:
davepermen said:
Evildeus said:
Well, that also means you can't prove your claims. And your logic is a bit flawed, don't you think? :?

uhm? http://www.openrt.de/ -> http://www.saarcor.de/ -> http://graphics.cs.uni-sb.de/~jofis/SaarCOR/DynRT/DynRT.html

And much more. I don't have to prove SaarCOR (nor inTrace, www.intrace.com); they can do that themselves.
I know what can be done; that doesn't tell me how it compares to a 6800.

It just states what SaarCOR can do, and inTrace. Kirk states his GPU can do better, but he has no proof. If he had, I would believe him. As it stands I can't, as he is known for marketing. So, since he can't show his claims, the tendency is that he has no proof.

And the GF6 cannot beat SaarCOR in efficiency. Possibly in raw speed, but not in efficiency.
 
So if I say to you that a bird can fly, but I can't provide you a bird, does that mean birds can't fly? :?

If a 6800U can do better in raw power, it means at least that part of his claim is not pure lies.
 
davepermen said:
The ability to scale to the perfect image, something where raytracing scales far better.
Far better than what?

There is a lot of raytracing involved in newer CGI, and there is even a lot of raytracing involved in current games. Spherical harmonics, Polybump: they all use raytracing.
LOL :) Are you calling everything that fires a ray RT?
Obviously you don't need RT (not even at preprocess time) to use spherical harmonics or mesh simplification via normal maps.

We ARE interested in the recursive algorithm; we ARE interested in implementing it the real way.
What's the real way?
Because it seems a lot of people are not that interested in that way...

Today's GPUs don't have this nice line anymore. They are already hot, big, power hungry, always at their limits.
Well... let's wait for the RT revolution... forever :)
You're right, GPUs are big, power hungry and have limits... but do you realize most of the silicon on a current GPU is not devoted to rasterizing?
RT hardware can't magically compress shading time.

ciao,
Marco
 
nAo said:
Far better than what?
Rasterizing. Otherwise, the power of today's hardware would already give us Shrek, at MINIMUM.

LOL :) Are you calling everything that fires a ray RT?
Obviously you don't need RT (not even at preprocess time) to use spherical harmonics or mesh simplification via normal maps.
If you try to map those algorithms to hardware, then yes, it's quite visible that they involve raytracing, and they would fit easily onto raytracing hardware. Not so onto rasterizing hardware.

Most use raytracing there, due to the lack of good-quality rasterizing solutions for these problems. Epic in UE3 does, for example; they said so in their 64-bit hype statements. Polybump, too. Doom3 as well. There are rasterizing approaches, but they don't give the same result.

What's the real way?
Because it seems a lot of people are not that interested in that way...
Learn how graphics works, and you would be. Even Kirk is! :D

Well... let's wait for the RT revolution... forever :)
You're right, GPUs are big, power hungry and have limits... but do you realize most of the silicon on a current GPU is not devoted to rasterizing?
RT hardware can't magically compress shading time.
Actually, it removes all sorts of LOD and culling issues, and you don't need to shade a single pixel you won't need in the end. This results in a much more stable framerate, and generally a higher one if you're shading-limited.
 
Evildeus said:
So if I say to you that a bird can fly, but I can't provide you a bird, does that mean birds can't fly? :?

If a 6800U can do better in raw power, it means at least that part of his claim is not pure lies.

birds can fly?

Well, the 6800 has more raw power: 16 pipelines at 500 MHz, and much faster RAM. Of course it has more raw power. It just doesn't mean anything. I know GPUs can do raytracing much, much faster, but it doesn't matter if you can't scale it to real game scenes with ease. And so far you can't, with GPUs, due to other restrictions.

The process doesn't map nicely onto GPUs. Because of that, most of the processing power is spent on theoretically useless tasks, and a lot of it is simply lost.

This was true up to the GF6. Kirk should have been able to show that it's no longer true with the GF6. A marketing person should never claim something he can't back up. They are wrong until they can prove themselves right, not the other way around.

Tracing a sphere with diffuse shading runs at 70 fps at 1280x1024 on a 9700 Pro. Unbeatable, except with a newer GPU.

GPUs definitely have impressive power, which I'd like to see used directly for raytracing. But so far the mapping fails.

I do believe Kirk that the GF6 has enough power. The question is whether there is a way to actually USE it. He can't show that there is.
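For illustration, the core of a single-sphere diffuse trace like the 9700 Pro demo mentioned above can be sketched in a few lines (hypothetical Python, not the actual demo code; `shade_diffuse` and friends are made-up names):

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Nearest positive hit distance along the ray, or None on a miss.

    `direction` is assumed normalized, so the quadratic's 'a' term is 1.
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def shade_diffuse(origin, direction, center, radius, light_dir):
    """Lambert term at the hit point, 0.0 on a miss."""
    t = intersect_sphere(origin, direction, center, radius)
    if t is None:
        return 0.0
    hit = tuple(o + t * d for o, d in zip(origin, direction))
    normal = tuple((h - c) / radius for h, c in zip(hit, center))
    return max(0.0, sum(a * b for a, b in zip(normal, light_dir)))
```

At 1280x1024 that inner loop runs about 1.3 million times per frame, which is why per-ray efficiency, rather than raw FLOPs, decides the frame rate.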
 
I suppose that within the next 3-5 years somebody will come out with a GPU that can do proper raytracing, so it can be used at a practical level, in games.

I am completely impressed with the SaarCOR chip. It slaughters the NV40 / GeForce 6800 at what it does well: raytracing. I'm sure the GeForce 6800 can be made to do raytracing of some sort, but it will never compare well to the efficiency of the 90 MHz, 1-pipeline SaarCOR chip, because the NV40 was not designed with raytracing in mind.

Once the SaarCOR chip is perfected, I hope it can be scaled up massively and used either as a stand-alone co-processor or as part of a new GPU from some major graphics vendor (ImgTech PowerVR, maybe :oops: ).

I'd also like to see more competition in this field: rivals to the SaarCOR technology.

It will also be interesting to see how Cell processors can be programmed to run raytracing, and whether Cell is any better at it than conventional CPUs, GPUs, or this SaarCOR chip.

In the mid-to-late 1990s, the big thing for consumer PC graphics chips to get was geometry processing (T&L). We had 3D acceleration, but not full polygon processors; the CPU had to do the front-end work. Workstation processors had geometry processing on-chip, or on-board at the card level. It wasn't until late 1999 that PC graphics chips got on-chip geometry processing: the NV10 / GeForce 256 and, of much less importance, the Savage 2000 (I think that's the Savage that got T&L)... T&L wasn't really used until 2000 or 2001. Workstations and arcade boards had it since the early-to-mid 1990s (SGI, E&S, 3Dlabs, Sega Model 2 and Model 3, Namco System 22, etc.).

So now, at the start of the middle of this decade (about the same point at which we started to get 3D acceleration in the last decade), we are seeing the first attempts at raytracing in hardware: chips designed with raytracing in mind. I predict that by 2008-2009 we will have consumer 3D hardware built around raytracing of some sort, be it a raytracing co-processor, a raytracing unit within a conventional or semi-conventional rasterizing GPU/VPU, or even a full-blown raytracing GPU.
 
One rival hardware design to SaarCOR is Freon, but I haven't had any official news for over half a year, so I don't know how well or badly they are getting on.

but they had one very important thing:

Raytracing has a lot of small-scale and large-scale parallelism, in a way that is simply not mappable onto any ordinary CPU, nor onto a streaming processor.

I hope this knowledge of how to parallelize can find its way into hardware, but I have no clue about the state of their work.
 
If you try to map those algorithms to hardware, then yes, it's quite visible that they involve raytracing, and they would fit easily onto raytracing hardware. Not so onto rasterizing hardware.
Sorry, but at this point I'm starting to think you don't know what you're talking about (no offense, I could have misunderstood your words...). Show me how evaluating the rendering equation via functions already expanded in Legendre polynomials involves RT.

Epic in UE3 does, for example; they said so in their 64-bit hype statements. Polybump, too. Doom3 as well. There are rasterizing approaches, but they don't give the same result.
Please explain further; I have trouble reading and understanding your words. (Side note: I wouldn't use the term 'Polybump', as Crytek didn't develop anything in this field, and they weren't the first, either...)

Learn how graphics works, and you would be. Even Kirk is! :D
Well, I'm here to learn and I'm open minded too.

Actually, it removes all sorts of LOD and culling issues, and you don't need to shade a single pixel you won't need in the end. This results in a much more stable framerate, and generally a higher one if you're shading-limited.
Actually, you don't need RT at all to remove culling and overdraw/overshading issues, as a lot of rendering packages (and hardware renderers) show.
LOD is not something you want to avoid, even if you're employing RT.

ciao,
Marco
 
Those algorithms normally all use raytracing, because they define something like this:
"we sample, at this point, what we can see in this direction".

This is, by definition, 1:1 mappable to raytracing. Often they integrate over a certain space, which can then be approximated in some way with rasterizers.

But in general, raytracing lets you use all sorts of statistics for sampling arbitrary direction-related information at arbitrary points.

Polybump-style technologies use this to trace from the surface of the low-res mesh, along the surface normals, into the high-res mesh, to sample colour, normal, and other information. They then also generate the self-shadowing terms by raytracing, sampling over a hemisphere at each final texel.

Spherical harmonics have to evaluate what they see over the whole hemisphere, or the whole sphere for translucent materials.

Subsurface scattering maps nicely onto raytracing as well.

All sorts of global illumination problems are normally defined in a way that fits raytracing.
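The hemisphere-sampling step described above (the self-shadowing bake) can be illustrated with a minimal Monte Carlo sketch, where `occluded(point, direction)` is a hypothetical stand-in for the actual ray cast into the high-res scene:

```python
import math
import random

def sample_hemisphere(normal, rng):
    """Uniform direction on the hemisphere around `normal` (rejection sampling)."""
    while True:
        d = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
        length = math.sqrt(sum(c * c for c in d))
        if 0.0 < length <= 1.0:
            d = tuple(c / length for c in d)
            if sum(a * b for a, b in zip(d, normal)) > 0.0:
                return d

def bake_occlusion(point, normal, occluded, samples=256, seed=0):
    """Fraction of hemisphere directions NOT blocked: the per-texel
    self-shadowing (accessibility) term the post describes."""
    rng = random.Random(seed)
    unblocked = sum(
        0 if occluded(point, sample_hemisphere(normal, rng)) else 1
        for _ in range(samples)
    )
    return unblocked / samples
```

With a few hundred samples per texel, this is exactly the kind of massively parallel, incoherent ray workload that dedicated raytracing hardware targets.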
 
The use of raytracing in movie VFX rendering, and everything below it (i.e. most of the CGI you see), is still limited due to speed and unpredictable render times. A scanline renderer is always faster, and gives you all the quality you need in shading, sampling, texture filtering, etc.
What you won't get in scanline are:
- raytraced reflection and refraction (you can fake it with maps, so not needed most of the time)
- global illumination (you bake it into textures, fake it with many lights)
- perfectly nice area shadows (fake it in compositing, with quick feedback)
- subsurface scattering (we have yet to get a good fake)

If you really need one of these, you'll raytrace that part of the scene in a separate pass, and keep the scanline rendering for the rest. The current trend is exactly this.
Of course a fully interactive 3D environment is a bit different from a movie scene; however, it also means that render times have to stay constant. Raytracing just won't get you that.

I guess it's a recap, but it looks like you guys got back into the trace everything loop...

(Note for davepermen: I am a 3D artist working at a studio producing cinematics for an AAA game from EA :)
 
davepermen said:
Polybump-style technologies use this to trace from the surface of the low-res mesh, along the surface normals, into the high-res mesh, to sample colour, normal, and other information. They then also generate the self-shadowing terms by raytracing, sampling over a hemisphere at each final texel.
We're not doing this in realtime, just in preprocessing, so who cares?
The idea behind it is that we want to replace million-triangle meshes with thousand-triangle meshes :)
Or do you mean that with RT we could avoid those 'tricks' and just render the million-triangle meshes?
It would be funny to skin a 1-million-triangle character, though ;)

Spherical harmonics have to evaluate what they see over the whole hemisphere, or the whole sphere for translucent materials.
No. Spherical harmonics are just a way to represent functions in a space that has a lot of nice properties.
One uses SH in realtime rendering precisely to AVOID other algorithms such as RT :)
Well... it seems I'm not the only one who should learn more here.
Obviously this discussion is going berserk. I'm done... next thread please!

ciao,
Marco
 
Laa-Yosh said:
Of course a fully interactive 3D environment is a bit different from a movie scene; however, it also means that render times have to stay constant. Raytracing just won't get you that.

Actually, this is exactly where raytracing is better than rasterizing, if your scene has full-fledged shading and lighting effects. Doom3 has this problem with stencil shadows very badly, and it needs tons of hardware and software support to get around it even partially (frame-rate swings of only a factor of 5, or so...).



(Back to you: nice to see someone from the graphics department. I know you prefer splitting the tasks; there just isn't any good solution for doing that yet. First we need raytracing at all, then we can combine...)


My question to you: if speed were not the issue, would you choose the rasterizer approach, where the fakes look good enough, or prefer doing everything directly with the raytracer, getting the full solution in one correct step, without afterwork, fixing, and tweaking?


Oh, and do you think the way offline rasterizers hack around to get their work done is mappable to hardware at all?


(These are not meant as offensive questions. I just don't know, so I ask.)
 
nAo said:
We're not doing this in realtime, just in preprocessing, so who cares?
The idea behind it is that we want to replace million-triangle meshes with thousand-triangle meshes :)
Or do you mean that with RT we could avoid those 'tricks' and just render the million-triangle meshes?
It would be funny to skin a 1-million-triangle character, though ;)
Nope. I'm talking about the fact that we've been using raytracing for quite some time: preprocessing, offline rendering. Why not realtime rendering? If hardware can do it, WHY NOT?
The fact is, a lot of today's tricks need a raytraced preprocess to get that far. Raytracing hardware would definitely help, even for preprocessing only. Even more so for not having to invent such algorithms all the time, just to map solutions into the small domain of what a rasterizer can do.

No. Spherical harmonics are just a way to represent functions in a space that has a lot of nice properties.
One uses SH in realtime rendering precisely to AVOID other algorithms such as RT :)
Well... it seems I'm not the only one who should learn more here.
Obviously this discussion is going berserk. I'm done... next thread please!
Again, to generate the data stored in the SH coefficients, you normally use raytracing.

Raytracing is involved in just about all GI situations. Having it in hardware would help, again.
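To make that concrete: projecting directional data into SH coefficients is exactly a "sample many directions" task. A hedged sketch of a Monte Carlo projection onto the first two real SH bands, with `radiance` standing in for whatever ray cast the baker performs (all names hypothetical):

```python
import math
import random

SH_BASIS = [  # first two real SH bands, evaluated at a unit direction (x, y, z)
    lambda x, y, z: 0.5 * math.sqrt(1.0 / math.pi),        # l=0
    lambda x, y, z: math.sqrt(3.0 / (4.0 * math.pi)) * y,  # l=1, m=-1
    lambda x, y, z: math.sqrt(3.0 / (4.0 * math.pi)) * z,  # l=1, m=0
    lambda x, y, z: math.sqrt(3.0 / (4.0 * math.pi)) * x,  # l=1, m=1
]

def project_sh(radiance, samples=4096, seed=0):
    """Monte Carlo projection of a spherical function onto SH_BASIS."""
    rng = random.Random(seed)
    coeffs = [0.0] * len(SH_BASIS)
    for _ in range(samples):
        # uniform direction on the unit sphere (rejection sampling)
        while True:
            d = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
            length = math.sqrt(sum(c * c for c in d))
            if 0.0 < length <= 1.0:
                x, y, z = (c / length for c in d)
                break
        f = radiance(x, y, z)
        for i, basis in enumerate(SH_BASIS):
            coeffs[i] += f * basis(x, y, z)
    # uniform sphere pdf is 1/(4*pi), hence the 4*pi weight
    return [c * 4.0 * math.pi / samples for c in coeffs]
```

At runtime the stored coefficients are just dotted against the same basis; the raytracing lives entirely in filling `radiance`.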
 
davepermen said:
Actually, this is exactly where raytracing is better than rasterizing, if your scene has full-fledged shading and lighting effects. Doom3 has this problem with stencil shadows very badly.

Er, I consider Doom3's stencil shadowing to be similar to raytracing in terms of unstable render times... Shading and lighting happen all the time in rendering, and a robust renderer like PRMan usually takes such simple loads without any worries. It's things like a very large number of objects, or very complex shaders, that can bring it down. And, of course, raytracing...

My question to you: if speed were not the issue, would you choose the rasterizer approach, where the fakes look good enough, or prefer doing everything directly with the raytracer, getting the full solution in one correct step, without afterwork, fixing, and tweaking?

I believe we would still not go to full raytracing, as we don't desire physically correct results. This is the reason Mental Ray got a fast scanline engine to supplement its (very good) raytracer, too.
We would, however, gladly take advantage of it whenever it could help. If there were no penalty for using an area light, a traced refraction, and so on, then we'd most likely flip the switch and enjoy the results. At the moment, however, there are sometimes orders of magnitude of difference between using a simple shadow-mapped light with some compositing trickery to make it look realistic, and using a proper area light with an oversampled area shadow. And keep in mind that in the VFX business you usually MUST be able to re-render a whole shot in one night, sometimes even in a few hours, when there are last-minute changes (asked for by the director ;).


Oh, and do you think the way offline rasterizers hack around to get their work done is mappable to hardware at all?

I wonder if I'm getting the question right... you mean, how much of our general workflow could be HW accelerated, from rendering through compositing? I'm not sure I have enough technical knowledge to answer that...
 
davepermen said:
Raytracing is an inherently recursive algorithm; just google for tutorials and look at the pictures to get a general idea.
I guess my real question is: How much calculation is there between the levels of recursion? The more there is to do, the better for the efficiency of modern hardware.
 
There is a general belief that GPUs will one day be able to do all the rasterizer trickery high-end artists use to get good images (you know those tricks more actively than I do, I guess).

The problem is, most of those solutions are far from hardware-mappable any time soon, and most of them, even with hardware, would take minutes to hours to render today.

Stencil shadowing technically has NOTHING to do with raytraced shadows. On the contrary, it shows exactly the rasterizing issues that exist.

We need to do the whole stencil setup for the whole geometry once, plus the two stencil passes (or one collapsed pass), to render it. As a matter of fact, you can NOT know how long this takes, as it can eat near-infinite fillrate. With the optimizations in stencil speed and the 3D box clamping, this becomes less of an issue, but the fact remains.

Raytracing would simply need to do "another pass", at most. Cost: defined. Time: defined. Simple, and affordable.
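A sketch of why that extra pass has a defined cost: a shadow query is one ray per pixel per light, one bounded traversal, independent of silhouette complexity. All names here are hypothetical, and spheres stand in for the scene:

```python
import math

def sphere_blocks(origin, direction, center, radius, max_t):
    """True if the sphere intersects the ray segment (0, max_t)."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False
    t = (-b - math.sqrt(disc)) / 2.0
    return 0.0 < t < max_t

def in_shadow(point, light_pos, spheres):
    """One shadow ray: one scene traversal per query, nothing more."""
    to_light = tuple(l - p for l, p in zip(light_pos, point))
    dist = math.sqrt(sum(c * c for c in to_light))
    direction = tuple(c / dist for c in to_light)
    return any(sphere_blocks(point, direction, c, r, dist) for c, r in spheres)
```

Compare this to stencil volumes, where the fill cost depends on how far the silhouette edges extrude, and so varies wildly from frame to frame.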

Doom3 is one of the games that shows how bad rasterizers actually are: to get just a bit further in realism, you need to spend a huge amount of additional computation.


It's a difficult topic. There are a lot of... mental brick walls. People have simply never learned to think outside rasterizing constraints. The reason the industry moved to rasterizers was just one:

Immediate, fast results with a good amount of quality (Quake 1 as "a good amount"). They all knew right from the start that it was the wrong direction to push, but back then it was the right way in terms of performance versus quality.

Raytracing was too costly. And now, years later, rasterizers have gotten all the money, and tons of research, to evolve to the max. Nowadays we know a lot about hardware and software, and with this knowledge SaarCOR, OpenRT, inTrace, RealStorm and Freon are evolving.

The target: Brazil and co. as a minimum. If hardware is fast and designed for the real solutions, we don't need today's fast hardware that implements only the hacked solutions. Then Mental Ray can drop the rasterizer fallbacks, and you, as an artist, can do so too.
 
Chalnoth said:
davepermen said:
Raytracing is an inherently recursive algorithm; just google for tutorials and look at the pictures to get a general idea.
I guess my real question is: How much calculation is there between the levels of recursion? The more there is to do, the better for the efficiency of modern hardware.

Modern hardware can't really do recursion. And if you want full GI solutions, the recursion depths, and the resulting stack sizes per pixel, would be HUGE.

The other way would be respawning rays: possible in raytracing hardware, but not really on GPUs (except with multipass solutions, of course).

Just think of this: a ray hits a surface the way a drop hits the floor, and the drop scatters in all directions. In the same way, rays scatter away from the surface (shaded by the surface, of course), and each of those rays has to be followed again.

If you have a huge pipeline where you can feed in new rays while processing others, this is a non-issue. Otherwise, it is quite an issue.
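One way to picture such a pipeline in software is to flatten the recursion into a work queue that new rays are fed back into. A hedged sketch, with `hit_fn` as a hypothetical stand-in for intersection plus shading:

```python
from collections import deque

def trace_iterative(primary_rays, hit_fn, max_depth=3, min_weight=1e-3):
    """Flatten recursive ray spawning into a FIFO work queue.

    `hit_fn(ray)` returns (local_color, [(child_ray, child_weight), ...]);
    both the signature and the scalar 'color' are illustrative only.
    """
    total = 0.0
    queue = deque((ray, 1.0, 0) for ray in primary_rays)
    while queue:
        ray, weight, depth = queue.popleft()
        local, children = hit_fn(ray)
        total += weight * local
        if depth + 1 <= max_depth:
            for child, child_weight in children:
                w = weight * child_weight
                if w > min_weight:  # terminate dim rays early
                    queue.append((child, w, depth + 1))
    return total
```

With a single primary ray that always spawns one half-weight child, the queue accumulates 1 + 0.5 + 0.25 + 0.125 = 1.875 at depth 3, the same total a depth-first recursion would produce, but with no per-pixel stack.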
 
Except rasterizers are getting more flexible at the same time, and due to the R&D that has gone into them, they may well outstrip hardware raytracers, if they don't already (well, if the 6800 doesn't already).

And once rasterizers are faster at raytracing than raytracing hardware (which will be much more expensive to produce, given the limited distribution), it's just a matter of software developers starting to say, "I want to do that," and hardware manufacturers saying, "Okay, sure. We'll optimize for it in our next architecture."

I haven't yet seen any reason why raytracing and rasterizing are mutually exclusive in hardware acceleration, though if we do go the raytracing route, it would be nice to have a dedicated API interface for raytracing, to make it easier to program for.
 
davepermen said:
Just think of this: a ray hits a surface the way a drop hits the floor, and the drop scatters in all directions. In the same way, rays scatter away from the surface (shaded by the surface, of course), and each of those rays has to be followed again.
I don't see why you couldn't abstract the incoming ray as a vertex/triangle, and the outgoing rays as a render target (texture) that could be read into the next pass of rendering.

The real question is efficiency: if you don't do enough work each pass, performance could get pretty poor.
 
Raytracing and rasterizing are inherently different; you lose the power of both if you try to combine them, at least in hardware.

You can still gain by using small rasterizing solutions in parts of the raytracer, so technically you can reuse the knowledge. But only as a hardware manufacturer.


OpenRT is a great API for these needs. It works very much like OpenGL, but is more object-oriented. A public sample implementation is in the works (a sort of refrast :D). It will be slow, but working.
 