D. Kirk and Prof. Slusallek discuss real-time raytracing

nAo said:
Well.. obviously he doesn't know what he's talking about.. I mean, B3D is full of self-proclaimed 3D graphics geniuses who have published papers about RT and won Siggraph Awards..
I think one should think twice before saying someone like Mr. Kirk is a liar or faked his research.

ciao,
Marco

better read what he actually says in the discussion instead. he gets asked for proof, and can't give any. he just makes claims. he may be a genius, but in the name of his company he always says whatever sounds best for promoting the product. who doesn't? nvidia doesn't care if they deliver a piece of hw that just fools people and blinds them from the truth. who cares, as long as people buy? NOBODY. it's even BETTER. they don't have to bring real solutions; they just have to make people want something, and then give it to them.

this is what everyone does. we would be at 2.8-3 GHz athlon64 right now, but because of intel's failure with its prescott, we get 2.4 GHz instead. amd could give us more, but they won't. they just want to sell the least expensive thing for as long as possible.

nvidia does the same. so does ati, and everybody else. revolutions only happen once such a big company starts to break. intel is currently stumbling, and look: now we'll soon get small, high-performing, low-power, high-quality chips. if people hadn't complained about the heat and noise a prescott produces, and intel hadn't gotten into trouble, things would never have evolved.

companies don't want to deliver the best. they want to deliver the best that is just good enough to make the company win.

kirk works at a company. so that's his main focus.
 
Let's settle this large dataset thing... on our current project, we have one hero character with 250 thousand polygons in the control cage of the subdivision surface. We have a few dozen texture maps at 8K×8K resolution. We use micropolygon displacement. This one creature has more data than whole games have today.

I seriously doubt that any raytracer could beat PRMan on this simple character. It renders very quickly, and scales very well from video to movie resolution, with sparsely sampled motion blur and depth of field. So, could you convince me that a raytracer would be faster? I really doubt it :)
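As a back-of-the-envelope check on the "more data than whole games" claim: the exact map count and formats aren't given above, so the figures below are assumptions (24 maps, uncompressed RGBA8, ~64 bytes of vertex data per control-cage polygon), but the order of magnitude is instructive.

```python
# Rough size of the hero asset described above. Assumed figures:
# 24 texture maps at 8192x8192, 4 bytes per texel (uncompressed RGBA8),
# and ~64 bytes of vertex data per control-cage polygon.
texels_per_map = 8192 * 8192
texture_bytes = 24 * texels_per_map * 4   # exactly 6 GiB
cage_bytes = 250_000 * 64                 # ~15 MiB -- textures dominate

total_gib = (texture_bytes + cage_bytes) / 2**30
print(f"textures: {texture_bytes / 2**30:.1f} GiB")
print(f"total:    ~{total_gib:.2f} GiB uncompressed")
```

Even with compression, that is far beyond the video memory of any 2004-era GPU, which is the crux of the "large dataset" argument.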
 
davepermen said:
kirk works at a company. so thats his main focus.
That's a given. No wonder he's pushing his company's products (like Prof. Slusallek does, even though he works for an academic institution, and I see nothing wrong with that).
I just pointed out that one should think twice before calling someone else a liar or incompetent.
 
sure. kirk has done good work. i would not call him a liar. i would just call his statements marketing-driven.

he cannot show anything, except that the gf6 has great rasterizing performance. intrace, on the other hand, can show great raytracing performance, yet kirk states that with his gpu this would all be unneeded. we'd all like to see proof of such a claim.

would you believe the intrace guy if he stated "with our hw configuration, a gf6 simply sucks at rasterizing, you would never touch it again"? i wouldn't; i would want to see proof.

same here.
 
Laa-Yosh said:
Let's settle this large dataset thing... on our current project, we have one hero character with 250 thousand polygons in the control cage of the subdivision surface. We have a few dozen texture maps at 8K×8K resolution. We use micropolygon displacement. This one creature has more data than whole games have today.
hm.. when will we see this in realtime on a gpu?

I seriously doubt that any raytracer could beat PRMan on this simple character. It renders very quickly, and scales very well from video to movie resolution, with sparsely sampled motion blur and depth of field. So, could you convince me that a raytracer would be faster? I really doubt it :)
it would render better and faster than on any gpu at full detail with all features: proper lighting and shadowing, possibly subsurface scattering, and all the fancy effects you want, and need, to add if it's supposed to look natural and good.

the micropolygon stuff isn't done in hw at all yet either. we'll wait and see whether it really is such a great thing for scalability and raw performance on a gpu. and it doesn't solve the major problem we have today: how to get proper lighting working in a generic way. offline renderers don't have issues there, of course; they can access anything at any time if needed.
 
Rasterization is almost certainly more efficient for testing visibility and for getting hard shadows, and it is more efficient for edge and texture anti-aliasing (as long as you don't have subpixel tris). At the moment raytracing simply doesn't deserve to take first place as far as support is concerned; the results it could put on the screen at a decent fps wouldn't compare.
 
nAo said:
Well.. obviously he doesn't know what he's talking about.. I mean, B3D is full of self-proclaimed 3D graphics geniuses who have published papers about RT and won Siggraph Awards..
I think one should think twice before saying someone like Mr. Kirk is a liar or faked his research.

ciao,
Marco

Exactly where did I say he lied or faked his research? I never said that.

What I did say was that he either lied (or the alternative, see below), and I was referring to the discussion; regardless, I said it was for PR, which I doubt those research papers are. He works for a company that sells chips that do rasterizing, so I think he is going to promote that rather than the competition. If you can't handle that someone might lie or bend the truth to further themselves and their company, get a reality check.

I also said he might not have written the papers, because after all there could be someone else with the same name; the world is a big place, and only his first and last name were mentioned before.

So stop saying I don't have a clue, and stop accusing me of claiming people's research is fraudulent.

If you still want to accuse me of this rubbish, please enlighten me as to what Kirk meant here.

Kirk:
Now who’s talking company politics? I see no need for special purpose ray tracing hardware if general purpose programmable GPUs can run the same algorithms faster.
 
MfA said:
Rasterization is almost certainly more efficient for testing visibility and for getting hard shadows, and it is more efficient for edge and texture anti-aliasing (as long as you don't have subpixel tris). At the moment raytracing simply doesn't deserve to take first place as far as support is concerned; the results it could put on the screen at a decent fps wouldn't compare.

hard shadows are efficient? a 3d scene running at a constant 500+ fps drops below 10 fps, depending on how the geometry is placed, if you use stencil shadows (and that's with the optimisations that are generally applied; it's even an nvidia demo, showing off those features).

a raytraced scene will run at half speed with one light that casts sharp shadows on everything. but it will also never drop below half speed.

cost estimation is much simpler. for gamedev that means you can define your performance target much better, and then deliver constant, smooth gameplay at that target.

do you think you will like the moments when the monsters in doom3 attack you? nope. and not only because they are scary, but because in exactly those situations the fps can drop to single digits. exactly in the moments you need your high fps the most.
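The scaling argument above can be made concrete with a toy cost model. The constants and overdraw figures below are invented for illustration, not measurements: stencil shadow volumes pay a fill cost that swings wildly with viewpoint, while one shadow ray per pixel through a BVH costs roughly the same no matter where the geometry sits.

```python
import math

def stencil_shadow_cost(pixels, silhouette_edges, volume_overdraw):
    # Shadow volumes: silhouette edges are extruded into quads and
    # rasterized; the dominant cost is stencil fill, which depends on
    # how many volume faces cover each pixel (viewpoint dependent).
    return pixels * volume_overdraw + silhouette_edges

def raytraced_shadow_cost(pixels, triangles):
    # One shadow ray per pixel through a BVH: cost grows roughly with
    # log2 of the triangle count, independent of the viewpoint.
    return pixels * math.log2(max(triangles, 2))

pixels = 1024 * 768
print(stencil_shadow_cost(pixels, 5_000, 1))     # friendly viewpoint
print(stencil_shadow_cost(pixels, 50_000, 40))   # camera inside the volumes
print(raytraced_shadow_cost(pixels, 1_000_000))  # same either way
```

The only point of the model is the shape of the curves: the rasterized worst case sits an order of magnitude above its best case, while the raytraced cost stays flat.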
 
davepermen said:
hard shadows are efficient? a 3d scene running at a constant 500+ fps drops below 10 fps, depending on how the geometry is placed, if you use stencil shadows (and that's with the optimisations that are generally applied; it's even an nvidia demo, showing off those features).

Stencils aren't the only solution. Yep, depth map shadows are problematic, but they work just as well. And I'm still not sure we really need unified shadowing; the rest of the lighting can remain unified. We'll see how it works in UE3 in 2006 ;)
 
i know shadow maps. in realtime 3d they are a huge pain to work with just to get them looking at least more or less nice. still, they are by far my preferred solution over stencil shadows.

but he mentioned sharp, precise shadows. you can't get that with shadow maps, except the way high-end offline does it, and that isn't mappable to hw (yet) at all.
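For reference, the basic depth-map comparison being criticized here is tiny; the pain is all in tuning bias and resolution. A minimal sketch (pure Python, made-up depth values) shows the core test and the bias trade-off:

```python
def shadow_test(depth_map, light_uv, light_depth, bias=1e-3):
    # Classic shadow mapping: compare the fragment's depth as seen from
    # the light against the depth stored in the map. The fixed bias
    # fights "shadow acne", but making it too large detaches shadows
    # from their casters -- exactly the tuning trouble described above.
    u, v = light_uv
    return light_depth - bias > depth_map[v][u]  # True -> in shadow

# Toy 4x4 depth map: an occluder at depth 0.3 covers the left two columns.
depth_map = [[0.3, 0.3, 1.0, 1.0] for _ in range(4)]
print(shadow_test(depth_map, (0, 0), 0.8))  # behind the occluder -> True
print(shadow_test(depth_map, (3, 0), 0.8))  # nothing stored there -> False
```

The map's finite resolution is what blurs or aliases the shadow edge, which is why sharp, precise shadows are so hard to get this way.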

and all those mixed solutions to partial issues are very tricky. especially: how do you transition between the solutions without popping?

no. these are just ways to work around hacks with other hacks. the solution is to simply strive for the one real working solution. and those solutions can, algorithmically, be implemented with raytracing. not with scanliners. not in any cheaper, more efficient form.

today's hacks are more efficient than raytracing, but by no means as scalable, as well-behaved, as bug-free, as clean, or as good looking.

just watch the video of this game demo. watch the trees, how they cast their shadows. you won't see that quality on scanliners anytime soon.

one hint: i mentioned beamtracing before. highend offline scanline renderers often rely on coarse beamtracing, done statistically instead of algorithmically, to solve these issues. i'm actively pursuing such a solution for realtime. it works rather well and looks brilliant, all realtime and dynamic. it's pretty cool work. but until it performs well enough to call it realtime for a gamer, that'll take some time.

it's good looking, cool work nonetheless. and it makes full use of the hw.
 
bloodbob said:
I also said he might not have written the papers, because after all there could be someone else with the same name; the world is a big place, and only his first and last name were mentioned before.
Yeah, there are a lot of David Kirks doing graphics research and heading nvidia's technology group :rolleyes:

If you still want to accuse me of this rubbish, please enlighten me as to what Kirk meant here.

Kirk:
Now who’s talking company politics? I see no need for special purpose ray tracing hardware if general purpose programmable GPUs can run the same algorithms faster.
He meant exactly that: there is no need for custom RT hw when you can do the same on a GPU. In a couple of years GPUs will be flexible enough to overcome the present problems, and next-generation GPUs will obviously be much, much faster than current ones.
 
nAo, and why can't dedicated hw do this as well?!

this is so ridiculous. you state that current gpus have problems, but that by adding more performance (more power, more heat, more transistors, more expensive chips) they can beat a much simpler dedicated chip?

yes, they can emulate raytracing. but they are not efficient at it, and they will never be as efficient as dedicated hw.

so anyone who wants raytracing would prefer dedicated hw, just as you want dedicated rasterizing hw for gaming.

gpus will never beat dedicated raytracing hw (and that does not mean dedicated raytracing hw isn't programmable).

if they one day get that far, it will only mean one thing: they ARE dedicated raytracing hw.
 
If the hardware is programmable, it just becomes a question of which rendering method it should be primarily designed for... rasterization has legacy and inertia on its side, and can be as efficient as raytracing or more so for the most pressing problems at the moment (visibility, hard shadows, anti-aliasing, etc.).

Raytracing can do lots of things better, which is completely irrelevant... because it doesn't translate to realtime rendering yet.
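The kernel both camps are arguing about, the per-ray work that dedicated hardware would hard-wire and that Kirk proposes to run as a GPU program, is quite small. A minimal Möller-Trumbore ray/triangle intersection sketch (an illustration of the inner loop, not anyone's actual implementation):

```python
def ray_triangle(orig, dirn, v0, v1, v2, eps=1e-8):
    # Möller-Trumbore ray/triangle intersection: returns the hit
    # distance t, or None on a miss. A raytracer runs this millions of
    # times per frame, which is why memory-coherent traversal matters.
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(dirn, e2)
    det = dot(e1, p)
    if abs(det) < eps:            # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    t_vec = sub(orig, v0)
    u = dot(t_vec, p) * inv       # first barycentric coordinate
    if not 0.0 <= u <= 1.0:
        return None
    q = cross(t_vec, e1)
    v = dot(dirn, q) * inv        # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None

# Ray down the -z axis hits the unit triangle in the z=0 plane at t=1.
print(ray_triangle((0.2, 0.2, 1.0), (0, 0, -1), (0, 0, 0), (1, 0, 0), (0, 1, 0)))
```

The math maps cleanly onto shader arithmetic; the contested part is everything around it, i.e. the acceleration-structure traversal and its incoherent memory access.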
 
davepermen said:
nAo, and why can't dedicated hw do this as well?!
Please show me where I stated that. Only a very stupid man would claim that, because there are already hw raytracers on the market.


this is so ridiculous. you state that current gpus have problems, but that by adding more performance (more power, more heat, more transistors, more expensive chips) they can beat a much simpler dedicated chip?
You persist in ignoring the fact that a high-level next-generation hardware raytracer would be as 'fat' as a GPU, because the real burden (and the silicon real estate) is not in the RT phase; likewise, in current GPUs only a small part of the chip is dedicated to the rasterizer.

yes, they can emulate raytracing. but they are not efficient at it, and they will never be as efficient as dedicated hw.
GPUs are not efficient at RT (I never stated the opposite..) at this time.

gpus will never beat dedicated raytracing hw (and that does not mean dedicated raytracing hw isn't programmable).
I'd rewrite your sentence as "gpus will never beat dedicated and nonexistent raytracing hw" ;)
MfA and Laa-Yosh have already shown you that we can do without RT most of the time. That's why the RT revolution isn't going to happen anytime soon.
We don't need it.. we just don't care about it, that's plain and clear.

ciao,
Marco
 
Take an RT scene dominated by on-demand generated/cached procedural geometry and textures.
Preprocessing tricks like fake global illumination baked into lightmaps are pretty much out of the question for such a scene. So the scanline renderer's sleeve full of fakery tricks would be pretty much useless in that situation, whereas a raytracer wouldn't care.

I believe tons of procedural content generated by middleware is where this (RT 3D) industry should be heading; otherwise content development budgets will lead us to a situation where only one or two major titles get released in a year.
 
Procedurally generated content will never get good enough; that's the result of many years of research... You need human input, models and textures, to build content that you can then use to create variations procedurally. Like the orc builder plugin for Maya that Weta created to build the Massive armies.
But the content itself will be pretty similar to what we have today, not some looks-almost-like-a-tree procedural thingies. Anything else just won't cut it, and would be too expensive to render anyway, especially in realtime.

I'd rather expect content libraries to appear as a solution. Just as you can make a movie with the same kind of cars, buildings, clothes, vegetation etc. as hundreds of other movies, you will be able to do this with games as well. The artistic choice will be what you put in the environments, how you light them, and so on... Thus there'll be companies specialized in building content to license, and we may even get to see virtual 'actors' for hire as well. Of course otherworld settings will require specialized content, but the same goes for movies as well...
 
hm.. in short: some are just braindead and don't even try to think in other ways.

it's the only reason we aren't there yet. if support were bigger, if people knew what they would really get and didn't always search for hacks to get rid of the problems that arise, we would not be stuck where we are.

always the same-looking content, even with shaders, and always the same gameplay.

the more people actually did research and cared about what's going on, the more we would get. it's depressing. where has all your energy gone?

"we just don't care about it". why? you don't bother about it? wouldn't it be at least interesting to find out what goes really on?. instead of staying with the crap you have? it could all be so much simpler, and clean.
 
Or maybe it's you who's drawing the wrong conclusions? Maybe the lack of innovation in today's content is not the artists' fault, but the game publishers'? Maybe we could stop jumping on the next bandwagon and try to make the most of what we already have?

It's just that I'm tired of seeing people get all too excited about raytracing. It won't automatically make the graphics better, it won't make the games better, and it won't solve all our problems without introducing new ones.
What we need is better art, and a general push towards innovation in the game industry, that's all. Raytracing won't bring either... that's why I'm not excited and would rather see research spent on something more practical, like better tools for content creation. Could someone please sit down and, instead of coding yet another raytracer, figure out how to make UV mapping faster and easier?
 
In all likelihood, realtime will jump over ray-tracing, as it has done with lots of other things that made sense in the CG world.

The non-linear memory access of ray-tracing is just not hardware friendly enough. Why go through all that trouble to get it working when we already know that ray-tracing has LOADS of problems with visual quality?

Better to jump to techniques like all-frequency radiance transfer, which offer much higher quality images. Lots of research into alternative real-time systems is being done; it's just that few believe ray-tracing will be a key feature.

One other thing that was brought up was that the BRDF is a complete light model; that's totally wrong, even the BSSRDF is a crude approximation.
 
nAo said:
bloodbob said:
I also said he might not have written the papers, because after all there could be someone else with the same name; the world is a big place, and only his first and last name were mentioned before.
Yeah, there are a lot of David Kirks doing graphics research and heading nvidia's technology group :rolleyes:

If you still want to accuse me of this rubbish, please enlighten me as to what Kirk meant here.

Kirk:
Now who’s talking company politics? I see no need for special purpose ray tracing hardware if general purpose programmable GPUs can run the same algorithms faster.
He meant exactly that: there is no need for custom RT hw when you can do the same on a GPU. In a couple of years GPUs will be flexible enough to overcome the present problems, and next-generation GPUs will obviously be much, much faster than current ones.

So you're seriously telling me that you think dedicated hardware can't do ray tracing faster than a generic chip???

If the dedicated hardware is designed with the same effort as the generic hardware, it's ALWAYS FASTER. If not, why are we using GPUs at all? Why aren't we using CPUs???
 