D. Kirk and Prof. Slusallek discuss real-time raytracing

Nexiss said:
In general, with some sort of scene hierarchy, raytracing scales logarithmically with scene complexity.
Hrm, I thought it was more like the square, so I guess that's good. But it's still going to be challenging to manage for a game environment.
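To unpack Nexiss's log claim for myself, here's a toy sketch (my own illustration, not any particular implementation): each ray walks down a binary hierarchy and skips every subtree whose bounding box it misses, so doubling the triangle count adds roughly one extra tree level instead of doubling the work.

Code:
// Toy sketch of why hierarchy traversal is ~O(log n): a ray descends a
// binary tree and prunes any subtree whose bounding box it misses.
// (Illustrative only; my own toy structure, nobody's shipping design.)
#include <algorithm>
#include <cstdio>
#include <memory>
#include <vector>

struct AABB { float min[3], max[3]; };

// Standard slab test: does the ray org + t*dir hit the box?
bool hitsBox(const AABB& b, const float org[3], const float dir[3]) {
    float tmin = 0.0f, tmax = 1e30f;
    for (int a = 0; a < 3; ++a) {
        float inv = 1.0f / dir[a];
        float t0 = (b.min[a] - org[a]) * inv;
        float t1 = (b.max[a] - org[a]) * inv;
        if (inv < 0.0f) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmax < tmin) return false;
    }
    return true;
}

struct Node {
    AABB box;
    std::unique_ptr<Node> left, right;  // null in leaves
    std::vector<int> tris;              // triangle indices (leaves only)
};

void traverse(const Node* n, const float org[3], const float dir[3],
              int& visited) {
    if (!n || !hitsBox(n->box, org, dir)) return;  // prune missed subtrees
    ++visited;
    if (!n->left && !n->right) return;  // leaf: intersect its few triangles
    traverse(n->left.get(), org, dir, visited);
    traverse(n->right.get(), org, dir, visited);
}

int main() {
    Node root;
    root.box = {{-1, -1, -1}, {1, 1, 1}};
    const float org[3] = {0, 0, -5}, dir[3] = {0, 0, 1};
    int visited = 0;
    traverse(&root, org, dir, visited);
    std::printf("nodes visited: %d\n", visited);
}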
 
imho (like anyone would care? ;) ), Ray Tracing could be the next BIG step, just like moving from software / 2D to hardware 3D was around the mid-90s. Will Ray Tracing be _The Next Big Thing_? It's really hard to say; it depends on so many things. Basically I see two different roads to the point where Ray Tracing would eventually replace the rasterizers (if not completely, then mostly).

1.) What Ray Tracing is missing right now is the successful pioneers. Even a small success with some HW Ray Tracing implementation would definitely wake up the interest of API providers. And when that happens, the snowball has started rolling and Ray Tracing is on its way to replacing rasterizer implementations.

2.) If some major IHVs started implementing things like automatic shadow casters in their hardware via a Ray Casting / Ray Tracing route, it could lead to a point where features would be easier to implement via the existing tracer/caster ASIC blocks than by building their own rasterizer version. This would be a much more flexible move from the consumer point of view, but in the end the tracer part probably wouldn't play as big a role as in the first route, and I doubt the efficiency would be at the same level, because of the need to keep it compatible with today's rasterizer-based APIs.




Yeah, I really would like to see option 1 become reality. Or do OpenRT 1.0 and DirectRT3D sound bad? ;) Of course, it is possible that neither option becomes reality and HW 3D graphics continues on its rasterizer line...
 
Waltz King said:
rwolf said:

Yes, this real-time raytracing engine, developed by a student, is what made the "GameStar" magazine investigate further and arrange this discussion in the first place. It's really impressive; I have seen some nice in-game videos. But: it's running on a supercomputer with 48 Athlon MP CPUs and 12 gigabytes of RAM...

Only twice as powerful as the chip the guy was talking about in the interview. I believe that was an FPGA implementation; if it was, you should get much better gains on an ASIC. Though it might have been an ASIC, I'm too lazy to go check.

Edit : bzz wrong bloodbob

It's eight times as fast. But it is an FPGA implementation, so an ASIC implementation would hopefully be a fair bit faster.
 
As long as the majority of the work for realtime rendering goes into shading and finding intersections for primary rays and shadow rays, it makes little sense to switch to raytracing.
 
Nexiss said:
In general, with some sort of scene hierarchy, raytracing scales logarithmically with scene complexity.

For the simple intersections of rays originating from a point, the number of nodes which have to be tested/rasterized is equal between raytracing and Greene's occlusion culling scheme ... the big advantage in complexity raytracing has is that it can subsample the scene, but that property isn't all that desirable for present geometry. Maybe once we get to 10s of visible tris per pixel that will become an advantage, but for now it is more of a burden ... since the same aspect which allows it to subsample also makes anti-aliasing (texture and edge) so much less efficient at the tri/pixel ratio we have now.

At some point stochastic supersampling will become the only realistic way of doing anti-aliasing, and make rasterization fully obsolete ... but not now, at least not for games.
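For what it's worth, the core of it is tiny. A toy sketch (my own, with a dummy checkerboard standing in for the scene): jitter several rays inside each pixel at random offsets and average, which trades regular stair-step aliasing for noise.

Code:
// Toy sketch of stochastic supersampling. (My own illustration.)
#include <cstdio>
#include <cstdlib>

struct Color { float r, g, b; };

// Stand-in "scene": a fine checkerboard, just so the sketch runs.
Color tracePrimaryRay(float sx, float sy) {
    bool white = (int(sx * 8) + int(sy * 8)) & 1;
    return white ? Color{1, 1, 1} : Color{0, 0, 0};
}

// Jitter N rays inside the pixel footprint and average: aliasing
// turns into noise, and the noise averages out as N grows.
Color shadePixel(int px, int py, int samples) {
    Color sum = {0, 0, 0};
    for (int s = 0; s < samples; ++s) {
        float jx = std::rand() / (float)RAND_MAX;  // offset in [0,1)
        float jy = std::rand() / (float)RAND_MAX;
        Color c = tracePrimaryRay(px + jx, py + jy);
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    sum.r /= samples; sum.g /= samples; sum.b /= samples;
    return sum;
}

int main() {
    Color c = shadePixel(3, 5, 16);
    std::printf("pixel (3,5), 16 samples: %.2f %.2f %.2f\n", c.r, c.g, c.b);
}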
 
HW raytracing

All fast raytracing implementations I've seen so far require some sort of spatial subdivision (BSP trees, octrees, etc.). This means that the scene needs some pre-processing and that you can't easily change any geometry.
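To make the pre-processing point concrete, here is a toy median-split build (my own sketch, not SaarCOR's actual builder): every level partitions the entire primitive list, which is fine as an offline pass but painful to redo each frame when geometry moves.

Code:
// Toy axis-aligned BSP (kd-tree) build by median split. The point is
// that building sorts/partitions every primitive: an O(n log n)
// offline pass that is hard to redo per frame for dynamic geometry.
// (My own illustrative sketch.)
#include <algorithm>
#include <memory>
#include <vector>

struct Prim { float centroid[3]; /* plus the actual triangle data */ };

struct KdNode {
    int axis = -1;                       // -1 marks a leaf
    float split = 0.0f;
    std::unique_ptr<KdNode> left, right;
    std::vector<Prim> prims;             // filled for leaves only
};

std::unique_ptr<KdNode> build(std::vector<Prim> prims, int depth) {
    auto node = std::make_unique<KdNode>();
    if (prims.size() <= 4 || depth > 20) {   // small enough: make a leaf
        node->prims = std::move(prims);
        return node;
    }
    int axis = depth % 3;                    // cycle x, y, z split planes
    size_t mid = prims.size() / 2;
    // Partition the full list around the median centroid on this axis.
    std::nth_element(prims.begin(), prims.begin() + mid, prims.end(),
                     [axis](const Prim& a, const Prim& b) {
                         return a.centroid[axis] < b.centroid[axis];
                     });
    node->axis = axis;
    node->split = prims[mid].centroid[axis];
    node->left = build(std::vector<Prim>(prims.begin(), prims.begin() + mid),
                       depth + 1);
    node->right = build(std::vector<Prim>(prims.begin() + mid, prims.end()),
                        depth + 1);
    return node;
}

int main() {
    std::vector<Prim> scene(100000);     // dummy primitives
    auto root = build(std::move(scene), 0);
    return root ? 0 : 1;
}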

Take for instance one of the most famous HW raytracing architectures (I think Philipp Slusallek has some relation to this project):
http://www.saarcor.de/

People go to the web site, see the impressive screenshots and the talk about high fps with fewer HW resources, and start dreaming of their next-gen raytracing video card. But there are way too many limitations:

1. You need an axis-aligned BSP of the whole scene. When something changes you have to rebuild the affected BSP nodes. This is very slow (especially when the changed geometry spans a root node) and cannot be done efficiently in hardware.

2. The architecture requires good ray coherence - i.e. lots of rays traversing the same BSP nodes and intersecting the same triangles (hence the advertised low bandwidth requirements). Not good if you have lots of pixel-sized triangles, which will happen a lot when games start to use HOS plus displacement mapping, or just denser geometry. Also not good for things like reflections/refractions from bumpy surfaces.

3. I don't see how this architecture can be less complex than the classic hardware implementation - they replace the very simple triangle rasterization and depth compare units with a bunch of parallel raytracing units that perform BSP traversal and triangle intersections.

This approach has a few advantages - the capability to calculate correct reflections and refractions, and the zero overdraw when calculating the shading. But because of problems #1 and #2 it is unsuitable for rendering anything other than static and relatively (by today's standards) simple scenes.

All other disadvantages of standard raytracing still apply - lots of context switching, the need to keep the whole scene in memory at once, etc.
 
MfA said:
Nexiss said:
In general, with some sort of scene hierarchy, raytracing scales logarithmically with scene complexity.

For the simple intersections of rays originating from a point, the number of nodes which have to be tested/rasterized is equal between raytracing and Greene's occlusion culling scheme ... the big advantage in complexity raytracing has is that it can subsample the scene, but that property isn't all that desirable for present geometry. Maybe once we get to 10s of visible tris per pixel that will become an advantage, but for now it is more of a burden ... since the same aspect which allows it to subsample also makes anti-aliasing (texture and edge) so much less efficient at the tri/pixel ratio we have now.

At some point stochastic supersampling will become the only realistic way of doing anti-aliasing, and make rasterization fully obsolete ... but not now, at least not for games.

Well, I don't think I have seen anything with regard to the complexity of HZB, so...
Anyway, the first hit is only one part of realistic rendering (which is what we want, right?). Consider a simple scene with two point lights; if every primary intersection spawns two shadow rays, only a third of the rays being traced are first-hit rays. If we had, say, any reflections, that percentage drops even further. If we were at the point of doing GI, the first-hit share would become much smaller still.
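The arithmetic is easy to play with. A quick sketch (my own, assuming one shadow ray per light per hit and one reflection ray per mirror bounce):

Code:
// Back-of-the-envelope ray budget per pixel: how small a fraction of
// the total work the first hit becomes as lights and bounces are added.
#include <cstdio>

int main() {
    for (int lights = 1; lights <= 4; ++lights) {
        for (int bounces = 0; bounces <= 2; ++bounces) {
            int surfaces = 1 + bounces;        // primary hit + each bounce hit
            int total = surfaces               // eye + reflection rays
                      + surfaces * lights;     // shadow rays
            std::printf("lights=%d bounces=%d -> %2d rays/pixel, "
                        "first hit is %.0f%% of the work\n",
                        lights, bounces, total, 100.0 / total);
        }
    }
}

For two lights and no reflections this prints 3 rays/pixel with the first hit at 33%, matching the example above.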


As far as spatial structures go, a lot of current software already makes use of them.
 
Nexiss said:
Anyway, first hit is only one part of a realistic rendering (which is what we want, right?).

We want the best image quality for the resources at our disposal (so I would say wrong).
 
It would be interesting if rendering moved towards real-time raytracing, but that would be difficult logistically right now. Would anyone want to start from square one, go through years of development with it, and abandon current fidelity? Probably not. That would leave it up to these guys to develop hardware that matches or exceeds current image quality and framerate, which is a pretty tall order.
 
Sure, but we would seem to be rapidly increasing the resources at our disposal, with the goal of better quality.
 
By the way, shadow rays towards a spotlight still converge on a single point. Rasterizers aren't exactly designed for it, but performing intersection tests for them with rasterization is possible (you just get a rather funky sampling pattern, determined by the depth buffer from the eye view).
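In sketch form (my own toy version, with stand-in helpers for the light-space transform and the light's depth buffer), the per-sample test looks like this:

Code:
// The "shadow ray" test done by rasterization: render depth from the
// light's point of view, then for each visible eye-view sample, project
// it into light space and compare depths. (My own miniature sketch.)
#include <cstdio>

struct Vec3 { float x, y, z; };

// Trivial stand-ins so the sketch compiles; a real renderer would use
// the light's projection matrix and its rasterized depth buffer here.
Vec3 toLightSpace(const Vec3& p) { return {p.x, p.y, p.z}; }  // (u, v, depth)
float lightDepthAt(float /*u*/, float /*v*/) { return 1.0f; }

// If the light's depth buffer saw something nearer at (u, v), the point
// is occluded: the same answer a shadow ray would give, just sampled on
// whatever grid of points the eye-view depth buffer hands us.
bool inShadow(const Vec3& worldPos) {
    Vec3 p = toLightSpace(worldPos);
    const float bias = 1e-3f;            // avoid self-shadow "acne"
    return lightDepthAt(p.x, p.y) < p.z - bias;
}

int main() {
    Vec3 sample = {0.5f, 0.5f, 2.0f};    // a visible point from the eye pass
    std::printf("in shadow: %s\n", inShadow(sample) ? "yes" : "no");
}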
 
How widely used is ray-tracing (non-realtime of course) in the high-end CGI business, for movies, etc.? You know, the Pixar / RenderMan / ILM / blah blah blah industry :?:
 
Chalnoth said:
PC-Engine said:
I think the IHV should move towards RT cores ASAP. The rasterizer route is a slippery slope not to mention a hack. ;)
But it has one huge benefit: performance. To put it simply, it's easy to keep performance under control in a rasterizing environment (i.e. performance scales linearly in most variables). It's not so easy with raytracing (or other, more exotic offline rendering techniques), as performance doesn't necessarily scale linearly, and will be highly scene-dependent.

actually, doom3's stencil shadowing is HIGHLY uncontrollable in terms of performance and scalability, and highly scene- and camera-dependent.

the worst case would be perfectly estimable for any raytracing hw.

rasterizing is NOT good for estimating performance. drawing one triangle can lead to completely different fps, just depending on where the camera is.



i think kirk is rather closed-minded. he hasn't shown ANY proof that his statement is true. raw power does not mean it helps. on the other hand, slusallek has proof. inTrace is SaarCOR is OpenRT is all the same (all those funky domains :D). tons of papers, real-world demonstrations, etc., showing "it works".

his statements can be taken as fact. kirk's are only based on marketing (as long as we can sell rasterizers, don't tell ANYONE how bad they are. once we can't, FLAME RASTERIZERS TO DEATH.)
 
davepermen said:
actually, doom3's stencil shadowing is HIGHLY uncontrollable in terms of performance and scalability, and highly scene- and camera-dependent.
Which is why it's nice that there are other shadowing algorithms out there.

the worst case would be perfectly estimable for any raytracing hw.
Definitely not better than rasterizing.

on the other hand, slusallek has proof. inTrace is SaarCOR is OpenRT is all the same (all those funky domains :D). tons of papers, real-world demonstrations, etc., showing "it works".
Read GeLeTo's post a few up.
 
Chalnoth said:
Definitely not better than rasterizing.

actually, this is the power of raytracing. every part of the algorithm has a determined maximum evaluation time, which means the worst case is 100% defined. simply said: if you add reflections on EVERYTHING with 1 recursion, it takes 2x as long to render. with 2 recursions, 3x as long, etc.
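as a quick worked example (my own toy numbers, assuming one reflection ray per hit):

Code:
// Worked example of the bounded scaling: with one reflection ray per
// hit, recursion depth d costs (d + 1) rays per pixel, so the worst
// case frame time scales linearly and predictably with depth.
#include <cstdio>

int main() {
    const double baseMs = 50.0;  // assumed worst-case frame time at depth 0
    for (int depth = 0; depth <= 3; ++depth) {
        int rays = depth + 1;    // 1 recursion = 2x, 2 recursions = 3x, ...
        double worstMs = baseMs * rays;
        std::printf("depth %d: %dx rays, worst case %.0f ms = %.1f fps floor\n",
                    depth, rays, worstMs, 1000.0 / worstMs);
    }
}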

if you add reflections onto everything with rasterizers, you're fucked. there, every mirror-like thing adds another (6) pass(es).

similar for shadowing. it takes at most 1 additional pass per light. stencil shadowing can take arbitrarily long. this gives high fps in most cases, but sometimes it can drop below 2-digit numbers (and believe me, you don't want such a case while you have 3 enemies attacking you. then again, in general, _THIS_ is the worst-case situation :D).

raytracing is much more balanced, and independent of the scene (its complexity defines the worst case). this means: if the worst case is 20fps, you will NOT drop below it. if the next hw is 2x as fast, you will NEVER drop below 40fps, etc.

the maximum, on the other hand, is quite a bit lower too... but the variance, the fps range, is much tighter. this is great; it leads to a fluid overall impression.


kirk is just ridiculous somehow, the way he hypes his gpu :D i do understand him, though :D
 