D. Kirk and Prof. Slusallek discuss real-time raytracing

I'm sure this makes sense if you're rendering a video or looking at a modeled car, but it doesn't make sense in a game. Games include huge worlds, and it'd be pretty ridiculous to raytrace the entire level each frame (the data transfer alone would kill performance).

So we have things like portal rendering and whatnot - ways to eliminate objects. From the start, then, you're not always going to know exactly how many objects will contribute to rendering, how many reflective objects will be around, etc.

And since the cost will scale worse than linearly, there will be problems to overcome in limiting the complexity of each scene.
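By "eliminate objects" I mean cheap per-object rejection before anything is drawn. A minimal sketch of the idea, assuming bounding spheres and a six-plane frustum (names and layout are mine, not from any particular engine):

```cpp
#include <cstdio>
#include <vector>

// Plane in the form dot(normal, p) + d = 0, with the normal pointing
// toward the inside of the frustum.
struct Plane { float nx, ny, nz, d; };

struct Sphere { float x, y, z, r; };

// Returns true if the bounding sphere is at least partly inside all six
// planes. Objects that fail are skipped entirely -- this is the
// "eliminate objects" step a whole-level raytrace would not get for free.
bool sphereInFrustum(const Sphere& s, const Plane frustum[6]) {
    for (int i = 0; i < 6; ++i) {
        float dist = frustum[i].nx * s.x + frustum[i].ny * s.y
                   + frustum[i].nz * s.z + frustum[i].d;
        if (dist < -s.r) return false;  // completely behind one plane
    }
    return true;  // inside or intersecting the frustum
}

int main() {
    // Toy "frustum": an axis-aligned box from -10..10 on each axis.
    Plane frustum[6] = {
        { 1, 0, 0, 10}, {-1, 0, 0, 10},
        { 0, 1, 0, 10}, { 0,-1, 0, 10},
        { 0, 0, 1, 10}, { 0, 0,-1, 10},
    };
    std::vector<Sphere> objects = {{0, 0, 0, 1}, {50, 0, 0, 1}, {10.5f, 0, 0, 1}};
    for (const Sphere& s : objects)
        std::printf("object at (%g,%g,%g): %s\n", s.x, s.y, s.z,
                    sphereInFrustum(s, frustum) ? "rendered" : "culled");
}
```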
 
Re: HW raytracing

GeLeTo said:
All fast raytracing implementations I've seen so far require some sort of spatial subdivision (BSP trees, octrees, etc).
As it happens, quite a lot of graphics-intensive apps, games for instance, already employ such subdivision schemes for physics calculations and scene management.
Those structures are not raytracing-oriented - they usually encode information at the mesh/submesh level, not per primitive - but raytracing could certainly benefit from them.
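To make that concrete, here's a hedged sketch of the reuse described above: a physics/scene-management style AABB hierarchy answering a ray query. The structure and names are illustrative, not from any shipping engine, and note it only narrows the search down to candidate meshes - a real raytracer would still want a finer, per-triangle structure underneath:

```cpp
#include <algorithm>
#include <memory>
#include <vector>

// Axis-aligned box, the kind a physics/scene-management hierarchy
// already stores for its own purposes.
struct AABB { float min[3], max[3]; };

// Standard slab test: does the ray orig + t*dir, t in [0, tMax], hit the
// box? invDir is 1/dir per component (beware exact zeros in a real tracer).
bool rayHitsBox(const float orig[3], const float invDir[3], float tMax,
                const AABB& b) {
    float t0 = 0.0f, t1 = tMax;
    for (int a = 0; a < 3; ++a) {
        float tNear = (b.min[a] - orig[a]) * invDir[a];
        float tFar  = (b.max[a] - orig[a]) * invDir[a];
        if (tNear > tFar) std::swap(tNear, tFar);
        t0 = std::max(t0, tNear);
        t1 = std::min(t1, tFar);
        if (t0 > t1) return false;
    }
    return true;
}

// A node as a scene graph might keep it: a box, children, and a mesh id
// at the leaves -- coarse mesh-level info, as noted above.
struct Node {
    AABB box;
    std::vector<std::unique_ptr<Node>> children;
    int meshId = -1;  // >= 0 only at leaves
};

// Walk the existing hierarchy, collecting meshes whose boxes the ray
// touches. A raytracer would then intersect the ray against only those
// meshes' triangles instead of the whole scene.
void gatherCandidates(const Node& n, const float orig[3],
                      const float invDir[3], float tMax,
                      std::vector<int>& out) {
    if (!rayHitsBox(orig, invDir, tMax, n.box)) return;
    if (n.meshId >= 0) out.push_back(n.meshId);
    for (const auto& c : n.children)
        gatherCandidates(*c, orig, invDir, tMax, out);
}
```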
 
nutball said:
How widely used is ray-tracing (non-realtime, of course) in the high-end CGI business, for movies, etc.? You know, the Pixar / Renderman / ILM / blah blah blah industry?

This is the $10,000 question :)

Raytracing is used very, very sparingly in CGI, for effects that really require it, and even in those cases it only contributes 1-3 passes that are combined with dozens of other passes to get the final composite.

For example, in the case of Gollum, the skin lighting utilized subsurface scattering that required raytracing, and there was an ambient occlusion pass too. As an interesting side note, the tracing itself was performed using an array of depth maps (from shadow-casting lights) scattered on the surface of a large sphere, all done in the Renderman shader - so no built-in raytracing features were used. The rest of the passes, including the eye shaders with reflections and so on, used no raytracing at all, as far as I know (I can ask some Weta guys though, if you want to know for sure about the eyes :).
Other cases for raytracing were the big bottle stuffed with nuts in A Bug's Life, and some scenes in the Matrix sequels (the mega-fight at the end, with all the rain around the CG doubles).
But most of the shadows are depth-mapped shadows, processed in a 2D compositing app to get the area-light look; most of the reflections are simply rendered into textures; and most of the lighting is spot lights, with global illumination only contributing to an occlusion pass. It's faster, artists have more control - in other words, it's better :)
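For readers unfamiliar with what an occlusion pass actually computes, here is the textbook hemisphere-sampling version in miniature - emphatically not Weta's depth-map variant, just the general idea, with the visibility query stubbed out:

```cpp
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

// Stand-in for the renderer's visibility query: returns true if something
// blocks the ray from `origin` in direction `dir` within `maxDist`.
// Plug in a real tracer; this stub reports open sky everywhere.
static bool occluded(const Vec3&, const Vec3&, float) { return false; }

// Ambient occlusion at a surface point: shoot rays over the hemisphere
// around the normal and count how many escape. The result (0 = fully
// blocked, 1 = fully open) is stored in an occlusion pass and multiplied
// into the ambient/fill lighting during compositing.
float ambientOcclusion(const Vec3& p, const Vec3& n, int samples,
                       float maxDist) {
    std::mt19937 rng(12345);
    std::uniform_real_distribution<float> uni(-1.0f, 1.0f);
    int unblocked = 0;
    for (int i = 0; i < samples; ++i) {
        // Rejection-sample a direction inside the unit sphere, then flip
        // it into the hemisphere around n.
        Vec3 d;
        do {
            d = {uni(rng), uni(rng), uni(rng)};
        } while (d.x*d.x + d.y*d.y + d.z*d.z > 1.0f);
        if (d.x*n.x + d.y*n.y + d.z*n.z < 0.0f) {
            d.x = -d.x; d.y = -d.y; d.z = -d.z;
        }
        if (!occluded(p, d, maxDist)) ++unblocked;
    }
    return float(unblocked) / float(samples);
}
```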


The general problem with this discussion is that it centers only on the technical aspects - how easy it is to code, accelerate or pre-calculate the effects, what to do with the hardware, etc. What it misses is the artistic part, which in the end is more important, IMHO. After all, most people don't care about how you generate the images they see in games, movies and commercials - they only care about the looks, and that is defined by the artists.

Now, artists do care about the physics of the world around us, but they generally prefer not to let it interfere with their work :). That is why actors wear makeup in movies, that's why there is a need for lighting - to add atmosphere and guide the eye of the viewer around the scene - and that's why you usually can't turn the camera around in movie scenes (the lighting would fall apart). You could say that they paint with light just as they paint with the colors of the clothing, the set, the scenery and so on.
Adding raytracing will only make the life of the CG artist a lot more complicated - instead of painting in the effect that they want to achieve, they first have to fight the realistic results already present in the scene, just like in real life. I can tell you that it would only anger them, especially if they have to wait longer for their renders as well.

Just look at the Quake 3 stuff - sure, there are shadows, reflections and whatever, but does it make the game look any better, really?
So, in my opinion, raytracing in hardware is not the way to go, and I wouldn't expect it to replace today's approach at all. I realize the differences between a movie scene with a defined camera path and a game world with complete freedom, but I still believe the answer is to develop better tools for the artists (for example, to allow them to really PAINT lighting), instead of trying to calculate everything as physically correctly as possible...
 
What's stopping a CGI artist from throwing an artificial light or two into a physically correct RTed scene the same way a gaffer does it on a film set?
 
The main problem with doing raytracing on current hardware is that current hardware still reads vertices from memory and writes pixels to memory.
This has also been pointed out by Prof. Slusallek:
The Boeing model does not use instancing at all! It contains roughly 350 million separately stored triangles, which we load on demand as required. And with some of the outside views we are seeing working sets of several GB. The key to rendering such a model is proper memory management, which is already non-trivial on a CPU. Having to deal with the added complexity of a separate GPU, graphics memory separated from main memory, and only limited means of communication with the CPU and finally disks, makes it so much harder to use this approach.
What if future GPUs have much more generalized memory access? ;) What if they aren't limited to vertices/triangles and pixels/textures the way current hardware is?
I am not quite sure why we would need specialized "DirectX 7 style" ray tracing hardware.
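As a toy illustration of the "load on demand" idea from the quote - not Prof. Slusallek's actual system - here is a least-recently-used cache of geometry chunks, so only the working set, rather than all 350 million triangles, lives in memory at once. Chunk granularity and names are invented:

```cpp
#include <list>
#include <unordered_map>
#include <utility>
#include <vector>

struct Chunk { std::vector<float> triangles; };  // some slab of the model

// Placeholder for "read this chunk from disk"; in a real out-of-core
// renderer this is where most of the engineering effort goes.
Chunk loadChunkFromDisk(int id) {
    return Chunk{std::vector<float>(9, float(id))};
}

// Least-recently-used cache: rays ask for chunks, cold chunks are loaded,
// and the oldest chunk is evicted once the budget (in chunks) is exceeded.
class ChunkCache {
    std::size_t budget_;
    std::list<int> lru_;  // front = most recently used chunk id
    std::unordered_map<int, std::pair<Chunk, std::list<int>::iterator>> map_;
public:
    explicit ChunkCache(std::size_t budget) : budget_(budget) {}

    const Chunk& get(int id) {
        auto it = map_.find(id);
        if (it != map_.end()) {                // hit: bump to front
            lru_.splice(lru_.begin(), lru_, it->second.second);
            return it->second.first;
        }
        if (map_.size() >= budget_) {          // miss while full: evict oldest
            map_.erase(lru_.back());
            lru_.pop_back();
        }
        lru_.push_front(id);
        auto res = map_.emplace(
            id, std::make_pair(loadChunkFromDisk(id), lru_.begin()));
        return res.first->second.first;
    }
};
```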
 
davepermen said:
actually, this is the power of raytracing. every part of the algo has a determined maximum evaluation time, means the worst case is 100% defined...

I agree - for reflections and shadows the raytracing approach scales quite well. But for scene complexity it doesn't scale at all. The HW raytracers rely heavily on parallelism to achieve speed.

Make your triangles too small and you lose parallelism, because you don't have many rays that intersect the same triangle. What would be a simple Z-read/test in a traditional rasterizer will require testing a single ray against many triangles.

Make the visible scene big and complex (outdoors) and you lose parallelism because you don't have many rays that span the same BSP nodes. Further from the viewer the distance between coherent rays becomes too big, and in the end you may have to trace each ray through different BSP nodes individually. Also, further from the viewer your triangles become smaller (see above).

And the biggest showstopper is: how can you easily put an animated character in a raytracer? Rebuild its BSP each frame? I don't think that's feasible. Moving stuff around won't be very efficient either.
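To make the coherence point concrete, here is a minimal sketch of why the hardware tracers want packets of rays: the triangle's setup is computed once and shared across the whole packet (Moller-Trumbore intersection; illustrative, not any particular tracer's code). When nearby rays stop sharing triangles and BSP nodes, exactly this amortization disappears:

```cpp
#include <array>
#include <cmath>
#include <cstddef>

struct V3 { float x, y, z; };
static V3 sub(V3 a, V3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static V3 cross(V3 a, V3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static float dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Ray { V3 orig, dir; };

// Moller-Trumbore intersection for a *packet* of rays against one
// triangle. The edge vectors are computed once, outside the per-ray loop;
// that shared work (plus shared memory traffic for the vertex data) is
// where coherent packets win. Incoherent rays pay it per ray, per triangle.
template <std::size_t N>
std::array<float, N> intersectPacket(const std::array<Ray, N>& rays,
                                     V3 v0, V3 v1, V3 v2) {
    const V3 e1 = sub(v1, v0), e2 = sub(v2, v0);  // amortized setup
    std::array<float, N> tHit;
    for (std::size_t i = 0; i < N; ++i) {
        tHit[i] = -1.0f;                           // -1 means "miss"
        V3 p = cross(rays[i].dir, e2);
        float det = dot(e1, p);
        if (std::fabs(det) < 1e-8f) continue;      // ray parallel to triangle
        float inv = 1.0f / det;
        V3 s = sub(rays[i].orig, v0);
        float u = dot(s, p) * inv;
        if (u < 0.0f || u > 1.0f) continue;
        V3 q = cross(s, e1);
        float v = dot(rays[i].dir, q) * inv;
        if (v < 0.0f || u + v > 1.0f) continue;
        float t = dot(e2, q) * inv;
        if (t > 0.0f) tHit[i] = t;                 // barycentric hit at t
    }
    return tHit;
}
```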
 
Laa-Yosh said:
Adding raytracing will only make the life of the CG artist a lot more complicated - instead of painting in the effect that they want to achieve, they first have to fight the realistic results already present in the scene, just like in real life. I can tell you that it would only anger them, especially if they have to wait longer for their renders as well.
The thing with games is that at some point it becomes infeasible for artists to attend to every graphics detail in the game.
Game worlds are becoming increasingly large, and developers are turning to procedural methods for content generation on many levels (materials, geometry, object populations, etc.).

It's a different story with a movie, of course, where you can spend tons of time on each single rendered frame.
 
Raytracing is already used in games.

Some precalculated lightmaps compute light occlusion with raytracing. In fact, for precalculated lightmaps or env maps - even precomputed radiance transfer - you can use whatever method suits you offline. Especially tracing rays.

In a game like Quake 3 (the classic one), the precomputed light maps are a good part of the visual quality, and I think their tools use simple raytracing to calculate the light on each surface.
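I don't know what id's tools do internally, but the core of such a lightmap baker is small. A hedged sketch of baking a single texel with shadow rays - the visibility query is stubbed out, and the names are mine:

```cpp
#include <cmath>

struct Vec { float x, y, z; };
static Vec sub(Vec a, Vec b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static float dot(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float len(Vec a) { return std::sqrt(dot(a, a)); }

struct Light { Vec pos; float intensity; };

// Stand-in for the tool's ray query against the level geometry:
// true if anything blocks the segment from p to q. Plug in a real tracer.
static bool segmentBlocked(Vec, Vec) { return false; }

// Offline lightmap baking in miniature: for one texel with world position
// `p` and surface normal `n`, sum the diffuse contribution of each light
// that is visible from the texel. Run once per texel and store the result;
// at runtime the lightmap is just a texture.
float bakeTexel(Vec p, Vec n, const Light* lights, int numLights) {
    float total = 0.0f;
    for (int i = 0; i < numLights; ++i) {
        Vec toLight = sub(lights[i].pos, p);
        float dist = len(toLight);
        if (dist <= 0.0f) continue;
        Vec dir = {toLight.x / dist, toLight.y / dist, toLight.z / dist};
        float cosTheta = dot(n, dir);
        if (cosTheta <= 0.0f) continue;                  // light behind surface
        if (segmentBlocked(p, lights[i].pos)) continue;  // the raytraced part
        total += lights[i].intensity * cosTheta / (dist * dist);
    }
    return total;
}
```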

I remember the first "advertised" raytraced game used sprites that were prerendered with raytracing. It was running on the Amiga, I think.
 
no_way said:
Laa-Yosh said:
Adding raytracing will only make the life of the CG artist a lot more complicated - instead of painting in the effect that they want to achieve, they first have to fight the realistic results already present in the scene, just like in real life. I can tell you that it would only anger them, especially if they have to wait longer for their renders as well.
The thing with games is that at some point it becomes infeasible for artists to attend to every graphics detail in the game.
Game worlds are becoming increasingly large, and developers are turning to procedural methods for content generation on many levels (materials, geometry, object populations, etc.).

It's a different story with a movie, of course, where you can spend tons of time on each single rendered frame.

...and this is also the greatest problem that I see in the future: how do you make high-quality content for games, so that playability doesn't take too big a hit and the player still feels like they're finding something new every time they start the game?

You can certainly generate even huge worlds dynamically, but that only solves the problem partially. For example, to create believable forests you need A) tens to hundreds of different-looking trees, or B) a pretty awesome mutation engine. A forest also needs a lot of vegetation: plants, grass, different sorts of swamps, ponds, lakes, streams, etc. All of this eventually needs lots of one thing: textures. You will have to have a huge store of textures to make all the stuff look natural - a good-looking landscape can quickly eat 4-6 texture layers (at least when combined with semi-transparent water effects) plus a few good pixel shader programs.

And all of this needs a lot of work to get working together, which again means more people working on the graphics engine and the art alone (not to mention the people needed for the storyline, gameplay and AI, which all have to match the upgraded graphics), which again means a bigger budget for the game. A bigger budget means more copies have to be sold before the publisher can justify its investment in the project.
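On the "mutation engine" point: the cheap version is just jittering a handful of parameters around one hand-authored archetype, so a forest costs an archetype plus a seed per tree instead of a unique mesh and texture set per tree. A toy sketch, with the parameters invented purely for illustration:

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Hand-authored archetype: every parameter an artist would tune once.
struct TreeParams {
    float trunkHeight, trunkRadius, branchAngle;
    int   branchCount;
};

// "Mutation engine" in miniature: each instance gets the archetype's
// parameters jittered by a few percent, seeded per instance so the same
// tree always regenerates identically (nothing needs to be stored).
TreeParams mutateTree(const TreeParams& base, unsigned instanceSeed) {
    std::mt19937 rng(instanceSeed);
    std::uniform_real_distribution<float> jitter(0.85f, 1.15f);
    TreeParams t = base;
    t.trunkHeight *= jitter(rng);
    t.trunkRadius *= jitter(rng);
    t.branchAngle *= jitter(rng);
    t.branchCount  = std::max(1, int(base.branchCount * jitter(rng)));
    return t;
}

// A forest is then an archetype plus one seed per instance -- kilobytes,
// not a unique asset per tree.
std::vector<TreeParams> makeForest(const TreeParams& base, int count) {
    std::vector<TreeParams> forest;
    forest.reserve(count);
    for (int i = 0; i < count; ++i)
        forest.push_back(mutateTree(base, 1000u + unsigned(i)));
    return forest;
}
```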

And again, which direction have PC game sales been going lately?

I'm really scared that some day PC gamers will find themselves in a situation where only 10% of gamers are buying their games, while at the same time game prices have gone sky-high and the number of different genres has shrunk to around three.

(sorry guys, it went a bit off topic, but I had to bring this up.)
 
Nappe1 said:
A) tens to hundreds of different-looking trees, or B) a pretty awesome mutation engine. A forest also needs a lot of vegetation: plants, grass, different sorts of swamps, ponds, lakes, streams, etc. All of this eventually needs lots of one thing: textures. You will have to have a huge store of textures to make all the stuff look natural. ....
which again means a bigger budget for the game. A bigger budget means more copies have to be sold before the publisher can justify its investment in the project.
That's what middleware is for: procedural material libraries, vegetation generation, etc.
For instance:
http://www.gamasutra.com/features/20031001/sanchez_01.shtml
"Product Review: Tree Generation Middleware"

These are only going to get better over time. Development houses that make smart use of the various available and emerging middleware for game development certainly have an edge.
 
Laa-Yosh said:
This is the $10,000 question :)

Hehe, yes it is ;) It was kind of a leading question, as I had an inkling that what you said was the case.

If the Big Boys with oodles of FLOPs aren't making widespread use of it, then it's tough to argue that it's a good candidate for replacing existing (and widely understood) techniques in a regime where microseconds are everything.

Moreover, it suggests to me that the "increase" in quality gained by moving to full ray-tracing isn't judged worthwhile by the folks who make their living by being at the cutting edge of photorealistic CGI (in a commercial arena, that is, not academic research projects).

Laa-Yosh said:
Just look at the Quake 3 stuff - sure, there are shadows, reflections and whatever, but does it make the game look any better, really?

IMO it looks considerably worse than the visuals available in modern game engines on modern hardware, and certainly doesn't come close to the shots/movies I've seen from, e.g., Unreal Engine 3.

I know it's an old engine, so the comparison is unfair, blah blah blah, but IMO if ray-tracing is to be the Next Big Thing(TM) then it has to yield results now that are better than the best available with current techniques (especially as RT is almost infinitely parallelisable). Otherwise it's just another "maybe, possibly" technique.
 
Simple raytracing, while nice, isn't that much better than what you can approximate with rasterisation. But when we start doing complex stuff, raytracing can give very nice results, e.g. global illumination.
 
Raytracing for global illumination? It doesn't handle diffuse->diffuse interactions (for which you need radiosity - important for scenes where not everything is shiny), and doing extended light sources/soft shadows tends to suffer the same performance/quality tradeoffs with raytracing as with traditional rasterization.
 
arjan de lumens said:
Raytracing for global illumination? It doesn't handle diffuse->diffuse interactions (for which you need radiosity - important for scenes where not everything is shiny), and doing extended light sources/soft shadows tends to suffer the same performance/quality tradeoffs with raytracing as with traditional rasterization.

That won't happen with full Monte Carlo raytracing... hmm, I must be getting confused.

Anyway, all the rasterization GI approximation attempts I've seen run at terrible speeds too.
 
Traditional Whitted ray tracing might not do global illumination, but virtually all global illumination algorithms trace rays for visibility information - including many radiosity implementations.

When it comes to asymptotic complexity, raytracing wins (it's O(p·log n) in pixels and triangles, while rasterisation is typically O(p+n)), but this doesn't really matter for the kind of scenes we're interested in, i.e. high resolution, high AA, high shading complexity. Geometry throughput usually isn't the limiting factor, and raytracing scales badly for the things artists tend to care about, like soft shadows and high AA.
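Back-of-envelope numbers (mine, constants ignored) for why the asymptotic win doesn't bite at game-like sizes: with p = 2×10^6 pixels and n = 10^6 triangles, p·log2(n) ≈ 4×10^7 while p + n ≈ 3×10^6 - the rasteriser's bound is an order of magnitude smaller. The logarithm only pays off when n dwarfs p: n = 10^9 against the same p gives p·log2(n) ≈ 6×10^7 versus p + n ≈ 10^9.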

Besides, asymptotic complexity really is misleading: by that measure, path tracing (Monte Carlo ray tracing) is much faster than both photon mapping and radiosity, which obviously isn't the case for most scenes we're interested in lighting.
 
arjan de lumens said:
Isn't full Monte Carlo raytracing incredibly slow, like 50+ rays per pixel?

I would bet 50 would give you VERY bad results. Also, caustics require raytracing.
 
GameCat said:
When it comes to asymptotic complexity, raytracing wins (it's O(p·log n) in pixels and triangles,
Err... how did you build the acceleration structure in sub-linear time? Magic?
 
bloodbob said:
Also, caustics require raytracing.
The caustics on the bottom of a pool or shallow lake? Not hard if you can do vertex texturing or render-to-vertex-array in an otherwise traditional rasterizer: subdivide the water surface into a mesh. Then, for each mesh point, look up a normal vector in a texture map (the normal map for the water surface) and calculate the direction the incoming light from your light source will take after refracting through a water surface with that normal. For the resulting direction, divide the xy components by z, multiply them by the water depth, and offset them by the mesh location on the water. Now use the xy coordinates to render an (antialiased) point primitive into a texture map, with additive blend. After having processed the entire mesh, you have rendered a texture map containing the caustics that you can apply to the pool bottom or seabed. No real ray-tracing needed. Granted, it is a bit cheesy, and if you use too coarse a mesh it will suck. Caustics from other objects (like a lit glass sphere) may be harder to shoehorn into the classical rasterizer model, though.
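A CPU-side sketch of the splatting pass described above (on hardware this would be vertex texturing or render-to-vertex-array with additive blend; the function and parameter names are mine):

```cpp
#include <cmath>
#include <vector>

struct V { float x, y, z; };

// Refract the (downward) light direction L through a surface with upward
// normal N, using Snell's law with eta = n_air / n_water (about 1/1.33).
static V refractDir(V L, V N, float eta) {
    float cosI = -(L.x*N.x + L.y*N.y + L.z*N.z);
    float k = 1.0f - eta*eta*(1.0f - cosI*cosI);
    if (k < 0.0f) return {0, 0, 0};             // total internal reflection
    float a = eta*cosI - std::sqrt(k);
    return {eta*L.x + a*N.x, eta*L.y + a*N.y, eta*L.z + a*N.z};
}

// The pass in miniature: walk the water-surface mesh (surface at z = 0,
// floor at z = -depth), refract the light at each mesh point, project down
// to the floor, and additively splat into a caustics texture that spans
// [0,extent] x [0,extent] of the floor at res x res texels.
std::vector<float> bakeCaustics(const std::vector<V>& meshPoints,
                                const std::vector<V>& normals,  // same size
                                V lightDir, float depth, float extent,
                                int res) {
    std::vector<float> tex(res * res, 0.0f);
    const float eta = 1.0f / 1.33f;
    for (std::size_t i = 0; i < meshPoints.size(); ++i) {
        V d = refractDir(lightDir, normals[i], eta);
        if (d.z >= 0.0f) continue;              // must head down to the floor
        // Divide xy by z, scale by water depth, offset by the mesh point.
        float hx = meshPoints[i].x + d.x / -d.z * depth;
        float hy = meshPoints[i].y + d.y / -d.z * depth;
        int px = int(hx / extent * res), py = int(hy / extent * res);
        if (px < 0 || px >= res || py < 0 || py >= res) continue;
        tex[py * res + px] += 1.0f;             // additive blend of a point
    }
    return tex;
}
```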

cristoph: That model covers point lights, hard shadows, specular reflections, and at most one level of diffuse reflection; as indicated in the discussion here, once you wish to introduce extended light sources, soft shadows and diffuse->diffuse lighting effects, everything immediately gets about an order of magnitude harder, slower and more expensive. That applies pretty much the same whether your main rendering paradigm is rasterization or raytracing.
 