Can realtime Global Illumination be accomplished with 100TFLOPS of processing power?

The real question here is: what do we want to render in real time? A simple sphere? Low-res, low-complexity game scenes? Movie-level CGI? Even the latter is a moving target; things are a lot more complex nowadays than in Jurassic Park...

There will be a meeting point sometime in the future. Where it is, we can't tell now.
A good guess is that Davy Jones is probably the detail level to aim for; he's one of the few CGI characters who has managed to trick even seasoned CG artists into believing it was make-up, or at least a CG-augmented real actor. So I'd say that this level of realism will probably be sufficient, and from then on we'll have to work on non-rendering-related issues.

However, Davy Jones is a single character in a live-action environment, and even this took a loooot of rendering time. So you can probably extrapolate from this to some extent...
 
Err, too bad it's not that exciting to look at... ;)

We might not get to Pirates of the Caribbean 2's level of CG any time soon as far as real-time rendering is concerned, but there is something about dynamic, global lighting-and-shadowing systems that appeals to me even if you restrict the polygonal detail of characters and scenes to just a bit above PlayStation 2 levels: if I hold a flashlight, I want any object that occludes it to cast a shadow... whether it is photo-realistic or not is a secondary concern (it comes after the priority of full interaction between light sources and objects).
 
Before the meeting point with movies, games are going to improve illumination quality in many small steps. The interesting question is what the next steps will be.
Existing games have hard shadows with no indirect illumination (Doom 3), or with a single precomputed channel of indirect illumination that doesn't respond to dynamic lights (practically all other games). Shadows are hard or blurred-hard, never correct soft shadows from area lights; that part is just marketing.

Next milestones for games:
2007: indirect illumination precomputed for many configurations of lights, not just one; first realtime soft shadows.
2008: realtime-computed indirect illumination without specular reflections.
2009: realtime-computed indirect illumination with some specular reflections.
 
That just puts in perspective how far off we are from movie-level graphics; people who think we're 5 years away are funny.

I don't think it's such a strange thought that we'll get CGI-movie-like graphics in 5-6 years with the PS4, X720 and PCs. Sure, it won't be as good from a technical point of view, but if you compare what you see on the screen, I don't think there will be such a big difference. The difference will probably be small enough that a decent number of people won't notice it.
 
Before the meeting point with movies, games are going to improve illumination quality in many small steps. The interesting question is what the next steps will be.

Many upcoming games are using ambient cube maps for lighting, with only a single direct light. This creates rather good-looking soft lighting, without the harsh contrast and total blacks of a Doom 3-like system. It's also a lot faster than adding another 2-5 lights (which is actually pretty expensive in pixel shader calculations).

The cube map approach works well with normal maps, and you can reuse the same in-engine-generated HDR cube map for simple reflections too. The direct light source accounts for shadows and, if needed, for specular highlights. Additional dynamic lights can be rendered into the cube map.

An example is Halo 3: the Brute video shows untextured, grey-shaded models, and they clearly have some image-based lighting going on (see the sketch below).
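To make the idea concrete, here is a minimal C++ sketch of one cheap formulation: the six-color "ambient cube" Valve described for Half-Life 2, which behaves like a one-texel-per-face cube map. Names and layout here are illustrative, not any shipping engine's code:

```cpp
// Six-color "ambient cube": the degenerate, one-texel-per-face case of the
// ambient cube maps discussed above (per Valve's Half-Life 2 shading papers).
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

struct AmbientCube {
    Vec3 face[6]; // average incoming light per face: +X, -X, +Y, -Y, +Z, -Z

    // Blend the six face colors by the squared components of the (unit)
    // surface normal; the squares sum to 1, so no energy is invented.
    Vec3 eval(Vec3 n) const {
        Vec3 c = mul(face[n.x >= 0 ? 0 : 1], n.x * n.x);
        c = add(c, mul(face[n.y >= 0 ? 2 : 3], n.y * n.y));
        c = add(c, mul(face[n.z >= 0 ? 4 : 5], n.z * n.z));
        return c;
    }
};

int main() {
    AmbientCube cube = {{ {0.9f, 0.9f, 1.0f}, {0.1f, 0.1f, 0.1f},    // +X, -X
                          {0.4f, 0.5f, 0.8f}, {0.2f, 0.15f, 0.1f},   // +Y (sky), -Y (ground)
                          {0.3f, 0.3f, 0.3f}, {0.3f, 0.3f, 0.3f} }}; // +Z, -Z
    Vec3 up = {0.0f, 1.0f, 0.0f};        // a surface facing the sky
    Vec3 c = cube.eval(up);
    std::printf("ambient = %.2f %.2f %.2f\n", c.x, c.y, c.z);
}
```

This also plays nicely with normal maps, as the post says: you just feed the per-pixel normal into eval().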
 
To get raytraced, truly soft shadows, SSS, etc., we will need something like this implemented in HW:

- Load the mesh using vertex buffers, BUT store a kd-tree/octree/BSP together with the vertices. Something like what AGEIA does.

- Add an instruction to the pixel shader to test if a ray hits a mesh in the scene. Something like bool closestHit( rayPos, rayDir, rayDistance, out float3 triangleBaryCoords ). This would iterate over all the primitives' triangles and give us the closest triangle our ray collides with (see the sketch below).
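To pin down the semantics such an instruction would need, here is a minimal C++ sketch that emulates it in software with a brute-force loop and a Moller-Trumbore ray/triangle test. Real hardware would of course walk the kd-tree/octree/BSP from the first bullet instead, and all names here are illustrative assumptions:

```cpp
// Software emulation of the proposed closestHit() shader intrinsic.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

struct Triangle { Vec3 v0, v1, v2; };

// Moller-Trumbore: returns hit distance t and barycentrics (u, v).
static bool intersect(const Triangle& tri, Vec3 orig, Vec3 dir,
                      float& t, float& u, float& v)
{
    const float EPS = 1e-8f;
    Vec3 e1 = sub(tri.v1, tri.v0), e2 = sub(tri.v2, tri.v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < EPS) return false;        // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, tri.v0);
    u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > EPS;
}

// The proposed intrinsic: closest hit along rayDir within rayDistance.
// A real version would also return the triangle index; omitted for brevity.
bool closestHit(const std::vector<Triangle>& scene,
                Vec3 rayPos, Vec3 rayDir, float rayDistance,
                Vec3& triangleBaryCoords)          // (1-u-v, u, v)
{
    float best = rayDistance;
    bool hit = false;
    for (const Triangle& tri : scene) {
        float t, u, v;
        if (intersect(tri, rayPos, rayDir, t, u, v) && t < best) {
            best = t;
            triangleBaryCoords = {1.0f - u - v, u, v};
            hit = true;
        }
    }
    return hit;
}
```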

I think that could be an easy milestone. After all, the AGEIA PhysX already does this in HW at amazing speed (see the shadow/terrain raycast example in its SDK). And NVIDIA/ATI are claiming physics now (well, basic raycasting is physics, really, so...). With that we could start to do some basic raytracing in the pixel shader...

The problem is that with that basic raytracing we can't really achieve GI. GI is much more complicated: it requires casting photons from mesh-distributed light sources, calculating all the photon collisions, etc. etc.

... So we'll probably see the raytracing instruction in the pixel shader first, and then, once we have that running at 1600x1200 at 80fps, we can start calculating photon things at 4 or 5 FPS :p

Considering we are almost reaching the limits of electronic integration and the MHz can't climb much more (and a 384-core CPU could use something like 9000W), we will need some kind of electronics advancement (nanotubes or optronics) before we reach that stage. Who knows! :rolleyes:
 
- Load the mesh using vertex buffers, BUT store a kd-tree/octree/BSP together with the vertices. Something like what AGEIA does.
The last thing we need is a data structure hard-coded into hardware!

- Add an instruction to the pixel shader to test if a ray hits a mesh in the scene. Something like bool closestHit( rayPos, rayDir, rayDistance, out float3 triangleBaryCoords ).
You can already do this in software on current GPUs fairly efficiently, why implement it in hardware?

GPUs are already fairly efficient at doing raytracing, even with deep acceleration structure traversal. However, as noted, GI is a fair step beyond just being able to trace a ray.
 
You can already do this in software fairly efficiently, why implement it in hardware?
Because I wanna use it inside a pixel shader :p

GPUs are already fairly efficient at doing raytracing, even with deep acceleration structure traversal.
With CUDA/CTM, perhaps! But I'm not sure they're going to support pointers well enough for decent recursion. Meanwhile, the uniform/non-uniform grid approaches aren't bad... but still far from fast enough, and too complicated and hacky.
 
Because I wanna use it inside a pixel shader :p
Totally possible with something like Sh or RapidMind. Internally it can be split into multiple passes as necessary (similar to what the GPU would have to do with divergent control flow anyway).

With CUDA/CTM, perhaps! But I'm not sure they're going to support pointers well enough for decent recursion.
You don't actually need recursion or a stack to descend into the data structure. There are alternative ways that don't need extra state.

Meanwhile, the uniform/non-uniform grid approaches aren't bad... but still far from fast enough, and too complicated and hacky.
kd-trees are quite viable to do on the GPU :) (see the sketch below)
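For the curious, here is a hedged C++ sketch of one such stackless scheme, "kd-restart" (Foley & Sugerman, Graphics Hardware 2005): whenever a leaf fails to yield a hit, the ray's active interval is advanced and traversal restarts from the root, so no stack or recursion is needed. Node layout and names are illustrative, and robustness details (epsilon handling, zero direction components) are omitted:

```cpp
// Stackless "kd-restart" traversal: the restart loop replaces the stack.
#include <vector>

struct KdNode {
    int axis;               // 0/1/2 = split axis, -1 = leaf
    float split;            // split plane position
    int child[2];           // left/right node indices (interior nodes)
    int firstTri, triCount; // triangle range (leaf nodes)
};

struct Ray { float orig[3], dir[3]; };

bool traverse(const std::vector<KdNode>& nodes, const Ray& ray,
              float tMin, float tMax)
{
    float tEnter = tMin;
    while (tEnter < tMax) {              // restart loop: no stack, no recursion
        int idx = 0;                     // begin again at the root
        float tExit = tMax;
        while (nodes[idx].axis >= 0) {   // walk down to one leaf
            const KdNode& n = nodes[idx];
            int near = ray.orig[n.axis] < n.split ? 0 : 1;
            float tSplit = (n.split - ray.orig[n.axis]) / ray.dir[n.axis];
            if (tSplit <= 0.0f || tSplit >= tExit)
                idx = n.child[near];         // only the near side is spanned
            else if (tSplit <= tEnter)
                idx = n.child[1 - near];     // interval lies past the plane
            else {
                idx = n.child[near];         // near side now...
                tExit = tSplit;              // ...far side after a restart
            }
        }
        // Intersect the leaf's [firstTri, firstTri + triCount) triangles
        // against [tEnter, tExit] here and return true on a confirmed hit.
        tEnter = tExit;                  // step past this leaf and restart
    }
    return false;
}
```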
 
Imho, the future of graphics lies with pre-computing everything that can be pre-computed, and computing the minimum in real time. That is, full-on ray tracing and photon mapping will be used, but only when absolutely necessary, as with large-scale, in-focus, dynamic objects. Again, static objects would have their GI contributions calculated ahead of time, and during real-time rendering they would simply be composited into the final image. In this way, usage of the hardware's capabilities can be appropriately tuned.

However, I think this will require an advancement not only of graphics hardware and engines, but of the development software itself, since it demands more detailed LOD and usage specifications. In other words, the tools need to be more expressive, so that the programmer/designer/modeller can specify exactly what characteristics their objects have and the contexts they will exist in. From this, the development tools can determine the optimal balance of graphics effort according to the object's expected visibility, which can be further tweaked by the developer just to be sure. Essentially, this is an expert system that supplements the developer, in the same sense that a C++ compiler is one.

Now, I don't believe in having only one good reason to do something, and indeed this kind of advancement has secondary benefits. Not only is the quality of the displayed graphics improved, but performance scalability is greater, and development is streamlined. Of course, the economic cost must also be considered, but in this time of HD graphics, I think this optimization has matured.
 
give us the closest triangle our ray collides with. ...
After all, the AGEIA PhysX already does this in HW at amazing speed

Last time I measured it, Lightsprint Collider was 2-7x faster than AGEIA PhysX, running on the same computer with an Ageia card. Collider does ray-mesh intersections on the CPU [I wrote its major parts]. Do you still think PhysX does it on dedicated HW?

On the GPU front, what are the latest results? How fast are GPU raytracers?
(The latest news I have is from 2004, when a friend of mine wrote the Inferno GPU raytracer. It was still slower than a CPU raytracer, but he predicted the GPU raytracing era would come within a few years.)
 

Not necessarily raytracing; we can give it different names. But for indirect illumination effects, you usually need lots of ray-scene intersections, which are also the basis of raytracing.

Indirect illumination is nicely visible in screenshots.
I don't know the technical details, so please correct me if I'm wrong.
The sun looks static in those images, so the indirect illumination is probably precomputed. The indirect shadow of the chair in the left image is too sharp; it is probably faked by one weak point light positioned by a level designer.

This approach will be replaced by engines that do secondary effects in realtime, so it will be possible to move the light source, etc.

Alternatively, "Instant radiosity" based techniques use no rays, only rasterization, but artifacts are visible and as I know game developers, they prefer good looking over physically correct with artifacts.
 
Correction - sorry, there are many other global illumination techniques that need no ray-scene intersections.
I remember several demos with problematic quality. If Crytek has developed something realtime at this quality, it would be a nice surprise.
 
'The sun looks static in those images, so the indirect illumination is probably precomputed.'

The CryEngine 2 tech demo video showed a mobile light source apparently casting indirect illumination, using a technique they described as 'Real-time Ambient Maps'.
 
Most likely it's an optimized ambient occlusion mapping technique. CryEngine 2 isn't going to be licensed until Crysis is out the door, or close to it, so I'm guessing, but that is what it sounds like to me from my talks with Crytek. It is definitely texture-based, that's for sure.
 
The CryEngine 2 tech demo video showed a mobile light source apparently casting indirect illumination, using a technique they described as 'Real-time Ambient Maps'.

I've seen that one, very nice.

I don't know how they do it, but this is one possibility:

Back in 2000, in the Realtime Radiosity 2 demo, we made freely moving lights with global illumination by precomputing GI for many light positions in space and interpolating between the closest GI solutions during light movement. It was meant as a joke, but many believed it was computed in realtime :)

If you want to copy our joke today, do shadows in realtime because that's finally cheap, and precompute only the indirect illumination ("ambient maps") for many light positions. Storage is cheap; it's a low-frequency, low-res map. Then interpolate between the precomputed ambient maps in the pixel shader and call it "realtime ambient maps". Many will believe it's realtime :)
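A toy C++ sketch of that interpolation trick, assuming the light moves along a 1D path with ambient maps precomputed at evenly spaced positions; the parameterization, map size and all names are illustrative assumptions:

```cpp
// Blend the two precomputed ambient maps nearest to the light's position.
#include <algorithm>
#include <vector>

struct Color { float r, g, b; };

// One low-res ambient map per precomputed light position (row-major texels).
using AmbientMap = std::vector<Color>;

static Color lerp(Color a, Color b, float t) {
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// lightT in [0,1] parameterizes the light along its path; assumes at least
// two precomputed maps. In a shader this is one extra texture fetch + lerp.
Color sampleInterpolated(const std::vector<AmbientMap>& maps,
                         float lightT, int texel)
{
    float f = lightT * (maps.size() - 1);
    int i = std::min((int)f, (int)maps.size() - 2);
    float t = f - i;                     // blend factor between neighbors
    return lerp(maps[i][texel], maps[i + 1][texel], t);
}
```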
 