Photon Mapping on Programmable Graphics Hardware

Ilfirin

http://graphics.stanford.edu/papers/photongfx/

Haven't seen this posted yet (feel free to slap me if it has ;)). By mostly the same authors as last year's Ray Tracing on Programmable Graphics Hardware.

They used a GFFX 5900, and later stated that they were completely floating-point compute bound. One wonders how much faster it would be on a 9800 Pro.

Discuss.
 
Seriously, though, those renderings took about a minute per frame, at significantly lower quality than the software reference (probably due to lower photon density). If it is feasible to increase the photon density to software-reference levels (or otherwise get rid of the artifacts seen... I suppose they could also be due to a lack of precision), then perhaps this sort of rendering could be used for offline renderers, but it's not going to be in games for quite a while.
 
Yes, but a few of the tests were only a couple of seconds per frame. While far from real-time, I believe that is a substantial improvement from the ray-tracing article of last year.

Also, the R300/R350 has proven to be quite a bit faster in computationally similar tests. If that same performance margin applied here, some of those tests would be close to real-time, or at least at interactive frame rates.

But yeah, this is more of one of those 'forward looking' things.
 
Maya 5 has a special hardware shader rendering option for high-speed rendered previews. This kind of thing is pretty useful now for speeding up pre-rendered art design.

I guess it also shows that there really is some convergence between realtime graphics and cinematic prerendering. :)
 
they used a 5800 Ultra if I'm not mistaken, not a 5900U... so the artifacts could be because of the lack of precision... all in all, nice to see such things on GPUs...
 
robert_H said:
they used a 5800 Ultra if I'm not mistaken, not a 5900U... so the artifacts could be because of the lack of precision... all in all, nice to see such things on GPUs...

3. Results
All of our results are generated using a GeForce FX 5900 Ultra and a 3.0 GHz Pentium 4 CPU
 
Ilfirin said:
Yes, but a few of the tests were only a couple of seconds per frame. While far from real-time, I believe that is a substantial improvement from the ray-tracing article of last year.
Notice that those "tests" were parts of a scene, not the entire scene.
 
robert_H said:
they used a 5800 Ultra if I'm not mistaken, not a 5900U... so the artifacts could be because of the lack of precision... all in all, nice to see such things on GPUs...
Possible, but probably not the case.

Notice that the lighting looks "splotchy." This could more easily be explained by a lack of adequate photon density.
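
For anyone wondering why low photon density shows up as splotches: the radiance estimate in photon mapping is a density estimate over the k nearest photons around each shading point, so with too few photons stored the search radius balloons and adjacent pixels end up averaging very different photon sets. A rough sketch of the estimate (plain Python, my own toy code, not the paper's):

```python
import math
import random

def radiance_estimate(photons, point, k=50):
    """k-nearest-neighbour photon-map radiance estimate at a surface point.

    photons: list of ((px, py, pz), power) pairs.
    Sums the power of the k nearest photons and divides by the area of the
    disc that just encloses them (pi * r^2).
    """
    def dist2(p):
        return sum((a - b) ** 2 for a, b in zip(p[0], point))
    # Brute-force k-nearest search; a real renderer uses a kd-tree or,
    # as in the paper, a uniform grid built on the GPU.
    nearest = sorted(photons, key=dist2)[:k]
    r2 = dist2(nearest[-1])
    # With too few photons stored, r grows large and neighbouring shading
    # points average very different photon sets -> the splotchy look.
    return sum(power for _, power in nearest) / (math.pi * r2)

# Toy usage: 500 photons scattered over a unit patch is a very sparse map.
random.seed(1)
photon_map = [((random.random(), random.random(), 0.0), 1.0 / 500) for _ in range(500)]
print(radiance_estimate(photon_map, (0.5, 0.5, 0.0)))
```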
 
There has been similar work done by the "Open-RT" people, German researchers who have done a lot of work on real-time raytracing; the photon mapping was a spin-off of their raytracing work. They sped the process up using SIMD support and ray coherence, and the work was also distributed across a (small) Athlon cluster. If I remember correctly, they had about the same performance as the Stanford people, probably slightly better with the cluster though :rolleyes:
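
For what it's worth, the "SIMD + ray coherence" part basically means bundling a small packet of nearby rays and intersecting them against geometry with vector instructions, so one traversal/intersection step serves several rays at once. A toy illustration of the idea (NumPy standing in for SSE; nothing to do with Open-RT's actual code):

```python
import numpy as np

def intersect_packet(origins, directions, center, radius):
    """Intersect a packet of rays (arrays of shape (N, 3)) against one sphere.

    Coherent rays (e.g. a 2x2 block of primary rays) tend to hit the same
    geometry, so doing the math for the whole packet at once amortises the
    per-ray cost; NumPy's vectorisation stands in for SSE here.
    Returns hit distances, with inf where a ray misses.
    """
    oc = origins - center
    b = 2.0 * np.sum(oc * directions, axis=1)
    c = np.sum(oc * oc, axis=1) - radius * radius
    disc = b * b - 4.0 * c
    t = np.where(disc >= 0.0, (-b - np.sqrt(np.maximum(disc, 0.0))) / 2.0, np.inf)
    return np.where(t > 1e-4, t, np.inf)

# A coherent 2x2 packet of primary rays, all nearly parallel:
origins = np.zeros((4, 3))
dirs = np.array([[0.01, 0.0, 1.0], [-0.01, 0.0, 1.0], [0.0, 0.01, 1.0], [0.0, -0.01, 1.0]])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(intersect_packet(origins, dirs, center=np.array([0.0, 0.0, 5.0]), radius=1.0))
```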
 
I think that for a very long time GPUs will take the rendering route that 3D Studio Max uses, where you only apply, for example, raytracing to the materials in the scene that really need it, and just use shader (hacks) to emulate whatever you need for the rest.

One 'unified' rendering technique like raytracing or photon mapping for the whole scene needs a radical departure in GPU architectures, and that defies everything we have learned about how GPUs have developed and advanced so far (e.g. a new generation is the fastest at the technology it's just about to leave behind, plus it introduces new features, albeit sometimes too slow for real use for another two years).

Anyway, I don't see this as a problem: FP shaders go a very, very long way towards giving us fairly believable results. Just think about how much time you would have to spend in 3D Studio Max R2 to show off in stills what you can see in real time in ATI's car paint demo!

This is still the best demo for me to show off how far the industry has actually come. 8)
 
Does raytracing/photon mapping automatically replace shaders?

I've read that given a few traces, reflection, refraction and specular can be done automatically. In scanline renderers, that would be done at a per-pixel level using a Phong/Blinn lighting model. So does raytracing unify the lighting model (diffuse, specular, ambient) into one? Obviously only some surfaces will be reflective or smooth, so there still need to be material attributes - are these not shaders?
 
Maya 5 has a special hardware shader rendering option for high-speed rendered previews
Yes, but they use Cg, which means NV3x only. 3DLabs is still trying to get them to admit that it will work on a VP, though.
 
JF_Aidan_Pryde said:
Does raytracing/photon mapping automatically replace shaders?

I've read that given a few traces, reflection, refraction and specular can be done automatically. In scanline renderers, that would be done at a per-pixel level using a Phong/Blinn lighting model. So does raytracing unify the lighting model (diffuse, specular, ambient) into one? Obviously only some surfaces will be reflective or smooth, so there still need to be material attributes - are these not shaders?

Yes, you still need shaders... Classic basic raytracing is usually just the Phong illumination model + additional rays for REAL reflection (those classic shiny balls ;) ), refraction and shadows. There has been a lot of research into all kinds of hybrid scanline-raytracing schemes.

To build a better simulation of material-light interaction, we need more complex models. You can implement these models with shaders, or you can try to use raytracing. Basically, raytracing just adds real interaction between different objects, so you get real mirrors etc. Straightforward raytracing doesn't include complex diffuse light interaction between objects; this is where radiosity and photon mapping techniques come in.

Real-time graphics has now kind of taken the first step: in the near future we can use shaders to simulate the so-called "first hit" between the light and the surface correctly. It's the interaction between the different surfaces in the 3D scene that we are severely lacking - in the real-time sense, that is. Sadly, these complex global illumination effects are usually also the ones that really make a scene realistic.
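
Purely as an illustration of what "Phong + extra rays" means in practice, here's a minimal Whitted-style sketch in plain Python (hard-coded toy scene, diffuse term only, my own code, nothing from the paper):

```python
import math

# Toy scene: (center, radius, diffuse colour, reflectivity)
SPHERES = [((0.0, 0.0, 3.0), 0.5, (1.0, 0.2, 0.2), 0.3),
           ((0.0, -1000.5, 3.0), 1000.0, (0.8, 0.8, 0.8), 0.0)]  # huge sphere as a floor
LIGHT = (2.0, 3.0, 0.0)

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a): return scale(a, 1.0 / math.sqrt(dot(a, a)))

def intersect(origin, direction, max_t=float("inf")):
    """Closest sphere hit along the ray within max_t, or None."""
    best = None
    for center, radius, colour, refl in SPHERES:
        oc = sub(origin, center)
        b = 2.0 * dot(oc, direction)
        c = dot(oc, oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0.0:
            continue
        t = (-b - math.sqrt(disc)) / 2.0
        if 1e-4 < t < max_t and (best is None or t < best[0]):
            best = (t, center, colour, refl)
    return best

def trace(origin, direction, depth=0):
    hit = intersect(origin, direction)
    if hit is None:
        return (0.1, 0.1, 0.2)                 # background colour
    t, center, colour, refl = hit
    point = add(origin, scale(direction, t))
    normal = norm(sub(point, center))
    off = add(point, scale(normal, 1e-3))      # nudge off the surface
    to_light = sub(LIGHT, point)
    light_dist = math.sqrt(dot(to_light, to_light))
    to_light = scale(to_light, 1.0 / light_dist)
    # Shadow ray: add the local diffuse term only if the light is visible.
    # (A full Phong model would also add a specular highlight here.)
    result = (0.0, 0.0, 0.0)
    if intersect(off, to_light, light_dist) is None:
        result = scale(colour, max(dot(normal, to_light), 0.0))
    # Reflection ray: the "extra ray" that gives real mirrors.
    if refl > 0.0 and depth < 3:
        refl_dir = sub(direction, scale(normal, 2.0 * dot(direction, normal)))
        result = add(result, scale(trace(off, norm(refl_dir), depth + 1), refl))
    return result

# One primary ray straight down the view axis:
print(trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
```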
 
I don’t think there’s anything impressive in this paper. The Cornell-Box image at 512x512 took 64.3 seconds using 65000 photons for the caustics and only 500 for the radiance estimate on the NV35. As you can see here ( http://graphics.ucsd.edu/~henrik/images/cbox.html ), a similar scene can be rendered at 1024x768 in 15 seconds with a software renderer on a dual P3 at 800 MHz using 50000 photons for the caustics and 200000 photons for the radiance estimate.

So the software renderer is rendering 3 times more pixels, is shooting 400 times more photons for the radiance estimate (and about the same amount of photons for the caustics) and it’s still 4.2 times faster than the hardware renderer. And note that the hardware approach directly visualizes the global photon map, so it needs an enormous amount of photons to produce acceptable results. Also note that the software raytracer used in the above comparisons doesn’t use any architecture specific optimizations, like OpenRT.
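
Just to spell out the arithmetic behind those ratios (using only the figures quoted above):

```python
# Back-of-envelope check of the ratios quoted above.
hw_time, sw_time = 64.3, 15.0                   # seconds per frame (NV35 vs dual P3-800)
hw_pixels, sw_pixels = 512 * 512, 1024 * 768
hw_estimate, sw_estimate = 500, 200_000         # photons used for the radiance estimate

print(sw_pixels / hw_pixels)        # 3.0   -> software renders 3x more pixels
print(sw_estimate / hw_estimate)    # 400.0 -> with 400x more radiance-estimate photons
print(hw_time / sw_time)            # ~4.3  -> and is still over 4x faster per frame
```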

But the speed is not the main problem. The problem is the flexibility of the hardware. This hardware is designed for fast scanline rasterization. Until the hardware companies realize that raytracing is fast enough and offers a unified and elegant solution to almost any rendering problem (hidden surface removal, deferred rendering, (soft) shadows, global illumination and others) don’t expect fast and PRACTICAL hardware raytracing, even with the upcoming generation of graphics accelerators.
 
What's important about this paper is that it shows a) that it can be done, and b) that it's impractical with current graphics hardware. If no one even took the time to show that it could be done, then why would the hardware companies ever care to make it a practical alternative?
 