Real-time Caustics Rendering

musawirali

Newcomer
This summer I have been working on rendering caustics in real time. I have developed an algorithm which I call "Caustics Mapping". Caustics Mapping is a physically based real-time caustics rendering algorithm. It utilizes the concept of backward ray-tracing, but it involves none of the expensive computations generally associated with ray-tracing and similar techniques.

The main advantage of caustics mapping is that it is extremely practical for games and other interactive applications because of its high frame rates. Furthermore, the algorithm runs entirely on graphics hardware, which leaves the CPU free for other computation. There is no pre-computation involved, so fully dynamic geometry, lighting, and viewing directions are supported. In addition, there is no limitation on the topology of the receiver geometry, i.e., caustics can be formed on arbitrary surfaces and not just planar ones. Lastly, the caustics mapping algorithm does not hinder the rendering of other optical phenomena, such as shadows, and hence can be integrated into current rendering systems easily.

For results (images and video) and more information, please visit the project home page at http://graphics.cs.ucf.edu/caustics
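As a rough illustration of the backward-mapping idea described above (not the actual GPU implementation), here is a minimal CPU sketch in Python: each surface point of the refractive object refracts the incoming light direction, steps along the refracted ray, and splats into a 2D intensity grid standing in for the caustics map. The fixed step distance and the grid mapping are simplifying assumptions of this sketch; the real algorithm estimates the receiver intersection on the GPU.

```python
import numpy as np

def refract(incident, normal, eta):
    """Refract a unit direction through a surface, eta = n1/n2
    (standard Snell's-law vector form, as in the GPU 'refract')."""
    cos_i = -np.dot(incident, normal)
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection
    return eta * incident + (eta * cos_i - np.sqrt(k)) * normal

def splat_caustics(points, normals, light_dir, eta, dist, grid=64):
    """Toy caustics map: refract a ray at each surface point, step a
    fixed distance 'dist' along it, and accumulate hits into a 2D
    intensity grid. (The real algorithm estimates the receiver
    intersection instead of using a fixed distance.)"""
    cmap = np.zeros((grid, grid))
    for p, n in zip(points, normals):
        r = refract(light_dir, n, eta)
        if r is None:
            continue
        hit = p + dist * r
        # project the hit onto the receiver plane, map [-1,1] -> texels
        u = int((hit[0] * 0.5 + 0.5) * (grid - 1))
        v = int((hit[2] * 0.5 + 0.5) * (grid - 1))
        if 0 <= u < grid and 0 <= v < grid:
            cmap[v, u] += 1.0  # each splat adds photon energy
    return cmap
```

Where refracted rays converge, splats pile up in the same texels, which is exactly the bright caustic pattern the map captures.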

Thanks
 
I haven't read this yet, but I assume this is different from the caching method presented at SIGGRAPH?
 
Nitpick on the site: there is no GeForce FX 6800 :)

Edit:
That video on the site looks pretty nice, but could you describe the technique?
 
I can't watch the video for some reason. Is there a special codec I need or is my system just acting funny? I used Windows Media Player 10.
 
Download DivX. The DivX player seems to work better, too (but that may just be because I'm on a slow-ass machine for the time being).
 
Hello, I am going to implement this technique in our project.

1. I have no idea about what you've called the "position texture": I really can't understand what it should look like, or how it can be used in estimating the intersection. The only idea I have is simply to use a depth buffer texture for distance.

2. Are there any render-to-vertex-buffer optimization ideas? Two dependent texture reads in a vertex program is a rather expensive overhead, don't you think?

Thanks.
Evgeny Homchenko, Tools-Media Corp. (www.toolsmedia.com)
 
Hi musawirali,

I hope you're still reading this thread from time to time. Your technique is not bad, using some of the rudimentary ray-tracing approximations you often see in virtual displacement mapping (i.e. parallax mapping) papers.

There's a way to significantly improve your technique for deforming geometry. I'm not sure if you've already heard of this, but I remember a really old NVidia demo where caustic intensity was determined by using the same mesh as the caustic source geometry instead of point sprites, but still altering vertex locations as you are. Culling was disabled when rendering this to the caustic map, and a texture was used where each successive level of the mipmap chain was a brighter shade of white. Essentially, the mipmap hardware is used to determine the area of the triangle. You have to set the initial texture coordinates correctly, but when you render your caustics map, areas where the refracted vertices converge will have shrunken triangles, and hence the GPU will choose a higher, brighter mipmap level. Sorry if I'm not explaining it well, but I can't find the demo. Feel free to ask me any questions.

This will take away the artifacts caused by using point sprites and should let you use a lower density mesh too. The only major limitation I see with this is that light can't really bend around anything due to the projected caustic map texture, but overall it looks pretty convincing.
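To make the mipmap trick above concrete, here is a rough CPU-side sketch of the area-ratio idea it exploits (helper names are illustrative; in the actual demo the mipmap hardware performs this selection implicitly): a triangle that shrinks in the caustics map gets proportionally brighter, capped the way a finite mip chain would cap it.

```python
def tri_area(a, b, c):
    """Area of a 2D triangle via the cross product."""
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                     - (c[0] - a[0]) * (b[1] - a[1]))

def caustic_intensity(orig_tri, refracted_tri, max_gain=16.0):
    """Emulate the mip selection: when the refracted triangle shrinks,
    the texel-to-pixel ratio rises, the hardware picks a higher
    (brighter) mip level, so intensity scales with the area ratio.
    max_gain caps brightness like a finite mip chain would."""
    a0 = tri_area(*orig_tri)
    a1 = tri_area(*refracted_tri)
    if a1 <= 1e-8:
        return max_gain  # fully converged triangle: brightest level
    return min(a0 / a1, max_gain)
```

For example, a triangle that shrinks to half its linear size covers a quarter of the area, so its caustic contribution quadruples.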
 
I am hoping that vertex texture fetch cost goes down enough that we could actually use a jittered grid of point sprites and trace them. Then this method wouldn't depend on the tessellation/setup of texture coords.
 
sgreen said:
I'm surprised you remember that old caustics demo; I presented it at GDC 2002:
http://developer.nvidia.com/object/gdc_hdr.html

It's funny looking back at the HDR stuff using int 16 textures on GeForce3. ATI still do it that way, arf!
Hi, welcome to the forum! You're exactly the type of person we value most here, and it's what makes B3D special.

I remembered that method because it was particularly clever in how it used mipmapping hardware for a different purpose than intended. I also remembered about it when talking about derivative functions for shader antialiasing, as a forum member suggested how a similar method can be used to aid in shader antialiasing on all hardware.

Has anyone tested the vertex fetch latency on the 7800? Is it as bad as on the 6800? If so, then we'll have to wait for the unified shader architectures for fast VS texture access, which looks to be a year away or so with the exception of XBox 360.

In any case, I hope you'll find the above caustics method useful, musawirali. It'll really speed up your algorithm a lot by allowing you to use a less dense mesh.
 
sgreen said:
It's funny looking back at the HDR stuff using int 16 textures on GeForce3. ATI still do it that way, arf!
What I find amusing is that Geforce3 had high precision textures, but not high precision shader math. R200 had high precision shader math, but no way to get high precision data in or out via textures/render-targets.
 
Mintmaster said:
What I find amusing is that Geforce3 had high precision textures, but not high precision shader math.
The GeForce3 did have high-precision shader math (FP32, apparently), but only for the first stage of shading, which was rather limited in the performable operations.
 
Oh wow, sorry I have been out of touch on this forum. I had almost forgotten :oops:
Anyway, if I understood one of the posts correctly, someone suggested using triangles and then utilizing a mipmapping trick to select the caustic intensity. The problem is actually using triangles in the first place: there are cases in which the triangles get warped when component vertices "go out of bounds". Of course one can employ hacks to correct this, but the overall appearance of the caustics does not look convincing.

As an update, the algorithm has been enhanced more towards per-pixel rendering of the caustics map. Again, I just saw that someone had posted about this, but unfortunately I wasn't able to see the post until now. The comment is correct: it does lift the high tessellation requirement. Furthermore, it reduces aliasing as well. I will post some pictures on the project website soon showing the new version of the algorithm. The water caustics in particular look very nice after the change.

Texture lookups in the vertex shader don't seem to be that bad. However, for practical purposes such as games, the distance values (between the refractive object and the receiver surface) can be hardcoded to avoid the texture lookups in the vertex shader. The screenshots of the water demo (without the Buddha statue) were done using shader model 2.0 by hardcoding the distance. Of course there is some error in doing this, but the part I want to emphasize is that the caustics mapping algorithm can sustain these errors without visually "crapping out". As a second update, I have also enhanced the algorithm to include area lights. I hope to put out another technical report soon highlighting these new additions. Thanks for your support. :D
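For readers wondering what estimating the distance (rather than hardcoding it) looks like, here is a toy sketch of the kind of iterative refinement a position-texture lookup enables. The function name and the height-function stand-in for the texture are illustrative assumptions, not the paper's actual implementation: guess a travel distance along the refracted ray, look up the receiver point under the guess, adopt the distance to that point, and repeat.

```python
import numpy as np

def estimate_intersection(p, r, receiver_height, d0=1.0, iters=10):
    """Refine the hit of refracted ray (origin p, unit direction r)
    against a receiver given as a height function over (x, z).
    'receiver_height' stands in for a position-texture lookup."""
    d = d0
    hit = p + d * r
    for _ in range(iters):
        # "look up" the receiver point below the current estimate
        surf = np.array([hit[0], receiver_height(hit[0], hit[2]), hit[2]])
        d = np.linalg.norm(surf - p)  # distance to that receiver point
        hit = p + d * r               # re-project along the ray
    return hit
```

On a flat receiver the estimate converges toward the true intersection within a few iterations, which is why a small fixed iteration count works in a shader.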
 