G80/CUDA for raytracing?

The most expensive part is not the simulation but the volumetric lighting. I implemented slice-based volume rendering first, then tried to move on to raytraced volume rendering. Unfortunately the result was frustrating: hardware raytraced volume rendering tended to show more artifacts than slice-based methods, and I found almost no good way to calculate volumetric lighting with it. So I guess the nVidia smoke demo is based on raytraced volume rendering, meaning it traces a ray per pixel through a volume texture and accumulates sampled values along the ray.
hi,
i am also doing volume raycasting/-tracing, but i cannot reproduce your problems. could you explain which artifacts you encountered compared to slice-based rendering?
 
Rufus said:
It's ray-traced, but not in the traditional sense. I'd assume it goes something like:

Draw a rectangle for each face of the box
For each pixel on the face, you know where you're starting and what angle you're being viewed from. March through the 3d texture (that represents the fog/water) and calculate density (fog) or surface crossing (water).

So yes, it's ray traced, but it's a single ray shot through a single object with basically everything known. What's always been hard about GPU ray tracing is managing the scene graph and finding ray / object intersections. Once you know the intersection (like this) the rest is trivial.
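The march Rufus describes can be sketched as a front-to-back accumulation loop. This is a minimal CPU sketch, not the demo's actual code: `sample_density` stands in for a 3D texture fetch, and the single-channel color, step size, and extinction coefficient are illustrative assumptions.

```python
import math

def march_ray(entry, direction, sample_density,
              t_max, step=0.05, extinction=8.0):
    """Accumulate color/alpha along one ray through a density volume.

    entry: ray entry point on a box face (3-tuple).
    direction: unit view direction (3-tuple).
    sample_density: callable standing in for a 3D texture fetch.
    """
    accum_color = 0.0   # single channel for brevity
    accum_alpha = 0.0
    t = 0.0
    while t < t_max and accum_alpha < 0.99:  # early ray termination
        p = tuple(entry[i] + t * direction[i] for i in range(3))
        density = sample_density(p)
        # per-step opacity from density (Beer-Lambert absorption)
        alpha = 1.0 - math.exp(-extinction * density * step)
        # front-to-back "over" compositing
        accum_color += (1.0 - accum_alpha) * alpha * density
        accum_alpha += (1.0 - accum_alpha) * alpha
        t += step
    return accum_color, accum_alpha
```

In a shader this loop runs per pixel of the box face, which is why everything is known up front: the entry point comes from rasterizing the face and the direction from the camera.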
It's true. What you described is exactly what was discussed in a paper several years ago, but I forget where I found it.

Wouldn't it be harder to render the smoke (and probably also the water) without ray-tracing?
For a fluid with an explicit free surface (the liquid-air boundary), raytracing seems to be the only way to render it with a believable appearance. Otherwise, I believe slice-based volume rendering methods are faster and less prone to artifacts.
 
hi,
i am also doing volume raycasting/-tracing, but i cannot reproduce your problems. could you explain which artifacts you encountered compared to slice-based rendering?

The main artifact is zebra stripes where the ray crosses thin but high-density parts of the volume. They are much more obvious than when rendering with slices. Even in some thick areas, low-frequency stripes can be seen because the number of sampling points varies per ray. Another annoying one is that the undersampling aliasing tends to form a different pattern along each face of the bounding box, which becomes really unacceptable when the camera moves. I tried jittering the sampling positions, but that didn't help much unless the number of sampling points was increased (at that time I used 20~40 sampling points per ray on a 64x64x64 volume texture). There were other artifacts too, but I can't remember them clearly because I did this back in 2004, when the 6800 had just come out.

Also, if you use ray-marching to render a volume texture, can you still apply volumetric lighting? When rendering with slices, you can use half-angle oriented slices to compute the absorption, extinction, etc.
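For reference, the half-angle trick mentioned here picks a slicing axis halfway between the view and light directions, so each slice can both composite toward the eye and accumulate light attenuation into a 2D buffer in the same pass. A small sketch of just the axis computation, under the assumption that both inputs are unit vectors (the function name and sign convention are illustrative, not from any particular implementation):

```python
import math

def half_angle_axis(view_dir, light_dir):
    """Slicing axis halfway between view and light direction.

    If the light roughly faces the viewer (negative dot product),
    the light direction is reversed first, so the axis always lies
    between the two and slices can be traversed in one consistent
    order. Inputs are unit 3-tuples; sketch only.
    """
    d = sum(v * l for v, l in zip(view_dir, light_dir))
    if d < 0.0:
        light_dir = tuple(-c for c in light_dir)
    h = tuple(v + l for v, l in zip(view_dir, light_dir))
    n = math.sqrt(sum(c * c for c in h))
    return tuple(c / n for c in h)
```

Slicing along this axis is what lets the slice-based approach get absorption toward the light almost for free, which is harder to reproduce in a single per-pixel ray march.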
 
The main artifact is zebra stripes where the ray crosses thin but high-density parts of the volume. They are much more obvious than when rendering with slices. [...] Another annoying one is that the undersampling aliasing tends to form a different pattern along each face of the bounding box, which becomes really unacceptable when the camera moves. I tried jittering the sampling positions, but that didn't help much unless the number of sampling points was increased
yes, i encountered these artifacts too. one important thing to counter the second problem you mentioned is not to place the first sample where the ray enters the bounding volume, but to snap it to the first whole-numbered multiple of the sampling step size. this way you sample the volume on discrete shells (as opposed to the slices of the slice-based approaches).
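The snapping described above is a one-liner. A minimal sketch, with the function name being illustrative: measuring `t` as distance along the ray from the eye, the first sample moves from the entry point to the next whole multiple of the step size, so all rays hit the same concentric shells regardless of where they enter the box.

```python
import math

def first_sample_t(t_enter, step):
    """Snap the first sample position to the next whole multiple of
    the step size, so sampling happens on discrete shells shared by
    all rays instead of at per-ray, entry-dependent offsets."""
    return math.ceil(t_enter / step) * step
```

Because every ray now samples the same set of shells, the aliasing pattern no longer depends on which bounding-box face the ray entered through, which is what made the per-face patterns so visible under camera motion.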

that was a very big improvement in visual quality in my work. the zebra stripes can only be countered by higher sampling rates or preintegration, but they also occur in slice-based rendering.
 