ShootMyMonkey
Veteran
Ah... that makes more sense. Either way, you'd probably cut out some unnecessary sampling, and you can probably get a better model of light transmission through the medium. The hard part with lightsources is that you'd have to step through and see how much direct and indirect scattering you receive at every point. You can probably get away with assuming that all indirect scattering is only along the view direction, but direct scattering would be straight from the lightsource to each step point along the ray. Assuming isotropic scattering, each direct light path would scatter a 1/4pi fraction (per steradian) toward the view direction from that point.

I was thinking more: cast a ray and reduce its intensity (or increase it for light particles) for each particle it passes through. You'd only need to traverse until you hit some percentage occlusion, so you wouldn't need to draw every single particle back to front. So in a cloud of smog, say, 1000 particles deep, by the time the ray has passed through the 10th particle's volume it's already black, so stop there.
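That "stop at the 10th particle" idea is just front-to-back compositing with early ray termination. A minimal sketch, with made-up per-particle emission/alpha values (the 40% alpha is only chosen so the numbers work out to roughly the 10-particles-deep example above):

```python
# Hedged sketch: front-to-back compositing along a ray through a particle
# volume, stopping early once accumulated opacity is near total.
# Particle values (emission, alpha) are invented for illustration.

def march_ray(particles, opacity_cutoff=0.99):
    """Accumulate radiance front to back; stop when the ray is nearly opaque."""
    radiance = 0.0       # accumulated radiance (scalar for simplicity)
    transmittance = 1.0  # fraction of background light still visible
    steps = 0
    for emission, alpha in particles:  # sorted near -> far
        radiance += transmittance * alpha * emission
        transmittance *= (1.0 - alpha)
        steps += 1
        if transmittance < 1.0 - opacity_cutoff:
            break  # ray is effectively black past this point; stop tracing
    return radiance, transmittance, steps

# A "1000 particles deep" smog cloud, each particle 40% opaque:
cloud = [(0.5, 0.4)] * 1000
radiance, t, steps = march_ray(cloud)
# 0.6^10 < 1%, so the march bails out after 10 particles, not 1000
```

The point is the loop does work proportional to optical depth, not particle count: the other 990 particles are never touched.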
Actually, that wouldn't work. If you've got a super-bright lightsource at the end, or in the middle, you'd need to keep tracing until you reached the lightsource (or rather, the bright object) to add its intensity to the traced pixel. So you'd have to keep tracing until you hit an opaque surface. I s'pose if the particle data could be described efficiently enough, it wouldn't be too hard on memory to do this, using the z-buffer from RSX to determine the length of the ray, creating a half-ray-traced renderer.
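A sketch of that "half-ray-traced" setup: march only as far as the opaque depth read back from the z-buffer, and weight anything bright found along the way by the transmittance of the haze in front of it. `sample_density` and `sample_emission` are hypothetical stand-ins for whatever the particle lookup would actually be:

```python
# Hedged sketch: march a view ray up to the z-buffer depth, accumulating
# emission weighted by accumulated transmittance (Beer-Lambert absorption).
import math

def trace_to_zbuffer(z_far, step, sample_density, sample_emission):
    radiance = 0.0
    transmittance = 1.0
    depth = 0.0
    while depth < z_far and transmittance > 1e-3:
        sigma = sample_density(depth)          # extinction at this depth
        absorb = math.exp(-sigma * step)       # absorption over one step
        # an emitter here shines through the haze in front of it
        radiance += transmittance * (1.0 - absorb) * sample_emission(depth)
        transmittance *= absorb
        depth += step
    return radiance, transmittance
```

Note the cutoff is a judgement call: with an HDR-bright source, even a tiny leftover transmittance can still contribute visibly, which is exactly the worry above about having to keep tracing.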
Well, you can stop raytracing once you've reached total opacity, probably just because you can say that there wouldn't be any indirect scattering along the view direction beyond that point, but you'd still take direct lighting samples... okay, I'm not even making sense to myself at this point. I need to get some coffee.
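For what it's worth, the coherent version of that thought is single scattering: at each march step, attenuate the light source by the medium between it and the sample point, apply the isotropic phase function 1/(4*pi), and weight by the view-ray transmittance so far; once the view transmittance is gone, nothing further reaches the eye, so the early-out is safe. A sketch, with invented densities and with `light_transmittance` standing in for the second, shorter march toward the light:

```python
# Hedged sketch: single scattering with a direct-light sample per step.
import math

PHASE_ISO = 1.0 / (4.0 * math.pi)  # isotropic phase function, per steradian

def single_scatter(densities, step, light_intensity, light_transmittance):
    """densities: per-step extinction along the view ray.
    light_transmittance(i): medium transmittance from the light to step i
    (in a full implementation, a second ray march toward the light)."""
    radiance = 0.0
    view_t = 1.0
    for i, sigma in enumerate(densities):
        absorb = math.exp(-sigma * step)
        in_scatter = light_intensity * light_transmittance(i) * PHASE_ISO
        radiance += view_t * (1.0 - absorb) * in_scatter
        view_t *= absorb
        if view_t < 1e-3:
            break  # view transmittance gone; further steps contribute nothing
    return radiance
```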
Ummmmmmm... tell me you wouldn't *entirely* mind just sticking with sprites for a while...