That's a pretty important point. Raytracing is not a full physical simulation. Think about how you would render a prism in a raytracer. Or atmospheric scattering. Or fog. In each case you have two options: Throw in a fantastic number of rays, or fake it.
Yeah... A few years ago, a fellow student and I wrote a photon-mapping raytracer based on HW Jensen's work...
Photon-mapping seems to be THE way to handle that sort of effect in a raytracer... Thing is, it's not raytracing in itself. Instead of the classic Whitted method of tracing rays of light from the eye back to the source, you trace photons from the source out into the scene, storing them in the photon maps according to whatever criteria you've chosen.
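As a rough sketch of that forward pass, here's a minimal photon-emission loop in Python. The point light, the single ground plane at y = 0, and the flat storage list are all made-up stand-ins for a real scene and a kd-tree:

```python
import random
import math

def emit_photons(light_pos, light_power, n_photons):
    """Trace photons forward from a point light into a toy scene
    (a single ground plane at y = 0) and store a hit record for each
    photon that lands. Hypothetical simplification of a real tracer."""
    photon_map = []
    # Each stored photon carries an equal share of the light's total power.
    energy = light_power / n_photons
    for _ in range(n_photons):
        # Sample a uniformly random direction on the unit sphere.
        z = random.uniform(-1.0, 1.0)
        phi = random.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        d = (r * math.cos(phi), r * math.sin(phi), z)
        # Intersect the ray light_pos + t*d with the plane y = 0.
        if d[1] < 0.0:  # only photons heading downward can hit the plane
            t = -light_pos[1] / d[1]
            hit = (light_pos[0] + t * d[0], 0.0, light_pos[2] + t * d[2])
            photon_map.append((hit, energy))
    return photon_map
```

A real implementation would bounce photons through the scene (with Russian roulette for absorption) and store them in a kd-tree for fast lookup, but the source-to-scene direction of the tracing is the point here.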
I wouldn't really call photon-mapping raytracing in the first place. It's related, but not quite the same. Besides, the traced photons are only used to create the photon maps, and those maps don't necessarily have to be evaluated by a Whitted raytracer. You could just as easily evaluate them from within a rasterizer. After all, in essence a photon map is just a 2d or 3d texture: the photons are stored in texture space, you just don't evaluate it as a bitmap. It's more like a procedural texture.
But there are already two obvious approximations going on there:
1) You generally won't base the number of photons you trace on the actual number of photons that would theoretically be emitted by your light source. You just trace a much smaller subset and give each photon a correspondingly larger amount of energy to compensate. So they're not really photons in a physical sense.
2) During filtering there's another approximation going on. You estimate the photon density and radiance in an area based on the photons you've actually simulated, not on real photon counts.
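Both approximations show up in a minimal density-estimation sketch like the one below. The flat photon list, the linear scan, and the plain disc-area divide are simplifications I'm assuming for illustration; a real implementation would gather the nearest photons via a kd-tree and use a filtered radiance estimate:

```python
import math

def radiance_estimate(photon_map, point, radius):
    """Estimate radiance at `point` by gathering photons within `radius`
    and dividing their summed power by the disc area pi*r^2.
    `photon_map` is a list of ((x, y, z), power) records, where each
    power is total_light_power / n_traced_photons (approximation 1);
    the density estimate itself is approximation 2."""
    r2 = radius * radius
    total = 0.0
    for (px, py, pz), power in photon_map:
        dx, dy, dz = px - point[0], py - point[1], pz - point[2]
        if dx * dx + dy * dy + dz * dz <= r2:
            total += power
    return total / (math.pi * r2)
```

The estimate only converges to the true value as the photon count goes up and the gather radius shrinks, which is exactly why both steps are approximations rather than physics.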
And for these effects this solution is far more efficient, and delivers far less noise, than solutions based on conventional eye-ray tracing with Monte Carlo path tracing and all that.
Oh well...