HypX, sort of...
I do have an idea on how to combine the points, but it would be dog slow, and as you mention, it involves ray-tracing ("rendering" the 3d points in the first place pretty much requires raytracing as far as I can tell).
Basically, once you have all the points potentially affecting your pixel during the frame's exposure time, you consider them one by one:
- take the timestamp T of the point and use it to compute the camera position at time T
- ray trace from the point to that camera position and discard the point from the flux estimate if it doesn't reach the eye (rough sketch below). Moving objects would need bounding hulls that account for their motion over the exposure, so you don't have to reconstruct the state of the entire scene for every distinct timestamp.
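Something like this sketch, in Python (all the names here, Point, camera_position_at, occluded, pixel_flux, are made up for illustration, and the occlusion test is just a stub, not any real engine's API):

```python
from dataclasses import dataclass

@dataclass
class Point:
    position: tuple   # world-space position of the "3d point"
    timestamp: float  # time T at which this point was generated
    flux: float       # radiant contribution the point carries

def camera_position_at(t):
    """Camera path evaluated at time t (here just a linear pan)."""
    return (t * 0.5, 0.0, -5.0)

def occluded(origin, target, scene, t):
    """Ray trace from origin toward target against the scene state at time t.
    Stub: a real version would test static geometry plus the bounding hulls
    of anything that moves during the exposure."""
    return False

def pixel_flux(candidate_points, scene):
    """Sum the flux of every candidate point that can actually reach the eye."""
    total = 0.0
    for p in candidate_points:
        cam = camera_position_at(p.timestamp)             # camera pose at the point's own time T
        if occluded(p.position, cam, scene, p.timestamp):
            continue                                      # blocked: drop it from the estimate
        total += p.flux
    return total

pts = [Point((0.0, 0.0, 0.0), 0.01 * i, 1.0) for i in range(10)]
print(pixel_flux(pts, scene=None))  # 10.0 with the stub occlusion test
```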
Alternatively, maybe one could shoot rays from the camera at randomized times within the exposure, obtain an intersection point with the scene, and then gather up the nearby "3d points" around that hit to compute the shading (that should handle occlusion fairly well).
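A toy version of that gather variant, under the same caveats (intersect_scene, the camera path, the plane it hits and the gather radius are all assumptions, just to show the shape of it):

```python
import random

def camera_position_at(t):
    """Camera path over the exposure (here just a linear pan)."""
    return (t * 0.5, 0.0, -5.0)

def intersect_scene(origin, direction, t):
    """Hit position of a camera ray against the scene state at time t.
    Stub: everything is a plane at z = 0."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dz == 0.0:
        return None
    s = -oz / dz
    return (ox + s * dx, oy + s * dy, 0.0) if s > 0.0 else None

def shade_pixel(points, exposure, samples=16, radius=0.1):
    """points: list of (position, flux) tuples previously 'rendered' into the scene.
    Shoot rays at randomized times within the exposure, then gather the stored
    points near each hit to estimate shading; occlusion falls out of the ray cast."""
    total, hits = 0.0, 0
    for _ in range(samples):
        t = random.uniform(0.0, exposure)
        hit = intersect_scene(camera_position_at(t), (0.0, 0.0, 1.0), t)
        if hit is None:
            continue
        near = [flux for pos, flux in points
                if sum((a - b) ** 2 for a, b in zip(pos, hit)) < radius ** 2]
        if near:
            total += sum(near) / len(near)
            hits += 1
    return total / hits if hits else 0.0

print(shade_pixel([((0.0, 0.0, 0.0), 1.0)], exposure=0.04))
```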
But yeah, as stepz so sarcastically pointed out, it doesn't sound all that practical. My guess is that the number of 3d points you'd need to avoid graininess in the image would be pretty insane.
[edit]
On the other hand, as obobski states, the "3d point" rendering can be optimized in the sense that unless a 3d point corresponds to a surface that is moving (or is subject to changing lighting conditions), you can keep it around for the next frame...
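For what it's worth, a toy version of that caching idea might look like this (the point ids, the static/lighting flags and the recompute callback are all hypothetical):

```python
def update_point_cache(point_cache, records, recompute_point):
    """point_cache: dict mapping a point id to its last shaded value.
    records: iterable of (point_id, surface_is_static, lighting_is_stable).
    recompute_point: callback that (expensively) re-renders one point."""
    for pid, surface_is_static, lighting_is_stable in records:
        if surface_is_static and lighting_is_stable and pid in point_cache:
            continue                                # reuse last frame's value as-is
        point_cache[pid] = recompute_point(pid)     # moving or re-lit: redo it
    return point_cache

cache = {}
update_point_cache(cache, [(0, True, True), (1, False, True)], lambda pid: 1.0)
print(cache)
```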