I've been reading a bit about computer-generated holograms, and I came across the paper "A Framework for Holographic Scene Representation and Image Synthesis."
What amazes me is how detailed their reconstructions are, considering they compute only 1024^2 samples per hologram, whereas photographically recorded holograms use film that can resolve sub-micron features. Can someone explain why this works inside the computer but not in physical reality? (Physical reconstructions from such low-resolution holograms look like crap.)
PS. hey Simon, I guess pre-filtering is possible after all
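For a rough sense of scale on the resolution comparison: the sample pitch sets the finest fringe a hologram can encode, and via the grating equation that caps the diffraction angle, which is what a physical reconstruction depends on. A numerical reconstruction isn't limited that way. Here's a minimal sketch, assuming a HeNe wavelength (633 nm), a ~10 mm hologram aperture for the 1024-sample case, and ~0.5 µm resolvable features for the film (all illustrative numbers, not from the paper):

```python
import math

def max_diffraction_angle_deg(wavelength_m, pitch_m):
    # Nyquist: the finest fringe a sampled hologram can represent has
    # period 2 * pitch, so the grating equation gives
    # sin(theta) = wavelength / (2 * pitch).
    return math.degrees(math.asin(wavelength_m / (2.0 * pitch_m)))

wavelength = 633e-9  # HeNe laser line (assumed)

# 1024 samples across an assumed ~10 mm aperture -> ~10 um pitch
print(max_diffraction_angle_deg(wavelength, 10e-6))   # ~1.8 degrees

# photographic emulsion resolving ~0.5 um features
print(max_diffraction_angle_deg(wavelength, 0.5e-6))  # ~39 degrees
```

So under these assumptions the computed hologram only diffracts light through a couple of degrees, which is why its physical reconstruction looks so poor, while the simulation can place its reconstruction window wherever it likes.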