obobski said:
3. Time is used not only to tell the pixel when to appear, but how long to be part of the scene, and when to go away
What use is there in storing rasterized pixel data if you aren't going to show it at all? If a pixel passes all culling and visibility tests and is rendered, then why in the heck would you not want to show it? Why would you want an algorithm that does a ton of work on a pixel, stores additional values alongside it, and then never displays the result?
Realize that a "hypercube" is a theoretical object that can only be approximated in an arbitrary, representational model. We cannot see in four dimensions, so it follows that any 3D projection is an arbitrary representation that fits our expectations. Likewise, even if time *is* the fourth dimension, humans cannot see into it, but are constrained to this instant of time, so the time component cannot be directly viewed, only inferred from the way our brains process visual data. In short, our visual cortex is an accumulation buffer that we consider to be a present representation of what we see, although what our eyes are actually sensing is quite a bit less than that mental representation.
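To make the accumulation-buffer analogy concrete, here's a minimal sketch. It isn't any particular engine's API; the struct name, the single-channel pixels, and the blend factor are all illustrative. The point is that the image you end up "seeing" is a running blend of recent frames, i.e. an integration over a short window of time rather than a single instant:

```cpp
// Illustrative accumulation buffer: the displayed image is a running
// blend of recent frames, so the "present" picture is an integration
// over a short window of time rather than one instant.
#include <cstddef>
#include <vector>

struct AccumulationBuffer {
    std::vector<float> pixels;   // one channel per pixel, for brevity
    float blend;                 // weight of the newest frame (0..1)

    AccumulationBuffer(std::size_t count, float blendFactor)
        : pixels(count, 0.0f), blend(blendFactor) {}

    // Fold the newest rendered frame into the running average.
    void accumulate(const std::vector<float>& frame) {
        for (std::size_t i = 0; i < pixels.size(); ++i)
            pixels[i] = (1.0f - blend) * pixels[i] + blend * frame[i];
    }
};

int main() {
    AccumulationBuffer buffer(640 * 480, 0.2f);   // hypothetical resolution and weight
    std::vector<float> frame(640 * 480, 1.0f);    // stand-in for a freshly rendered frame
    buffer.accumulate(frame);                     // what we "see" lags the instant
    return 0;
}
```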
If you do have an idea that is more than a half-developed thought, then you need to try to break it down into algorithmic components. Show us what the source data structures might look like, how those structures are (usefully) translated into 3D space, and then how they are rasterized to take advantage of this extra data. Also, it appears you are neglecting that time is already represented in these models, just not attached to discrete objects: it is simply a constantly ticking counter that applies throughout the entire data structure. Time is (for our purposes) universal and independent of objects, so why would you want to attach a time value to an object? Or a pixel?
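In code terms, a minimal sketch of the usual arrangement looks like the one below (the names, the fixed 60 Hz step, and the tiny scene are purely illustrative, not from any real engine): one universal clock ticks, every object is evaluated against that shared step, and nothing about time is stored per object or per pixel.

```cpp
// Conventional update loop: time is a single global counter that every
// object reads; objects do not carry their own time values.
#include <cstdio>
#include <vector>

struct Object {
    float position;
    float velocity;

    // The object never stores "its" time; it just advances by the shared dt.
    void update(float dt) { position += velocity * dt; }
};

int main() {
    std::vector<Object> scene = { {0.0f, 1.0f}, {5.0f, -2.0f} };
    float worldTime = 0.0f;            // the single, universal counter
    const float dt = 1.0f / 60.0f;     // fixed step for the example

    for (int frame = 0; frame < 3; ++frame) {
        worldTime += dt;               // time ticks once, for everything
        for (Object& obj : scene)
            obj.update(dt);            // objects are functions of that shared time
        std::printf("t=%.4f  first object at %.4f\n", worldTime, scene[0].position);
    }
    return 0;
}
```

Attaching a timestamp to each object or pixel would only duplicate information the global clock already provides.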
There are, obviously, an infinite number of ways that data representation and processing can occur, but virtually all of the possibilities are pointless, inefficient, or incompatible with the discrete binary nature of computers. The few ways that do work well are the ones currently being used and optimized for. Obviously you can get wonderful rendered images if you're willing to integrate values across a range of time, but that simply isn't feasible today.
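As a rough illustration of why, consider integrating a single pixel across a shutter interval. The shading function and sample count below are placeholders (stand-ins for whatever per-sample work a real renderer does); the point is simply that the cost multiplies with the number of time samples per pixel:

```cpp
// Integrating one pixel over a time range: the shading work is paid
// once per time sample, so N samples means roughly N times the cost.
#include <cstdio>

// Stand-in for whatever expensive shading happens at a single instant.
float shadePixelAtTime(float t) {
    return 0.5f + 0.5f * t;   // placeholder: brightness varies with time
}

// Integrate one pixel over [t0, t1] with N uniform time samples.
float integratePixel(float t0, float t1, int samples) {
    float sum = 0.0f;
    for (int i = 0; i < samples; ++i) {
        float t = t0 + (t1 - t0) * (i + 0.5f) / samples;
        sum += shadePixelAtTime(t);   // full shading cost, paid N times
    }
    return sum / samples;
}

int main() {
    // 16 time samples per pixel means roughly 16x the shading work.
    std::printf("pixel value: %.4f\n", integratePixel(0.0f, 1.0f / 60.0f, 16));
    return 0;
}
```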
Anyhow, I'm not trying to put down your idea, but I would like it stated clearly, in terms that can be evaluated against what is in use today, rather than as a set of vague allusions.