Albuquerque said:
I'm going to assume "temporal motion blur", but I'm lost on the rest of it.
obobski said:
Well, that's precisely what I thought it was, then. This will only be good for slow-moving pixels, though it may provide some computational efficiency. Even then, it's probably not going to be any better than simple MSAA, since it can't track motion over more than one pixel between frames.

obobski said:
The original concept is basically:
1. Use of 4 dimensions to represent the pixel location: X, Y, Z, and W
2. The 4th dimension is used to represent time
3. Time is used not only to tell the pixel when to appear, but how long to be part of the scene, and when to go away
4. This method would require a new GPU design (if nobody had gotten that yet), and would also require some serious coding
5. The concept stems from the hypercube, which exists in 4 (or more) dimensions, where the 4th dimension is usually taken to be time
6. To understand this method, think of it along the lines of Einstein's idea of everything existing at a certain point in time, but applied to a pixel, with a duration and some extra management
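The list above amounts to a pixel that carries a time coordinate and a lifetime alongside its position. As a minimal sketch of that idea (the class and field names here are invented for illustration, not from any real GPU API):

```python
from dataclasses import dataclass

@dataclass
class TemporalPixel:
    # Spatial position (x, y, z) plus the proposed 4th dimension (w = time):
    # t_start says when the pixel appears, duration says how long it stays.
    x: float
    y: float
    z: float
    t_start: float
    duration: float

    def visible_at(self, t: float) -> bool:
        """True if the pixel is part of the scene at time t."""
        return self.t_start <= t < self.t_start + self.duration

p = TemporalPixel(x=1.0, y=2.0, z=0.5, t_start=0.0, duration=0.1)
print(p.visible_at(0.05))  # True: within the pixel's lifetime
print(p.visible_at(0.2))   # False: the pixel has gone away
```

The point is only that "when to appear, how long to stay, when to go away" reduces to two extra scalars per pixel; everything interesting (how the hardware would schedule and composite such pixels) is left open.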
psurge said:
Here's another guess: the scene is represented as a cloud of points arranged in, say, a 4-dimensional kD-tree type structure (where the 4th dimension w is used to represent time, and x, y, z are world-space coordinates). To display a pixel, you obtain the 4-dimensional volume swept out by the pixel during the time slice (presumably some kind of approximation is used to obtain it) and intersect that volume with the kD-tree, combining all (or maybe the k "nearest") points you find to get a display pixel.
Basically, you don't rasterize triangles at some fixed time anymore: you store energies incident to points on your geometry (I think you might need to store some kind of incidence angle as well for accurate view-dependent shading), and associate a (small) time range or timestamp with each point. The time ranges or timestamps should be randomly distributed across a frame's exposure time for good quality, and basically give you a time interval during which you can reasonably use the corresponding point's energy in a flux estimate (for a display pixel).
I'm not really sure how you would go about obtaining the per-pixel flux estimate...
Serge
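One way to read psurge's suggestion: treat time as a fourth coordinate and gather the k nearest energy samples around a pixel's 4-D centre. The sketch below uses a brute-force linear scan instead of an actual kD-tree, and all data and names are made up for illustration:

```python
import math

# Each sample: (x, y, z, t, energy). A real implementation would index these
# in a 4-D kD-tree; a linear scan is enough to illustrate the query.
samples = [
    (0.0, 0.0, 1.0, 0.01, 0.8),
    (0.1, 0.0, 1.0, 0.05, 0.7),
    (2.0, 2.0, 1.0, 0.09, 0.2),
]

def estimate_pixel(cx, cy, cz, ct, k=2, time_scale=1.0):
    """Average the energies of the k samples nearest to the pixel's
    4-D centre (x, y, z, t), treating time as a fourth coordinate
    weighted by time_scale."""
    def dist4(s):
        x, y, z, t, _ = s
        return math.sqrt((x - cx)**2 + (y - cy)**2 + (z - cz)**2
                         + (time_scale * (t - ct))**2)
    nearest = sorted(samples, key=dist4)[:k]
    return sum(s[4] for s in nearest) / k

print(estimate_pixel(0.0, 0.0, 1.0, 0.03))  # 0.75: the two nearby samples average
```

The `time_scale` knob reflects the open question in the post: distances in space and distances in time are not directly comparable, so some weighting between them has to be chosen before a "nearest in 4-D" query means anything.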
Chalnoth said:
*Cough* By the way:
You're = you are.
Your = indicates possession.
For those of us who see meaning in written words instead of sounds, it makes things very hard to read.
obobski said:
3. Time is used not only to tell the pixel when to appear, but how long to be part of the scene, and when to go away
Dave B(TotalVR) said:
The best way to do it, IMO, is to have a general shader on every small object (rockets, cars, whatever) that knows how long it took to render the last frame, so it can guess how many pixels along the object will move in the next frame. Then just render the object to texture many times, with a 1-pixel offset each time, along the whole length. It should make projectiles look MUCH better.
Ragemare said:
Here's an idea:
You have a scene with a single square. The top-left coordinate of the square is at x=1, y=1, z=2, and it is 5 pixels wide and tall, so one draw call creates 25 pixels, which all have a scene depth of 2 (z) since the square is flat to the screen. The (temporal) pixels are sent to the graphics card's temporal pixel buffer with a set amount of time that it (the graphics card) should display them before they fade out, let's say W=0.1 seconds. Depending on whether or not a pixel is hidden because of its depth in the scene (and plausibly its alpha value), the graphics card maps these pixels to the display pixels in the screen buffer.

In order to get a fluid image you would have to add intermediate pixels between the current pixel and its predecessor (i.e. the previous pixel to come from a certain point on a surface). This would be done asynchronously to the creation of the temporal pixels being sent to the temporal pixel buffer.

At low frame rates it might look a bit odd, and the temporal pixel buffer would have to be huge for a complex scene... I think I just described a very crude version of what psurge was talking about, but tbh I'm out of my depth.
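Ragemare's temporal pixel buffer can be sketched as a container that ages its pixels, evicts the expired ones, and depth-tests the survivors. The class and method names below are invented for illustration, assuming smaller depth means closer:

```python
# Rough sketch of a "temporal pixel buffer": each submitted pixel carries a
# screen position, a depth, and a lifetime W. Every tick the buffer ages its
# pixels and drops expired ones; at resolve time the nearest surviving pixel
# per screen location wins the depth test.

class TemporalPixelBuffer:
    def __init__(self):
        self.pixels = []  # each entry: [x, y, depth, lifetime_remaining]

    def submit(self, x, y, depth, lifetime):
        self.pixels.append([x, y, depth, lifetime])

    def tick(self, dt):
        """Age every pixel by dt and evict the ones that have faded out."""
        for p in self.pixels:
            p[3] -= dt
        self.pixels = [p for p in self.pixels if p[3] > 0.0]

    def resolve(self):
        """Depth-test surviving pixels: smallest depth wins per (x, y)."""
        screen = {}
        for x, y, depth, _ in self.pixels:
            if (x, y) not in screen or depth < screen[(x, y)]:
                screen[(x, y)] = depth
        return screen

buf = TemporalPixelBuffer()
buf.submit(1, 1, depth=2.0, lifetime=0.1)   # the square's corner pixel, W=0.1
buf.submit(1, 1, depth=1.0, lifetime=0.05)  # something closer, shorter-lived
buf.tick(0.06)                              # the closer pixel expires
print(buf.resolve())  # → {(1, 1): 2.0}
```

This also makes the post's closing worry concrete: the buffer holds every still-alive pixel from every draw call, so for a complex scene its size grows with scene complexity times lifetime, not with screen resolution.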