Ok, we were toying with a hypercube (a 4-D object) today and it got me thinking: it could be the answer to TMB.
You have the X, Y and Z values to represent the pixel's location, and with 4-D you have a W value which can represent time or another variable. My thinking was to have W represent:
√(refresh Hz) = n
1 second / n = x
x = fractional display time for a pixel
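As a quick sanity check on those numbers, here's a tiny Python sketch of the formula above (the function name is made up, and the square-root relationship is just my formula, not anything real hardware does):

```python
import math

def pixel_display_time(refresh_hz):
    # n = the square root of the refresh rate, per the formula above
    n = math.sqrt(refresh_hz)
    # x = 1 second / n = the fractional display time for a pixel
    return 1.0 / n

# e.g. at 100 Hz: n = 10, so each pixel would be held for 0.1 s
print(pixel_display_time(100.0))  # → 0.1
```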
Each pixel is independent, and each pixel is displayed exactly as long as it needs to be, so you aren't re-rendering a whole frame; individual pixels are addressed by the GPU, and individual planes are addressed to provide texturing with a similar level of updateability.
The advantage is you can cut re-rendering down a lot, whereas 3dfx's T-Buffer technique rendered each frame 4 times and combined them.
This technique would hold each pixel for either the correct amount of time or a specified amount of time (hence all sorts of blur and distortion effects become easily possible in hardware).
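To make the "only redraw what's expired" part concrete, here's a minimal Python sketch (the names and data layout are hypothetical, purely to illustrate the idea): each pixel carries its own expiry time, and a pass touches only the pixels whose hold has run out instead of redrawing the whole frame.

```python
def pixels_to_redraw(expiry_times, now):
    """Indices of pixels whose display time has elapsed at time `now`."""
    return [i for i, t in enumerate(expiry_times) if t <= now]

# Three pixels with different hold times (in seconds):
expiry = [0.10, 0.25, 0.10]

# At t = 0.15 only the first and third pixels need a redraw;
# the middle one is still within its hold time.
print(pixels_to_redraw(expiry, 0.15))  # → [0, 2]
```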
not sure if this is explained entirely correctly
But basically I'm saying give each pixel a finite display time. Yes, if you stared at the same point you'd still have re-draw, but for motion it would be much more fluid: instead of trying to implement TMB via frame compilation, implement it via REAL temporal output of the individual pixels, or pixel planes.
If this is hard to understand I can try explaining it more; it's not the simplest thing to convey. It does, however, seem a lot more logical than just amping up the power and bandwidth to run a TMB solution by rendering each frame 4 or 8 times and compiling them down.
It could also run a lot lighter on bandwidth requirements; I'd venture to guess this style of rendering would run VERY WELL on RV530.