Anyone have a comment on this, or a reason it's not employed?

obobski

Newcomer
OK, we were toying with a hypercube (a 4-D object) today and it got me thinking: it could be the answer to TMB

you have the X, Y, and Z values to represent the pixel's location, and with 4-D you have a W value which can represent time or another variable; my thinking was to have W represent:

√(refresh rate in Hz) = n
1 second / n = x
x = fractional display time for a pixel
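
(taking that formula at face value, purely as an illustrative sketch, here's what the numbers come out to for a hypothetical 60 Hz display:)

```c
#include <math.h>
#include <stdio.h>

/* Taking the W formula at face value:
 *   n = sqrt(refresh Hz), x = 1 second / n */
int main(void)
{
    double refresh_hz = 60.0;      /* hypothetical display refresh */
    double n = sqrt(refresh_hz);   /* ~7.75 */
    double x = 1.0 / n;            /* ~0.129 s of display time per pixel */
    printf("n = %.2f, x = %.3f s\n", n, x);
    return 0;
}
```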

each pixel is independent, and each pixel is displayed exactly as long as it needs to be, so you aren't re-rendering for a frame; individual pixels are addressed by the GPU, and individual planes are addressed to provide texturing at a similar level of updateability

the advantage is you can cut re-rendering down a lot, whereas 3dfx's technique with the T-Buffer rendered each frame 4 times and composited them
this technique would hold each pixel for either the correct amount of time or a specified amount of time (hence, all sorts of blur and distortion effects are now easily possible in hardware)

not sure if this is explained entirely correctly
but basically I'm saying give each pixel a finite display time; yes, if you stared at the same point you'd have re-draw, but for motion it would be much more fluid. Instead of trying to implement TMB via frame compilation, implement it via REAL temporal output of the individual pixels, or pixel planes

if this is hard to understand I can try explaining it more, but it's not the simplest thing to convey; it does, however, seem a lot more logical than just amping up the power and bandwidth to run a TMB solution by rendering each frame 4 or 8 times to compile down
it can also run a lot lighter on bandwidth requirements; I'd venture to guess this style of rendering would run VERY WELL on RV530
 
I think he's trying to say that the framebuffer will store an extra value which is how long to "hold" the pixel. Presumably, the 3D rasterizer, when it "clears" the framebuffer, will only clear pixels which don't need to be "held" at all. Also, pixels that are held fully won't need to be rasterized.

Finally, I assume that the "hold" value has a fade-in/fade-out ramp, so that it gets "partially held" (alpha blended).


It's kinda like selective update/damage rectangles in 2D desktop GUI rendering. You only update the portions of the screen which change or are "damaged" by other foreground objects.
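
A minimal sketch of what I think he means, in C; every name here is made up:

```c
#include <stdint.h>

#define WIDTH  640
#define HEIGHT 480

/* Hypothetical framebuffer entry: a color plus a per-pixel "hold"
 * counter. All names invented for illustration. */
typedef struct {
    uint32_t color;  /* packed RGBA */
    uint16_t hold;   /* refreshes left to keep this pixel as-is */
} held_pixel;

static held_pixel framebuffer[WIDTH * HEIGHT];

/* A "clear" pass that only wipes pixels whose hold has expired;
 * held pixels keep their color, so they don't need re-rasterizing. */
void clear_expired(uint32_t clear_color)
{
    for (int i = 0; i < WIDTH * HEIGHT; i++) {
        if (framebuffer[i].hold > 0)
            framebuffer[i].hold--;               /* still held */
        else
            framebuffer[i].color = clear_color;  /* expired: clear */
    }
}
```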

??
 
None of you got that...

yes, TMB is Temporal Motion Blur
but what I'm saying is to remove frames entirely, keep the pixels individual, and just repeat a pixel's output if it doesn't need to change. It's not like holding a pixel in the buffer and not updating it (that would cause problems); I'm saying just repeat the pixel individually instead of repeating a frame

I'm not sure if this concept can be explained any better...
 
obobski said:
None of you got that...

yes, TMB is Temporal Motion Blur
but what I'm saying is to remove frames entirely, keep the pixels individual, and just repeat a pixel's output if it doesn't need to change. It's not like holding a pixel in the buffer and not updating it (that would cause problems); I'm saying just repeat the pixel individually instead of repeating a frame

I'm not sure if this concept can be explained any better...
You can't "remove frames entirely". You don't paint individual pixels straight to the screen, you paint them into a buffer. Once the buffer is filled with the entire frame, you do a buffer swap to show that frame on the screen. You can't just leave a few pixels, because it's going to get entirely wiped by the next framebuffer swap.

Unless I'm completely missing what you're saying...
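
To make the point concrete, here's roughly what double buffering looks like (generic illustrative C, not any real driver API):

```c
#include <stdint.h>
#include <string.h>

#define WIDTH  640
#define HEIGHT 480

/* Two full frame buffers: everything is drawn into the back buffer,
 * then the buffers swap. Any pixels "left behind" in the front buffer
 * are gone as soon as the next swap happens. */
static uint32_t buffers[2][WIDTH * HEIGHT];
static int back = 0;

void render_frame(void)
{
    uint32_t *bb = buffers[back];
    memset(bb, 0, sizeof(buffers[0]));  /* clear the whole back buffer */
    /* ... rasterize the entire scene into bb ... */
    back ^= 1;                          /* swap: back becomes front */
}
```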
 
I don't quite get it. If you want accurate motion blur, you need to calculate intermediate steps for each frame; info about previous frames doesn't help at all. Hence the need for rendering a scene multiple times per frame.

If you want to cut down the rendering time of individual frames by calculating only the actually "moving" pixels, it would be a lot better to decide this per-object, not per-pixel
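
Concretely, the brute-force route for that first point is to render several intermediate steps and average them (roughly what the T-Buffer did). A sketch, where render_scene_at() is a hypothetical stand-in for a full rasterization pass:

```c
#include <stdint.h>

#define WIDTH     640
#define HEIGHT    480
#define SUBFRAMES 4    /* the T-Buffer used 4; purely illustrative here */

/* Hypothetical stand-in: rasterize the whole scene at intermediate
 * time t in [0,1) into dst (packed 0xRRGGBB). */
extern void render_scene_at(float t, uint32_t *dst);

/* Brute-force motion blur: render SUBFRAMES intermediate steps
 * within one frame and average them per channel. */
void render_blurred_frame(uint32_t *out)
{
    static uint32_t sub[WIDTH * HEIGHT];
    static uint32_t r[WIDTH * HEIGHT], g[WIDTH * HEIGHT], b[WIDTH * HEIGHT];

    for (int i = 0; i < WIDTH * HEIGHT; i++)
        r[i] = g[i] = b[i] = 0;

    for (int s = 0; s < SUBFRAMES; s++) {
        render_scene_at((float)s / SUBFRAMES, sub);
        for (int i = 0; i < WIDTH * HEIGHT; i++) {
            r[i] += (sub[i] >> 16) & 0xff;
            g[i] += (sub[i] >> 8)  & 0xff;
            b[i] +=  sub[i]        & 0xff;
        }
    }
    for (int i = 0; i < WIDTH * HEIGHT; i++)
        out[i] = ((r[i] / SUBFRAMES) << 16)
               | ((g[i] / SUBFRAMES) << 8)
               |  (b[i] / SUBFRAMES);
}
```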
 
each pixel is independent, and each pixel is displayed exactly as long as it needs to be, so you aren't re-rendering for a frame; individual pixels are addressed by the GPU, and individual planes are addressed to provide texturing at a similar level of updateability
So you need N backplanes for your framebuffer (which itself is X * Y pixels in size)? And you need to compare all those planes for all pixels to determine what has changed and how to interpolate between them? That's a lot of bandwidth and a lot of memory...
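
Back-of-the-envelope, with made-up numbers:

```c
#include <stdio.h>

/* Rough memory cost for N backplanes, invented parameters:
 * 1024x768 pixels, 4 bytes per pixel, 4 planes. */
int main(void)
{
    long width = 1024, height = 768, bytes_per_pixel = 4, planes = 4;
    long bytes = width * height * bytes_per_pixel * planes;
    printf("%ld bytes = %ld MiB, compared across planes every refresh\n",
           bytes, bytes / (1024 * 1024));   /* 12582912 bytes = 12 MiB */
    return 0;
}
```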
 
This would be a good approximation for slow-moving objects, but does nothing for the data that is missing because an object moved more than one pixel since the previous frame.
 
If each pixel is given a finite display time, then I presume this must be calculated somewhere and then stored to your cube. Trouble is, you then need to know the future (kinda frame by frame, or x by x in your parlance), so in reality it just seems to me you are saying let's pre-compute frames W (= n * Constant) ahead. Otherwise you don't know what is displayed in each pixel and how it is lit.

I can't see how this can be avoided from your description, nor how it holds any value. You haven't avoided any work, because you have to calculate how long pixels last (which depends on the physics of the observer's motion and of all the static, dynamic, and colliding bodies moving through this environment) and then determine how light sources are changing in position, type, and brightness of illumination within this environment.

Without knowing this (a serious look-forward-in-time function) you can't use such a memory structure. And the cost to do this would be HUGE, and subject to change as dependencies in the environment start interacting with each other.

Try expanding what you envisaged using, for example, a 10-second action slice of, say, Return to Castle Wolfenstein, with 32 players blasting at each other in fury!
 
g__day said:
If each pixel is given a finite display time, then I presume this must be calculated somewhere and then stored to your cube. Trouble is, you then need to know the future (kinda frame by frame, or x by x in your parlance), so in reality it just seems to me you are saying let's pre-compute frames W (= n * Constant) ahead.
Well, not really. You can just speculate it based upon the instantaneous velocity of the pixel. But I think the primary problem is still that this only works with very slowly-moving objects, a problem that is adequately handled by MSAA.
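
Something like this, as a sketch (all names invented, not a real API):

```c
/* Speculate a pixel's next position from its instantaneous velocity. */
typedef struct { float x, y; } vec2;

typedef struct {
    vec2 pos;  /* screen position this frame */
    vec2 vel;  /* instantaneous velocity, pixels per frame */
} moving_pixel;

/* Extrapolate dt frames ahead. Reasonable while motion is slow and
 * roughly linear; breaks down the moment an object jumps several
 * pixels or changes direction mid-frame. */
vec2 extrapolate(moving_pixel p, float dt)
{
    vec2 next = { p.pos.x + p.vel.x * dt,
                  p.pos.y + p.vel.y * dt };
    return next;
}
```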
 
You're still not getting this...

and I'm not saying render the pixel straight to the screen, not entirely at least
this concept entirely defies anything that currently exists in 3D processing, by trying to override how things are done and run a "cleaner" solution

let me try again (the equation for W is still somewhat the problem: I know what I want it to do, but putting it into text isn't entirely working, and this is the last time I try to put something like that into text while on my lunch break in the school library... way too little time to think it over)

ok, what I'm saying is to work it somewhat similarly to how alpha pixels work
it's based on the concept of a 4-D object:
x - width
y - height
z - depth
w - time

the w variable is there to convey the TMB information; the pre-computation is something I hadn't put a ton of thought into, and I understand your point on that, but this shouldn't require pre-computation

the time display could be used as an elapse feature: each pixel is rendered with all of the TMB data already on it, so the image comes out TMB'd without having to stack 4 frames
the time display could also be used for backgrounds or looped textures (which are becoming less common, but a sky or sun scene could employ them, in theory)

I understand all of your comments on this, but you're all hitting on the edges of it; it's definitely nice to see people trying to help and comment on it, but I feel I haven't conveyed it fully

the elapse render feature seems to be the best idea:
instead of stacking 4 frames into a buffer, like the T-Buffer did, render the pixels with all the TMB features already on them

your time elapse is based on how many times the screen redraws per second
if it's 30 Hz, you're getting 30 frames each second, so N is 1/30th of a second long
if it's 60 Hz, you're getting 60 frames each second, so N is 1/60th of a second long
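
in code form, trivially:

```c
/* Frame period N from refresh rate: N = 1 second / refresh Hz. */
double frame_period(double refresh_hz)
{
    return 1.0 / refresh_hz;   /* 30 Hz -> ~0.0333 s, 60 Hz -> ~0.0167 s */
}
```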

you're basically "exposing" each pixel for a duration
this wouldn't require an amazingly powerful GPU, just one with an amazingly high pixel fill rate
each pixel is rendered "pre-TMB'd" and then put onto the buffer
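
roughly like this, per pixel (a sketch; sample_scene() is a hypothetical stand-in for evaluating the scene color at a point and time):

```c
/* "Pre-TMB'd" pixel: integrate each pixel over its exposure window
 * before it ever reaches the buffer, instead of stacking whole frames. */
typedef struct { float r, g, b; } color3;

extern color3 sample_scene(float x, float y, float t);

#define EXPOSURE_STEPS 8   /* made-up sample count per pixel */

color3 shade_pixel_with_blur(float x, float y, float t0, float exposure)
{
    color3 acc = { 0.0f, 0.0f, 0.0f };
    for (int s = 0; s < EXPOSURE_STEPS; s++) {
        /* sample times spread across the pixel's exposure window */
        float t = t0 + exposure * ((float)s / EXPOSURE_STEPS);
        color3 c = sample_scene(x, y, t);
        acc.r += c.r; acc.g += c.g; acc.b += c.b;
    }
    acc.r /= EXPOSURE_STEPS; acc.g /= EXPOSURE_STEPS; acc.b /= EXPOSURE_STEPS;
    return acc;   /* this pixel lands in the buffer already blurred */
}
```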

that's the closest I can come to my original concept that would make sense
and it's not as good as the original concept

the original concept removes FPS and full frames from the solution, and instead would rely on basically a super-fine tiling system (you could start with larger-ish tiles, maybe 100x100 at max), with each tile given its own 2-pixel/4-texture-op unit (or something similar)

so you'd have a GPU, say 32x2
that allows for 16 tiles if each pipe unit is dedicated, but to allow alternation just swap the pipes on rendering operations; this would require a fast GPU, however (400-800 MHz core?), to handle making the changeovers without lagging individual tiles down

another tiling configuration could be FPS-based: focus the majority of the tiles in the middle main FOV area, and have larger tiles for the non-standard FOV areas (like the bottom of the screen, the top, and a few other points) to allow for higher precision and better performance on the focus points
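
for example, something like this for picking tile sizes (all numbers invented):

```c
/* FOV-weighted tiling sketch: fine tiles in the central focus region,
 * coarse tiles toward the edges. */
#define SCREEN_W 1600
#define SCREEN_H 1200

/* 50x50 tiles within the central quarter of the screen on each axis,
 * 200x200 tiles everywhere else. */
int tile_size_at(int x, int y)
{
    int dx = x > SCREEN_W / 2 ? x - SCREEN_W / 2 : SCREEN_W / 2 - x;
    int dy = y > SCREEN_H / 2 ? y - SCREEN_H / 2 : SCREEN_H / 2 - y;
    return (dx < SCREEN_W / 4 && dy < SCREEN_H / 4) ? 50 : 200;
}
```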

yes, these are now two separate ideas...
 
obobski said:
OK, we were toying with a hypercube (a 4-D object) today and it got me thinking: it could be the answer to TMB

you have the X, Y, and Z values to represent the pixel's location, and with 4-D you have a W value which can represent time or another variable; my thinking was to have W represent:

√(refresh rate in Hz) = n
1 second / n = x
x = fractional display time for a pixel

each pixel is independent, and each pixel is displayed exactly as long as it needs to be, so you aren't re-rendering for a frame; individual pixels are addressed by the GPU, and individual planes are addressed to provide texturing at a similar level of updateability

the advantage is you can cut re-rendering down a lot, whereas 3dfx's technique with the T-Buffer rendered each frame 4 times and composited them
this technique would hold each pixel for either the correct amount of time or a specified amount of time (hence, all sorts of blur and distortion effects are now easily possible in hardware)

not sure if this is explained entirely correctly
but basically I'm saying give each pixel a finite display time; yes, if you stared at the same point you'd have re-draw, but for motion it would be much more fluid. Instead of trying to implement TMB via frame compilation, implement it via REAL temporal output of the individual pixels, or pixel planes

if this is hard to understand I can try explaining it more, but it's not the simplest thing to convey; it does, however, seem a lot more logical than just amping up the power and bandwidth to run a TMB solution by rendering each frame 4 or 8 times to compile down
it can also run a lot lighter on bandwidth requirements; I'd venture to guess this style of rendering would run VERY WELL on RV530

The computational power to do that is beyond current hardware.
 
By the way:
You're = you are.
Your = your possessions.

For those of us who see meaning in written words instead of sounds, it makes things very hard to read.
 
There's one critical thing that your idea (or your explanation of the idea) is missing: how are you going to translate the 4-D buffer into a final frame?
 
Translation of the 4-D buffer into a final frame? Please explain what you're asking.

I'm not entirely oblivious, but that question makes it seem like you might still not be getting this.
As for the your/you're problem, I'll remember to correct the issue, for your enjoyment, and so that you're without cause to gripe at me.
 