Anyone have a comment on this, or a reason it's not employed?

Well, you've said that you're going to use a 4-D representation for objects. How, precisely, do you intend to translate that to a 2D image on the screen?
 
The 4th dimension is used for a time value; it's invisible...

The output to the screen is identical to a normal 3D render. Do you see alpha pixels? No. Do they exist? Yes.

Similar concept, but not entirely the same.
 
Then I fail to understand how there's any motion blurring going on if you're doing nothing more than rendering a discrete timeslice (which is the norm today).
 
I don't understand how you are mapping this 4D cube to a 2D window surface at any point in time, either.

Imagine you had a scene where you were skating through a car park and certain cars were blowing up. How would you render it, given that your course could change whenever you want to turn, and a car exploding turns into a gigantic grenade?

So how are you mapping this 3-dimensional scene plus time into a cube where time slices are one dimension? And how are you allowing the point of view to move randomly, and light sources, types and directions to change?
 
Basically he's talking about the accumulation buffer technique of doing motion blurring. Each pixel is given a timestamp (when you first draw it) and a duration (how long the image remains). Then you can use a shader to manipulate the pixel, for example, fading it out as time passes. However, this ends up being the exact same thing as the accumulation buffer method, and you will still get the jarring (non-)transition between each rendering pass/frame.
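For reference, a rough Python/NumPy sketch of that accumulation-buffer approach (the render_scene function is just a stand-in for a real rendering pass, and the numbers are made up):

Code:
import numpy as np

def render_scene(t, height=4, width=4):
    # Stand-in for a full rasterization pass: a one-pixel-wide object
    # that moves horizontally as time advances.
    frame = np.zeros((height, width, 3), dtype=np.float32)
    frame[:, int(t * width) % width, :] = 1.0
    return frame

def accumulation_blur(t_open, shutter, samples=8):
    # Classic accumulation-buffer motion blur: render several discrete
    # sub-frames spread across the shutter interval and average them.
    acc = np.zeros_like(render_scene(t_open))
    for i in range(samples):
        t = t_open + shutter * (i + 0.5) / samples
        acc += render_scene(t)
    return acc / samples

blurred = accumulation_blur(t_open=0.0, shutter=0.5, samples=8)
print(blurred[0, :, 0])   # the moving object is smeared across the pixels it crossed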

Of course, maybe he's trying to say that all the primitives should be changed from points, lines and polys into lines, planes and volumes that extend into an additional dimension (time). Then the GPU would have to somehow sum/integrate over this added dimension to get the composite image for a specific time interval (shutter speed). Of course, this is really the same thing as blurring by velocity as described here ( http://developer.nvidia.com/docs/IO/8230/GDC2003_OpenGLShaderTricks.pdf ), but with infinite samples (is that even possible?).
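And a toy version of the velocity-based alternative (not the code from those slides; a handful of taps stands in for the integral, and the scene and velocities here are invented):

Code:
import numpy as np

def velocity_blur(frame, velocity, samples=8):
    # Blur each pixel along its screen-space velocity (pixels per exposure),
    # approximating the time integral with a finite number of taps rather
    # than the "infinite samples" of a true integral.
    h, w, _ = frame.shape
    out = np.zeros_like(frame)
    for y in range(h):
        for x in range(w):
            vx, vy = velocity[y, x]
            acc = np.zeros(3, dtype=np.float32)
            for s in range(samples):
                t = (s + 0.5) / samples - 0.5        # centered on the exposure
                sx = int(round(x + vx * t)) % w      # wrap at the edges for simplicity
                sy = int(round(y + vy * t)) % h
                acc += frame[sy, sx]
            out[y, x] = acc / samples
    return out

frame = np.zeros((4, 8, 3), dtype=np.float32)
frame[:, 3, :] = 1.0                                 # a bright vertical stripe
velocity = np.zeros((4, 8, 2), dtype=np.float32)
velocity[..., 0] = 4.0                               # everything moves 4 px to the right
print(velocity_blur(frame, velocity)[0, :, 0])       # the stripe is smeared along x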
 
obobski said:

So you suggest that a tile-based deferred renderer could collapse the rendering of the multiple sub-frames representing the different discrete time samples and write the combined result, right?
If so, this solution only saves framebuffer bandwidth, but motion-blurred rendering is not necessarily bandwidth limited - not any more than normal rendering.
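To spell out what I mean by collapsing the sub-frames per tile, a rough Python sketch (the per-tile renderer is just a stand-in):

Code:
import numpy as np

def render_tile_at_time(ty, tx, t, tile=8):
    # Stand-in for rendering one tile of the scene at time t:
    # a stripe that moves across the tile as t advances.
    block = np.zeros((tile, tile, 3), dtype=np.float32)
    block[:, int(t * tile) % tile, :] = 1.0
    return block

def tiled_temporal_collapse(height, width, t_open, shutter, samples=4, tile=8):
    # Render each tile at several time samples, average them "on chip",
    # and write only the collapsed result out, so framebuffer traffic is
    # the same as for an unblurred frame.
    out = np.zeros((height, width, 3), dtype=np.float32)
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            acc = np.zeros((tile, tile, 3), dtype=np.float32)
            for s in range(samples):
                t = t_open + shutter * (s + 0.5) / samples
                acc += render_tile_at_time(ty, tx, t, tile)
            out[ty:ty + tile, tx:tx + tile] = acc / samples   # one write per tile
    return out

print(tiled_temporal_collapse(16, 16, t_open=0.0, shutter=1.0)[0, :, 0])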

And I still cannot get how this could be turned into a continuous method.
Your hypercube explanation makes no sense whatsoever.
 
No, your still not understanding what I'm trying to say...

Hyp-X got a part of it with his tiled guess, but I just don't even know how to try explaining this anymore...

Basically the original concept has been corrupted by your interpretations, so I'm having trouble separating the original concept from the spawned concept of combining time-lapsed pixels.

The original concept is basically:
1. Use of 4 dimensions to represent the pixel location: X, Y, Z, and W
2. The 4th dimension is used to represent time
3. Time is used not only to tell the pixel when to appear, but how long to be part of the scene, and when to go away
4. This method would require a new GPU design (if nobody had gotten that yet), and would also require some serious coding
5. The concept stems from a hypercube, in that a hypercube exists in 4 dimensions (or more?) and the 4th dimension is usually time
6. Basically, to understand this method, consider it more along the lines of Einstein's notion that everything exists at a certain point in time, but applied to a pixel, with duration and more management

The idea is a more detailed display of pixels to enact TMB. And no, this isn't like 3dfx's method of using the T-Buffer, wherein frames are compiled to create motion blur; this isn't using a special buffer. It's putting the time-lapse and motion-blur data into the rendering pipeline itself; it's more like a hardware implementation through a non-conventional design. Beyond that I doubt I can even try to explain further...
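The closest I can get to pinning it down is a rough data-structure sketch of what points 1-3 in the list above seem to describe (the names and the fade rule are just guesses, not a finished design):

Code:
from dataclasses import dataclass

@dataclass
class TemporalPixel:
    # Spatial coordinates plus the 4th, temporal coordinate (points 1 and 2).
    x: float
    y: float
    z: float
    t_appear: float   # when the pixel enters the scene (the "W" value)
    duration: float   # how long it stays part of the scene (point 3)
    rgb: tuple        # shaded color

    def visible_at(self, t):
        # The pixel contributes only while t lies inside its lifetime.
        return self.t_appear <= t < self.t_appear + self.duration

    def weight_at(self, t):
        # Optional fade-out over the lifetime, as suggested earlier in the thread.
        if not self.visible_at(t):
            return 0.0
        return 1.0 - (t - self.t_appear) / self.duration

p = TemporalPixel(x=1.0, y=1.0, z=2.0, t_appear=0.0, duration=0.1, rgb=(1.0, 0.0, 0.0))
print(p.visible_at(0.05), p.weight_at(0.05))   # True 0.5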
 
Please tell me: how would you set a time for a pixel to show if every person who plays the game plays it differently? What if someone looks at the pixel longer than the time? Does it reappear again? Does it just disappear because its time is up? I don't get it; having a "time limit" on a pixel actually sounds rather stupid and completely pointless to me. Maybe this could be done somewhere where the motion is predetermined, but that only sounds like cutscenes.
 
Here's another guess: the scene is represented as a cloud of points, arranged in, say, a 4-dimensional kD-tree type structure (where the 4th dimension w is used to represent time, and x, y, z are world-space coordinates). To display a pixel, you obtain the 4-dimensional volume swept out by the pixel during the time slice (presumably some kind of approximation is used to obtain it) and intersect that volume with the kD-tree, combining all (or maybe the k "nearest") points you find to get a display pixel.

Basically, you don't rasterize triangles at some fixed time anymore; you store energies incident to points on your geometry (I think you might need to store some kind of incidence angle as well for accurate view-dependent shading), and associate a (small) time range or timestamp with each point. The time ranges or timestamps should be randomly distributed across a frame's exposure time for good quality, and basically give you a time interval during which you can reasonably use the corresponding point's energy in a flux estimate (for a display pixel).
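Something like this, perhaps, in rough Python using scipy's cKDTree (the k-nearest gather and inverse-distance weighting at the end are just a crude placeholder for the combining step, and all the numbers are made up):

Code:
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Shaded surface points: a world-space position, a random timestamp inside
# one frame's exposure, and a stored energy (rgb).
n = 10000
positions = rng.uniform(-1.0, 1.0, size=(n, 3))
timestamps = rng.uniform(0.0, 1.0 / 60.0, size=(n, 1))
energies = rng.uniform(0.0, 1.0, size=(n, 3))

# 4D tree over (x, y, z, t); time is scaled so a distance in seconds is
# comparable to a distance in world units (an arbitrary choice here).
time_scale = 60.0
tree = cKDTree(np.hstack([positions, timestamps * time_scale]))

def display_pixel(pixel_center_world, t_mid, k=16):
    # Gather the k nearest shaded points in space-time and average their
    # energies - a crude stand-in for a proper flux estimate.
    query = np.append(pixel_center_world, t_mid * time_scale)
    dist, idx = tree.query(query, k=k)
    weights = 1.0 / (dist + 1e-6)
    return (energies[idx] * weights[:, None]).sum(axis=0) / weights.sum()

print(display_pixel(np.array([0.0, 0.0, 0.5]), t_mid=1.0 / 120.0))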

I'm not really sure how you would go about obtaining the per-pixel flux estimate...

Serge
 
obobski said:
The original concept is basically:
1. Use of 4 dimensions to represent the pixel location: X, Y, Z, and W
2. The 4th dimension is used to represent time
3. Time is used not only to tell the pixel when to appear, but how long to be part of the scene, and when to go away
4. This method would require a new GPU design (if nobody had gotten that yet), and would also require some serious coding
5. The concept stems from a hypercube, in that a hypercube exists in 4 dimensions (or more?) and the 4th dimension is usually time
6. Basically, to understand this method, consider it more along the lines of Einstein's notion that everything exists at a certain point in time, but applied to a pixel, with duration and more management
Well, that's precisely what I thought it was, then. This will only be good for slow-moving pixels, and may provide some computational efficiency. Even then, it's probably not going to be any better than simple MSAA, since it can't track motion over more than one pixel between frames.
 
psurge said:
Here's another guess: the scene is represented as a cloud of points, arranged in, say, a 4-dimensional kD-tree type structure (where the 4th dimension w is used to represent time, and x, y, z are world-space coordinates). To display a pixel, you obtain the 4-dimensional volume swept out by the pixel during the time slice (presumably some kind of approximation is used to obtain it) and intersect that volume with the kD-tree, combining all (or maybe the k "nearest") points you find to get a display pixel.

Basically, you don't rasterize triangles at some fixed time anymore; you store energies incident to points on your geometry (I think you might need to store some kind of incidence angle as well for accurate view-dependent shading), and associate a (small) time range or timestamp with each point. The time ranges or timestamps should be randomly distributed across a frame's exposure time for good quality, and basically give you a time interval during which you can reasonably use the corresponding point's energy in a flux estimate (for a display pixel).

I'm not really sure how you would go about obtaining the per-pixel flux estimate...

Serge


If I knew what a kD-tree was, or wtf you just said, I'm guessing that's what I'm trying to say... lol
Sorry for not knowing those things, but I'm guessing I get what their function is without knowing their name (but I don't know)
 
Chalnoth said:
By the way:
You're = you are.
Your = your posessions.

For those of us that see meaning in written words instead of sounds, it makes things very hard to read.
*Cough*

posessions = possessions

If you're going to be a smartass, do it right.
 
Best way to do it, IMO, is to have a general shader on every small object (rockets, cars, whatever) that knows how long it took to render the last frame, so it can guess how many pixels the object will move in the next frame. Then just render the object to texture many times with a 1-pixel offset each along the whole length. It should make projectiles look MUCH better.
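Something like this in Python/NumPy rather than a shader (the offsets and the fade are just one possible choice):

Code:
import numpy as np

def streak_blit(canvas, sprite, x, y, dx, steps):
    # Composite the same sprite several times, offset one pixel per step
    # along its motion and fading the older copies, to fake a trail.
    h, w = sprite.shape
    for i in range(steps):
        alpha = (i + 1) / steps            # the oldest copy is the faintest
        ox = x - dx * (steps - 1 - i)      # older copies sit further back
        canvas[y:y + h, ox:ox + w] = (
            canvas[y:y + h, ox:ox + w] * (1 - alpha) + sprite * alpha
        )
    return canvas

canvas = np.zeros((8, 32), dtype=np.float32)
sprite = np.ones((2, 2), dtype=np.float32)       # a tiny "rocket"
streak_blit(canvas, sprite, x=20, y=3, dx=1, steps=6)
print(np.round(canvas[3], 2))                    # brightness ramps up toward the head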
 
Here's an idea:

You have a scene with a single square; the top-left coordinate of the square is at x=1, y=1, z=2, and it is 5 pixels wide and tall, so in one draw call it creates 25 pixels which all have a scene depth of 2 (z), as the square is flat to the screen. The (temporal) pixels are sent to the graphics card's temporal pixel buffer with a set amount of time that it (the graphics card) should display them before they fade out, let's say W=0.1 seconds. Depending on whether or not the pixel is hidden because of its depth in the scene, and plausibly its alpha value, the graphics card maps these pixels to the display pixels in the screen buffer. In order to get a fluid image you would have to add intermediate pixels between the current pixel and its predecessor (i.e. the previous pixel to come from a certain point on a surface). This would be done asynchronously to the creation of the temporal pixels being sent to the temporal pixel buffer.

At low frame rates it might look a bit odd, and the temporal pixel buffer would have to be huge for a complex scene... I think I just described a very crude version of what psurge was talking about, but tbh I'm out of my depth :LOL:
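In rough Python, the resolve step I'm imagining might look like this (very much a guess):

Code:
import numpy as np

def resolve_temporal_buffer(entries, now, width, height):
    # Resolve the temporal pixel buffer into an ordinary screen buffer:
    # drop entries whose time is up, then keep the nearest surviving
    # entry per screen location (a simple depth test).
    color = np.zeros((height, width, 3), dtype=np.float32)
    depth = np.full((height, width), np.inf, dtype=np.float32)
    for (x, y, z, t_emit, lifetime, rgb) in entries:
        if now < t_emit or now >= t_emit + lifetime:
            continue                       # not yet visible, or expired
        if z < depth[y, x]:
            depth[y, x] = z
            color[y, x] = rgb
    return color

# The 5x5 square from the example above: 25 entries at depth z=2,
# emitted at t=0 with a lifetime of W=0.1 seconds.
entries = [(1 + i, 1 + j, 2.0, 0.0, 0.1, (1.0, 1.0, 1.0))
           for i in range(5) for j in range(5)]
frame = resolve_temporal_buffer(entries, now=0.05, width=8, height=8)
print(frame[:, :, 0].astype(int))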
 
Does anyone remember playing games on the ancient green-screen monitors that had really slow phosphorescent fall-time? That was some pretty spiffy temporal blur for free... nothing like Hardhat Mac or Miner 2049'er with trippy glow-trails when you run.

I'm having serious problems translating obobski's vague suppositions into concrete data flow...

What form is used for object space modelling? Classical polygonal data?

If the origin of these "4-D pixels" is the same as traditional renderers, then how is the time component generated? Is that value attached to the source triangle somehow? Is this data created by the driver or the GPU somehow and then stored in the resultant rasterized pixel buffer?

What value is there in storing this "time" element? Can a pseudo-algorithm be proposed by obobski that describes what is actually done with this time value?

A pixel Time-To-Live seems rather useless. A value that represents motion of the pixel during a timeslice might be of worth, but a single value doesn't represent that at all. At a minimum you'd need data about how this pixel is moving during that timeslice... and, really, once it's rasterized I can't see how this value is of use. To properly rasterize a timeslice you'd have to compute the positional data of the original primitive, wouldn't you? It's not the pixel that is moving in time, but rather the represented object that is moving in time. And the motion of that object does not create the exact same pixel over time when it is in motion, because it will also be changing in distance and incident angle to the observer.

As others have said, I'd think you'd want to attach the time delta information to the original geometry so that the rasterizing process could do its work computing the values for the resultant pixels for positions during that timeslice. But doesn't this come out to be an accumulation-buffer analog anyhow? No matter how fancy you get with representing this timeslice, the fact is that you'll still need to subdivide it into discrete sub-frames for rasterization to be combined in the final output frame.
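To make that concrete, a rough Python sketch of what attaching the time delta to the geometry could look like (the fake_rasterize function and the triangle are stand-ins of my own):

Code:
import numpy as np

def lerp(a, b, u):
    return a + (b - a) * u

def rasterize_timeslice(verts_t0, verts_t1, rasterize, sub_frames=4):
    # Attach the time delta to the geometry rather than to finished pixels:
    # interpolate the primitive across the timeslice, rasterize each
    # intermediate pose, and combine - an accumulation buffer, in effect.
    acc = None
    for s in range(sub_frames):
        u = (s + 0.5) / sub_frames
        img = rasterize(lerp(verts_t0, verts_t1, u))
        acc = img if acc is None else acc + img
    return acc / sub_frames

def fake_rasterize(verts, width=16):
    # Stand-in rasterizer: lights the pixel column nearest the centroid.
    img = np.zeros(width, dtype=np.float32)
    img[int(np.clip(verts[:, 0].mean(), 0, width - 1))] = 1.0
    return img

tri_t0 = np.array([[2.0, 0.0], [3.0, 2.0], [1.0, 2.0]])   # screen-space verts at slice start
tri_t1 = tri_t0 + np.array([8.0, 0.0])                    # moved 8 px right by slice end
print(rasterize_timeslice(tri_t0, tri_t1, fake_rasterize))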
 
obobski said:
3. Time is used not only to tell the pixel when to appear, but how long to be part of the scene, and when to go away

What use is there in storing rasterized pixel data if you aren't going to show it at all? If a pixel passes all culling and visibility tests and is rendered, then why in the heck would you not want to show it? Why would you want to have an algorithm that does a ton of work on pixels, stores additional values, and then doesn't show it?

Realize that a "hypercube" is a theoretical object that can only be approximated in an arbitrarily representational model. We cannot see in four dimensions, so it follows that any 3D projection is an arbitrary representation that fits our expectations. Likewise, even if time *is* the fourth dimension, humans cannot see into it, but are constrained to this instant of time, so the time component cannot be directly viewed but is instead inferred by the way our brains process visual data. In short, our visual cortex is an accumulation buffer that we consider to be a present representation of what we see, although what our eyes are actually sensing is quite a bit less than that mental representation.

If you do have an idea that is more than a half-developed thought, then you need to try to break it down into algorithmic components. Show us what the source data structures might look like, how those structures are translated (usefully) into 3D space, and then rasterized to take advantage of this extra data. Also, it appears you are neglecting that time is already represented in these models, just not as being attached to discrete objects, but as simply being a constantly ticking counter that models time throughout the entire data structure. Time is (for our purposes) universal and independent of objects, so why would you want to attach a time value to an object? Or a pixel?

There are, obviously, an infinite number of ways that data representation and processing can occur, but virtually all of the possibilities are pointless, inefficient, or incompatible with the discrete binary nature of computers. The few ways that do work well are currently being used and optimized for. Obviously you can get wonderful rendered images if you're willing to integrate values across a range, but that simply isn't feasible today.

Anyhow, I'm not trying to put down your idea, but I would like the idea clearly stated in terms that can be evaluated against what is in use today rather than very vague allusions.
 
Dave B(TotalVR) said:
Best way to do it, IMO, is to have a general shader on every small object (rockets, cars, whatever) that knows how long it took to render the last frame, so it can guess how many pixels the object will move in the next frame. Then just render the object to texture many times with a 1-pixel offset each along the whole length. It should make projectiles look MUCH better.

Ragemare said:
Here's an idea;

You have a scene with a single square; the top-left coordinate of the square is at x=1, y=1, z=2, and it is 5 pixels wide and tall, so in one draw call it creates 25 pixels which all have a scene depth of 2 (z), as the square is flat to the screen. The (temporal) pixels are sent to the graphics card's temporal pixel buffer with a set amount of time that it (the graphics card) should display them before they fade out, let's say W=0.1 seconds. Depending on whether or not the pixel is hidden because of its depth in the scene, and plausibly its alpha value, the graphics card maps these pixels to the display pixels in the screen buffer. In order to get a fluid image you would have to add intermediate pixels between the current pixel and its predecessor (i.e. the previous pixel to come from a certain point on a surface). This would be done asynchronously to the creation of the temporal pixels being sent to the temporal pixel buffer.

At low frame rates it might look a bit odd, and the temporal pixel buffer would have to be huge for a complex scene... I think I just described a very crude version of what psurge was talking about, but tbh I'm out of my depth :LOL:


There, somebody gets it.
Sort of along those lines, and I do apologize for not explaining it very well...
That's basically the time element I'm talking about: you transition the pixels, somewhat like AA's 0, 1/4, 1/2, 3/4, 1 thing (for intensity), and then some of Ragemare's idea with intermediate pixels.

And then the tiling concept proposed earlier, merely to ease up the buffer size.

So with, say, a 32x2 rendering solution:
1x2 per tile, clock it high, and then alternate tiles around so it doesn't lock a single pipeline into a single tile, and so you can have more than 32 tiles. It should cut that buffer's size down... a lot (you'd have a lot of small buffers vs. 1 huge one, but it would give you the advantage that it's more "randomized", and it's like having 32 gfx cards to render a single image: 32 Voodoo2's, but Voodoo2's with high clock speeds, SM3.0, and good FP precision, while still keeping the Voodoo2 styling)...

So does that make more sense?
 
(Assuming I understand what obobski is trying to say)

I think most of the confusion is coming from his use of the word "pixel". My impression is that when he says "pixel" he is not talking about an address in a 2D framebuffer with associated color and Z information. In his scheme there is no framebuffer or rasterization at all. I think he's proposing that you shade 3D points on surfaces at randomized times. The resulting data set is similar to a framebuffer only in the sense that the RGB values for monitor display are derived from it.

To get display pixels in obobski's scheme, you look at all the 3D points that fall within a pixel during the frame's exposure time and somehow combine them (this is the part I don't understand how to do).

In framebuffers as we know them, each pixel also corresponds to a shaded 3D point (you can use the pixel x, y, z to map to a location in world space). The thing is, every such 3D point is shaded at a common time - hence the temporal aliasing.
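A toy illustration of the randomized-time shading I mean (the panning projection and all the numbers are made up):

Code:
import numpy as np

rng = np.random.default_rng(1)
width, exposure = 32, 1.0 / 30.0

def project_x(world_x, t):
    # Toy projection: the whole scene pans right over time (240 px/s).
    return world_x + 240.0 * t

# Shade sample points at times scattered through the exposure instead of
# at one common frame time...
points_x = rng.uniform(0.0, 8.0, size=2000)
times = rng.uniform(0.0, exposure, size=2000)
radiance = np.ones(2000, dtype=np.float32)

# ...then gather whatever lands in each display pixel during the exposure.
accum = np.zeros(width, dtype=np.float32)
px = np.clip(project_x(points_x, times).astype(int), 0, width - 1)
np.add.at(accum, px, radiance)
flux = accum / len(points_x)      # fraction of the energy each pixel receives
print(np.round(flux, 3))          # energy is spread over the pixels swept during the exposure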
 
What about rendering the 7-dimensional radiance distribution as a function of space, direction, time and frequency? Then displaying would be as easy as taking a 4D hypercube, integrating it along the time direction, and for each color integrating the result over the frequency dimension multiplied by a scaling function. Really simple, and shouldn't be all that much work.
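The time and frequency integrations, at least, are easy enough to write down; here's a toy numeric version (the radiance field is made up, and the Gaussian scaling functions stand in for the real CIE curves):

Code:
import numpy as np

# A toy radiance sample L(t, lambda): one time axis, one wavelength axis
# (the spatial and directional dimensions are assumed already dealt with).
times = np.linspace(0.0, 1.0 / 60.0, 32)                 # one frame's exposure
wavelengths = np.linspace(400e-9, 700e-9, 64)            # visible range, metres
L = np.outer(np.ones_like(times),                        # constant over the exposure
             np.exp(-((wavelengths - 550e-9) / 40e-9) ** 2))  # greenish spectrum

def sensitivity(wl, center, sigma=50e-9):
    # Stand-in color scaling function (a Gaussian, not the real CIE curves).
    return np.exp(-((wl - center) / sigma) ** 2)

# Integrate over time first, then over wavelength against each channel's
# scaling function, as described above (simple Riemann sums).
dt = times[1] - times[0]
dwl = wavelengths[1] - wavelengths[0]
time_integrated = L.sum(axis=0) * dt
rgb = [(time_integrated * sensitivity(wavelengths, c)).sum() * dwl
       for c in (610e-9, 550e-9, 465e-9)]
print(["%.3e" % v for v in rgb])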
 