Anyone have a comment on this, or a reason it's not employed?

psurge said:
(Assuming I understand what obobski is trying to say)

I think most of the confusion is coming from his use of the word "pixel". My impression is that when he says "pixel" he is not talking about an address in a 2d framebuffer with associated color and Z information. In his scheme there is no framebuffer or rasterization at all. I think he's proposing that you shade 3d points on surfaces at randomized times. The resulting data-set is similar to a framebuffer only in the sense that rgb values for monitor display are derived from it.

To get display pixels for obobski's scheme, you look at all the 3d points that fall within a pixel during the frame's exposure time and somehow combine them (this is the part I don't understand how to do).

In framebuffers as we know them, each pixel also corresponds to a shaded 3d point (you can use the pixel x,y,z to map to a location in world-space). The thing is, every such 3d point is shaded at a common time - hence the temporal aliasing.


ignoring stepz's comment
you're like the only person who is getting this, it's amazing... but nonetheless you're getting it
along with the removal of buffers :)
the combination would come from the time data, from a point in the refresh
say you have 60Hz
you then have 60 points to display per second
so it'd be whatever value out of 60

45/60 would be the 45th refresh frame of that render pass... so you're viewing a pass as a full second, at 60Hz, or 110Hz, or whatever
and then you label each one as N/refresh

and then the exposure data tells you which refresh frames it crosses
33-47/60
frames 33, 34, 35, ..., 45, 46, 47
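roughly, that labelling works out like this (a minimal C++ sketch; the names are made up, just to make the indexing concrete):

```cpp
#include <cmath>
#include <cstdio>

struct FrameRange { int first; int last; };   // e.g. the 33 and 47 in "33-47/60"

// Map an exposure interval (in seconds within the one-second pass) to the refresh
// frames it crosses at a given refresh rate. Hypothetical names, not any real API.
FrameRange exposureToFrames(double exposureStart, double exposureEnd, int refreshHz)
{
    FrameRange r;
    r.first = static_cast<int>(std::floor(exposureStart * refreshHz));  // first refresh frame touched
    r.last  = static_cast<int>(std::floor(exposureEnd   * refreshHz));  // last refresh frame touched
    return r;
}

int main()
{
    // an exposure from 0.55s to 0.79s of the pass at 60Hz -> "33-47/60"
    FrameRange r = exposureToFrames(0.55, 0.79, 60);
    std::printf("%d-%d/%d\n", r.first, r.last, 60);
    return 0;
}
```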


while this seems like it'd eat up a ton of computational power, consider that you're only going at around 30-35 FPS max... and most likely more like 25 FPS
so you're not talking about having a refresh scheme like

3/250
or something like that
FPS > 40 isn't needed if you're doing proper TMB
 
psurge said:
(Assuming I understand what obobski is trying to say)

I think most of the confusion is coming from his use of the word "pixel". My impression is that when he says "pixel" he is not talking about an address in a 2d framebuffer with associated color and Z information. In his scheme there is no framebuffer or rasterization at all. I think he's proposing that you shade 3d points on surfaces at randomized times. The resulting data-set is similar to a framebuffer only in the sense that rgb values for monitor display are derived from it.

To get display pixels for obobski's scheme, you look at all the 3d points that fall within a pixel during the frame's exposure time and somehow combine them (this is the part I don't understand how to do).

You are describing some sort of voxel-based rendering.
There's the sort of voxel-based rendering that renders objects built from point sprites. You could apply a different transformation to each point (based on its time) and render them that way, but the result would be incorrect due to occlusion problems. (I think this is what you mean when you say you don't know how to combine them.)

In framebuffers as we know them, each pixel also corresponds to a shaded 3d point (you can use the pixel x,y,z to map to a location in world-space). The thing is, every such 3d point is shaded at a common time - hence the temporal aliasing.

Well, this is the limitation of working with rasterization.
It would be possible to modify a ray tracer to use a different time for each pixel. While it would obviously complicate things, it's still easier to do in a ray tracer because it renders the pixels independently (barring any RTRT "optimizations").
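Something like this is what I have in mind (a minimal sketch only; traceRay is just a placeholder for whatever the tracer actually does, not a real API):

```cpp
#include <cmath>
#include <random>

struct Color { float r, g, b; };

// Placeholder for "trace the scene as it exists at time t through pixel (x, y)".
// A real tracer would do the intersection and shading work here.
Color traceRay(int x, int y, double t)
{
    (void)x; (void)y;
    float g = static_cast<float>(t - std::floor(t));   // dummy value so the sketch compiles
    return { g, g, g };
}

// Each pixel is traced at its own randomized time within the frame's exposure,
// which is easy precisely because a ray tracer handles pixels independently.
void renderFrame(Color* image, int width, int height, double frameStart, double exposure)
{
    std::mt19937 rng(1234);
    std::uniform_real_distribution<double> jitter(0.0, 1.0);

    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        {
            double t = frameStart + jitter(rng) * exposure;   // per-pixel time
            image[y * width + x] = traceRay(x, y, t);
        }
}
```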
 
obobski said:
FPS > 40 isn't needed if you're doing proper TMB

There are complete threads about this subject, so I won't go off-topic here, but you are wrong (at least when talking about interactive applications).
 
Hyp-X said:
There are complete threads about this subject, so I won't go off-topic here, but you are wrong (at least when talking about interactive applications).
Even with non-interactive applications, you'll want to have high framerates due to the occlusion problems you mentioned.
 
Hyp-X, sort of...

I do have an idea of how to combine the points, but it would be dog slow, and, as you mention, it involves ray tracing ("rendering" the 3d points in the first place pretty much requires ray tracing as far as I can tell).

Basically, once you have all the points potentially affecting your pixel during the frame's exposure time, you consider them one by one:

- take the timestamp T of the point, use it to compute the camera position at time T
- ray trace from the point to the camera position and discard the point from the flux estimate if it doesn't reach the eye. Moving objects would need associated bounding hulls that account for their motion, to avoid having to compute the state of the entire scene for every different timestamp.
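
A rough sketch of that per-point test (ShadedPoint, cameraAt and isOccluded are names I just made up for the stored point record and the two helpers):

```cpp
#include <functional>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };
struct ShadedPoint { Vec3 position; Vec3 radiance; double timestamp; };

// Combine the 3d points affecting one display pixel. Each point is tested for
// visibility against the camera *at its own timestamp* and dropped if occluded.
// cameraAt = "evaluate the camera path at time t";
// isOccluded = "ray trace from the point to the eye through the scene as it exists at t".
Vec3 combinePoints(const std::vector<ShadedPoint>& pts,
                   const std::function<Vec3(double)>& cameraAt,
                   const std::function<bool(const Vec3&, const Vec3&, double)>& isOccluded)
{
    Vec3 sum;
    int visible = 0;
    for (const ShadedPoint& p : pts)
    {
        Vec3 eye = cameraAt(p.timestamp);
        if (isOccluded(p.position, eye, p.timestamp))
            continue;                               // discard from the flux estimate
        sum.x += p.radiance.x; sum.y += p.radiance.y; sum.z += p.radiance.z;
        ++visible;
    }
    if (visible > 0) { sum.x /= visible; sum.y /= visible; sum.z /= visible; }
    return sum;
}
```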

Alternatively, maybe one could shoot out rays from the camera at randomized times, obtaining an intersection point with the scene, and then gather up all the nearby "3d points" to compute shading information that way (should handle occlusion fairly well).
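
And a sketch of the gather variant, done brute-force (same made-up Vec3/ShadedPoint types as above, redeclared so the snippet stands on its own):

```cpp
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };
struct ShadedPoint { Vec3 position; Vec3 radiance; double timestamp; };

// Given the hit point of a camera ray shot at a randomized time, average the radiance
// of the stored time-stamped points within `radius` of it (linear search here;
// a real implementation would want a spatial index).
Vec3 gatherShading(const Vec3& hit, const std::vector<ShadedPoint>& pts, float radius)
{
    Vec3 sum;
    int found = 0;
    for (const ShadedPoint& p : pts)
    {
        float dx = p.position.x - hit.x;
        float dy = p.position.y - hit.y;
        float dz = p.position.z - hit.z;
        if (dx * dx + dy * dy + dz * dz <= radius * radius)   // "nearby" 3d points
        {
            sum.x += p.radiance.x; sum.y += p.radiance.y; sum.z += p.radiance.z;
            ++found;
        }
    }
    if (found > 0) { sum.x /= found; sum.y /= found; sum.z /= found; }
    return sum;
}
```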

But yeah, as stepz so sarcastically pointed out ;) , it doesn't sound all that practical. My guess is that the number of 3d points you'd need to avoid graininess in the image would be pretty insane.

[edit]
On the other hand, as obobski states, the "3d point" rendering can be optimized in the sense that unless that 3d point corresponds to a surface that is moving (or is subject to changing lighting conditions) you can keep it around for the next frame...
 
I did understand what you said, and for motion blurring this is no different than using the accumulation buffer. The problem is that motion blur is a continuous thing; it's not a few frames blended together, it's all the points in between two points. So while your 4D idea is fine for representing sampled points along a pixel's movement, it does nothing for the points in between those samples. Compositing them as you suggest would just blend the 4D pixels you send the card, fading out those farther in the past. This is exactly what the accumulation buffer method does, except there you have to submit your pixels as a complete frame. On the other hand, your method would allow the program to send more pixel samples for faster-moving objects, making their motion trails smoother while avoiding the cost of rendering everything at a much higher fps. Although, imho, that's not that great an advantage.
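
For reference, the accumulation buffer method boils down to something like this (renderSceneAt is a placeholder for a full rasterization pass; with OpenGL's accumulation buffer the averaging would be done with glAccum rather than by hand):

```cpp
#include <cstddef>
#include <functional>
#include <vector>

struct Framebuffer { std::vector<float> rgb; };   // flat RGB buffer, same size every frame

// Average several full renders of the scene taken at sub-frame instants.
// renderSceneAt stands in for "rasterize the whole scene as it exists at time t".
Framebuffer motionBlurredFrame(const std::function<Framebuffer(double)>& renderSceneAt,
                               double frameStart, double exposure, int samples)
{
    Framebuffer accum;
    for (int i = 0; i < samples; ++i)
    {
        double t = frameStart + exposure * (i + 0.5) / samples;   // sub-frame sample time
        Framebuffer sub = renderSceneAt(t);
        if (accum.rgb.empty())
            accum.rgb.assign(sub.rgb.size(), 0.0f);
        for (std::size_t p = 0; p < sub.rgb.size(); ++p)
            accum.rgb[p] += sub.rgb[p] / samples;                 // equal weights = plain average
    }
    return accum;
}
```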

As I said later, it would be better if you paired each pixel with a velocity (or higher-order motion terms) and then integrated over its movement, giving you a perfect motion blur, at least to whatever order you build the GPU to handle. Of course, this is a ridiculously expensive thing to do, and a better solution is to take a number of samples based on its velocity, not unlike the method I mentioned in the last paragraph. Of course, the paper I linked in my last post does something fairly similar and is already very fast on modern GPUs. I have no doubt that this is what's being used in UE3, PGR3, and other next-gen games.
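
A sketch of the velocity-based sampling I mean (the numbers and names are arbitrary):

```cpp
#include <algorithm>
#include <cmath>

// Screen-space velocity in pixels per frame -> number of temporal samples to take.
// Roughly one sample per pixel of travel, clamped to a sane range, so faster-moving
// pixels get smoother trails without raising the global frame rate.
int samplesForVelocity(float velX, float velY, int minSamples = 1, int maxSamples = 32)
{
    float speed = std::sqrt(velX * velX + velY * velY);   // pixels travelled over the frame
    int n = static_cast<int>(std::ceil(speed));
    return std::clamp(n, minSamples, maxSamples);
}
```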
 
DudeMiester said:
As I said later, it would be better if you paired each pixel with a velocity (or higher-order motion terms) and then integrated over its movement, giving you a perfect motion blur, at least to whatever order you build the GPU to handle. Of course, this is a ridiculously expensive thing to do.

I think the emphasis here should be on "ridiculously expensive", as it would be necessary to integrate not the intervening pixels, but the intervening geometry that would generate those pixels. A rapidly moving object could blur across an entire frame, and the angles described by the camera, the object, and the light sources could (and most certainly would) be entirely different for every single point along the path of the object you cared to pick. This means that you have to integrate the object continuously across that path, or at least have some method of generating enough discrete samples to contribute the necessary information to the rasterization process... which would, at a minimum, be an iteration for each pixel the moving object "passes through" in the course of the motion. And deriving and blending this continuous blur completely ignores how difficult it would be to generate reflections or shadows for that blurred object.
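
To put rough numbers on "ridiculously expensive" (purely illustrative, not from any real renderer):

```cpp
#include <cmath>

struct BlurCost { int timeSamples; long long shadingEvaluations; };

// At least one geometry sample per pixel of screen-space travel, and each sample is a
// full re-transform and re-shade of the object (camera and light angles recomputed
// every time).
BlurCost estimateBlurCost(float pathLengthPixels, int shadedPixelsPerSample)
{
    BlurCost c;
    c.timeSamples = static_cast<int>(std::ceil(pathLengthPixels));
    c.shadingEvaluations = static_cast<long long>(c.timeSamples) * shadedPixelsPerSample;
    return c;
}
// e.g. an object covering ~500 px of screen that blurs across a 1280 px wide frame:
// ~1280 time samples -> ~640,000 shading evaluations for that one object alone.
```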

As far as obobski's idea goes, I'm still waiting for some kind of pseudo-algorithm to indicate what form the source "geometry" is stored in, how that geometry is evaluated and manipulated, and then how it is finally rasterized and offered up to the screen for display. Saying "it would require a new GPU" ignores that it must A) be DirectX and OpenGL compliant somehow, and B) have a way to spit out the data to a monitor just like every other video card does. Unless this idea is so theoretical that it could only work if we abandoned current architectures and completely built a new design from the ground up to accommodate it.
 