This is probably a stupid question...

Grall said:
Rev,

How can you call that motion blur, when there's no blurring at all? All it's doing is drawing the model multiple times and fading them out. You can do that effect completely in software, which in fact, tons of games already have over the years, many of them BEFORE 3dfx announced this "blur" feature of theirs.

*G*

Except doing this in software implies reprocessing the geometry multiple times, plus doing a lot of other calculations... Not a big problem if you are already fillrate bound, I agree. IMHO, the big problem with the 3dfx implementation of motion blur and other effects is that for most things (except RGSS AA, which gave pretty good results back then) the number of buffers was way too low for a realistic look. IIRC, there was a white paper that put the number of samples needed for nice-looking DoF, blur and similar effects at 16. Note that the V5 6000, had it been released, could have processed 8 samples, which would have looked much better.
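For what it's worth, the sample-averaging idea behind this kind of accumulation effect can be sketched in a few lines. This is a toy 1-D Python illustration with made-up names (`render`, `motion_blur`); it is not how the VSA-100 hardware actually worked:

```python
# Toy sketch of accumulation-style motion blur: "render" a moving
# 1-pixel object at N sub-frame times and average the results.

def render(position, width=8):
    """Render a 1-D 'frame': 1.0 where the object is, 0.0 elsewhere."""
    return [1.0 if x == position else 0.0 for x in range(width)]

def motion_blur(start, end, samples, width=8):
    """Average `samples` renders taken at evenly spaced sub-frame positions."""
    frame = [0.0] * width
    for i in range(samples):
        t = i / (samples - 1) if samples > 1 else 0.0
        pos = round(start + t * (end - start))
        sub = render(pos, width)
        frame = [f + s / samples for f, s in zip(frame, sub)]
    return frame

# An object sweeping from x=0 to x=3 in one frame leaves a smear of
# 0.25 intensity across the four pixels it crossed, instead of a
# single bright spot -- more samples give a smoother smear.
blurred = motion_blur(0, 3, samples=4)
```

With only a handful of samples the smear is visibly banded, which is exactly the "too few buffers" complaint above.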
 
There's several different things here.

There's a technique more correctly called 'motion trail' rather than 'motion blur' which is to take the last frame and alpha-blend it at some level on top of the new one. This gives an echo of previous frames, with the number of 'trail' copies seen determined by the blend percentage.
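The trail technique can be sketched in a few lines. This is a toy Python illustration on single-channel "frames"; the names (`trail_blend`, `trail_alpha`) are mine, not any game's actual code:

```python
# Minimal sketch of the 'motion trail' technique: blend the previous
# displayed frame on top of the new one at a fixed alpha, giving an
# exponentially decaying echo of past frames.

def trail_blend(prev_display, new_frame, trail_alpha=0.5):
    """Composite: result = trail_alpha * previous + (1 - trail_alpha) * new."""
    return [trail_alpha * p + (1.0 - trail_alpha) * n
            for p, n in zip(prev_display, new_frame)]

# A bright pixel in frame 0 fades over subsequent frames rather than
# vanishing: each frame it keeps trail_alpha of its previous intensity.
display = [1.0, 0.0]          # frame 0: object at pixel 0
black = [0.0, 0.0]
for _ in range(3):            # three frames with the object gone
    display = trail_blend(display, black)
# display[0] is now 0.5**3 = 0.125 -- the lingering 'echo'
```

The blend percentage directly sets how many ghost copies remain visible, which is why the effect only looks passable at very high framerates.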

The first application I remember seeing of it was in Magic Carpet, by Bullfrog, waaaay back, which used a software implementation. It was used in some accelerated 3D games, but since it looks pretty poor unless you're running at really stupid framerates it's not been much bothered with.

From the looks of the screenshots this is a sort of half-and-half with a trick using the T-buffer.

Motion blur is a big unsolved problem in real-time 3D right now. Frankly there's not even a 'decent but a bit slow' approach - it's just 'naff' or 'much much too slow'.
 
How can you call that motion blur, when there's no blurring at all? All it's doing is drawing the model multiple times and fading them out.

If you honestly think a post-render, hardware effect is going to magically tween digital geometry updates into some form of analog transitioning... I have some bridges to sell for cheap.
 
Grall said:
Rev,

How can you call that motion blur, when there's no blurring at all? All it's doing is drawing the model multiple times and fading them out. You can do that effect completely in software, which in fact, tons of games already have over the years, many of them BEFORE 3dfx announced this "blur" feature of theirs.

*G*

What was significant, of course, was that 3dfx pushed the T-buffer as a part of its "cinematic computing" thrust long before nVidia coined the very same phrase. It was unique and worthwhile precisely because it was a hardware feature--not a software feature (as nVidia's many copy-cat attempts at the time illustrated.)

The blurring is an illusion created by the subtle fade out of the effect in the particular case of the Q3 thing. Without the fade effect you can see in the screen shots it didn't look like motion blur at all (as you see in the cheap nVidia imitations of it at the time.)

"Motion blur" is an illusion itself, something 3dfx billed at the time as a "cinematic" effect. You see motion blur in movies where it either results from an object moving during the camera's exposure time at 24fps, or else from an intentional effect such as the streaking star fields seen in Star Trek TNG, etc. In real life, with a good set of peepers, you see far less motion blur than you do in the movies, where its usual effect is to portray the speed of moving objects more convincingly by exaggerating that movement.

Bottom line is still pixel fill rate, however, as it would do you little good today to make a chip with the PS/VS power of R3xx that did 2 pixels per clock and ran at ~183MHz. But the "cinematic" effects 3dfx PR'ed for the VSA-100 series were really more of a side-show to the T-buffer's real purpose, which was hardware-jittered FSAA. 3dfx's intent was that developers code specifically for the T-buffer to produce "cinematic" effects apart from FSAA, and the Q3 thing was just a hack 3dfx played around with for PR purposes (i.e., it was a poor man's demonstration of the effect, IMO, as Q3 was never written with T-buffer support in mind).

I can recall Tim Sweeney commenting that he thought depth-of-field would be really cool for things like rifle targeting scopes (although I don't think he ever did anything with that idea). An argument at the time--and a serious question--was what happened to your FSAA when you utilized the T-buffer for special effects other than FSAA (which may have dampened developer interest in "cinematic" effects with the VSA-100). I remember at the time Scott Sellers hinting around that DX9 was where the "real action" was going to be and where the VSA-xxx would "really" shine--and thinking how odd it was to hear that, since DX8 had only just been introduced at the time. Now that DX9 is finally here, his comments are much clearer ...;)

I think the R3xx chips today can probably do everything much better than the VSA-100 (it would have taken a pair of V5 6K's running in parallel at 183MHz, or 4 parallel V5 5.5K's at ~183MHz, just to approach the raw fill rate of my 9800P running at 445MHz--and even then the 9800P would blow them out of the water). But it's going to take developers who want to go beyond DX7/8 programming before we see any of these "cinematic" effects to any degree.

Personally, I'm not much of a "cinematics" effect guy because the effects are largely artificial and really do not mimic the way the eye functions--you don't see much (if any) motion blur, and things like "depth of field" relate entirely to how a camera focuses as opposed to the human eye. I think these effects can be useful, though, as long as they aren't abused. I have more of a "photorealistic" taste, actually. I do like high-quality FSAA, though, because in life objects don't normally exude pixellated stairsteps...;)
 
(as you see in the cheap nVidia imitations of it at the time.)

As opposed to the cheap 3dfx imitation of motion blur?

Gah. They both sucked. "Blur" is not the word I'd use to describe either one of the outputs. Even on that motorcycle game somebody linked to in this thread, it still looks sucky.
 
WaltC said:
Personally, I'm not much of a "cinematics" effect guy because the effects are largely artificial and really do not mimic the way the eye functions--you don't see much (if any) motion blur, and things like "depth of field" relate entirely to how a camera focuses as opposed to the human eye. I think these effects can be useful, though, as long as they aren't abused.
Not sure about this one.

I'd say that personally I am aware of 'motion blur' in my visual system, and I definitely have a very dramatic depth-of-field. Perhaps I'm more sensitive to the latter because I switch between glasses and contacts often.

I'd say that moving images that don't contain motion blur - even at high frame rates, 100fps+ - are 'disturbing'. They don't look natural to me.

I think this is one of those things that varies wildly from person to person.
 
WaltC said:
Personally, I'm not much of a "cinematics" effect guy because the effects are largely artificial and really do not mimic the way the eye functions--you don't see much (if any) motion blur, and things like "depth of field" relate entirely to how a camera focuses as opposed to the human eye. I think these effects can be useful, though, as long as they aren't abused. I have more of a "photorealistic" taste, actually. But I do like high-quality FSAA, though, because in life objects don't normally exude pixellated stairsteps...;)

This statement made me think a little. Why are we trying to emulate movie effects anyway? Shouldn't we be trying to emulate real life?
Like lens flare. I remember JC or someone talking about how lens flare in real life is actually generated inside the eye itself. Unfortunately it's hard to visually emulate what it looks or feels like to fall off a 10-story building, because there aren't that many people around to describe it. Even if there were, you have the subjectivity of each witness. I guess you'd need a camera that acts physically identical to a human eye and start throwing it around.
Sorry about the OT... maybe someone should start a new thread.
 
Admittedly, the screenshots do look like some kind of cheesy motion blur approximation. But I distinctly remember at the time (when the Voodoo5 was in use) that you rarely saw the "trail" or multiple samples, but rather a cool-looking "blur-like" thing. It was dumb (used the way it was in Q3... surely that cannot happen in real life looking at running characters) but still cool.

I thought this was a good page about motion blur. The "What does this mean to the programmer?" section shows what's correct and what's false. By that standard, the 3dfx T-buffer approach is most definitely false.
 
Dio said:
I'd say that personally I am aware of 'motion blur' in my visual system, and I definitely have a very dramatic depth-of-field. Perhaps I'm more sensitive to the latter because I switch between glasses and contacts often.

Actually, you have the depth of field of a roughly 25mm lens focussed through an aperture of (in photo-terms) f4-f22.
Which, at most distances and given the resolution of the retina, makes DOF effects very occasional. Even more significantly, humans focus on what they are looking at (unless drunk to the brink of unconsciousness :)), making DOF, for the most part, subjectively non-existent. Furthermore, DOF gets subsumed in the off-center resolution fall-off of the eye. Things simply get less sharp off-center regardless of distance from the plane of focus.

If you are simulating a 3D world, DOF is simply inappropriate. It would be like looking at the world through a pair of binoculars that someone else continuously kept adjusting the focus on. Completely unnatural, and damn annoying to boot.

Motion blur has been done to death here. The jury is out on whether it really is an asset in motion pictures, or whether it simply mimics film artifacts (which can be important for perceived "reality"!), or whether, given the awful framerate of motion pictures, the benefits are generally worth the artifacts motion blur induces. When simulating a virtual reality, as in games, motion blur is unnatural. So is the stroboscopic effect of limited framerates, obviously, but generally motion blur would seem to simply add another artifact rather than compensate for the limited framerate.

Both features look way neat on feature lists though, particularly given the current "cinematic rendering" rut. And some may find the effects cool, particularly the programmers that get to do them I guess.

Entropy
 
If you haven't seen the demo in real time, please stop criticizing it. I downloaded it when it came out (thanks Rev!) and have seen the demo in action.

[screenshot: q3.jpg]

Maybe I'll run it later tonight on my Voodoo5 rig just for kicks.
 
I would say that if I look at something moving quickly - say, helicopter blades, a ceiling fan, or another car's wheel when pacing it on the motorway - I see only a blurred representation.

So I'm not sure how you can argue that motion blur is 'unnatural'. If so, what am I seeing when I look at these kind of objects?
 
gkar1 said:
If you haven't seen the demo in real time, please stop criticizing it.

I know what I'm talking about.
If you like the effect, you're welcome.

The link that Reverend posted is pretty good, but it's written with a cinematic slant, to explain the theoretical benefits of motion blur. Be aware that you could easily write something similar and kill off the concept utterly. It's just that the critics don't have anything to sell, so why should they bother?

Entropy

PS. There was a German researcher who published a pretty good dissertation on motion blur a year or so ago. He had the decency and intellectual integrity to actually include as one of his fundamental assumptions that "the viewer remains focussed at the center of the screen". It impressed me that he didn't "forget" to mention this, as is common (Rev's link, for instance). But it's the fundamental reason why motion blur, regardless of sampling sophistication, will hardly ever be appropriate for general application in virtual-reality simulation, and is detectable (whether regarded as enhancing or not) in cinematics.
 
Dio said:
I would say that if I look at something moving quickly - say, helicopter blades, a ceiling fan, or another car's wheel when pacing it on the motorway - I see only a blurred representation.

So I'm not sure how you can argue that motion blur is 'unnatural'. If so, what am I seeing when I look at these kind of objects?

That is a function (a limitation) of the eye. If you could get such a high rotation speed on the output from a 3D card (and monitor, of course!) you would be seeing the exact same things. Simulating multi-k rotations per minute on a computer screen isn't practical, however, and that's why we need to "simulate" them.

Motion blur can be defined as "movement during a timeframe" (the time frame here being the refresh rate of the eye). Effective simulation of, say, a helicopter rotor on a computer screen would probably mean locking the frame rate and using "pre-blurred" blades -- a blade texture representing the movement during a specified time slice.

To do this with complex objects -- maybe a moving person -- must be insanely hard, but the 3dfx approach certainly is the wrong way, as it leaves "trails" behind the figure, and that sure isn't what motion blur looks like. You would need a system that could buffer several renders of moving objects, and that is pretty absurd.

Ironically, one way of realizing true motion blur would be extremely low frame rates and the above mentioned buffering (motion picture-style) while another way would be extremely high frame rates (as real-life). Ho hum...

All IMO of course ;-)
 
RussSchultz said:
As opposed to the cheap 3dfx imitation of motion blur?

Gah. They both sucked. "Blur" is not the word I'd use to describe either one of the outputs. Even on that motorcycle game somebody linked to in this thread, it still looks sucky.

Agreed that 3dfx was "imitating" motion blur as it was the first original stab at it in a 3D chip. What annoyed me about nVidia at the time though was that they aped everything 3dfx was doing relative to T-buffer FSAA and effects with much inferior software imitations, while they simultaneously attempted to downplay the significance of it all. Heh--that's pure nVidia for you...;) Here we are a few years later and 3dfx is gone but FSAA is alive and thriving, and nVidia's new slogan is 3dfx's old "cinematic effects" spiel.
 
Entropy said:
Actually, you have the depth of field of a roughly 25mm lens focussed through an aperture of (in photo-terms) f4-f22.
Which, at most distances and given the resolution of the retina, makes DOF effects very occasional. Even more significantly, humans focus on what they are looking at (unless drunk to the brink of unconciousness :)), making DOF, for the most part, subjectively non-existant. Furthermore, DOF gets subsumed in the off-center resolution fall-off of the eye. Things simply get less sharp off center regardless of distance from the plane of focus.

Heh...this reminds me of a truly, truly awful DVD my wife and I rented a couple of days ago--starring "Joe Estevez".....M. Sheen's (older?) brother...It was so bad we couldn't go further than 20 minutes into it....;) The funniest thing was the movie was so low budget (but of course "digitally mastered for widescreen presentation") that the camera was constantly out of focus from scene to scene!.....It was really funny--the camera would usually focus on the closest subject fine, but people standing a couple of feet deeper in the scene were blurred! They either had a bad camera or cameraman or both...;) Can't say as I've seen one that poorly done in a while...
 
Motion blur can be defined as "movement during a timeframe" (the time frame here being the refresh rate of the eye).

It is extremely misleading to talk about the "refresh rate" of the eye, as if the human visual system operated using discrete, or progressive, "frames".

Visual information is detected by the eye using cells with different chemical characteristics, and it is transferred to the visual cortex through many different neural pathways carrying different kinds of information. All of this happens in a continuous manner, so talking about a discrete scene or perception is really impossible.
 
CorwinB said:
Direct3D Motion Blur using Vertex Shader

This one looks pretty nice (IMHO better than the 3dfx one).
Well, set the perspective so that one sphere and its "trail" are in front of the other for a short time... you will see the impressive effect of non-ordered transparency ;)

You'd have to split all blurred objects into convex parts and order them back-to-front to make it look right. And if the viewer moves, everything should blur... a bit too much work for cards that don't support order-independent transparency.
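The order dependence being pointed out can be shown with the standard "over" blend. A toy single-channel Python sketch (the numbers and names are illustrative, not from the linked demo):

```python
# Why unsorted alpha blending breaks trail effects: the standard
# 'over' operator is not commutative, so compositing two translucent
# layers in the wrong order gives a different final colour.

def over(src, src_alpha, dst):
    """Standard alpha 'over' compositing of src onto dst (one channel)."""
    return src * src_alpha + dst * (1.0 - src_alpha)

background = 0.0
# Two translucent layers: a dim trail (0.2 at 50% alpha) behind a
# bright sphere (1.0 at 50% alpha).
correct = over(1.0, 0.5, over(0.2, 0.5, background))   # back-to-front
wrong   = over(0.2, 0.5, over(1.0, 0.5, background))   # front-to-back
# correct is 0.55, wrong is 0.35: whichever layer is drawn last
# dominates, so without back-to-front sorting the trails composite
# incorrectly.
```

This is exactly why the post above says you'd have to split the blurred objects into convex parts and sort them.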
 
Dio said:
So I'm not sure how you can argue that motion blur is 'unnatural'. If so, what am I seeing when I look at these kind of objects?

When we are talking about motion blur in movies, it is there because the object in question has moved, during the exposure time of the film, enough to be blurred. So it is blurred because it is moving in relation to the camera. However, in the real world we can track fast-moving objects with our eyes so that they appear perfectly sharp. Putting motion blur in a computer game is unnatural in the sense that it forces blurring even on objects we are choosing to track.

When we are talking about depth of field and motion blur, we have to distinguish between movies (and movie-like demos), where the cameraman (or imaginary cameraman) decides where the camera is focused and how it moves, and computer games, where the player normally chooses where to direct his attention. In the latter case you don't need to mimic camera or film artifacts.
 