Who said PS2 graphics were bad???

Always thought film projectors used continuous lighting in between film transport; so it uses flashes then? (Otherwise 48 fps would be slightly meaningless.)
 
There are rotating shutters in a movie projector to block the light while the film is advancing. Without them, the film would look slightly out of sync.
 
Ugh... my fault :p You can think of it as the shutter closing twice for each frame. Hence the 48 fps.
 
TAA for slow LCDs?

An idea. There are, and will be, plenty of slow elderly flat panels around with noticeable to downright horrible ghosting in fast games. Would TAA actually be more useful than FSAA there, hiding the ghosting effect *and* removing jaggies and other spatial artefacts?

I mean, with today's top accelerators, you already get three-digit FPS at 1024 x 768 or 1280 x 1024 (typical LCD resolutions) in most games, so the (real-life) fillrate would be there. Whereas only newer (and relatively expensive) LCDs have an average response time under 25 ms, thus being able to display over 40 FPS average.
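Roughly, what I have in mind is something like this; a minimal C++ sketch, assuming "TAA" here just means blending consecutive frames together, with the function name and blend factor made up for illustration:

```
#include <cstddef>
#include <cstdint>

// Hypothetical sketch: blend the freshly rendered frame into an
// accumulation buffer. A blend factor near 1.0 means almost no
// temporal smoothing; lower values trade sharpness for smoother
// motion between frames.
void accumulate_frame(const std::uint8_t* current, std::uint8_t* accum,
                      std::size_t pixel_count, float blend = 0.6f)
{
    for (std::size_t i = 0; i < pixel_count; ++i) {
        accum[i] = static_cast<std::uint8_t>(
            blend * current[i] + (1.0f - blend) * accum[i]);
    }
}
```

The idea being that the panel's own response time would then be smearing an already-smoothed image rather than a series of hard-edged frames.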

Good idea? What about it, any chance of seeing it done in near future products?

(To clarify, I have the masses of cheap flat panels for desktop PCs in mind here, not laptops etc.)

[Edit: Whoops, didn't realize how off topic this was!]
 
Sigh.

There are genuine reasons why Motion Blur is a hopeless idea in a virtual reality/gaming context.

I'll start out with a general technical observation. Motion blur in the photographic/film sense is an artifact of non-instantaneous sampling. If you are talking about computer graphics, motion blur has been bandied about as a technique to reduce the flickering caused by low frame-rates (=low sampling rate of virtual reality). It should be obvious then that the amount of motion blur has to be frame-rate dependent, and approach zero as the framerate increases. We immediately run into problems both algorithmically and in practice, since the framerate of a virtual reality scene varies, the screen has a fixed refresh rate, and an LCD panel may have yet a third response time. If there has been any work done to take this into account, I'm unaware of it.
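For concreteness, a minimal C++ sketch of what frame-rate-dependent blur would have to look like; the types, names, and the shutter_fraction default are my own illustrative inventions, not any shipping technique:

```
struct Vec2 { float x, y; };

// Hypothetical sketch: scale the blur "exposure" with the actual frame
// time, so the blur length approaches zero as the framerate rises.
// shutter_fraction mimics a film camera's shutter angle (0.5 is
// roughly a 180-degree shutter).
Vec2 blur_vector(Vec2 screen_velocity_per_sec, float frame_time_sec,
                 float shutter_fraction = 0.5f)
{
    float exposure = frame_time_sec * shutter_fraction; // seconds the "shutter" is open
    return { screen_velocity_per_sec.x * exposure,
             screen_velocity_per_sec.y * exposure };
}
```

Note that frame_time_sec is exactly the quantity that varies unpredictably in a game, which is the algorithmic problem described above.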

There is a more fundamental reason why spending effort solving such problems is a futile waste of time.

In a virtual reality/game you cannot know to what you should apply motion blur, BECAUSE YOU DO NOT KNOW WHERE THE OBSERVER FOCUSES HIS ATTENTION.

Let me give an example: Picture yourself sitting on a staircase leading down to a street. You aren't looking at anything in particular, just straight ahead at the house on the other side. Cars, bicycles, pedestrians pass by. It would make sense in this scene to apply motion blur to the objects that are moving, right? Now consider that a nice-looking young lady comes bicycling down the street towards you. You turn your head and follow her as she pedals leisurely by. In this case, since you follow her, she is perceived as sharp/static in your field of view, whereas the whole street and the houses move within that field. That is, in this case it would make sense to apply motion blur to the surroundings, not to the young lady.

Now consider yourself in the position of a programmer: how on earth are you supposed to know which objects are moving _within the field of view of the observer_? Answer: you can't. If you apply the blur to the mobile objects, you get a blurred girl and a sharp background, which would be wrong. Or, when the girl comes into view, you could assume that the observer will follow her and blur the background, which would look even odder if the viewer didn't do as you assumed. You might try to avoid dealing with the problem by blurring everything in proportion to its angular movement on screen, but then you have neglected that the observer may not stare emptily straight ahead all the time. The most obvious "aw heck, let's just apply it" approach is to apply it to objects that move in relation to the "static" objects in the virtual reality, but this is just as obviously wrong, since characters are what you tend to focus on. If you are attacked by someone in an FPS, for example, you sure as hell aren't going to keep focussing your attention on the static wall straight ahead...
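To put the dilemma in code form, a minimal C++ sketch (names and types are mine, purely illustrative): whichever object you *assume* the eye is tracking determines the blur everything else receives, and that assumption is precisely what the engine cannot make correctly.

```
struct Vec2 { float x, y; };

// Hypothetical sketch: blur velocity of an object relative to whatever
// the eye is assumed to be tracking. The tracked object comes out
// sharp (zero vector); everything else is blurred by its motion
// relative to it. Pass a zero tracked velocity and you get the "blur
// the mobile objects" case, with the bicycling girl wrongly smeared.
Vec2 relative_blur_velocity(Vec2 object_screen_vel, Vec2 tracked_screen_vel)
{
    return { object_screen_vel.x - tracked_screen_vel.x,
             object_screen_vel.y - tracked_screen_vel.y };
}
```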

Unless we manage to get _really_ sophisticated with eye-movement tracking, and can couple this to the identification and rendering of objects on the screen, motion blur creates more problems than it solves in a virtual reality situation, even when performed to perfection from a rendering standpoint. And even with such techniques, we still have the adaptation-to-framerate problem.

It's just not worth it.

Particularly since the justification for applying motion blur in the first place is that the temporal sampling rate (frame-rate) is too low. It makes MUCH more sense then to simply increase the frame-rate.
There are situations when this can't be done (consoles with TV displays) where you might actually have a known, fixed sampling rate. But even then, that doesn't help with the fundamental problem of adjusting to observer focus.

Motion blur makes sense for creating specific cinematic effects.
You can make a flawed argument for it when rendering to a fixed, low, framerate. (Good luck pulling it off algorithmically, and applying it in the right amount to the right objects without introducing anomalies.)
It makes no sense at all for general computer gaming.

Please find flaws with the above, and post them.

Entropy
 
Entropy said:
I'll start out with a general technical observation. Motion blur in the photographic/film sense is an artifact of non-instantaneous sampling.

It's only a flaw if you are in a digital world. But since we live in an analog world, it would be a flaw not to have it.

Entropy said:
If you are talking about computer graphics, motion blur has been bandied about as a technique to reduce the flickering caused by low frame-rates (=low sampling rate of virtual reality). It should be obvious then that the amount of motion blur has to be frame-rate dependent, and approach zero as the framerate increases. We immediately run into problems both algorithmically and in practice, since the framerate of a virtual reality scene varies, the screen has a fixed refresh rate, and an LCD panel may have yet a third response time. If there has been any work done to take this into account, I'm unaware of it.

Why do you say the framerate of a VR scene has to vary? Can't they lock it to a fixed frame rate, even if that number is very high?

Are you talking about the difference from frame to frame, like when there is little movement versus a lot of movement?

Entropy said:
In a virtual reality/game you cannot know to what you should apply motion blur, BECAUSE YOU DO NOT KNOW WHERE THE OBSERVER FOCUSES HIS ATTENTION.

Yeah, if you are talking about how human eyes work, but the blurring notion we are talking about is one of those camera tricks.

The motion blurring doesn't have to be exaggerated; it has to be subtle if it is to work in an FPS game. People shouldn't even notice it. Surely you don't want it to smear the screen with blurring, like you get with a low shutter speed on a camera when shooting a high-movement scene.

In an FPS-type game, you know where the person playing is focusing: it's most probably on the crosshair. In camera work, this blurring is done to direct the observer. Like when you make the background and foreground go blurry and keep the character in the middle in focus; that tries to direct the observer to watch that character.

Or a fast car, where they just focus on the car and the background all goes blurry because of motion blur, something like that.
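For what it's worth, a minimal C++ sketch of that crosshair assumption (the names and the falloff constant are made up for illustration): keep the assumed focus point sharp and fade the blur in with distance from it.

```
#include <algorithm>
#include <cmath>

// Hypothetical sketch: blur strength per pixel, zero at the crosshair
// and ramping up to full strength further out, so the point the player
// probably watches stays sharp.
float blur_weight(float px, float py,               // pixel position, 0..1
                  float cx = 0.5f, float cy = 0.5f, // crosshair position
                  float falloff = 2.0f)
{
    float dx = px - cx, dy = py - cy;
    float dist = std::sqrt(dx * dx + dy * dy);
    return std::min(1.0f, dist * falloff);
}
```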

But I agree that motion blur requires more sophisticated algorithms than what we have now.
 
Entropy said:
Now consider yourself in the position of a programmer: How on earth are you supposed to know which objects are moving _within the field of view of the observer_? Answer: you can't.

if you don't know which objects are within the fov of the observer then your TnL pipe is really fucked up ;) the problem you state is easily solvable in camera (eye in ogl) space.
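for example, a minimal C++ sketch of the eye-space approach (my own illustrative code, assuming column-major OpenGL-style matrices): keep last frame's modelview-projection matrix around and difference the two projected positions to get a per-vertex motion vector.

```
struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Apply a column-major (OpenGL-style) 4x4 matrix to (p, 1) and do the
// perspective divide, giving a screen-space position.
Vec2 project(const float m[16], Vec3 p)
{
    float x = m[0]*p.x + m[4]*p.y + m[8]*p.z  + m[12];
    float y = m[1]*p.x + m[5]*p.y + m[9]*p.z  + m[13];
    float w = m[3]*p.x + m[7]*p.y + m[11]*p.z + m[15];
    return { x / w, y / w };
}

// Per-vertex motion vector: project with this frame's and last frame's
// modelview-projection matrices and take the difference.
Vec2 screen_velocity(const float mvp_now[16], const float mvp_prev[16], Vec3 p)
{
    Vec2 now  = project(mvp_now,  p);
    Vec2 prev = project(mvp_prev, p);
    return { now.x - prev.x, now.y - prev.y };
}
```

of course, this tells you how things move on screen, not which of them the observer is looking at, which is Entropy's point.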
 
A few things.

One, a movie camera does leave its shutter open *close, but not exactly,* to a full 1/24th of a second. However, it does NOT gain a full 1/24th of a second's worth of information. Error tends to run in the 50% range for a number of reasons (one being the assumption of perfect absorption, when in fact the medium has a specific molecular absorption cross-section).

Two, flicker is reduced b/c of pulldown. However, the temporal sampling rate is still more accurately seen as 24 Hz.

Three, motion blur on a computer is complicated b/c it has to undergo THREE nonlinear transforms to get to the final receiver: the initial sampling rate, the monitor's sampling rate, and finally the human eye's sampling rate all play a role.

Contrast this with spatial antialiasing, where a pixel maps more closely one-to-one between the renderer and the monitor.

...

As for Entropy's concern, I think it's a little bit off. 'Blur' as he defines it should NOT, a priori, be perceptible to the human eye on low-frequency components if Mblur is done right. However, as it stands with current techniques (partially b/c eye tracking seems NOT to be factored into the eqn), there is a problem.
 