I've been pondering this for a while now, regarding motion blur in games. Most games I've played have no motion blur at all, which was fine for last-gen, but I think hardware is at the point where adding it wouldn't be as difficult as it used to be.
I tend to enjoy racers more than anything else, and I've noticed something about two games in particular: PGR and Gran Turismo. They both use camera-based blur (GT only in replays), where the blur is based entirely on the movement of the camera, i.e. the camera rotates X degrees, therefore the screen should blur Y pixels in Z direction. Simple enough. I could do that sort of thing by hand, keyframing a blur effect in After Effects or something (in fact, AE can do it automatically so long as the object is moving in AE itself).
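Just to make the idea concrete, here's a rough sketch of that camera-to-screen conversion in Python. The resolution, FOV, and function names are all made up for illustration; I have no idea what those games actually do internally:

    # Toy version of camera-based blur: turn this frame's camera rotation
    # into a screen-space blur vector. Numbers and names are illustrative only.

    SCREEN_WIDTH_PX = 1280
    HORIZONTAL_FOV_DEG = 90.0

    def camera_blur_vector(yaw_deg, pitch_deg):
        """Return (x, y) blur length in pixels for this frame's camera rotation."""
        px_per_degree = SCREEN_WIDTH_PX / HORIZONTAL_FOV_DEG  # ~14 px per degree here
        # (Using the horizontal scale for pitch too, just to keep the sketch simple.)
        return yaw_deg * px_per_degree, pitch_deg * px_per_degree

    # Panning 2 degrees in a single frame -> smear the whole frame ~28 px sideways.
    print(camera_blur_vector(2.0, 0.0))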
The downside is that it seems to apply only to the environment, and not to things in the environment. For example, the cars. PGR does in-game motion blur on the track, and even the interior of the car. But other cars driving by have no blur whatsoever. Likewise for replays. Both GT and PGR have blurring environments when the camera pans, but the cars are razor sharp the whole way through. Sort of ruins the effect. Maybe I'm being nitpicky, but blur is one thing I always try to get right in CG-land.
Why is it so hard for games to do per-pixel blurring based on motion vectors?
Here's how I normally do motion blur in CG: I render out the image sequence from LightWave with no blur, usually with medium 9-pass antialiasing. At the same time, I render out a separate sequence of motion vectors (taken from the render buffer), which take the form of psychedelic-colored images. I bring these into After Effects, and AE (through various plugins) reads the multi-colored images as motion vectors: a particular color means that pixel is moving in X direction by Y pixels, and AE applies those vectors to the full LW render at a per-pixel level. It requires only minimal human intervention, and could probably be automated with a few simple algorithms. The result is extremely good and extremely fast, especially compared to what LW can do natively.
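For anyone curious what that vector pass boils down to, here's a toy version in Python/NumPy. The color-to-vector encoding (red = X offset, green = Y offset) and the parameter names are just my own assumptions for illustration; the actual LW buffers and AE plugins have their own formats:

    import numpy as np

    def vector_blur(image, vectors, samples=8, max_blur=16.0):
        """Blur 'image' (HxWx3 floats) along per-pixel vectors (HxWx2 in 0..1)."""
        h, w, _ = image.shape
        # Remap the color-encoded vectors to signed pixel offsets.
        offsets = (vectors - 0.5) * 2.0 * max_blur
        ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
        result = np.zeros_like(image)
        for i in range(samples):
            t = i / (samples - 1) - 0.5     # step along each pixel's vector, -0.5 to +0.5
            sx = np.clip(xs + offsets[..., 0] * t, 0, w - 1).astype(int)
            sy = np.clip(ys + offsets[..., 1] * t, 0, h - 1).astype(int)
            result += image[sy, sx]         # gather a sample along the motion path
        return result / samples

It's just a gather along each pixel's own motion vector and an average of the samples. The real plugins handle edges and overlapping objects far better, but that's the core of it, and it's cheap.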
Now, consider that LW would take a minute or more to render what a lot of games can do in 1/30th of a second or less. So if AE can do this high level of per-pixel vector blurring in only two or three seconds per frame, it should be a cakewalk for a game to do the same thing in a tiny fraction of a frame. Granted, I know it's not that easy.. hehe. But why not? What is it about "real" blurring that seems to befuddle most game developers?
And why is it that when developers can't get it right, they somehow manage to totally screw it up in the attempt? I've heard a lot of complaints on gaming forums about the god-awful motion blur in this game or that game, and I keep trying to tell them that the effect can be nearly invisible if it's done correctly. PGR, for example.. although limited to camera movement, most people I've mentioned it to on forums seem surprised.. they didn't even realize the game uses motion blur. That's because it's done correctly, rather than simply smearing all the pixels in one direction or another, or using some sort of idiotic image filter to "simulate" motion blur. Those end up sticking out like a cheap "lookie my filter!!" effect (light bloom, lens flare, anyone?), rather than the subtle effect it's supposed to be. Because it should be subtle.. you should have to look for it to even realize it's there; it should seem a natural part of the image. Some developers realize this and simply leave it out when they can't get it right. Others try anyway, and it ends up getting on people's nerves: both casual gamers who think that blur is evil, and graphics whores like myself who get upset, not because they did it, but because they did it wrong.