Why only camera-based motion blur?

Jedi2016

Veteran
I've been pondering this for a while now, regarding motion blur in games. Most games I've played have no motion blur at all, which was fine for last-gen, but I think hardware is at the point where adding it wouldn't be as difficult as it was in the past.

I tend to enjoy racers more than anything else, and I've noticed something about two games in particular: PGR and Gran Turismo. They both use camera-based blur (GT only in replays), where the blur is based entirely on the movement of the camera, i.e. the camera rotates X degrees, therefore the screen should blur Y pixels in Z direction. Simple enough. I could do that sort of thing by hand, keyframing a blur effect in After Effects or something (in fact, AE can do it automatically so long as the object is moving in AE itself).
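Just to show how simple that relationship is, here's a back-of-the-envelope version in C++ (my own toy sketch and made-up numbers, nothing from PGR or GT):

```cpp
#include <cstdio>

// If the camera yaws by degPerFrame degrees in one frame, everything in
// view slides sideways by roughly that fraction of the horizontal FOV.
// Small-angle approximation, plenty for a full-screen blur hint.
float PanBlurPixels(float degPerFrame, float hfovDeg, float screenWidth)
{
    return (degPerFrame / hfovDeg) * screenWidth;
}

int main()
{
    // Made-up numbers: a 3-degree-per-frame pan, 90-degree FOV, 1280-wide screen.
    std::printf("blur ~ %.0f pixels\n", PanBlurPixels(3.0f, 90.0f, 1280.0f));
}
```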

The downside is that it seems to apply only to the environment, and not to things in the environment. For example, the cars. PGR does in-game motion blur on the track, and even the interior of the car. But other cars driving by have no blur whatsoever. Likewise for replays. Both GT and PGR have blurring environments when the camera pans, but the cars are razor sharp the whole way through. Sort of ruins the effect. Maybe I'm being nitpicky, but blur is one thing I always try to get right in CG-land.

Why is it so hard for games to do per-pixel blurring based on motion vectors?

Here's how I normally do motion blur in CG: I render out the image sequence from LightWave with no blur, usually with medium 9-pass antialiasing. At the same time, I render out a separate sequence of motion vectors (taken from the render buffer), which take the form of psychedelic-colored images. I bring these into After Effects, and AE (through various plugins) reads the multi-colored images as motion vectors: a particular color means that pixel is moving in X direction by Y pixels, and AE applies those vectors to the full LW render at a per-pixel level. It requires only minimal human intervention to execute, which could probably be automated with a few simple algorithms. The result is extremely good, and extremely fast, especially when compared to what LW can do natively.
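To make that concrete, decoding one of those vector images is about this much work. This is my own guess at a typical encoding, not LightWave's actual format:

```cpp
#include <cstdint>
#include <cstdio>

struct Vec2 { float x, y; };

// Hypothetical encoding: 128 in a channel means "not moving"; the full
// 0..255 range spans +/- maxSpeedPixels. Red carries X, green carries Y.
Vec2 DecodeMotionVector(uint8_t r, uint8_t g, float maxSpeedPixels)
{
    Vec2 v;
    v.x = ((int)r - 128) / 127.0f * maxSpeedPixels;
    v.y = ((int)g - 128) / 127.0f * maxSpeedPixels;
    return v;
}

int main()
{
    Vec2 v = DecodeMotionVector(255, 128, 32.0f);          // a pure-red pixel
    std::printf("moving (%.1f, %.1f) pixels\n", v.x, v.y); // -> (32.0, 0.0)
}
```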

Now, consider that LW would take a minute or more to render what a lot of games can do in 1/30th of a second or less. So if AE can do this high level of per-pixel vector blurring in only two or three seconds per frame, it should be a cakewalk for a game to do the same thing in a tiny fraction of a frame. Granted, I know it's not that easy.. hehe. But why not? What is it about "real" blurring that seems to befuddle most game developers?

And why is it that when developers can't get it right, they somehow manage to totally screw it up in the attempt? I've heard a lot of complaints on gaming forums about the god-awful motion blur in this game or that game, and I keep trying to tell them that the effect can be nearly invisible if it's done correctly. PGR, for example.. although limited to camera movement, most people I've mentioned it to on forums seem surprised.. they didn't even realize the game uses motion blur. Because it's done correctly, rather than simply smearing all the pixels in one direction or another, or using some sort of idiotic image filter to "simulate" motion blur.

They end up making it stick out like a cheap "lookie my filter!!" effect (light bloom, lens flare, anyone?), rather than the subtle effect it's supposed to be. Because it should be subtle.. you should have to look for it to even realize it's there; it should seem a natural part of the image. Some developers realize this and simply leave it out when they can't get it right. Others try anyway, and it ends up getting on people's nerves. Both casual gamers who think that blur is evil, and graphics whores like myself who get upset, not because they did it, but because they did it wrong.
 
The motion blur you are asking for is a solved problem. It's not even that hard if you plan for it up front. You can read all about the details of implementing it by searching for "Deferred Rendering in Killzone".

Few games implement it because it is expensive. It takes significant development time, frame time and memory. It's not just a matter of "be cooler and make it faster"; adding a feature like this requires giving up time and memory for other things. Calculating motion vectors requires recalculating the previous frame's vertex transforms so that you can find the difference between those and the current frame's. That effectively cuts your vertex budget in half. Storing motion vectors requires a 1280x720 render target at 2 bytes per pixel; that's about 1.8MB, enough memory for 14 512x512 DXT1 textures. Also, keep in mind that you only get 33 milliseconds in a 30fps game. An individual post-process effect is probably only going to be granted 2 or 3 milliseconds out of the budget at most, which is the equivalent of running at 333-500fps. Meanwhile, comparing the work that LW does to render a scene to the work it does to apply an image filter is not reasonable. If a filter took 2 or 3 seconds to apply in Photoshop, it would be clearer how difficult it is to optimize that down to hundreds of frames per second.

Camera-motion-only blur, however, can be constructed from the current depth buffer and the current and previous camera transforms. It can be done without any extra vertex work and, on the 360, it can be done in one pass with no extra render target memory. However, it doesn't look good on the cars, so usually the cars are drawn in after the blur.
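If anyone's curious, the guts of that look roughly like this. It's a bare-bones C++ sketch of the usual depth-buffer reprojection trick (in the spirit of the GPU Gems 3 motion blur chapter); the details are my simplification, not any shipping game's code:

```cpp
#include <cstdio>

// Per pixel: rebuild a clip-space position from depth, unproject with the
// inverse of the current view-projection, reproject with last frame's
// view-projection, and the screen-space delta is your blur vector.
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };  // row-major

Vec4 Mul(const Mat4& a, const Vec4& v)
{
    Vec4 r;
    r.x = a.m[0][0]*v.x + a.m[0][1]*v.y + a.m[0][2]*v.z + a.m[0][3]*v.w;
    r.y = a.m[1][0]*v.x + a.m[1][1]*v.y + a.m[1][2]*v.z + a.m[1][3]*v.w;
    r.z = a.m[2][0]*v.x + a.m[2][1]*v.y + a.m[2][2]*v.z + a.m[2][3]*v.w;
    r.w = a.m[3][0]*v.x + a.m[3][1]*v.y + a.m[3][2]*v.z + a.m[3][3]*v.w;
    return r;
}

// uv in [0,1], depth from the depth buffer in [0,1].
// Returns the screen-space velocity in UV units.
void CameraBlurVector(float u, float v, float depth,
                      const Mat4& invCurrViewProj, const Mat4& prevViewProj,
                      float* outDu, float* outDv)
{
    // Clip-space position of this pixel in the current frame.
    Vec4 clip = { u*2.0f - 1.0f, v*2.0f - 1.0f, depth, 1.0f };

    // Back to world space...
    Vec4 world = Mul(invCurrViewProj, clip);
    world.x /= world.w; world.y /= world.w; world.z /= world.w; world.w = 1.0f;

    // ...and forward through last frame's camera.
    Vec4 prev = Mul(prevViewProj, world);
    float pu = (prev.x / prev.w) * 0.5f + 0.5f;
    float pv = (prev.y / prev.w) * 0.5f + 0.5f;

    *outDu = u - pu;
    *outDv = v - pv;
}

int main()
{
    Mat4 I = {{{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}}};
    float du, dv;
    CameraBlurVector(0.25f, 0.75f, 0.5f, I, I, &du, &dv);
    std::printf("static camera -> blur (%.3f, %.3f)\n", du, dv);  // ~ (0, 0)
}
```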

I agree with the sentiment that bad effects are worse than no effect at all.

<rant that's not really about your post so don't feel flamed>
This forum is better than most, but "devs are stupid and lazy" posts really piss me off. Sometimes we are stupid, but it's rare that we're lazy. More often, the uber-leetness armchair developers pine for just wasn't feasible in the game's limited budget.
</rant>
 
Thanks, Corysama, I'll take a look at that white paper for Killzone (I remember hearing about it, but didn't read it because I figured most of it would be way over my head.. hehe).

Good explanation, too, and pretty much what I was expecting. Possible, but power-consuming. I wonder if the Cell's SPUs would be a candidate for this? Shuffle the motion processing off to one of the SPUs? From what I've heard (and I don't know specifics, of course), the consensus on general gaming boards is that no one is really using the Cell very effectively at this point, that programmers are still wrapping their heads around the best way to get tasks split up properly while keeping the code optimized to run well, and not get bogged down in task management. Most folks think that developers are largely using the Cell like a "normal" chip.

And to respond to your mini-rant (I know it wasn't directed at me, nor is this directed at you, I'm just putting my two cents in), I don't think developers as a whole are lazy.. hehe. I know there are often outside pressures associated with the process that can have a huge impact on how things are done. And that a lot of decisions are made by the producer or director, who might not know much about programming.. asking for things that the programmers disagree with, or sometimes just can't do. The same thing happens in VFX. I've gotten into arguments with directors over how a shot should look, and it's an argument I nearly always lose, and end up doing it their way. Bugs me to no end, and sometimes I'll re-do the shot after it's been delivered, just for my own gratification.

Case in point about the game thing: I remember seeing a video of Gran Turismo 4, a couple months before the game went gold. It was a tour of the production process at Polyphony, and the steps that the game goes through during its life. At one point, the top producers were all gathered around a TV screen, and they were worriedly pointing at a graphical glitch that caused the fields to invert for a moment, causing the whole screen to become visibly interlaced for a few seconds at the beginning of a race, before correcting itself. The final game had this glitch, too, which tells me that they simply weren't able to fix it in time, before Sony slammed the door and forced them to go ahead and publish the game as final product. I don't blame Polyphony for that glitch, because I know that they were aware of it before the product launched, and that they would have done everything in their power to fix it if they could. Thankfully, it's easy to fix.. just pause the game, and when you restart, the fields are back in order, and no further problems for the whole race.

Most flaws I've seen in development that could, in theory, be chalked up to "lazy development" are usually the result of a large corporate decision and not the fault of the folks that are actually making the game. My displeasure in those circumstances is usually aimed at the company higher-ups and not the programmers. For example, what I refer to as "half-ass" PS3 ports of X360 games, that run with noticeable problems compared to the native version (usually in the form of piss-poor framerates). I don't think the porters were lazy, just that they were likely rushed to make the port without the time or manpower that they needed to do it properly. That's not the programmer's fault.. that's the big corporation that says "I'll give you five people and one month to get it done". I don't want to say anything specific because I don't know who you work for, and screaming at a particular publisher might just piss you off if that happens to be you.. hehe. :) On the flip side, if that's NOT you, you'll probably agree with me. :)

Anyway, good info, and I'll check out that Killzone paper. Thanks, Corysama. :)
 
Just to clarify one point corysama made...

Calculating motion vectors will only increase your vertex workload if you do the motion vector calculation in a separate pass. As you'll see in the Killzone presentation, they calculate and spit out their motion vectors along with everything else they put in their G-buffer, using the RSX's ability to output to multiple render targets. Of course you still take up memory, bandwidth, and post-processing time even with that method.
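In sketch form, the idea is something like this (my own illustration in C++, not Killzone's actual G-buffer layout, and the fixed-point velocity encoding is made up):

```cpp
#include <cstdint>

// The vertex shader transforms each vertex by BOTH this frame's and last
// frame's matrices and interpolates both clip positions down to the pixel:
//   currClip = viewProj     * world     * position
//   prevClip = prevViewProj * prevWorld * position
struct GBufferPixel
{
    uint8_t albedo[4];    // RT0
    uint8_t normal[4];    // RT1
    int16_t velocity[2];  // RT2 -- written in the same pass, no second vertex pass
};

// Perspective-divide both positions and store the screen-space delta.
// The 16-bit fixed-point scale here is a hypothetical encoding.
void WriteVelocity(float cx, float cy, float cw,
                   float px, float py, float pw,
                   GBufferPixel* out)
{
    float du = cx / cw - px / pw;
    float dv = cy / cw - py / pw;
    out->velocity[0] = (int16_t)(du * 4096.0f);
    out->velocity[1] = (int16_t)(dv * 4096.0f);
}

int main()
{
    GBufferPixel g = {};
    WriteVelocity(0.5f, 0.0f, 1.0f, 0.4f, 0.0f, 1.0f, &g);  // moved +0.1 in x
}
```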
 
Here's how I normally do motion blur in CG: I render out the image sequence from LightWave with no blur, usually with medium 9-pass antialiasing. At the same time, I render out a separate sequence of motion vectors (taken from the render buffer), which take the form of psychedelic-colored images. I bring these into After Effects, and AE (through various plugins) reads the multi-colored images as motion vectors: a particular color means that pixel is moving in X direction by Y pixels, and AE applies those vectors to the full LW render at a per-pixel level. It requires only minimal human intervention to execute, which could probably be automated with a few simple algorithms. The result is extremely good, and extremely fast, especially when compared to what LW can do natively.

Now, consider that LW would take a minute or more to render what a lot of games can do in 1/30th of a second or less. So if AE can do this high level of per-pixel vector blurring in only two or three seconds per frame, it should be a cakewalk for a game to do the same thing in a tiny fraction of a frame. Granted, I know it's not that easy.. hehe. But why not? What is it about "real" blurring that seems to befuddle most game developers?
It's not that easy, unfortunately. And your extrapolations are based on faulty logic.

Lightwave is a raytracer. Games rasterize. They are totally different, with the latter being much faster. Just because the scene renders 1000x faster in a game doesn't mean the motion blur can be.

AE makes really good motion blur because it uses scatter. A pixel has a velocity vector associated with it, so it is blurred by writing it out to neighbouring pixels. This is a very good approximation to the real temporally supersampled motion blur.
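Here's a toy grayscale version of scatter blur in C++ to show what I mean (purely illustrative, nothing like AE's actual plumbing):

```cpp
#include <cmath>
#include <vector>

// Scatter blur: each pixel WRITES itself out along its own velocity.
// CPU-friendly, GPU-hostile. Accumulation is normalized at the end.
void ScatterBlur(const std::vector<float>& src,
                 const std::vector<float>& velX,
                 const std::vector<float>& velY,
                 int w, int h, int samples,
                 std::vector<float>& dst)
{
    std::vector<float> accum(w * h, 0.0f), weight(w * h, 0.0f);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            int i = y * w + x;
            for (int s = 0; s < samples; ++s)
            {
                // Spread along the vector, centered on the pixel, so both
                // the leading and trailing edges get covered.
                float t = (s + 0.5f) / samples - 0.5f;
                int px = (int)std::lround(x + velX[i] * t);
                int py = (int)std::lround(y + velY[i] * t);
                if (px < 0 || px >= w || py < 0 || py >= h) continue;
                accum[py * w + px] += src[i];
                weight[py * w + px] += 1.0f;
            }
        }
    dst.assign(w * h, 0.0f);
    for (int i = 0; i < w * h; ++i)
        if (weight[i] > 0.0f) dst[i] = accum[i] / weight[i];
}
```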

In games, GPUs can only do gathers at speed. Image-space motion blur on a GPU involves reading a pixel's motion vector and assuming neighbouring pixels in that direction blur onto the pixel you're working on. This is only a good approximation of the above in areas where the motion vector is similar. Unfortunately, this often means edges of objects don't blur when they should and vice versa. Sometimes you can limit artifacts with a stencil buffer or, as corysama suggested for racing games, by drawing some objects afterwards.
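And the gather version of the same toy blur; note where the "my neighbours share my motion" assumption sneaks in:

```cpp
#include <cmath>
#include <vector>

// Gather blur: each pixel READS its neighbours along its OWN velocity
// vector. GPU-friendly, but built on the assumption that the neighbours
// move the same way this pixel does -- exactly what breaks at edges.
void GatherBlur(const std::vector<float>& src,
                const std::vector<float>& velX,
                const std::vector<float>& velY,
                int w, int h, int samples,
                std::vector<float>& dst)
{
    dst.resize(w * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            int i = y * w + x;
            float sum = 0.0f; int n = 0;
            for (int s = 0; s < samples; ++s)
            {
                float t = (s + 0.5f) / samples - 0.5f;
                int px = (int)std::lround(x + velX[i] * t);
                int py = (int)std::lround(y + velY[i] * t);
                if (px < 0 || px >= w || py < 0 || py >= h) continue;
                sum += src[py * w + px];  // neighbour's colour, this pixel's vector
                ++n;
            }
            dst[i] = n ? sum / n : src[i];
        }
}
```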

The motion blur you are asking for is a solved problem. It's not even that hard if you plan for it up front. You can read all about the details of implementing it by searching for "Deferred Rendering in Killzone".
KZ2 has the same problems I mentioned above. The edges of objects don't blur. It's just their interiors that do. IMO, this isn't a solved problem by any means, and KZ2's motion blur is really overrated.

Even for the camera-only motion blur, only rotational movement gets blurred correctly since the blur amount changes very gradually across the screen. Positional movement suffers from the same gather vs. scatter problem mentioned above, and you see this in racing games.
BTW: The Capcom "Framework" discussion led me to an old presentation on how to implement motion blur using pixel velocities calculated in the vertex shader.
Now Capcom does indeed modify the geometry when it does motion blur, and I think you're right that it's the same way mentioned in that NVidia document. Unfortunately, problems arise when two objects blur onto the same pixel or are even just near each other. You get quite a few artifacts. Just have to hope that the motion is fast enough that the gamer doesn't notice it. I personally think it worked pretty well in Lost Planet.
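For reference, the geometry trick in that NVidia sample boils down to something like this (my paraphrase in plain C++, not Capcom's actual code):

```cpp
// Vertices whose normal faces AGAINST the motion get snapped back to last
// frame's position, smearing the silhouette along the direction of travel.
struct V3 { float x, y, z; };

static float Dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

V3 StretchVertex(V3 currPos, V3 prevPos, V3 normal)
{
    V3 motion = { currPos.x - prevPos.x,
                  currPos.y - prevPos.y,
                  currPos.z - prevPos.z };
    // Leading side keeps this frame's position; trailing side trails behind.
    return (Dot(normal, motion) >= 0.0f) ? currPos : prevPos;
}

int main()
{
    V3 curr = {1, 0, 0}, prev = {0, 0, 0};
    V3 front = {1, 0, 0}, back = {-1, 0, 0};
    StretchVertex(curr, prev, front);  // stays at curr
    StretchVertex(curr, prev, back);   // snaps back to prev
}
```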
<rant that's not really about your post so don't feel flamed>
This forum is better than most, but "devs are stupid and lazy" posts really piss me off. Sometimes we are stupid, but it's rare that we're lazy. More often, the uber-leetness armchair developers pine for just wasn't feasible in the game's limited budget.
</rant>
One thing I've wondered is why devs don't implement tiling more universally. Am I mistaken in thinking that nearly all games implement bounding volumes and object-level frustum culling? It seems so ridiculously easy to simply use a narrower frustum for each tile and traverse the scene graph multiple times.
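In toy form, I'm picturing something like this (my own sketch with made-up simplifications: view-space bounding spheres, a symmetric FOV, and a deliberately conservative in-tile test):

```cpp
#include <cstdio>
#include <vector>

// Split the view into vertical tiles, shrink the horizontal bounds of the
// frustum to each tile, and reuse the same bounding-sphere test per tile.
// View space: camera at origin looking down +Z.
struct Sphere { float x, y, z, r; };

// Is the sphere at least partly inside the horizontal slice
// ndcX in [lo, hi] (where the full screen is [-1, +1])?
bool InTile(const Sphere& s, float tanHalfFov, float lo, float hi)
{
    if (s.z + s.r <= 0.0f) return false;      // entirely behind the camera
    float xLo = s.z * tanHalfFov * lo - s.r;  // conservative bounds,
    float xHi = s.z * tanHalfFov * hi + s.r;  // padded by the radius
    return s.x >= xLo && s.x <= xHi;
}

int main()
{
    std::vector<Sphere> scene = { {-4, 0, 10, 1}, {0, 0, 10, 1}, {4, 0, 10, 1} };
    const int numTiles = 4;
    for (int t = 0; t < numTiles; ++t)
    {
        float lo = -1.0f + 2.0f * t / numTiles;
        float hi = lo + 2.0f / numTiles;
        int visible = 0;
        for (const Sphere& s : scene)
            if (InTile(s, 1.0f, lo, hi)) ++visible;  // 90-degree FOV
        std::printf("tile %d: %d objects\n", t, visible);
    }
}
```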
 
Now Capcom does indeed modify the geometry when it does motion blur, and I think you're right that it's the same way mentioned in that NVidia document. Unfortunately, problems arise when two objects blur onto the same pixel or are even just near each other. You get quite a few artifacts. Just have to hope that the motion is fast enough that the gamer doesn't notice it. I personally think it worked pretty well in Lost Planet.

Agreed. I love the MB in Lost Planet... that and the particle system. Everyone should use quarter-size particle buffers!

One thing I've wondered is why devs don't implement tiling more universally. Am I mistaken in thinking that nearly all games implement bounding volumes and object-level frustum culling? It seems so ridiculously easy to simply use a narrower frustum for each tile and traverse the scene graph multiple times.

New thread worthy!
 
Agreed. I love the MB in Lost Planet... that and the particle system. Everyone should use quarter-size particle buffers!
I'm going OT, but IMO the explosions are far and away the best in any game. The smoke looked really lame in early builds, but check out the explosion at 16 seconds in here:

http://xboxmovies.teamxbox.com/xbox-360-hires/3978/Multiplayer-Mech-Warfare-HD/

That's the kind of effect where I wouldn't even know where to begin if I were implementing it myself. It's got great volume to it and the motion is awesome. Not your simple animated sprites (though occasionally a smoke fadeaway looks that way).

This video is also a good example of where the motion blur works well on most objects despite showing artifacts when pausing, but looks rather ugly on the minigun.
 
Good discussion, guys, thanks. :)

I was watching some GT5P replays, and noticed that they did put motion blur on the car wheels (finally.. the wagon-wheel effect on non-blurred wheels in GTHD was getting on my nerves). I'm guessing that they went the way of past GT games and most other racing games, and simply swap out the polygon model for a flat poly textured with a pre-blurred image.

Workarounds in that situation appear to be necessary, as per-pixel blurring based on motion vectors alone doesn't work on rotating objects. I noticed it in the early gameplay footage of LittleBigPlanet, when they roll the skateboard down the hill at the end. The wheels "fuzz out" due to the motion vectors pointing off in odd directions. It was something I was able to easily duplicate in LightWave using the method I described up top. It's easy to see why that sort of blurring wouldn't work in a racing game, where every car would be riding around on four tribbles instead of tires.

Good call bringing up Lost Planet, though.. although I never bought the game, I remember noticing the excellent use of motion blur in the demo.

The wheels in GT5P do behave oddly, though, and I haven't stopped the video yet to examine what it's doing. When the cars drive by, at times part of the wheel (usually the upper or lower half) appears to lose its motion blur. It seems to depend on the angle of the wheels in relation to the camera, and possibly the combination of the wheels and the blurred roadway behind them (which would explain why it affects only the top/bottom of the wheel rather than front/back, because of the blurred background seen underneath the car).

There's visible banding/stepping on high-speed whip pans, but that's probably unavoidable. And perhaps my eye is trained to spot that sort of artifact from my time working in LightWave, where I would usually push to avoid things like that (and so keep an eye out for it in early renders). So maybe it's not even visible to the average gamer. :)
 
AE makes really good motion blur because it uses scatter. A pixel has a velocity vector associated with it, so it is blurred by writing it out to neighbouring pixels. This is a very good approximation to the real temporally supersampled motion blur.
This is rather hacky; you are ignoring that the invisible surface below the leading edge should be contributing but obviously can't ... also, to make it work on both the leading and trailing edges, you have to scatter half forward and half backward. In effect you are partly projecting forward in time, which could get a little bit jerky if the underlying assumption of constant motion gets violated.

If you make the simplifying assumption that there will only ever be two surfaces with translational motion contributing to a pixel, I have a sneaking suspicion you could do this with a pure gathering approach and make it fast.
 
This is rather hacky; you are ignoring that the invisible surface below the leading edge should be contributing but obviously can't
Of course, and that's why I said it's an approximation of temporally supersampled motion blur.

However, it's a very good approximation. You only apply this effect to things that are moving fast and are blurred, which is the best case for ignoring artifacts. That's why scatter motion blur (like in After Effects) looks nearly perfect. You could even handle rotational motion with a curvature term.

The problem is that scatter blur is not doable on GPUs with any reasonable efficiency. You need to use gather blur, which has a lot more artifacts, and thus needs a lot of hacks to make it look good in the general case.

One specific case, however, that works well with gather blur is car game replays. The camera is focused on the car (i.e. no blur) and everywhere else pixels have velocities similar to those of neighbouring pixels. You can just mark car pixels and ignore them in the gather blur.
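Extending the toy gather loop from my earlier post with that mask (again just a sketch; I'm cheating with a single uniform camera velocity instead of a per-pixel buffer):

```cpp
#include <cmath>
#include <vector>

// A one-bit mask marks "car" pixels. Masked pixels are neither blurred
// nor sampled from, so the background streaks past a crisp car.
void GatherBlurMasked(const std::vector<float>& src,
                      const std::vector<bool>& carMask,
                      float velX, float velY,  // camera blur, roughly uniform
                      int w, int h, int samples,
                      std::vector<float>& dst)
{
    dst = src;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            if (carMask[y * w + x]) continue;       // leave the car sharp
            float sum = 0.0f; int n = 0;
            for (int s = 0; s < samples; ++s)
            {
                float t = (s + 0.5f) / samples - 0.5f;
                int px = (int)std::lround(x + velX * t);
                int py = (int)std::lround(y + velY * t);
                if (px < 0 || px >= w || py < 0 || py >= h) continue;
                if (carMask[py * w + px]) continue; // don't smear the car out
                sum += src[py * w + px];
                ++n;
            }
            if (n) dst[y * w + x] = sum / n;
        }
}
```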
 
FYI,

Project Offset uses realtime motion blur that is calculated against every object in the scene. It's not a post-process effect, and so far it's the only engine I can think of that does this.

Download one of their "sneak peek" videos; it demonstrates the motion blur, and it isn't only camera-based.

http://projectoffset.com/
 
Listen to the commentary ... "the engine performs motion blur as a post process" :) It does work across edges though.
 
Project Offset uses realtime motion blur that is calculated against every object in the scene. It's not a post-process effect, and so far it's the only engine I can think of that does this.
You forgot about Lost Planet and Capcom's Framework engine. We had some good commentary on this method of motion blur but it got lost in the B3D crash.
Listen to the commentary ... "the engine performs motion blur as a post process" :) It does work across edges though.
Well, even when you do a better motion blur using a motion vector pass with a vertex shader, the blurring itself is done as a post process.

Of course, it's possible that they came up with a method entirely different to anything we've seen before.
 
The wheels in GT5P do behave oddly, though, and I haven't stopped the video yet to examine what it's doing. When the cars drive by, at times part of the wheel (usually the upper or lower half) appears to lose its motion blur. It seems to depend on the angle of the wheels in relation to the camera, and possibly the combination of the wheels and the blurred roadway behind them (which would explain why it affects only the top/bottom of the wheel rather than front/back, because of the blurred background seen underneath the car).

The wheel blur in GT is pretty interesting. It doesn't seem to be as simple as fading in a pre-blurred disc based on rotational speed. In real life, if the camera were still as a car passes by, the bottom of the wheels would not be blurred while the top would be, since the bottom would be more-or-less stationary relative to the camera. If the camera were panning in the forward direction of the car, but twice as fast, then the top would not be blurred and the bottom would be. Perhaps the partial blurring of the GT wheels is trying to mimic this effect.
 