What do you mean by that? I'm trying to understand the difference between the two types of MB that you're talking about. What makes one require more processing time than the other, and why is it impossible in realtime?
The post-processing method of motion blur (which we've seen in PGR3, Capcom titles, Gears of War) selectively blurs the image based on an off-screen buffer containing a 2D velocity vector for each pixel. Since the velocity info for each model is generated at the exact same location where you see the model on-screen, anywhere outside the object's silhouette the velocity is zero (or it's set to the velocity of a model behind it), and therefore those pixels won't be blurred as they should be. However the blurring *should* extend past the silhouette, because with true motion blur an object appears "stretched" in the direction of movement relative to the camera.
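To make that concrete, here's a minimal CPU-side sketch of the idea (in practice this runs as a pixel shader on the GPU; the names and tap count here are mine, not taken from any of those games). Each pixel samples the color buffer along its own velocity vector and averages the taps, so a pixel whose velocity is zero just averages itself and stays sharp, which is exactly why the blur never extends past the silhouette:

```cpp
#include <vector>
#include <algorithm>
#include <cmath>

struct Vec2  { float x, y; };
struct Color { float r, g, b; };

// Blur one pixel by sampling the color buffer backwards along that
// pixel's screen-space velocity and averaging the taps. Outside a
// moving object's silhouette the stored velocity is zero, so every
// tap lands on the same pixel and the background stays unblurred.
Color blurPixel(const std::vector<Color>& color,
                const std::vector<Vec2>&  velocity,
                int width, int height, int px, int py,
                int numTaps = 8)
{
    Vec2 v = velocity[py * width + px];
    Color sum = {0, 0, 0};
    for (int i = 0; i < numTaps; ++i) {
        float t = (float)i / (numTaps - 1);  // 0..1 along the vector
        int sx = std::clamp((int)std::lround(px - v.x * t), 0, width  - 1);
        int sy = std::clamp((int)std::lround(py - v.y * t), 0, height - 1);
        Color c = color[sy * width + sx];
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    return { sum.r / numTaps, sum.g / numTaps, sum.b / numTaps };
}
```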
The "proper" method of motion blur, which you see in the photo mode, is done by actually rendering several images across a set period of time (this period of time corresponds to the time the camera's shutter is open). It's actually a method of anti-aliasing, since you get temporal aliasing when you sample at discrete time intervals. When you render multiple times to an accumulation buffer and apply a filter, you're actually doing the same kind of super-sampling you do to combat jaggies. While this isn't necessarily impossible in real-time, you can imagine how much it would kill your performance to actually render each visible frame multiple times. However since the photo-mode doesn't have to be in real time, they can just quickly render 16 or so sub-frames and produce a beautifully blurred image.