I can do both. I'm cool just like that.
Well, I can explain it simply, given that the papers and presentations linked in this post go into the gritty details already.
MB is obtained in real time by first rendering the frame you want to see motion blurred, and then rendering an offset buffer, or velocity buffer, in a second pass. This offset buffer is calculated on a per-vertex basis (according to the movement direction) and then stored in screenspace (as RGB, like a normal map, if you will). Finally, you apply a blur to the pixels of your original frame based on the vectorial (movement) information stored in the offset buffer.
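To make the "stored in screenspace as RGB" part concrete, here's a minimal CPU-side sketch in Python/NumPy of how a velocity could be remapped into the [0, 1] range of a color buffer. The function name and the `max_speed` normalization are my own assumptions for illustration, not taken from the papers:

```python
import numpy as np

def encode_velocity(curr_pos, prev_pos, max_speed=1.0):
    """Map screenspace velocities (current minus previous position) into
    the [0, 1] range so they can be stored like a normal map's channels.
    curr_pos / prev_pos: (H, W, 2) arrays of screenspace positions.
    max_speed is a hypothetical normalization constant."""
    velocity = curr_pos - prev_pos                # per-pixel motion vector
    encoded = velocity / (2.0 * max_speed) + 0.5  # [-max, +max] -> [0, 1]
    return np.clip(encoded, 0.0, 1.0)
```

Note that a zero velocity encodes to 0.5, the "neutral" color, exactly like a flat normal map; a real shader would do the same remap per vertex and let the rasterizer interpolate it across the triangle.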
And as they say, a picture is worth a thousand words, so here are some pictures to illustrate that:
You render the frame you want to see motion blurred, in this case a tunnel:
You calculate the velocity information and store it in the buffer as screenspace RGB information, which gives you something like this (without the pointing arrows, of course):
And finally you apply your blur effect (akin to the ones you have in Photoshop, matrix convolution and the like) to the frame. The amount of blur applied per pixel, relative to the other pixels of the frame, is determined by your velocity buffer:
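The last step above can be sketched in a few lines of Python/NumPy: for each pixel, average a handful of samples taken along the motion vector stored in the velocity buffer. This is a simplified CPU version for illustration (the sample count and clamping-at-the-edges behavior are my assumptions; a real implementation does this in a pixel shader, as the linked papers describe):

```python
import numpy as np

def motion_blur(frame, velocity, samples=8):
    """Blur each pixel of `frame` (H, W, 3) along the motion vector in
    `velocity` (H, W, 2, screenspace offsets in pixels) by averaging
    `samples` taps taken along that vector."""
    h, w, _ = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    out = np.zeros_like(frame, dtype=np.float64)
    for i in range(samples):
        t = i / (samples - 1) - 0.5  # step from -0.5 to +0.5 along the vector
        # sample positions offset along the velocity, clamped to the frame
        sy = np.clip((ys + velocity[..., 1] * t).round().astype(int), 0, h - 1)
        sx = np.clip((xs + velocity[..., 0] * t).round().astype(int), 0, w - 1)
        out += frame[sy, sx]
    return out / samples
```

Pixels with a zero velocity are left untouched, while pixels with a long motion vector get smeared across many neighbors, which is exactly the per-pixel variable blur the velocity buffer is for.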
Pictures shamelessly stolen from an ATI presentation and uploaded to xs.to.
I hope that this explanation is of some use to you, or to anyone else.
And here's the promised paper and presentations:
An academic paper on a real-time implementation:
http://www.dtc.umn.edu/~ashesh/our_papers/motion-blur.pdf
A sample article from the excellent ShaderX book series, covering an implementation by the ATI guys:
http://www.ati.com/developer/ShaderX2_MotionBlurUsingGeometryAndShadingDistortion.pdf
An ATI presentation that sums up the technique in simpler terms:
http://www.ati.com/developer/gdc/GDC2005_BringingHollywoodToRealtime.pdf
You'll find other sources on the subject in the references sections of these papers.