A comparison of motion blur implementations *spawn

"The best" is hard to define because everyone has different opinions on how strong the effect should be in the first place. But if we want to speak objectively, i don't think there's a better implementation in a game than Driveclub photomode (which isn't real-time, but that's another discussion), maybe next-gen material?

21699864412_e068f6992c_o.jpg

21090359263_0b33f27b8f_o.jpg

21699851662_b435d01c02_o.jpg

21523456860_91135d3e2f_o.jpg

18602333519_02a0424fe7_o.jpg

20663423269_9b2e9377e7_o.jpg
 
'Best' can be defined as having no artefacts and applying both camera and object motion blur effectively. Whether we like the results or not doesn't matter. So Uncharted 4, for example, has situations where it works quite well, and others where it breaks (on an unfinished game). Character motion blur is actually the hardest to get right, as different parts of a character can be moving along different vectors, so that's probably the area we want to concentrate on. Does any game nail character moblur 100% of the time?
 
I am really intrigued by the SSR in those Driveclub shots. As MB is usually done as a post process, it blurs the entirety of the moving areas, which would incorrectly blur the reflections along with the asphalt texture. Here, though, the asphalt is blurred while the reflections stay sharp. Very good results.
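To make the point concrete, here's a rough sketch of what a typical velocity-buffer post-process blur does and why it can't tell a reflection from the surface it sits on: it only sees the finished colour buffer plus a per-pixel velocity buffer, so everything already composited into a pixel (SSR included) gets smeared together. This is just an illustrative NumPy toy, not how Driveclub or any specific engine does it; all the names and buffers here are made up.

    import numpy as np

    def postprocess_motion_blur(color, velocity, num_samples=8):
        # Naive post-process motion blur: average taps along each pixel's
        # screen-space velocity. It operates on the finished colour buffer,
        # so anything composited into it (SSR, decals, ...) is blurred too.
        h, w, _ = color.shape
        ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
        result = np.zeros_like(color, dtype=np.float32)
        for i in range(num_samples):
            # tap positions spread from -0.5 to +0.5 of the velocity vector
            t = (i + 0.5) / num_samples - 0.5
            sx = np.clip(xs + velocity[..., 0] * t, 0, w - 1).astype(int)
            sy = np.clip(ys + velocity[..., 1] * t, 0, h - 1).astype(int)
            result += color[sy, sx]
        return result / num_samples

    # Hypothetical usage: a frame with a uniform 12-pixel horizontal pan.
    frame = np.random.rand(270, 480, 3).astype(np.float32)
    vel = np.zeros((270, 480, 2), dtype=np.float32)
    vel[..., 0] = 12.0
    blurred = postprocess_motion_blur(frame, vel)

Presumably the photo mode side-steps this entirely by accumulating whole frames instead (as discussed further down the thread), which would keep the reflections consistent with the motion.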
 
I am really intrigued by the SSR in those Driveclub shots. As MB is usually done as a post process, it blurs the entirety of the moving areas, which would incorrectly blur the reflections along with the asphalt texture. Here, though, the asphalt is blurred while the reflections stay sharp. Very good results.

Then you'll love this one
16454465708_daac62cab5_o.jpg


Some more shots from my album. I wonder if we'll ever get this quality of moblur in real time (maybe it's unnecessary to have such quality in real time).
17185808261_ba99984ff3_o.jpg

16967789170_3eea0ce7f4_o.jpg

16947928537_e243d847ea_o.jpg

16452618934_d7ba028204_o.jpg
 
"The best" is hard to define because everyone has different opinions on how strong the effect should be in the first place. But if we want to speak objectively, i don't think there's a better implementation in a game than Driveclub photomode (which isn't real-time, but that's another discussion), maybe next-gen material?


If photomode counts then you might as well count prerendered cutscenes.
 
Your 4th picture shows how shadows also behave correctly. Makes me think their photo-mode solution is just brute-forcing this by accumulating multiple frames into one pic and calling it a day.
 
Your 4th picture shows how shadows also behave correctly. Makes me think their photo-mode solution is just brute-forcing this by accumulating multiple frames into one pic and calling it a day.
Yep. Photomode in Driveclub is using accumulation to handle every applicable problem.

Haven't used it in a while, but I recall it having sampling issues if you cranked things too high. It would seemingly take the same number of samples regardless of what you're doing, so if you maxed out the MB and used tons of DoF, it would stop accumulating before things stopped being grainy.
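For anyone curious what "accumulation" means concretely: render the scene several times at jittered sub-frame times and average, so shadows, reflections and DoF all blur consistently for free. It also explains the grain: with a fixed sample budget, the more shutter time and DoF you ask for, the further apart the samples land. A toy sketch, with a hypothetical render(t) callback standing in for the engine's frame render:

    import numpy as np

    def accumulate_frame(render, shutter_open, shutter_close, num_samples=32):
        # Brute-force accumulation blur: render the scene at several jittered
        # times inside the shutter interval and average the results. Because
        # each sample is a full render, shadows, reflections and DoF are all
        # blurred consistently; too few samples just shows up as grain.
        acc = None
        for i in range(num_samples):
            jitter = (i + np.random.rand()) / num_samples
            t = shutter_open + jitter * (shutter_close - shutter_open)
            sample = render(t).astype(np.float32)
            acc = sample if acc is None else acc + sample
        return acc / num_samples

    # Hypothetical stand-in renderer: a bright dot sweeping left to right.
    def render(t, h=90, w=160):
        img = np.zeros((h, w, 3), dtype=np.float32)
        img[h // 2, int(t * (w - 1))] = 1.0
        return img

    blurred = accumulate_frame(render, shutter_open=0.0, shutter_close=0.5)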
 
Our eyes also blur the images we process. The easiest way to test that is waving your hand in front of your face like an idiot (or... John Cena?). You lose all that information no matter how hard you try to capture it, and cameras are actually much better at filtering out motion blur at very fast shutter speeds (they effectively handle changes in light much faster than our eyes can).

Some examples
To see why this is a fallacy for gaming, do the very same experiment, but this time move the hand at the same velocity in a single sweep from one side to the other, letting your eyes track it. Notice that the hand now does not blur. For blurring to work you HAVE TO know what the viewer is looking at, and current systems can't, which means even a theoretically ideal algorithm will produce unnatural effects. Is a player looking across the street staring straight ahead (traffic blurred, background not), or following the cars with his eyes? And is he following the traffic from left to right, or from right to left?

While blurring is a naturally occurring phenomenon, that doesn't mean it can be convincingly simulated in games. Like DoF effects, the moment the player looks at something other than what the game designer expected, the experience is jarringly unnatural.
 
To see why this is a fallacy for gaming, do the very same experiment, but this time move the hand at the same velocity in a single sweep from one side to the other, letting your eyes track it. Notice that the hand now does not blur. For blurring to work you HAVE TO know what the viewer is looking at, and current systems can't, which means even a theoretically ideal algorithm will produce unnatural effects. Is a player looking across the street staring straight ahead (traffic blurred, background not), or following the cars with his eyes? And is he following the traffic from left to right, or from right to left?

While blurring is a naturally occurring phenomenon, that doesn't mean it can be convincingly simulated in games. Like DoF effects, the moment the player looks at something other than what the game designer expected, the experience is jarringly unnatural.
It's only unnatural if you consider the game view as your eyes instead of a camera. AFAIK pretty much all developers turn off MB in VR applications and not just for performance or aesthetics.
 
It's only unnatural if you consider the game view as your eyes instead of a camera. AFAIK pretty much all developers turn off MB in VR applications and not just for performance or aesthetics.
...but because it, regardless of implementation, causes unnatural effects.
Games pretty much always cast the player as the "doer", not as the viewer of an optical recording of something that happened. There is no camera, either actual or conceptual.
 
Many games apply motion blur even when objects/characters are moving very slowly; that is a case where motion blur can be considered "unnatural".
An example is The Last of Us, where you get motion blur on characters as soon as they walk or crawl, or every time the camera starts to move.

Aside from objects that are moving fast (car wheels spinning at 2000 rpm; a tennis ball travelling at 200 km/h), motion blur in games is used to simulate a cinematographic effect, a camera phenomenon, or to create an "artistic effect" rather than to simulate human vision.
From my experience the speed at which an object travels is not the deciding factor when it comes to using motion blur... most developers simply apply MB to anything that moves :yep2:
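If developers did want speed to be the deciding factor, the gate would be cheap: derive a blur weight from the screen-space velocity magnitude and fade it in only above a threshold, so a walking or crawling character never gets smeared. A hedged sketch of that idea; the function and thresholds are made-up illustrations, not taken from any shipped game:

    import numpy as np

    def blur_scale_from_velocity(velocity, min_px=2.0, full_px=16.0):
        # Gate motion blur by screen-space speed: no blur below min_px
        # pixels/frame, full blur above full_px, linear ramp in between.
        # The thresholds are illustrative, not from any real title.
        speed = np.linalg.norm(velocity, axis=-1)
        scale = np.clip((speed - min_px) / (full_px - min_px), 0.0, 1.0)
        return scale[..., None]   # broadcastable over the velocity components

    # Hypothetical usage: attenuate the velocity buffer before the blur pass.
    vel = np.zeros((270, 480, 2), dtype=np.float32)
    vel[..., 0] = 1.5   # slowly crawling character: below the threshold, so no blur
    gated_vel = vel * blur_scale_from_velocity(vel)

Most titles seem to skip a gate like this (or set it near zero), which is exactly the "blur anything that moves" behaviour described above.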
 
Games pretty much always cast the player as the "doer", not as the viewer of an optical recording of something that happened. There is no camera, either actual or conceptual.

Whaaaaaaa?!?!? I always thought it was Lakitu filming me play with a camera hanging from a fishing rod.
 
...but because it, regardless of implementation, causes unnatural effects.
Games pretty much always cast the player as the "doer", not as the viewer of an optical recording of something that happened. There is no camera, either actual or conceptual.
If unnaturalness is the problem, then it can also be argued that a lack of motion blur ALSO produces unnatural effects.
 
Does anyone have high-quality screens of Quantum Break? I played it at a private MS event a few days ago and the blur was really unnerving. I initially thought it had to do with the sub-HD resolution, but when standing still everything appeared to be much sharper. It's some kind of temporal blur, because on characters it produced a lot of ghosting. Which is really a shame, because the 2014 footage had extremely high-quality motion blur without the blurring of all detail and the ghosting artefacts. It was comparable to The Order with regards to the CGI look. Now... with all the low-resolution reflection flickering and unstable ambient occlusion, it seems like they are developing the game backwards, seeing as the 2014 versions had much better motion blur, as well as graphics.
 
Is that motion blur or some weird spatial blur? This game has a lot of effects!

Yeah, it's quite heavy on the post-processing side. And I changed my mind about that with QB; I think it fits the game perfectly even though it can look quite fuzzy at times. I also think these are maybe PC screenshots, or at least some of them are. Like this one:
image_quantum_break-31039-2722_0007.jpg
 
Yeah, the PC version looks super clean in comparison. The Xbox One game in real time / in person has a muddy look, and the blur is not like... Dead Rising 1 on Xbox 360, but more like... GTA3 on PS2... if that makes any sense :D
 