Post processing is of course a very important element in our work, so it makes sense for any game to try to replicate all this for a better final image.
Any kind of CG imagery is way too perfect to feel realistic, because all the algorithms are built with simplifications and shortcuts - you can never have the molecular-level imperfections in objects, atmosphere, camera lenses and such that define practically any kind of live-action material everyone's used to. Nor is it anywhere near the world you perceive through your eyes and the way your brain interprets those signals. Whether it's a real-time engine running on limited resources or a highly sophisticated software renderer taking days for a single frame, the results are never good enough, and it's probably going to remain so for the foreseeable future.
Then there's the problem of efficiency. You can always fine-tune your models, shaders, lighting and renderer settings and still never reach perfection - but no reasonable business venture could afford that amount of time, and there are always shortcuts to modify the results to your liking. It just doesn't make sense to try to do things the "right" way when there are other options that get you to the same result.
So we do a lot, a LOT of stuff in comp.
We do prefer 3D motion blur calculated by the renderer because it's way better at handling nonlinear changes and it also takes care of stuff like changes in lighting and shadows; but depth of field is usually applied in post. We add almost all the lens effects in 2D as well - things like glows and small image imperfections like chromatic aberration or vignetting. The key is to keep it subtle, almost unnoticeable; but when you compare the raw renders to the final comp it's still night and day.
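To make that a bit more concrete, here's a rough sketch of the kind of 2D lens effects I mean - not how we actually do it (that lives in Nuke), just a minimal NumPy illustration where all the function names and parameter values are made up for the example.

```python
# Minimal sketch of lens effects applied as a 2D post step.
# Assumes a float RGB image in [0, 1]; parameters are illustrative only.
import numpy as np

def vignette(img, strength=0.35):
    """Darken the frame toward the corners with a smooth radial falloff."""
    h, w = img.shape[:2]
    y, x = np.mgrid[0:h, 0:w]
    # Normalized distance from the image center: 0 at center, ~1 at the corners.
    r = np.hypot((x - w / 2) / (w / 2), (y - h / 2) / (h / 2)) / np.sqrt(2)
    falloff = 1.0 - strength * r**2
    return img * falloff[..., None]

def chromatic_aberration(img, shift=2):
    """Crude fake of lateral CA: nudge the red and blue channels apart by a few pixels."""
    out = img.copy()
    out[..., 0] = np.roll(img[..., 0], shift, axis=1)   # red shifted right
    out[..., 2] = np.roll(img[..., 2], -shift, axis=1)  # blue shifted left
    return out

def glow(img, threshold=0.8, amount=0.3):
    """Cheap bloom: blur the bright parts and add them back on top."""
    bright = np.clip(img - threshold, 0.0, None)
    # Very rough separable box blur as a stand-in for a proper Gaussian.
    for axis in (0, 1):
        bright = (np.roll(bright, 1, axis) + bright + np.roll(bright, -1, axis)) / 3.0
    return np.clip(img + amount * bright, 0.0, 1.0)

# Chain the effects over a raw render (random noise as a stand-in frame here).
frame = np.random.rand(270, 480, 3).astype(np.float32)
final = vignette(chromatic_aberration(glow(frame)))
```

In practice each of these would be far more carefully tuned (and the CA would scale radially from the lens center rather than being a uniform shift), but the point is how cheap these tweaks are compared to touching the render itself.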
We also tweak the basic image a lot, although that's a more complex topic. Rendering in passes gives you a huge amount of control over the image: you can get separate images for things like diffuse color, SSS, reflections, atmospherics, even the contribution of each light - and compositing apps like Nuke also let you do some basic 3D work, like projecting an image into the UV space of an object or moving 2D projections in camera space. It's actually very similar to deferred rendering with a G-buffer, just taken to a whole new level.
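As a toy example of what that pass-based control looks like - the pass names and the plain additive recombination below are simplified assumptions, not any particular renderer's AOV setup:

```python
# Rough illustration of rebuilding a "beauty" image from render passes and
# tweaking each pass in comp. Float RGB images in [0, 1], NumPy only.
import numpy as np

h, w = 270, 480
passes = {
    "diffuse":    np.random.rand(h, w, 3).astype(np.float32),
    "sss":        np.random.rand(h, w, 3).astype(np.float32),
    "reflection": np.random.rand(h, w, 3).astype(np.float32),
    "atmosphere": np.random.rand(h, w, 3).astype(np.float32),
}

# Per-pass gains the compositor can dial in without re-rendering anything.
gains = {"diffuse": 1.0, "sss": 1.2, "reflection": 0.8, "atmosphere": 0.5}

# Simple additive recombination; changing a gain here is the 2D equivalent of
# changing a shader or light setting and then waiting hours for a new render.
beauty = sum(gains[name] * img for name, img in passes.items())
beauty = np.clip(beauty, 0.0, 1.0)
```

In a real comp each pass would also get its own grade, blur, or even reprojection before being summed back together, but the principle is the same: a lot of the final look gets decided in 2D.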
This is basically legacy tech from the days when raytracing and the like were computationally expensive, so studios preferred the processing efficiency of 2D over re-rendering the frames several times. Nowadays some places prefer to get as much as possible out of the renders and minimize post work; others still take advantage of the more artist-intensive 2D tweaking.
Then there's the grading, which is actually not a new thing. It was common practice in movies well before the digital age, using many different techniques; Se7en is a good example, where Fincher relied on chemical processes (silver retention) to give the imagery a special look. Even though the original drive for this was to create a unified look across material shot in different conditions, it quickly became another artistic tool in the box. Computers have of course greatly expanded the possibilities; a quick look at LOTR should give everyone a good idea.
So, post effects are an integral part of any kind of movie imagery, whether it's a VFX heavy shot or a simple one; thus, game renderers are absolutely required to implement as much of it as they can.
And then there are all the realtime techniques that also rely on post processing for effects like ambient occlusion and antialiasing and whatever else...