Sure, the case you describe couldn't benefit from on-chip MRTs, since you'd need to sample adjacent pixels from the G-buffer. However, I still think this kind of operation is atypical of a lighting pass. To take your example: calculating ambient occlusion in screen space yields incorrect results when, say, a dynamic opaque object is rendered in front of static geometry. All of a sudden, pixels in the static background sample adjacent pixels belonging to the object in front, which obviously changes the ambient occlusion result (it shouldn't, especially if the object happens to be far from the static background). You might mitigate this by checking object IDs and the like, but in any case you've still lost the G-buffer data of the adjacent pixels in the static background (they're covered by the object in front), so the ambient occlusion of the background becomes unstable. In practice this shows up as a "halo effect" around the silhouettes of dynamic objects.
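To make the halo problem concrete, here's a toy 1D sketch (plain Python, not a real SSAO shader; the function, radius, and depth values are all invented for illustration). It shows a background pixel's occlusion estimate jumping once a close dynamic object overwrites adjacent depth samples, and how a simple depth range check, a common halo mitigation, rejects those far-in-depth samples:

```python
def occlusion(depths, i, radius=2, range_check=None):
    """Fraction of neighbours that occlude pixel i (smaller depth = closer).

    If range_check is set, samples whose depth differs from the centre
    pixel by more than that amount are rejected (halo mitigation).
    """
    center = depths[i]
    occluders = 0
    samples = 0
    for o in range(-radius, radius + 1):
        if o == 0 or not (0 <= i + o < len(depths)):
            continue
        d = depths[i + o]
        if range_check is not None and abs(d - center) > range_check:
            continue  # sample belongs to unrelated geometry; ignore it
        samples += 1
        if d < center:
            occluders += 1
    return occluders / samples if samples else 0.0

# Static scene: flat background at depth 10 -> no occlusion.
static = [10.0] * 9
print(occlusion(static, 4))  # 0.0

# A dynamic object at depth 2 now covers pixels 0..3, destroying the
# background's G-buffer data there. The background pixel's occlusion
# jumps even though the object is far away in depth: the "halo".
covered = [2.0] * 4 + [10.0] * 5
print(occlusion(covered, 4))  # 0.5

# With a depth range check, the object's samples are rejected and the
# background estimate stays stable.
print(occlusion(covered, 4, range_check=1.0))  # 0.0
```

Note that even with the range check, the rejected samples shrink the effective sample count, so the result is still computed from less data than in the static case; the underlying G-buffer information is simply gone.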
I think we agree on the principle: on-chip MRTs are only useful as long as you don't sample adjacent pixels, which you'll need to do in certain cases anyway (post-processing, or some future screen-space lighting effect).