What, proven wrong so now you're running away? You know your two-triangle example is a load of crap, but you won't acknowledge it.
Actually, it's the use case for MSAA Z readback, but until the penny drops I see no point in pursuing this with you further. If your lighting equation depends on scene Z then you want the frequency of Z to match the sample frequency. For the best image quality on edges, each triangle that falls within a pixel must have a 1:1 relationship between Z and albedo, normal, specularity etc. Lower-resolution Z creates an edge artefact for those aspects of lighting that depend on Z.
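To put toy numbers on the frequency-matching point (these values are made up, purely illustrative): a pixel straddling a silhouette edge has subsamples at very different depths, and if lighting depends on Z, shading those subsamples with one pixel-rate Z gives a different answer than shading each with its own Z.

```python
# Hypothetical pixel straddling a silhouette edge: two subsamples hit a
# near triangle, two hit a far one. Lighting here is a simple
# distance-squared falloff from a light at the camera (assumed model).
sample_z = [1.0, 1.0, 10.0, 10.0]   # per-sample view-space depth
albedo   = [0.8, 0.8, 0.2, 0.2]     # per-sample G-buffer albedo

def light(z, albedo, intensity=100.0):
    """Distance-squared falloff from a light at the camera."""
    return albedo * intensity / (z * z)

# 1:1 Z-to-sample: each subsample lit with its own Z, then averaged.
per_sample = sum(light(z, a) for z, a in zip(sample_z, albedo)) / 4

# Pixel-rate Z: per-sample albedo but one averaged Z for the pixel.
pixel_z = sum(sample_z) / 4
mismatched = sum(light(pixel_z, a) for a in albedo) / 4

print(per_sample, mismatched)  # the far samples get lit as if mid-distance
```

The two results diverge badly precisely on edge pixels, which is where the anti-aliasing quality is supposed to come from.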
You still haven't shown why the difference between MSAA Z (i.e. hardware supersampled) and MSAA rendertarget Z gives better quality in shading.
I think you need to use more precise language because that's just a blur.
You haven't explained why anyone with half a brain would run the lighting shader on all subsamples if Z is the only thing that's slightly different.
If your lighting pass is per-sample anyway, it's detecting whether the samples in a pixel are identical (non-edge pixel) or different (edge pixel). Whether you decide to attach any meaning to a pixel where all the samples are the same but Z varies (Z will always vary unless the triangle is square-on to the camera) is up to you - but my interest here is in enhanced triangle-edge quality in deferred shading.
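The detection itself is trivial - a sketch, with made-up sample values:

```python
# A pixel is an "edge" pixel when its MSAA subsamples disagree.
# Sample tuples here stand in for G-buffer values (names are invented).
def is_edge_pixel(samples):
    """True when the subsamples of a pixel are not all identical."""
    return any(s != samples[0] for s in samples[1:])

interior = [(0.5, 0.5, 0.5)] * 4                         # one triangle covers all 4 samples
edge     = [(0.5, 0.5, 0.5)] * 2 + [(0.1, 0.9, 0.2)] * 2 # two triangles share the pixel

print(is_edge_pixel(interior))  # False - shade once per pixel
print(is_edge_pixel(edge))      # True  - shade per sample
```

Interior pixels take the cheap once-per-pixel path; only the edge pixels pay the per-sample cost.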
And the latter part is all Arun and I have a problem with. DX10.1 does nothing visible for IQ except for subtle shadow correction, which you said isn't even what you're talking about.
When triangles from two different objects share a pixel, a shared Z is meaningless. In conventional MSAA resolve this is irrelevant - but deferred rendering's shading pass must have Z in order to obtain any meaning from the G-buffer. If the shading pass reads the G-buffer at 2560x2048 (4xMSAA on 1280x1024) but reads Z at 1280x1024 then you will get subtle rendering errors where triangles from two different meshes meet within a pixel. They're subtle errors but they're there nonetheless.
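A sketch of that error mode (numbers and the reconstruction model are assumed): the shading pass reconstructs a position from Z for each G-buffer sample, so a sample belonging to the second mesh that inherits the first mesh's pixel-rate Z gets a wrong position, and everything lit from it is wrong too.

```python
# Reconstruct view-space position from Z along the pixel's view ray.
ray = (0.1, -0.2, 1.0)                  # per-pixel view ray (assumed)

def reconstruct(z):
    return tuple(c * z for c in ray)    # position = ray * view-space Z

sample_z = [2.0, 2.0, 2.0, 12.0]        # sample 3 belongs to a different mesh
pixel_z = sample_z[0]                   # pixel-rate Z: one mesh's value wins

true_pos = reconstruct(sample_z[3])
bad_pos  = reconstruct(pixel_z)
error = max(abs(a - b) for a, b in zip(true_pos, bad_pos))
print(error)   # ~10 units of position error fed into the lighting pass
```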
The performance aspect is arguable too, as you completely ignored Arun's point there.
G-buffer creation is bandwidth bound - MSAA'd creation, particularly when it makes use of the GPU's compression features, which exist precisely to save bandwidth, is a big win. Any kind of supersampled G-buffer creation is just a crushing waste of bandwidth, because none of the bandwidth savings that come from using MSAA are available.
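Back-of-envelope arithmetic makes the gap obvious. All the numbers below are assumptions for illustration - real compression ratios and edge-pixel fractions vary by GPU and scene:

```python
# With MSAA colour compression, a fully covered pixel writes roughly one
# value per render target, not one per sample; only edge pixels pay the
# full per-sample rate. SSAA pays full rate everywhere.
samples = 4
gbuffer_bytes = 16          # e.g. four 32-bit components per G-buffer texel
edge_fraction = 0.05        # assumption: ~5% of pixels are edge pixels

ssaa_cost = samples * gbuffer_bytes                      # every pixel, every sample
msaa_cost = (gbuffer_bytes * (1 - edge_fraction)
             + samples * gbuffer_bytes * edge_fraction)  # compressed interiors

print(ssaa_cost, msaa_cost)  # bytes written per pixel: 64 vs ~18.4
```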
Storing distance in a 32-bit rendertarget has far more precision than Z in the depth buffer, so trying to skip this in DX10.1 comes at an IQ cost.
32-bit Z to the rescue.
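A rough precision sketch (formats assumed; this also leans on the reversed-Z convention, where far objects map near 0, which the thread doesn't discuss): a 24-bit fixed-point depth buffer has a uniform quantization step of 2^-24, while 32-bit float Z has far finer steps near zero.

```python
import struct

def quantize_24bit_fixed(d):
    """Round-trip a [0,1] depth through 24-bit fixed point."""
    return round(d * (2**24 - 1)) / (2**24 - 1)

def quantize_float32(d):
    """Round-trip a Python double through IEEE 754 single precision."""
    return struct.unpack('f', struct.pack('f', d))[0]

d = 1e-5                                  # a far object under reversed-Z
err_fixed = abs(d - quantize_24bit_fixed(d))
err_float = abs(d - quantize_float32(d))
print(err_fixed, err_float)               # float error is orders of magnitude smaller
```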
Who was the one that said "bandwidth hit during G-buffer creation isn't crippling like it is with supersampling"? How do you save on BW without color compression?
The saving is during creation, not on read-back.
Know any deferred renderer that doesn't store the normal in the G-buffer? MSAA or SSAA, it's the same space. Only with the latter, though, do you get different values for each subsample. Not only does the normal change far more than Z within a pixel, but a small change in the normal also affects the lighting result far more.
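Toy numbers for that sensitivity claim (the angles, depths and lighting model here are all assumed): across a hard edge within one pixel, the normal can swing tens of degrees while Z barely moves, and N·L responds to the normal far more strongly than distance falloff responds to Z.

```python
import math

L = (0.0, 0.0, 1.0)                       # light direction toward camera

def ndotl(n):
    return max(0.0, sum(a * b for a, b in zip(n, L)))

n_a = (0.0, 0.0, 1.0)                     # facet facing the light
n_b = (math.sin(math.radians(60)), 0.0, math.cos(math.radians(60)))
diffuse_change = abs(ndotl(n_a) - ndotl(n_b))        # 1.0 -> 0.5: a 50% swing

z_a, z_b = 10.00, 10.01                   # Z across the same edge: ~0.1% apart
falloff_change = abs(1/z_a**2 - 1/z_b**2) * z_a**2   # relative falloff change

print(diffuse_change, falloff_change)     # the normal term dominates by far
```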
Two overlapping triangles 10m apart have quite different Z values.
You don't render at 45 degrees. That would be moronic.
If you render at significantly less of an angle you get the cost of 4xSS at the IQ of ~2xSS. Great.
Jawed