I kinda doubt it; the perspective errors caused by doing linear interpolation for the pixels in a quad, rather than just doing the perspective divide correctly per pixel, hardly seem worth the savings.
You think so? We're talking about half a pixel of linear interpolation. If you choose the right slope, the errors should be virtually undetectable. Back in the day, software renderers linearly interpolated over far larger areas to minimize the divisions.
The weights for the linear interpolation are, of course, only calculated once per pixel.
I'm talking about perspective-correct weights. Imagine you have an attribute 'a' which has a value of 1 at vtx1, 0 at vtx2, and 0 at vtx3. Also make an attribute 'b' which has a value of 0 at vtx1, 1 at vtx2, and 0 at vtx3. Now calculate 'a/w' and 'b/w' at each vertex, interpolate them linearly in screen space, and divide by the interpolated 1/w. For each pixel you now have 'a' and 'b', and can calculate '1-a-b'. These are the perspective-correct weights. Also interesting is that 'a' and 'b' (and 'c') may be needed anyway for rasterization (see section 5.1 here).
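To make that concrete, here's a minimal sketch of how I read it, in C. It assumes you already have the screen-space (linear) barycentric weights l1, l2, l3 for the pixel and the clip-space w of each vertex; all names are invented for illustration.

```c
typedef struct { float a, b, c; } PerspWeights;

/* Compute perspective-correct weights for one pixel, given the linear
   (screen-space) barycentrics l1,l2,l3 and the per-vertex clip-space w. */
PerspWeights perspective_weights(float l1, float l2, float l3,
                                 float w1, float w2, float w3)
{
    /* With a = (1,0,0) and b = (0,1,0) at the vertices, linearly
       interpolating a/w, b/w and 1/w reduces to: */
    float a_over_w   = l1 / w1;
    float b_over_w   = l2 / w2;
    float one_over_w = l1 / w1 + l2 / w2 + l3 / w3;

    /* One divide per pixel recovers the perspective-correct weights. */
    float w = 1.0f / one_over_w;
    PerspWeights p;
    p.a = a_over_w * w;
    p.b = b_over_w * w;
    p.c = 1.0f - p.a - p.b;   /* the implicit third weight */
    return p;
}
```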
Now for each attribute it's simply a*Attr1 + b*Attr2 + (1-a-b)*Attr3 per pixel. That makes more sense to me than making a and b regular linear weights and calculating {a*(Attr1*(1/w1)) + b*(Attr2*(1/w2)) + (1-a-b)*(Attr3*(1/w3))}*(1/(1/w)), where 1/w is the linearly interpolated per-vertex 1/w, even considering that each Attr_n*(1/w_n) can be calculated once per vertex per attribute.
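A sketch of the comparison, again in C and again with invented names (this is just my reading of the trade-off, not a claim about what any particular hardware does):

```c
/* attr1..attr3 are one attribute's values at the three vertices. */

/* With perspective-correct weights a and b (from the sketch above):
   one multiply-add chain per attribute, no per-attribute 1/w terms. */
float interp_persp_weights(float a, float b,
                           float attr1, float attr2, float attr3)
{
    return a * attr1 + b * attr2 + (1.0f - a - b) * attr3;
}

/* With regular linear weights la, lb: interpolate Attr/w and 1/w
   linearly, then divide. Same result, but the 1/w bookkeeping shows up
   in every attribute's sum (even if Attr/w is precomputed per vertex). */
float interp_linear_weights(float la, float lb,
                            float w1, float w2, float w3,
                            float attr1, float attr2, float attr3)
{
    float lc = 1.0f - la - lb;
    float attr_over_w = la * (attr1 / w1) + lb * (attr2 / w2) + lc * (attr3 / w3);
    float one_over_w  = la / w1 + lb / w2 + lc / w3;
    return attr_over_w / one_over_w;
}
```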
Now, as I mentioned before, DX11 means the weights aren't quite the same for each pixel, but it should be easy to handle if the quad-centre method is used.