I doubt it, if you're talking about the general case of resolving an MSAA'd render target.
In AC, which appears to use deferred rendering, there seem to be two versions of the D3D10.x code:
- 10.0 - creates a set of render targets (G-buffer) which explicitly includes Z data (relatively slow because of the lack of Z compression). In the tone-mapping+AA-resolve pass the Z data is lower quality per sample (within each pixel, the Z gradient across the samples belonging to each triangle isn't available)
- 10.1 - creates a set of render targets, but there is no need to explicitly include Z in these because the multisampled depth buffer itself can be read directly (saving a pile of bandwidth). The tone-mapping+AA-resolve pass has full quality Z data, as though a conventional forward renderer had been used rather than a deferred renderer. The resource-creation difference is sketched below.
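To make that concrete, here's a minimal D3D10-style sketch of the two resource-creation paths. This is illustrative, not AC's actual code: the function names, the R32F format for the "Z as colour" target and the 4x sample count are my assumptions; the API calls themselves are standard D3D10/10.1.

[code]
#include <d3d10_1.h>

static const UINT SAMPLES = 4; // assumed MSAA level

// D3D10.0 path: a multisampled depth buffer can't be bound as a shader
// resource, so Z is replicated into an extra MSAA colour target for the
// resolve shader to read.
HRESULT CreateGBufferDepth10_0(ID3D10Device* dev, UINT w, UINT h,
                               ID3D10Texture2D** zAsColour,
                               ID3D10Texture2D** depthStencil)
{
    D3D10_TEXTURE2D_DESC d = {0};
    d.Width = w;  d.Height = h;
    d.MipLevels = 1;  d.ArraySize = 1;
    d.SampleDesc.Count = SAMPLES;
    d.Usage = D3D10_USAGE_DEFAULT;

    // Extra "Z as colour" render target (assumed R32F here).
    d.Format = DXGI_FORMAT_R32_FLOAT;
    d.BindFlags = D3D10_BIND_RENDER_TARGET | D3D10_BIND_SHADER_RESOURCE;
    HRESULT hr = dev->CreateTexture2D(&d, NULL, zAsColour);
    if (FAILED(hr)) return hr;

    // Ordinary depth-stencil buffer, usable only for depth testing.
    d.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
    d.BindFlags = D3D10_BIND_DEPTH_STENCIL;
    return dev->CreateTexture2D(&d, NULL, depthStencil);
}

// D3D10.1 path: one typeless depth buffer serves both as depth-stencil and
// as a per-sample shader resource, so no extra colour target is needed.
HRESULT CreateGBufferDepth10_1(ID3D10Device1* dev, UINT w, UINT h,
                               ID3D10Texture2D** depthStencil,
                               ID3D10ShaderResourceView** depthSRV)
{
    D3D10_TEXTURE2D_DESC d = {0};
    d.Width = w;  d.Height = h;
    d.MipLevels = 1;  d.ArraySize = 1;
    d.SampleDesc.Count = SAMPLES;
    d.Usage = D3D10_USAGE_DEFAULT;
    d.Format = DXGI_FORMAT_R24G8_TYPELESS; // typeless: viewable as DSV and SRV
    d.BindFlags = D3D10_BIND_DEPTH_STENCIL | D3D10_BIND_SHADER_RESOURCE;
    HRESULT hr = dev->CreateTexture2D(&d, NULL, depthStencil);
    if (FAILED(hr)) return hr;

    D3D10_SHADER_RESOURCE_VIEW_DESC s;
    ZeroMemory(&s, sizeof(s));
    s.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS; // read the depth bits only
    s.ViewDimension = D3D10_SRV_DIMENSION_TEXTURE2DMS;
    return dev->CreateShaderResourceView(*depthStencil, &s, depthSRV);
}
[/code]

On 10.1 hardware the resolve shader can then Load() that depth SRV per sample, whereas the 10.0 path pays for an extra full-rate MSAA surface the whole way through.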
Note that both versions of the code use multiple colour samples per pixel. The difference centres on the quality of the Z data recorded per pixel and the bandwidth overhead incurred in D3D10 because Z compression isn't as good. Z is actually written twice per pixel during G-buffer creation: once so that the GPU can determine depth for the visibility of each new pixel (which uses Z compression) and again for the deferred rendering algorithm to consume (which uses MSAA's colour compression for the Z data, as the G-buffer pretends that Z is a colour).
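That double write falls straight out of how the G-buffer pass is bound in the 10.0 case. A sketch, reusing the hypothetical names from above (the albedo/normal targets are placeholders):

[code]
// D3D10.0 G-buffer pass: the depth test writes 'dsv' (Z-compressed) while
// the pixel shader's extra output writes Z again into 'rtvZAsColour'
// (colour-compressed). All views are assumed to exist already.
void BindGBuffer10_0(ID3D10Device* dev,
                     ID3D10RenderTargetView* rtvAlbedo,
                     ID3D10RenderTargetView* rtvNormal,
                     ID3D10RenderTargetView* rtvZAsColour,
                     ID3D10DepthStencilView* dsv)
{
    ID3D10RenderTargetView* mrt[3] = { rtvAlbedo, rtvNormal, rtvZAsColour };
    dev->OMSetRenderTargets(3, mrt, dsv);
    // ... draw the scene; in the 10.1 path the third MRT slot (and the
    // pixel-shader depth output feeding it) simply disappears.
}
[/code]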
So the problem for R600 is that the application is explicitly coded to use a different kind of G-buffer (with the extra Z data): for the driver to "force" R600 to work like RV670 it would have to intervene both in the creation of the G-buffer and in the tonemap+resolve shader.
For what it's worth, I suspect R600 could be made to do this. But the level of driver interference is much higher than it would be in the HDR+AA (R5xx) or UE3+AA scenarios. Also, ahem, that doesn't sell HD3xxx...
See the thread I linked for the slides and the discussion of the performance and quality issues that separate D3D10 and D3D10.1 when doing MSAA with deferred rendering.
Jawed