Crackdown: shader simulates raytracing

Xenos doesn't have the current-gen problem (actually "last-gen" now, since G80 has the same capability as Xenos, being a D3D10 GPU): it can resolve individual MSAA samples, whereas last-gen hardware can't.

I guess that the "outlines on everything" effect is achieved by accessing individual MSAA samples at poly edges and determining if there's a surface discontinuity there (according to Z).
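
Just to make that concrete, here's a rough CPU-side sketch of the idea in C++ - the 4-sample count, the depth threshold and all the names are my own assumptions for illustration, not anything from the actual game:

Code:
#include <array>
#include <cmath>
#include <cstdio>

// Flag a pixel as an "outline" candidate when its 4 MSAA depth samples
// disagree by more than a threshold, i.e. there's a surface discontinuity.
bool hasDepthDiscontinuity(const std::array<float, 4>& z, float threshold = 0.01f)
{
    float minZ = z[0], maxZ = z[0];
    for (float s : z) {
        minZ = std::fmin(minZ, s);
        maxZ = std::fmax(maxZ, s);
    }
    return (maxZ - minZ) > threshold;
}

int main()
{
    std::array<float, 4> interior   = {0.500f, 0.501f, 0.500f, 0.502f}; // one surface
    std::array<float, 4> silhouette = {0.500f, 0.900f, 0.501f, 0.898f}; // two surfaces
    std::printf("interior: %d  silhouette: %d\n",
                hasDepthDiscontinuity(interior), hasDepthDiscontinuity(silhouette));
}

Pixels flagged this way would then get the black outline drawn over them.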

Jawed

But don't you think accessing the sub-samples to enable MSAA with deferred shading is quite costly? I guess I was confused by the terms "deferred lighting" and "deferred shading". Maybe they were just referring to something else.

In the case of deferred shading, I assume the basic scheme for using MSAA is something like this:
1. Generate the geometry buffer with MSAA on and resolve it without downsampling, so the sub-samples remain accessible. It needs more than one tile to finish the 4x-sized g-buffer (let's assume 4xAA).
2. Run the lighting shader on the g-buffer. In the shader, you have to read all 4 sub-samples and run the full lighting calculation on each of them (because you have no information about whether they belong to the same surface), and then average them to get one final pixel. That is, to some degree, turning multi-sampling into super-sampling (a rough sketch of this per-sample loop follows below).
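
Roughly, step 2 would look something like this - a made-up C++ sketch with a toy N.L directional light, not the game's shader; it just shows every sub-sample being lit and then averaged:

Code:
#include <array>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// One g-buffer entry per MSAA sub-sample (layout assumed for illustration).
struct GBufferSample { Vec3 albedo; Vec3 normal; };

// Light all 4 sub-samples with a toy directional light, then box-filter them
// into the final pixel - multi-sampling effectively becomes super-sampling.
Vec3 shadePixel(const std::array<GBufferSample, 4>& samples, Vec3 lightDir)
{
    Vec3 sum = {0.0f, 0.0f, 0.0f};
    for (const GBufferSample& s : samples) {
        float nDotL = std::fmax(0.0f, dot(s.normal, lightDir));
        sum.x += s.albedo.x * nDotL;
        sum.y += s.albedo.y * nDotL;
        sum.z += s.albedo.z * nDotL;
    }
    return {sum.x / 4.0f, sum.y / 4.0f, sum.z / 4.0f};
}

int main()
{
    std::array<GBufferSample, 4> px = {{
        {{0.8f, 0.2f, 0.2f}, {0.0f, 0.0f, 1.0f}},
        {{0.8f, 0.2f, 0.2f}, {0.0f, 0.0f, 1.0f}},
        {{0.1f, 0.1f, 0.1f}, {0.0f, 1.0f, 0.0f}},  // a different surface in the same pixel
        {{0.1f, 0.1f, 0.1f}, {0.0f, 1.0f, 0.0f}},
    }};
    Vec3 c = shadePixel(px, {0.0f, 0.0f, 1.0f});
    std::printf("resolved colour: %.3f %.3f %.3f\n", c.x, c.y, c.z);
}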
 
But don't you think accessing the sub-samples to enable MSAA with deferred shading is quite costly?
When you convert the g-buffer (which on XB360 has to be constructed as distinct tiles) into something that's readable for the lighting pass, you end up with something that's ~30MB, yes: 1280x720 x 8 bytes (colour+Z) x 4 (samples). And there may be a second g-buffer for additional attributes, e.g. another 4 bytes per sample (i.e. another ~50%). Z will be shared by both g-buffers, so they'd need to be at the same resolution to be of any use.
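
As a sanity check on those numbers - assuming 4 bytes of colour plus 4 bytes of Z per sub-sample at 4xMSAA; the real formats they used aren't public:

Code:
#include <cstdio>

int main()
{
    const double width = 1280, height = 720;
    const double bytesPerSample = 4 /*colour*/ + 4 /*Z*/;
    const double samples = 4;          // 4xMSAA
    const double MB = 1024.0 * 1024.0;

    double gbuffer = width * height * bytesPerSample * samples / MB;       // ~28 MB, i.e. "~30MB"
    double second  = width * height * 4 /*extra attributes*/ * samples / MB; // ~14 MB, i.e. another ~50%

    std::printf("main g-buffer:   %.1f MB\n", gbuffer);
    std::printf("second g-buffer: %.1f MB\n", second);
}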

Hopefully we'll find a more detailed description of their technique at some developer conference...

I guess I was confused by the terms "deferred lighting" and "deferred shading". Maybe they were just referring to something else.

In the case of deferred shading, I assume the basic scheme for using MSAA is something like this:
1. Generate the geometry buffer with MSAA on and resolve it without downsampling, so the sub-samples remain accessible. It needs more than one tile to finish the 4x-sized g-buffer (let's assume 4xAA).
2. Run the lighting shader on the g-buffer. In the shader, you have to read all 4 sub-samples and run the full lighting calculation on each of them (because you have no information about whether they belong to the same surface), and then average them to get one final pixel. That is, to some degree, turning multi-sampling into super-sampling.
They might pre-process the g-buffer to identify edges (caused by Z-discontinuities) which need the black outline.

But separately, you're right: in the final lighting pass they need to down-res from the MSAA'd g-buffer to the final screen resolution. That should be a fairly simple "texture filtering" problem for destination pixels that have no edge within them (whether a surface edge or a black-outlined discontinuity edge), but all edge pixels need to be special-cased, "super-sampled" as you say.
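
A sketch of what that special-casing might look like - again made-up C++, assuming a pre-pass has already produced a per-pixel edge mask, with a trivial placeholder lighting term:

Code:
#include <array>
#include <cmath>
#include <cstdio>

// Toy per-sample g-buffer entry and lighting term (placeholders, not the game's data).
struct Sample { float albedo; float nDotL; };

static float shade(const Sample& s) { return s.albedo * std::fmax(0.0f, s.nDotL); }

// Interior pixels get one lighting evaluation (simple filtering is enough);
// edge pixels evaluate lighting per sub-sample and average, so only the
// edges pay the super-sampling cost.
float resolvePixel(const std::array<Sample, 4>& samples, bool isEdge)
{
    if (!isEdge)
        return shade(samples[0]);

    float sum = 0.0f;
    for (const Sample& s : samples)
        sum += shade(s);
    return sum / 4.0f;
}

int main()
{
    std::array<Sample, 4> pixel = {{{0.8f, 1.0f}, {0.8f, 1.0f}, {0.1f, 0.2f}, {0.1f, 0.2f}}};
    std::printf("interior: %.3f  edge: %.3f\n",
                resolvePixel(pixel, false), resolvePixel(pixel, true));
}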

:D I think we're pretty much in agreement; it's just that you weren't aware that XB360 supports MSAA render-to-texture without the samples being blended down. I'm not a developer, so I'm not really best qualified to talk about this stuff, though.

Admittedly there's some non-overlap between "deferred shading" and "deferred lighting", so I might be reading too much into all this.

Jawed
 