Quoting the earlier post: "When it says multi-pass deferred shading I take that to mean a lot more than a Z pre-pass (sorry, I don't know what a G-buffer is): interpolating per-pixel lighting parameters (mainly normals) and then combining them in a later shader. It seems that if the render target is changed, that would defeat the SGX's deferred rendering and increase outgoing bandwidth a lot, all for no benefit."

The hardware is designed to handle changes in render target efficiently, so I don't see that as an issue, although as you say it's not really a benefit for us (at least not how it's expressed in the standard API). G-Buffer is one of the alternative terms used to describe the buffers containing the per-pixel attributes that you build up prior to applying your lighting passes.
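For anyone following along, here's a minimal sketch of the multi-pass approach being discussed, in plain OpenGL ES 2.0: one pass writes per-pixel attributes (normals) into an offscreen G-buffer, then the render target is switched back to the window surface for a lighting pass. This is only an illustration; draw_scene_geometry() and draw_fullscreen_quad() are hypothetical placeholders for the app's own drawing code, not part of any real API.

```c
#include <GLES2/gl2.h>

/* Hypothetical app-side helpers: one draws the scene with a shader that
 * writes packed normals, the other draws a full-screen quad with the
 * lighting shader. */
extern void draw_scene_geometry(void);
extern void draw_fullscreen_quad(void);

static GLuint gbuffer_fbo, gbuffer_tex, gbuffer_depth;

/* Build the offscreen "G-buffer": a colour texture for per-pixel normals
 * plus a depth renderbuffer for the geometry pass. */
void create_gbuffer(GLsizei w, GLsizei h)
{
    glGenTextures(1, &gbuffer_tex);
    glBindTexture(GL_TEXTURE_2D, gbuffer_tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenRenderbuffers(1, &gbuffer_depth);
    glBindRenderbuffer(GL_RENDERBUFFER, gbuffer_depth);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, w, h);

    glGenFramebuffers(1, &gbuffer_fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, gbuffer_fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, gbuffer_tex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, gbuffer_depth);
}

void render_frame(void)
{
    /* Pass 1: write per-pixel attributes into the G-buffer.  Binding the
     * FBO here is the render-target change discussed above. */
    glBindFramebuffer(GL_FRAMEBUFFER, gbuffer_fbo);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    draw_scene_geometry();

    /* Pass 2: switch back to the window surface, sample the stored
     * normals and apply lighting in a later shader. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBindTexture(GL_TEXTURE_2D, gbuffer_tex);
    draw_fullscreen_quad();
}
```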
Quoting the earlier post: "The quality advantage isn't that big on SGX as we always render at 32 bit (or higher) irrespective of the external target bit depth and just do a single dither when we write out at the end of tile."

Yeah, I didn't think there would be, and obviously it'd increase image quality a lot. All I meant was that SGX's depth buffer being 32-bit float internally gives it an advantage over a 24-bit depth buffer.
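The internal tile precision obviously isn't something an application can see; the only thing visible through the API is the format of the external target. A small sketch of checking that with core ES 2.0 queries (no extensions involved); report_framebuffer_precision() is just an illustrative name:

```c
#include <GLES2/gl2.h>
#include <stdio.h>

/* Print the externally visible framebuffer precision; the internal tile
 * precision discussed above is not exposed through the API. */
void report_framebuffer_precision(void)
{
    GLint r, g, b, a, depth, stencil;
    glGetIntegerv(GL_RED_BITS,     &r);
    glGetIntegerv(GL_GREEN_BITS,   &g);
    glGetIntegerv(GL_BLUE_BITS,    &b);
    glGetIntegerv(GL_ALPHA_BITS,   &a);
    glGetIntegerv(GL_DEPTH_BITS,   &depth);
    glGetIntegerv(GL_STENCIL_BITS, &stencil);
    printf("colour %d/%d/%d/%d, depth %d, stencil %d\n",
           r, g, b, a, depth, stencil);

    /* Dithering on write-out is controllable (and on by default) in ES 2.0. */
    glEnable(GL_DITHER);
}
```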
I'd take this with a pinch of salt, but if this paste is correct:
http://pastie.org/1254872
Then it looks like Tegra 2 may in fact not support 24-bit or greater depth buffers. nVidia does have a non-linear depth extension to try to counter this, but it's still 16-bit.
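If that paste is right, an application asking for a deep depth buffer simply won't get one on such a part. A hedged sketch of how the request and fallback might look at the EGL level, assuming an already-initialised EGLDisplay called dpy (an illustrative name):

```c
#include <EGL/egl.h>

/* Sketch: ask EGL for a config with at least a 24-bit depth buffer and fall
 * back to 16 bits if the part can't provide one.  The non-linear encoding
 * mentioned above would be requested separately through nVidia's
 * EGL_NV_depth_nonlinear extension (assumed name), if it is advertised. */
EGLConfig choose_config_with_depth(EGLDisplay dpy)
{
    EGLint attribs[] = {
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_DEPTH_SIZE,      24,       /* first try for >= 24 bits of Z */
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint count = 0;

    if (eglChooseConfig(dpy, attribs, &cfg, 1, &count) && count > 0)
        return cfg;

    attribs[3] = 16;                   /* settle for a 16-bit depth buffer */
    if (eglChooseConfig(dpy, attribs, &cfg, 1, &count) && count > 0)
        return cfg;

    return (EGLConfig)0;               /* no usable config at all */
}
```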
While we're on the topic of image quality capabilities, Tegra 2 does have anisotropic filtering, which SGX does not.
True, although I'd argue that basic rendering quality (24-bit FB/Z) is more important than aniso. Aniso also isn't in the ES2.0 API for other reasons, afaik...
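For completeness, this is roughly how aniso is exposed to an ES 2.0 app today: not through the core API, but through the GL_EXT_texture_filter_anisotropic extension, which a driver may or may not advertise. A sketch (the token defines are the extension's values, included in case gl2ext.h isn't available):

```c
#include <GLES2/gl2.h>
#include <string.h>

/* Tokens from GL_EXT_texture_filter_anisotropic; normally in <GLES2/gl2ext.h>. */
#ifndef GL_TEXTURE_MAX_ANISOTROPY_EXT
#define GL_TEXTURE_MAX_ANISOTROPY_EXT     0x84FE
#define GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT 0x84FF
#endif

/* Enable the highest supported anisotropy on the currently bound texture,
 * if the driver exposes the extension at all (core ES 2.0 has no aniso). */
void enable_max_aniso(void)
{
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    if (!exts || !strstr(exts, "GL_EXT_texture_filter_anisotropic"))
        return;                        /* part/driver without aniso support */

    GLfloat max_aniso = 1.0f;
    glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &max_aniso);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, max_aniso);
}
```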