It's not black and white, though. The Xenon GPU modifies the command buffer during the first tile pass with screen-space bounding areas, so you don't need software tiling.
I.e., a valid Xenon GPU command buffer is:
Set Tile 0
Render *GPU calculates tile coverage*
Resolve
Set Tile 1
Render
Resolve
No CPU interaction is required for it to work.
Anything that is not in Tile 1's screen area will be skipped at the command-buffer level (i.e. not submitted from the front end to the main GPU units).
So it's somewhere in between... arguably the command buffer is a captured scene, into which the tile unit (activated when rendering Tile 0) writes back tile coverage data for subsequent tiles.
So is it culling, or is it really tiling? I mean, where you have "GPU calculates tile coverage", is it actually calculating which tiles each primitive spans, or just whether it's inside Tile 0 or not? Either way, it's still at least screen-space culling done at the geometry level (don't know about z and w) and before triangle setup/rasterization. But I don't know if this is something modern IMRs do too; I was under the impression that guard-band clipping happened during scanline setup, i.e. post triangle setup. Is the tile bounding in Xenon clipping as well, or just culling?
I would argue that just having an explicitly addressed intermediate render output buffer that cannot act directly as a framebuffer means you're "tiling", for a reasonable definition of the word. You output to tile memory, which is fast, then can burst the data back to the target buffers in a single resolve. The only way this data flow would occur in an IMR is if the target-buffer writes were being cached, but without explicit resolves there'd be no way to avoid writing back temporary data like depth/stencil. I assume Xenon does have the ability to not resolve these buffers, which would be another tiling advantage over IMRs.