It's not an optimization, it's how they all work. I suggest you look at some Linux drivers and the homebrew Dreamcast docs to see how a TBR and an IMR (wrong name, of course) look exactly the same from the application's point of view. But how the actual triangles get rendered is irrelevant to what you think you're arguing about.

Chalnoth said: That's still a driver-level optimization technique. It has nothing to do with the scene buffer, and has completely different performance characteristics due to the fact that it's stored in system memory, and has no overflow problems (overflow in this case will typically just mean that something's processing too quickly, so a stall there won't affect much of anything).
The Tile Acceleration Buffer (IIRC that's the name, it's been a while) is what you're talking about with regard to deferral. This is the "captured scene" in the classic circa-1998 terms. It's the buffer that gets fiddled with by the chip (exactly how is NDA'ed) so that triangle A goes to tile 1, triangle B goes to tiles 1 and 2, and triangle C goes to tile 2.
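To make that concrete, here's roughly what the binning step looks like if you wrote it in software. This is just a sketch: the names (bin_triangle, tile_bin), the 640x480/32x32 sizes and the bounding-box test are all made up by me for illustration; the real chips do it in their own NDA'ed way.

/* Minimal sketch of triangle-to-tile binning, purely for illustration. */
#include <stdlib.h>

#define TILE_W 32
#define TILE_H 32
#define TILES_X (640 / TILE_W)
#define TILES_Y (480 / TILE_H)

typedef struct { float x[3], y[3]; } triangle;

typedef struct {
    int *tri_indices;            /* indices into the scene's triangle list */
    int  count, capacity;
} tile_bin;

static tile_bin bins[TILES_Y][TILES_X];

/* Append one triangle index to a tile's bin, growing it as needed. */
static void bin_push(tile_bin *b, int tri_index)
{
    if (b->count == b->capacity) {
        b->capacity = b->capacity ? b->capacity * 2 : 16;
        b->tri_indices = realloc(b->tri_indices, b->capacity * sizeof(int));
    }
    b->tri_indices[b->count++] = tri_index;
}

/* Conservative binning: add the triangle to every tile its screen-space
 * bounding box touches (a real binner would test the actual edges). */
void bin_triangle(const triangle *t, int tri_index)
{
    float minx = t->x[0], maxx = t->x[0], miny = t->y[0], maxy = t->y[0];
    for (int i = 1; i < 3; i++) {
        if (t->x[i] < minx) minx = t->x[i];
        if (t->x[i] > maxx) maxx = t->x[i];
        if (t->y[i] < miny) miny = t->y[i];
        if (t->y[i] > maxy) maxy = t->y[i];
    }
    int tx0 = (int)(minx / TILE_W), tx1 = (int)(maxx / TILE_W);
    int ty0 = (int)(miny / TILE_H), ty1 = (int)(maxy / TILE_H);
    for (int ty = ty0; ty <= ty1; ty++)
        for (int tx = tx0; tx <= tx1; tx++)
            if (tx >= 0 && tx < TILES_X && ty >= 0 && ty < TILES_Y)
                bin_push(&bins[ty][tx], tri_index);
}

After binning, each tile holds exactly the triangles that can touch it, which is what lets the chip render one tile completely before moving to the next.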
Why do you think a TBR couldn't use system RAM (for the TAB) just like an IMR does?
Some TBRs do indeed use VRAM, and of course that places a limit on how many polygons fit per tile, but that's not a hardware limitation, just an optimisation; there is no reason why they couldn't do it all over system RAM. Especially if you don't have the AGP write-back problem...
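To show what I mean about the per-tile limit being a budget rather than a hard wall, here's a rough sketch (all the names and sizes are mine, not any real chip's) of a per-tile bin that keeps a fixed number of slots in VRAM and simply spills into a growable system-RAM list when it fills:

/* Hypothetical spill-over bin: fixed VRAM budget, overflow in system RAM. */
#include <stdlib.h>

#define VRAM_SLOTS_PER_TILE 256     /* assumed on-card budget per tile */

typedef struct {
    int  vram[VRAM_SLOTS_PER_TILE]; /* fast, fixed-size on-card storage */
    int  vram_count;
    int *spill;                     /* overflow continues in system RAM */
    int  spill_count, spill_cap;
} spilling_bin;

void spilling_bin_push(spilling_bin *b, int tri_index)
{
    if (b->vram_count < VRAM_SLOTS_PER_TILE) {
        b->vram[b->vram_count++] = tri_index;   /* the common, fast case */
        return;
    }
    if (b->spill_count == b->spill_cap) {       /* grow the system-RAM side */
        b->spill_cap = b->spill_cap ? b->spill_cap * 2 : 64;
        b->spill = realloc(b->spill, b->spill_cap * sizeof(int));
    }
    b->spill[b->spill_count++] = tri_index;
}

Whether you accept the overflow cost or just size the whole thing in system RAM from the start is a design choice, not something the tiling approach forces on you.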
As I pointed out, I know of at least one 'tiled' renderer (I'm very careful not to call it a TBR...) that doesn't use a TAB.
You seem to be thinking that TBRs are restricted to doing things exactly the same way as the PowerVR Series 1; they have evolved a lot since then.
They're a lot more popular than just the usual PowerVR and Intel suspects would suggest; expect to see hybrids taking over the world soon...
Chalnoth said: Anyway, I'd be highly surprised if this buffer really was larger than a few dozen draw calls in size.
You're just showing your inexperience with how the hardware actually works here.
Think about why a buffer of only a dozen draw calls would reduce parallel processing between the CPU and GPU to a minimal level (think pipeline bubbles: for example, the CPU running physics while the GPU renders a character). See the toy model below.
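If you want to see the shape of the problem in numbers, here's a toy model I made up (the costs, the bounded queue, everything here is invented, not real hardware behaviour). The CPU produces draw calls into a buffer of fixed capacity and the GPU drains it more slowly; with a tiny buffer the CPU spends most of the frame stalled waiting for slots to free up, instead of being free to go off and do next-frame work in parallel.

/* Toy model of CPU/GPU overlap through a bounded command/scene buffer. */
#include <stdio.h>

#define DRAWS    500
#define CPU_COST 2   /* time the CPU spends building one draw call */
#define GPU_COST 5   /* time the GPU spends rendering one draw call */

static double frame_time(int capacity, double *cpu_stall)
{
    double produce_done[DRAWS], consume_done[DRAWS];
    double stall = 0.0;

    for (int i = 0; i < DRAWS; i++) {
        double prev_done = (i > 0) ? produce_done[i - 1] : 0.0;
        /* The CPU may only start draw i once a buffer slot is free,
         * i.e. once draw i-capacity has been consumed by the GPU. */
        double slot_free = (i >= capacity) ? consume_done[i - capacity] : 0.0;
        double start     = (prev_done > slot_free) ? prev_done : slot_free;
        stall           += start - prev_done;        /* CPU sits idle here */
        produce_done[i]  = start + CPU_COST;

        double gpu_prev  = (i > 0) ? consume_done[i - 1] : 0.0;
        double gpu_start = (gpu_prev > produce_done[i]) ? gpu_prev : produce_done[i];
        consume_done[i]  = gpu_start + GPU_COST;
    }
    *cpu_stall = stall;
    return consume_done[DRAWS - 1];
}

int main(void)
{
    int caps[] = { 12, 100, 1000 };
    for (int k = 0; k < 3; k++) {
        double stall, t = frame_time(caps[k], &stall);
        printf("capacity %4d: frame time %.0f, CPU stalled for %.0f\n",
               caps[k], t, stall);
    }
    return 0;
}

The frame still takes roughly as long (the GPU is the bottleneck either way), but with a dozen-entry buffer the CPU burns most of that time blocked, which is exactly the parallelism you lose.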