DemoCoder said:
Well, like I said in another thread, it has to be more sophisticated than tiling, it has to be a deferred renderer like DreamCast. Otherwise, you wouldn't be able to flush and re-read tiles fast enough, and eventually you'd be stalled flushing the tile to system RAM or reading it back.
All current GPUs are tile-based.
Or it can subdivide the screen into N viewports corresponding to N tiles and render one tile at a time.
There's no need to flush all the tiles; at most it might need to flush them once after rendering to apply some global (full-image) filter, such as tone mapping, but even in that scenario it would be pretty efficient.