Shader Units Versus Hardware Tiling/HSR on a TBDR

Lazy8s

Veteran
Would it be more effective on a TBDR like PowerVR to have the shader units designed to also sort the scene into tiles and determine all of the visible surfaces, replacing dedicated tiling and image synthesis coprocessors?
 
Tile-based sorting can only be done after the Vertex positions are computed, so after at least the position part of the Vertex Shader has been executed.
As for Surface Visibility tests, they basically happen at the Fragment Shader level. As long as depth isn't modified by the Fragment Shader I suppose you can hoist them earlier, but in any case I don't see how you could do that before having sorted everything into tiles...

Code:
Capture Scene & Sort into Tiles:
  Polygon draw command -- Vertex Shader (position only) -> Tile lists

Surface visibility:
  For each Tile
    For each Polygon in Tile
      Vertex Shading
      Massively parallel depth test (basically: visible surface determination)
    For each visible fragment
      Fragment Shading
    Copy Tile to Framebuffer
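The tile-sorting pass above can be sketched in a few lines. This is purely illustrative (the tile size, the `bin_triangle` helper, and the bounding-box binning strategy are my assumptions, not any real PowerVR interface): run only the position part of the vertex shader, then bin each triangle into every tile its screen-space bounding box touches.

```python
# Hypothetical sketch of the "sort into tiles" step. TILE and bin_triangle
# are illustrative names; real hardware bins with more precise tests than
# a bounding box, but the principle is the same.

TILE = 32  # tile size in pixels (assumed)

def bin_triangle(tri, screen_w, screen_h):
    """tri: three (x, y) screen-space positions after the vertex transform.
    Returns the list of (tile_x, tile_y) tiles the triangle's bbox overlaps."""
    xs = [p[0] for p in tri]
    ys = [p[1] for p in tri]
    # Clamp the bounding box to the screen, then convert to tile coordinates.
    x0 = max(int(min(xs)) // TILE, 0)
    x1 = min(int(max(xs)) // TILE, (screen_w - 1) // TILE)
    y0 = max(int(min(ys)) // TILE, 0)
    y1 = min(int(max(ys)) // TILE, (screen_h - 1) // TILE)
    return [(tx, ty) for ty in range(y0, y1 + 1) for tx in range(x0, x1 + 1)]

# A triangle spanning pixels (10,10)-(40,40) on a 640x480 screen lands in
# four tiles: (0,0), (1,0), (0,1), (1,1).
tiles = bin_triangle([(10, 10), (40, 10), (10, 40)], 640, 480)
```

Note that a bounding-box bin can over-include tiles a thin diagonal triangle never actually covers; that only costs redundant work in the per-tile pass, not correctness.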

That's the idea. It requires rather heavy Vertex Shading power, since you most likely transform the polygons more than once.
Of course you could instead run the whole Vertex Shader up front and store the results in memory; that wouldn't require many Vertex Shading units, but would need more memory instead. It's always the same tradeoff: memory or computation.

BTW, if you run the whole Vertex Shader at Tile Sorting time, then you only need to store the computed data required by the Fragment Shader.
(That is, only the Vertex Shader outputs that the Fragment Shader actually reads, which shouldn't take that much space; typically I'd say a couple of vec4s.)
[That's around 6MB for 200k vertices per frame, not so small ^^. But we have 256MB+ cards now...]
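The ~6 MB figure checks out if "a couple of vec4s" means two vec4 interpolants of four 4-byte floats per vertex, which is an assumption on my part:

```python
# Sanity check of the ~6 MB estimate: 200k vertices, each storing two vec4
# interpolants (4 floats x 4 bytes each = 32 bytes) of vertex shader output.
vertices = 200_000
bytes_per_vertex = 2 * 4 * 4  # two vec4s = 32 bytes
total_mb = vertices * bytes_per_vertex / (1024 * 1024)
print(round(total_mb, 1))  # prints 6.1
```

More interpolants (normals, several texture coordinate sets) scale the figure linearly, so a heavier shader could easily double or triple it.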

[Edit : We discussed that already in some other thread, try to use the search feature, you might get interesting infos.]
 
You could move the second vertex shade to after the visibility test, so that it's only performed on visible vertices...

John.
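The reordering John suggests amounts to a depth-only pass first, then shading only the surviving fragments. Here is a toy sketch of that per-tile hidden-surface removal, under a deliberate simplification I'm assuming (one flat depth per polygon instead of real per-fragment interpolated depth):

```python
# Toy per-tile HSR: a depth-only pass keeps, for each pixel, the nearest
# polygon; only those winners are fragment-shaded. Polygons are modeled as
# (poly_id, depth, pixel_list) tuples -- a simplification of rasterization.

def hsr_then_shade(polys, shade):
    nearest = {}  # pixel -> (depth, poly_id)
    for poly_id, depth, pixels in polys:
        for px in pixels:
            if px not in nearest or depth < nearest[px][0]:
                nearest[px] = (depth, poly_id)
    # Shade each pixel exactly once, only for its visible polygon.
    return {px: shade(poly_id) for px, (_, poly_id) in nearest.items()}

# Two overlapping spans: the nearer one (depth 0.2) wins the shared pixel,
# so the occluded fragment of "far" is never shaded.
polys = [("far", 0.8, [(0, 0), (1, 0)]), ("near", 0.2, [(1, 0), (2, 0)])]
out = hsr_then_shade(polys, lambda pid: pid.upper())
```

The key property is that `shade` runs once per visible pixel regardless of depth complexity, which is exactly the overdraw saving a TBDR's image synthesis stage buys.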
 