TBDR: some questions.

bonfo

Newcomer
Hi,
I'm trying to understand TBDR in depth. I'm reading this article: http://www.beyond3d.com/articles/tilebasedrendering/index1.php — it's quite old, but I think the PowerVR architecture hasn't changed much.

Now I have some doubts about the HSR algorithm. On the second page the article says that HSR is done with an internal Z-buffer on a per-tile basis, but on the third page it explains in detail how HSR is done with a ray-tracing approach.

My guess is that the actual HSR algorithm is the ray-tracing one, and that the internal Z-buffer is only used to store the result of that algorithm for the rendering stage.
Am I guessing right??

thanks for any hints ;)
 
I suppose TBDR could be called ray-tracing using a very loose definition of the word, but in reality it's still just rasterization in spatially confined areas. In ray-tracing you start with a ray and search for the triangle that first intersects it. In rasterization you start with a triangle and determine which rays intersect it. TBDR finds out which triangles are associated with a group of rays, then goes through those triangles one by one. It still uses a Z-buffer. (Note that I sometimes used ray instead of pixel in this paragraph, because they're equivalent in that each ray from the viewpoint maps to a pixel.)
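The loop-order difference described above can be sketched in a few lines of Python. This is purely conceptual (the `intersect` callback and all names are mine, not any vendor's code): both functions answer the same visibility question, but ray-tracing puts rays/pixels in the outer loop while rasterization puts triangles there and relies on a Z-buffer.

```python
# Conceptual sketch: the same visibility question, two loop orders.

def raytrace(pixels, triangles, intersect):
    # For each ray/pixel, search all triangles for the nearest hit.
    image = {}
    for p in pixels:
        hits = [(intersect(p, t), t) for t in triangles
                if intersect(p, t) is not None]
        if hits:
            # Triangle with the smallest depth along this ray wins.
            image[p] = min(hits, key=lambda h: h[0])[1]
    return image

def rasterize(pixels, triangles, intersect):
    # For each triangle, find which rays/pixels it covers; a Z-buffer
    # keeps the nearest fragment seen so far.
    image, zbuf = {}, {}
    for t in triangles:
        for p in pixels:
            z = intersect(p, t)
            if z is not None and z < zbuf.get(p, float("inf")):
                zbuf[p] = z
                image[p] = t
    return image
```

Both produce the same image; only the traversal order (and therefore the data structures needed) differs, which is the sense in which each primary ray maps to a pixel.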

I think there are three ways that you can do HSR on a TBDR, but I've never seen details about which is correct:

A) Sort the geometry in each tile front to back.
I think a lot of TBDR articles mistakenly assume that the "sorting" of geometry implies this, but what companies like PowerVR are trying to convey is binning the triangles into tiles, i.e. sorting in X and Y, not Z. This is an easy operation because once you transform a triangle, you know where it is on screen, and you know which tiles it's a part of. Front-to-back sorting, however, needs comparisons between triangles and either takes too long or needs incoherent data structures. I personally don't believe this option is ever used.
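The X/Y binning step can be sketched like this, assuming triangles already transformed to screen space and a fixed tile size (the tile size and all names are my own illustration, not PowerVR's):

```python
TILE = 32  # hypothetical tile size in pixels

def bin_triangles(triangles, screen_w, screen_h):
    """Assign each screen-space triangle to every tile its bounding box
    touches. This is the 'sorting in X and Y' — no depth comparison."""
    tiles_x = (screen_w + TILE - 1) // TILE
    tiles_y = (screen_h + TILE - 1) // TILE
    bins = {(tx, ty): [] for tx in range(tiles_x) for ty in range(tiles_y)}
    for tri in triangles:                       # tri: three (x, y) vertices
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        # Clamp the bounding box to the screen, then walk the covered tiles.
        # (A bounding box is conservative: it may touch tiles the triangle
        # itself misses, which costs a little extra binning, never artifacts.)
        x0 = max(0, int(min(xs)) // TILE)
        x1 = min(tiles_x - 1, int(max(xs)) // TILE)
        y0 = max(0, int(min(ys)) // TILE)
        y1 = min(tiles_y - 1, int(max(ys)) // TILE)
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                bins[(tx, ty)].append(tri)
    return bins
```

Note there is no triangle-vs-triangle comparison anywhere, which is exactly why this binning is cheap while a front-to-back sort is not.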

B) Do a Z-only pass for each tile before rendering it.
The on-chip Z-buffer makes it pretty cheap to rasterize Z-values really, really fast. Just blast through all the triangles in a tile first to populate the Z-buffer, and after that you will never run the pixel shader math and texture fetches for a pixel more than once unless there's transparency. The disadvantage, of course, is that you have to run through the geometry twice.
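Option B can be sketched as two passes over one tile's bin — a toy model with my own names, not PowerVR hardware. Pass 1 is depth-only (cheap), pass 2 shades only the fragment that actually won the depth test, so opaque pixels are shaded at most once:

```python
def render_tile(bin_tris, tile_pixels, depth_at, shade):
    """Toy Z-prepass for one tile: pass 1 fills the on-chip Z-buffer,
    pass 2 shades only the winning fragment per pixel (opaque geometry)."""
    INF = float("inf")
    zbuf = {p: INF for p in tile_pixels}
    # Pass 1: depth only -- no shading or texture fetches.
    for tri in bin_tris:
        for p in tile_pixels:
            z = depth_at(tri, p)
            if z is not None and z < zbuf[p]:
                zbuf[p] = z
    # Pass 2: shade each covered pixel at most once.
    color, shade_calls = {}, 0
    for tri in bin_tris:
        for p in tile_pixels:
            if depth_at(tri, p) == zbuf[p] != INF and p not in color:
                color[p] = shade(tri, p)
                shade_calls += 1
    return color, shade_calls
```

The "run through the geometry twice" cost is visible in the structure: both passes iterate over `bin_tris`, but only the second pays for shading.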

C) Don't do anything!
Strange as it may sound, this is still a form of HSR because early Z-reject is still there (basically what ATI and NVIDIA have also done since the R100 and GeForce 3), and in terms of framebuffer bandwidth you're only using as much as the final pixel requires. Without TBDR, you potentially have to read and write temporary data several times before getting your final pixel.
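Option C can be illustrated by counting shader invocations in a single pass with early Z-reject (a toy model, my own names): a fragment behind the current Z-buffer value is rejected before shading, so submission order determines how much overdraw you pay for — which is exactly the cost a Z-prepass would remove.

```python
def render_early_z(tris, pixels, depth_at, shade):
    """Single pass with early Z-reject: shade only fragments that pass
    the depth test at the moment they arrive."""
    INF = float("inf")
    zbuf = {p: INF for p in pixels}
    color, shade_calls = {}, 0
    for tri in tris:
        for p in pixels:
            z = depth_at(tri, p)
            if z is not None and z < zbuf[p]:  # early Z-reject otherwise
                zbuf[p] = z
                color[p] = shade(tri, p)
                shade_calls += 1
    return color, shade_calls
```

Submitting front to back shades each pixel once; back to front shades it once per depth layer — but either way, the tile only ever writes the final pixel to external memory.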


I think the HSR part of TBDR has been overemphasized. The heavy bandwidth saving is the key advantage of a TBDR. However, don't forget that we're only looking at the pixel cost here. The binning of triangles is not free in terms of bandwidth, space, or computation.
 
...
TBDR finds out which triangles are associated with a group of rays, then goes through those triangles one by one. It still uses a Z-buffer.
...

To do that, you only need a good algorithm for building bounding boxes to bin triangles into tiles, dividing the render surface, and then a simple Z-buffering algorithm to fill the Z-buffer. That sounds like your solution B.
But then I really don't see where the rays are used. :cry:

A) Sort the geometry in each tile front to back.
I also think nobody uses this technique ;)

B) Do a Z-only pass for each tile before rendering it.
I think this is the most likely, but as I already said: where are the rays?
In the article, ray casting is used to get information about the order of the geometry and to determine the pixel color. So rather than going through the triangles, maybe the Z-buffer is filled with information from the rays. But if that's the case, how do I manage alpha blending?
I suppose I would use the information from the rays to find the opaque pixel and then blend the others on top of it to get the final color. But then why do I need the Z-buffer at all, if I can render straight away?

C) Don't do anything!
Sorry... I didn't get what you meant. My bad English :oops: :rolleyes:

Thanks for the answers anyway ;)
 
While I'm not sure why a Z-buffer would be required in general, it is needed in order to be compliant with rendering techniques used in some games, like certain depth-of-field or other post-processing effects that sample the buffer.
 
For doing that you only need a good algorithm for building bounding boxes to bind triangles to tiles and divide the render surface and then a simple Z-Buffering algorithm to fill the Z-buffer. That seems your solution B.
But in this way I really don't see were the ray are used. :cry:
They're not used. I'm speaking conceptually. Primary rays in raytracing/raycasting are really isomorphic to screen pixels in rasterization. I spoke in that way to highlight the difference between rasterization and ray-tracing.

There are no real rays. I'm just trying to explain why the B3D article went down that path. Kristof is a pretty smart guy too, so I can't really explain it better than that (other than maybe he was a n00b back then ;) ).

While I'm not sure why a Z-buffer would be required in general, it is needed in order to be compliant with rendering techniques used in some games, like certain depth-of-field or other post-processing effects that sample the buffer.
Sorry, when I said "It still uses a Z-buffer" I meant in terms of the rendering process and wasn't referring to an external fullscreen Z-buffer. I know it's on-chip most of the time.
 
Maybe I made a mess of this. :???:
I'm only trying to understand how HSR is implemented in a TBDR or, better, in the PowerVR architecture. :)
From what I've read, it is implemented on a per-pixel basis, and according to the article (and not only that one), for each pixel the color is determined by finding the closest object, detecting all the intersections between the pixel and the objects in the scene. Is this the method used? I call this method "ray-casting" even if it doesn't really use rays.

Sorry if you already answered this, but if so, I wasn't able to get it. ;)
 