Question about Xenos and texture units

Mat3

Newcomer
The texture units in the 360 GPU aren't tied to a particular array, according to the Beyond3d article. It looks like the texture units can do their work and just push the results through to whichever shader array is available. Am I understanding this correctly?

My question is, why do all the PC GPUs (I'm thinking of Radeons, but I think it's the same for Nvidia) have texture units that are tied to a particular array? It seems like the 360 GPU's approach would be more efficient and provide better load balancing.

Also, in the picture in the linked article, the arrows indicate that the data flows entirely from the texture units to the shaders. Does data also travel back to the texture mapping units (that's what it seems like when I read other material, but obviously my understanding of the 3D pipeline is a bit fuzzy)?

Thanks
 
The texture units in the 360 GPU aren't tied to a particular array, according to the Beyond3d article. It looks like the texture units can do their work and just push the results through to whichever shader array is available. Am I understanding this correctly?
Yes.

My question is, why do all the PC GPUs (I'm thinking of Radeons, but I think it's the same for Nvidia) have texture units that are tied to a particular array? It seems like the 360 GPU's approach would be more efficient and provide better load balancing.
Perhaps it's a matter of scale.

In the end it seems that when ATI GPUs used the ring bus in an attempt to share TUs among all the SIMDs, there was a large die-space penalty, and bandwidth/latency were seemingly problematic too.

Also, in the picture in the linked article, the arrows indicate that the data flows entirely from the texture units to the shaders. Does data also travel back to the texture mapping units (that's what it seems like when I read other material, but obviously my understanding of the 3D pipeline is a bit fuzzy)?
Yeah, data defining the required work needs to be sent to the TMUs: effectively the texture coordinates and the level of detail. In dependent texturing (often a fetch without filtering) the ALUs calculate the texel addresses.
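Roughly speaking, each fetch request carries something like this (a minimal C sketch; the field names are made up, not the actual Xenos interface):

```c
/* Hypothetical sketch of the data a shader sends to a TMU per fetch.
 * Field names are illustrative, not the real Xenos interface. */
typedef struct {
    float u, v;        /* texture coordinates: from the interpolators, or
                          computed by the ALUs for a dependent fetch       */
    float lod;         /* level of detail: selects/blends the mip levels   */
    int   sampler_id;  /* which texture/sampler state to use               */
} TextureRequest;
```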

Jawed
 
Thanks. It's still a bit fuzzy to me. Just so I'm clear, are texture mapping units and texture address units different names for the same thing?

And do they happen after (or before) the pixel shaders?
 
Thanks. It's still a bit fuzzy to me. Just so I'm clear, are texture mapping units and texture address units different names for the same thing?
TMUs contain addressing units, which use the coordinates and level of detail to compute the addresses of the texels.
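As a rough sketch of what the addressing unit does for a single nearest-neighbour fetch (ignoring wrap/clamp modes, tiled texture layouts and filtering footprints; all names are made up):

```c
/* Rough sketch of a TMU addressing unit computing one texel address.
 * Assumes non-negative coordinates and a simple linear texture layout. */
unsigned texel_address(float u, float v, float lod,
                       unsigned mip_base[], unsigned width, unsigned height,
                       unsigned bytes_per_texel)
{
    unsigned mip = (unsigned)lod;                 /* pick a mip level from the LOD  */
    unsigned w   = width  >> mip;  if (w == 0) w = 1;
    unsigned h   = height >> mip;  if (h == 0) h = 1;

    unsigned x = (unsigned)(u * w) % w;           /* normalized coords -> texel x,y */
    unsigned y = (unsigned)(v * h) % h;

    /* mip_base[] holds the start address of each mip level in memory */
    return mip_base[mip] + (y * w + x) * bytes_per_texel;
}
```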

And do they happen after (or before) the pixel shaders?
The pixel shader contains instructions which specify the texturing operations amongst all the other math.

With simple textures (such as the clothing on a character) the texture mapping of the clothes can start as soon as the pixel shader starts processing a pixel that coincides with a clothing surface. In the simplest case that might be all the pixel shader actually does (it'll look really grotty without some kind of nice lighting calculation, though). The coordinates for the texture are provided by the vertex shader + interpolator, so the pixel shader doesn't need to do anything, just pass the coordinates on to the TMU.
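In other words, the whole "clothes" shader amounts to something like this (a minimal C sketch; tex2D() is just a stand-in for the hardware fetch, not a real API):

```c
/* Minimal sketch of the "clothes" case: the pixel shader just forwards the
 * interpolated coordinates to the TMU and outputs the texel it gets back. */
typedef struct { float u, v; }       TexCoord;
typedef struct { float r, g, b, a; } Color;

static Color tex2D(int sampler, TexCoord t)   /* done by the TMU in hardware */
{
    (void)sampler; (void)t;
    Color c = { 0.0f, 0.0f, 0.0f, 1.0f };     /* stubbed-out result          */
    return c;
}

Color cloth_pixel_shader(TexCoord uv)         /* uv comes from the vertex    */
{                                             /* shader + interpolator       */
    return tex2D(0 /* cloth texture */, uv);  /* the ALUs just pass it on    */
}
```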

For something more interesting, such as the face, the shader will specify a "face" texture + a "shiny" texture (specular). The face can be mapped automatically, just like the clothes, but then the specular needs a more complicated calculation, based upon the direction of the light, the direction of the camera and the geometry of the face. e.g. the nose is shiny at certain angles to the light, and that varies depending on where you (the camera) are.


So the pixel shader specifies the texturing of the face in two parts (see the sketch below):
  1. simple mapping of the face features
  2. a specular texture fetch, followed by math to work out all those angles and then adjust the pixel depending on the shine
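A rough C sketch of those two parts, with Blinn-Phong standing in for whatever specular model is actually used (all names and the shininess value are made up):

```c
/* Sketch of the two-part face shading: one plain texture fetch, plus a
 * specular term computed in the ALUs from the light/camera/surface angles.
 * The two texel values are the results of the two TMU fetches. */
#include <math.h>

typedef struct { float x, y, z; } Vec3;

static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static Vec3 normalize3(Vec3 v)
{
    float len = sqrtf(dot3(v, v));
    Vec3 r = { v.x / len, v.y / len, v.z / len };
    return r;
}

float shaded_channel(float face_texel, float shiny_texel,
                     Vec3 normal, Vec3 to_light, Vec3 to_camera)
{
    /* part 1: simple mapping, just the fetched face colour */
    float base = face_texel;

    /* part 2: work out all those angles (half vector between light and
       camera directions), then scale the fetched specular value by it */
    Vec3 h = normalize3((Vec3){ to_light.x + to_camera.x,
                                to_light.y + to_camera.y,
                                to_light.z + to_camera.z });
    float ndoth = dot3(normalize3(normal), h);
    if (ndoth < 0.0f) ndoth = 0.0f;
    float shine = powf(ndoth, 32.0f) * shiny_texel;   /* 32 = arbitrary shininess */

    return base + shine;
}
```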
Try this article:

http://www.beyond3d.com/content/articles/34/

Jawed
 
Thanks again. One more question. Would the shader SIMD units in the 360 and recent GPUs be able to handle the various texture operations, or would they have to be designed with this in mind?
I'm thinking of the trend toward fewer specific or fixed-function units: could they, say, have the bare minimum acceptable number of the various texture units, and any time those become a bottleneck, have idle shader units help out? Or would supporting this make the shader units and the scheduling hardware more bloated?
 
Thanks again. One more question. Would the shader SIMD units in the 360 and recent GPUs be able to handle the various texture operations, or would they have to be designed with this in mind?
I'm thinking of the trend toward fewer specific or fixed-function units: could they, say, have the bare minimum acceptable number of the various texture units, and any time those become a bottleneck, have idle shader units help out? Or would supporting this make the shader units and the scheduling hardware more bloated?
I think the choice in this scenario would be the programmer's, and it wouldn't make much sense to try to split the work between two different kinds of processor. Also, with the newest APIs texture filtering quality metrics have to be met, and it may not be possible to make the ALUs meet those metrics in a cost-effective way; that is, it could be very slow.
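To give a feel for the cost: even plain bilinear filtering done in the ALUs needs four raw texel reads plus a pile of lerps per sample (trilinear doubles that), which a TMU does in fixed function. A rough C sketch, with texel_at() as a hypothetical unfiltered fetch:

```c
/* Illustrative sketch of bilinear filtering done "by hand" in shader ALUs.
 * texel_at() is a hypothetical raw, unfiltered fetch; real filtering also
 * needs wrap/clamp handling and, for trilinear, a second mip level. */
typedef struct { float r, g, b, a; } Texel;

Texel texel_at(int x, int y);                  /* hypothetical unfiltered fetch */

static float lerp(float a, float b, float t) { return a + (b - a) * t; }

Texel bilinear_in_alu(float u, float v, int width, int height)
{
    float fx = u * width  - 0.5f;              /* texel-space position          */
    float fy = v * height - 0.5f;
    int   x0 = (int)fx, y0 = (int)fy;
    float tx = fx - x0,  ty = fy - y0;         /* blend weights                 */

    Texel t00 = texel_at(x0,     y0    );      /* four reads ...                */
    Texel t10 = texel_at(x0 + 1, y0    );
    Texel t01 = texel_at(x0,     y0 + 1);
    Texel t11 = texel_at(x0 + 1, y0 + 1);

    Texel o;                                   /* ... and 12 lerps per sample   */
    o.r = lerp(lerp(t00.r, t10.r, tx), lerp(t01.r, t11.r, tx), ty);
    o.g = lerp(lerp(t00.g, t10.g, tx), lerp(t01.g, t11.g, tx), ty);
    o.b = lerp(lerp(t00.b, t10.b, tx), lerp(t01.b, t11.b, tx), ty);
    o.a = lerp(lerp(t00.a, t10.a, tx), lerp(t01.a, t11.a, tx), ty);
    return o;
}
```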

Jawed
 