Texture unit question.

patroclus02

Newcomer
After working some years on 2D development, I want to learn the basics of the 3D world. I've read a lot and I think I've grasped the whole pipeline process, hardware, etc.

But, I’ve got problems understanding some concepts.
I read in an article: “A texturing unit is a unit that is capable of producing one dual textured on-screen pixel... meaning that per clock it can calculate one on-screen bilinear filtered pixel.”

As far as I understand, bilinear filtering requires 4 texels to produce one pixel on screen, which obviously needs more bandwidth than point filtering. But I see no relation between producing one dual-textured pixel and a bilinear-filtered pixel...

Dual texture means that you apply 2 textures to the same polygon surface...
Any help on this please??

Thank you!!
 
You've created a context where one didn't exist, I think. The article (yay Kristof!) doesn't present the text in that way (separating the ideas of sampling and filtering), and it also seems you understand the difference just fine anyway :smile:
 
So, If I get it right...
Each texture unit can apply 2 textures per clock
Each texture unit can do bilinear pixel filtering per clock. Does this mean that it would cost the same as point-sampled filtering?? Would trilinear filtering cost 2 clocks?? And anisotropic??
:rolleyes:
 
A modern texture unit is single texture, so one per clock. Bilinear has a memory access cost compared to point sampling as you note, but on a modern GPU it's hard to engineer a case where it's not free because there's always sufficient memory bandwidth.

And yes, trilinear costs another clock and another set of memory accesses for the texels; anisotropic filtering cost depends on the number of taps and the texture orientation on a modern GPU (it's never fully invariant).
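To make the filtering costs concrete, here's a minimal Python sketch of the arithmetic (not GPU code): bilinear needs 4 texel fetches per sample, and trilinear is just two bilinear samples from adjacent mip levels blended together, which is why it costs the extra clock and extra memory accesses.

```python
def bilinear_sample(tex, u, v):
    """Sample a 2D texture (list of rows) with bilinear filtering.
    Requires 4 texel fetches per output sample."""
    h, w = len(tex), len(tex[0])
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Interpolate horizontally on both rows, then vertically.
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def trilinear_sample(mip_a, mip_b, u, v, frac):
    """Trilinear = two bilinear samples (8 texel fetches total) from
    adjacent mip levels, blended by the fractional mip level."""
    return (bilinear_sample(mip_a, u, v) * (1 - frac)
            + bilinear_sample(mip_b, u, v) * frac)
```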
 
So, why does the article say that a texture unit can do a dual-textured pixel??

Also
The 3Dfx Voodoo2 processor also runs at 90 MHz and has 2 texturing units so it also has a Theoretical Peak fill-rate of 180 Mpixels (or Texels).

From the context, I suppose it has a single pixel pipeline with 2 texture units.
Then, at 90MHz, that will be 90MPixels/s
And if TWO textures are applied to the SAME pixel, 180MTexels/s

But I don't see how 180 MPixels/s can be possible...
 
patroclus02 said:
So, why does the article say that a texture unit can do a dual-textured pixel??
Because it's outdated and wrong on that point.

But I don't see how 180 MPixels/s can be possible...
You should have read the following paragraphs and pages. It isn't 180 Mpixel/s, but back then MPixel and MTexel were often used interchangeably.
 
I did read the whole article, but I got confused...

What I think is:
MPixels and MTexels are the same if there are equal numbers of pixel and texture pipelines.
So I don't understand why MTexels and MPixels were considered the same when 2 texture units per pixel pipeline were used...
 
patroclus02 said:
I did read the whole article, but I got confused...

What I think is:
MPixels and MTexels are the same if there are equal numbers of pixel and texture pipelines.
So I don't understand why MTexels and MPixels were considered the same when 2 texture units per pixel pipeline were used...

Even NVIDIA today uses Billions of Pixels/s on its homepage for their recent GPUs, despite it actually being Billions of Texels. Call it a bad habit that got stuck from the past.

Fill Rate (Billion pixels/sec) 15.6

http://www.nvidia.com/page/geforce_7900.html

In reality it's actually 15.6 Gigatexels/s and 10.4 Gigapixels/s.
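The split between the two figures is just units × clock. A quick sketch of the arithmetic (the 650 MHz core clock, 16 ROPs and 24 texture units for the 7900 GTX are my assumptions here, chosen to match the figures above):

```python
def fill_rates_mhz(clock_mhz, pixel_units, texture_units):
    """Peak fill rates in M/s: pixels/s come from the pixel (ROP) units,
    texels/s from the total texture units."""
    return clock_mhz * pixel_units, clock_mhz * texture_units

# Voodoo2: 90 MHz, 1 pixel pipeline with 2 TMUs
# -> 90 Mpixel/s, but 180 Mtexel/s
print(fill_rates_mhz(90, 1, 2))      # (90, 180)

# GeForce 7900 GTX (assuming 650 MHz core, 16 ROPs, 24 texture units)
# -> 10400 Mpixel/s (10.4 Gpixel/s), 15600 Mtexel/s (15.6 Gtexel/s)
print(fill_rates_mhz(650, 16, 24))   # (10400, 15600)
```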
 
patroclus02 said:
I did read the whole article, but I got confused...

What I think is:
MPixels and MTexels are the same if there are equal numbers of pixel and texture pipelines.
So I don't understand why MTexels and MPixels were considered the same when 2 texture units per pixel pipeline were used...
In the TNT2/Voodoo3 days, the 2 textures per pixel pipeline was more of a limitation on the software side of things: you could combine the color values from two different textures to the same pixel, but the hardware itself only had one texture unit per pixel unit, and thus required two cycles to output one pixel.

Today's GPUs are much more general, and can apply up to 16 textures per pixel, as well as do much more complex mathematical operations to calculate the final result. Basically, in today's pipeline, you think of textures as 2D arrays that are used as lookup tables to simplify calculations for determining the final pixel value. They can include any type of data, not just color data. Sometimes textures represent some property of the surface at a particular point (color, surface shape, gloss, etc.), sometimes they represent mathematical functions (you could make a texture representing sin(x), for example).

Since texture filtering is basically a linear interpolation between texels, sometimes it is useful, and sometimes it isn't. How useful it is depends upon the data content of the texture, how that content contributes to the final pixel color, and how that content is accessed between neighboring pixels.
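As a toy illustration of the lookup-table idea (plain Python, not shader code): bake sin(x) into a 1D "texture" and let linear filtering interpolate between texels. The texture size of 256 is an arbitrary choice.

```python
import math

def bake_sin_texture(size):
    """Bake sin(x) over [0, 2*pi] into a 1D 'texture' (a plain list)."""
    return [math.sin(2 * math.pi * i / (size - 1)) for i in range(size)]

def sample_1d_linear(tex, u):
    """Linearly filtered 1D texture fetch, u in [0, 1]. The filtering
    hardware interpolates between the two nearest texels."""
    x = u * (len(tex) - 1)
    x0 = int(x)
    x1 = min(x0 + 1, len(tex) - 1)
    f = x - x0
    return tex[x0] * (1 - f) + tex[x1] * f

tex = bake_sin_texture(256)
# The filtered lookup approximates sin() without evaluating it per pixel;
# u = 0.25 corresponds to x = pi/2, so the result is close to 1.0:
print(sample_1d_linear(tex, 0.25), math.sin(math.pi / 2))
```

How well this works depends on the texture resolution and how smooth the baked function is, which is exactly the "sometimes it is useful, sometimes it isn't" point above.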
 
Ailuros said:
Even NVIDIA today uses Billions of Pixels/s on its homepage for their recent GPUs, despite it actually being Billions of Texels. Call it a bad habit that got stuck from the past.

http://www.nvidia.com/page/geforce_7900.html

In reality it's actually 15.6 Gigatexels/s and 10.4 Gigapixels/s.
They're using the term "fill rate" in that sense to refer to the fact that their chips have 24 pixel pipelines, not the texturing ability. So the figure 15.6 Gpixels/s is "right" with regards to the numerical value and units used, but not with the term quoted with respect to what it technically implies.
 
Chalnoth: V3 (like V2) had one pipeline with 2 texture units. Only VSA-100/TNT(2)/G400/Rage128 had two pipelines with one texture unit per each pipeline.

btw. can anybody explain to me why the Matrox G400 can use trilinear filtering along with multitexturing? It has only bilinear TMUs (I think), and no other chip from this generation that uses bilinear TMUs has this ability.
 
no-X said:
btw. can anybody explain to me why the Matrox G400 can use trilinear filtering along with multitexturing? It has only bilinear TMUs (I think), and no other chip from this generation that uses bilinear TMUs has this ability.
Most likely for the same reason today's GPUs can do it: it just loops and spends another cycle on texturing. By the way, I'm not sure none of the other chips of that generation can do it as well.
 
Xmas said:
Most likely for the same reason today's GPUs can do it: it just loops and spends another cycle on texturing.
It's a bit strange that the G400MAX with trilinear+multitexturing performs very similarly to the TNT2 Ultra with multitexturing+bilinear only (link)
Xmas said:
By the way, I'm not sure none of the other chips of that generation can do it as well.
If the hardware is capable of doing trilinear filtering + multitexturing, then the driver doesn't allow it. It simply doesn't work.
 
no-X said:
It's a bit strange that the G400MAX with trilinear+multitexturing performs very similarly to the TNT2 Ultra with multitexturing+bilinear only (link)
Most likely because of the G400MAX's higher bandwidth.
 
Thanks, I think I've got it. :smile:

I still don't exactly understand what it means that a graphics card can apply 16 textures per pass.
For example

GeForce 6800
12 / 12 / 24
Textures / Pixels / Z Samples
Per Clock
Textures Per Pass 16

:?:
 
patroclus02 said:
I still don't exactly understand what it means that a graphics card can apply 16 textures per pass.
It means that a pixel shader can sample up to 16 textures, process them, and output a color.

It's not as relevant any more as it was several years ago. Some hardware was only able to sample one texture, do some basic operations, and output a color (overwriting or blending with the color already in the color buffer). So it required multiple polygons to be blended on top of each other to create some effects that require multiple textures. In other words, multiple passes were required. Nowadays we can do most things in a single pass, in part thanks to being able to sample multiple textures per pass.

So, it was quite relevant when we had fixed-function pixel processing, but with programmable pixel shaders it's no longer a limiting factor.
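To illustrate the difference, here's a hypothetical per-pixel sketch in Python (using modulate, i.e. multiply, as the combine operation): a single pass samples both textures in the shader and writes once, while multi-pass on single-texture hardware draws the geometry twice and blends into the framebuffer.

```python
def single_pass(base, lightmap):
    """One pass: the shader reads both textures and combines them
    before writing each pixel once."""
    return [b * l for b, l in zip(base, lightmap)]

def multi_pass(base, lightmap):
    """Two passes on single-texture hardware: pass 1 writes the base
    color, pass 2 modulates it via framebuffer blending (dst * src)."""
    framebuffer = list(base)              # pass 1: draw base texture
    for i, l in enumerate(lightmap):      # pass 2: blend over the result
        framebuffer[i] = framebuffer[i] * l
    return framebuffer

base = [0.8, 0.5, 0.2]
light = [1.0, 0.5, 0.25]
print(single_pass(base, light) == multi_pass(base, light))  # True
```

The results match, but the multi-pass version sends the triangle through the pipeline twice, which is exactly the cost that multitexturing hardware removed.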
 
But the above specs

12 / 12 / 24
Textures / Pixels / Z Samples
Per Clock

mean that each pixel pipe has only one TMU. And it can apply one texture per clock. So it would require 16 clocks to apply 16 textures. Are those 16 clocks one pass??
 
patroclus02 said:
mean that each pixel pipe has only one TMU.
With modern processors you shouldn't really consider this - a "Pixel Pipe" (being a ROP) doesn't necessarily have any relationship to the number of texture processors.

Are those 16 clocks one pass??
That's entirely dependent on the program that's requesting the textures; but in this context it does mean a single pass - if a program requests more than 16 textures, then more passes are required for each additional 16 layers.
 
patroclus02 said:
But the above specs

12 / 12 / 24
Textures / Pixels / Z Samples
Per Clock

mean that each pixel pipe has only one TMU. And it can apply one texture per clock. So it would require 16 clocks to apply 16 textures. Are those 16 clocks one pass??
Probably much more than 16 clocks, since it's likely that any shader using 16 textures would do much more than just one or two operations on those textures. Yes, that would be considered one pass.

Multi-pass is where you actually send a triangle to the graphics card more than once, in order to apply different effects. This is frequently done when there aren't enough resources on the GPU to perform all of the desired operations at once.
 