About Texture Cache

ultrafly

Newcomer
Why are the NV25's texture caches called "Dual Texture Caches"?
And are the NV25's texture caches 256K or 512K?

How many bytes are fetched on a cache miss?

Thanks.
 
Dual, erm, one for each TMU layer?

256K or 512K... think much smaller IMHO... unless you're talking kilobits, and even then... if they had such large caches they would put it in the marketing docs.

Bytes fetched on a miss... only NVIDIA knows and I doubt they will tell...
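Only NVIDIA knows the real figure, but a back-of-envelope sketch shows the likely ballpark. Texture caches typically fetch a small square tile of texels per miss, since bilinear footprints are two-dimensional; every number below is an assumption for illustration, not an NV25 specification:

```python
# Hypothetical per-miss fetch sizes for a tiled texture cache.
# All figures are assumptions, not NV25 documentation.
texel_bytes = 4                 # RGBA8, uncompressed
for tile_edge in (2, 4, 8):     # one cache line covers tile_edge^2 texels
    line_bytes = tile_edge * tile_edge * texel_bytes
    print(f"{tile_edge}x{tile_edge} texel line -> {line_bytes} bytes per miss")
```

So a plausible answer is somewhere between 16 and 256 bytes per miss, depending on tile shape and texel format.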
 
ultrafly said:
And are the NV25's texture caches 256K or 512K?
Texture caches are way smaller than that.
From some tests I made, the NV25 seems to have an 8K texture cache
in a single-texture-per-polygon scenario.
There is almost no point in increasing these texture caches, because you're not going to exploit much more locality in memory accesses.
To do better than a small cache, the hardware would need a huge cache, big enough to hold the whole working set of textures across several frames.
Texture caches are designed to put out a lot of different data per clock, because they have to serve many pixel pipelines that access different textures at the same time.

ciao,
Marco
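A quick way to see Marco's locality point is to simulate a tiny direct-mapped texture cache over a scanline rendering pass. Everything below is made up for illustration (texture size, tile size, set mapping; this is not how the NV25 actually works): hit rate climbs as the cache grows, then flattens once only compulsory misses remain, so past a few KB extra capacity buys almost nothing.

```python
# Toy simulation: bilinear-filtered fetches through a small
# direct-mapped texture cache. All parameters are invented.
TEX_SIZE = 256            # texture is TEX_SIZE x TEX_SIZE texels
BLOCK    = 4              # one cache line covers a 4x4 texel tile
TEXEL_B  = 4              # bytes per texel (RGBA8)

def block_id(u, v):
    """Map a texel coordinate to its 4x4-tile cache line address."""
    return (v // BLOCK) * (TEX_SIZE // BLOCK) + (u // BLOCK)

def render_hit_rate(cache_kb):
    lines = (cache_kb * 1024) // (BLOCK * BLOCK * TEXEL_B)
    cache = [None] * lines            # direct-mapped: one tag per set
    hits = misses = 0
    # Rasterize a 256x256 quad; bilinear means 4 texel reads per pixel.
    for y in range(256):
        for x in range(256):
            for du in (0, 1):
                for dv in (0, 1):
                    b = block_id(min(x + du, TEX_SIZE - 1),
                                 min(y + dv, TEX_SIZE - 1))
                    s = b % lines
                    if cache[s] == b:
                        hits += 1
                    else:
                        misses += 1
                        cache[s] = b
    return hits / (hits + misses)

for kb in (1, 2, 4, 8, 16, 32):
    print(f"{kb:3d} KB cache: {render_hit_rate(kb):.1%} hit rate")
```

With these made-up parameters the knee happens around 8 KB, but that is an artifact of the chosen set mapping, not a measurement of real hardware; the general shape (rapid saturation) is the point.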
 
Kristof said:
Dual, erm, one for each TMU layer?

256K or 512K... think much smaller IMHO... unless you're talking kilobits, and even then... if they had such large caches they would put it in the marketing docs.

Bytes fetched on a miss... only NVIDIA knows and I doubt they will tell...

Why is it that the graphics chip companies are much stingier with technical details than CPU manufacturers? Is it just tradition, or are there other reasons?
Frustrating.

Entropy
 
Entropy said:
Why is it that the graphics chip companies are much stingier with technical details than CPU manufacturers? Is it just tradition, or are there other reasons?
Frustrating.

Maybe it is just the fierce competition today; in the good old days there was a lot more technical information about graphics architectures. Think about Silicon Graphics machines: Reality Engine, Infinite Reality.

There is still information in computer graphics conferences, for example about texture caches:

http://graphics.stanford.edu/papers/texture_cache/

http://graphics.stanford.edu/papers/texture_prefetch/

(from the Akeley & Hanrahan course at Stanford)
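As a rough illustration of the idea in the second paper (Igehy et al.'s prefetching architecture): check the cache tags early, fire the miss off to memory, and park the fragment in a FIFO deep enough to cover the memory latency, so misses overlap instead of stalling the pipe. The sketch below uses invented numbers (latency, miss rate, FIFO depth) and a much-simplified timing model:

```python
# Toy timing model of texture prefetching vs. a blocking cache.
# All constants are assumptions for illustration.
from collections import deque
import random

MEM_LATENCY = 20          # cycles for a miss to return from memory
FIFO_DEPTH  = 24          # fragments in flight; must cover MEM_LATENCY
MISS_RATE   = 0.10        # assumed per-fragment miss probability
N_FRAGMENTS = 10_000

random.seed(1)
misses = [random.random() < MISS_RATE for _ in range(N_FRAGMENTS)]

def blocking_cycles():
    # A blocking cache stalls the whole pipe for every miss.
    return sum(1 + (MEM_LATENCY if m else 0) for m in misses)

def prefetch_cycles():
    # Tags are checked at FIFO entry; data must be back by FIFO exit.
    in_flight = deque()           # exit cycles of queued fragments
    enter = last_exit = 0
    for m in misses:
        if len(in_flight) >= FIFO_DEPTH:
            # FIFO full: wait until the oldest fragment exits.
            enter = max(enter, in_flight.popleft())
        ready = enter + (MEM_LATENCY if m else 0)
        last_exit = max(last_exit + 1, ready)   # exit at most 1/cycle
        in_flight.append(last_exit)
        enter += 1                # one fragment enters per cycle
    return last_exit

print("blocking :", blocking_cycles(), "cycles")
print("prefetch :", prefetch_cycles(), "cycles")
```

With the FIFO slightly deeper than the memory latency, the prefetching pipe finishes in roughly one cycle per fragment, while the blocking cache pays the full latency on every miss.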

However, IHVs don't really provide much technical information about their implementations; in fact, I think some of what we get in their 'white' papers is just FUD and disinformation.

In contrast, IBM, Intel, and AMD provide extensive information about their CPU architectures (though not all of it). One reason could be that there is a lot more open research going on in CPU architecture than in GPU architecture, so there is no point in hiding that information. It can even be good for marketing (for example, SMT/Hyper-Threading).
 
Actually, I think the major difference is in how you program CPUs and GPUs. Quite simply, with a GPU, low-level management of the core architecture is hidden from the programmer, while with a CPU, everything is exposed.

Because programmers have control over extremely low-level details of the CPU, the manufacturers must release the low-level details of the architecture.
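For instance, tiling a loop so its working set fits in L1 only works if the vendor documents the cache size. A pure-Python sketch of the transformation (sizes are illustrative, and the actual speedup only shows up in a compiled language, but the structure is the point):

```python
# Cache blocking: the classic optimization that requires knowing
# the CPU's cache size. N and BLOCK are illustrative values.
N, BLOCK = 512, 64        # BLOCK chosen so a tile fits in L1

a = [[i * N + j for j in range(N)] for i in range(N)]

def sum_strided(m):
    # Column-major walk over a row-major array: every access is a
    # large stride, so on a real CPU almost every access misses.
    return sum(m[i][j] for j in range(N) for i in range(N))

def sum_blocked(m):
    # Visit one BLOCK x BLOCK tile at a time so it stays cache-resident.
    total = 0
    for bi in range(0, N, BLOCK):
        for bj in range(0, N, BLOCK):
            for i in range(bi, bi + BLOCK):
                for j in range(bj, bj + BLOCK):
                    total += m[i][j]
    return total

assert sum_strided(a) == sum_blocked(a)   # same result, different order
```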

But with a GPU, the more or less standardized nature of the programming interfaces means that only the drivers need low-level control over the GPU, so only driver developers need to know the low-level details. These companies would really rather not release anything they don't have to.
 