Rendering of a sidewalk

aths said:
Matt B said:
Interesting work, just thought I'd say that you have some serious scale inconsistencies on your dirtmap version ;)

What is that supposed to mean? The dirtmap shows sand. Since there is no infinite resolution, it's hard to show every single grain.


Sorry, it's not an important point, just that the sand texture looks at least twice as big as it should be. If you look at the rocks and pebbles on the sand, they are much bigger than the rocks and pebbles on the pavement; the sand also looks much blurrier than the pavement. This would be fixed by just scaling down the sand texture.
 
Matt B said:
aths said:
Matt B said:
Interesting work, just thought I'd say that you have some serious scale inconsistencies on your dirtmap version ;)

What is that supposed to mean? The dirtmap shows sand. Since there is no infinite resolution, it's hard to show every single grain.


Sorry, it's not an important point, just that the sand texture looks at least twice as big as it should be. If you look at the rocks and pebbles on the sand, they are much bigger than the rocks and pebbles on the pavement; the sand also looks much blurrier than the pavement. This would be fixed by just scaling down the sand texture.
I think one problem is that textures cannot be kept at full sharpness if the texture filter is not supposed to flicker. Also, I don't have a very good camera, it is a Nikon Coolpix 2000. Anyway, I will try to make a better dirt-mapping image.

edit: Here is a noisier, sharper dirt texture:

dirt_s.jpg
 
Simon F said:
aths said:
The second, which is far more important from a rendering performance point of view, is to reduce the data transfer from texture memory to the chip.

Unless the graphics chip has a very big cache (i.e. unlikely), I don't think this technique on its own will solve that issue. Of course, you could perhaps compress each tile with another technique.

I guess, in a sense, your technique is vector quantisation where each vector is "enormous".
As far as I know, the bandwidth issue comes up when using a lot of textures or very large textures. Would you say that bandwidth is also the main issue under "normal" conditions (let's say, 2-3 512x512 textures per pixel)?
 
aths said:
Simon F said:
aths said:
The second, which is far more important from a rendering performance point of view, is to reduce the data transfer from texture memory to the chip.

Unless the graphics chip has a very big cache (i.e. unlikely), I don't think this technique on its own will solve that issue. Of course, you could perhaps compress each tile with another technique.

I guess, in a sense, your technique is vector quantisation where each vector is "enormous".
As far as I know, the bandwidth issue comes up when using a lot of textures or very large textures. Would you say that bandwidth is also the main issue under "normal" conditions (let's say, 2-3 512x512 textures per pixel)?
Only 2 or 3? That's probably a bit on the low side.

Once you are using more texel data than will fit inside the texel cache (which, in all honesty, won't be very large), the texture data transfer across the external bus will become important.

Taking your example, and assuming you have 16bpp textures, a 512x512 texture plus its MIP chain is ~680KB. (Turning off MIP mapping only makes things worse.) I'd imagine there might be a silicon budget for, say, somewhere between a 4KB and a 64KB texture cache, so you will only be able to store a small fraction of a texture at a time.
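The ~680KB figure can be checked with a quick sketch (purely illustrative; it just sums the MIP levels down to 1x1):

```python
def texture_size_bytes(width, height, bpp=16, mipmapped=True):
    """Bytes for a texture plus (optionally) its full MIP chain."""
    total = width * height * bpp // 8
    w, h = width, height
    while mipmapped and (w > 1 or h > 1):
        w, h = max(w // 2, 1), max(h // 2, 1)
        total += w * h * bpp // 8
    return total

print(texture_size_bytes(512, 512) / 1024)   # ~683 KB with MIP chain
print(texture_size_bytes(512, 512, mipmapped=False) / 1024)  # 512 KB base level only
```

The MIP chain adds roughly a third on top of the base level, which is where the ~680KB for a 512KB base texture comes from.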

In effect, the cache really only saves re-reading data that are shared between pixels when doing the texture filtering operations. You are thus probably still going to need to read, on average, 3 texels for each pixel.

Even if you could address individual texels out of the external RAM with 100% efficiency (which you can't do anyway), that would be 48 bits per texture op. If we assume a measly 2 pipes (or dual texturing), then we are up to 96 bits. That's becoming a significant proportion of your bus width.
 
As far as I know, common games use a basemap, a lightmap, and maybe a detail map or environment map. In my work I set an average count of 4 textures as the default for the performance considerations. (And the average filtering mode to trilinear 2x AF. While the basemap should be filtered with 16x AF, due to questionable "optimizations" the average effectively used AF should be around 4x-6x. On the other hand, the lightmap is fine with bilinear.)

The cache is there to avoid re-reading texels when filtering, yes. Afaik the triangle-fill algorithm is optimized for the best cache hit rate, meaning adjacent quads share as many texels as possible along their borders. With trilinear and particularly anisotropic filtering, the hit rate should be quite high. Afaik, with AF enabled most graphics cards are GPU-clock bound, while the memory clock is more important for filtering with lower quality.

In my work I assume, as an average thump-rule value, that 1 new texel is read per bilinear sample. Is that assumption OK?
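That rule of thumb can be sanity-checked with a toy count. The sketch below assumes the ideal 1:1 pixel-to-texel mapping that MIP mapping aims for, and treats each pixel's bilinear footprint as the 2x2 texel block around its sample point; it is not a model of any real cache:

```python
# Toy check of the "one new texel per bilinear sample" rule of thumb,
# assuming 1:1 pixel-to-texel mapping across a screen-aligned surface.
def new_texels_per_sample(width, height):
    seen = set()
    new = 0
    for y in range(height):
        for x in range(width):
            # pixel centre (x+0.5, y+0.5) -> 2x2 bilinear footprint
            for ty in (y, y + 1):
                for tx in (x, x + 1):
                    if (tx, ty) not in seen:
                        seen.add((tx, ty))
                        new += 1
    return new / (width * height)

print(new_texels_per_sample(256, 256))  # ~1.008 new texels per bilinear sample
```

With perfect texel reuse between neighbours, a W x H block of pixels touches (W+1) x (H+1) texels, so the ratio tends to 1 new texel per sample, matching the rule of thumb.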
 
aths said:
As far as I know, common games use a basemap, a lightmap, and maybe a detail map or environment map.
OK I'm including some translucency effects as well.
In my work I assume, as an average thump-rule value, that 1 new texel is read per bilinear sample. Is that assumption OK?
As a rule-of-thumb, it's not a bad assumption (provided the developer turns on MIP mapping, grrr. Perhaps a "thump-rule" is needed after all.).
 
Simon F said:
aths said:
As far as I know, common games use a basemap, a lightmap, and maybe a detail map or environment map.
OK I'm including some translucency effects as well.
I also assume an overdraw factor of 2.0, so one has to render twice as many pixels as are visible. This way I account for some effects with transparency (while the rendering of big smoke sprites and the like will probably result in a much higher overdraw factor).
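Combining the thread's assumptions (overdraw 2.0, 4 textures, trilinear averaging 2 bilinear samples, 1 new texel per bilinear sample, 16bpp) gives a rough per-frame texel traffic estimate. The resolution below is an assumption of mine for illustration, not something stated in the posts:

```python
# Illustrative per-frame texel bandwidth, combining the thread's assumptions.
width, height = 1024, 768   # assumed render resolution (not from the posts)
overdraw = 2.0              # render twice as many pixels as are visible
textures = 4                # basemap + lightmap + detail/environment maps
bi_samples_per_texture = 2  # trilinear: 2 bilinear samples on average
new_texels_per_bi_sample = 1
bytes_per_texel = 2         # 16bpp

pixels = width * height * overdraw
texel_bytes = (pixels * textures * bi_samples_per_texture
               * new_texels_per_bi_sample * bytes_per_texel)
print(texel_bytes / 2**20, "MB of texel reads per frame")  # 24 MB
```

At 60 frames per second that would be well over 1 GB/s of texel reads alone, which illustrates why the bandwidth question matters.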

Simon F said:
aths said:
In my work I assume, as an average thump-rule value, that 1 new texel is read per bilinear sample. Is that assumption OK?
As a rule-of-thumb, it's not a bad assumption (provided the developer turns on MIP mapping, grrr. Perhaps a "thump-rule" is needed after all.).
What about AF? Let's say 2x AF, trilinear filtered, so we have 4 bi-samples. Is it still OK to assume one new texel read per bi-sample?
 
aths said:
What about AF? Let's say 2x AF, trilinear filtered, so we have 4 bi-samples. Is it still OK to assume one new texel read per bi-sample?
Well, that would depend on how AF is implemented and there isn't really a standard, but I would say, on average, "no". With AF you are using a filter with a higher number of taps per pixel, and I doubt there'd be as much overlap between the texels used by adjacent pixels.
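The raw tap counts behind the question can be tabulated. This sketch assumes AF is implemented as multiple bi/trilinear probes along the line of anisotropy, which is one common scheme but, as noted above, not a standard:

```python
# Texel taps per pixel for common filter modes, assuming AF is realised
# as af_degree (bi/tri)linear probes along the line of anisotropy.
def taps_per_pixel(af_degree=1, trilinear=True):
    bilinear_samples = af_degree * (2 if trilinear else 1)
    return bilinear_samples * 4  # 4 texel taps per bilinear sample

print(taps_per_pixel(trilinear=False))      # plain bilinear: 4 taps
print(taps_per_pixel())                     # plain trilinear: 8 taps
print(taps_per_pixel(af_degree=2))          # 2x AF trilinear: 16 taps (4 bi-samples)
```

More taps per pixel means a wider footprint in texture space, which is why the one-new-texel-per-bi-sample assumption gets shakier as the AF degree rises.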
 