Most under-used 'modern' feature: 3D texture maps?

Grall

Not counting the higher-order surfaces support that only one particular chip supports, of course. 3D textures are afaik supported by all DX8-and-up chips, but to my knowledge no software has ever used them for anything.

Silly or expected?

Opinions, please. :)

*G*
 
IMHO expected. They are huge to store, and the bandwidth, filter and cache costs are quite high as well. So you're stuck with low resolution and a low number of these 3D textures (due to their size), on top of slow performance from the high cache cost (again linked to their size), the bandwidth (mipmapped volume filtering requires 16 texels IIRC, versus the normal 8 for 2D trilinear), and the filter clock cost (twice trilinear, basically; probably a multi-cycle op, since you don't want dedicated silicon for a filter op that is rarely used).

K-
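Kristof's storage and bandwidth point can be sanity-checked with some back-of-envelope Python; the texture sizes and the RGBA8 format below are arbitrary example choices, not figures from the thread:

```python
# Rough cost comparison of a mipmapped 2D texture vs. a mipmapped
# 3D texture, assuming 32-bit RGBA texels (arbitrary example sizes).

def tex2d_bytes(w, h, bpp=4):
    """Total bytes for a 2D texture with a full mip chain (~4/3 overhead)."""
    total = 0
    while True:
        total += w * h * bpp
        if (w, h) == (1, 1):
            break
        w, h = max(w // 2, 1), max(h // 2, 1)
    return total

def tex3d_bytes(w, h, d, bpp=4):
    """Total bytes for a 3D texture with a full mip chain (~8/7 overhead)."""
    total = 0
    while True:
        total += w * h * d * bpp
        if (w, h, d) == (1, 1, 1):
            break
        w, h, d = max(w // 2, 1), max(h // 2, 1), max(d // 2, 1)
    return total

print(tex2d_bytes(256, 256) // 1024)            # ~341 KiB
print(tex3d_bytes(256, 256, 256) // (1 << 20))  # ~73 MiB

# Texel fetches per filtered sample: 2D bilinear 4, 2D trilinear 8,
# 3D filtering within one mip level 8, mipmapped 3D (quadrilinear) 16.
```

The two-orders-of-magnitude gap between a 256^2 map and a 256^3 volume at the same per-texel format is the storage problem in a nutshell.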
 
Grall said:
Not counting the higher-order surfaces support that only one particular chip supports, of course. 3D textures are afaik supported by all DX8-and-up chips, but to my knowledge no software has ever used them for anything.

Depends on what you mean by "no software". I'm not sure if there are any games that use it (though I'm not completely ruling out that there may be), but many of my demos use it. I don't think it will be used primarily for materials for a while, due to storage space, but rather for shader effects and lookup tables of various kinds, where it's quite useful.
 
Grall said:
Not counting the higher-order surfaces support that only one particular chip supports, of course.
No current chip supports a very good form of HOS, for one reason or another.

NV2x: Low performance
R2xx: Not flexible
R3xx: Low performance
Parhelia: Little to no market penetration
 
Tenebrae uses a 3D texture for light attenuation (it works nicely on pre-PS 2.0 hardware; the Tenebrae 2 engine allows non-spherical lights, which are done nicely with a texture matrix and a 3D texture). I also vaguely remember Carmack talking about using them for dynamic lights; dunno if any Q3/D3 engine variants use them for that.
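For illustration, a light-attenuation volume like the one described can be generated on the CPU. This is only a sketch with an arbitrary linear falloff, not Tenebrae's actual code:

```python
# Build a spherical attenuation volume: each texel holds a brightness
# falloff by distance from the volume's center (linear falloff chosen
# arbitrarily; a real engine might use 1 - d^2 or a lookup curve).

def attenuation_volume(size):
    center = (size - 1) / 2.0
    vol = []
    for z in range(size):
        for y in range(size):
            for x in range(size):
                d = ((x - center) ** 2 + (y - center) ** 2 +
                     (z - center) ** 2) ** 0.5 / center
                vol.append(max(0.0, 1.0 - d))  # ~1 near center, 0 outside
    return vol

vol = attenuation_volume(16)   # 16^3 volume, 4096 texels
print(len(vol))                # 4096
print(vol[0])                  # 0.0 (corner lies outside the unit sphere)
```

The non-spherical lights mentioned above fall out for free: the texture matrix warps the lookup coordinates before they hit this volume, so the stored data never changes.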
 
3D textures would have been useful, if only the mipmap pyramid hadn't been invented the wrong way up. With adaptive-resolution texture maps, 3D textures would be useful; without them, they are nearly always too big.
 
There needs to be a "3D" S3TC or VQ equivalent: use a 4x4x4 or 8x8x8 block size and just extend the algorithm. I guess the problem is that a block unpacks to a much bigger size (64 or 512 texels), which might cause inefficiencies on architectures that don't cache compressed texels.
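The arithmetic on such a hypothetical volume format is easy to check. The block layout below (two RGB565 endpoints plus a 2-bit palette index per texel) is an assumed straight extension of 2D S3TC/DXT1, not a real format:

```python
# Hypothetical extension of the DXT1/S3TC block layout to volumes:
# two 16-bit endpoint colors plus a 2-bit palette index per texel.

def dxt1_like_block_bytes(texels):
    endpoint_bits = 2 * 16          # two RGB565 endpoint colors
    index_bits = 2 * texels         # 2-bit index per texel
    return (endpoint_bits + index_bits) // 8

for name, texels in [("4x4 (DXT1)", 16), ("4x4x4", 64), ("8x8x8", 512)]:
    packed = dxt1_like_block_bytes(texels)
    raw = texels * 4                # 32-bit RGBA source data
    print(f"{name}: {packed} bytes packed, {raw} raw, {raw / packed:.1f}:1")
```

The endpoint overhead is amortized over more texels, so the ratio actually improves with block volume; the cost, as noted, is that one 8x8x8 block decompresses to 512 texels at once.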
 
Hmm, I must not have pressed submit the other day.

3D textures are used a lot in medical imaging as well.
 
Humus said:
MfA said:
if only the mipmap pyramid hadn't been invented the wrong way up.

"The wrong way up"? :?: :?
I wouldn't have put it that way, but I understand what MfA means, and agree (at least to some extent).

Currently the "feeling" of mipmaps is that you have one full base texture (mip level 0 in OGL), and then make less detailed versions of it (mipmap 1...N). If the mipmap pyramid was "the other way up", you'd think of it as a low res texture (I'd like mip level 0 to be a 1x1 texture), and then you can add more detail by adding higher resolution mipmaps.

It might seem like two different ways to look at the exact same thing, but the difference is that with the second way you haven't locked yourself into how the highest-resolution mipmap will look. And the second way is more natural if you want textures with different max resolutions in different areas of them.

(E.g., the lower-right part of a texture might contain more important details, so it gets 10 mip levels, while the upper left might be rather fuzzy anyway, so it only gets 6.)

Then there's of course the problem that the poor hardware guys need to make hardware that can texture from a non-regularly stored texture, and that the poor driver guys need to write texture management code for textures that you might add some more detail to after they were initially created. But hey, I'm in neither of those two positions, so it shouldn't be a problem. :devilish:
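Basic's bottom-up pyramid could be modelled like this; it's a software sketch only, and the class and method names are made up:

```python
# An "upside-down" mip pyramid: level 0 is 1x1, and finer detail
# levels are appended as they become available.

class GrowablePyramid:
    def __init__(self):
        self.levels = []            # level i has resolution 2^i x 2^i

    def append_level(self, data):
        """Append the next-finer level; it must double the resolution."""
        size = 2 ** len(self.levels)
        assert len(data) == size * size, "level must be the next power of two"
        self.levels.append(data)

    def max_resolution(self):
        """Finest resolution currently available (0 if empty)."""
        return 2 ** (len(self.levels) - 1) if self.levels else 0

pyr = GrowablePyramid()
pyr.append_level([0])               # 1x1 base
pyr.append_level([0] * 4)           # 2x2
pyr.append_level([0] * 16)          # 4x4
print(pyr.max_resolution())         # 4
```

Nothing here commits to a final top resolution, which is exactly the property the post above is after.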
 
It would be cool if you could arbitrarily add more detailed levels to a texture.

Say, for continuous loading algorithms, you could load and unload the low-resolution levels very quickly as you needed (since, if you're loading them in, they're going to be a ways off in the distance anyway) and just append them to the pyramid, with the hardware treating the highest level as what it treats level 0 as right now.

Right now you have to either load in the entire texture, or load in just the first level and generate the other levels, and most of those textures will be unloaded without ever using anything anywhere near level 0. (As an example, imagine a vast outdoor level where the player is running straight down the middle: there are going to be tons of textures to his left and right that will only ever need very low resolutions, with only the stuff he's running directly toward ever needing level 0.)

[edit] though now that I think about it, you could do this with the current system too.. so nevermind :)
 
Just as a side note....

Does anybody know if the hyperspace effect in Homeworld 2 (great game, although I've almost finished it now :cry: ) is done with a 3D texture, or just a dynamic 2D texture (i.e. a texture that's updated as it passes through the ship)?
 
Ilfirin said:
It would be cool if you could arbitrarily add more detailed levels to a texture.

Right. In my previous job I was doing 3D over a network, where texture maps arrived slowly. It's easy to compress a texture such that it's refined as more data arrives (for example, using wavelets), but unfortunately it's not possible to just add the refined level as a new texture level.
 
ET said:
Right. In my previous job I was doing 3D over a network, where texture maps arrived slowly. It's easy to compress a texture such that it's refined as more data arrives (for example, using wavelets), but unfortunately it's not possible to just add the refined level as a new texture level.

Well, if you know how big the final texture is, you should be able to create a managed texture, fill in the LODs as you get them, and just use SetLOD() to tell D3D the highest LOD the graphics card can use. It's not as favorable as being able to append higher levels of detail, since even if you only ever need up to the 16x16 mip level of a 1024x1024 texture, you'd still have to allocate space for the full 1024x1024 chain.

But I wonder if, even if you're only using a small number of the texture levels, it'd still be faster to just load in the highest detail level and autogen the rest. Or maybe you could load in only the middle level, autogen down from it, then load in the highest level if and when you need it and autogen down again. Not sure what all the rules are behind mipmap autogen, though.
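Ilfirin's fill-as-you-go scheme can be sketched in Python. The ManagedTexture class and the SetLOD-style query below are hypothetical stand-ins for the D3D calls, not the real API:

```python
import math

# A managed texture whose full mip chain is pre-sized, with levels
# streamed in coarse-to-fine; set_lod() reports the finest level the
# sampler may use (index 0 = full resolution, like D3D LOD numbering).

class ManagedTexture:
    def __init__(self, base_size):
        self.num_levels = int(math.log2(base_size)) + 1
        self.loaded = [False] * self.num_levels

    def fill_level(self, level):
        self.loaded[level] = True

    def set_lod(self):
        """Finest level from which a contiguous run down to 1x1 is loaded."""
        for lod in range(self.num_levels):
            if all(self.loaded[lod:]):
                return lod
        return self.num_levels - 1  # nothing usable yet: clamp to coarsest

tex = ManagedTexture(1024)          # 11 levels: 1024x1024 .. 1x1
for lvl in range(10, 5, -1):        # coarse levels arrive first over the wire
    tex.fill_level(lvl)
print(tex.set_lod())                # 6, i.e. the 16x16 level
```

The wasted allocation the post complains about shows up here too: all eleven slots exist from the start even if only the coarse ones are ever filled.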
 
Basic said:
Humus said:
MfA said:
if only the mipmap pyramid hadn't been invented the wrong way up.

"The wrong way up"? :?: :?
I wouldn't have put it that way, but I understand what MfA means, and agree (at least to some extent).

Currently the "feeling" of mipmaps is that you have one full base texture (mip level 0 in OGL), and then make less detailed versions of it (mipmap 1...N). If the mipmap pyramid was "the other way up", you'd think of it as a low res texture (I'd like mip level 0 to be a 1x1 texture), and then you can add more detail by adding higher resolution mipmaps.

It might seem like two different ways to look at the exact same thing, but the difference is that with the second way you haven't locked yourself into how the highest-resolution mipmap will look. And the second way is more natural if you want textures with different max resolutions in different areas of them.

(E.g., the lower-right part of a texture might contain more important details, so it gets 10 mip levels, while the upper left might be rather fuzzy anyway, so it only gets 6.)

Then there's of course the problem that the poor hardware guys need to make hardware that can texture from a non-regularly stored texture, and that the poor driver guys need to write texture management code for textures that you might add some more detail to after they were initially created. But hey, I'm in neither of those two positions, so it shouldn't be a problem. :devilish:

It's OK, you'd just have to accept the memory thrashing such an idea would cause ;-)

D3D (and I think OGL) lets you specify the top map used within a map chain, so you can, in theory at least, do this now; you don't need to change the order of the maps. But if you're relying on it to make extra texture memory available, expect performance to die due to the previously mentioned memory thrashing. This would however be improved on HW with virtual texture addressing capabilities...

John
 
I was just being coy, and basic got it right.

That said, I think it would be useful if hardware directly supported adaptive-resolution textures. You can do it with shaders in theory (I think there was a paper on doing this at last year's hardware workshop), but it isn't all that efficient... especially without real flow control.

In practice you would probably implement it with tiles at each LOD rather than individual texels, and some sort of extended quadtree to encode the locations of the tiles+LODs (extended to store extra info in the leaves about neighbours and such, to prevent you from having to traverse the tree for each tile in the common cases).

It would get really interesting if you could use such a texture as a render target, BTW :) (A prerequisite for adaptive shadow mapping.)
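A minimal version of the tiled quadtree MfA describes (without the neighbour shortcuts in the leaves) might look like this; all names are hypothetical:

```python
# Quadtree over texture space: each node is either a leaf holding a
# tile id, or an internal node with four children [NW, NE, SW, SE]
# (index = (u >= 0.5) + 2 * (v >= 0.5), v increasing downward).

class Node:
    def __init__(self, tile=None, children=None):
        self.tile = tile            # leaf: tile stored at this node's LOD
        self.children = children    # internal: four child nodes

def lookup(node, u, v, depth=0):
    """Descend to the finest tile covering normalized coords (u, v)."""
    if node.children is None:
        return node.tile, depth     # depth doubles as the tile's LOD
    quadrant = (1 if u >= 0.5 else 0) + (2 if v >= 0.5 else 0)
    return lookup(node.children[quadrant], (u * 2) % 1.0, (v * 2) % 1.0,
                  depth + 1)

# Lower-right quadrant refined one level deeper than the rest:
root = Node(children=[Node(tile="A"), Node(tile="B"),
                      Node(tile="C"),
                      Node(children=[Node(tile="D0"), Node(tile="D1"),
                                     Node(tile="D2"), Node(tile="D3")])])
print(lookup(root, 0.25, 0.25))     # ('A', 1)
print(lookup(root, 0.9, 0.9))       # ('D3', 2)
```

The per-lookup traversal cost here is exactly what the neighbour info in the leaves is meant to short-circuit for the common cases.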
 
MfA said:
I was just being coy, and basic got it right.

That said, I think it would be useful if hardware directly supported adaptive-resolution textures. You can do it with shaders in theory (I think there was a paper on doing this at last year's hardware workshop), but it isn't all that efficient... especially without real flow control.

In practice you would probably implement it with tiles at each LOD rather than individual texels, and some sort of extended quadtree to encode the locations of the tiles+LODs (extended to store extra info in the leaves about neighbours and such, to prevent you from having to traverse the tree for each tile in the common cases).

It would get really interesting if you could use such a texture as a render target, BTW :) (A prerequisite for adaptive shadow mapping.)

Not sure how current mip mapping + demand-mode loading (with address virtualisation) doesn't give you dynamic resolution support, i.e. a higher map level only gets loaded into the system if it's accessed, and then only one page at a time. Further, the existing API lets you clamp the top map used, allowing you to force a reduction in texture detail if you really want to. Just reversing the order in which mip maps are stored does none of this for you.

Render targets are a bit of an odd one, as it's probably better to start out rasterising the top map, generating all sub-levels, and then paging them out when something else wants the memory. Anything else would require you to keep the scene data for the render sitting around to generate higher-res maps later as necessary; this might be an interesting approach in itself, but it would turn out slower if you're dealing with a complex intermediate render.

John.
 
John, the highest-resolution mipmap as such doesn't exist in the type of setup I'm referring to; the highest resolution varies with the location in the texture. I did miss that, in principle, PS 3.0 hardware that can do address virtualization has almost all the hardware needed to support my idea. You could clamp the LOD in the pixel shader using a separate LUT texture holding the minimum LOD for each location in the texture. (The small unsupported part is that this LUT itself is in principle non-hierarchical; you could apply the hack hierarchically, but that would be getting kinda ridiculous, which limits scaling in the extreme cases.)

Can you upload partial textures/mipmaps, though? (And is the virtual memory system flexible enough to allow void parts?) If not, big sparse textures (think 1024^3 3D textures) would still present a huge problem, even if you could virtualize unused parts all the way back to the HD via system virtual memory after loading (can you do that?).

With virtual addressing, nearly all the support is there for variable-resolution textures and variable-resolution render targets (again, being able to virtualize unused parts is not enough; the unused parts shouldn't be stored anywhere). The extra hardware needed is minimal for variable-resolution render targets, and non-existent for textures... now all it needs is some API extensions to do it cleanly, so get to it ;)

As I said to Simon last year, PVR has it relatively easy supporting variable resolution render targets ... as long as the resolution only changes between tiles of course. You guys would have the easiest time supporting adaptive shadowmapping.
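The per-location LOD clamp via a LUT texture can be prototyped in software. This sketch (all names hypothetical) just does what the pixel shader would do per fragment:

```python
# Clamp a computed mip level against a coarse per-region LUT that
# stores the finest LOD actually available at each texture location.

def sample_clamped(desired_lod, u, v, lod_lut, lut_size):
    """Return the LOD to fetch at normalized coords (u, v)."""
    x = min(int(u * lut_size), lut_size - 1)
    y = min(int(v * lut_size), lut_size - 1)
    min_lod = lod_lut[y][x]         # finest level stored for this region
    return max(desired_lod, min_lod)

# 2x2 LUT: the lower-right region keeps detail down to LOD 0,
# everything else only down to LOD 4.
lut = [[4, 4],
       [4, 0]]
print(sample_clamped(1, 0.9, 0.9, lut, 2))  # 1: detail exists here
print(sample_clamped(1, 0.1, 0.1, lut, 2))  # 4: clamped to coarse level
```

This is the non-hierarchical LUT MfA mentions: its own resolution is fixed, which is where the scaling limit in the extreme cases comes from.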
 