Allegorithmic Texture Compression

From that description it looks like it can generate compressed textures directly
I asked someone who has access to the evaluation kit, and the sample program's output is a DXTC-compressed .dds file with the full mipmap chain, so I guess it does indeed ;) (now I wonder how good or bad the compression quality is, though... if it has to do it at load time, I fear it'd focus more on speed than quality, meh!)
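For anyone curious about such output files, it's easy to sanity-check them yourself. Here's a minimal sketch in Python, assuming the standard published DDS header layout ("sample.dds" is just a placeholder name):

    import struct

    # A DDS file starts with the magic "DDS " followed by a 124-byte header.
    with open("sample.dds", "rb") as f:
        data = f.read(128)

    assert data[0:4] == b"DDS ", "not a DDS file"

    # dwHeight and dwWidth sit at file offsets 12 and 16; dwMipMapCount at 28.
    height, width = struct.unpack_from("<II", data, 12)
    (mip_count,) = struct.unpack_from("<I", data, 28)
    # The FourCC of the pixel format (e.g. b"DXT1", b"DXT5") is at offset 84.
    fourcc = data[84:88]

    print(width, height, mip_count, fourcc)

A full mipmap chain for a 2^n texture should report n+1 mip levels.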

As for the thing I said I was working on, I guess I might as well scrap it. It was a way to further losslessly compress a DXTC file (to save storage costs, not GPU cycles!), and while the initial results were promising and beat bzip2 slightly, I then tried to compare it against LZMA. Needless to say, I got smacked down, sigh. Ah well, better luck next time I guess! :) (and fwiw, LZMA is a really, really impressive thing...)
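If anyone wants to reproduce that kind of baseline comparison, it's trivial with Python's standard library (a sketch; "texture.dds" is a placeholder file name):

    import bz2
    import lzma

    with open("texture.dds", "rb") as f:
        raw = f.read()

    # Compare the two general-purpose compressors on the same DXTC data.
    print("original:", len(raw))
    print("bzip2:   ", len(bz2.compress(raw, 9)))
    print("lzma:    ", len(lzma.compress(raw, preset=9)))

Any custom scheme has to beat the smaller of those two numbers to be worth the trouble.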


Uttar
 
IIRC, in a Quake Wars presentation, they mentioned geometric texturing or some such. I can't remember the name, but the point was that, given a few parameters, it'd procedurally generate the megatexture for the map, and then an artist could touch it up. Given that it's procedural plus some touch-ups, one could store just the necessary seeding information plus the touch-ups, which would make content delivery much easier - especially when it comes to player-made maps, mods, the initial game and so on.

How far off is this?
 
Even with G80 that's still too expensive for things like pure virtual (procedural) textures. I'd be interested to read JC's comments (as always!) - do you have a link?

You know, I can't find it for the life of me!

IIRC, it was a QuakeCon speech? Ugh, my memory sucks.
 
As for the thing I said I was working on, I guess I might as well scrap it. It was a way to further losslessly compress a DXTC file (to save storage costs, not GPU cycles!), and while the initial results were promising and beat bzip2 slightly, I then tried to compare it against LZMA. Needless to say, I got smacked down, sigh. Ah well, better luck next time I guess! :) (and fwiw, LZMA is a really, really impressive thing...)
Uttar
That is interesting, because I've been contemplating trying the same thing myself. I must look up LZMA, because I'd not heard of it.
 
I'm surprised we've not yet seen a hardware implementation of Perlin's improved noise. The rework KP did a few years back was designed to produce a hardware-friendly function with a minimal real-estate requirement. We've had a noise function in several shader languages for a while now, but AFAICR they all return 1!
I believe the latest 3Dlabs cards had support for this, though I don't know if it was simplex or not. And no, noise shouldn't be something a card should be doing.
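For reference, the "hardware friendly" bits of the 2002 rework are the new quintic fade curve and a gradient selection that replaces the table of unit vectors with a few bit tests. A sketch of those two pieces, following Perlin's reference implementation:

    def fade(t):
        # Improved quintic interpolant 6t^5 - 15t^4 + 10t^3: zero first
        # *and* second derivatives at t=0 and t=1, and cheap to evaluate.
        return t * t * t * (t * (t * 6 - 15) + 10)

    def grad(h, x, y, z):
        # Pick one of 12 gradient directions from the low 4 bits of the
        # hash: just selects and sign flips, no multiplies or lookup table.
        h &= 15
        u = x if h < 8 else y
        v = y if h < 4 else (x if h in (12, 14) else z)
        return (u if h & 1 == 0 else -u) + (v if h & 2 == 0 else -v)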
 
That is interesting, because I've been contemplating trying the same thing myself. I must look up LZMA, because I'd not heard of it.

I did look it up: Lempel-Ziv-Markov chain, as used in 7zip.

I've also been thinking about further compressing DXT-compressed textures by splitting them into 3 streams (color1, color2, flags) and compressing these separately, but then you should probably also make a choice about which principal colour to put into which channel, and that information needs to be kept as well.
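The splitting step is simple, assuming DXT1-style 8-byte blocks (two 16-bit endpoint colours plus 32 bits of 2-bit indices). A sketch, with LZMA standing in for whatever back-end compressor you'd actually test ("texture_blocks.bin" is a placeholder for the raw block data):

    import lzma

    def split_dxt1(blocks: bytes):
        # Each DXT1 block is 8 bytes: color0 (u16), color1 (u16), 4 index bytes.
        c0s, c1s, idx = bytearray(), bytearray(), bytearray()
        for off in range(0, len(blocks), 8):
            c0s += blocks[off:off + 2]
            c1s += blocks[off + 2:off + 4]
            idx += blocks[off + 4:off + 8]
        return bytes(c0s), bytes(c1s), bytes(idx)

    data = open("texture_blocks.bin", "rb").read()
    streams = split_dxt1(data)
    split_size = sum(len(lzma.compress(s, preset=9)) for s in streams)
    print("unsplit:", len(lzma.compress(data, preset=9)), "split:", split_size)

The hope is that each stream is more self-similar than the interleaved whole; the catch is exactly the endpoint-ordering issue mentioned above, since in DXT1 the comparison of color0 and color1 selects the block mode, so swapping them isn't free.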
 
What I did some months ago - inspired by an excellent paper, "Real-Time DXT Compression" by J.M.P. van Waveren - was to convert an image to YCoCg colour space and compress it with DXT5. The Y channel, which holds the luminance information, was compressed as alpha (at high quality), while both chroma channels were compressed at lower quality. This basically compressed the image by a factor of 4 without any visible detail loss - I tested it with Lena and didn't see a difference.

Of course, there is a drawback: the texels have to be converted back to RGB in the fragment shader, but we're talking about 5 instructions here. Still, one channel is left free (YCoCg uses 3 of RGBA), so I packed the height map into it (doing Blinn-style bump mapping). The results were very good quality-wise. A 1024x1024 colour/height texture thus takes 1 MB of space, while the original colour map plus normal map would need 8 MB, all at moderate shader computation cost. I think it is a good tradeoff.

I must also say that Blinn-style bump mapping looks better to me than the usual normal mapping, maybe because it is possible to adjust the bumpiness of the surface dynamically. IMHO it can also be more accurate on steep polygons, because one may use derivatives to sample the height accordingly. However, I can say nothing about the performance, because I used the shader designer and didn't write an app myself.
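For anyone wanting to try the same trick, the colour-space maths is simple. A sketch of the forward and inverse transforms (the inverse is essentially the handful of shader instructions mentioned above; Co and Cg are biased by 0.5 so they fit in ordinary [0, 1] texture channels):

    def rgb_to_ycocg(r, g, b):
        # Inputs and outputs in [0, 1].
        y  = r / 4 + g / 2 + b / 4
        co = r / 2 - b / 2 + 0.5
        cg = -r / 4 + g / 2 - b / 4 + 0.5
        return y, co, cg

    def ycocg_to_rgb(y, co, cg):
        # The per-fragment reconstruction: a few adds and subtracts,
        # which is why it costs only a handful of shader instructions.
        co -= 0.5
        cg -= 0.5
        return y + co - cg, y + cg, y - co - cg

Putting Y in the DXT5 alpha channel works because alpha gets its own independently interpolated 8-bit endpoints, so the perceptually important luminance survives much better than it would in the 5:6:5 colour endpoints.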
 
RE: the title of the thread... Is it really supposed to be allegorithmic? How does this pertain to allegories?
 
Well, fractal decompression is easy. Only figuring out how to compress it to about 1/100th or less of its size is the hard part. ;)

But the main problem with any compression algorithm is that the hardware needs to be able to find the right pixel. Decompressing the whole file in a shader would defeat the purpose of doing it in the first place - unless it's an algorithmic one, in which case you can mostly just calculate the right value when you need it.
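That random-access property is exactly what fixed-rate block formats buy you. For a DXT1 texture, the block holding any texel is directly addressable with pure arithmetic (a sketch, ignoring mipmaps and non-multiple-of-4 sizes):

    def dxt1_block_offset(x, y, width):
        # DXT1 stores 4x4 texel blocks of 8 bytes each, row-major, so any
        # texel's block can be located without decompressing anything else.
        blocks_per_row = width // 4
        block_index = (y // 4) * blocks_per_row + (x // 4)
        return block_index * 8  # byte offset within the mip level

    print(dxt1_block_offset(130, 9, 256))  # block containing texel (130, 9)

A variable-rate scheme (entropy coding, LZ, etc.) loses this property, which is why it only makes sense on the storage/transport side, with a transcode back to a fixed-rate format before upload.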
 