An idea - no doubt stupid

Davros

Could a game store its textures as a video, with each frame being a different texture? Are there any advantages to doing it this way, or is it just ridiculous?
 
It's possible, but it would be extremely lossy and would only save storage space, as the engine would still need to unpack the individual frames to make them usable.
 
If you have a lot of very similar textures it may have some advantages. The point of video encoding is to leverage the fact that consecutive frames in a video have many similar pixels. Multiple textures rarely share this property.
 
I'm surprised there are no wavelet-based texture compression methods in mainstream use. Wavelet transforms can be orders of magnitude more efficient than other types of compression and are computationally efficient...just sayin'
 
Wavelet transforms can be orders of magnitude more efficient than other types of compression and are computationally efficient...just sayin'
A few years ago I toyed around with JPEG 2000. They generally needed around 10% of the disk space to deliver image quality equivalent to ordinary JPEGs, but doing anything with them took hundreds of times longer (a few ms to decode a 2 MP JPEG vs. 2-3 s for JPEG 2000 on a P4 at 3 GHz). It might have been due to sucky libraries, though.
 
So, if you had software to sort all the game textures into an order of similarity, could there be any benefit to making groups of similar textures that store the differences rather than the textures themselves? A bit like file versioning repositories.
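To make the idea concrete, here's a minimal sketch, assuming textures that differ only in a small region (the sizes, the edit, and the zlib pass are all just illustrative choices, not anything a real engine does):

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)
base = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)

# A hypothetical "similar" texture: the base plus a small localized edit.
variant = base.copy()
variant[30:40, 30:40] += 5  # uint8 arithmetic wraps, which is fine for a demo

# Storing the texture directly vs. storing its difference from the base.
raw_size = len(zlib.compress(variant.tobytes()))
delta_size = len(zlib.compress((variant - base).tobytes()))  # mostly zeros
print(raw_size, delta_size)  # the delta should compress far better
```

The catch, as with the video idea, would be random access: fetching one texture means first fetching and reconstructing its whole chain of ancestors.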
 
I'm surprised there are no wavelet-based texture compression methods in mainstream use. Wavelet transforms can be orders of magnitude more efficient than other types of compression and are computationally efficient...just sayin'
Who says it hasn't been tried? The thing with texture compression is that you need (fast) random access. That, pretty much, means you need a fixed-rate encoding (or indirection, but that's not pleasant either).

Consider this: after you have done the wavelet transform on a block of pixels, you have a set of coefficients which are typically clustered/biased towards zero, and which would require quantisation and entropy encoding. How do you get a fixed-rate encoding?


FWIW, many years ago (in the lead-up to developing PVRTC) I did experiment with transform-based texture compression, fixing the per-block bit budget and then progressively throwing away small terms until it would fit, but the results weren't great.
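For illustration only, here's a toy version of that kind of experiment, assuming a single-level 2D Haar transform and a hypothetical keep-the-K-largest-coefficients budget (both choices are mine, not PVRTC's):

```python
import numpy as np

def haar2d(block):
    """One level of a 2D Haar transform: pairwise averages and differences,
    first along rows, then along columns."""
    b = block.astype(np.float64)
    b = np.hstack([(b[:, 0::2] + b[:, 1::2]) / 2,
                   (b[:, 0::2] - b[:, 1::2]) / 2])
    return np.vstack([(b[0::2, :] + b[1::2, :]) / 2,
                      (b[0::2, :] - b[1::2, :]) / 2])

# A smooth-ish 8x8 block: a ramp plus mild noise, like typical image content.
x, y = np.meshgrid(np.arange(8.0), np.arange(8.0))
block = 16 * x + 4 * y + np.random.default_rng(1).normal(0, 1, (8, 8))

coeffs = haar2d(block)
print(int(np.sum(np.abs(coeffs) < 1.0)), "of 64 coefficients are near zero")

# Fixed per-block budget: keep only the K largest-magnitude coefficients
# and zero the rest -- the "progressively throw away small terms" idea.
K = 16
threshold = np.partition(np.abs(coeffs).ravel(), -K)[-K]
trimmed = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)
```

The energy concentrates in a few coefficients, which is exactly why entropy coding helps and why a fixed bit budget forces you to throw detail away.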
 
Would it be better for streaming, such as megatexture? Like, use a wavelet transform to compress individual 128x128 blocks of a big texture for streaming? Although, with a block this small, the advantage of the wavelet transform is probably not going to be very great...
 
Wasn't Rage using some kind of wavelet compression on the huge image? Pretty sure that's also why they need GPU decoding or a fast CPU to get the images from disk to the GPU fast enough.
 
Would it be better for streaming, such as megatexture? Like, use a wavelet transform to compress individual 128x128 blocks of a big texture for streaming? Although, with a block this small, the advantage of the wavelet transform is probably not going to be very great...
I would consider 128x128 to be huge for blocks... but I guess I'm looking at it from a texture compression perspective.

One slight problem is that to get the best out of the hardware you really need to use the native texture compression scheme, which means you either have to recompress "on the fly" (not really a good idea), or instead use a lossless compressor on top of the native format (e.g. Ström and Wennersten's "Lossless Compression of Already Compressed Textures" (slides)).
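The layering is easy to picture. Their scheme is purpose-built around DXT block statistics; the sketch below uses zlib purely as a stand-in to show the idea of wrapping the native format rather than replacing it:

```python
import zlib

def pack_for_disk(dxt_data: bytes) -> bytes:
    """Lossless wrapper over already-compressed texture data.
    Neighbouring DXT blocks tend to have correlated endpoints/indices,
    so even a general-purpose compressor finds redundancy."""
    return zlib.compress(dxt_data, level=9)

def unpack_for_gpu(stored: bytes) -> bytes:
    """Decompression yields the native DXT stream, ready to upload --
    no transcoding, and no extra loss on top of the DXT encoding."""
    return zlib.decompress(stored)
```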
 
Wasn't Rage using some kind of wavelet compression on the huge image? Pretty sure that's also why they need GPU decoding or a fast CPU to get the images from disk to the GPU fast enough.

Here's a question: for the Rage PC version, why don't they have an option to do a really large install with the textures already partially decompressed? Wouldn't this cut down on the amount of CPU/GPU power required at runtime to transcode the textures?
 
It would make sense to have that option, and I think the disk-cache feature it supports (but apparently doesn't use) kind of does that. I'm sure there will be mods out soon enough that are able to use the decompressed images from disk.
 
@OP: No, it wouldn't make much sense. Modern video codecs are heavily based on the idea of what the eye can and cannot see when images are displayed in rapid succession. Missing details or errors introduced by the codec are invisible to your eye when viewed as a movie but can be quite jarring when watched frame by frame. Also, random access is a must, as mentioned by Simon. Data access issues would be magnified by keeping textures as videos, as most (if not all) modern codecs deal with key frames and differential frames, with motion encoded for the latter. Video codecs are good for what they are employed for: coding video. They would be a miserable fit for texture compression (unless you want to use a video as a texture, of course).
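A toy model of the random-access problem with key/differential frames, assuming a fixed keyframe interval (the 30 is an arbitrary stand-in for a real GOP length):

```python
KEYFRAME_INTERVAL = 30  # hypothetical GOP length

def frames_to_decode(n: int) -> int:
    """How many frames must be decoded to reconstruct frame n:
    everything from the previous keyframe onwards."""
    last_key = (n // KEYFRAME_INTERVAL) * KEYFRAME_INTERVAL
    return n - last_key + 1

print(frames_to_decode(0))   # 1: frame 0 is a keyframe
print(frames_to_decode(59))  # 30: the keyframe at 30 plus 29 deltas
```

With textures stored as frames, a random texture fetch would drag in half a GOP's worth of decoding on average.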

@Simon F: Wasn't PVRTC essentially a wavelet-like compression algorithm anyway?

Also a random thought: wouldn't an interesting side-effect of a wavelet-based compression algorithm be free mipmapping?
 
@Simon F: Wasn't PVRTC essentially a wavelet-like compression algorithm anyway?
No. I did use wavelets as a way of generating an "ideal" low pass filter (that was used in the compression process) but not for the method itself.

Also a random thought: wouldn't an interesting side-effect of a wavelet-based compression algorithm be free mipmapping?
"Free" in terms of storage space, maybe, but probably damned expensive to render a MIP map level that is a couple of steps down the chain (as the pixel data would be scattered throughout memory). IIRC pyramid-based texture compression was suggested quite some time ago (but I can't remember the reference).
 
Would it have been better if Rage stored its megatexture (on disk) as some form of DXT?

Here's a question: for the Rage PC version, why don't they have an option to do a really large install with the textures already partially decompressed? Wouldn't this cut down on the amount of CPU/GPU power required at runtime to transcode the textures?

Yes, you wouldn't need to transcode from HD Photo to DXT, but you would need much larger distribution media, and most importantly you'd be splitting your single MT file into several little DDS files. You'd trade the transcoding CPU overhead for HDD seek overhead. I don't know if that's better, but I'd rather use the larger distribution media size for less aggressive compression.
 
Well, you could distribute it the way it is and decompress it on install. At least I don't see a reason why not. SSDs can stream untold amounts of data with little difficulty, so the option would be cool.
 
I'm surprised there are no wavelet-based texture compression methods in mainstream use. Wavelet transforms can be orders of magnitude more efficient than other types of compression and are computationally efficient...just sayin'

Interestingly enough, wavelet-based texture compression was the topic of my latest research paper, currently under review for a graphics conference.

I agree with what Simon said: you need a fixed-rate encoding for fast random access, you need to use the existing texture compression formats to better take advantage of the hardware and you must also properly deal with the wavelet coefficients that are clustered towards zero.

My solution was to build a simple wavelet compression scheme on top of DXTC. I don't claim it achieves state-of-the-art image coding performance (quality for a given bitrate), but it's a simple way to increase the flexibility (in terms of compression rate) of the existing DXTC formats. Here is a small preview of the results:

[results preview image]
I will be happy to post a preprint as soon as the paper gets accepted.
 