Grall said:
I never saw a jpeg looking like that third image. Looks more like a GIF or something like that... *G*

I think they are showing a scaled-down result (given that the original was 3MB!). The text associated with the image says:
"JPEG2000 image (middle) shows almost no quality loss from current JPEG, even at 158:1 compression."
Squeak said:
Hudson said: "Consequently, we developed Hybrid Vector Quantization technology: HVQ, specializing in game images."
Simon F, could this be something in the same vein as what you're working on? I mean with the mention of "clear outlines" and the "hybrid VQ" technique.

I very much doubt it. You can read how PVR-TC works if you want (a paper's on my website). It's quite different to VQ approaches; it's more vaguely along the lines of the BTC (block truncation coding) family, except that it really doesn't have blocks at all.
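Just to pin down the jargon: plain vector quantization means replacing each small group of pixels with the index of the nearest entry in a shared codebook. A toy sketch of that idea only; it says nothing about what Hudson's actual HVQ does, and the codebook here is hand-picked rather than trained:

```python
# Minimal vector quantization sketch: encode 2x2 pixel blocks ("vectors")
# as indices into a shared codebook. Purely illustrative; a real codebook
# would be trained (e.g. with k-means / LBG).
codebook = [
    (0, 0, 0, 0),          # flat dark block
    (255, 255, 255, 255),  # flat bright block
    (0, 255, 0, 255),      # vertical edge
    (0, 0, 255, 255),      # horizontal edge
]

def encode_block(block):
    # Pick the codebook entry with the smallest squared error.
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], block)))

def decode_block(index):
    return codebook[index]

blocks = [(10, 5, 0, 8), (250, 240, 255, 251), (3, 240, 10, 255)]
indices = [encode_block(b) for b in blocks]           # 2 bits per block here
print(indices, [decode_block(i) for i in indices])
```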
randycat99 said:
Banding and color graininess is still fairly apparent in the reference image (for a 3 MB file), so this would tend to help out the wavelet sample image as far as not looking so different.

Actually, the banding might actually make it harder! I don't know what family of wavelets is used in JPEG2000, but if, for example, they are using linear wavelets, then banding or noise in the source image would make it more difficult to compress.

The problem with JPEG is that, IIRC, every 8x8 block is 'independent'. With wavelets, you can easily share information between neighbouring regions of pixels (predicting pixels), which potentially saves a considerable amount of storage.
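To make the "sharing between neighbouring regions" point concrete, here is a toy one-level linear-wavelet pass (lifting form) on a 1D signal: each odd sample is predicted from its even neighbours, so only small residuals need storing, and the prediction naturally crosses what would be block boundaries in JPEG. This is only a sketch, not JPEG2000's actual 5/3 or 9/7 filters, and the usual update step and proper boundary handling are omitted:

```python
# One-level, 1D linear-wavelet decomposition (lifting form), heavily simplified:
# odd samples are predicted from the average of their two even neighbours,
# so only the prediction error (detail) needs storing.
def forward(signal):
    even = signal[0::2]
    odd = signal[1::2]
    detail = [o - (even[i] + even[min(i + 1, len(even) - 1)]) / 2
              for i, o in enumerate(odd)]
    return even, detail          # "low frequency" + "delta" values

def inverse(even, detail):
    out = []
    for i, d in enumerate(detail):
        pred = (even[i] + even[min(i + 1, len(even) - 1)]) / 2
        out += [even[i], pred + d]
    return out

sig = [10, 12, 14, 20, 40, 42, 41, 39]
lo, hi = forward(sig)
print(lo, hi)                    # smooth part, small residuals
print(inverse(lo, hi) == sig)    # exact reconstruction -> True
```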
Squeak said:
Simon F, could this be something in the same vein as what you're working on? I mean with the mention of "clear outlines" and the "hybrid VQ" technique.

Two guesses are that maybe they've used a two-stage approach where the errors from VQing are fixed by another stage of VQ, or perhaps they're using IVQ (interpolated VQ) where, again, the errors from a low-quality image are corrected using VQ. <shrug>
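To illustrate the first of those guesses in code form: a residual (two-stage) VQ, where a second codebook quantizes the error left by the first. This is the generic textbook idea, not a claim about how HVQ actually works, and the codebooks are hand-picked rather than trained:

```python
# Two-stage (residual) VQ sketch: stage 1 coarsely approximates the vector,
# stage 2 quantizes the leftover error with its own codebook.
stage1 = [(0, 0), (128, 128), (255, 255)]          # coarse codebook
stage2 = [(0, 0), (16, -16), (-16, 16), (32, 32)]  # residual codebook

def nearest(codebook, vec):
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], vec)))

def encode(vec):
    i1 = nearest(stage1, vec)
    residual = tuple(v - c for v, c in zip(vec, stage1[i1]))
    i2 = nearest(stage2, residual)
    return i1, i2

def decode(i1, i2):
    return tuple(c + r for c, r in zip(stage1[i1], stage2[i2]))

vec = (140, 110)
codes = encode(vec)
print(codes, decode(*codes))   # (1, 1) -> (144, 112): closer than stage 1 alone
```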
MfA said:
Getting optimum memory locality out of wavelet transforms is an extremely painful exercise, and afterwards it is of course still slow compared to block transforms.

I don't see that that's necessarily the case. For a small neighbourhood of pixels, each wavelet transform only requires a small footprint of low-frequency + delta values. You can cache a small set of these values from each "level" of the frequency hierarchy and maintain quite reasonable coherence - AFAICS. You just gradually update this data as you scan through the compressed image.
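One way to see why the footprint can stay small: with a Haar-style hierarchy, reconstructing any single sample only touches one coarse value plus one detail value per level, so a decoder scanning through the image only needs a handful of cached coefficients per level. A toy 1D sketch, with Haar chosen purely because it keeps the arithmetic short:

```python
# Reconstruct a single sample from a Haar-style multi-level decomposition.
# Only one coefficient per level is touched, which is why a scanning decoder
# can get away with caching a tiny footprint per level.
def haar_decompose(signal, levels):
    details = []
    approx = list(signal)
    for _ in range(levels):
        a = [(approx[2 * i] + approx[2 * i + 1]) / 2 for i in range(len(approx) // 2)]
        d = [(approx[2 * i] - approx[2 * i + 1]) / 2 for i in range(len(approx) // 2)]
        details.append(d)
        approx = a
    return approx, details          # coarsest approximation + per-level deltas

def reconstruct_sample(approx, details, index):
    # Walk from the coarsest level down to the finest: one coefficient per level.
    value = approx[index >> len(details)]
    for level in reversed(range(len(details))):
        d = details[level][index >> (level + 1)]
        value += d if (index >> level) & 1 == 0 else -d
    return value

sig = [3, 1, 4, 1, 5, 9, 2, 6]
approx, details = haar_decompose(sig, 3)
print([reconstruct_sample(approx, details, i) for i in range(len(sig))])  # == sig
```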
Ha. I suppose we could throw BWT into the mixture as well.

Hell, I think transform coding is not needed at all ... Simon should add VQ to PVR-TC (for the colour and modulation maps) and beat JPEG-2000 silly.
Squeak said:
Doesn't the BTC family do vector quantization within the blocks? I don't know how many other compression methods use VQ, but I would imagine that even waveform/DCT-based compression methods would benefit from it.

AFAICS, the idea generally seen in BTC is to pack every block separately; CCC (colour cell coding) did drift a bit from this because it had a palette, but S3TC returned to independent blocks. I suppose, in a sense, you are doing some VQ within a block, but the palette is rather tiny (2 or 4 colours); when I've seen VQ mentioned, you generally think of global systems.
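For reference, the "pack every block separately" idea in its simplest form: a BTC-style coder that stores two levels per 4x4 block plus one bit per pixel choosing between them (essentially AMBTC). A stripped-down greyscale sketch, not S3TC or PVR-TC:

```python
# BTC-style block coder on one greyscale 4x4 block: one bit per pixel plus
# two levels for the whole block (essentially AMBTC, the simplest member
# of the family being discussed).
def encode_block(block):                 # block: 16 greyscale values
    mean = sum(block) / len(block)
    bits = [1 if p >= mean else 0 for p in block]
    hi = [p for p, b in zip(block, bits) if b]
    lo = [p for p, b in zip(block, bits) if not b]
    # Guard against a perfectly flat block.
    level_hi = sum(hi) / len(hi) if hi else mean
    level_lo = sum(lo) / len(lo) if lo else mean
    return bits, level_lo, level_hi      # 16 bits + two 8-bit levels ~= 2bpp

def decode_block(bits, level_lo, level_hi):
    return [level_hi if b else level_lo for b in bits]

block = [12, 14, 200, 210, 11, 13, 205, 198,
         10, 15, 199, 202, 12, 16, 201, 204]
bits, lo, hi = encode_block(block)
print(decode_block(bits, lo, hi))        # two-level approximation of the block
```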
Squeak said:
I've read the two papers on PVR-TC, and to me it seems to be an advanced form of YUV compression? Is that right?

Gosh no. It's independent of YUV, although you potentially could use it to encode the two low-frequency images. (I did try it and decided it had just as many disadvantages as benefits.)
Squeak said:
One thing I haven't been able to deduce from the papers is why exactly you need two low-res base textures (A and B). It seems to be the whole point of the scheme, otherwise it would "just" be hardwired double-pass textures?

Well, you want two (or more), but that's not the point. Err, I really don't think I could summarise the whole paper in just a few lines!
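For what it's worth, my reading of the published paper is that the decoder blends two low-resolution signals, upscaled to full resolution, using a per-texel modulation value. A very rough sketch of just that blend; the real scheme uses a proper bilinear upscale and careful bit packing, none of which is shown here:

```python
# Rough sketch of the reconstruction idea: each texel is a per-texel blend of
# two low-resolution signals A and B upscaled to full resolution.  The upscale
# below is a crude nearest-neighbour stand-in for the real bilinear upscale.
def upscale(low_res, factor):
    return [v for v in low_res for _ in range(factor)]   # placeholder upscale

def reconstruct(low_a, low_b, modulation, factor):
    a = upscale(low_a, factor)
    b = upscale(low_b, factor)
    return [ai + m * (bi - ai) for ai, bi, m in zip(a, b, modulation)]

A = [10, 40]                       # low-res signal A (1D for brevity)
B = [200, 120]                     # low-res signal B
mod = [0.0, 0.33, 0.66, 1.0]       # per-texel blend weights
print(reconstruct(A, B, mod, 2))   # roughly [10.0, 72.7, 92.8, 120.0]
```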
Squeak said:
I realise that somehow memory is saved by blending two 16-bit textures, but I don't see how?

In the 4bpp mode you have 2bpp of modulation plus an average of 2 bits of colour.
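A quick budget check of that claim, under the assumption (mine, not stated in the thread) that the two base images are stored at 1/4 resolution in each axis with roughly 16 bits per base sample:

```python
# Rough bit-budget check for the 4bpp mode, under the stated assumptions
# (two base images at 1/4 resolution in each axis, ~16 bits per base sample).
texels_per_block = 4 * 4          # each base sample covers a 4x4 area
bits_per_base    = 16             # one 16-bit colour per base image sample
base_images      = 2              # the A and B signals
modulation_bpp   = 2              # 2 bits of modulation per texel

colour_bpp = base_images * bits_per_base / texels_per_block   # = 2.0
total_bpp  = colour_bpp + modulation_bpp                      # = 4.0
print(colour_bpp, total_bpp)
```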
Squeak said:
Haar wavelets are mentioned, but they only seem to be used for splitting high frequency from low frequency, not for actual compression.

No, any mention of Haar wavelets should refer to someone else's compression scheme. I've only used linear wavelets, and only as a means of implementing a low-pass filter that's close to "the ideal filter" for linear reconstruction, but much cheaper.
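As an aside, one cheap linear low-pass of the kind being described is the tent kernel [1/4, 1/2, 1/4] applied before decimating by two; it pairs naturally with linear (bilinear) reconstruction. A toy 1D sketch, purely illustrative and not the exact filter used in PVR-TC:

```python
# Downsample a 1D signal by two using the linear "tent" kernel [1/4, 1/2, 1/4].
# Only meant to illustrate the kind of cheap low-pass being discussed.
def downsample(signal):
    n = len(signal)
    out = []
    for i in range(0, n, 2):
        left  = signal[max(i - 1, 0)]
        right = signal[min(i + 1, n - 1)]
        out.append(0.25 * left + 0.5 * signal[i] + 0.25 * right)
    return out

sig = [0, 0, 0, 100, 100, 100, 0, 0]
print(downsample(sig))   # [0.0, 25.0, 100.0, 25.0] -> smooth, half-size version
```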