ATI Develops HyperMemory Technology to Reduce PC Costs

Chalnoth said:
Well, sure you can do compression of random data. That is, if you know the random number generator used to create it and the seed....
The seed is a lump of C14 and the generator is a clock and Geiger counter.

I think we're eliminating the "pseudo" prefix.
 
MfA said:
If anyone says he knows how to do compression of random data just step away very gently ... nothing good has ever come from talking to them, god knows I've done it enough times to be sure about that.

Yeah good point. Mind you, you might be able to make some money by selling them a "halting test" program :devilish:
 
Simon F said:
Chalnoth said:
Well, sure you can do compression of random data. That is, if you know the random number generator used to create it and the seed....
The seed is a lump of C14 and the generator is a clock and Geiger counter.

I think we're eliminating the "pseudo" prefix.
Awww....it's no fun that way!
 
Actually I've wondered whether it would be possible to compress random data by "somehow" converting the data into waveforms (or something) and then calculating the formulae for those waveforms... then adding some correction bits here and there to fudge the values into correct positions.

Then I realized that the formulae themselves would probably take more space than the original data :) (this of course assumes near-infinite computing power as well).

Entropy is a bitch.
 
Daliden, no need to do something as complicated as that, the decimals of pi contain all possible bit sequences, so you can just supply an index into that. Of course, you need a pretty large index... :D
 
Okay Simon, let's do a 3-bit example.

Z = start delimiter
X = end delimiter
Block size = 3 bits

000 ZX
001 Z0X
010 Z1X
011 Z00X
100 Z01X
101 Z10X
110 Z11X
111 Z000X

(0+1+1+2+2+2+2+3)/8 = 13/8 = 1.625 bits on average.

Like I said, the delimiter has to be free and the block size has to be free (i.e., fixed).

As soon as you change the block size you can't do it, so you can't compress already-compressed data.

Okay, now this isn't optimal coding either, because random data is generally considered to be data with high entropy, so 000 should take either 2 or 3 bits instead of 0 bits.

Now there are a fair few cases where delimiters can be free, such as when you're compressing single files: you can use the file system to store the file size, which delimits the file, etc. Imgtec wanna hire the inventor of a random data compressor? lol :p

Here we go, here is one possible set of optimal codings (fixed); there's a quick sketch of the numbers right after this table:
010 ZX
101 Z0X
001 Z1X
100 Z00X
011 Z01X
110 Z10X
111 Z11X
000 Z000X
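
For what it's worth, here's that quick sketch: a few lines of C that redo the arithmetic above and also compute the Kraft sum of the code lengths (0, 1, 1, 2, 2, 2, 2, 3). Kraft's inequality says a uniquely decodable code needs that sum to be at most 1, so the 3.125 you get here is exactly why the Z/X delimiters can't actually be free. (Illustrative sketch only, not anyone's actual tool.)

/* Average length and Kraft sum for the toy 3-bit coding above. */
#include <stdio.h>

int main(void)
{
    /* payload bits of ZX, Z0X, Z1X, Z00X, Z01X, Z10X, Z11X, Z000X */
    int len[8] = { 0, 1, 1, 2, 2, 2, 2, 3 };
    int total = 0;
    double kraft = 0.0;

    for (int i = 0; i < 8; i++) {
        total += len[i];
        kraft += 1.0 / (1 << len[i]);       /* 2^-len */
    }

    printf("average length = %.3f bits\n", total / 8.0);  /* 1.625 */
    printf("Kraft sum      = %.3f\n", kraft);             /* 3.125 > 1 */
    /* A uniquely decodable (self-delimiting) code requires the Kraft sum
     * to be <= 1, so these lengths only work if the delimiters are
     * stored out of band -- they are not free. */
    return 0;
}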
 
I'm very confused.

Since you can't actually do your delimiting for free without carrying out-of-band data somewhere, what use is this?

Also, why do you say that encoding is more efficient than any other order of encoding because the entropy is high? High entropy implies that the current bit does not affect the probability of the next bit, not that the current bit is more likely to be different than it is to be the same.
 
You will never be able to "compress" it and reduce the number of bits on average by more than the number of bits used to store the length. I.e., it's useless. And since it's just a matter of storing the information in a different place, I wouldn't even call it compression.

Btw, the "fixed" optimal coding isn't better than the first coding.

But I understand that you're not serious.

On that note, I know a way to compress any random data to arbitrarily small size. It's an iterative compression, and you keep compressing it until it's small enough. The only side information you need is how many times you've compressed it.

Just add a 1 in front of all the other data, and see it as one huge integer. The compression just subtracts 1 from the integer. :D
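
Just to make the punchline concrete, here is a toy run of that scheme in C (a sketch on an 8-bit example so it finishes instantly, not meant as anyone's real code): the iteration count you'd have to remember is exactly the original integer, so the "side information" is as big as the data you started with.

/* The "subtract 1 until it's gone" compressor, on one byte. */
#include <stdio.h>

int main(void)
{
    unsigned char data = 0xB6;            /* 10110110: the "file"          */
    unsigned long n = 0x100UL | data;     /* prepend a 1, read as integer  */
    unsigned long iterations = 0;

    while (n > 0) {                       /* "compress" down to nothing    */
        n -= 1;
        iterations++;
    }

    printf("compressed size : 0 bits\n");
    printf("iterations kept : %lu (== the original integer 0x%lX)\n",
           iterations, 0x100UL | data);
    return 0;
}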
 
Dio said:
I'm very confused.

Since you can't actually do your delimiting for free without carrying out-of-band data somewhere, what use is this?
I see it as a bit like someone offering a scheme where you invest 1000 pounds/dollars/whatever for 1 week and they guarantee to give you a return of 10%... however, they don't mention the £/$110 administration charge. :devilish:
 
Actually, it is possible to compress random data, to some extent. That is, if you divide the completely random data into discrete chunks, and are using one of a number of possible compression algorithms, there's always a nonzero probability that the chunk of completely random data that you're currently looking at will fit into one of your compression algorithms. Since those that don't fit won't be compressed, you'll get an overall win with any reasonably long string of random data.

Of course, the amount of compression that you're going to get will be small. The best use I could possibly think of for compressing random data would be to check how good a random number generator is...
 
Umm, no. Even if you manage to compress 1 block out of a long stream of blocks of random data, you still need to point out the location of the block you compressed or store a bit per block indicating whether or not you failed to compress it. This book-keeping will add to the size of your compressed data and eat up any benefit you may be getting from compressing that one block. Besides, the probability that you might be able to compress a block of random data by N bits is no greater than 1 in 2^N regardless of your compression method, so if you keep searching for a block that you can compress by as little as 32 bits, you will on average need to examine about 4 billion blocks, requiring a ~32-bit pointer to the compressed block, and you achieve a compression ratio of no better than 0%.
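
A back-of-the-envelope counting sketch of that bound (illustrative C, not anyone's actual code): a lossless compressor has to be injective, and there are fewer than 2^(n-N+1) bit strings at least N bits shorter than an n-bit block, so the fraction of blocks compressible by N bits is below 2^(1-N), the same order as the "1 in 2^N" figure above.

/* Pigeonhole bound on how many n-bit blocks can shrink by >= N bits. */
#include <stdio.h>

int main(void)
{
    int n = 32;                                   /* block size in bits */
    for (int N = 8; N <= 32; N += 8) {
        double shorter = (double)((1ULL << (n - N + 1)) - 1);  /* outputs */
        double blocks  = (double)(1ULL << n);                  /* inputs  */
        printf("shrink a %d-bit block by %2d bits: at most %.2e of blocks "
               "(about 1 in 2^%d)\n", n, N, shorter / blocks, N - 1);
    }
    return 0;
}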
 
Oh, yeah, I didn't think about that part of it. Makes sense given the definition of entropy when considering data.
 
GameCat said:
Daliden, no need to do something as complicated as that, the decimals of pi contain all possible bit sequences, so you can just supply an index into that. Of course, you need a pretty large index... :D

Hmmm, except that hasn't been proven as far as I know. They're pretty sure the digits don't follow some pattern, as far as we can tell by brute computation and pattern testing, but that's of course not the same as a true proof that it will hit all combinations :p
 
Actually, it is fairly simple to compress semi-random data, when you remove the "encode" step from the equation.

While, theoretically, there is a mathematical representation to describe just about anything, figuring out the correct math is the really hard part. If you handle creating the artwork by just recording the steps needed to produce it, you can recreate it whenever you need it.

As you need random access as well, it is a bit more complex than that, as you have to come up with a way to generate the value at the specified index, not the whole artwork. So the macro recorder approach falls short. But that's where the math comes in: when you have a pure fractal representation, that's easy to do.
 
DiGuru said:
Actually, it is fairly simple to compress semi-random data, when you remove the "encode" step from the equation.
But you'd have to build the compressor after the random sequence, then. As a quick example, I just ran bzip2 on a 1MB random file created with the standard rand() C library function. It didn't compress at all.
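
Roughly how such a test file might be generated (a sketch under the same assumptions, not necessarily the exact code used): write 1 MB of bytes from the C library's rand() to a file such as "random.bin", then run something like bzip2 -k random.bin and compare sizes; the output should be no smaller than the input.

/* Dump 1 MB of rand() bytes to a file for a compression test. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    FILE *f = fopen("random.bin", "wb");
    if (!f)
        return 1;

    srand((unsigned)time(NULL));
    for (long i = 0; i < 1024L * 1024L; i++)
        fputc(rand() & 0xFF, f);          /* one pseudo-random byte */

    fclose(f);
    return 0;
}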
 
Chalnoth said:
DiGuru said:
Actually, it is fairly simple to compress semi-random data, when you remove the "encode" step from the equation.
But you'd have to build the compressor after the random sequence, then. As a quick example, I just ran bzip2 on a 1MB random file created with the standard rand() C library function. It didn't compress at all.

Yes. Encode == compress. To generate those values, store the seed and the rand() function. In other words: only store an uncompress/decode function that generates the values from an empty set. A (shader) program.

The trick would be to come up with a clever way to generate that program while you are producing the artwork.
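
One possible sketch of that idea in C, with the random access mentioned above: a counter-based generator lets you compute the i-th value directly from (seed, index) instead of replaying the whole sequence, so the only thing you store is the seed plus the generator itself. (value_at and mix64 are made-up names for illustration; the mixing function is the well-known splitmix64 finalizer.)

#include <stdint.h>
#include <stdio.h>

static uint64_t mix64(uint64_t x)              /* splitmix64 finalizer */
{
    x = (x ^ (x >> 30)) * 0xBF58476D1CE4E5B9ULL;
    x = (x ^ (x >> 27)) * 0x94D049BB133111EBULL;
    return x ^ (x >> 31);
}

static uint64_t value_at(uint64_t seed, uint64_t index)
{
    /* golden-ratio increment keeps successive indices decorrelated */
    return mix64(seed + (index + 1) * 0x9E3779B97F4A7C15ULL);
}

int main(void)
{
    uint64_t seed = 42;                        /* the only thing stored */

    /* random access: element 1000000 without generating 0..999999 first */
    printf("value[1000000] = %016llx\n",
           (unsigned long long)value_at(seed, 1000000));
    printf("value[3]       = %016llx\n",
           (unsigned long long)value_at(seed, 3));
    return 0;
}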
 