ATI Develops HyperMemory Technology to Reduce PC Costs

It depends how you measure entropy :p You can't compress G-Random data, but the catch lies in the simple measurements of entropy generally taught in CS: to do better you have to go through lots of levels of checking for different correlations, which is, after all, computationally expensive. A possible attack on a lot of open-source key generators would be that they force a certain level of calculated entropy to be reached, but of course by doing that they are reducing the key space.
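To make the "simple entropy measurement" point concrete, here is a minimal sketch (the function name is mine) of the usual first-order estimate: it only looks at single-byte frequencies, so it can report maximum entropy for data that is trivially predictable once correlations between bytes are checked.

```python
import math
from collections import Counter

def first_order_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte, using single-byte frequencies only."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A stream that just cycles through every byte value scores the maximum
# 8 bits/byte, even though it is perfectly predictable once you look at
# byte-to-byte correlations -- exactly what the cheap measurement skips.
print(first_order_entropy(bytes(range(256)) * 64))  # 8.0
```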

Since you can't actually do your delimiting for free without carrying out-of-band data somewhere, what use is this?
Now, if you're working on an existing platform which requires this out-of-band data anyway, then it's useful. Take TCP: since it already carries the payload size, you don't really need to re-encode the size of the data again, do you? So you get free delimiting. Having a fixed block size is also reasonable; look at compression methods already in use in graphics systems today, such as DXTC, which are all broken down into fixed block sizes.
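As a rough illustration of the free-delimiting idea (the framing and names here are mine, not any particular protocol's): if the transport already has to say how many payload bytes follow, that same field delimits the compressed block, so the payload never needs to carry its own length or a terminator.

```python
import io
import struct
import zlib

def send_block(stream, payload: bytes) -> None:
    # The 4-byte length prefix stands in for the out-of-band size the
    # platform already requires; it doubles as the delimiter for free.
    compressed = zlib.compress(payload)
    stream.write(struct.pack("!I", len(compressed)))
    stream.write(compressed)

def recv_block(stream) -> bytes:
    (length,) = struct.unpack("!I", stream.read(4))
    return zlib.decompress(stream.read(length))

# Round trip over an in-memory stream standing in for a socket.
buf = io.BytesIO()
send_block(buf, b"payload that never has to encode its own size")
buf.seek(0)
assert recv_block(buf) == b"payload that never has to encode its own size"
```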
 
Fixed block size is pretty much necessary if you want to do compression on time-critical data streams. You may want variable block size when you don't care so much about how long it takes to compress/decompress, such as with zipped files.
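One concrete reason fixed blocks suit time-critical data: with DXT1, for example, every 4x4 texel tile compresses to exactly 8 bytes, so the hardware can jump straight to the block containing any texel with plain arithmetic instead of a variable-length parse. A small sketch (the function name is mine; the 4x4/8-byte figures are DXT1's):

```python
BLOCK_TEXELS = 4      # DXT1 works on 4x4 texel tiles...
BYTES_PER_BLOCK = 8   # ...and each tile compresses to a fixed 8 bytes

def block_offset(x: int, y: int, width: int) -> int:
    """Byte offset of the compressed block containing texel (x, y)."""
    blocks_per_row = width // BLOCK_TEXELS
    block_index = (y // BLOCK_TEXELS) * blocks_per_row + (x // BLOCK_TEXELS)
    return block_index * BYTES_PER_BLOCK

print(block_offset(17, 9, 256))  # block column 4, row 2 -> index 132 -> byte offset 1056
```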
 
Chalnoth said:
Actually, it is possible to compress random data, to some extent. That is, if you divide the completely random data into discrete chunks, and are using one of a number of possible compression algorithms, there's always a nonzero probability that the chunk of completely random data that you're currently looking at will fit into one of your compression algorithms. Since those that don't fit won't be compressed, you'll get an overall win with any reasonably long string of random data.

Of course, the amount of compression that you're going to get will be small. The best use I could possibly think of for compressing random data would be to check how good a random number generator is...

No, it's not.

You can compress anything you want down to a program that generates the information, but it will only work for the one dataset.

If you designed a program specifically to compress random data you probably wouldn't gain enough to add a header to the file.

You can figure it out mathematically: just count the data streams in existence considered to be random sequences. Given a perfect compressor, you will at least have to hold enough information to store that number. I suppose it depends what percentage of data streams you would consider to be random out of the set of all possible ones. Going with something like half, I suppose in theory something could be written, but the number's probably more like 99%.
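The counting argument can be made concrete: a lossless compressor has to map every input to a distinct output, and there simply aren't enough shorter strings to go around. A rough sketch of the resulting bound (the function name is mine):

```python
def max_fraction_compressible(n_bits: int, saved_bits: int) -> float:
    """Pigeonhole bound: at most this fraction of n-bit strings can be
    mapped losslessly to outputs at least `saved_bits` shorter."""
    # Count every binary string of length <= n_bits - saved_bits.
    shorter_outputs = 2 ** (n_bits - saved_bits + 1) - 1
    return min(1.0, shorter_outputs / 2 ** n_bits)

print(max_fraction_compressible(64, 2))   # at most ~50% of 64-bit strings can shed 2 bits
print(max_fraction_compressible(64, 8))   # under 1% can shed a whole byte
```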
 
Well, I think the bandwidth savings would be minimal (in fact, virtualized video memory could potentially increase bus bandwidth demands). The benefit is latency: if the entire texture doesn't need to be transferred across the bus before the card can start reading, it may increase performance by reducing latency (rough numbers sketched below).

But I still don't think virtualized video memory is a good enough substitute for on-board RAM.
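A rough back-of-envelope for the latency point above (the numbers are illustrative assumptions, not measurements of HyperMemory): pulling a whole texture across the bus before first use costs far more up-front time than paging in only the pages actually referenced, even though total bytes moved can end up higher with paging.

```python
# Illustrative assumptions: a 4 MB texture, a bus sustaining ~4 GB/s,
# and a frame that happens to touch only 256 KB of that texture.
BUS_BYTES_PER_SEC = 4e9
TEXTURE_BYTES = 4 * 1024 * 1024
TOUCHED_BYTES = 256 * 1024

full_upload_ms = TEXTURE_BYTES / BUS_BYTES_PER_SEC * 1e3
paged_in_ms = TOUCHED_BYTES / BUS_BYTES_PER_SEC * 1e3

print(f"full upload before first read: ~{full_upload_ms:.2f} ms")  # ~1.05 ms
print(f"page in only what is touched:  ~{paged_in_ms:.2f} ms")      # ~0.07 ms
# Re-fetches and page granularity can push total traffic higher, which is
# the bandwidth caveat mentioned above.
```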
 