Indeed, one might be so bold as to say that no algorithm can consistently compress random data in a lossless manner (see Shannon's work in information theory). All compression exploits patterns in the data, be it textures, video, code, or text, which is why there are specialized compression algorithms for each. For all intents and purposes, you can assume that random data is incompressible.
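You can see this for yourself with a quick toy check (Python's zlib standing in here as one concrete codec; this is an illustration, not a proof):

    import os, zlib

    random_block = os.urandom(1 << 20)        # 1 MiB of random bytes
    deflated = zlib.compress(random_block)
    # On random input DEFLATE falls back to stored blocks, so the
    # "compressed" output ends up slightly larger than the input.
    print(len(random_block), "->", len(deflated))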
What I was really referring to in my original statement was random access to the data.
Most data structures have enough repetition to support 2:1 compression, but if I'm reading 1 byte out of a compressed 1MB block, the cost is enormous compared to reading it out of an uncompressed block.
If I need most of that 1MB anyway, the cost is about the same as copying it somewhere before using it, which is still not cheap.
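To put a rough shape on that, here's a toy sketch of the random-access cost, again with Python's zlib standing in for whatever codec you'd actually use; the exact timings are illustrative only:

    import os, time, zlib

    # Toy 1 MiB block with enough repetition to compress well.
    block = os.urandom(4096) * 256
    packed = zlib.compress(block)
    offset = 123_456

    # Uncompressed: one byte is a single index into memory.
    t0 = time.perf_counter()
    b_plain = block[offset]
    plain_us = (time.perf_counter() - t0) * 1e6

    # Compressed: you have to inflate the block (or at least everything
    # up to the offset) before you can touch that one byte.
    t0 = time.perf_counter()
    b_packed = zlib.decompress(packed)[offset]
    packed_us = (time.perf_counter() - t0) * 1e6

    assert b_plain == b_packed
    print(f"plain read:  {plain_us:10.2f} us")
    print(f"packed read: {packed_us:10.2f} us")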