Are there any indications of new forms of compression in Cell?

[maven] said:
I don't think bz2 is a particularly good choice (if you want to use the SPEs to decompress), as the BWT only becomes reasonably effective on large blocks of data.
If I remember my testing from back then correctly - and I really should reproduce it to be sure - the compression ratio stays a fair bit above zlib's even with blocks of only 200KB or so. It isn't quite as interesting anymore, but it would have another useful property if the code were properly reworked (which it would have to be for the PS3 anyway): random reads in 200K chunks. I'm not convinced that's so useful, though, since your first enemy on consoles is seek time anyway.

Uttar
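The block-size claim above is easy to poke at with Python's standard `zlib` and `bz2` modules. This is a rough sketch on synthetic text, not a benchmark of any actual SPE code; the word list and sizes are made up for illustration:

```python
import bz2
import random
import zlib

random.seed(42)
words = ["texture", "vertex", "shader", "cell", "spe", "memory", "stream"]
# Build a pseudo-text block of roughly the 200KB size mentioned above.
data = " ".join(random.choice(words) for _ in range(40000)).encode()

z = zlib.compress(data, 9)
b = bz2.compress(data, 9)   # bzip2's BWT works on blocks of 100KB * level

print(len(data), len(z), len(b))       # compare compressed sizes
assert zlib.decompress(z) == data      # both must round-trip losslessly
assert bz2.decompress(b) == data
```

On real game assets the ratio gap (and whether BWT's larger blocks pay off) would of course depend entirely on the data.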
 
ShootMyMonkey said:
Even if you store it on disc in a heavily compressed format, you will still have to decompress it (and if you like, re-encode it to S3TC), in order for it to be usable as a texture, and so in terms of how much memory is utilized, you've gained nothing.
That's not true at all.
So long as the decompression cost is small enough - whether due to abundance of processing power, temporal coherence, or some other factors - you stand to have large gains in utilized memory.

There are plenty of usable models out there with data streaming/decompression that don't require instant on-demand random access to work - heck, lots of them are in use in games today.
 
Shifty Geezer said:
There's no such thing as lossless compression at 100:1 in anything 'natural'
Nonsense. I've got a compressor that does that... it's in the cupboard next to the perpetual motion machine ;)
 
ShootMyMonkey said:
I think you're missing the point here. 100:1 image compression is already possible right now. 100:1 lossless compression is very possible. But only in very specific conditions. There are fundamentally only two ways 100:1 lossless compression will *ever* be possible -- 1 ) you have a recognizable pattern of some sort or size that repeats itself at least 100 times. Or 2 ) you have patterns that may not necessarily repeat, but can be exactly reproduced procedurally and therefore, you only need to store information about the procedure to use and the parameters.

In the *general* case, it will never be possible simply because information won't always be structured so conveniently. You can pretty much guarantee that it won't. It has nothing to do with how powerful a CPU is -- it's simply a matter of the entropy of the information stream. Now there may be a day when lossy compressors start using basis functions that are so disgustingly complex that we can edge out the quality for a given compression ratio compared to what was achievable in years past. But even then, you can't just say that 100:1 with little quality loss will ever really happen. Maybe on a video stream that shows a single picture without moving, but again, that's a unique case.
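The entropy argument above can be demonstrated in a few lines with Python's `zlib`: data with a pattern repeating far more than 100 times compresses past 100:1, while high-entropy data doesn't compress at all (the sizes here are arbitrary, chosen just for illustration):

```python
import os
import zlib

repetitive = b"A" * 1_000_000        # one pattern repeated ~a million times
random_data = os.urandom(1_000_000)  # near-maximal entropy

c_rep = zlib.compress(repetitive, 9)
c_rnd = zlib.compress(random_data, 9)

print(len(repetitive) / len(c_rep))   # far above 100:1
print(len(random_data) / len(c_rnd))  # ~1:1; overhead makes it slightly worse
```

No amount of CPU power changes the second result; that's the entropy limit, not an implementation limit.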


That still has nothing to do with it. It doesn't matter that CELL can compress MJPEG video streams very quickly because the GPU still has to be able to *decompress* it quickly if you want to use it as a texture. The point of S3TC is that it's natively supported in hardware by the GPU, so there's no extra cost associated with it. Even if you store it on disc in a heavily compressed format, you will still have to decompress it (and if you like, re-encode it to S3TC), in order for it to be usable as a texture, and so in terms of how much memory is utilized, you've gained nothing.

When GPUs decode JPEG or wavelet-compressed images for very little cost (whether that be in hardware or through an explosion in ability to process more shader ops quickly), then we'll talk. But until then, the CPU's ability to compress images quickly is not going to help anything as far as getting more stuff on the screen at once.
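The VRAM-footprint point is just arithmetic: S3TC/DXT1 stores each 4x4 texel block in 64 bits, which is what the GPU samples natively, whereas a JPEG on disc must still be expanded to one of those formats in VRAM. A sketch for a hypothetical 1024x1024 texture:

```python
# Hypothetical 1024x1024 texture, illustrating the in-VRAM footprint.
W = H = 1024

rgba_bytes = W * H * 4                 # uncompressed 8-bit RGBA
dxt1_bytes = (W // 4) * (H // 4) * 8   # DXT1: 64 bits per 4x4 block

print(rgba_bytes // 2**20, "MiB raw")
print(dxt1_bytes // 2**10, "KiB as DXT1")
print(rgba_bytes / dxt1_bytes)         # fixed 8:1 vs. RGBA8
```

However small the file on disc, the number that matters for "more textures on screen" is the DXT1 (or raw) figure, since that's what actually sits in VRAM.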


I'm sincerely thankful for your explanations - they taught me a lot - and I accept that lossless (without losses) compression at such ratios is practically impossible today, or gains little. Still, I continue to believe (it's more a question of faith in the developers) in two facts/pieces of information that perhaps hint the SPEs could achieve extremely high rates of compression:

- In Barry Minor's TRE demo, if I'm not mistaken, two Cells at 2.4GHz carried out practically all the graphical processing with no GPU rasterizing, and that after only a few months of work on an architecture practically fresh out of the oven. Imagine what could be achieved in the future.

- The FlexIO bandwidth between Cell and the GPU is quite large (35GB/sec in total; the 20GB/sec from Cell to the GPU is faster than the RSX-to-Cell direction), perhaps large enough that a good part of the GPU's texture-compression work could be offloaded. If I'm not mistaken, there is information that Cell could efficiently help with rasterization and decompression. And on the decompression side, does the RSX have an equivalent (something "SPE-like" in this GPU?) for the other side of the texture work?
 
That's not true at all.
So long as the decompression cost is small enough - whether due to abundance of processing power, temporal coherence, or some other factors - you stand to have large gains in utilized memory.
Well, I should have been more specific in that I was referring to textures being stored in VRAM and how much VRAM is utilized (as the original question seemed to be about using more and/or larger textures). I did also mention the possibility of shader processing power being abundant enough that the decompression cost in shader space is insignificant, but for the most part, that isn't the case. Data in main memory is a different matter.

- In Barry Minor's TRE demo, if I'm not mistaken, two Cells at 2.4GHz carried out practically all the graphical processing with no GPU rasterizing, and that after only a few months of work on an architecture practically fresh out of the oven. Imagine what could be achieved in the future.
Well, yes, if you're doing software rendering, I can certainly see the value. The main idea is to run the rendering a few frames ahead (keeping those frames compressed so they fit in memory), so that ups and downs in rendering performance get hidden: you're buffering up a bunch of frames which will be displayed a little later.
 
Simon F said:
Nonsense. I've got a compressor that does that... it's in the cupboard next to the perpetual motion machine ;)
And here I thought I was the only one with 100:1 lossless compression and a perpetual motion machine figured out properly, it's a nasty shock to find that Simon has them both as well.
 
andypski said:
And here I thought I was the only one with 100:1 lossless compression and a perpetual motion machine figured out properly, it's a nasty shock to find that Simon has them both as well.

I'll trade you those two for my cold-fusion reactor... I've been getting bored with it lately...
 
Shifty Geezer said:
Large variety of data means there's no scope to compress it (unless someone finds a mathematical formula for describing any image as a seed to a fractal process...)
Not really. The whole entropy shebang dictates that, if such a solution is found at all, the parameters to that fractal process will also have a minimum size that cannot be violated.
!eVo!-X Ant UK said:
I'm not sure, but if Cell can do graphically what's been shown, I think it would be stupid for Cell not to have compression.
Cell is composed of eight more-or-less general purpose programmable processing cores. That means yes, it can be used to implement lots of different compression algorithms. This is what we mellow people call "software".

Just like your ordinary PC processor can be used to compress or decompress JPEG images, S3TC textures, Vorbis audio, MPEG4 videos, a zipped OO spreadsheet and whatever data encodings else that actually exist, Cell can be used for the same purposes.

That something is possible in software, however, does not imply that it's infinitely fast, nor that it's an inherent function of the device.
S3TC compression somewhere in the texture sampling parts of your regular graphics chip OTOH is an inherent function of the device. Transistors have been dedicated specifically for that task.

See the difference?
 