Are there any indications of new forms of compression in Cell?

I was wondering about this and I wasn't able to find anything. If so, how much more texture data could we fit into the PS3's RAM?
Thanks a lot!
 
Cell will likely have little to do directly with textures*, and I strongly doubt it contains any sort of specialized logic for texture decompression. RSX (the graphics ASIC), on the other hand, will certainly support all of the S3TC compression formats natively, and maybe even ATI2N, the new format with two 8-bit components that's designed for normal, height, and depth textures.

*EDIT: One exception to this would be run-time generation of procedural textures, but this is somewhat orthogonal to your question anyway.
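For reference, here is the back-of-the-envelope footprint math for the fixed-rate formats mentioned above; the numbers are generic S3TC/3Dc block sizes, not anything RSX-specific that I can confirm:

```python
# Back-of-the-envelope footprint math for fixed-rate GPU texture formats.
# Block-compressed formats store each 4x4 texel block in a fixed number of
# bytes, which is what lets the GPU sample them without a decompression pass.

BYTES_PER_BLOCK = {
    "DXT1 (opaque / 1-bit alpha)": 8,   # two RGB565 endpoints + 16 x 2-bit indices
    "DXT5 (interpolated alpha)":   16,  # DXT1-style colour block + 8-byte alpha block
    "ATI2N / 3Dc (two channels)":  16,  # two independent single-channel blocks
}

def texture_size(width, height, bytes_per_block):
    """Size in bytes of a block-compressed texture (no mipmaps)."""
    blocks_x = (width + 3) // 4
    blocks_y = (height + 3) // 4
    return blocks_x * blocks_y * bytes_per_block

w, h = 1024, 1024
raw_rgba = w * h * 4  # 32-bit uncompressed, for comparison
for name, bpb in BYTES_PER_BLOCK.items():
    size = texture_size(w, h, bpb)
    print(f"{name:30s} {size // 1024:5d} KB  ({raw_rgba / size:.0f}:1 vs raw RGBA8)")
```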
 
I wonder: what about overall compression of the data on the Blu-ray disc?

That could speed up loading times and help us get more textures into streaming games such as a next-gen GTA. Can Cell help here?

If Cell could help significantly there (not sure), and if it were combined with a fast Blu-ray drive (say 4x, hopefully), it would be almost like having extra RAM for textures (thanks to streaming). Dunno if that would still matter if they included an HDD. Is it better to simply leave it all uncompressed on an HDD, or can compression (and any possible Cell boost with it) increase the benefits further?
 
CELL would certainly lend itself to procedural textures, but procedural textures are largely limited to natural-looking surfaces like wood and rock.

There may be a way to compress S3TC textures even further and have CELL expand them back into S3TC textures before they are sent to the GPU, but that seems rather unlikely; developers may not feel the need to go that extra step, as the gain might not be worth the processing overhead.

The fact that some demo teams in Europe have created amazing 3D demos (on the PC, that is) in only 64 KB, using massive amounts of compression and procedural generation, shows how far those techniques can be pushed. But it all takes a lot of work.

Compressing the data on the disc may also be useful if it allows faster load times, as long as the decompression into memory does not add extra wait time before you can begin playing. Compressing things just to fit more onto a Blu-ray disc does not make much sense, though, when developers have at the very least 25 GB of space to work with.
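That "compress S3TC even further" idea is less exotic than it sounds if you only aim for a general-purpose entropy coder layered on top of the already-block-compressed data. A minimal sketch in Python (purely illustrative; the function names and the stand-in data are my own, not anything from a real PS3 toolchain):

```python
import zlib

# Sketch: the texture is DXT-encoded offline, then the DXT blocks themselves
# are deflate-compressed for storage on disc. At load time the CPU (an SPE
# job on PS3, conceptually) only has to inflate the stream -- the output is
# already in the fixed-rate format the GPU samples natively, so no re-encode
# step is needed.

def pack_for_disc(dxt_blocks: bytes) -> bytes:
    """Offline step: entropy-code the DXT block data for storage."""
    return zlib.compress(dxt_blocks, level=9)

def load_texture(disc_payload: bytes) -> bytes:
    """Runtime step: inflate straight back to GPU-ready DXT data."""
    return zlib.decompress(disc_payload)

# Toy round trip with fake block data (real DXT blocks would come from an
# offline encoder, and would compress far less well than this).
fake_dxt = bytes(range(8)) * 4096            # 32 KB of stand-in "DXT blocks"
on_disc = pack_for_disc(fake_dxt)
assert load_texture(on_disc) == fake_dxt
print(f"{len(fake_dxt)} B of DXT data -> {len(on_disc)} B on disc")
```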
 
Well, CELL always seemed well suited to bzip2 compression to me, if the code were optimized for it; in my experience bzip2 compresses 25-30% better than zlib. So if devs used it and dedicated a fair bit of processing power to it, that would improve loading times from disc further, but I doubt that's what you were talking about, since this is unrelated to the loaded textures themselves.
And I'm sure a lot of devs won't bother and will keep using zlib - or will find out that it's better anyway, for other reasons.

Uttar
 
Uttar said:
Well, CELL always seemed well suited to bzip2 compression to me, if the code were optimized for it; in my experience bzip2 compresses 25-30% better than zlib. So if devs used it and dedicated a fair bit of processing power to it, that would improve loading times from disc further, but I doubt that's what you were talking about, since this is unrelated to the loaded textures themselves.
And I'm sure a lot of devs won't bother and will keep using zlib - or will find out that it's better anyway, for other reasons.

Uttar

In a way, anything you can do that allows you to load faster will allow you to move more data into RAM through streaming. Even if it's not for textures directly, if you can dedicate less RAM to some other data thanks to streaming, then that space is free for more textures. If texture data can be compressed further on disc and that boosts loading speed, then it will allow for more streaming of textures; aka, more textures.
 
Uttar said:
Well, CELL always seemed well suited to bzip2 compression to me, if the code were optimized for it; in my experience bzip2 compresses 25-30% better than zlib. So if devs used it and dedicated a fair bit of processing power to it, that would improve loading times from disc further, but I doubt that's what you were talking about, since this is unrelated to the loaded textures themselves.
I don't think bz2 is a particularly good choice (if you want to use the SPEs to decompress), as the BWT only becomes reasonably effective on large blocks of data.
bzip.org homepage said:
bzip2 usually allocates several megabytes of memory to operate in, and then charges all over it in a fairly random fashion. This means that performance, both for compressing and decompressing, is largely determined by the speed at which your machine can service cache misses. Because of this, small changes to the code to reduce the miss rate have been observed to give disproportionately large performance improvements. I imagine bzip2 will perform best on machines with very large caches
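Both claims (bzip2's ratio advantage and its appetite for large blocks) are easy to poke at with nothing but Python's standard library; the synthetic data below is only illustrative, so the exact numbers don't mean much:

```python
import bz2, random, zlib

# bzip2's BWT works on blocks of up to 900 KB, so it tends to pull ahead of
# zlib/deflate only when fed large, redundant buffers; on small independent
# chunks (roughly SPE-local-store-sized) the advantage shrinks. The "data"
# here is just word soup -- real game assets will behave differently.

random.seed(1)
words = [bytes(random.choices(range(97, 123), k=random.randint(3, 8)))
         for _ in range(500)]
data = b" ".join(random.choices(words, k=400_000))   # a few MB of text-like data

print(f"zlib  (one stream):   {len(data) / len(zlib.compress(data, 9)):5.2f}:1")
print(f"bzip2 (one stream):   {len(data) / len(bz2.compress(data, 9)):5.2f}:1")

# Same data, compressed in independent 64 KB chunks.
chunk = 64 * 1024
zs = sum(len(zlib.compress(data[i:i + chunk], 9)) for i in range(0, len(data), chunk))
bs = sum(len(bz2.compress(data[i:i + chunk], 9)) for i in range(0, len(data), chunk))
print(f"zlib  (64 KB chunks): {len(data) / zs:5.2f}:1")
print(f"bzip2 (64 KB chunks): {len(data) / bs:5.2f}:1")
```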
 
!eVo!-X Ant UK said:
I'm not sure, but if Cell can do graphically what's been shown, I think it would be stupid for Cell not to have compression.

Of course it "has compression", whatever that means to you; I'm guessing it means it can decompress data on the fly.

The question here is about "new" compression methods that could be more efficient than older ones, and which Cell might be able to handle better than other chips.
 
Heinrich4 said:
(JPEG2000 gives lossless compression of almost 100:1)
You're a funny one.

But back to the original topic: I expect next-gen games to eventually use art assets compressed with more sophisticated algorithms than what's in use today (which is essentially DXTC).
I also think this will be constrained to better entropy coding of precompressed formats, since for bandwidth reasons all these assets should remain compressed in memory, but in a form directly usable by the GPU / audio processor.

I.e. someone should start thinking about efficiently encoding DXTC (or whatever other texture formats are directly supported) both losslessly and lossily (maybe something that compresses the two palette colour components in a more traditional form and the index bits as side-band information), so that it can be decoded directly to a compressed (fixed-rate) texture.

I may actually have a go at this...
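For what it's worth, here is a very rough sketch of that kind of split (my own toy construction, not [maven]'s actual scheme): separate a DXT1 stream into its endpoint-colour bytes and its 2-bit selector bytes and entropy-code them independently, so the decoder still emits GPU-ready fixed-rate blocks:

```python
import zlib

# Each DXT1 block is 8 bytes: bytes 0-3 are the two RGB565 endpoint colours,
# bytes 4-7 are the 2-bit selector indices. Endpoint colours of neighbouring
# blocks tend to be correlated, while the index bits look much more like
# noise, so coding the two streams separately can help -- and the decoder's
# output is byte-identical, fixed-rate DXT1 data.

def split_dxt1(blocks):
    assert len(blocks) % 8 == 0
    endpoints, indices = bytearray(), bytearray()
    for i in range(0, len(blocks), 8):
        endpoints += blocks[i:i + 4]
        indices += blocks[i + 4:i + 8]
    return bytes(endpoints), bytes(indices)

def encode(blocks):
    ep, idx = split_dxt1(blocks)
    return zlib.compress(ep, 9), zlib.compress(idx, 9)

def decode(ep_c, idx_c):
    ep, idx = zlib.decompress(ep_c), zlib.decompress(idx_c)
    out = bytearray()
    for i in range(0, len(ep), 4):
        out += ep[i:i + 4] + idx[i:i + 4]
    return bytes(out)       # GPU-ready DXT1 blocks again

# Round-trip sanity check on stand-in data (not real DXT1 output).
fake = bytes((i * 7) & 0xFF for i in range(8 * 1024))
assert decode(*encode(fake)) == fake
```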
 
Heinrich4 said:
But what compression ratio is possible with this? 10:1? 20:1?

(JPEG2000 gives lossless compression of almost 100:1)
You need to read up on the subject. Take a gander at this, for one...
http://kt.ijs.si/aleks/jpeg/artifacts.htm

There's no such thing as lossless compression at 100:1 for anything 'natural' without large areas of flat colour. It's scientifically impossible. Data has an absolute size, and the only way to compress it is to express it in different ways, taking advantage of patterns and so on. A large variety in the data means there's no scope to compress it (unless someone finds a mathematical formula for describing any image as a seed to a fractal process...).
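A crude way to see the "data has an absolute size" point is a zero-order Shannon entropy estimate, which bounds what any byte-by-byte code can achieve; the buffers below are synthetic stand-ins, not real images:

```python
import math, random, zlib
from collections import Counter

# Zero-order entropy gives the average number of bits any symbol-by-symbol
# code needs per byte. Real compressors also exploit longer-range structure,
# but the contrast between "noisy" and "flat" data still shows why ~100:1
# lossless compression of natural images isn't on the table.

def entropy_bits_per_byte(data):
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random.seed(0)
noisy = bytes(min(255, max(0, 128 + int(random.gauss(0, 40)))) for _ in range(1 << 16))
flat = bytes(64 for _ in range(1 << 16))          # large area of one colour

for name, data in (("noise-like", noisy), ("flat colour", flat)):
    h = entropy_bits_per_byte(data)
    z = len(zlib.compress(data, 9))
    print(f"{name:12s} entropy ~{h:4.1f} bits/byte, zlib gets {len(data) / z:6.1f}:1")
```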
 
Shifty Geezer said:
You need to read up on the subject. Take a gander at this, for one...
http://kt.ijs.si/aleks/jpeg/artifacts.htm

There's no such thing as lossless compression at 100:1 for anything 'natural' without large areas of flat colour. It's scientifically impossible. Data has an absolute size, and the only way to compress it is to express it in different ways, taking advantage of patterns and so on. A large variety in the data means there's no scope to compress it (unless someone finds a mathematical formula for describing any image as a seed to a fractal process...).

Very interesting information at that link.

I already read that, and I've seen dozens of topics argued on the B3D forum about this possibility (I've been reading the B3D forum for many years without registering); one search is enough to find topics about JPEG2000.

As for whether 100:1 is possible or not, I was going by information from forums such as www.igda.org and Gamasutra and others (it's only hypothetical, OK?).
 
[maven] said:
You're a funny one.

But back to the original topic: I expect next-gen games to eventually use art assets compressed with more sophisticated algorithms than what's in use today (which is essentially DXTC).
I also think this will be constrained to better entropy coding of precompressed formats, since for bandwidth reasons all these assets should remain compressed in memory, but in a form directly usable by the GPU / audio processor.

I.e. someone should start thinking about efficiently encoding DXTC (or whatever other texture formats are directly supported) both losslessly and lossily (maybe something that compresses the two palette colour components in a more traditional form and the index bits as side-band information), so that it can be decoded directly to a compressed (fixed-rate) texture.

I may actually have a go at this...

Thanks for the explanations, but 100:1, as I said, may be an extreme possibility for the future. Who can say for sure whether or not it will be possible to reach this level of compression with a tool optimized for the SPEs one day? There are many threads around the world talking about JPEG2000.

And do you believe that, with a 3.2 GHz SPE reaching 50:1 (50x more than a PowerPC at the same clock) on MJPEG in TRE, it wouldn't be possible to obtain bigger ratios than the 4:1 or 8:1 of S3TC (via streaming or something else to get more compression)? I think we should give more credit to what future tools will be able to offer.
 
Heinrich4 said:
As for whether 100:1 is possible or not, I was going by information from forums such as www.igda.org and Gamasutra and others (it's only hypothetical, OK?).
Before I believe 100:1 compression of images will be possible without severe losses (let alone losslessly), I'd need to see some real evidence of it. The only things I know of that get high lossless compression are things with LOTS of redundant data; e.g. a 256x256 white-to-black gradient image is 192 KB of raw data, which compresses to 693 bytes as a .png. Your average photo will compress to about half size, maybe even 4:1, as a lossless PNG, and PNG is pretty good (though it can be compressed a little further). Methods like DXTC are lossy, which is how they manage better compression.

There was a discussion of compression elsewhere (the 'is DVD9 big enough for next-gen' type thread) a while back, and I found a website comparing compression algorithms. The difference in compressed size was small; maybe 15% between the best and the merely good compression schemes.
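The gradient figure is easy to reproduce in spirit with plain deflate (PNG does even better because its per-row filters flatten the gradient before the deflate step); a quick sketch:

```python
import zlib

# Horizontal white-to-black gradient, 24-bit RGB: 256*256*3 bytes = 192 KB raw.
w, h = 256, 256
gradient = bytes(255 - x for _ in range(h) for x in range(w) for _ in range(3))

compressed = zlib.compress(gradient, 9)
print(f"{len(gradient) // 1024} KB raw -> {len(compressed)} bytes "
      f"({len(gradient) / len(compressed):.0f}:1)")
```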
 
Thanks for the explanations, but 100:1, as I said, may be an extreme possibility for the future. Who can say for sure whether or not it will be possible to reach this level of compression with a tool optimized for the SPEs one day? There are many threads around the world talking about JPEG2000.
I think you're missing the point here. 100:1 image compression is already possible right now. 100:1 lossless compression is very possible. But only under very specific conditions. There are fundamentally only two ways 100:1 lossless compression will *ever* be possible: 1) you have a recognizable pattern of some sort or size that repeats itself at least 100 times, or 2) you have patterns that may not necessarily repeat, but can be exactly reproduced procedurally, so you only need to store information about the procedure to use and its parameters.

In the *general* case, it will never be possible simply because information won't always be structured so conveniently. You can pretty much guarantee that it won't. It has nothing to do with how powerful a CPU is -- it's simply a matter of the entropy of the information stream. Now there may be a day when lossy compressors start using basis functions that are so disgustingly complex that we can edge out the quality for a given compression ratio compared to what was achievable in years past. But even then, you can't just say that 100:1 with little quality loss will ever really happen. Maybe on a video stream that shows a single picture without moving, but again, that's a unique case.

And do you believe that, with a 3.2 GHz SPE reaching 50:1 (50x more than a PowerPC at the same clock) on MJPEG in TRE, it wouldn't be possible to obtain bigger ratios than the 4:1 or 8:1 of S3TC (via streaming or something else to get more compression)? I think we should give more credit to what future tools will be able to offer.
That still has nothing to do with it. It doesn't matter that CELL can compress MJPEG video streams very quickly because the GPU still has to be able to *decompress* it quickly if you want to use it as a texture. The point of S3TC is that it's natively supported in hardware by the GPU, so there's no extra cost associated with it. Even if you store it on disc in a heavily compressed format, you will still have to decompress it (and if you like, re-encode it to S3TC), in order for it to be usable as a texture, and so in terms of how much memory is utilized, you've gained nothing.

When GPUs can decode JPEG or wavelet-compressed images for very little cost (whether that be in hardware or through an explosion in the ability to process more shader ops quickly), then we'll talk. But until then, the CPU's ability to compress images quickly is not going to help anything as far as getting more stuff on the screen at once.
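The memory argument boils down to a few lines of arithmetic; the on-disc JPEG size below is a made-up but plausible figure:

```python
# Whatever the on-disc compression ratio, the texture resident in memory
# costs its full GPU-format size.

w, h = 1024, 1024
on_disc_jpeg = 150 * 1024                 # hypothetical ~150 KB JPEG on disc
in_vram_rgba = w * h * 4                  # decoded to raw RGBA8: 4 MB
in_vram_dxt1 = (w // 4) * (h // 4) * 8    # re-encoded to DXT1: 512 KB

print(f"on disc:      {on_disc_jpeg // 1024:5d} KB")
print(f"VRAM (RGBA8): {in_vram_rgba // 1024:5d} KB")
print(f"VRAM (DXT1):  {in_vram_dxt1 // 1024:5d} KB")
```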
 