[360, PS3] RAGE Discussion

2 questions...

1) Carmack's main reason for not going with 3 DVDs on the Xbox was because of M$ licensing/royalty fees, right? So... does the same apply to the Windows platform, or is there a chance the PC will get 3 DVDs instead of 2?


2) Since PC games just install the entire DVD onto the HDD, does that mean the PC DVDs could have more/higher data compression than the Xbox 360's, and thus, once uncompressed onto the HDD, be of better quality? (since it won't have to stream)
 
I doubt the PS3 has enough VRAM and RSX power to handle 10 GB more of textures. Remember, it's still at 30 fps. I think more compression means more CPU/GPU work for decompression.

The whole point of MegaTexture is to be less heavy on RAM and at the same time deliver higher quality textures with seamless variety.
 
And if id manages to compress the game enough for the PC version, then when you install RAGE completely to the HDD you could get as much data as the PS3 version, theoretically...
The images are already compressed. Remember these are terabyte source textures being turned into megatextures to fit on discs. Compressing it that much requires lossy compression, like JPEG. It's exactly like starting with a 50 GB super-sized photo, and having to copy it onto a 64 MB SD card or a 512 MB SD card. You'll use JPEG and it'll fit both cards, but the smaller card will have a worse rendering.

There is no solution. There is no fix. The data is compressed to fit the medium, and you can't get more data onto a smaller medium when that data is already tightly compressed. For the PC to get 25 GBs of megatexture data, it'll need 25 GBs of distribution media.
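
To make it concrete, here's a rough Python sketch of what "compressed to fit the medium" means in practice (Pillow assumed, the file name and byte budgets are made up for illustration): the only knob you have is quality, and a smaller target simply forces more data to be thrown away.

```python
# Rough sketch only: squeeze the same image onto two different "cards" by
# lowering JPEG quality until it fits. The source file and byte budgets are
# hypothetical.
import io
from PIL import Image

def fit_to_budget(img, budget_bytes):
    """Return (quality, jpeg_bytes) for the highest quality that fits the budget."""
    for quality in range(95, 5, -5):           # keep discarding more data...
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=quality)
        if buf.tell() <= budget_bytes:         # ...until it fits the medium
            return quality, buf.getvalue()
    raise ValueError("doesn't fit even at minimum quality")

img = Image.open("source_photo.png").convert("RGB")    # hypothetical source image
for label, budget in [("512 MB card", 512 * 2**20), ("64 MB card", 64 * 2**20)]:
    q, data = fit_to_budget(img, budget)
    print(f"{label}: quality {q}, {len(data):,} bytes")
# Both targets get a file that fits, but the smaller budget lands on a lower
# quality setting -- and the detail discarded to get there is gone for good.
```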

I doubt the PS3 has enough VRAM and RSX power to handle 10 GB more of textures. Remember, it's still at 30 fps. I think more compression means more CPU/GPU work for decompression.
You're missing entirely the point of megatexturing! :D Texture quality isn't dependent on RAM and processing power beyond a minimum.
 
The images are already compressed. Remember these are terabyte source textures being turned into megatextures to fit on discs. Compressing it that much requires lossy compression, like JPEG. It's exactly like starting with a 50 GB super-sized photo, and having to copy it onto a 64 MB SD card or a 512 MB SD card. You'll use JPEG and it'll fit both cards, but the smaller card will have a worse rendering.

There is no solution. There is no fix. The data is compressed to fit the medium, and you can't get more data onto a smaller medium when that data is already tightly compressed. For the PC to get 25 GBs of megatexture data, it'll need 25 GBs of distribution media.

You're missing entirely the point of megatexturing! :D Texture quality isn't dependent on RAM and processing power beyond a minimum.

I'm not sure if that's entirely true. Compression algorithms trade performance against compression ratio. On the PC, decompression can be done offline, so a more intensive compression algorithm can be used given that there is unlimited time for it to run. On consoles, more aggressive compression algorithms would be a problem, as they would be used at runtime and thus greatly affect load times. To get around this, an offline install procedure is needed.

Just look at recent video codecs. As more power becomes available, more aggressive algorithms can be used, allowing more detail to be stored at the same file size.
 
I'm not sure if that's entirely true. Compression algorithms trade performance against compression ratio. On the PC, decompression can be done offline, so a more intensive compression algorithm can be used given that there is unlimited time for it to run. On consoles, more aggressive compression algorithms would be a problem, as they would be used at runtime and thus greatly affect load times. To get around this, an offline install procedure is needed.

Just look at recent video codecs. As more power becomes available, more aggressive algorithms can be used, allowing more detail to be stored at the same file size.
A big part of id Tech 5 is using JPEG-like compression to store the data on disc, with runtime decompression and recompression to DXT.

Here are links to papers from Intel/id on the subject.
http://softwarecommunity.intel.com/...al-Time Texture Streaming & Decompression.pdf
http://cache-www.intel.com/cd/00/00/32/43/324337_324337.pdf
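
And for anyone wondering what the DXT end of that pipeline looks like, here's a toy Python sketch of packing one 4x4 block into DXT1/BC1. This is emphatically not id's transcoder, just an illustration of the fixed-rate format the runtime recompression step has to produce so the GPU can sample it directly:

```python
# Toy BC1/DXT1 block packer: two RGB565 endpoints + sixteen 2-bit indices = 8 bytes
# per 4x4 block. Real encoders pick endpoints far more cleverly; this just uses the
# darkest/brightest texels and ignores the c0 == c1 (3-colour) corner case.
import struct

def to_rgb565(r, g, b):
    """Quantise an 8-bit RGB triple to the 16-bit 565 value BC1 stores."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def encode_bc1_block(texels):
    """texels: sixteen (r, g, b) tuples for one 4x4 block -> 8 bytes of BC1."""
    lo = min(texels, key=sum)                  # crude endpoint choice
    hi = max(texels, key=sum)
    c0, c1 = to_rgb565(*hi), to_rgb565(*lo)
    if c0 < c1:                                # 4-colour mode needs c0 > c1
        c0, c1 = c1, c0
        hi, lo = lo, hi
    # The four representable colours: both endpoints plus two interpolated ones.
    palette = [hi, lo,
               tuple((2 * a + b) // 3 for a, b in zip(hi, lo)),
               tuple((a + 2 * b) // 3 for a, b in zip(hi, lo))]
    indices = 0
    for i, px in enumerate(texels):
        best = min(range(4),
                   key=lambda k: sum((p - q) ** 2 for p, q in zip(px, palette[k])))
        indices |= best << (2 * i)
    return struct.pack("<HHI", c0, c1, indices)

# 48 bytes of raw 24-bit texels become 8 bytes (6:1), and the GPU decodes this
# format in hardware -- which is why the JPEG-like stream gets re-encoded to DXT
# instead of being uploaded as raw pixels.
block = [(i * 16, i * 8, 255 - i * 16) for i in range(16)]
print(len(encode_bc1_block(block)), "bytes")
```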
 
Just look at recent video codecs. As more power becomes available, more aggressive algorithms can be used, allowing more detail to be stored at the same file size.

Yes, but what you get on the discs in the game's box is already compressed and no matter how powerful your machine is, if the source data has lossy compression then you can't get the discarded information back.
 
I'm not sure if that's entirely true. Compression algorithms trade performance against compression ratio.
Years ago when discussing compression schemes and disc formats, I found a table of compression format performance. The savings (lossless data) between a fast compressor and a slow and very intense compressor were a few percent at best. Thus what could be compressed to 103kb and decompressed in a second with a fast compression scheme could be compressed with stronger compression to 100kb and take 10 seconds to decompress (illustrative figures only).

There's a finite amount of compression you can do on data, and processing power can't overcome this. Here's an extreme illustration: imagine a computer with an infinite amount of processing power. Could it compress a 2 megapixel photo down to a single bit of data, and then decompress that bit to recover the full image? Clearly not! Lossless compression is about finding patterns in the data that can be expressed in a shorter form than the full thing.
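
You can actually watch that happen with nothing more than Python's built-in zlib (purely illustrative; random bytes stand in for data that's already been squeezed hard):

```python
# Illustrative only: random bytes behave like data that's already tightly
# compressed -- there's no pattern left for the compressor to exploit.
import os
import zlib

random_data = os.urandom(1_000_000)            # ~1 MB with no structure
patterned   = b"megatexture" * 90_000          # ~1 MB of obvious repetition

for name, data in [("random", random_data), ("patterned", patterned)]:
    fast = len(zlib.compress(data, 1))
    slow = len(zlib.compress(data, 9))
    print(f"{name}: {len(data):,} -> level 1: {fast:,}, level 9: {slow:,}")
# The patterned megabyte collapses to a few kB at either setting, while the
# random megabyte comes out slightly *larger* -- and spending more CPU time at
# level 9 doesn't change that.
```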

The only other solution is lossy compression which instead approximates data. For this you need comparisons of parts of that data which is very limited in the scope of a huge and varied 2D bitmap. Video codecs aren't a good reference because they are comparing frames to frames. Instead you need to look at 2D image compression. There's JPEG, JPEG2000, and a few others, and they sacrifice quality for size. Different algorithms can hide the loss in better ways, but you are fundamentally removing data to make the file smaller. The trick is knowing which bits to remove and that's where improved algorithms come in.

But the end result is processing power cannot change that. All it can do is facilitate the implementation of more sophisticated ways to express and simplify the data, but someone needs to invent these representations and algorithms. Unless id can find a new mathematical model for their image data and a corresponding expression that can simplify it well beyond what JPEG etc. can do, there is no hope whatsoever for better textures on PC despite a smaller distribution medium. A smaller medium means less data, and compression requires the removal of data to fit.

With perhaps one exception I've just thought of, depending on the compression scheme they use for MT. - Scrub that - just read jlippo's link. id are already using JPEG-like compression. There is no way they can get 4x the compression with zero quality loss. MT is already achieving the same sort of quality as the other processing intensive methods.
 
If you read the following IGN preview of Rage gameplay to the end you'll see the following:



Important points: a) needing a third disc for multiplayer would most likely mean they threw away COOP, b) PS3 getting better megatexture quality than even PC, c) keep the discussion civil, we don't need any platform fandom trolling.

I think the quotes may be getting overblown. Carmack said one of the programmers has spent quite a lot of time working on compression for the 360. The quality may not be that much different between PC/360 and PS3 in the end.
 
2 questions...

1) Carmack's main reason for not going with 3 DVDs on the Xbox was because of M$ licensing/royalty fees, right? So... does the same apply to the Windows platform, or is there a chance the PC will get 3 DVDs instead of 2?

Disk swapping is another problem... Nobody wants to swap disks that frequently, especially on consoles.
 
There's a finite amount of compression you can do on data, and processing power can't overcome this. Here's an extreme illustration: imagine a computer with an infinite amount of processing power. Could it compress a 2 megapixel photo down to a single bit of data, and then decompress that bit to recover the full image? Clearly not! Lossless compression is about finding patterns in the data that can be expressed in a shorter form than the full thing.


Yes, you can! Train a VQ compression table with that image only (so the VQ table has just that image at full size), and make the tables available to both encoder and decoder. Then, if an image is close enough to that one, the encoder sends a 1; if not, a 0 (blank picture). So, when your 2 megapixel image hits the encoder, only a single '1' will be sent to the decoder and the image will be reconstructed perfectly (even w/o loss :)

Yes, I know I am stretching... But it is possible theoretically in some special cases :)
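
In code, the joke looks something like this (pure toy; exact equality stands in for "close enough"):

```python
# Toy version of the one-entry VQ codebook above: all the information lives in
# the table shared by encoder and decoder, so the "compressed" message really
# is a single bit.
import numpy as np

reference = np.random.randint(0, 256, (1200, 1600, 3), dtype=np.uint8)  # the 2 MP photo
codebook = {1: reference}                       # shipped to both sides in advance

def encode(image):
    return 1 if np.array_equal(image, codebook[1]) else 0

def decode(bit):
    return codebook[1] if bit else np.zeros_like(codebook[1])   # 0 -> blank picture

bit = encode(reference)                         # a single '1' goes over the wire
restored = decode(bit)
print(bit, np.array_equal(restored, reference))  # 1 True -- "lossless", as promised
```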
 
There's JPEG, JPEG2000, and a few others, and they sacrifice quality for size. Different algorithms can hide the loss in better ways, but you are fundamentally removing data to make the file smaller. The trick is knowing which bits to remove and that's where improved algorithms come in.

But the end result is processing power cannot change that. All it can do is facilitate the implementation of more sophisticated ways to express and simplify the data, but someone needs to invent these representations and algorithms. Unless id can find a new mathematical model for their image data and a corresponding expression that can simplify it well beyond what JPEG etc. can do, there is no hope whatsoever for better textures on PC despite a smaller distribution medium. A smaller medium means less data, and compression requires the removal of data to fit.

With perhaps one exception I've just thought of, depending on the compression scheme they use for MT. - Scrub that - just read jlippo's link. id are already using JPEG-like compression. There is no way they can get 4x the compression with zero quality loss. MT is already achieving the same sort of quality as the other processing intensive methods.

Not that I'm disagreeing with either side of the argument, but when JPEG2k was first introduced it was producing visually more pleasing results at higher compression ratios, at the cost of more processing.

Plus, JPEG stores the surviving frequency components using a lossless compression scheme, and I'm sure a lot of newer algorithms could do better at that stage.

Finally, for something like MegaTexture, I think a lot of improvements can be made over standard JPEG, like storing multiple compression layers to better utilize spatial correlation at larger pixel distances.
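
For what it's worth, that last idea is roughly the residual-pyramid trick: predict each level from a coarser layer and store only the difference, which is mostly tiny values on spatially correlated content. A quick numpy sketch (nothing to do with id's actual format, and the test image is synthetic):

```python
# Two-level residual sketch: a downsampled base layer plus the prediction error,
# instead of full-resolution pixels. On smooth content the residual is nearly
# all zeros, which is exactly the long-range redundancy a layered scheme exploits.
import zlib
import numpy as np

x = np.arange(512)
img = ((np.add.outer(x, x) // 8) % 256).astype(np.uint8)      # smooth synthetic tile

base = img[::2, ::2]                                          # coarse layer, 1/4 the pixels
predicted = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)  # nearest-neighbour upsample
residual = img.astype(np.int16) - predicted.astype(np.int16)

flat    = len(zlib.compress(img.tobytes(), 9))
layered = len(zlib.compress(base.tobytes(), 9)) + len(zlib.compress(residual.tobytes(), 9))
print("residual range:", residual.min(), "to", residual.max())   # tiny values here
print("flat:", flat, " base + residual:", layered)
```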
 
Years ago when discussing compression schemes and disc formats, I found a table of compression format performance. The savings (lossless data) between a fast compressor and a slow and very intense compressor were a few percent at best. Thus what could be compressed to 103kb and decompressed in a second with a fast compression scheme could be compressed with stronger compression to 100kb and take 10 seconds to decompress (illustrative figures only).

Well, obviously you haven't been paying attention to the field of compression then. Even in the realm of lossless compression, the best lossless compressors can now triple the compression performance of GZIP/PKZIP/DEFLATE, and some of them can do it while being the same speed.

Take a look at the difference here http://uclc.info/gimp_source_compression_test.htm between PAQ and GZIP. LZMA-based compressors like 7-zip can handily beat gzip -9 on their lowest compression setting, which runs faster! 7-zip's highest compression setting beats GZIP's by 2x while decompression speed (what's really important) is the same.
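
You don't even have to take that benchmark page's word for it; Python ships DEFLATE-, bzip2- and LZMA-class codecs in the standard library, so it's easy to eyeball the gap on whatever file you like (corpus choice is arbitrary, and this says nothing about speed):

```python
# Quick-and-dirty ratio comparison between a DEFLATE-class codec (zlib), bzip2
# and an LZMA-class codec, on whatever file you point it at. Not a rigorous
# benchmark -- no timing, single corpus -- just a way to see the size gap.
import bz2
import lzma
import sys
import zlib

with open(sys.argv[1], "rb") as f:             # e.g. a tarball of source code
    data = f.read()

results = {
    "zlib level 1": len(zlib.compress(data, 1)),
    "zlib level 9": len(zlib.compress(data, 9)),
    "bzip2 level 9": len(bz2.compress(data, 9)),
    "lzma preset 9": len(lzma.compress(data, preset=9)),
}
for name, size in results.items():
    print(f"{name:>14}: {size:>12,} bytes  ({size / len(data):.1%} of original)")
```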


There's a finite amount of compression you can do on data, and processing power can't overcome this. Here's an extreme illustration: imagine a computer with an infinite amount of processing power. Could it compress a 2 megapixel photo down to a single bit of data, and then decompress that bit to recover the full image? Clearly not! Lossless compression is about finding patterns in the data that can be expressed in a shorter form than the full thing.

Of course, you can't compress every input, but to suggest that lossless compression has 'run out of steam' and is yielding marginal returns is nonsense when, in the last several years, several breakthroughs have yielded very substantial gains. The thing about lossless compression is that the problem is isomorphic to the halting problem (which is just a subcase of the Berry Paradox): you can never prove a compression is optimal, so you can't really state that a custom compressor can't be constructed which does substantially better.

I think to conclude that we have reached the limits of still-image compression is premature.
 
Well, obviously you haven't been paying attention to the field of compression then. Even in the realm of lossless compression, the best lossless compressors can now triple the compression performance of GZIP/PKZIP/DEFLATE, and some of them can do it while being the same speed.
Check out the results for the Kodak grey-scale images, as it's images I'm talking about here. The average is a pretty uniform 5.5 MB, not quite halving the source size. There are some better results, progress and all, from when I looked at this years ago, but we're still talking at best a 20% compression advantage on lossless compression, depending on what the 'old form' was. Okay, my illustrative numbers were decidedly pessimistic, but my point still holds: you can't just throw processing power at compressing stuff. There isn't a nice uniform relationship between processing power and compression.

Of course, you can't compress every input, but to suggest that lossless compression has 'run out of steam' and is yielding marginal returns is nonsense...
I'm talking about images here. Usain Bolt just smashed the world record after years of runners only niggling little bits off, and I expect progress to be made in most fields of human endeavour. But as I said, it's an algorithm issue, not a processing power one. Processing power is just a facilitator. And lossless still-image compression advances are redundant as regards the topic, because, unless id have managed a secret compression scheme that they're not telling anyone about, they have not got a means by which they can get more information onto a couple of 8 GB DVDs than they can get onto 25 GBs of BRD. ;)
 
20% is not a diminishing return however. A 20% performance difference in speed or size is, in fact, an enormous gain. If I offered to sell you a C compiler that could speed up all of your code by 20% you'd jump at the chance. 20% of a 25gb bluray would yield an additional 2.5gb of space, or 5gb on a 50gb bluray, which is like having an extra DVD.

Besides which, running LZMA on grey-scale images (which is what they did) without pre-processing the input is not really a fair comparison anyway. PNG, for example, has a pre-filter which does delta encoding. A better test of LZMA or PAQ on images would be to apply pre-filters first.
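
That filter is simple enough to demo: PNG's 'Sub' filter just stores each pixel as the difference from its left-hand neighbour, which turns smooth content into runs of small numbers that any general-purpose compressor handles far better. Rough Python sketch (a synthetic gradient stands in for a grey-scale photo; this is not the actual PNG code path):

```python
# Delta (PNG 'Sub'-style) pre-filtering before a generic compressor.
import zlib
import numpy as np

x = np.arange(1024)
img = ((np.add.outer(x, x) // 8) % 256).astype(np.uint8)   # smooth grey-scale test image

raw = img.tobytes()
# Sub filter: per row, each pixel becomes (pixel - left neighbour) mod 256.
delta = np.diff(img.astype(np.int16), axis=1).astype(np.uint8)
filtered = np.concatenate([img[:, :1], delta], axis=1).tobytes()

print("plain zlib  :", len(zlib.compress(raw, 9)))
print("delta + zlib:", len(zlib.compress(filtered, 9)))
# The filtered stream is almost all zeros and ones here; the same trick on real
# photos is why "LZMA on raw pixels" understates what a filtered pipeline can do.
```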

There hasn't been much activity lately on advancing still-image compression for very high resolution images (gigapixels), and while it may seem that the same techniques apply, there may be additional considerations when you have such an excess of information. Generally, the more information you have, the more redundancies you can throw away. None of this matters for RAGE, but there could definitely be some improvements in specialized compression for multi-channel 32kx32k images.
 
20% is not a diminishing return however. A 20% performance difference in speed or size is, in fact, an enormous gain. If I offered to sell you a C compiler that could speed up all of your code by 20% you'd jump at the chance. 20% of a 25gb bluray would yield an additional 2.5gb of space, or 5gb on a 50gb bluray, which is like having an extra DVD.

Besides which, running LZMA on grey-scale images (which is what they did) without pre-processing the input is not really a fair comparison anyway. PNG, for example, has a pre-filter which does delta encoding. A better test of LZMA or PAQ on images would be to apply pre-filters first.

There hasn't been much activity lately on advancing still-image compression for very high resolution images (gigapixels), and while it may seem that the same techniques apply, there may be additional considerations when you have such an excess of information. Generally, the more information you have, the more redundancies you can throw away. None of this matters for RAGE, but there could definitely be some improvements in specialized compression for multi-channel 32kx32k images.
Your math is off. 20% of 25GB is 5GB and 20% of 50GB is 10GB. Your point is even stronger than you originally thought.
 
Your math is off. 20% of 25GB is 5GB and 20% of 50GB is 10GB. Your point is even stronger than you originally thought.
For DVDs to be able to rival BRDs for data capacity, you'd need gains of more like 80%. That is, a BRD with 25GBs of JPEGs will need some incredible compression to fit at the same quality on a DVD!

And the point with diminishing returns is that each time you make a 20% gain on the previous efforts, that 20% is worth a diminishing amount, not a linear advance on the original. Looking at that table, if we take each compression scheme as an iteration (which of course they aren't, but we can think of them as such within the notion of progress), the difference between successive compression schemes is shrinking from technique to technique. RAGE's compression scheme would need to be from an early iteration of compression technology for a new scheme to offer substantially improved compression, of the order of even 20% smaller files.
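
To put rough numbers on that (purely illustrative sizes):

```python
# Illustrative only: apply a "20% better than the last scheme" gain repeatedly
# to a 25 GB data set and watch the absolute saving shrink each generation.
size = 25.0  # GB
for generation in range(1, 6):
    saved = size * 0.20
    size -= saved
    print(f"generation {generation}: saves {saved:.2f} GB, data now {size:.2f} GB")
# 5 GB, then 4 GB, then 3.2 GB... every step is "a 20% improvement", but each
# one is worth less in absolute terms than the one before it.
```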
 
Texture quality isn't dependent on RAM and processing power beyond a minimum.
In that case I don't understand why multiplatform games suffer on texture quality, like PS3 < Xbox 360 < PC.
You're missing entirely the point of megatexturing!
Yes, I think it's similar to RAR/ZIP archives: more time (CPU load) for high compression to extract the data, and less for less :)
 