Why not Compression to cut board cost?

bloodbob said:
Just give me non-destructive quantum reads (which would break the laws of physics as they currently stand) and I'll be happy. Just think: a 256-qubit register would hold more data than there are atoms in the universe.

Well, it'd probably hold more data, but there's always the slight chance it will only hold an installation of Minesweeper.
 
bloodbob said:
Just give me non-destructive quantum reads (which would break the laws of physics as they currently stand) and I'll be happy. Just think: a 256-qubit register would hold more data than there are atoms in the universe.
I don't think it makes sense to use qubits as storage. They're more for direct processing, the way I understand them. And, of course, a non-destructive quantum read doesn't just break the laws of physics, it utterly destroys them.

But if you could, you wouldn't need 256 qubits. You'd only need one, technically, since the Hilbert space for a single particle has an infinite number of dimensions.

The reality is that when a measurement is made, you can only possibly read one value for whatever observable you are measuring. So each qubit, if used for storage, would only be useful if it stored a single, discrete value (i.e. was in a particular eigenstate of the specified observable), and thus would be no more useful than a normal bit in a computer (assuming we're dealing with a spin-1/2 qubit, which doesn't necessarily have to be the case).
 
Is there a distinction between compressing data and consolidating it?

If my sometimes faulty memory serves, some of the so-called color compression being touted by video card manufacturers isn't so much compressing the data as it is just storing it in a manner that is less redundant. The data itself isn't really being changed into another representation.

Isn't the high level of "compression" when multisampling AA is enabled due to the fact that each pixel winds up generating many identical samples, so it is easier just to keep one entry and indicate the other related samples are the same?

The data format isn't changed, they just don't bother writing down the same thing over and over. Aren't they just comparing their more sensible implementation to something that is horribly redundant and calling it compression?
 
3dilettante said:
Is there a distinction between compressing data and consolidating it?

If my sometimes faulty memory serves, some of the so-called color compression being touted by video card manufacturers isn't so much compressing the data as it is just storing it in a manner that is less redundant. The data itself isn't really being changed into another representation.

Isn't the high level of "compression" when multisampling AA is enabled due to the fact that each pixel winds up generating many identical samples, so it is easier just to keep one entry and indicate the other related samples are the same?

The data format isn't changed, they just don't bother writing down the same thing over and over. Aren't they just comparing their more sensible implementation to something that is horribly redundant and calling it compression?

AFAIK that's exactly what most of the lossless compression algorithms do. Of course, they look for repeated patterns as well, but one of the basic compression methods is, indeed, to "keep one entry and indicate the other related samples are the same".
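For illustration, here is a minimal run-length encoding sketch (Python, purely illustrative; real hardware schemes typically work on fixed-size blocks rather than a byte stream like this). It only shows the "keep one entry and count the repeats" idea:

```python
# Minimal run-length encoding sketch: store (value, count) pairs instead of
# repeating identical entries. Illustrative only; not how any GPU does it.

def rle_encode(data):
    encoded = []
    for value in data:
        if encoded and encoded[-1][0] == value:
            encoded[-1][1] += 1          # same as the previous entry: bump the count
        else:
            encoded.append([value, 1])   # new value: start a new run
    return encoded

def rle_decode(encoded):
    out = []
    for value, count in encoded:
        out.extend([value] * count)
    return out

samples = [0xFF8040] * 4                 # four identical MSAA samples for one pixel
packed = rle_encode(samples)             # -> [[0xFF8040, 4]]
assert rle_decode(packed) == samples
```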
 
A simple counting argument based on the pigeonhole principle shows there is no guaranteed lossless compression ratio.

Imagine if all N-bit long strings could be compressed (guaranteed!) by a compression algorithm to N-1 bits long.

Let N = 3, for example.

Our strings are

000
001
010
011
100
101
110
111

There are 8 strings.

Now, there are only 4 possible 2-bit strings
00
01
10
11

So how can you put 8 pigeons into only 4 holes? You can't. 2 bits can only uncompress to 4 possible outcomes, and therefore you've lost the ability to discern between at least 2 different strings when you uncompress. Hence, it cannot be lossless.

Basically, you cannot put a smaller finite set into a 1:1 bidirectional correspondence with a larger one.
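The same counting argument, made executable as a trivial Python check (just the pigeonhole principle, nothing more):

```python
from itertools import product

# Every 3-bit string, and every 2-bit string it could be "compressed" to.
three_bit = [''.join(bits) for bits in product('01', repeat=3)]  # 8 strings
two_bit = [''.join(bits) for bits in product('01', repeat=2)]    # 4 strings

# Any mapping from the 8 inputs into the 4 outputs must reuse an output,
# so at least two different inputs would uncompress to the same thing.
print(len(three_bit), "inputs, but only", len(two_bit), "possible outputs")
assert len(three_bit) > len(two_bit)
```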
 
3dilettante said:
If my sometimes faulty memory serves, some of the so-called color compression being touted by video card manufacturers isn't so much compressing the data as it is just storing it in a manner that is less redundant. The data itself isn't really being changed into another representation.
Kind of. 'A manner that is less redundant' is still an 'alternate representation'.

What we're actually doing in current hardware is more to do with saving bandwidth. I've already shown earlier in the thread that you have to allocate the full memory requirement if you want lossless compression, so there's no point in actually trying to 'save memory'. However, if we can find that, under many circumstances, instead of storing N bits of data we can store an alternate representation that is N/2 or N/4 and still have the same data then we do it to reduce the number of hits on memory.

It's why Z compression is so much more important than colour compression - it is fairly easy to find alternate representations for (conventional) Z as the function changes smoothly and predictably, while with texturing the same is rarely true for colour unless multisampling is enabled.
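As a rough illustration of the multisample case (names and layout invented here; real hardware tracks this with per-tile flag bits and fixed block sizes), the 'compression' amounts to something like this:

```python
# Hypothetical sketch: if every sample in a pixel's footprint carries the same
# colour, record one colour plus a flag instead of all four samples.

def pack_pixel(samples):
    if all(s == samples[0] for s in samples):
        return ("compressed", samples[0])     # one entry stands in for all samples
    return ("raw", list(samples))             # edge pixel: keep every sample

def unpack_pixel(packed, num_samples=4):
    kind, payload = packed
    if kind == "compressed":
        return [payload] * num_samples
    return payload

interior = pack_pixel([0x336699] * 4)         # pixel fully covered by one triangle
edge = pack_pixel([0x336699, 0x336699, 0xFFFFFF, 0xFFFFFF])
assert unpack_pixel(interior) == [0x336699] * 4
assert unpack_pixel(edge) == [0x336699, 0x336699, 0xFFFFFF, 0xFFFFFF]
```

Note that the full buffer still has to be allocated, because any pixel may fall back to the raw path; the win is in how often memory actually gets touched.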
 
Althornin said:
Disagree.

Faster processors lead to better available compression.

Eventually, data processing will be (could be) dirt cheap compared to bandwidth. The "geniuses" of the past 20-50 years have not had that to work with.

Faster processing leads to better available compression only if better compression is actually possible. As has been pointed out, the lossless compressibility of a digital signal is often zero. Having infinite processing power won't change what is compressible and what is not.

However, changing how you encode your signals (abandoning binary and using some arbitrary n-state scheme) might allow you to compress what had been previously incompressible. That is where the hope lies, because it is provable that lossless compression of a binary stream is not going to become any more feasible with more transistors or faster processing.

Unless we change how binary math works, I don't see how we can squeeze any more compression out of very heavily researched binary operations. (How many companies, grad students, and telecoms have all poured their resources into lossless compression for the past 10 years?)
 
Hellbinder said:
Just out of curiosity...

what is a

"hellbinder whogivesabeep mark"???

I am afraid I am completely unfamiliar with that one.

Your old sig (about a year ago) used to annoy the hell out of me, because no one except you cares how fast your computer is or what type of components are in it. Thus, I competed with your specs by making my sig the specs of my Exchange cluster, as a farcical method of showing that the computer specs listed in sigs are of absolutely zero value.
 
Dio said:
flf said:
If you or I can think of a scheme that would improve rendering or data speeds on a processor, you can be 99.999999999% certain that someone has already thought of it, discussed it, modeled it, prototyped it, and tested it. The chances of any of us dreaming up a unique refinement at this point are rather small.
On this point I might disagree. I think there are still 'obvious' improvements to be discovered. I have seen enough quality ideas come up in my time in this industry, the kind that prompt you to think 'Why the hell didn't I think of that!', that I can't believe it won't happen again.

Of course, the majority of these are tiny little things, but I'm sure there are a few big things out there.

One interesting thing is that sometimes these things come up in multiple different groups nearly simultaneously, which I think is usually because there's some seed idea in a previous architecture that prompts people to spot something extra for the next one.

Sorry, by "you" I meant Hellbinder, and the rest of us lesser phyles that aren't sitting on the forefront of the technology. There are a few (think about the number in relation to the total world population) who have enough time investment and knowledge and technical prowess that they can make good guesses at where possible optimizations lie, in the true nuts-and-bolts sense... not in an "if compression were better..." pie-in-the-sky sense.

For the rest who may know enough to understand in theory, but not enough to actually work in the field, the chances that we will come up with a technique that the specialists of the field haven't already considered is virtually nil. Even most ideas by the specialists themselves turn out to be unworkable.

That is why I didn't say 100%... there's always that slight chance.
 
Generally, the best new solutions result from taking a totally different approach to a problem. Like, for example, not focusing on the bandwidth and memory footprint directly, but on something else, like scene management.

So, generally, the best new solutions don't come from the specialists, but from relative outsiders with just a decent knowledge of the problems involved.
 
Btw, the best way to save memory is using shaders instead of multiple textures for materials. Or possibly embedding shader hints in the textures for things like normal maps, or vice versa. Or using the LSBs of a texture for constants, or something like that.

If you look at the number of textures/maps used by the Source engine, you might very well come up with something really clever to reduce that number. But it would only run on DX9 (or perhaps DX8.1) class cards.
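A toy sketch of the LSB idea (hypothetical; the channel gives up its bottom two bits of colour precision, which may or may not be acceptable for the content):

```python
# Stash a 2-bit constant in the least significant bits of an 8-bit texture channel.

def pack_lsb(channel_value, constant_2bit):
    """Overwrite the two least significant bits of an 8-bit value."""
    return (channel_value & 0b11111100) | (constant_2bit & 0b11)

def unpack_lsb(packed_value):
    """Recover the 2-bit constant and the truncated colour."""
    return packed_value & 0b11, packed_value & 0b11111100

packed = pack_lsb(0xC7, 0b10)          # colour 0xC7, constant 2
constant, colour = unpack_lsb(packed)
assert constant == 0b10 and colour == 0xC4
```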
 
flf said:
Althornin said:
Disagree.

Faster processors lead to better available compression.

Eventually, data processing will be (could be) dirt cheap compared to bandwidth. The "geniuses" of the past 20-50 years have not had that to work with.

However, changing how you encode your signals (abandoning binary and using some arbitrary n-state scheme) might allow you to compress what had been previously incompressible. That is where the hope lies, because it is provable that lossless compression of a binary stream is not going to become any more feasible with more transistors or faster processing.

And that is exactly the sort of thing that requires more processing power. Well, until we get an n-ary transistor, we're going to have to emulate n states using binary, which will take more instructions and more time, hence using more processing power.
 
Killer-Kris said:
And that is exactly the sort of thing that requires more processing power. Well, until we get an n-ary transistor, we're going to have to emulate n states using binary, which will take more instructions and more time, hence using more processing power.

Okay, I don't quite understand... it's already been pointed out that any binary substitution scheme suffers from the same limitations that all others do... and that is that certain bit patterns do not compress. Emulating n states in binary is going to suffer from the exact same problems because it is not a different approach.

The point is to:

A) Compress data without introducing latency
B) Transmit data compressed smaller than original, thus saving bandwidth
C) Decompress data without introducing latency

The problem being that we cannot guarantee that B is going to be faster than uncompressed, since not all bitstreams are compressible, with the pathological cases causing worse performance than no compression whatsoever.
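A quick way to see the pathological case with a general-purpose compressor (desktop zlib here, purely as an illustration, not anything a GPU would run):

```python
import os
import zlib

# High-entropy data typically comes out slightly *larger* once the format's
# own overhead is added, while redundant data compresses very well.
random_data = os.urandom(4096)             # pathological: incompressible noise
redundant_data = b"\x00" * 4096            # best case: one value repeated

print(len(zlib.compress(random_data)))     # usually a bit over 4096 bytes
print(len(zlib.compress(redundant_data)))  # a few dozen bytes
```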

I'm assuming that an n-state computer will also have a new n-state bus for passing said data back and forth between subsystems. We avoid the problems of binary compression by never encoding in binary at any point prior to DAC output (or somesuch... I'm winging it here since I have practically no expertise in this area.) Handling data in binary (or emulating using a binary system) doesn't gain you anything... either you work entirely with a new model, or you hamstring your new technology by packing along the old.

If you want to talk about lossy compression, then yes, more processing power allows you to work faster and pick better polynomials. However, the limitations on lossless compression have nothing to do with finding an approximating function... the limitations have to do with the fact that some bitstreams cannot be compressed. So if we can stop using bits and start using something else... a cleverly designed something else... perhaps better lossless compression is possible.

This is all wild supposition, of course. (Except for what is provably not feasible in binary currently... that is well known.)
 
flf:
I don't think you'll find many people calling that data compression. Using symbols that can store more than 1 bit might be good in some cases. It might result in smaller chips with the same memory/calculation capacity. But calling it data compression is almost like calling a switch from .15u to .13u data compression.

The basic rules of information theory won't change because you change the basic symbol you work with. You just store N bits in each symbol (where N might not be an integer).
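For reference, the capacity of an n-state symbol is log2(n) bits, so switching symbols only changes how many bits each one carries, not how many bits the data contains. A throwaway check:

```python
import math

# Information capacity of an n-state symbol, in bits.
for states in (2, 3, 4, 10):
    print(states, "states ->", round(math.log2(states), 3), "bits per symbol")
# 2 -> 1.0, 3 -> 1.585, 4 -> 2.0, 10 -> 3.322
```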
 
flf said:
The point is to:

A) Compress data without introducing latency
B) Transmit data compressed smaller than original, thus saving bandwidth
C) Decompress data without introducing latency
Not really, because A and C are impossible. The point is to start considering compression when bandwidth is significantly limiting performance, more so than latency. For a large portion of processing, latency is the larger concern.

Here's what I envision as being optimal for compression in the future:

Imagine a compression/decompression co-processor. This co-processor could be attached to any other processor (CPU, GPU, whatever). It has a series of different compression schemes, as well as an instruction set to deal with them. When it is signalled to compress a data stream, it will typically also be signalled to use a specific form of compression on said data stream.

In this way, developers who have information about what form of compression will best work on their data will be able to sacrifice a little bit of latency while increasing bandwidth and keeping all other processors from having to deal with the compression/decompression.
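Something like the following, purely hypothetical, interface is what that description amounts to (every name here is invented, and zlib merely stands in for whatever dedicated hardware schemes would exist):

```python
import zlib
from enum import Enum, auto

class Scheme(Enum):
    RLE = auto()          # run-length, suited to flat colour / MSAA duplicates
    DELTA = auto()        # delta coding, suited to smoothly varying Z
    DICTIONARY = auto()   # LZ-style, suited to generic buffers

class CompressionCoprocessor:
    def compress(self, stream: bytes, scheme: Scheme) -> bytes:
        # The developer names the scheme; the main processor hands the data
        # off and spends no cycles on the compression itself.
        return zlib.compress(stream)          # placeholder for the hardware path

    def decompress(self, stream: bytes, scheme: Scheme) -> bytes:
        return zlib.decompress(stream)        # placeholder for the hardware path

coproc = CompressionCoprocessor()
packed = coproc.compress(b"\x00" * 1024, Scheme.RLE)
assert coproc.decompress(packed, Scheme.RLE) == b"\x00" * 1024
```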
 
flf said:
So if we can stop using bits and start using something else... a cleverly designed something else... perhaps better lossless compression is possible.
Won't bits always be the best building block, no matter what? The reason being that bits will always tolerate noise better than other, "more analog" data representations.
 
Squeak said:
Won't bits always be the best building block, no matter what? The reason being that bits will always tolerate noise better than other, "more analog" data representations.
One has to wonder. Obviously going for more possible values per signal clock means you can't clock the device as highly, but what if using simple bits isn't the most efficient way to do this? Apparently for many signalling technologies people have come to the conclusion that using bits isn't the most efficient thing to do.

Also, it may be useful to go ahead and make the signalling intolerant of noise, and just send error-correcting signals that allow reconstruction/detection of lost data. If the data rate gained outweighs the data rate lost from the additional error-correcting signal, then it'll be an overall gain in bandwidth (though you'd want the frequency of uncorrectable errors to be very low, so that data typically doesn't have to be sent more than once: you'd want most errors to be corrected on the fly).
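As a textbook illustration of that trade-off (not a claim about any real bus), a Hamming(7,4) code spends 3 parity bits per 4 data bits and in return locates and corrects any single flipped bit without a resend:

```python
# Toy Hamming(7,4) sketch: encode 4 data bits into 7, correct one flipped bit.

def hamming74_encode(d):                     # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # codeword positions 1..7

def hamming74_correct(c):                    # c = list of 7 received bits
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]           # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]           # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]           # parity check over positions 4,5,6,7
    error_pos = s1 + 2 * s2 + 4 * s3         # 0 means no single-bit error
    if error_pos:
        c[error_pos - 1] ^= 1                # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]          # recovered data bits

sent = hamming74_encode([1, 0, 1, 1])
received = sent.copy()
received[4] ^= 1                             # noise flips one bit in transit
assert hamming74_correct(received) == [1, 0, 1, 1]
```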
 