Why not Compression to cut board cost?

flf:
1st §: yes, 2nd §: not really, 3rd §: no

There's still more work to do with compression, but there's no use in searching for the holy grail of a compressor that can compress random data, since its existence can be theoretically disproven.
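(For the record, the impossibility half of that is just a counting argument. A minimal sketch in Python, with the length n = 8 picked arbitrarily: there are more n-bit strings than there are strings shorter than n bits, so no lossless scheme can shrink them all.)

```python
# Counting argument: a lossless compressor cannot shrink every n-bit input,
# because there are more n-bit strings than strings shorter than n bits.
n = 8
inputs = 2 ** n                                   # 256 distinct 8-bit inputs
shorter_outputs = sum(2 ** k for k in range(n))   # 1 + 2 + ... + 128 = 255
print(inputs, shorter_outputs)  # 256 > 255: at least one input can't get shorter
```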

Work should instead be directed at finding the typical redundancy in the data type you want to compress, and at finding fallbacks that are acceptable for the cases where you can't compress.

You might find a system that is lossless in most cases but breaks and becomes lossy for unusual special cases. And when it becomes lossy, it should be lossy in a way that damages the data as little as possible. Or you could have fallbacks that are still lossless but use slower memory for the excess data.

Examples are Z3 and 3Dlabs' SuperScene antialiasing.


Simon F:
No, I wasn't referring to any ideas at all. :D
I just left the possibility open. While theories about data compression and N-value systems are well known, I think flf is right that there's been little work on actually implementing them in hardware. So I still consider it possible that someone could find a way to do N-value operations more efficiently than binary ops.

I.e., if someone makes a trit memory cell or adder that is less than 58% larger per symbol than its binary counterpart, it's a win. Same with a trit multiplier that is less than 150% larger.
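(A quick sketch of where those break-even figures presumably come from: a trit carries log2(3) ≈ 1.585 bits, and the sketch assumes storage and adder area scale roughly linearly with the number of symbols while a multiplier array scales roughly with the square of it, both of which are assumptions.)

```python
from math import log2

bits_per_trit = log2(3)  # ~1.585 bits of information per trit

# Storage cells and adders scale roughly linearly with symbol count, so a trit
# cell breaks even if it is at most log2(3) times the size of a bit cell.
linear_break_even = (bits_per_trit - 1) * 100          # ~58% larger

# A multiplier array scales roughly with the square of the digit count, so a
# trit multiplier cell breaks even at about log2(3)^2 times the size.
quadratic_break_even = (bits_per_trit ** 2 - 1) * 100  # ~151% larger

print(f"{linear_break_even:.0f}% / {quadratic_break_even:.0f}%")
```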
 
There's really no reason to go with higher-base systems until we exhaust all possible improvements on binary systems, though. Higher-order systems always offer the possibility of a constant-factor performance boost, but the additional system complexity isn't worth it. As the performance boosts from tech improvements continue to decrease, it'll probably become an option worth considering.

But more on topic: it sure won't gain you any increase in compression (well, besides a denser signal over a single wire, which then can't be clocked as high).
 
Simon F said:
Incidentally, Turing considered that e would be optimal and that's good enough for me. :)
Well, considering that Einstein spent years attempting to find an alternative to quantum mechanics as a description of the small-scale world because he felt God didn't "play dice with the universe," you can't just take anybody at their word.

But the argument of holding r^w constant while minimizing r·w does make sense (r is the number of possible values a storage symbol can take, w is the number of symbols stored, so this basically holds the number of unique representable values constant while varying the size of each symbol to see how many symbols are needed, and at what total cost).
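(For reference, that is the classic radix-economy argument behind the "Turing considered e optimal" remark above; a quick derivation using those definitions of r and w:)

```latex
% Fix the capacity N = r^w and minimize the cost C = r w.
\[
  w = \frac{\ln N}{\ln r}, \qquad
  C(r) = r\,w = \ln N \cdot \frac{r}{\ln r}, \qquad
  \frac{dC}{dr} = \ln N \cdot \frac{\ln r - 1}{(\ln r)^2} = 0
  \;\Rightarrow\; r = e \approx 2.718.
\]
% Among integer radices, r = 3 gives the lowest cost (3/\ln 3 \approx 2.73),
% with r = 2 and r = 4 tied just behind (\approx 2.89).
```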
 
3dilettante said:
If a DRAM were made trinary, it would either have to partially charge capacitors, or double up the number of capacitors per bit, so that there would be both-off, one-on/one-off, and both-on states.

The second option seems a little self-defeating capacity-wise, while the first sounds pretty hard to get right consistently.

Why would you want to try to make a contemporary binary storage system emulate a higher order?

Obviously, if there were 3-state processors and 3-state data busses, we would also have 3-state memory. Quit trying to examine how this will not work with today's data handling and storage model -- start thinking in terms of how it works within a completely new model built from the ground up to be based upon n-state bits rather than binary bits.

Hmmm... perhaps I should go trademark the name "3DRAM", eh? (Probably already been done.)
 
The best way I could think of to make a trinary system would be a system comprised of spin-1 particles. This could, potentially, be a magnetic material or a quantum computing device. In reality, I think it's just much easier to build logic based upon binary systems, even in quantum computing. Yes, you do lose a bit on storage space, but since it's easier to build the basic components, you can spend a lot more time figuring out how to pack more things in a smaller space.

As a side comment, it looks like one possible method of future computing is spintronics, which makes use of half-metals (metals which are only conductive for electrons with a particular spin).
 
In reality, I think it's just much easier to build logic based upon binary systems, even in quantum computing.

Hmmm... perhaps you feel confident that you can predict what will be easy and what will be difficult in fifty years, but I'll admit that I am completely unsure what direction computing will take within even the next ten years.

It only takes one little discovery (e.g. the semiconductor) to completely change what is feasible and what is outlandish.
 
Well, any quantum-computing based system would be based upon the behavior of angular momentum. Essentially, there are no other quantum numbers that are nearly degenerate, take a limited number of discrete values, and can be set easily.

Now, if you choose spin, you don't have to select states with spin 1/2 (which have two possible states), since multi-electron states can have spins that are higher. One could, for example, select a spin 1 particle (3 states), and devise trinary math. The problem is in setting the third state. Spin up is easy. Spin down is easy. Spin-somewhere-in-the-x-y-plane is not.

Edit: note that I am being kind of carefree with the distinction between spin and angular momentum. Sorry about that.
 
I just returned from an affiliates program at Rice University in Houston, Texas.
One speaker dealt with compression - he started off with current JPEG 2000 wavelet work, and then looked for room for improvement.
He claims up to 90% additional compression using an interesting algorithm.
I only barely got what he said, and I don't have any robust data, etc., so I won't bother to post some crap explanation of mine here.

But I'd recommend watching that space, because this is cool stuff.
 
flf said:
3dilettante said:
If a DRAM were made trinary, it would either have to partially charge capacitors, or double up the number of capacitors per bit, so that there would be both-off, one-on/one-off, and both-on states.

The second option seems a little self-defeating capacity-wise, while the first sounds pretty hard to get right consistently.

Why would you want to try to make a contemporary binary storage system emulate a higher order?

Obviously, if there were 3-state processors and 3-state data busses, we would also have 3-state memory. Quit trying to examine how this will not work with today's data handling and storage model -- start thinking in terms of how it works within a completely new model built from the ground up to be based upon n-state bits rather than binary bits.

DRAM is one of the cheapest means of volatile storage available, and I thought the point of the thread was discussing a way to lower board costs. Costs are a non-theoretical concern, which I was addressing.

The reason DRAM is cheap is because you can cram a lot of bits onto a chip easily and reliably.
Theoretically, current DRAM cells could be used to store trits, if one is willing to revamp the sense amps and seriously sacrifice reliability.

If a capacitor in a DRAM cell is only charged halfway, it could be sensed as a third state with the appropriate equipment. It's just that it would be hard to get right inexpensively, which reduces the utility of changing the representation system to lower board costs.

Having an n-state processor with n-state signaling and memory won't magically make things cheaper or more reliable. There's a reason designers have stuck with binary systems for so long: the much wider margins of error permitted by a two-state system, and its simplicity.

A trinary transistor would have two regions of metastable behavior, compared to one for a binary transistor. The noise margins between trinary states would either have to be half as large as the binary ones, or the trinary system would have to double the voltage range between its lowest and highest states to match.
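(A rough sketch of that margin arithmetic, assuming evenly spaced levels over a fixed voltage swing and ignoring real-world threshold and noise details, which is a simplification:)

```python
# Worst-case noise margin per symbol for n evenly spaced levels on a fixed swing.
def noise_margin(levels, v_swing=1.0):
    spacing = v_swing / (levels - 1)  # distance between adjacent levels
    return spacing / 2                # margin to the nearest decision threshold

print(noise_margin(2), noise_margin(3))  # 0.5 vs 0.25: ternary halves the margin
# Equivalently, a ternary system needs twice the swing to keep binary-sized margins.
```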

Depending on where you define the "ground-up" building of a system, a larger number of states can be free of any such drawbacks.

However, if you stay within the bounds of the thread, which centers on lowering the cost of boards with real signaling, heat, reliability, and implementation concerns, then the problems remain.

If you decide that the theoretical n-state system won't have to concern itself with these problems, then the argument is safely beyond the scope of the thread.
 
Well, I would contend that almost any n-state system will still have to deal with essentially identical problems, for the simple reason that you're almost certainly going to end up using some sort of electrical wave to send the signal.

For example, imagine a 3-state MRAM system. MRAM stores magnetic bits in small magnetic domains. Bits are written by the use of a write wire. Send current one way, and the bit is positive. Send current the other way, and it's negative.

Bits are read through some sort of giant magnetoresistance material, where the resistance of the material depends upon the orientation of the magnetic field. So the value that's read off is actually a resistance. To produce a data stream, you'd simply keep switching which bit in memory is receiving current, and then send the signal through an appropriate amplifier.

Now, it is fundamentally possible, as I stated earlier, to have a three-state magnetic system. You could set the third state by having a second write wire perpendicular to the first two. Any current sent through this wire would polarize the magnetic material in such a way that, when the read wire is driven, it would appear as if there were no magnetic moment at all, giving a resistance in between the totally-positive and totally-negative cases.

But here's the problem: though you have an inherently 3-state system, and an excellent way to read the state of the system, you still need to get an electric signal out of it, and that signal is what will be prone to noise. Even then, you're still going to have problems: for the MRAM idea, the resistance won't necessarily be linear, so unless you tune your material just right, this third value won't be exactly in between the other two. Capacitors in DRAM storage have a similar problem: a half-charged capacitor will need to be recharged much more often than a fully-charged one (it's an exponential decay, so it's easy enough to figure out... I just don't feel like doing it right now).
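(For what it's worth, here's a toy version of that refresh arithmetic. It assumes pure RC decay toward 0 V and evenly split sense windows, both of which are assumptions; which level becomes the refresh bottleneck depends on the leakage model and where the thresholds sit, but either way the allowed droop per level shrinks compared to binary.)

```python
from math import log

def hold_time(v_start, v_floor, tau=1.0):
    """Time a leaking cell stays above v_floor, assuming simple exponential
    decay toward 0 V: V(t) = v_start * exp(-t / tau)."""
    return tau * log(v_start / v_floor)

V = 1.0  # full-rail stored voltage (illustrative)

# Binary: one sense threshold at V/2.
print(hold_time(V, V / 2))          # ~0.69 * tau

# Hypothetical ternary: levels 0, V/2, V with window edges at V/3 and 2V/3.
print(hold_time(V, 2 * V / 3))      # full cell: ~0.41 * tau
print(hold_time(V / 2, V / 3))      # half cell: ~0.41 * tau
# Both ternary levels run out of headroom sooner, so refresh must be more frequent.
```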
 
What about the following idea?

You have a few different compressor implementations in hardware, exposed through the drivers, so the developer can pick whichever one he wants for any given scene. That way they can arrange the data flow to suit both the needs of the scene and the compressor.

It costs more, and it forces developer intervention, but in principle no one is a better predictor of any given scene than the developer himself (at least one hopes).
 
The problem with wavelet compression, AFAIK, is the amount of work the decompression stage would need to carry out. Not to mention it would be very expensive in transistor count. I don't think it's ideally suited for graphics cards at this time.

Still, they are by far the coolest algorithms that I know of, just from an aesthetic point of view.
 
I thought the topic was lossless data compression, since the original question was about saving bandwidth. Last time I checked, it wasn't acceptable to have data loss when moving data across a bus.

There are certainly a lot of possibilities for compressing data with approximating functions, but the question is whether there are any new ways to compress data more effectively with exacting functions.

Yes, it's probably not a real word: exacting == lossless, and approximating == lossy.
 
Well, it really depends upon what's being compressed, doesn't it? Some amount of loss may be okay for, say, vertex data. But you'd definitely want to give developers control over such things, as they have with texture compression.
 
The obvious problem with giving developers control is that any time they switch compression schemes, the differences in visual artifacts will be highly apparent to the eye.

Which means you can't be switching too often. And the longer a scene is, the more randomness is involved (which would mess up the dev's statistics for predicting), and the point might be lost.

It's probably more suitable for certain types of games (a Doom 3 room-by-room sort of thing) and consoles.
 
I think the idea is that you'll only need to use one compression scheme for a given type of data. For example, objects which move every frame typically exist in object space before being passed to the graphics card, and as such they will typically be very small in size. So one type of compression may work very well on them but not on, say, terrain.

But terrain tends to be very smoothly-rolling, so some sort of HOS implementation or such may work very well in compressing this data.

And so on. I don't think you'd be switching compression schemes over time, but rather between data sets that have different characteristics.
 