flf:
1st §: yes, 2nd §: not really, 3rd §: no
There's still more work to do with compression, but there's no use in searching for the holy grail that can compress random data, since the existence of such a compressor can be disproved theoretically.
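For anyone who hasn't seen it, the counting (pigeonhole) argument is short. The little Python sketch below is just for illustration: there are more inputs of length n than there are outputs shorter than n, so any lossless scheme that shrinks some inputs must leave others the same size or larger.

# Counting argument: a lossless compressor cannot shrink every input.
# There are 2**n bit strings of length n, but only 2**n - 1 strings of
# length strictly less than n, so at least one input must map to an
# output of equal or greater length (or two inputs collide).
n = 16
inputs = 2 ** n
shorter_outputs = sum(2 ** k for k in range(n))  # equals 2**n - 1
print(inputs, shorter_outputs)  # 65536 vs. 65535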
Work should instead be directed at finding the typical redundancy in the data type you want to compress, and at finding fallbacks that are acceptable for the cases where you can't compress.
You might find a system that is lossless in most cases, but breaks down and becomes lossy for unusual special cases. When it becomes lossy, it should be lossy in a way that damages the data as little as possible. Or you could have fallbacks that are still lossless, but use slower memory for the excess data.
Examples are Z3, or 3Dlabs SuperScene antialiasing.
Simon F:
No, I wasn't referring to any ideas at all.
I just left the possibility open. While theories about data compression and N-value systems are well known, I think flf is right that little work has been done on actually implementing them in hardware. So I still hold it possible that someone can find a way to do N-value operations more efficiently than binary ops.
I.e., if someone makes a trit memory cell or adder that is less than 58% larger per symbol than its binary counterpart, it's a win. Same with a trit multiplier that is less than 150% larger.
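For reference, a back-of-envelope sketch of where those break-even figures come from, assuming memory and adder cost scales roughly linearly with digit count while a parallel multiplier's partial-product array scales roughly quadratically:

from math import log2

bits_per_trit = log2(3)  # one trit carries about 1.585 bits of information
# Memory cells and adders scale roughly linearly with the number of digits,
# so a trit cell breaks even if it is at most log2(3) times the size of a
# bit cell, i.e. about 58% larger per symbol.
print(f"memory/adder break-even: {bits_per_trit - 1:.1%} larger")
# A parallel multiplier's partial-product array grows with the square of
# the digit count, so the break-even ratio is log2(3)**2, roughly 2.5x,
# i.e. about 150% larger per symbol.
print(f"multiplier break-even: {bits_per_trit ** 2 - 1:.1%} larger")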