Yes. My point was that the program had not been written previously. Someone searched through prime numbers to find one that corresponded to some runnable program that violated a patent. The resulting program was most likely completely useless; the sequence of primes is no more inherently useful than the sequence 1, 2, 4, 8, ..., for example.
Edit: on second thought, the prime could just as easily have been found by other methods. The point is that the encoding was likely not simply applied to an existing program.
That one example does not show this can be done with all programs, only that it may be done with some. There is no disputing that a particular program can be sent in such a small footprint.
A more general alternative would be highly modular programs that draw on a standard, very large common set of functions and algorithms (a bigger, more diverse standard library), where you send only the higher-level organization along with whatever glue code is needed. Once a series of programs was transformed and encoded this way, you could send hundreds, thousands, or millions of different programs in a very small memory footprint, since most of the code already resides on the recipient's computer: just send parameter adjustments, hierarchy info, and the code needed to tie the algorithms and functions together.
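A hypothetical sketch of this shared-library idea in Python (all the function names and the message format here are invented for illustration): both sides hold the same library, so the transmitted "program" is just a short pipeline description of names and parameters.

```python
# Hypothetical sketch: sender and receiver share the same large library
# of functions, so a program travels as a pipeline description (names,
# parameters, wiring) rather than as code. Names are invented here.
SHARED_LIBRARY = {
    "scale": lambda xs, factor: [x * factor for x in xs],
    "clip":  lambda xs, lo, hi: [min(max(x, lo), hi) for x in xs],
    "total": lambda xs: sum(xs),
}

# The only thing actually sent over the wire: a few dozen bytes of
# organization and parameter adjustments.
message = [
    ("scale", {"factor": 3}),
    ("clip",  {"lo": 0, "hi": 10}),
    ("total", {}),
]

def run(message, data):
    """Recipient side: assemble and execute the described pipeline."""
    for name, params in message:
        data = SHARED_LIBRARY[name](data, **params)
    return data

print(run(message, [1, 2, 5]))  # scale -> [3, 6, 15], clip -> [3, 6, 10], total -> 19
```

The saving comes entirely from the shared context: the message only makes sense because the receiver already stores the implementations.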
As for your second claim, as I tried to explain in my previous post, compression of random data is impossible. For every bit you save on one possible input, you lose at least one bit on another possible input.
This is regardless of what algorithm you are using. You can google the counting argument, for example.
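The counting argument mentioned here can be checked directly with a pigeonhole count: there are 2^n distinct n-bit inputs, but only 2^n - 1 bit strings shorter than n bits, so no lossless scheme can shorten every input.

```python
# Pigeonhole count behind the counting argument: 2**n distinct n-bit
# inputs, but only 2**n - 1 bit strings of length strictly less than n,
# so any code that shortens one input must lengthen another.
n = 16
inputs = 2 ** n                                   # 65536 possible inputs
shorter_outputs = sum(2 ** k for k in range(n))   # lengths 0 .. n-1
assert shorter_outputs == inputs - 1              # always one short
print(inputs, shorter_outputs)
```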
I googled it but couldn't find the relevant info; maybe it addresses the following.
There are infinitely many deterministic sequences of digits with random-looking statistical properties. Those properties mean that any specific finite sequence of digits will be found within such a sequence an infinite number of times; the only catch is that the longer the target sequence, the rarer its occurrences become.
Now, it would definitely be computationally prohibitive at present, but it stands to reason that once the occurrence is found, you only have to say: start at digit number Xxxxx and take, say, one trillion digits to the right. Presto, you've just sent on the order of a terabit.
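As a toy illustration of that scheme (with a hard-coded window of pi's decimal digits standing in for a real digit source, so only short targets near the start can be found):

```python
# Toy sketch of the "index into pi" scheme: encode a digit string as
# (start, length) into a digit expansion. The window below is just the
# first 50 decimal digits of pi, hard-coded for illustration.
PI_DIGITS = "14159265358979323846264338327950288419716939937510"

def encode(target):
    """Return (start, length) if target occurs in our digit window."""
    pos = PI_DIGITS.find(target)
    return (pos, len(target)) if pos != -1 else None

def decode(start, length):
    """Recover the digit string from its coordinates."""
    return PI_DIGITS[start : start + length]

code = encode("358979")
print(code, decode(*code))  # (8, 6) 358979
```

The catch, consistent with the counting argument, is that the first occurrence of a typical k-digit string is expected somewhere around position 10^k, so writing down the start index generally takes about as many digits as the message itself.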
Now the problem, obviously, is that for a given sequence with these properties, say pi, the first occurrence of your target is very likely to be very, very far out. If it starts at, say, exactly digit number one quadrillion, that position is easy to state, and the digits can be extracted with a digit-extraction formula. If the first occurrence lands at a less tidy position, it might be tough.
But given that the occurrences repeat infinitely, it is a certainty that one will eventually begin at an easily summarizable position, say digit number one quadrillion quadrillion quadrillion. How that would affect the performance of the formula for calculating individual digits of pi, I don't know. But it seems very possible to send an arbitrarily large sequence of digits in a very small number of bits; it would just be computationally extremely demanding to do so.
I've heard quantum computers can be much faster at search problems (though for unstructured search, Grover's algorithm gives a quadratic speedup, not an exponential one), but I'm not sure whether that would help here. If it did, you'd search pi for the sequence you're after and simply give the starting digit, plus 1M or 1B or 1T as the size, i.e. the number of digits to read after it.