You still haven't defined what such synchronization is. To say that my Pentium CPU can't run a general-purpose simulation of itself that runs faster than itself is patently obvious; the existence of such a simulation could lead to infinite computational power (keep running the sim recursively). However, there is a difference between asserting this and asserting that X "takes no time" in the universe.
Synchronization: the act of establishing relative order or simultaneity. Whether five billion events occur at the same time, one after the other, or in some mathematically complex arrangement, the flow of time does not slow down to indicate that additional processing is going on to figure out how they are placed relative to each other.
For a system running a simulation, establishing that order exists in the context of the simulation takes additional time. It does not take additional time for a given reality.
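To make that concrete, here is a toy discrete-event loop in Python (every name in it is mine, purely for illustration): the host machine pays real wall-clock cost to keep the events ordered, while the simulated clock simply jumps to each event's timestamp, so no extra time passes inside the simulation to establish that order.

```python
import heapq

# Toy discrete-event loop (all names hypothetical). The host machine pays
# real wall-clock cost to pop events in order, but the simulated clock
# just jumps to each event's timestamp: no additional *simulated* time is
# spent establishing which event came "first".

events = []                                   # heap of (simulated_time, label)
heapq.heappush(events, (5.0, "collision"))
heapq.heappush(events, (1.0, "spawn"))
heapq.heappush(events, (5.0, "explosion"))    # simultaneous with "collision"

sim_clock = 0.0
while events:
    sim_clock, label = heapq.heappop(events)  # ordering work done by the host
    print(f"t={sim_clock}: {label}")
```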
This trivializes the very real problem of information loss in physics, namely: do black holes erase information (convert pure states into mixed states and violate unitary evolution)? This is an intense debate in physics today. You simply do not know whether the universe has exactly the amount of storage it needs to store an ever-increasing amount of entropy; maybe black holes are the natural garbage-collection mechanism. And as I said, on the other side is the holographic principle, which implies it has too much storage for what it does.
Black holes are an imperfect and non-permanent garbage collector, if Hawking radiation is substantiated.
I don't even care if no causal connection can be made between the information that went into the black hole and what came out, just whether the quantity is larger. By measure of mass and energy, it will be exactly the same. I do not know whether the minimum representation will be larger or not.
I only care about the sum total of the symbols needed to represent the state of the universe.
I don't believe that's what I claimed. I claimed only that the time needed to compute step N, call it T(N), can grow smaller. This is true for the majority of inputs. In fact, for some inputs, it doesn't even need to keep growing in space.
Define exactly what you mean by that. T(N) for HashLife can grow smaller versus T(N) for a standard version, or T(N) for HashLife versus T(N/2) for HashLife?
The way I read one of your previous posts, it sounded like you were saying it got faster and faster the larger the problem size got. That would be wrong.
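For what it's worth, here is a minimal Python sketch of the memoization idea. It is not Gosper's actual HashLife, just a 1D rule-110 automaton with an arbitrarily chosen block width, but it shows the sense in which T(N) can shrink on repetitive inputs rather than grow: once a block has been evolved, evolving it again costs nothing.

```python
from functools import lru_cache

RULE = 110   # elementary 1D CA rule, an arbitrary stand-in for Life
W = 8        # width of the blocks that get memoized (also arbitrary)

def rule_bit(l, c, r):
    # Next state of one cell: look up the (l, c, r) neighbourhood in the
    # rule's 8-entry truth table.
    return (RULE >> ((l << 2) | (c << 1) | r)) & 1

@lru_cache(maxsize=None)
def step_block(window):
    # window is a string of W + 2 cells; return the next generation of the
    # W interior cells. Thanks to lru_cache, each distinct window is only
    # ever computed once.
    cells = [int(ch) for ch in window]
    return ''.join(str(rule_bit(cells[i - 1], cells[i], cells[i + 1]))
                   for i in range(1, len(cells) - 1))

def step_row(row):
    # Advance a whole row (a string of '0'/'1', length a multiple of W) by
    # stitching together memoized blocks, wrapping around at the edges.
    n = len(row)
    assert n % W == 0
    return ''.join(
        step_block(''.join(row[(start - 1 + k) % n] for k in range(W + 2)))
        for start in range(0, n, W))

row = '00000001' * 32        # a highly repetitive initial row
for _ in range(100):
    row = step_row(row)
print(step_block.cache_info())  # mostly hits: repeated blocks cost nothing
```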
However, we have been talking of game simulations, and I'm not sure I need my game simulation to exhibit universality or correctly deal with it without loss.
Universality, not so much; that any loss is managed or controlled is more important.
I disagree with your definition of a random outcome, which sounds suspiciously like bogus definitions of free will in philosophical arguments. It can't even be converted into a formal mathematical logic.
No, it's an argument for the desired kind of randomness for a simulation.
That if something is based on random chance, nothing in the state of the system at time t will indicate how it will turn out at t+1.
Since the examples of randomness you showed are non-computable, I do not see how well they could be applied to a simulation that must transition from t to t+1 in finite time via some form of calculation.
That is, of course, assuming such completely random and totally independent phenomena exist. I didn't say it was proven that they do, only that, if they do, we lack the means to accurately reproduce them.
You can't rule out, for example, that a given outcome is simply the result of an unknown cellular automaton and an unknown seed (which could, in fact, be the sum total of all information in the simulation). For any "random outcome" by your definition, I can simply propose an underlying mechanism responsible for it, coupled to inaccessible state.
The problem is that if the unknown seed per outcome is not totally random, then some pattern will emerge. Seed values are used by pseudorandom generators, so there is a way to figure out how the input is mapped to the output.
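A trivial Python sketch of that point, using a textbook linear congruential generator (the constants are the classic Numerical Recipes ones, picked arbitrarily): the seed plus the recurrence fixes the entire output stream, so anyone holding that hidden state can map input to output exactly.

```python
class LCG:
    # Textbook linear congruential generator; constants chosen arbitrarily
    # for the sketch (the classic Numerical Recipes pair).
    def __init__(self, seed):
        self.state = seed & 0xFFFFFFFF

    def next(self):
        # state_{n+1} = (a * state_n + c) mod 2**32
        self.state = (1664525 * self.state + 1013904223) & 0xFFFFFFFF
        return self.state

gen = LCG(seed=42)
observed = [gen.next() for _ in range(5)]

# Anyone who knows the hidden state (the seed) replays the stream exactly.
replay = LCG(seed=42)
predicted = [replay.next() for _ in range(5)]
assert predicted == observed   # nothing about the outputs was unpredictable
```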
If we're in the business of simulating the world, that might indicate another metaphysical restriction:
Nothing random in a given reality is random to an external reality.
Randomness has nothing to do with determinism. It has to do with whether the sequence of outputs can be compressed into something smaller than the sequence itself.
I admit I'm not up on algorithmic definitions of randomness.
But if a random number generator outputs a string of ten binary zeroes, it's compressible.
But how does that make the generator non-random? The probability of any given combination of outputs is the same, even if some of them can be run through a compressor to produce a smaller representative string.
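For comparison, here is a rough Python illustration of the compressibility notion, with zlib standing in for a true Kolmogorov-style measure (which is uncomputable): a long run of zeros compresses to almost nothing, while bytes from the OS entropy pool usually do not compress at all.

```python
import os
import zlib

zeros = b'\x00' * 1024     # maximally regular "output"
noise = os.urandom(1024)   # bytes from the OS entropy pool

# The run of zeros compresses to a handful of bytes; the entropy-pool
# bytes typically come out slightly *larger* than they went in.
print(len(zlib.compress(zeros)))
print(len(zlib.compress(noise)))
```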
If randomness is not just a backward-looking measurement of compressibility, and there exist ways to produce it, then why do no true random number generators exist?
The applications for it exist: cryptology, simulation, etc.
When this occurs, we throw up our hands and say "I can't seem to find a mechanism to compute the series of outcomes other than just to list them".
In which case, it may mean that nothing in any of our simulations will ever meet our criterion for randomness, since we have the entire state and can see how the outcomes come about. An execution trace would show how every outcome was computed.
Try reading the chapter "Newway and the Cellticks" in Hans Moravec's _Mind Children_ for a gedanken experiment, or, more recently, the discovery of a possible mechanism in the cosmic microwave background for about 10k bits to have been encoded, by a creator or otherwise, as input from the external system.
Couldn't the external system have just wiped any such signature away as a matter of basic function?