Of course, and I already gave the example of a monster knocking down trees in the forest. But AI observers need not have a completely accurate and rendered view of the local geometry in their area. I mean, come on, if a tree falls on the horizon, and an AI actor is coming towards me, should he have to execute a "jump" over the tree? I won't see it, and such a waste of resources adds nothing to the gameplay or the believability of the unseen AI actor. The AI actor should work with a gross representation of the world when it is pathfinding and too far away for small-scale obstacles to matter.
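The "gross representation" idea can be sketched in miniature: agents far from the player plan over a coarse route that simply omits small obstacles, while nearby agents get a detailed route. All names and the 50-unit radius below are illustrative assumptions, not from any particular engine.

```python
import math

FINE_RADIUS = 50.0  # assumed cutoff beyond which small obstacles are ignored

def coarse_route(start, goal):
    # Far away: straight shot; a fallen tree is not even represented here.
    return [start, goal]

def fine_route(start, goal):
    # Stand-in for a detailed navmesh/A* query that respects local obstacles.
    midpoint = ((start[0] + goal[0]) / 2, (start[1] + goal[1]) / 2)
    return [start, midpoint, goal]

def plan_path(agent_pos, goal, player_pos):
    # Pick the world representation by distance to the observer.
    if math.dist(agent_pos, player_pos) > FINE_RADIUS:
        return coarse_route(agent_pos, goal)
    return fine_route(agent_pos, goal)
```

The distant agent never "jumps" the tree because, at that level of detail, there is no tree.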
We're crossing arguments here. I agree with the idea that AI agents don't require perfect or wide-ranging observability.
I was just commenting that the proper way to implement imperfect perception is to filter how data is read by the agent, not to let the actual data state become inconsistent. If a person misreads a map and thinks Albuquerque is on the next left, it doesn't make the city fly off into space, as much as some would like it to.
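A minimal sketch of "filter the read, not the state": the world keeps one authoritative value, and imperfect perception lives entirely in the accessor, so the map stays put even when the agent misreads it. The names and error bound are illustrative.

```python
import random

world = {"albuquerque": (35.08, -106.65)}  # ground truth, never mutated

def perceived_pos(key, error=0.5, rng=random):
    # Noise is applied on the way out; the underlying data is untouched.
    x, y = world[key]
    return (x + rng.uniform(-error, error), y + rng.uniform(-error, error))
```

Every agent can misread the same location differently, and the simulation state stays consistent underneath.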
I really don't understand your argument. You're trying to claim that AI in games should be written so that the AI will comprehend every property of everything in the world, from the color and position of every leaf to semantic concepts like whether the tree was "big" or not? I mean, we are talking about CPU design in this forum, and as such, we are talking about the near term, what's achievable, not blue-sky "if we had a positronic CPU and Dr. Noonian Soong, then..."
I am arguing what's practical and realistic to implement while still maintaining a reasonable air of realism. You are countering that I am wrong because it won't work with some hypothetical, amazing AI that needs to perceive every last bit of foliage state.
I'm not arguing that AI should have perfect perception. I am stating that in a digital simulation, they will have perfect perception unless the simulation intervenes. Unless the designer can manage an explicitly indeterminate data value in a register or memory location, there is no question as to what a read to that location will produce.
It's the difference between thinking crazy and the world actually going crazy.
HashLife only derives the local values it needs. You can ask for the value of the cell at (x, y) at time t. It may calculate only that value, and may not calculate the value of any other cell at time t.
The local values are the neighboring cells. The cellular automaton simulation is based on a very simple interaction that is evaluated for every generation. This implementation is just a lot smarter about not repeating redundant work. The data is fully consistent in that no calculation for a given generation pulls data from the results of another generation.
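The redundancy-elimination can be sketched in miniature without the full quadtree: memoize the evolution of a small block, so identical blocks anywhere on the grid, at any generation, are computed exactly once. The 4x4 block size is an illustrative choice; real HashLife works on power-of-two quadtree nodes.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def step_center(block):
    """block: 4x4 tuple of tuples of 0/1 cells. Returns the center 2x2
    one generation later; only the center is fully determined by a 4x4
    view, since outer cells have neighbors we can't see."""
    def next_state(r, c):
        live = sum(block[r + dr][c + dc]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if not (dr == 0 and dc == 0))
        return 1 if live == 3 or (block[r][c] == 1 and live == 2) else 0
    return tuple(tuple(next_state(r, c) for c in (1, 2)) for r in (1, 2))
```

The second time the same block pattern shows up, anywhere, the cached result is returned without recomputation — and note the rule still reads only generation n to produce generation n+1, so consistency is preserved.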
It looks like a kind of pipelining of cell generations, paired with the memoization and canonization. I can't speak with true certainty because I haven't worked through the last code listing entirely yet. I'll try to fully comprehend it and correct myself if I have parts of it backwards.
If you're talking about physics absent EM and gravity, then the only relevance of stale data is collision detection. And this is only relevant if you can't rule out a collision. But it is unlikely you're going to be running the collision detection before all threads which are updating salient objects have finished first (e.g. the constraint-solving and integration steps).
I also mean things like decision making, AI communication, and player interaction. If there isn't something that maintains rough consistency, all of these start to mess up. Whenever something perturbs the simulation away from business as usual, it helps if the important parts don't continue as if it were business as usual.
It's not usually fatal to a game if the goofs are minor, and in most cases nobody so sloppily designs a game that the various parts are allowed to be out of phase with IO.
My original point was that some form of synchronization or coordination must be implicitly or explicitly maintained. If there is no synchronization at all, you cannot assume anything about what will finish before what. You can't have a precondition if there is no boundary to define what the common before and after is.
As for rendering, people play online multiplayer FPSes with inaccurate and delayed positions and firing cues to no ill effect, as long as the inaccuracy is only a few frames off. So I see no reason that both player and AI need to have frame-accurate consistency of information.
If I get lagged as a matter of routine on a local machine running single-player, I'm going back to playing minesweeper. People log off if the lag is too horrible.
Even net-based play has rough boundaries on how far one portion of the engine can outstrip the other. It's more relaxed than maintaining frame by frame accuracy, but it's not like the renderer or physics portions will over time accumulate such a disparity that one will be minutes ahead of the other in the case of a complete lack of synchronization.
Even if laggy, the game is usually smart enough not to create the impression it has data more recent than the last few frames displayed, because it will simply redisplay the same buffer over and over until the rest of the engine has caught up. Besides cosmetic effects, it's no good to render too far ahead in a dynamic simulation anyway.
No, gradient pathfinding makes no such assumption. In real world biological gradient pathfinding, the actual movement is stochastic. You may take a step, take a step backwards, or take the wrong step. All that matters is that the overall sum total of all movements adds up to progression along the gradient. Biology is not deterministic, but stochastic and error prone. Computer scientists are used to programming under assumptions of determinism and correct functioning of their computations, but that is not the *only* way to compute.
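The stochastic-movement point can be sketched as a biased one-dimensional random walk: any individual step may go the wrong way, yet a modest bias makes the long-run drift follow the gradient. The bias value is an illustrative assumption.

```python
import random

def biased_walk(start, goal, steps=10000, bias=0.6, rng=random):
    # Each step is random; only the *average* points along the gradient.
    pos = start
    for _ in range(steps):
        toward = 1 if goal > pos else -1
        # Most of the time step toward the goal, sometimes away from it.
        pos += toward if rng.random() < bias else -toward
        if pos == goal:  # the sum total of movements got there
            break
    return pos
```

No single step is guaranteed correct, and the walker still arrives with overwhelming probability; determinism of each step is not a precondition for the computation succeeding.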
I had the wrong objection; you are correct that as long as the approximation process is given enough time, it will reach the goal it was given.
The only question is, if the pathfinding algorithm is told by stale and incorrect data to go to point C, will it still go to point B like I wanted?
Can I have the secret to "garbage in / puppies, kittens, and unicorns out"?
It's far more often the case that AI acts like this because it is given complete omniscience about the world. Sure, there are cases you don't want to occur, but in my experience, these problems are not caused by synchronization issues but by AIs knowing the player's position and being able to see without occlusion. Sure, if your screen is updating at 1 Hz and the AI is running at 200 Hz, he may run rings around you and kill you before you see one update.
The case I outlined doesn't happen often because nobody throws synchronization away completely. AIs don't update irrespective of the rest of the game engine. Since their phases are kept in sync with the rest of the game, they don't get the chance to overshoot the current visible state.
But ensuring that doesn't happen does not imply the whole world's simulation must be run in lock-step, or that everything must be in a consistent state prior to render.
I never said things needed to be in lockstep. I said things shouldn't be allowed to go so far apart that it leads to an end result that should not be allowed in the rules of the simulation.
I don't mind minor inconsistencies if I do not see them all that often.
Rendering inconsistencies are usually only a frame or two in duration, so I don't mind a glitch in one frame in one level.
Beyond that, especially with mismanaged AI or physics, the state of the game-world becomes populated with bad data, and it ceases to be a simulation and becomes a garbage producer.
This is not necessarily a requirement of parallel algorithms. Some algorithms work fine with concurrent reads and writes of stale data. Whether or not serialized access is needed depends on the algorithm.
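As an illustration (my example, not anything from earlier in the thread): relaxation on a diagonally dominant linear system converges whether an update reads the previous sweep's values, fresh in-place values, or a mixture, and in any update order. Staleness changes how fast it converges, not whether it does.

```python
# Illustrative 3x3 diagonally dominant system; exact solution is (1, 1, 1).
A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 1.0],
     [1.0, 1.0, 6.0]]
b = [6.0, 7.0, 8.0]

def relax(x, order, sweeps=100):
    for _ in range(sweeps):
        for i in order:  # any order of updates works
            s = sum(A[i][j] * x[j] for j in range(3) if j != i)
            # In-place write: later updates in the same sweep read this
            # fresh value while others are still from the old sweep --
            # a deliberate mix of stale and new data.
            x[i] = (b[i] - s) / A[i][i]
    return x
```

This is the classic case where serialized, generation-consistent access is simply not needed by the algorithm.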
I was overly broad in my demand for order. If an algorithm has no issue with indeterminate write order, I have no issue. It's when the algorithm needs it and it isn't delivered that I worry.
Also, saying one moment follows another needs clarification.
As I gave in the HashLife example, one moment does not necessarily follow another in strict, monotonically increasing, step-at-a-time order.
Yes it does. The rules of the life simulation state that the calculations are applied instantaneously with respect to a given cell generation. That means that the state of a given generation only affects the calculations of the generation that immediately follows it.
There is time, the generations do not interfere with each other's calculations. It's just not our time.
If HashLife didn't do that, it would be wrong, not efficient.
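The rule in its plainest form: the update reads generation n from one buffer and writes generation n+1 into a fresh one, so no cell's new value can leak into its neighbors' calculations for the same generation. (The toroidal wrap-around here is just an illustrative boundary choice.)

```python
def life_step(grid):
    """One generation of Conway's Life on a toroidal grid of 0/1 lists."""
    h, w = len(grid), len(grid[0])
    def live_neighbours(r, c):
        return sum(grid[(r + dr) % h][(c + dc) % w]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
    # A brand-new buffer: generation n is never mutated while being read,
    # which is what makes the update "instantaneous" with respect to it.
    return [[1 if live_neighbours(r, c) == 3 or
                  (grid[r][c] and live_neighbours(r, c) == 2) else 0
             for c in range(w)] for r in range(h)]
```

Updating the grid in place would let freshly written values contaminate the same generation's calculations — exactly the kind of inconsistency that would make a HashLife-style optimization wrong rather than fast.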
Correctness is, after all, the largest obstacle to performance.
In fact, HashLife has profound philosophical ramifications for the nature of time and perceptions of our own universe if you mull it over. HashLife simulates incredibly large universes; in fact, 20 years ago, they were simulating one-billion-by-one-billion cell grids on ancient computers.
Outside time is meaningless to a properly constructed simulation. If someone built a mechanical HashLife simulator that took 10000 years to produce three generations, the time flow would still be three generations to the simulated cells.
It's when outside time concerns filter into the simulation that things go wrong.
In any system with "local" physics where effects propagate at a maximum speed (a speed of light), a HashLife-like algorithm can calculate the correct state of a subregion of space at an arbitrary time in the future, even if the universe being simulated is huge. Moreover, the more time steps you ask for, the faster it works per step; in effect, one can skip over steps without calculating them at all, by recombining memoized partial results.
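A sketch of the locality argument, assuming a Moore-neighborhood rule where effects travel at most one cell per generation per axis: to know a cell at time t, you need only the time-0 cells within Chebyshev distance t, no matter how big the rest of the universe is.

```python
def dependency_region(x, y, t):
    """Bounding box of time-0 cells that can influence (x, y) at time t."""
    return (x - t, y - t, x + t, y + t)

def can_influence(px, py, x, y, t):
    # "Speed of light" of one cell per generation: anything outside the
    # light cone provably cannot affect the queried cell yet.
    return max(abs(px - x), abs(py - y)) <= t
```

This is what lets a HashLife-style scheme answer queries about a small region without evolving the whole universe: everything outside the cone can be ignored with full correctness.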
Memoization speeds things up, it doesn't give a problem an asymptotically negative performance slope.
Don't forget that as far as we know, the world isn't perfectly deterministic, at least not in the subatomic realm.
In that case, true random chance is one thing we'll never get an algorithm to produce.
All that is required is a proper hash function and partition algorithm. Given a big enough lever, one can move mountains, given a good enough hash function, one can simulate the world.
Aside from the fact that every hash has some set of inputs it sucks at, it's also unlikely we'll find a satisfactory way to properly capture values that aren't fully representable.
With error being inevitable, we could simulate a world, it just won't be ours.