Killer-Kris
Regular
Most methods used to run multiple tasks in parallel therefore rely on locks and synchronization mechanisms. They layer the workload in time, executing sequences of tasks in lockstep.
But why would you do so? Just because a processor is essentially a state machine doesn't mean you have to use the same model to describe your tasks. After all, that Turing machine can solve any task in a predictable time frame, as long as the problem isn't a hard one.
So instead of trying to come up with better locking mechanisms, better task division, or whatever else still lives within the state-machine paradigm, we need to come up with something that is free-running.
Or, in other words: we need a paradigm that doesn't switch between predefined states representing fixed lengths of time, but instead averages new states from multiple asynchronous occurrences.
That way, you need no synchronization, and you don't require atomic operations either.
Simply gather what data is available, make a best guess for the current time, and update.
An additional bonus of such a model is that you can steer it purely by priority, and you aren't forced to process every object each time frame.
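To make that concrete, here's a rough C++ sketch of what such a free-running update could look like. All the names here (Snapshot, Object, estimate_position) and the toy dynamics are my own illustration, not an established API. One honest caveat: on real hardware you still want to publish each snapshot through an atomic pointer so readers never see a half-written state, so "no atomic operations at all" is a bit optimistic; everything else is lock-free and tick-free as described.

```cpp
#include <atomic>
#include <chrono>
#include <vector>

using Clock = std::chrono::steady_clock;

// A timestamped snapshot of one object's state.
struct Snapshot {
    Clock::time_point stamp;
    double position = 0.0;
    double velocity = 0.0;
};

struct Object {
    // The one concession to the hardware: snapshots are published through an
    // atomic pointer so readers never observe a torn write. Old snapshots are
    // simply leaked in this sketch; a real version needs hazard pointers or
    // epoch-based reclamation.
    std::atomic<const Snapshot*> latest{new Snapshot{Clock::now()}};
};

// Extrapolate a possibly stale snapshot to the caller's best guess of "now".
double estimate_position(const Snapshot& s, Clock::time_point now) {
    double dt = std::chrono::duration<double>(now - s.stamp).count();
    return s.position + s.velocity * dt;
}

// One free-running step: gather whatever neighbour data happens to be
// available, extrapolate it to the current time, derive a new state, publish.
// No locks, no barriers between objects, no global tick.
void update(Object& self, const std::vector<Object*>& neighbours) {
    const auto now = Clock::now();
    const Snapshot* mine = self.latest.load(std::memory_order_acquire);
    const double here = estimate_position(*mine, now);

    double pull = 0.0;  // toy dynamics: drift toward the neighbours
    for (const Object* n : neighbours) {
        const Snapshot* s = n->latest.load(std::memory_order_acquire);
        pull += estimate_position(*s, now) - here;  // s may be stale; that's fine
    }

    self.latest.store(new Snapshot{now, here, mine->velocity + 0.1 * pull},
                      std::memory_order_release);
}
```

Because every reader extrapolates to its own notion of "now", it doesn't matter which generation of snapshot it happens to catch; staleness just becomes a slightly longer extrapolation. That's also what makes priority-based scheduling natural: low-priority objects simply get updated less often and readers extrapolate over the gap.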
I've often thought about a parallel processing simulation that relies on current time and interpolation/extrapolation. The only problem is that it still relies on synchronization; it just attempts to reduce it. Each object's state is essentially double-buffered: the front buffer is the object's state at a specified time and is freely readable, while the back buffer is the object's state during processing. The object locks the front buffer and swaps when it's ready for an update.
And I imagine that in most game-type simulations the object threads would likely be sleeping the majority of the time rather than constantly locking and swapping buffers, which means that most of the active threads (i.e. objects) will have free access to the data whenever they need it.
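For concreteness, here's roughly how I picture that double-buffered scheme in C++. The names (SimObject, read_front, render_position) and the toy integration are illustrative only, not a definitive implementation:

```cpp
#include <chrono>
#include <mutex>

using Clock = std::chrono::steady_clock;

struct State {
    Clock::time_point stamp;   // the simulation time this state is valid for
    double position = 0.0;
    double velocity = 0.0;
};

class SimObject {
public:
    // Readers hold the lock only long enough to copy the front buffer,
    // then interpolate/extrapolate on their own private copy.
    State read_front() const {
        std::lock_guard<std::mutex> lock(front_mutex_);
        return front_;
    }

    // Run by the object's own thread. All the heavy work happens on the
    // unshared back buffer; the lock is held only for the publish.
    void step(Clock::time_point now) {
        double dt = std::chrono::duration<double>(now - back_.stamp).count();
        back_.position += back_.velocity * dt;   // toy integration
        back_.stamp = now;

        std::lock_guard<std::mutex> lock(front_mutex_);
        front_ = back_;   // with heavyweight states you'd swap pointers instead
    }

private:
    mutable std::mutex front_mutex_;
    State front_{Clock::now()};
    State back_ = front_;
};

// A reader (e.g. the renderer) extrapolates the published state to its own
// clock, so it never waits for the object's thread to finish a step.
double render_position(const SimObject& obj, Clock::time_point now) {
    State s = obj.read_front();
    return s.position
         + s.velocity * std::chrono::duration<double>(now - s.stamp).count();
}
```

Since the lock is only held for a copy or pointer swap, readers and the object's own thread almost never collide, which matches the intuition above: sleeping object threads leave the front buffer freely readable the vast majority of the time.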