State machines.

Frank

As we all know, given (close to) infinite time, a Turing machine, the simplest logical device ever devised (a one-state machine), can solve any solvable problem. We classify a problem as "hard" when no solution to it exists that runs within a specified time.

Processors are about maximizing throughput, and so we require everything to finish as fast as possible. GPUs specifically like their problems small and fast. And while CPUs don't mind larger problems that run longer, both require execution to be atomic, or at least predictable in time.

Amdahl is famous for putting this into writing. And, as it turns out, the main bottleneck isn't so much the amount of calculations required (as you can always use a faster approximation if needed), but the amount of interaction.

Essentially, that is because the processors are still state machines, executing a multitude of actions that might depend on each other's results to continue.


Most methods that are used to run multiple tasks in parallel therefore use locks and synchronization mechanisms. They layer the workload in time, executing sequences of tasks in lockstep.
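The lockstep layering described above can be sketched with a barrier: workers do one step of work, then all wait for each other before any of them starts the next step. This is a minimal illustration using Python's `threading.Barrier`; the names `worker` and `results` are just for the example.

```python
import threading

N_WORKERS = 4
N_STEPS = 3
barrier = threading.Barrier(N_WORKERS)
results = [[] for _ in range(N_WORKERS)]

def worker(i):
    for step in range(N_STEPS):
        # Do this step's share of the work...
        results[i].append((step, i))
        # ...then wait until every worker reaches the same point, so
        # step k+1 never starts before all of step k has finished.
        barrier.wait()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The barrier is exactly the "layering in time" the post complains about: the slowest worker in each step stalls everyone else.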

But, why would you do so? Just because a processor is essentially a state machine, doesn't mean you have to use a similar model to describe your tasks. Because, essentially, that Turing machine can solve any task in a predictable time frame, as long as it isn't a hard problem.

So, instead of trying to come up with better locking mechanisms, a better task division, or whatever else still uses the state machine paradigm, we need to come up with something that is free-running.

Or, in other words: we need a paradigm that doesn't switch between predefined states that represent fixed lengths of time, but instead averages new states from multiple asynchronous occurrences.

That way, you need no synchronization, and you don't require atomic operations either.

Simply gather what data is available, make a best guess for the current time, and update.

An additional bonus of such a model is that you can steer it solely by priority: you aren't forced to process every object each time frame.
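One way to read the proposal: every object carries the timestamp of its last update, reads whatever (possibly stale) neighbour state is already available, extrapolates it to "now", and updates — no locks, no global step. The sketch below is one possible interpretation, not anything the poster specified; the blend weights and the staleness-based priority are made up for illustration.

```python
import heapq

class Obj:
    def __init__(self, name, pos, vel):
        self.name, self.pos, self.vel = name, pos, vel
        self.t = 0.0  # time of last update

    def estimate(self, now):
        # Best guess for the current time from the last known state.
        return self.pos + self.vel * (now - self.t)

    def update(self, now, neighbours):
        # Read neighbours without locks: stale values are tolerated,
        # because estimate() extrapolates them to "now".
        crowd = sum(n.estimate(now) for n in neighbours) / len(neighbours)
        self.pos = 0.9 * self.estimate(now) + 0.1 * crowd
        self.t = now

objs = [Obj(f"o{i}", float(i), 0.5) for i in range(4)]

def tick(now, budget):
    # Priority = staleness; update only the most out-of-date objects,
    # never the whole set, so the loop is steered purely by priority.
    stale = heapq.nlargest(budget, objs, key=lambda o: now - o.t)
    for o in stale:
        o.update(now, [n for n in objs if n is not o])

for step in range(1, 6):
    tick(float(step), budget=2)
```

Note that nothing here waits on anything else: each update consumes whatever data happens to exist, which is the "gather, guess, update" loop from the post.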
 
Amdahl is famous for putting this into writing. And, as it turns out, the main bottleneck isn't so much the amount of calculations required (as you can always use a faster approximation if needed), but the amount of interaction.

While this thread is still young, I'd like to point out that Amdahl's law does not exclusively apply to computation versus communication. To steal straight from wiki...
...used to find the maximum expected improvement to an overall system when only part of the system is improved.

This could even be between subsets of computation, i.e. which nets you the larger gain for a given workload: optimizing the multiplication instruction or the addition instruction?
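That comparison falls straight out of the formula. In Amdahl's law, speeding up a fraction `p` of the work by a factor `s` gives an overall speedup of 1 / ((1 − p) + p/s). The workload percentages below are invented for the example:

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of the work is sped up by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# Speeding up multiplication (say, 40% of the workload) by 10x...
mul_gain = amdahl_speedup(0.40, 10)   # ~1.56x overall
# ...versus speeding up addition (say, 20% of the workload) by 100x.
add_gain = amdahl_speedup(0.20, 100)  # ~1.25x overall
```

The smaller per-instruction speedup on the larger fraction wins, which is the point: the unimproved part of the system dominates.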
 
Last edited by a moderator:
We need Reverse HyperThreading

:cool:
General case is theoretically impossible due to side effects in code (mutation, etc). There are some languages that are aiming towards auto-parallelization, but they are not your standard C/Java/C# things at all. (hooray functional programming)
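The side-effect problem is easy to show concretely. A pure function can be mapped over its inputs in any order or in parallel, because each call depends only on its argument; a function that mutates shared state cannot, because the interleaving of calls changes the outcome. A small Python illustration (the function names are arbitrary):

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # Pure: the result depends only on the argument, so calls can be
    # run in any order, on any worker, and trivially parallelized.
    return x * x

with ThreadPoolExecutor(max_workers=4) as ex:
    squares = list(ex.map(square, range(10)))

counter = 0
def bump(_):
    # Impure: mutates shared state. Parallel calls can race on the
    # read-modify-write of `counter`, so no compiler can safely
    # auto-parallelize code like this in the general case.
    global counter
    counter += 1
```

Functional languages make the first shape the default, which is why they are the usual candidates for auto-parallelization.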
 
Well, I had trouble understanding what he was trying to get at. The Turing model comments are not really relevant, since Turing models determine what is computable, not how fast it runs. Computational complexity classes (problem difficulty) are a whole 'nother concept, and even there, we are not truly sure whether scalable specialized machines can be built that solve problems asymptotically faster than a TM (although we hope and suspect that quantum computers can, for a subset of problems).

Anyway, isn't Folding@Home more or less what you're talking about? Not the fact that it's distributed, but the mechanism by which it decomposes simulations of kinematic state in such a way as to allow reconstruction of global solutions from distributed microsimulations -- as opposed to a straightforward divide-and-conquer parallelized dynamics simulation.
 
As we all know, given (close to) infinite time, a Turing machine, the simplest logical device ever devised (a one-state machine), can solve any solvable problem.

A Turing machine can have more than one state (I'm surprised no one caught this before me... :) ). Although I guess you can get by with only one state by extending the alphabet it uses.
 
Multistate, multihead, multitape Turing machines are all equivalent in power to a single-state Turing machine.


By "single state", I don't mean that its state table has only a single entry. I mean, it can have two separate state tables, two heads, and two tapes. (think concurrent TM)

Basically, nothing is more powerful (can compute functions which a TM can't) than a single-tape, single-head TM. At least, that's the Church-Turing thesis.
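A single-tape, single-head TM really is this small. Here is a toy simulator with one working state plus a halt state; the machine inverts a binary string, and (echoing the point above about trading states for alphabet symbols) all the "logic" lives in the symbol table. Names like `run_tm` and `NOT_RULES` are just for this sketch.

```python
def run_tm(tape, rules, state="run", blank="_"):
    """Run a single-tape, single-head Turing machine until it halts."""
    tape = list(tape) + [blank]
    head = 0
    while state != "halt":
        sym = tape[head]
        # Each rule maps (state, symbol) -> (next state, symbol to write, move).
        state, write, move = rules[(state, sym)]
        tape[head] = write
        head += 1 if move == "R" else -1
        if head == len(tape):
            tape.append(blank)  # the "endless tape": grow on demand
    return "".join(tape).rstrip(blank)

# One working state; on blank, halt. The alphabet does the work.
NOT_RULES = {
    ("run", "0"): ("run", "1", "R"),
    ("run", "1"): ("run", "0", "R"),
    ("run", "_"): ("halt", "_", "R"),
}
```

This also makes the earlier objection concrete: the machine's *control* has one state, but the overall configuration (tape contents plus head position) ranges over infinitely many states.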

Now, Roger Penrose conjectured that if Quantum Gravity is based off of 4D manifolds, you might end up with a physical computation capability that can decide the 4D-manifold classification problem (*undecidable by TMs*), but not classically solvable. Of course, he was mentally masturbating to try and find a justification for how human consciousness could not be computable by a TM.
 
Well, the future lies in functional programming and there are very effective models for concurrent programming being developed (see join-calculus).
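The core idea of a join pattern — a handler that fires only when messages are pending on *all* of its channels — is small enough to sketch. This toy `Join2` class is not real join-calculus (no concurrency, no pattern matching), just an illustration of the shape:

```python
import queue

class Join2:
    """Fire `handler(a, b)` whenever both channels have a pending message --
    a toy, single-threaded version of a two-way join pattern."""
    def __init__(self, handler):
        self.a, self.b = queue.Queue(), queue.Queue()
        self.handler = handler

    def send_a(self, x):
        self.a.put(x)
        self._try_fire()

    def send_b(self, y):
        self.b.put(y)
        self._try_fire()

    def _try_fire(self):
        # The join condition: one message available on each channel.
        if not self.a.empty() and not self.b.empty():
            self.handler(self.a.get(), self.b.get())

sums = []
j = Join2(lambda x, y: sums.append(x + y))
j.send_a(1)   # nothing fires yet: channel b is empty
j.send_b(2)   # both channels ready -> handler fires with (1, 2)
```

In real join-calculus implementations the sends can come from concurrent processes and the join is resolved atomically, which is what makes it attractive as a concurrency model.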

But I must also admit that I don't really understand what Frank wanted to say. How are you going to average states? It sounds a bit too non-deterministic to me :)
 
As we all know, given (close to) infinite time, a Turing machine, the simplest logical device ever devised (a one-state machine), can solve any solvable problem. We classify a problem as "hard" when no solution to it exists that runs within a specified time.
The overall state of the system is not one state. The theoretical "endless tape" that serves as memory allows for infinite states.

Processors are about maximizing throughput, and so we require everything to finish as fast as possible. GPUs specifically like their problems small and fast. And while CPUs don't mind larger problems that run longer, both require execution to be atomic, or at least predictable in time.
I don't see why a CPU or GPU should care whether a problem is predictable in time.

Most methods that are used to run multiple tasks in parallel therefore use locks and synchronization mechanisms. They layer the workload in time, executing sequences of tasks in lockstep.

But, why would you do so? Just because a processor is essentially a state machine, doesn't mean you have to use a similar model to describe your tasks. Because, essentially, that Turing machine can solve any task in a predictable time frame, as long as it isn't a hard problem.
I do not think there is a requirement that a Turing Machine solve anything in a predictable time frame.
It may also be considered impossible.

If such prediction were possible, then the halting problem would be solvable. You could just predict the running time and see that a problem does not halt when the prediction is infinite.
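That reduction can be written out directly. If a total `predict_runtime` existed, deciding halting would be one comparison away; since halting is undecidable, no such function can exist. The stub below is purely illustrative — `predict_runtime` is the hypothetical function the argument rules out:

```python
import math

def predict_runtime(program, tape):
    # Hypothetical: a total function returning the machine's running
    # time (math.inf for non-halting runs). The argument above shows
    # it cannot exist, so this stub only raises.
    raise NotImplementedError("no such total function can exist")

def halts(program, tape):
    # If predict_runtime were real, this would decide the halting
    # problem: finite predicted time means the machine halts.
    return math.isfinite(predict_runtime(program, tape))
```

So any scheme that promises exact run-time prediction for arbitrary programs is promising a halting-problem oracle.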
 
"Frank" defined a "hard problem" to be one that cannot be solved on a Turing machine, i.e. non computable.

He has therefore stated that a Turing machine can solve any problem that can be solved on a Turing machine, QED. :rolleyes:


This, and a few other posts, might be why he is now banned.
 
Uhm, what do complexity classes have to do with this? Basically, if a problem is in P, then it is solvable in polynomial time on a single-tape TM.

But I just don't see what the thread starter is trying to suggest. Somehow it sounds as a non-deterministic machine to me (so no real practical use). Maybe a small explanation, Fred?
 
Uhm, what do complexity classes have to do with this? Basically, if a problem is in P, then it is solvable in polynomial time on a single-tape TM.

It has nothing to do with complexity classes, beyond that the original poster probably just learned about them and is working on integrating and expanding on that knowledge.

Try rereading his post from the midway part where the discussion on better locking and synchronization begins, and just forget about the Turing machine and complexity class babble. It might make a little more sense what they're trying to get at (or at least how I interpreted it).
 
Last edited by a moderator: