The Turing Test

The article "Blue Gene Might Manage Turing Test" inspired the following:

I think we currently lack the capability to properly express our intentions to a machine and thereby create a candidate A.I. that could pass a Turing Test. The problem is twofold: syntactically, we lack a programming language flexible, robust, and intuitive enough for anyone attempting to create a candidate A.I. to actually do so; and more fundamentally, we as finite beings may simply lack the capability not only to design such a programming language, but also to design an A.I. that could pass a Turing Test.

Some may say this is simply a matter of intelligence, and that in order to reach this level of intellect we simply need to evolve. I believe that in order to achieve this we would need to reach another level of consciousness, or become "Super-Aware". In order for humans to accomplish such a feat, we would have to be able to solve the "Entscheidungsproblem." This actually parallels the barrier to passing the Turing Test, in that any A.I. which could actually pass a Turing Test would itself need to be Super-Aware and solve the Entscheidungsproblem to pass the Test!

So until we solve the Entscheidungsproblem, we cannot create a candidate A.I. which could pass the Turing Test.
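
(As context for why this is such a high bar: Hilbert's Entscheidungsproblem asked for a general algorithm to decide the truth of mathematical statements, and Church and Turing proved in 1936 that no such algorithm can exist; Turing's proof runs through the closely related halting problem. Below is a minimal Python sketch of that diagonalization argument; all names in it are purely illustrative.)

```python
# Sketch of Turing's diagonalization argument: no function halts(p) can
# correctly predict, for every zero-argument program p, whether p() halts.
# All names here are illustrative; this demonstrates the proof idea only.

def make_paradox(halts):
    """Given any candidate halting-decider, build a program it must misjudge."""
    def paradox():
        if halts(paradox):   # if the decider claims paradox halts...
            while True:      # ...then paradox loops forever,
                pass
        # ...and if the decider claims paradox loops, it halts right here.
    return paradox

def optimistic_decider(program):
    return True  # a (bad) candidate decider: claims every program halts

p = make_paradox(optimistic_decider)
# The decider says p halts, but by construction p() would then loop forever,
# so the decider is provably wrong about p -- and the same trap catches any
# candidate decider, not just this one.
print(optimistic_decider(p))
```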
 
Nah, I don't see that. Even commercial applications are already starting to pass the Turing Test - what with Russian chatbots extracting personal information from unsuspecting Skype or Messenger users.

All that is required is the following (see the rough sketch after the list):

- sufficient (not perfect) grasp of a language
- a sufficiently convincing default data set (i.e. have a personal history)
- capacity to learn through talking
- human motivations
- personality
- self-awareness

Of course, how good the AI has to be largely depends on how smart or suspicious the human being on the other side of the chat window is.

I've actually been working on this a little, and thought about it intermittently ever since I studied AI. I don't even think the bottleneck is processing power. Give me a capable team and about a year, and I think I could fool 90% of people. ;)

If you want something near perfect, then you primarily need to understand the learning process and get that down to a very efficient level.

There were projects 15 years ago that were already quite advanced in terms of learning and knowledge modelling, but they have since gone off the radar, presumably for military and/or intelligence purposes.

Now, the last thing that people will wonder about is the self-awareness bit. But I've had discussions on that plenty of times. I think that with what we now know of the human brain, we have a pretty good picture of how it works, but a lot of people aren't ready to accept anything too mundane - it damages their world view. So in that sense, the Entscheidungsproblem exists, but not quite how it is described above, and certainly not for all people. ;)
 
The problem is twofold: syntactically, we lack a programming language flexible, robust, and intuitive enough for anyone attempting to create a candidate A.I. to actually do so; and more fundamentally, we as finite beings may simply lack the capability not only to design such a programming language, but also to design an A.I. that could pass a Turing Test.
The assumption is that a programming language for creating an AI is necessary, possible, or even appropriate.

Some may say this is simply a matter of intelligence, and that in order to reach this level of intellect we simply need to evolve. I believe that in order to achieve this we would need to reach another level of consciousness, or become "Super-Aware".
There are biological organisms that can likely beat the Turing test in some limited capacity without a hyper-intelligence's software dictating that they be able to do so.
Various animals, such as African gray parrots, elephants, and dolphins, have demonstrated in behavioral tests a significant number of traits--including language comprehension, the formation of a sense of the other, and recognition of self--formerly reserved for human intelligence.

Unless we assume the "software" for them already had the Turing Test in mind, we can see that the programs have been repurposed and modified by a less than hyper-intelligent humanity.
It could simply mean that a computer language is actually unnecessary, or inappropriate for describing what it takes to encourage intelligence.
 
The assumption is that a programming language for creating an AI is necessary, possible, or even appropriate.


There are biological organisms that can likely beat the Turing test in some limited capacity without a hyper-intelligence's software dictating that they be able to do so.
Various animals, such as African gray parrots, elephants, and dolphins, have demonstrated in behavioral tests a significant number of traits--including language comprehension, the formation of a sense of the other, and recognition of self--formerly reserved for human intelligence.

I don't believe humans can be associated with animals in this regard. We can create candidate A.I.'s; they cannot. They have no concept of A.I.

Unless we assume the "software" for them already had the Turing Test in mind, we can see that the programs have been repurposed and modified by a less than hyper-intelligent humanity.
It could simply mean that a computer language is actually unnecessary, or inappropriate for describing what it takes to encourage intelligence.

That was the road I was headed down.
 
"We can create candidate A.I.'s, they cannot"

Hmm, can a child create a candidate AI? What about an average human removed from civilization for the entirety of his life?
 
I don't believe humans can be associated with animals in this regard. We can create candidate A.I.'s; they cannot. They have no concept of A.I.
Well, maybe with time and a million monkeys on a million keyboards...

The point is that humans, who are not the hyper-intelligences required in the original post, have modified and amplified behaviors that comprise part of what it would take to beat the Turing Test.
 
Nah, I don't see that. Even commercial applications are already starting to pass the Turing Test - what with Russian chatbots extracting personal information from unsuspecting Skype or Messenger users.
Chat is not the Turing test. An unsuspecting party in an equal power position is a lot easier to fool than someone who is explicitly put in place as a questioner, free and encouraged not to allow the conversation to be steered.
 
True enough. But what's the control group? Are you going to compare results with real people? And what kind of people will you use? What age, intelligence, experience, conversation skills? The truth is, the Turing test is a nice concept, but it's not very precisely defined. Of course my example was a bit facetious, but that's basically all I was getting at.
 
IMHO the meaning of the Turing test is this: when a machine, either over IRC or through a human intermediary, is not questioned by most people as to whether it has self-consciousness (just as most people don't question whether other people have self-consciousness), then it passes the Turing test. By this standard I really doubt that any machine within 10 years can do that. The obstacles are not just hardware performance, but software as well.
 
The Turing test is really not a practical test, but rather a theoretical Gedankenexperiment on how to define consciousness.

It relies on the questioner actively probing. It also relies on having an infinite number of questioners and an infinite number of questions.

If the computer achieves the same success rate as (insert human with x age, y IQ, etc.), then we say the objects are in the same equivalence class and the program is a candidate AI.
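
As a toy illustration of that equivalence-class criterion (all the names, judgments, and the tolerance below are invented for the example):

```python
# Toy illustration of the "same equivalence class" criterion above:
# a program is a candidate AI if judges rate it human about as often as
# they rate a matched human control human. All figures are invented.

def pass_rate(judgments: list[bool]) -> float:
    """Fraction of judges who believed the subject was human."""
    return sum(judgments) / len(judgments)

machine_judgments = [True, True, False, True, False, True, True, False]
human_judgments   = [True, True, True, False, True, True, False, True]

tolerance = 0.10  # how close the rates must be to count as equivalent
m, h = pass_rate(machine_judgments), pass_rate(human_judgments)
print(f"machine {m:.2f} vs human control {h:.2f}")
print("same equivalence class" if abs(m - h) <= tolerance else "distinguishable")
```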
 
So does that mean you are religious and believe god could create you because he could solve it? (Nature isn't "super aware" obviously, just highly parallel.)
 
The only logical conclusion from something like the Entscheidungsproblem, and really from the Skeptics' whole issue with the brain-in-a-vat construct, is that you cannot draw any conclusions of 'absolute' truth. Therefore, any choice between god, aliens, evolution, the big bang, and so on is arbitrary.

However, if you look at what we collectively agree absolute truth means (which also brings in the whole Derridean signifier-signified language issue), and you consider truth-finding through scientific methods to be the most reliable (as happens in most courts and laboratories in the Western world, as well as in many non-Western ones), then certainly evolution is 'more' true than most alternatives. Only if you believe there has to be a starting point for existence (which, incidentally, I consider extremely illogical) does the question remain open: right now, the absolute beginning can still be filled in in a variety of ways. Some of them make more sense than others in the context of how we normally look at our world, but at this point that is still fairly irrelevant.

However, as a coping mechanism for the fact that people can foresee their own non-existence, I respect almost everything that works for people to alleviate this burden. In that context, all religions, beliefs, and so forth are equally valid. I have more trouble when they mess with the more easily known scientific, empirical truths here on this earth.

Above all, though, I think it's a huge mistake to believe that the Entscheidungsproblem has anything to do with whether or not the Turing test could ever work. Skepticism, as well as most language-related philosophy, shows as clearly as anything that the Entscheidungsproblem is simply not relevant to the Turing test. For the Turing test, all that matters is what the human taking the test believes is human and what is not. Fred is completely right in that respect.

And so is Davros. Eventually nobody would pass this test, because all of us are machines. We may be biological machines, but we are machines nevertheless. If you recognise that, it's easy enough to see that an AI is not a matter of possibility, but a matter of time (developing the software even more so than the hardware) and of choice (wanting to create a human-like AI in the first place).
 
Can we build a program that can pass the Turing test? Definitely.

Can we build one that can do it repeatedly with the same subjects? Perhaps.

Would that program be intelligent or self-aware? Extremely unlikely.
 