Joe DeFuria
/ listens to sounds of crickets chirping....
AIs can have emotion.
lots of other assertions about "Strong AI"
And now you reference as your authority Kurzweil, who publishes popular books written for a layman audience.
Funny you should mention Kurzweil. I am a member of the Extropian community, which Kurzweil, Moravec, Minsky, Dennett, and Kosko frequent; I had a personal copy of The Age of Spiritual Machines manuscript for review before publication, and ditto for Moravec's. Moravec, by the way, preceded Kurzweil by many years with Mind Children. I've had dinner-table discussions at Extropian conferences with Kurzweil, Moravec, and Minsky.
But for all of your name dropping, you are still ignorant of the term "Strong AI"
nothing about self-modification
All Strong AI says is that it is possible to build an AI which duplicates human consciousness.
Weak AI says that although we can build such a machine, it will only "appear" to be intelligent and conscious, but it isn't really conscious, and doesn't really understand the things it appears to.
Now that I've embarrassed you sufficiently
On the contrary, there are AI programs. There are no programs with human-level intelligence, or programs that will pass the Turing Test, but there are plenty of AI programs that do useful things, the first of which were written in 1952.
I understand your confusion: since you are not well read on the subject of AI and have not taken any courses in AI, you are laboring under a false assumption of what AI is. Play semantic games all you want.
You contradict yourself.
First of all, you assert something is "not at all equally plausible" (therefore, you think you can predict the distribution of what is plausible and not plausible with respect to AI), and then you follow that up by saying that there is no way to predict how AI would behave. Well, if there is no way to predict how it will behave, how can you assert anything at all about what the probability of something being plausible is?
Sorry son, I never said "WILL". As shown in the beginning, I merely said "AI can have emotions"
CosmoKramer said: AIs can have emotion.
That's what's so moronic. They can have anything. Occam's razor applies.
lots of other assertions about "Strong AI"
Lol, not at all. You keep showing how incredibly dense you are. OK, let's spell it out one more time: my assertion about strong AI is that no assertions can be made, because a strong AI will be able to modify its own source code. It is silly to believe that a strong AI will acquire/keep mammalian instincts, no? Now, boy, did you understand that one?
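For what it's worth, the self-modification claim is easy to sketch in miniature. This is a toy illustration only (the `respond` policy and the stimuli are invented for this example, not anything from the thread): a program that keeps its own "instinct" as source text, rewrites that text, and re-executes it, discarding the behaviour it started with.

```python
# Toy sketch of self-modifying code (hypothetical policy, stdlib only):
# the program's "instinct" is held as source text it can rewrite.
policy_src = (
    "def respond(stimulus):\n"
    "    return 'flee' if stimulus == 'threat' else 'idle'\n"
)

namespace = {}
exec(policy_src, namespace)
print(namespace["respond"]("threat"))  # behaviour before self-modification: flee

# The program edits its own source: the hard-wired "flee" instinct is dropped.
policy_src = policy_src.replace("'flee'", "'evaluate'")
exec(policy_src, namespace)
print(namespace["respond"]("threat"))  # behaviour after self-modification: evaluate
```

Whether a real strong AI would work anything like this is exactly the open question; the sketch only shows that "keeps its initial instincts" is not a given for a program that can rewrite itself.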
Impressive. Then it is really strange that you didn't understand the distinction between strong and weak AI...
But for all of your name dropping, you are still ignorant of the term "Strong AI"
Nope. A strong AI is very well defined. Hint: Look up Turing...
All Strong AI says is that it is possible to build an AI which duplicates human consciousness.
Not quite. It says nothing about consciousness. See the Turing remark above.
The only one you (constantly) embarrass is yourself. All you ever manage to show is how a person can appear to be so intelligent while obviously being so far removed from this reality.
First of all, you assert something is "not at all equally plausible" (therefore, you think you can predict the distribution of what is plausible and not plausible with respect to AI), and then you follow that up by saying that there is no way to predict how AI would behave. Well, if there is no way to predict how it will behave, how can you assert anything at all about what the probability of something being plausible is?
...because it is not at all equally plausible that a strong AI would choose to be a slave to mammalian instincts rather than be in control of its own behaviour from situation to situation. You are comparing one possibility against all other possibilities while saying they are equally probable. Get real.
You said Merovingian was interested in sex and implied that therefore he was not an AI. I merely stated that desire is not mutually exclusive with AI, which led to your embarrassment now.
"We simply do not know about what instincts AIs will have or choose to keep"
Another false name drop. You really are a moron. Turing never defined Strong AI or Weak AI.
Turing never even used the term Artificial Intelligence. He was found out to be a homosexual and committed suicide before the term came into use.
Let me try to put this in a simple form that a two-year-old can understand.
I've written theorem provers in LISP which are less dense than you.
I'm conscious, and I can't arbitrarily modify my own source code
CosmoKramer said: Nope. A strong AI is very well defined. Hint: Look up Turing...
CosmoKramer said: Actually, I never said he did.
Strong AI makes the bold claim that computers can be made to think on a level (at least) equal to humans and possibly even be conscious of themselves.
Nope, you said they can have feelings. I say it is impossible to predict the behaviour of a true sentient AI, and therefore it is impossible to make any statements concerning their behaviour. That said, I think it is ludicrous to think that an AI would consider something as primitive as mammalian instincts worth having/keeping.
'Nuff Said. Weasel out all you want.
Any rational person reading your above sentence (e.g. witness Dave H) would assume that the "Strong AI" definition you allude to is found looking up Turing.
Strong AI is not defined as "passing the Turing test".
Searle's whole Chinese Room argument was that a program could pass the Turing test and still not understand or be conscious of anything; that was his attack on Strong AI.
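Searle's thought experiment can be sketched in a few lines: an operator following a rulebook maps input symbols to output symbols by rote and can appear to "converse" without understanding anything. The rulebook entries below are invented for illustration (any scripted exchange would do); nothing beyond the Python standard library is assumed.

```python
# Toy "Chinese Room": pure symbol lookup, no model of what the symbols mean.
# The entries are a made-up rulebook for this illustration.
RULEBOOK = {
    "你好吗?": "我很好, 谢谢.",   # "How are you?" -> "I am fine, thanks."
    "你是谁?": "我是一个朋友.",   # "Who are you?" -> "I am a friend."
}

def room(symbols: str) -> str:
    # The "operator" matches the input against the rulebook by rote;
    # anything unrecognised gets a stock reply ("Please say that again.").
    return RULEBOOK.get(symbols, "请再说一遍.")

print(room("你好吗?"))  # -> 我很好, 谢谢.
```

To an outside questioner the replies look competent; inside there is only table lookup. That gap between behaviour and understanding is the whole point of Searle's attack on Strong AI.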
I said: All Strong AI says is that it is possible to build an AI which duplicates human consciousness.
Not quite. It says nothing about consciousness
Quoting the AI FAQ
Strong AI makes the bold claim that computers can be made to think on a level (at least) equal to humans and possibly even be conscious of themselves.
Please do not continue to compound your errors.
Interesting how you reserve the right to make assertions and predictions about what an AI would consider worth having, yet you continue to criticize my statement of what is possible (not what will be).
CosmoKramer said: In your dreams, Mr DemoCoder
The only one you (constantly) embarrass is yourself. All you ever manage to show is how a person can appear to be so intelligent while obviously being so far removed from this reality.