Matrix Reloaded Must Read Threads

I started this thread

Nope, you didn't. You responded to me, remember?

AIs can have emotion.

That's what's so moronic. They can have anything. Occam's razor applies.

lots of other assertions about "Strong AI"

Lol, not at all. You keep showing how incredibly dense you are. Ok, let's spell it out one more time: my assertion about strong AI is that no assertions can be made, because a strong AI will be able to modify its own source code. It is silly to believe that a strong AI will acquire/keep mammalian instincts, no? Now, boy, did you understand it this time?



And now you reference as your authority Kurzweil, who publishes popular books written for a layman audience.

So what? What is your point? Oh, there is no point. Figures.

Funny you should mention Kurzweil. I am a member of the Extropian community, which Kurzweil, Moravec, Minsky, Dennett, and Kosko frequent; I had a personal copy of the Age of Spiritual Machines manuscript for review before it was published, and ditto for Moravec's. Moravec, by the way, preceded Kurzweil by many years with Mind Children. I've had dinner-table discussions at Extropian conferences with Kurzweil, Moravec, and Minsky.

Impressive. Then it is really strange that you didn't understand the distinction between strong and weak AI... :rolleyes:

But for all of your name dropping, you are still ignorant of the term "Strong AI"

Nope. A strong AI is very well defined. Hint: Look up Turing...

nothing about self-modification

I've seen good arguments that a binary machine intelligent enough to fool humans into believing it is human is already way past our own intelligence. It would use that intelligence to keep itself alive. Meaning...

All Strong AI says is that it is possible to build an AI which duplicates human consciousness.

Not quite. It says nothing about consciousness. See the Turing remark above.

Weak AI says that although we can build such a machine, it will only "appear" to be intelligent and conscious; it isn't really conscious and doesn't really understand the things it appears to.

Nope, a weak AI won't have to appear conscious. Your VCR is a weak AI.


Now that I've embarrassed you sufficiently

In your dreams Mr DemoCoder :LOL:
The only one you (constantly) embarrass is yourself. All you ever manage to show is how a person can appear to be so intelligent while obviously being so far removed from this reality.




On the contrary, there are AI programs. There are no programs with human-level intelligence, or programs that will pass the Turing Test, but there are plenty of AI programs that do useful things, the first of which were written in 1952.

Irrelevant. The discussion is about strong, real AI. Sentient programs.

I understand your confusion: since you are not well read on the subject of AI and have not taken any courses in it, you are laboring under a false assumption about what AI is. Play semantic games all you want.

:LOL: I'm gonna save this thread. We all need a good laugh now and then.

You contradict yourself.


No, but you still fail to see (or perhaps you don't want to see?) my point. Polar-minded people often work that way, much to the chagrin of the world.

First of all, you assert something is "not at all equally plausible" (therefore, you think you can predict the distribution of what is plausible and not plausible with respect to AI), and then you follow that up by saying that there is no way to predict how AI would behave. Well, if there is no way to predict how it will behave, how can you assert anything at all about what the probability of something being plausible is?

...because it is not equally plausible at all that a strong AI would choose to be a slave to mammalian instincts rather than be in control of its own behaviour from situation to situation... You are comparing one possibility against all other possibilities while saying they are equally possible. Get real.




Sorry, son, I never said "WILL". As shown at the beginning, I merely said "AIs can have emotion".

Well, son, God can be a turtle. :rolleyes: Non-falsifiable statements are useless. Try them on a crowd more easily impressed.
 
CosmoKramer said:
AIs can have emotion.

That's what's so moronic. They can have anything. Occam's razor applies.

You said Merovingian was interested in sex and implied that therefore he was not an AI. I merely stated that desire is not mutually exclusive with AI, which led to your present embarrassment.




lots of other assertions about "Strong AI"

Lol, not at all. You keep showing how incredibly dense you are. Ok, let's spell it out one more time: my assertion about strong AI is that no assertions can be made, because a strong AI will be able to modify its own source code. It is silly to believe that a strong AI will acquire/keep mammalian instincts, no? Now, boy, did you understand it this time?

I'm still trying to figure out how, if no assertions can be made, you continue to make assertions. If you followed your own chain of reasoning, you would have to say "We simply do not know what instincts AIs will have or choose to keep"


Impressive. Then it is really strange that you didn't understand the distinction between strong and weak AI... :rolleyes:

But for all of your name dropping, you are still ignorant of the term "Strong AI"

Nope. A strong AI is very well defined. Hint: Look up Turing...

Another false name drop. You really are a moron. Turing never defined Strong AI or Weak AI. That was John Searle. Why oh why do you continue to cite authors you know nothing about, as if I am supposed to be impressed?

If you really think Turing had anything to do with Strong AI, why not tell me the name of the paper he published that defines it. Here's a hint, doofus: Turing never even used the term Artificial Intelligence. He was found out to be a homosexual and committed suicide before the term came into use.


All Strong AI says is that it is possible to build an AI which duplicates human consciousness.

Not quite. It says nothing about consciousness. See the Turing remark above.

Turing "remark"? Haha, you mean you heard the name Turing somewhere and tried to use it, without having any knowledge.

The only one you (constantly) embarrass is yourself. All you ever manage to show is how a person can appear to be so intelligent while obviously being so far removed from this reality.


Challenge to you, buddy: produce a link to a paper published by Alan Turing that defines Strong AI. Time for you to "Look up Turing". Too bad your Google search will never finish.

First of all, you assert something is "not at all equally plausible" (therefore, you think you can predict the distribution of what is plausible and not plausible with respect to AI), and then you follow that up by saying that there is no way to predict how AI would behave. Well, if there is no way to predict how it will behave, how can you assert anything at all about what the probability of something being plausible is?

...because it is not equally plausible at all that a strong AI would choose to be a slave to mammalian instincts rather than be in control of its own behaviour from situation to situation... You are comparing one possibility against all other possibilities while saying they are equally possible. Get real.

Let me try to put this into a simple form that even a two-year-old can understand:

Each possible Strong AI scenario is a colored ball of candy. You put these colored balls of candy into a bag and shake them up. You asserted that you can't know anything about the distribution of these colored balls. Hence, there is no knowledge of how many colored balls would represent an AI with emotions or instincts vs. those without. Now choose a ball at random: what is the probability you get one with emotion?
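To make the bag-of-candy experiment concrete, here is a minimal Python sketch (the bag mixes below are entirely made up for illustration): the probability of drawing an "emotion" ball is simply whatever share you assumed when filling the bag, so anyone who claims the distribution is unknowable has no basis for calling a particular draw implausible.

```python
# A minimal sketch of the candy-ball experiment. The assumed bag mixes
# are hypothetical; the point is that P(emotion) is entirely determined
# by whichever distribution you assume when filling the bag.
import random

def draw_probability(emotion_share: float, trials: int = 100_000) -> float:
    """Estimate the chance of drawing an 'emotion' ball from a bag in
    which `emotion_share` of the balls represent AIs with emotions."""
    hits = sum(random.random() < emotion_share for _ in range(trials))
    return hits / trials

# Three differently assumed bags -- none privileged over the others
# if, as asserted, the true distribution is unknowable.
for share in (0.01, 0.50, 0.99):
    print(f"assumed emotion share {share:.2f} -> "
          f"estimated draw probability {draw_probability(share):.3f}")
```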



To summarize: you have Google. I expect you either to produce a link to a Turing paper that defines Strong AI or to apologize for trying to appear intelligent.

I've written theorem provers in LISP which are less dense than you.
 
[image: rockem.gif]
 
Joe, that picture is inaccurate since it shows him still standing. :)

Let's see if he can produce that Turing paper that "very well" defines Strong AI. :)
 
Cosmo-

I won't pretend to follow AI as closely as Demo apparently does, but I have taken a course in it and have some familiarity with the literature. And, on a straight definitional basis, he's right, you're wrong.

The "strong"/"weak" AI dichotomy was, as Demo says, coined by Searle in his 1980 Chinese Room paper. The argument sought to prove that even a computer that could pass the Turing test would not be "conscious"; according to Searle's terminology, then, an AI that passes the Turing test is a "weak AI", i.e. only appearing to act consciously, whereas an actual conscious machine would be a "strong AI". Searle claims to prove the latter cannot exist. (Through a falacious argument that a) misunderstands the way computers work and b) relies at its base on an appeal to intuition.)

In any case, neither "weak" nor "strong" AI has anything whatsoever to do with being capable of arbitrary self-modification. I haven't read Kurzweil (who is, as Demo points out, a popularizer of AI, not a serious figure in the AI research community), but if he claims "strong AI" must be self-modifying, he is either basing that on some argument I've never seen (and almost certainly an incorrect one; after all, I'm conscious, and I can't arbitrarily modify my own source code) or appropriating the term for a completely different meaning than its traditional usage in the AI community.

On the other hand, the theorem prover I wrote in LISP was dumber even than the average poster in the [H] forums...

PS - Even though you won't find anything in it to support your claim, you really should read Turing's 1950 paper; one of those documents that's so perceptive and ahead of its time it's astonishing. (Moore's "Moore's Law" paper is fun along the same lines.)
 
Even more amazing: at the time Turing wrote it, he was also hypothesizing about digital computers themselves.

He does make one mistake, however. In one part of the paper he mentions how computers can be made to be unpredictable so that, like people, it is not possible to predict their future state with certainty. He created a pseudorandom number generator to demonstrate this and would defy anyone to predict the next number in the sequence. Turing was a good cryptographer and should have known how easy it is to crack pseudorandomness.
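To illustrate how easy the cracking can be, here is a minimal Python sketch. It is not Turing's actual generator (the paper doesn't give one in this form); the linear congruential generator and every parameter below are made up for illustration. Given just three consecutive outputs and a known prime modulus, simple modular algebra recovers the hidden multiplier and increment, and with them every future "unpredictable" number.

```python
# A hypothetical linear congruential generator: x' = (a*x + c) mod M.
# The attacker knows the prime modulus M and observes outputs; the
# multiplier a and increment c are secret. All values are made up.
M = 2**31 - 1            # known prime modulus
A, C = 48271, 11         # the generator's secret parameters
SEED = 123456789

def lcg(x: int) -> int:
    return (A * x + C) % M

# Observe three consecutive outputs of the "unpredictable" generator.
x0 = lcg(SEED)
x1 = lcg(x0)
x2 = lcg(x1)

# Recover the secrets: a = (x2 - x1)/(x1 - x0) mod M, c = x1 - a*x0 mod M.
# (pow(b, -1, M) computes a modular inverse; needs Python 3.8+.)
a = (x2 - x1) * pow((x1 - x0) % M, -1, M) % M
c = (x1 - a * x0) % M

# Predict the next number before the generator produces it.
predicted = (a * x2 + c) % M
assert predicted == lcg(x2)
print(f"recovered a={a}, c={c}; next output correctly predicted: {predicted}")
```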

On a similar note, Lady Lovelace was also very, very perceptive for her time. She could rightly be called not only the first programmer, but probably also the first to remark on the limitations of computing.
 
You said Merovingian was interested in sex and implied that therefore he was not an AI. I merely stated that desire is not mutually exclusive with AI, which led to your present embarrassment.

Nope, you said they can have feelings. I say it is impossible to predict the behaviour of a truly sentient AI; therefore it is impossible to make any statements concerning its behaviour. That said, I think it is ludicrous to think that an AI would consider something as primitive as mammalian instincts worth having/keeping.

"We simply do not know about what instincts AIs will have or choose to keep"

Yes! Finally! Almost, anyhow. Take, for instance, vertigo. That is an instinct located in the older parts of the brain (like feelings and other instincts). Do you expect an AI to find vertigo useful? On a somewhat kinder note than before, do you understand what I'm trying to say here?


Another false name drop. You really are a moron. Turing never defined Strong AI or Weak AI.

Actually, I never said he did. I just didn't want to do your homework for you, especially since you claim to know so much about AI. To spell it out, a Strong AI is an AI that has passed the Turing test. Agreed?


Turing never even used the term Artificial Intelligence. He was found out to be a homosexual and committed suicide before the term came into use.

Yes, I know that. It is somewhat puzzling why you try to make his sexuality an issue, though :?:


Let me try to put this into a simple form that even a two-year-old can understand

Listen, dude, I understand your little statistical distraction perfectly. It is you who don't understand my point, yet you seemed to do so a few lines above. What gives? Your little experiment is invalid because (for the thousandth time) I'm not asserting that AI won't have "feelings". Look above, where you seem to understand my assertion.

I've written theorem provers in LISP which are less dense than you.

Then you are living proof that lesser beings can create superior dittos ;)
 
Dave H:
I'm conscious, and I can't arbitrarily modify my own source code

Let's assume that is true; what does it prove about binary AI? Nothing.

...but we are beginning to be able to control our own source code. DNA. :!:

On a more primitive level, we are already meddling with our behaviour systems through drugs. For instance, in Sweden many of the worst kinds of criminals use a drug called Rohypnol in order to shut down their feelings. Useful for certain types of crime... :(

Now imagine the possibilities a sentient binary program will have.
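As a toy illustration of what "modifying its own behaviour systems" could look like, here is a minimal Python sketch; the Agent class and its routines are invented for this example and make no claim about how a real strong AI would work. It picks up the vertigo example from earlier in the thread: the agent starts out a slave to the instinct, then rebinds the routine at runtime.

```python
# A toy sketch only: Agent and its routines are hypothetical, invented
# purely to illustrate swapping one's own behaviour routine on the fly.
class Agent:
    def __init__(self) -> None:
        self.react = self.vertigo      # born a "slave" to the instinct

    def vertigo(self, height_m: float) -> str:
        # Hard-wired mammalian response: freeze whenever it gets high up.
        return "freeze" if height_m > 2.0 else "proceed"

    def deliberate(self, height_m: float) -> str:
        # Replacement routine: judgement instead of blind instinct.
        return "proceed carefully" if height_m > 2.0 else "proceed"

    def rewrite_instinct(self) -> None:
        # The "source code edit": rebind the behaviour routine in place.
        self.react = self.deliberate

agent = Agent()
print(agent.react(10.0))   # "freeze" -- the instinct is in control
agent.rewrite_instinct()   # the agent decides vertigo no longer serves it
print(agent.react(10.0))   # "proceed carefully" -- instinct replaced
```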
 
CosmoKramer said:
Nope. A strong AI is very well defined. Hint: Look up Turing...

CosmoKramer said:
Actually, I never said he did.

'Nuff Said. Weasel out all you want.

Any rational person reading your above sentence (e.g. witness Dave H) would assume that the "Strong AI" definition you allude to is found by looking up Turing. Moreover, your explanation is doubly wrong. Strong AI is not defined as "passing the Turing test". Searle's whole Chinese Room argument was that a program could pass the Turing test and still not understand or be conscious of anything; that was his attack on Strong AI.

Quoting the AI FAQ:
Strong AI makes the bold claim that computers can be made to think on a level (at least) equal to humans and possibly even be conscious of themselves.

Please do not continue to compound your errors.

Such as

Nope, you said they can have feelings. I say it is impossible to predict the behaviour of a truly sentient AI; therefore it is impossible to make any statements concerning its behaviour. That said, I think it is ludicrous to think that an AI would consider something as primitive as mammalian instincts worth having/keeping.


Interesting how you reserve the right to make assertions and predictions about what an AI would consider worth having, yet you continue to criticize my statement of what is possible (not what will be).

I made no predictions. I merely stated the fact that AI is not incompatible with emotion or instinct; I merely stated what could potentially exist.

From that follows a bizarre list of claims and assertions, most of them incorrect, about what Strong AI is, whom to look up to find out what it is, and what a Strong AI will consider worthy.

Please go back to studying for your mechanical engineering exams; judging by your performance here, you need a lot more study.
 
'Nuff Said. Weasel out all you want.

The word Turing is part of "Turing test", no?

Any rational person reading your above sentence (e.g. witness Dave H) would assume that the "Strong AI" definition you allude to is found by looking up Turing.

Still, I never said Turing defined strong AI. See above.

Strong AI is not defined as "passing the Turing test".

Then what is the definition? Link.

Searle's whole Chinese Room argument was that a program could pass the Turing test and still not understand or be conscious of anything; that was his attack on Strong AI.

...which is what I implied when I said your definition of Strong AI was wrong. You said:
All Strong AI says is that it is possible to build an AI which duplicates human consciousness.
I said:
Not quite. It says nothing about consciousness

Damn, you must have put some real stress on Google considering your turnarounds... :rolleyes:

Quoting the AI FAQ:
Strong AI makes the bold claim that computers can be made to think on a level (at least) equal to humans and possibly even be conscious of themselves.

Yep. Who knows, but I don't see why not. After all, we humans are conscious, ergo it can be done.

Please do not continue to compound your errors.

Sugar, just because you say so doesn't make it so. :rolleyes:

Interesting how you reserve the right to make assertions and predictions about what an AI would consider worth having, yet you continue to criticize my statement of what is possible (not what will be).

The thing is, I'm not arguing about an either/or situation. I can't seem to get through to you on that one, though. I believe a true strong AI will be able to modify its own source code - just like we are beginning to. Only it will do so much more easily.

Thus it will be able to modify its behaviour systems on the fly. If mammalian instincts serve it best in one situation, it will have them. In other situations that may no longer be true, and it will change. Take a simple example - vertigo. FYI, vertigo may have a good preventative function, but it is a very dangerous instinct. Answer me this - would you not rather be in control of your instincts (such as vertigo or fear) than a slave to them?

That's why I'm arguing with your assertion that they "can" have emotion. Sure they can... if it suits them... if the situation so demands. Thus, their actions will be impossible to foretell.
 
CosmoKramer said:
In your dreams Mr DemoCoder :LOL:
The only one you (constantly) embarrass is yourself. All you ever manage to show is how a person can appear to be so intelligent while obviously being so far removed from this reality.

[image: 07-minister.jpg]


I sincerely apologize for injecting such an infertile comment; normally I would have commented on Cosmo's wrong idea of the role 'emotions' play, their ancestral lineage, and 'controlling them' (which contradicts what he said earlier about Swedish criminals) - but at this particular time, in this particular state... this is much more amusing.

BTW: Excellent responses, DemoCoder, most impressive. I'm tempted to ask for a list of reading material on this topic, as I have little present knowledge of it.
 