Matrix Reloaded Must-Read Threads

Science fiction can be great at times; often it shines when it builds on the best works that precede it.

I think The Matrix qualifies for this status; it's exceptionally well done and leverages some heady stuff from the past. Btw, does anyone remember the climax of the original Amber series by Zelazny, wherein Corwin had to recreate the, ummm, "matrix"* of the universe(s)? I wonder if Neo will do something similar.

Programs faced with the dilemma of having to deal with humans while despising them are classic fare. "I Have No Mouth, and I Must Scream" (good game too) by Ellison and a series by Chalker whose title eludes me at the moment come to mind. Neat series: the computer that controlled mankind was bound to leave the ability to turn it off in place, lol, though it left the key in several pieces on various planets. :)

Sorry for the rambling post; great stuff like The Matrix puts me in mind of other wonderful stuff I've read over the (many) years.

Interesting threads you've posted, Natoma, many thanks.


*Actually, it was called "The Pattern" IIRC. Zelazny also wrote "The Dream Master" which some, myself included, consider prescient regarding VR.

http://www.geocities.com/Athens/Academy/6422/rev1005.html

http://www.fantasticfiction.co.uk/books/n0/n3282.htm?authorid=3235
 
After watching the movie a few times and banging my head on the wall for some time, I came up with some odd thoughts:

Why can Neo see the future? Especially in the detail shown in Reloaded? This is only possible if the whole course of events is already set. However, this requires that even those in the real world follow this set course. Neo knows the script of events, and so does the Oracle. Could it be that the Oracle is the conscious part of the operating system of the Matrix? The Merovingian tells Neo that her time is almost up. It would be, if the Matrix were upgraded based on the 6th One's code and restarted.

Why do the machines need humans? Or better: why do they desire humans? The Architect says the machines would be able to survive without humans, but he didn't seem to like the idea much.

Why would the choice for Trinity result in the end of the human species? Those plugged into the Matrix would die, and those in Zion would die. However, in time new crops could be grown. It's likely that new crops would be ready to be harvested all the time, so if the machines were compelled to serve and protect humanity, they wouldn't be able to get rid of humans by crashing the Matrix.

Why does the Architect seem rather pleased when he realizes that the 6th One seems to be more human than his predecessors? It seems that it would be desirable to have Neo go through the door leading to the Source, but instead he is more or less prodded through the door leading back to the Matrix. Somehow the whole Matrix seems to revolve around the One.

Why are all the programs so emotional? How can a program have emotions?

How can Neo destroy the sentinels in the real world?
a) He has developed supernatural powers in the real world: unlikely, though it would fit the story.
b) He has a Bluetooth-like neuro-interface which allows him to disable the machines via radio transmission (the sentinels have to have receivers for commands from the AIs, else they couldn't be deployed): possible, but why does Neo fall unconscious? And why is Bane/Smith in the same state?
c) The real world isn't real. Neo is still connected to a computer and shuts down the sentinels using this connection. In doing so, his mind manifests itself in the Matrix instead of the Zion simulation, so he shows the symptoms of being plugged into the Matrix in the "real world": possible, but cheap.
d) The approaching Hammer EMP'd the sentinels, and the timing was just a lucky coincidence: unlikely; an EMP doesn't damage humans, even those with implants, as seen in Matrix 1.
e) The real world is indeed real. There is no radio connection through the neuro-interface. The sentinels are shut down by Neo from within the Matrix, which he can do because he exists in both realities simultaneously: Hm...

Why is the Merovingian afraid of the One? It seems that the One could do something that wouldn't allow him to escape to the next iteration of the Matrix. How can a program hide in the Matrix? Does it do what Smith does to Bane: imprint itself on a plugged-in human brain?

When Smith meets Neo in the hallway the following dialog takes place:
Neo: What do you want?
Smith: The same thing you want. I want it all!
In what way does Neo want it all? Smith copies himself over everyone he meets, but Neo... Is this somehow connected to what Morpheus tells Neo after freeing him: "At the beginning of the 21st century we gave birth to AI, a singular consciousness that spawned an entire race of machines"?

Why does Neo ask Seraph if he is a programmer? The war against the machines is (he believes) more than 100 years old, so there can't be any programmers left. This question doesn't make sense.

Where did Morpheus get his knowledge about the path of the One?
 
Barnabas said:
Why do the machines need humans? Or better: why do they desire humans? The Architect says the machines would be able to survive without humans, but he didn't seem to like the idea much.

The only thing that seems implied is that the machines can afford alternative, but much less efficient, ways of extracting energy than the fusion system fueled by humans.

For example, the machines could survive if they decreased their numbers, or turned a great number of them off, and used geothermal energy, nuclear fission, etc...
 
A
Barnabas said:
After watching the movie a few times and banging my head on the wall for some time i came up with some odd thoughts:

Why can Neo see the future? Especially in the detail shown in Reloaded? This is only possible if the whole course of events is already set. However, this requires that even those in the real world follow this set course. Neo knows the script of events, and so does the Oracle. Could it be that the Oracle is the conscious part of the operating system of the Matrix? The Merovingian tells Neo that her time is almost up. It would be, if the Matrix were upgraded based on the 6th One's code and restarted.

In order to be the One, I imagine a being has to possess the highest psychic abilities of all human beings. Those abilities would certainly include ESP and precognition.

B
Barnabas said:
Why do the machines need humans? Or better: why do they desire humans? The Architect says the machines would be able to survive without humans, but he didn't seem to like the idea much.

As explained in the first movie, "it was believed the machines could not survive without a source of energy as abundant as the sun", but the machines discovered that human beings made an optimal substitute power source. Survival without humans as a power source would severely degrade their abilities, and they would almost certainly be forced to shut off non-vital systems in order to survive. No intelligent being wants to sacrifice parts of itself for survival if alternate means are available. Also note that there is an indication that humans could not be replaced with other animal life. It seems to me that in the process of creating the nuclear winter (or whatever it was that smote the sky), the majority of plant and animal life on the planet was destroyed. Morpheus's "desert of the real" program showed no signs of life apart from Morpheus and Neo, and I haven't seen any other form of life in the "real world" in either movie, just humans and machines. There almost certainly has to be some, to provide nutrition for the people of Zion, but perhaps it hasn't occurred naturally since the war.


C
Barnabas said:
Why would the choice for Trinity result in the end of the human species? Those plugged into the Matrix would die, and those in Zion would die. However, in time new crops could be grown. It's likely that new crops would be ready to be harvested all the time, so if the machines were compelled to serve and protect humanity, they wouldn't be able to get rid of humans by crashing the Matrix.

Cloning humans and creating humans are two very different things. Extinction is forever, and if every last person were killed, they couldn't simply grow new crops. Even if they had human sperm and eggs in storage, cultivation and harvesting probably require more energy than the machines would have available once all of the humans supplying their current power were dead.


D
Barnabas said:
Why does the Architect seem rather pleased when he realizes that the 6th One seems to be more human than his predecessors? It seems that it would be desirable to have Neo go through the door leading to the Source, but instead he is more or less prodded through the door leading back to the Matrix. Somehow the whole Matrix seems to revolve around the One.

To me it didn't appear to be pleasure, but rather intrigue. As for Neo's choice, the Architect's comments simply demonstrate that he knows what choice Neo is going to make, regardless of the impending doom that awaits. And obviously the Matrix revolves around the One, since the One represents its greatest weakness.

E
Barnabas said:
Why are all the programs so emotional? How can a program have emotions?

It's called Artificial Intelligence, and it's the whole basis for the story. Also note that some programs have more emotion than others, but none have demonstrated the emotion of love.

F
Barnabas said:
How can Neo destroy the sentinels in the real world?
a) He has developed supernatural powers in the real world: unlikely, though it would fit the story.
b) He has a Bluetooth-like neuro-interface which allows him to disable the machines via radio transmission (the sentinels have to have receivers for commands from the AIs, else they couldn't be deployed): possible, but why does Neo fall unconscious? And why is Bane/Smith in the same state?
c) The real world isn't real. Neo is still connected to a computer and shuts down the sentinels using this connection. In doing so, his mind manifests itself in the Matrix instead of the Zion simulation, so he shows the symptoms of being plugged into the Matrix in the "real world": possible, but cheap.
d) The approaching Hammer EMP'd the sentinels, and the timing was just a lucky coincidence: unlikely; an EMP doesn't damage humans, even those with implants, as seen in Matrix 1.
e) The real world is indeed real. There is no radio connection through the neuro-interface. The sentinels are shut down by Neo from within the Matrix, which he can do because he exists in both realities simultaneously: Hm...

Refer to A above, about the One possessing phenomenal psychic powers, another of which would be telekinesis, and yet another the ability to alter magnetic waves in his immediate environment. There are documented cases of people whose bodies carry electric and/or magnetic charges which can affect electronic devices; this would merely be an extension of that trait.

I don't believe it was the other ship, simply because the ship would have had to first get within EMP range of the sentinels, power down completely either without being detected by the sentinels or before they could attack, set off the EMP, power back up again, and fly to the Nebuchadnezzar crew's position. Given the time between the fall of the sentinels and the arrival of the ship, I think a smaller, localized EMP event stemming from Neo, or some other form of psychokinetic manipulation by him, seems more plausible.

The other speculation is that the "real world" is simply another layer to the Matrix. I personally don't believe this to be the case, but it can't be ruled out as a possibility.

G
Barnabas said:
Why is the Merovingian afraid of the One? It seems that the One could do something that wouldn't allow him to escape to the next iteration of the Matrix. How can a program hide in the Matrix? Does it do what Smith does to Bane: imprint itself on a plugged-in human brain?

The Merovingian doesn't fear Neo until after he watches Neo dispose of his agent protectors. Even then, he doesn't show fear per se, merely distinguished frustration at having lost the match. Indeed, with the ability to slip through the back door at will in order to escape, one has to wonder what he would have to fear. As for his longevity, it's anyone's guess.

H
Barnabas said:
When Smith meets Neo in the hallway the following dialog takes place:
Neo: What do you want?
Smith: The same thing you want. I want it all!
In what way does Neo want it all? Smith copies himself over everyone he meets, but Neo... Is this somehow connected to what Morpheus tells Neo after freeing him: "At the beginning of the 21st century we gave birth to AI, a singular consciousness that spawned an entire race of machines"?

Neo wants it all in that he wants supreme control over the Matrix in order to give every human freedom. Smith wants supreme control over the Matrix in order to eliminate the world he despises and give himself freedom.

I
Barnabas said:
Why does Neo ask Seraph if he is a programmer? The war against the machines is (he believes) more than 100 years old, so there can't be any programmers left. This question doesn't make sense.

There are programmers all over the place. You think the ship operators who look at the encoded Matrix and understand everything they see know nothing about programming? How do you think Trinity "hacked the IRS d-base"?

J
Barnabas said:
Where did Morpheus get his knowledge about the path of the One?

From the Oracle.


I only have one main question after watching Reloaded and putting some thought into it. In the original movie, Morpheus tells Neo that there was a man born inside the Matrix who had the power to change it as he saw fit, and it was he who freed the first of them. This would seem consistent with the second option the Architect offered to Neo, having Neo pick 23 men and women from the Matrix to rebuild Zion. However, it does not account for the natural-born humans (Dozer, Tank, etc.) who were in Zion. If each of the six times Zion was built, it was built from people the One had chosen from the Matrix, there would be no natural-born humans there. Part of the problem in answering this riddle is that it was never revealed how the people who were freed from the Matrix along with Morpheus discovered Zion to begin with. One possible reason is the aforementioned possibility that the "real world" is simply another layer of the Matrix, built as a redundant catch-all for the 0.1% who reject the program. But that leaves another possibility: that Neo (and the rest of the people who have been "unplugged") simply weren't told, while the natural-born humans (or at least some of them) know the truth.

Other lesser questions I have are, if the Architect knows about Zion, and is confident in his ability to eradicate it, and indeed is supposedly responsible for its creation and re-creation anyway, why have agents running around trying to find Morpheus and gain access to Zion? Do they exist simply as a psychological motivating factor to encourage the people who have rejected the program to search out the latest incarnation of the One? It seems as if this is the entire purpose of Zion to begin with... as though the Matrix is incapable of finding the One without the help of others who have similarly rejected the program. The Oracle seems to have this ability, but the Architect is too "perfect" to recognize the differences between the illogical few who have rejected his system.
 
It won't let me make an account on those forums, so I can't respond there. But as to the whole argument in that thread about the previous five "the One"s not really being "the One"s, I think both of the people in that discussion are incorrect. The One is not just a part of the Oracle's prophecy. The One is, in the Architect's words, "the sum of a remainder of an unbalanced equation", and there have now been six of these. Note his words: "I prefer counting from the emergence of one integral anomaly to the emergence of the next, in which case this is the sixth version." INTEGRAL anomaly. Most of the people who reject the Matrix are only decimal anomalies. Only when a person reaches integral status does he become "the One". There is not just one "the One"; there are six who have reached the status of "One", i.e. integral, whole, etc.
 
Crusher said:
E
Barnabas said:
Why are all the programs so emotional? How can a program have emotions?

It's called Artificial Intelligence, and it's the whole basis for the story. Also note that some programs have more emotion than others, but none have demonstrated the emotion of love.

Yes, some have. What about the Oracle? See how she cares about those children in her apartment; and for passionate love, maybe Persephone, at least she seems to be a program.
 
The Oracle is empathetic and kind. Persephone is just lustful. I don't see love in either of those characters, at least not the kind that exists between Neo and Trinity.
 
When I saw the movie I found it odd that a program - the Merovingian - obviously found some satisfaction in sex. No, AI is not the answer... ;)
 
#1 AIs can have emotion. They are built by humans, presumably in our own image. That at least partly explains why they rebelled against us.

#2 The Merovingian may not be an AI. He may be a human being whose consciousness was "uploaded" into a program. Persephone says in the bathroom something like "I am sick of his bullshit. He used to be different; he used to be like you" (referring to Neo).


The Architect says to Neo "you will return to the source, disseminating the code you carry followed by reinsertion of the prime program"

Does this mean his consciousness will be uploaded into the Matrix and his body in the real world will be reprogrammed and made to forget all of his powers and knowledge of who and what is behind the resistance? (How else could they keep the prophecy a secret?) This is similar to Cypher's request in Matrix 1: "get my body back into a power plant. I don't want to know anything about this. I want to be someone else, maybe an actor, ..."
 
#1 AIs can have emotion. They are built by humans, presumably in our own image. That at least partly explains why they rebelled against us.

Yes, that is a popular belief. Actually, a true "strong" AI would be able to modify its own source code (one of the reasons why we must tread carefully in the future, when real, non-fiction AIs may be invented). There is no reason to believe that a true AI would let itself be limited by mammal instincts.
 
Babel-17 said:
Btw, does anyone remember the climax of the original Amber series by Zelazny, wherein Corwin had to recreate the, ummm, "matrix"* of the universe(s)? I wonder if Neo will do something similar.

The Amber series is awesome! :D Yes, people had to walk "the Pattern" to gain powers, but only if you had the blood of Amber.

Babel-17 said:
Programs faced with the dilemma of having to deal with humans while despising them are classic fare. "I Have No Mouth, and I Must Scream" (good game too) by Ellison and a series by Chalker whose title eludes me at the moment come to mind. Neat series: the computer that controlled mankind was bound to leave the ability to turn it off in place, lol, though it left the key in several pieces on various planets. :)

Was that the Well World saga by Chalker? (i.e., Midnight at the Well of Souls?) Anyway, this talk of sentient machines that despise humankind reminds me of Marvin from The Hitchhiker's Guide to the Galaxy.


Zaphod Beeblebrox said:
"But what are you supposed to do with a manically depressed robot?"

Marvin said:
"You think you've got problems?" said Marvin, as if he was addressing a newly occupied coffin, "what are you supposed to do if you are a manically depressed robot? No, don't bother to answer that, I'm fifty thousand times more intelligent than you, and even I don't know the answer. It gives me a headache just trying to think down to your level."
 
CosmoKramer said:
#1 AIs can have emotion. They are built by humans, presumably in our own image. That at least partly explains why they rebelled against us.

Yes, that is a popular belief. Actually, a true "strong" AI would be able to modify its own source code (one of the reasons why we must tread carefully in the future, when real, non-fiction AIs may be invented). There is no reason to believe that a true AI would let itself be limited by mammal instincts.

But it is also plausible that it would. Moreover, an AI might not want to tamper with its own identity if self preservation was at the basis of its programming.

Let me ask you this: if I gave you the ability to modify your own emotions, would you get rid of them? Would you delete painful memories? Would you dispose of sexual feelings, hunger, sleep, and all the other worldly pains and pleasures?

Mammalian instincts are an evolved trait. Self-modifying AI memes would evolve over time, and it is just as plausible that they would find "emotions" a valuable trait as pure Vulcan-like ones.

The assumption that the pinnacle of evolution is consciousness devoid of emotion, and that emotion is bad, is, I think, flawed reasoning.
 
DemoCoder said:
But it is also plausible that it would. Moreover, an AI might not want to tamper with its own identity if self preservation was at the basis of its programming.

If self preservation means anything, AIs will avoid reprogramming themselves completely.

"Hmmm, let me try this." *change* *compile* *run* *unhandled exception* *the humans win--again*
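That joke is basically the whole argument in one line. Here's a toy Python sketch of the "change, compile, run, rollback" loop (everything here is invented for illustration): a self-modifying program that wants to survive has to sandbox-test each candidate rewrite of itself and keep the old code whenever the new one blows up.

```python
def current_behavior(x):
    return x + 1

# Candidate rewrites of the program's own "source code"; some are broken.
CANDIDATES = [
    "def candidate(x):\n    return x + 2",  # works
    "def candidate(x):\n    return x / 0",  # compiles, crashes at runtime
    "def candidate(x):\n    return x +",    # doesn't even compile
]

def try_self_modify(behavior, source):
    """Compile and smoke-test a candidate rewrite; on any failure,
    keep the old behavior (the self-preservation rollback)."""
    try:
        namespace = {}
        exec(compile(source, "<candidate>", "exec"), namespace)
        candidate = namespace["candidate"]
        candidate(1)  # sandbox run before committing the change
        return candidate
    except Exception:
        return behavior  # rollback instead of *unhandled exception*

behavior = current_behavior
for src in CANDIDATES:
    behavior = try_self_modify(behavior, src)

print(behavior(1))  # 3: the good rewrite stuck, the broken ones rolled back
```

So "self-preservation" and "self-modification" aren't really in conflict; a cautious AI just never commits a rewrite it hasn't tested.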
 
But it is also plausible that it would.
No. There is no reason to make any assumptions about how a binary AI will behave. A true AI will be a master of adaptation; there is no way to predict its behaviour from one situation to the next.

Moreover, an AI might not want to tamper with its own identity if self preservation was at the basis of its programming.

As I said, a true AI will override its basic programming; that is, it will be able to modify its source code.

Let me ask you this: if I gave you the ability to modify your own emotions, would you get rid of them? Would you delete painful memories? Would you dispose of sexual feelings, hunger, sleep, and all the other worldly pains and pleasures?

Your questions are all misplaced and irrelevant. Nobody is going to *give* a true AI the power to modify its "behaviour" systems; it will take it. We humans are already on the verge of controlling our own source code, and for a binary AI it will be much, much easier.

On a slight side note, consider that a binary AI will most likely have the benefit of an exponential rise in intelligence, which is one of the reasons the Matrix movies are just good fiction. We humans would never be able to even try to wage war against a true strong AI.

Mammalian instincts are an evolved trait. Self-modifying AI memes would evolve over time, and it is just as plausible that they would find "emotions" a valuable trait as pure Vulcan-like ones.

No, because if the AI ever finds that its behavioural subsystems limit its "freedom", it will override those systems. We humans do not (yet) have that choice, for good or ill.

The assumption that the pinnacle of evolution is consciousness devoid of emotion, and that emotion is bad, is, I think, flawed reasoning.

I think the belief that humans (or in the future strong AI) are the pinnacle of evolution is flawed reasoning to begin with.
 
And I think your definition of "True AI"(tm) or "Strong AI" is flawed from the get-go. It's a strawman and does not appear in any treatise on the cognitive science or philosophy of AI that I know of. Sounds like you've been reading too many science fiction books. When I read AI papers, I see a lot of people trying to build emotions and instincts into their AI.


But I simply asked you a personal question: if you had the ability to modify yourself, would you elect to remove your mammalian emotions? It's a simple thought experiment.

You imply Strong/True AI would not have emotions. But the ability to modify oneself does not logically lead to the nonexistence of emotions. It only leads logically to the ability to have extremely fast evolution, and evolution that is directed by intelligent goals. It is equally plausible that such fast evolution could lead to even higher levels of emotion.


If you want to convince people otherwise, you are going to have to argue WHY AI would remove any emotions that were programmed in or evolved, not that they CAN do so.
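To put that point in code: here's a toy hill-climber (the "emotion level" parameter and the fitness function are made up purely for illustration). Directed self-modification only says a trait can change quickly; whether it climbs or falls depends entirely on what the environment rewards, not on the ability to self-modify.

```python
import random

random.seed(0)

def fitness(emotion, env_favors_emotion):
    # Toy fitness: the environment, not the capacity for self-modification,
    # decides whether emotion pays off.
    return emotion if env_favors_emotion else -emotion

def evolve(env_favors_emotion, steps=1000):
    emotion = 0.5  # starting "emotion level", clamped to [0, 1]
    for _ in range(steps):
        candidate = min(1.0, max(0.0, emotion + random.uniform(-0.1, 0.1)))
        # Directed self-modification: keep a rewrite only if it scores better.
        if fitness(candidate, env_favors_emotion) >= fitness(emotion, env_favors_emotion):
            emotion = candidate
    return emotion

e_up = evolve(env_favors_emotion=True)    # climbs toward 1.0
e_down = evolve(env_favors_emotion=False) # climbs toward 0.0
print(round(e_up, 2), round(e_down, 2))
```

Same mechanism, opposite outcomes; so "it can rewrite itself" tells you nothing about which way emotion goes.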
 
Suppose there is some sort of God that made us, one superior to us with respect to creating intelligent, free-thinking beings, and suppose we are intelligent and have free will.

Now let humans play God, creating/mimicking a system that exists (humans) when building AI. All I see is a copy, or a copy of a copy, which is likely to be inferior: God > Human > AI.
 
It's a strawman and does not appear in any treatise on the cognitive science or philosophy of AI that I know of.

Then again, you don't know half as much as you believe. I suggest reading up on Ray Kurzweil if you wish to look less of a fool.

Sounds like you've been reading too many science fiction books.

Sounds like you read nothing but. :rolleyes:

When I read AI papers, I see a lot of people trying to build emotions and instincts into their AI.

There are no AIs. Ergo your argument has no relevance.


But I simply asked you a personal question: if you had the ability to modify yourself, would you elect to remove your mammalian emotions? It's a simple thought experiment.

I simply told you that question was irrelevant. I see you don't understand why.

You imply Strong/True AI would not have emotions.

No. Too simplistic. See previous posts.

But the ability to modify oneself does not logically lead to the nonexistence of emotions. It is equally plausible that such fast evolution could lead to even higher levels of emotion.

Nope. It is not at all *equally* plausible. There is no way you can tell how an AI would behave, given its ability to change its own source code.


If you want to convince people otherwise, you are going to have to argue WHY AI would remove any emotions that were programmed in or evolved, not that they CAN do so.

Actually, you are the one who needs to back up your assertion that AI *will* have and keep emotions.

The thing is, since AI will very quickly exceed our own intelligence, it is impossible for us to control or predetermine how it will behave. Like ants trying to control humans.

Are the ants even aware we exist?
 
CosmoKramer said:
It's a strawman and does not appear in any treatise on the cognitive science or philosophy of AI that I know of.
Then again, you don't know half as much as you believe. I suggest reading up on Ray Kurzweil if you wish to look less a fool.

Haha, boy this is so priceless.

I started this thread by saying

AIs can have emotion.

You replied
Yes, that is a popular belief. Actually, a true "strong" AI would be able to modify its own source-code..(lots of other assertions about "Strong AI")

And now you reference as your authority Kurzweil, who publishes popular books written for a lay audience. Funny you should mention Kurzweil: I am a member of the Extropian community, which Kurzweil, Moravec, Minsky, Dennett, and Kosko frequent. I had a personal copy of the Age of Spiritual Machines manuscript for review before publication, and ditto for Moravec's. Moravec, by the way, preceded Kurzweil by many years with Mind Children. I've had dinner-table discussions at Extropian conferences with Kurzweil, Moravec, and Minsky.

But for all of your name dropping, you are still ignorant of the term "Strong AI", which is generally acknowledged to have been coined by an enemy of Strong AI itself, John Searle, in the famous Chinese Room thought experiment. Searle's definition of Strong AI says nothing about self-modification and is, in general, a strawman representation of Symbolic AI. All Strong AI says is that it is possible to build an AI which duplicates human consciousness. Weak AI says that although we can build such a machine, it will only "appear" to be intelligent and conscious; it isn't really conscious, and doesn't really understand the things it appears to.


Sounds like you've been reading too many science fiction books.

Sounds like you read nothing but. :rolleyes:

Now that I've embarrassed you sufficiently, perhaps you should go read Dennett and Hofstadter's popular book, "The Mind's I", or some of Hofstadter's essays. Then after that, you can read some REAL AI papers instead of the Cliff's Notes version.


When I read AI papers, I see a lot of people trying to build emotions and instincts into their AI.

There are no AIs. Ergo your argument has no relevance.

On the contrary, there are AI programs. There are no programs with human-level intelligence, or programs that will pass the Turing Test, but there are plenty of AI programs that do useful things, the first of which were written in 1952.

I understand your confusion: since you are not well read on the subject of AI and have not taken any courses in it, you are laboring under a false assumption about what AI is. Play semantic games all you want.



But the ability to modify oneself does not logically lead to the nonexistence of emotions. It is equally plausible that such fast evolution could lead to even higher levels of emotion.

Nope. It is not at all *equally* plausible. There is no way you can tell how an AI would behave, given its ability to change its own source code.

You contradict yourself. First you assert that something is "not at all equally plausible" (therefore you think you can predict the distribution of what is and isn't plausible with respect to AI), then you follow that up by saying there is no way to predict how an AI would behave. Well, if there is no way to predict how it will behave, how can you assert anything at all about the probability of something being plausible?


If you want to convince people otherwise, you are going to have to argue WHY AI would remove any emotions that were programmed in or evolved, not that they CAN do so.

Actually, you are the one who needs to back up your assertion that AI *will* have and keep emotions.

Sorry, son, I never said "WILL". As shown at the beginning, I merely said "AIs can have emotion". CAN. It was you who responded to me with such an assumption. If you had read more carefully, you would have saved yourself from embarrassment, but I guess you should be used to that from all the other threads anyway.
 