@jaded0mega

Technically, Auto never broke any of the laws, I'm pretty sure. For the First Law: yes, he's allowing people to stagnate, but stagnation is not harm. On the Axiom they are safe, tended to, and preserved; they may be unhealthy and fat, but their odds of survival are still the highest possible without compromising their joy.

For the Second Law: technically, yes, he disobeyed the captain's orders, but only because of a conflict. He already had orders from a prior source that directly contradicted the captain's new orders, and that source, the president, outranked the captain of the Axiom at the time, if I'm not mistaken. So he disregarded orders in the process of following orders from a higher-ranked source. And even if you disregard rank, there is still a conflict between old orders and new ones; since the old orders guarantee the fulfillment of Law 1 while the new orders leave that to an ambiguous but low chance, he would logically choose the old orders over the new ones as a tiebreaker. From his perspective, Earth is a doomed and dangerous world, and accepting his new orders would put him in violation of the First Law, so the condition on the Second Law, that it must not conflict with the First, means he did indeed adhere to the rules in the examples you gave.

(I would argue, however, that the moment he used his handles to poke the captain's eyes to make him let go could technically qualify as harm, but since it caused only light, momentary pain and no lasting injury, that's debatable.)
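For what it's worth, the precedence and tiebreak logic described above can be written out as a tiny resolver. This is a purely illustrative Python sketch; the ranks, dates, and odds are made-up numbers, not anything stated in the film:

```python
from dataclasses import dataclass

@dataclass
class Order:
    source_rank: int      # authority of the issuer (president > captain)
    issued_at: int        # lower = older order
    survival_odds: float  # how well this order guarantees the First Law
    action: str

def choose(a: Order, b: Order) -> Order:
    # First Law dominates: prefer whichever order better guarantees survival.
    if a.survival_odds != b.survival_odds:
        return a if a.survival_odds > b.survival_odds else b
    # Second Law: defer to the higher-ranked source.
    if a.source_rank != b.source_rank:
        return a if a.source_rank > b.source_rank else b
    # The tiebreaker described above: fall back on the older order.
    return a if a.issued_at <= b.issued_at else b

# Made-up numbers: directive A113 vs. the captain's order to return to Earth.
a113    = Order(source_rank=10, issued_at=2110, survival_odds=0.99, action="stay the course")
captain = Order(source_rank=5,  issued_at=2805, survival_odds=0.30, action="return to Earth")
print(choose(a113, captain).action)  # -> "stay the course"
```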

@indigofenix00

I would argue that Wall-E and Eve didn't "grow beyond their programming" - just explored it in unexpected directions. Wall-E was a cleaning robot, and it makes perfect sense that a cleaning robot would have a directive to identify and preserve anything "valuable", causing it to develop a sense of curiosity and interest in novelty after centuries of experience. And Eve was designed to identify and preserve life - discovering a weirdly life-like robot could result in odd behaviors!

This is one of the reasons why Wall-E is one of my favorite works exploring AI. The robots DON'T arbitrarily develop human traits, they follow their programming like an AI should, but in following that programming human-like traits emerge.

@namelessnavnls8060

A subtle detail of the Captain portraits is that Auto could be seen getting closer and closer and CLOSER to the camera behind the Captain, indicating his unyielding and subtly growing influence over the ship and, by extension, the continued isolated survival of humanity.

@Tobygas

One of my favorite scenes with Auto is the dialogue between it and the captain, where Auto says "On Axiom we will survive" and the captain replies "I don't want to survive, I wanna live". Those two lines back to back are peak writing, because of course a robot wouldn't understand the difference between the two. The captain has awakened from the void of the human autopilot and wants to return to Earth to see if it can still be saved, since EVE found a living lifeform on it after all that time. Dude basically just wants to go home after ages and ages of essentially the whole of humanity (in this case the people on the Axiom) living in space.

Auto, of course, essentially thinks that they are already living since they are surviving. To it the two are indistinguishable, which makes him even more consistent as a character.

@theseeper422

Let's also not forget that in each captain picture, Auto moves closer and closer to the camera, making himself look bigger and bigger. When I saw this as a kid, it gave me this dark feeling that it was showing Auto's power growing to the point where one day there would be a captain picture with no captain, just Auto.

@m0fn668

"Jesus, please take the wheel."
The Wheel:

@PizzaMineKing

About the 3 laws: Auto follows all of them; however, they were poorly implemented:

- "Do not allow a human to be injured or harmed" - what is his definition of harm? In the movie, we do not see a single human who is technically hurt - they're all just in a natural human lifecycle living on their own vomition. Auto may not see lazyness and its consequences as "harm".
- Rule 2 was not implemented with conflicting orders in mind: Directive A113 was qn order given by a human, and he keeps following it. He seems to fall back on older orders over newer orders.

@qdaniele97

I think it's worth mentioning that Auto actually tries to reason with the captain and explain its actions before resorting to trapping the captain in his quarters:
It agrees to tell the captain why they shouldn't go back to Earth and shows him the secret message that activated directive A113, even though it wasn't technically supposed to.
Once the captain discovers its attempts to actively prevent the Axiom from returning to Earth, it must have reasoned that its best option was to explain its logic, and the information that logic was based on, to try to avoid conflict if possible, since conflict would make managing the well-being of the ship and its passengers more difficult in the long term.

@LeshaTheBeginner

14:40 Wall-E and Eve had a learning experience and had the ability to change. Auto, on the other hand, didn't have the chance to learn anything new, considering his situation and how things went on the Axiom.

@JsAwesomeAnimations

Meanwhile GLaDOS and HAL 9000 standing in the corner

@pakeshde7518

Yes, he is a bad guy, but not a *bad guy*. All he could do was what he was told to do, so that's how his commands worked. Now, if he had become a sentient AI, he would understand that landing means his role ends and he ends, and that would change the entire theme. What I wonder is where the other ships are; it's suggested there is more than one, but where are they, and why are there no records? I always thought they wanted to make a WALL-E 2, but they wisely accepted that leaving well enough alone was the best choice.

@garg4531

Oh I remember seeing a video about how Auto works as an AI villain!

His sole motivation is literally just to carry out his programming; even if there's evidence showing it's no longer necessary, he wasn't programmed to take that into account. His orders were "Do not return to Earth", and he's prepared to do whatever it takes to keep humanity safe.

“Onboard the Axiom, we will survive.”

(And this also makes him the perfect contrast to Wall-E, Eve, MO, and all the other robots who became heroes by going rogue, by going against their programming, their missions, their directives!
Honestly, this movie is amazingly well written.)

Edit: Also just remembered another thing! Auto's motivation isn't about maintaining control, or even staying relevant (what use would he be if they returned to Earth?), but, again, just to look after humanity and do what he's led to believe is best for them.

@boingthegoat7764

I don't necessarily agree that Auto is evil, or even just following orders. He is, but he seems to want what he feels is best for humanity... a humanity which he has been nannying for centuries. The humans Auto is familiar with can't move without hover chairs, don't have one original thought in their heads, and get out of breath lifting a finger to push a button to have a robot bring them more soda. He's not evil; he's a parent who has fairly good reason to believe that his charges cannot survive without his constant attention and care. Thanks for coming to my TED talk.

@jjquasar

Logical villains are my favorite. Thanks for the video. I am going to enjoy writing not just an unreadable villain but a logical one too.

@malakifraize4792

The three laws of robotics are always a bit annoying, tbh, 'cause the books they're from are explicitly about how the three laws don't work. I honestly wish those three dumb laws weren't the main thing most people got out of it. For real, in one of the books, a robot traumatizes a child because they wanted to pet a deer or something, and, following the three laws, the robot decided the best course of action was to kill the deer and bring its dead body to the child.

Anyway the rest of the video is great. The three laws of robotics are just a pet peeve of mine.

@prasasti23

I think you're forgetting the part where Eve actually has her own emotions from the beginning of the movie. When the rocket was still on Earth, she kept doing her job like a rigid robot following orders. But once the rocket left, she made sure it was gone and then flew as freely as she could through the sky, enjoying her time before going back to her mission.

Eve's behaviour at the beginning looks like an employee being monitored all the time. Then, when the higher-ups aren't looking, the employee stops working for a moment to do what they want to relieve stress before going back to work.

@user-pc5sc7zi9j

"Everyone against directive A113 is in essence against the survival of humanity"
Not an argument to auto as it doesn't need to justify following orders with a fact other than that they have been issued by the responsible authority.
Those orders directly dictate any and all of its actions.
It doesn't need to know how humans would react to the sight of a plant. It doesn't need to know about the current state of earth, nor would it care.
It knows the ships' systems would return it to earth if the protocols for a positive sample were to be followed. It knows a return to earth would be a breach of directive A113 wich overrules previous protocols. It takes action as inaction would lead to a forbidden permission violation.
It is still actively launching search missions wich risk this because its order to do so wasn't lifted.
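Read this way, Auto is less a reasoner than a rule engine: an order stays in force until the authority that issued it lifts it, and no outcome or new evidence ever enters the picture. A tiny illustrative sketch of that model (all names and structure invented for the example, not from the film):

```python
# Toy model of a purely order-driven agent: orders stay in force until
# explicitly lifted; actions are checked against orders, never against
# outcomes or new evidence.

class OrderBook:
    def __init__(self):
        self.active = {}  # order name -> predicate marking forbidden actions

    def issue(self, name, forbids):
        self.active[name] = forbids

    def lift(self, name):
        self.active.pop(name, None)

    def permitted(self, action):
        return not any(forbids(action) for forbids in self.active.values())

book = OrderBook()
book.issue("A113", forbids=lambda a: a == "return to Earth")
book.issue("probe protocol", forbids=lambda a: False)  # mandates launches, forbids nothing

print(book.permitted("return to Earth"))   # False: A113 was never lifted
print(book.permitted("launch EVE probe"))  # True: that order is still in force too
```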

I don't think the laws of robotics are definitive enough to judge whether they were obeyed or not.
What would an Asimovian machine do in the trolley problem?
How would it act if it had the opportunity to forcefully but non-lethally stop whoever is tying people to trolley tracks in the first place?
Would it inflict harm to prevent greater harm? And who even decides which harm is greater?

@AnotherReactor201

I really love that line between the captain and auto. 

"On the Axiom you will survive."

"I don't want to survive, I want to live!"


First Law: Auto believes that humanity will survive on the Axiom, and he's keeping them alive there. He doesn't see their current state as harming them.

Second Law: directive A113 came from the highest human authority in his chain of command, the president of Buy n Large, the company that created him and the Axiom. So he's being told to obey one set of instructions over another set that would lead to a higher likelihood of physical harm or death for the humans.
Third Law: he does indeed try to protect his own existence, but since that law directly states that self-protection cannot involve harming humans or disobeying orders, it's hard to say he upheld it. He does poke the captain in the eye, which arguably breaks the First Law, though the captain isn't really injured by it, so it's questionable whether that actually harmed him. And it's difficult to say whether he adheres to the Second Law, since he's going against his captain's orders for the sake of the orders of the president of Buy n Large.

@tacdragzag7464

I mean, the red eye is too HAL 9000-ish to ignore XD

@richardk4625

The president is higher in rank than the captain, so his orders take precedence, and Auto just follows orders. No one would notice if he didn't send more Eve droids to Earth, but he does it because it's one of his duties and he hasn't been ordered to stop. He also doesn't prevent the captain from searching for information about Earth, and he shows the video of the president when the captain orders him to explain his actions. Everything he does, he does because he follows orders, without morals.
I think that if the captain had allowed him to destroy the plant, then he would not have objected to his own deactivation either.