Why Asimov’s Laws of Robotics Don’t Work – Computerphile

So, should we do a video about the three laws of robotics, then? Because it keeps coming up in the comments.

Okay, so the thing is, you won't hear serious AI researchers talking about the three laws of robotics, because they don't work. They never worked. So I think people don't see the three laws talked about because they're not serious, right? They haven't been relevant for a very long time, and they're out of a science fiction book, you know? So, I'm going to do it, but I want to be clear that I'm not taking these seriously. I'm going to talk about them anyway, because they need to be talked about. These are some rules that science fiction author Isaac Asimov came up with, in his stories, as an attempted sort of solution to the problem of making sure that artificial intelligence did what we want it to do.

Shall we read them out then and see what they are?

Oh yeah, give me a second, I've looked them up. Okay, right, so they are:

Law 1: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Law 2: A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

Law 3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

I think there was a zeroth one later as well. Law 0: A robot may not harm humanity or, by inaction, allow humanity to come to harm.

So it's weird that these keep coming up, because, firstly, they were made by someone who was writing stories, right? They're optimized for story-writing. But they don't even work in the books. If you read the books, they're all about the ways these rules go wrong, the various negative consequences. The most unrealistic thing, in my opinion, about the way Asimov did his stuff was the way that things go wrong and then get fixed. Most of the time, if you have a superintelligence that is doing something you don't want it to do, there's probably no hero who's going to save the day with cleverness. Real life doesn't work that way, generally speaking.

The next problem is that they're written in English. How do you define these things? How do you define "human" without having to first take an ethical stand on almost every issue? And if "human" wasn't hard enough, you then have to define "harm", and you've got the same problem again. Almost any definitions you give for those words, really solid, unambiguous definitions that don't rely on human intuition, result in weird quirks of philosophy, and in your AI doing something you really don't want it to do.

The thing is, in order to encode the rule "don't allow a human being to come to harm" in a way that means anything close to what we intuitively understand it to mean, you would have to encode within the words "human" and "harm" the entire field of ethics. You have to solve ethics, comprehensively, and then use that to make your definitions. So the rule doesn't solve the problem; it just pushes the problem back one step, into "well, how do we define these terms?" When I say the word "human", you know what I mean, and that's not because either of us has a rigorous definition of what a human is. We've just sort of learned by general association what a human is, and the word "human" points to that structure in your brain, but I'm not really transferring the content to you. So you can't just write "human" in the utility function of an AI and have it know what that means. You have to specify.
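To make that concrete, here's a minimal sketch of what "specifying" Law 1 would involve. This is purely illustrative and not from the video: every name in it is an invented placeholder, and the two unimplemented predicates are exactly where, on this argument, the whole difficulty lives.

    # Hypothetical sketch: what Law 1 looks like once you try to state it
    # precisely. All names here are invented placeholders.

    def law_one_permits(action, world_state) -> bool:
        """A robot may not injure a human being or, through inaction,
        allow a human being to come to harm."""
        return all(
            not is_human(entity) or expected_harm(action, entity) == 0
            for entity in world_state
        )

    def is_human(entity) -> bool:
        # This single predicate hides the entire problem: embryos? patients
        # in a persistent vegetative state? the recently dead? dolphins?
        raise NotImplementedError("requires taking a stance on ethics")

    def expected_harm(action, entity) -> float:
        # "Harm" is no easier: only physical injury? psychological harm?
        # increased risk, as with letting someone drive?
        raise NotImplementedError("requires taking a stance on ethics")
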
You have to come up with a definition, and it turns out that coming up with a good definition of something like "human" is extremely difficult. It's a really hard problem of, essentially, moral philosophy. You would think it would be semantics, but it really isn't, because, okay, we can agree that I'm a human and you're a human, that's fine, and that this, for example, is a table, and therefore not a human. The easy stuff, the central examples of the classes, are obvious. But the edge cases, the boundaries of the classes, become really important: the areas in which we're not sure exactly what counts as a human.

So, for example, people who haven't been born yet, in the abstract, like people who hypothetically could be born ten years in the future, do they count? People who are in a persistent vegetative state and don't have any brain activity, do they fully count as people? People who have died, or unborn fetuses? I mean, there's a huge debate going on even as we speak about whether they count as people. The higher animals, you know, should we include maybe dolphins, chimpanzees, something like that? Do they have weight? And so it turns out you can't make your specification of "human" without taking an ethical stance on all of these issues. All kinds of weird hypothetical edge cases become relevant when you're talking about a very powerful machine intelligence, which you otherwise wouldn't think of.

So, for example, let's say we decide that dead people don't count as humans. Then you have an AI which will never attempt CPR. This person's died, they're gone, forget about it, done. Whereas we would say, no, hang on a second, they're only dead temporarily; we can bring them back. Okay, fine, so then we'll say that people who are dead still count, if they haven't been dead for... well, how long? How long do you have to be dead for? If you get that wrong, and you just say, oh, it's fine, do try to bring people back once they're dead, then you may end up with a machine that's desperately trying to revive everyone who's ever died in all of history, because they all count, they all have moral weight. Do we want that? I don't know, maybe. But you've got to decide, right? And that's inherent in your definition of "human". You have to take a stance on all kinds of moral issues, to which we don't actually know the answers with any confidence, just to program the thing in.

And then it gets even harder than that, because there are edge cases which don't exist right now. Living people, dead people, unborn people, animals, fine. But there are all kinds of hypothetical things which could exist and which may or may not count as human. For example, emulated or simulated brains. If you have a very accurate scan of someone's brain and you run that simulation, is that a person? Does that count? And whichever way you slice that, you get interesting outcomes. If that counts as a person, then your machine might be motivated to bring about a situation in which there are no physical humans, because physical humans are very difficult to provide for, whereas with simulated humans you can simulate their inputs and have a much nicer environment for everyone. Is that what we want? I don't know. Maybe? I don't think anybody does. But the point is, you're trying to write an AI here, right? You're an AI developer. You didn't sign up for this.
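Again, purely as illustration (none of these names or numbers are from the video): if you do force a concrete definition, every branch and every constant is an ethical commitment, and no choice of values is neutral.

    from dataclasses import dataclass
    from typing import Optional

    # Invented knobs: changing either one changes what the machine will do.
    REVIVAL_WINDOW_MINUTES = 10     # dead for 5 minutes: still human? why 10?
    SIMULATED_BRAINS_COUNT = False  # excluding uploads has consequences too

    @dataclass
    class Entity:
        kind: str                             # "biological" or "simulated_brain"
        minutes_since_death: Optional[float]  # None means currently alive

    def is_human(entity: Entity) -> bool:
        if entity.kind == "simulated_brain":
            return SIMULATED_BRAINS_COUNT     # a stance on emulated minds
        if entity.kind == "biological":
            if entity.minutes_since_death is None:
                return True                   # the easy, central case
            # A stance on the dead: with no window at all, the machine tries
            # to revive everyone who has ever died; with a window of zero,
            # it never attempts CPR.
            return entity.minutes_since_death < REVIVAL_WINDOW_MINUTES
        return False                          # a stance on dolphins and chimps
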
We’d like to thank Audible.com for sponsoring this episode of Computerphile. If you like books, check out Audible.com’s huge range of audiobooks, and if you go to audible.com/computerphile, there’s a chance to download one for free. Calum Chace has written a book called Pandora’s Brain, which is a thriller centered around artificial general intelligence, and if you like that story, there’s a supporting nonfiction book called Surviving AI which is also worth checking out. So thanks to Audible for sponsoring this episode of Computerphile. Remember: audible.com/computerphile. Download a book for free.

100 thoughts on “Why Asimov’s Laws of Robotics Don’t Work – Computerphile”

  • Walid Hanniche Post author

    How does Asimov's robot play the trolley problem?

  • Asha2820 Post author

    4:20 "This is a table… and therefore not a human".
    That doesn't follow. You seem to be denying the possibility of human tables.

  • MammaApa Post author

    4:43 The point at which I realized that the lab coat hanging in the background is not in fact a human wearing a lab coat, but just a lab coat.

  • Objects in Motion Post author

    We just need one law: Follow your programming unless it conflicts with an off signal.

    That way you leave all the ethics to the humans doing the programming. If they do it wrong, they get fined for negligence, like any other company.
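
    A minimal sketch of what that one-law design might look like, assuming a simple control loop; the names and the threading approach are invented for illustration, not any real robotics API:

        import threading

        # Invented stand-in for a hardware off signal.
        off_signal = threading.Event()

        def run_robot(step):
            """Follow your programming, unless it conflicts with an off signal."""
            while not off_signal.is_set():  # the off signal always takes priority
                step()                      # whatever the owner programmed

    Calling off_signal.set() from anywhere halts the robot after the current step; everything else is left to the humans doing the programming, as proposed above.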

  • Eneas H. Post author

    Didn't Asimov's books revolve around the rules not working?

  • senor the king Post author

    I think replacing "harm" with "action of malicious intent" and "human" with "an object that possesses a capacity of thinking, comparable to the average human" would solve some of the more abstract problems in the laws.

  • abcd xx Post author

    So the rule is "don't harm humans". Why would a robot try to harm humans in the first place, such that we need the rule?

  • Robert Aylor Post author

    The three laws were written as industrial safeguards for equipment (robots) of low to human intelligence, not for superintelligences. When they were applied to superintelligences, they worked, because humans had learned the psychology the laws produced and could better interact with their intelligent industrial equipment.

  • chongjunxiang3002 Post author

    The thing is: if the creators of AI don't even know how to obey the rules they write for themselves, how can I expect the intelligence created by them to obey the rules?

  • DeePal072 Post author

    BTW we're already experimenting with AI on weapons, so what could go wrong? 🤷🏻‍♂️

  • ChinskiChat Post author

    As I understand it (ie not really at all beyond watching Elon talk about cars), you don’t have to code “human” or “harm” but rather, let a neural network WATCH what thousands of people call a human or see as harm – and it teaches itself. Anyway, why did YouTube show this to me?!

  • Tapecutter59 Post author

    Asimov's laws are not about robots; they're about the futility of human attempts to codify morality, e.g. the Ten Commandments.

  • James Barton Post author

    I never heard Asimov use the zeroth law; the movie I, Robot had nothing but the title belonging to Asimov. There was another author who wrote about a world where what you call Law 0 was enforced by AIs, but I can't remember the author or the story. Asimov himself stated the definition of human was a flaw. And the laws were not meant to be anything more than a world in which to place stories, and no, the laws didn't always fail. I believe that Asimov's laws represent issues that should be thought about, not rules for developing an AI.

  • KSP Defense Solutions Post author

    I believe that understanding rules, for a computer, is more a matter of what definitions we give them: whether a computer can put evidence and information gathered from its sensors into working on the rule. If your artificial intelligence cannot comprehend English and relate it to its gathered inputs, it is fundamentally flawed, in that it cannot properly cooperate with humans on a well-understood cognitive scale. Are you simply designing intelligence? Or are you raising a computer child who takes the info you give them to heart? You must develop and design a GAI like you would a child or a national soldier. Feed them info, and based on the benefit of coexistence, they will make a decision based on loyalty over time. Treat them like a highly valued entity and they might comply forever, if that's what you're trying to do here. Treat them with total realism and they might listen to what you say and consider it.

  • MegaCake1234 Post author

    The laws of robotics that Asimov wrote were a thought experiment about automation and intellect. It was never actually about AI itself; it was about the concept of intuition and logic in general, especially since he reveals how they don't work. I wouldn't mind a new manner that approaches it from a from-scratch AI perspective, but we've yet to actually have that.

  • Nick Burge Post author

    I think a thermal camera and a pretty simple ai could be programmed to pick humans out pretty easily

  • Alexander Tavarez Post author

    Well, what counts as a human can be defined easily enough, because groups such as the Catholic Church have defined it. The definitions are as follows:

    1. A human life always begins at conception, artificial or natural.

    2. A human's constituent parts can be defined by defining every stage of life, including post-conception embryos and amputees, down to simply the undecomposed head of a person. "Vegetative states" are kept alive until a natural death, which may include pulling the plug if the family, will, and state cease to wish to sponsor the life.

    3. A person's life ends when either 3 days have passed since the end of circulation in the head (more time can be used just in case), or when sufficient rotting/damage is present, whichever comes first. If a freezing and recovery process is established, this can be moved back to exclude frozen persons.

    These would encompass all the arguments I can think of, and do so in a way that leans on the side of caution. The trouble now is to get a majority to agree to such a definition (or at least to convince those in power).

    Alas, contraception, abortion, and euthanasia (deliberate killing, not simply allowing to pass naturally) are far too popular in this country for that to be a viable definition.

  • Ryan Stallard Post author

    Ethics is an abstraction layer itself.

  • woodfur00 Post author

    To be fair, a specific robot is built for a specific purpose. You're trying to create a universal definition, but Asimov's robots are allowed to have specialized priorities. The definition of a human given to a robot should be whatever helps that robot do its job most effectively; it doesn't have to bring world peace on its own.

  • Atom-Phyr Post author

    A story about robot necromancy sounds kind of cool though. 🤔🤖☠️

  • taylor cooper Post author

    It's extremely easy, honestly. Everyone is going about it the wrong way. My work has proven to deliver actual definitions for words, and to contrive moral concepts from them. Most morals are self-reasoning, by the way. Any AI would compute in words just like us.

  • Sandcastle • Post author

    Oh yeah, moral philosophy has upturned biological classification of human.
    Those damn philosophers.

  • Remi Temmos Post author

    Just add one line: if unsure, or it's an edge case, then cancel any action the robot considered doing and do nothing.
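
    Sketched out (all names hypothetical), that "one line" is really an abstention rule with a confidence threshold, and the threshold itself is another knob someone has to pick:

        CONFIDENCE_THRESHOLD = 0.95  # invented: how sure is "sure"?

        def choose_action(candidates, safety_model):
            """Do nothing unless some action is confidently judged safe."""
            for action in candidates:
                # safety_model stands in for whatever estimates P(action is safe)
                if safety_model(action) >= CONFIDENCE_THRESHOLD:
                    return action
            return None  # unsure / edge case: cancel and do nothing

    The catch, given Law 1's wording, is that a robot that does nothing in every unclear case still "through inaction" allows harm, so the threshold smuggles the original problem back in.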

  • Mario Rugeles Post author

    Sacrilege!!!! 😂

  • Perdritto Post author

    IMO he missed the real points

  • Northern Brother Post author

    Couldn't an advanced AI learn what a human is like we did?

  • Tony Flies Post author

    Great video. But, aren't the 'rules' just plot-shortcut summaries of decisions that people designing driver-less cars, or AI armed drones or a bunch of other stuff are going to have to either come up with a plan for, or to sweep under the carpet hoping that they've retired before it matters?

  • featureEnvy Post author

    On the point about simulated brains…Similarly I think it's also worth speculating whether an AI would count a cyborg as human. For example if we get to the point where we can transplant brains into artificial bodies….Would it count that?
    Anyway this is a super great video and it makes me want to go write some sci fi where everything goes VERY VERY wrong. XD

  • Skeepan Post author

    They were created for the sake of wrapping a narrative around them. The 0th law is the jumping off point for a fascistic robotic overlord plotline, the 1st law was made to be challenged by selective definition and so on. They were never meant to be taken seriously.

  • Gau28 Post author

    omg tru

  • Von Faustien Post author

    Asimov didn't even think they worked. Foundation and Earth and the books beyond Foundation show pretty well why, even if they do work, they don't, let alone all the robot books where they fail.

  • I Don't Care Post author

    This video has some structure problems

    He restated that you have to define edge cases at least 4 times

    He should have listed more examples, specifically about human harm, rather than focusing on what a human is. Then we get into questions like “Does a temporary injury that’s fully recoverable count as harm?”, “Does all pain count as harm?”, “Does emotional suffering count as harm?” There are much more interesting things to discuss than whether a comatose person is human.

  • Damir Škrjanec Post author

    "It won't be written in english"
    It also won't be written in x86 assembler, don't you think? With a development of AI, more and more of language concepts will be incorporated. I'd bet the rules will be written soon in a simplified, less ambigous version of english.

  • thorjelly Post author

    wow, at the end there he literally described the plot of SOMA.

  • Peter King Post author

    I think you’re having a hard time with the term law. There are the laws of man, and the laws of science.

    Asimov wrote a code of conduct, that was used in his FICTIONAL writing. He called this “code of conduct” laws.

  • Boris Dinges Post author

    Asimov's Laws of Robotics are meant for AI designers, not for computers. Problems in Asimov's books arise when the designers of these systems fail (did not foresee things correctly), not because of failure of the systems themselves.

  • João Farias Post author

    Are Leo Messi and CR7 humans?

  • Johnny Nielsen Post author

    Human: a biological entity capable of self-sustained neuronal activity, either right now or using medical technology, that when fully matured watches many more YouTube videos than it should.

  • ΑΙΜΙΛΙΟΣ ΣΠΗΛΙΟΠΟΥΛΟΣ Post author

    Okay, so you told us why we couldn't make these laws work as humans; you did not mention one reason why the laws are wrong. If you can make better laws, or work out a solution solving ethics, I will listen. But these laws are great for smart servants of humans…

  • Stewie Griffin Post author

    Asimov. Was he a nice guy ?

  • ThatCatfulCat Post author

    It's always strange to see some rando youtuber pretend they've debunked something no other distinguished expert could

  • Mathieu Duponchelle Post author

    Well machine learning is also meant to solve hard problems without the developer having to explicitly program the solving algorithm, so developers wouldn't have to "solve ethics", they would instead need to present the learning algorithms with the necessary data to do so. Not sure that's easy either, just a thought

    Edit: I'm curious about the credentials of this guy, he seems to be missing the pretty obvious point here

  • Omer Droub Post author

    People who think the 3 laws of robotics work are idiots who say something that sounds smart without actually understanding it. The entirety of Asimov's robot series of short stories and books uses the three laws being broken as a plot device, by having seemingly perfect rules break down in logical situations (i.e. robots that don't know they are robots, robots believing they are humans, etc.)

  • Richard Charbonneau Post author

    Do no harm or allow harm to come to a human… so round up all humans and force them to live in large pens where they will be safe… Alright, AI turns humanity into basically pets to be cared for. First rule: check. What was the next one? lol

  • Paul-Michael Vincent Post author

    Someone has never studied how current generations of AI are taught. You could show an AI 1 million examples of a human and it would then know what a human is, without having to have a definition in the programming. It's the same way you teach children. You don't sit down and have an existential conversation on what a human is. Deep learning has enabled AI to learn the rules of games without having them programmed.

  • nochtczar Post author

    The book "I, Robot" was full of stories about how the "laws" don't work, and yet dummies keep parroting them like they are a blueprint for AI

  • Ruben Post author

    Asimov's point was to show those laws as apparently logical but inevitably dangerous; that's the whole point of "I, Robot".

  • Billy White Post author

    C'mon, seriously? "Harm" = likely to cause a permanent, significant drop in vital function (you can program in the human anatomy and make sure it knows which parts are important); "human" = bipedal animal, which happens to be the only thing in the world remotely shaped like a human, of which we have probably about 1 trillion photographic and video training cases to train the AI. If you really think boundary cases like embryos and paraplegics and future humans are going to cripple the system (which they won't), take an hour and code them in. The 3 Laws may well prove unworkable, but not because of this argument.

  • Guinness Post author

    He had a fourth one; he added Law 0

  • TehRackoon Post author

    Why not have only one rule: each robot must obey its owning human no matter what.
    Then, when a robot goes on a killing rampage, we just find out which human owns it and punish them for not knowing how to properly program their robot.

    "Sorry sir, you should have told your robot to return home if the shop was closed. Maybe then it wouldn't have busted through the window and stolen all that coffee."

  • IYN Post author

    0. A robot should always identify itself as a robot.
    Only then would the rest of the laws make any sense.

  • Don Kongo Post author

    Dolphins 👏🏻 are 👏🏻 humans
    Change My Mind

  • Nathan Krishna Post author

    What if you needed to protect a human/humanity by destroying humans or humanity?

  • Arik Post author

    He mostly focused on the difficulty of defining "Human", but I think it's much much more difficult to define "Harm", the other word he mentioned. Some of the edge cases of what can be considered human could be tricky, but what constitutes harm?

    If I smoke too much, is that harmful? Will an AI be obligated to restrain me from smoking?
    Or, driving? By driving, I increase the probability that I or others will die by my action. Is that harm?
    What about poor workplace conditions?
    What about insults, does psychological harm count as harm?

    I think the difficulties of defining "Harm" are even more illustrative of the problem that he's getting at.

  • John Grey Post author

    I was hoping for better arguments. I was not convinced. Not that I agree with Asimov's rules either.

  • QwazyWabbit Post author

    Isaac Asimov’s three laws were never about a practical set of rules for AI. They were about Asimov’s take on the Frankenstein Complex and the unintended consequences of mankind’s inventions and inventiveness. Asimov himself said Frankenstein was the first science fiction novel.

  • thettguy Post author

    Just because implementation is difficult does not mean the ideas behind the laws are flawed. Each example edge case can be defined. E.g. don't try to revive a body where no human DNA replication is going on. Are simulations humans? The answer is no. Are embryos humans? No. Add some laws that put some weight in for the environment. Sure, you have to harden up and make some tough calls. But that does not mean it can't be done.

  • punker87 Post author

    Asimov's laws are not meant to be taken seriously… That's the main point… It's a fascist way to simplify the world into simple universal rules, and that's the main point… It's a dystopian world that ends in chaos…

  • Zach W Post author

    It's hard for me to take your argument seriously when you confuse the words "human" and "person". A dolphin may be a person; it is not a human. Very disappointing 😞

  • Roberto Vargas Post author

    x1.5 speed: Ben Shapiro

  • Raggaliamous Post author

    Put a banging donk on it.

  • Christopher Anderson Post author

    This guy talks like he has some kind of disdain for Mr Asimov and his books. The man was a genius and wrote fantastic stories where the solutions to the problems were always there, but ever so slightly out of reach for the reader to see. I don't think anyone should take them seriously, as they were written half a century ago, not to mention before AI or robots were even a thing. But the presenter just pauses and makes these offhanded adjectives about what Isaac Asimov did or wrote.

  • Yogoretate Post author

    Video: How do we define "human"?

    Also video: is a dolphin a human?

  • Mark Bunds Post author

    The “three laws” were always a work of fiction. It is axiomatic that one of the first inclinations when new technologies are introduced is to find a way to weaponize them.

  • Matthew Jackman Post author

    The overarching point of this video is correct, but a lot of the points about "ethics" and "intuition" are a lot less relevant when you consider the level of intelligence the AI would have to reach to attain a vague sentience. AI is designed by humans, and at the point that it could attain sentience to a level at which these laws would even matter, it would most likely already have developed a deep understanding of ethics and intuition due to the nature of its creators.

    You're right that a rhetorical device isn't a solution to all issues regarding AI, but I found the explanations pretty weak personally.

  • Damocles54 Post author

    If an ai can't make the same associations a human can, how smart is it?

  • Lucas An Post author

    Private property (self-ownership) can be derived from Hoppe's argumentation ethics. It not only solves human conflicts without generating others, but also resolves the conflict between AIs and humans.

  • hotelmario510 Post author

    I feel like most people forget that every one of Asimov's robot stories is about why his Three Laws of Robotics cause problems for people who build robots. All of them are about weird edge cases that cause the laws to break down.

  • xartecmana Post author

    This video really bugs me. The entire point of Asimov's stories, as everyone in the comments points out, is to show what could go wrong with the three laws. He keeps adding to and altering them in response to how things go wrong in previous stories, to allow new stories to exist. The problem this video presents has little to do with the laws themselves, though; what he takes issue with is in fact semantics, not ethics. He doesn't explain the laws properly and instead creates strawman arguments out of the laws (which admittedly don't work!) instead of tackling what the actual issues are. Of course you need to define what a human is for the laws to work, and what constitutes harm. The idea of the three laws isn't to define these ethical issues but, assuming you could, to ask what could go wrong with them. Cybernetic safeguards must be put in place to ensure robots do not prove a major threat to man, but what Asimov is trying to say isn't "you can basically compress it to three primordial laws". What he's saying, and proving time and again, is how complicated an AI must be to ensure it does not destroy itself or humanity. How do you set priorities? How do you define objects and people? All of those are obvious quirks you have to work out in order to even create this robot. That is the entire idea of AI. Asimov's stories exist to say: to ensure something as simple as three laws goes unhindered, you would need countless countermeasures and safeguards and exceptions and priorities that you could never accommodate before experiencing each potential fault firsthand.

    The problem is, this video totally glosses over all that, and instead says "beep bop the three laws don't work and Asimov is a tool beep bop" by going for what he said he isn't doing: semantics. The ethics behind this are an obvious obstacle that the three laws over and over fail to accommodate.

  • Michael McGillivray Post author

    Is there a reason one couldn't define human as "a living creature with 46 chromosomes", or, to prevent sterilization, "a zygote with 26 chromosomes" as well? I understand that the laws are unrealistic, but defining a human is relatively easy. Harm is much harder to define, as almost any action has the potential to harm a human in some way.

  • Charles Roberts Post author

    Do you realise that the Three Laws were first introduced in the story "Runaround" … written 77 years ago!! #1. Asimov wrote stories … they were works of fiction, written for entertainment. #2. Neither computers nor robots existed then … so how could he accurately invent the Three Laws?
    When you write a series of stories as popular as Asimov's were, and still are, then I will take you seriously. In the meantime … stop taking things so seriously and being so up yourself!

  • NewBreed BDA Post author

    Easy fix – itemise everyone in the human race in a data set. Bang. Everyone who is a human is a human to AI.

  • roger white Post author

    You've asked a serious question about something that people have always thought Asimov's laws WOULD be the basis for. I don't claim that the laws wouldn't be good to encode into every AI, but the points you've brought up are excellent and really need to be discussed by the scientific community to find what is workable. TY, as I was one of the people who would've answered "Asimov's laws" to the question, but now I realise it is not so simple as that. Great video

  • Ichigo Makishev Post author

    How dare you

  • ophello Post author

    I mean…that’s ridiculous. Just make the AI learn not to make sweeping global plans that drastically change how society works. Done.

  • Marcelo Tai Post author

    Maybe the 3rd law contradicts the first two.

  • Marcelo Tai Post author

    If we don't know what 'human' is and what 'harm' is, or they can't easily be defined, why don't we see that the development tools are inadequate and thus shouldn't be used for this purpose?

    Oh… because we want to do AI, and because AI means power. Like weapons or ideas.

    I would rather stay with dumb robots for a while instead.
    Perhaps waiting for humans to be intelligent first…

  • Zlatan Post author

    This is a ridiculous video.
    The problem with this argument is that it is made by a person who doesn't understand how fiction works, and that in the story these questions have already been answered. The story in the book is well into the part where things go wrong. The story requires that readers assume this has already been solved. So his dissatisfaction with the idea is ridiculous.

    The three laws as described are there for humans to understand. Think of them as a sales pitch: in the story, they are what sells the robots, not detailed instructions on how software engineers implemented them. The only person taking these laws seriously is this guy, because he is obviously bothered by them. The story happens in a fictional world where the reader is to presume that we already have answers to questions such as what is human and what is harm.

    The story is not about engineers sitting around a table debating what it means to be a human.

  • Jim Wright Post author

    Is your next project going to be why Asimov's positronic brains couldn't really read minds?

  • Someone Else Post author

    Actually, I found it entertaining that Asimov presented these 3 laws and then went on to write several novels telling us how they don't work.

  • Someone Else Post author

    Hey! You didn't even get to defining harm. That is another very interesting topic.

  • Idiot Boy Post author

    What if you have the super-intelligent AI create the definitions instead of a human?

  • Diggity Diggit Post author

    So let Skynet do as it pleases

  • Diggity Diggit Post author

    No. Just physical harm

  • Diggity Diggit Post author

    Machine learning can learn what a human is

  • Diggity Diggit Post author

    You won't be programming AI. It will learn by experience. Machine learning

  • Serge Rivest Post author

    Well don’t develop a sentient AI until that is sorted.

  • Luke Seed Post author

    This wasn't a video on why those "laws" don't work, but rather on the challenges of programming nebulous human definitions…

  • Ruben Hayk Post author

    Asimov: you can't control robots with three simple laws

    everyone : yes,we will use three simple laws, got it.

  • Van Ivanov Post author

    Gee, it's not easy and would take a lot of code and specifications? Guess we better give up making a super AI… as that's not easy either.

  • Dallin Backstrom Post author

    "You're an AI developer! you didn't sign up for this!" is a great quote… but really, even limited use case AI today is so drenched in moral quandaries about the ethical implications of creating systems that make decisions— often times, systems that make decisions WITHOUT us understanding exactly how— if you signed up to be an AI developer, you DID sign up for this, for better or worse…

  • Gerard van Wilgen Post author

    Okay, but then, tons of additional books have been written about what the laws in the lawbooks for humans are supposed to mean, haven't they? And still jurists are often arguing about how particular laws should be interpreted in particular cases, so it seems that humans have that problem with definitions, too. Take for instance the debate about whether a human embryo is a person. Some say it is, and some say that's nonsense. Yet laws for humans work, most of the time, more or less.

    I assume it would be the same for laws for AIs; that is, real AIs, machines that are no more predictable than humans are.

  • Apophis Jones Post author

    The laws were made for humans. Metaphor.

  • Kexkon Post author

    Idk man. I feel like we should use Plato’s idea that a human is a featherless biped. That way people are fine. Plus ostriches and kangaroos are fine. Babies are kinda screwed but who likes them anyways?

  • Applecore Post author

    #@!#% 0#! OXYGEN IS HARMFUL TO HUMANS

  • PSpurgeonCubFan Post author

    keep summer safe

  • Vincent Gonzalez Post author

    You fall on a robot and it pierces your flesh. It could remove the part from your flesh, and it could carry you and take you to the hospital, but how does it evaluate that? Its movements could kill you by causing you to bleed out.

  • Zack Wright Post author

    If you tell a robot not to harm humans through action or inaction the robot would cover everything in bubble wrap.

  • Ale Post author

    Just have the AI use a lot of factors that accumulate to someone being 'human', like, idk, say thermal vision… all humans transfer heat… easy peasy. And that's just one factor.

  • Jamie G Post author

    The problem of what constitutes a human can be specified implicitly using machine learning. We've already done this with decent results and they'll likely get better with time. Machine learning is able to create fairly robust machine recognition of what is a human and what isn't. And generally speaking, the more the AI is trained, the more robust its ability to recognize.
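
    As a rough sketch of that idea, assuming scikit-learn and stand-in data (nothing here is from the video): a binary classifier "defines" human implicitly through its training labels. The ethical stances don't disappear; they move into the labeling policy and into how the model extrapolates.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Stand-in data: pretend each row is an image embedding and each label
        # is an annotator's judgment of "human" (1) or "not human" (0).
        X_train = rng.random((1000, 128))
        y_train = rng.integers(0, 2, size=1000)

        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

        # The learned boundary now *is* the working definition of "human", but
        # edge cases (embryos, the recently dead, simulated brains) are decided
        # by whatever the labelers happened to do, and by how the model happens
        # to extrapolate off-distribution.
        print(clf.predict(rng.random((1, 128))))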
