9

The Turing Test is a simple test devised by Alan Turing to check for machine intelligence: if a machine can fool a human interrogator, through conversation alone, into believing it is human, then that machine counts as AI.

The 3 Laws of Robotics were laid down by sci-fi author Isaac Asimov.

Per the 3 Laws of Robotics, an AI (robot) must be able to distinguish human from robot (AI); otherwise it couldn't follow these laws.

But an AI that passes the Turing Test mimics humans perfectly (at least in conversation).

The dilemma

  1. Either AI can identify humans from AI or AI can't identify humans from AI.

  2. If AI can identify humans from AI then AI is more intelligent than humans (AI cognitive capacity exceeds that of humans)

  3. If AI can't identify humans from AI then AI can't follow the 3 laws of robotics

Ergo,

  1. Either AI is more intelligent than humans or AI can't follow the 3 laws of robotics. [1, 2, 3 constructive dilemma]
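In schematic form, with P = "AI can identify humans from AI", Q = "AI is more intelligent than humans", and R = "AI can't follow the 3 laws of robotics", the argument above is the standard constructive dilemma:

$$(P \lor \lnot P),\quad (P \rightarrow Q),\quad (\lnot P \rightarrow R)\ \therefore\ (Q \lor R)$$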

Either way we have a problem on our hands (either AI is superior to us, or there are safety issues with AI).

What sayest thou?

Agent Smith
  • Comments are not for extended discussion; this conversation has been moved to chat. – Joseph Weissman Jan 02 '23 at 04:44
  • Realistically the robots are built with stickers on their foreheads saying "I'm a robot". Any human who deliberately wears such a sticker is stupid enough that nobody will care if the robots violate Asimov's laws against them. Several Asimov stories were about the consequences of robots harming humans without realizing they were harming humans - I seem to recall one about a robot serving poisoned tea and then breaking down. – user253751 Jan 02 '23 at 18:10
  • Your description of the Turing test is a paraphrase, and the notion that intelligence is a monolithic category on a spectrum is suspect. It is also a false dilemma to presume that AI either can or cannot identify, since it is likely AI using statistical methods would assign a probability with a confidence interval (That's what I do, and I'm human). Your ultimate question is also a false dilemma. https://en.wikipedia.org/wiki/False_dilemma – J D Jan 02 '23 at 20:32
  • @JD, the Turing test then is inadequate. Do you propose a new, better test? – Agent Smith Jan 03 '23 at 01:59
  • What do you mean "the Turing test is inadequate"? Of course it's inadequate, it always has been. You don't need an AI to make humans believe they're speaking to another human. And it's in a completely different category than the Three Laws anyway - the test is explicitly designed to remove anything other than direct textual communication. A Three Laws robot isn't restricted in that way, and indeed, if you actually read Asimov's works, pretty much all the robot stories are about how this will never work (though it can still work well enough for practical purposes). – Luaan Jan 03 '23 at 10:18
  • Well, Turing's test needs to be given an upgrade. How would you do that? Gracias. – Agent Smith Jan 03 '23 at 10:26
  • I remember a story (Asimov or someone else?), where a robot is quizzed after hurting a human, and says 'Oh no! I thought it said "A robot may not immure a human being!"' – Michael Harvey Jan 03 '23 at 11:20
  • The Turing test was limited to communication. It had nothing to do with the mechanical and materials engineering that would be required to make a robot indistinguishable from a human. – chepner Jan 03 '23 at 15:39
  • @Luaan Here's one claim about the limits of the test: https://www.thenewatlantis.com/publications/the-trouble-with-the-turing-test – J D Jan 03 '23 at 16:22
  • It's pretty clear in 2022 that the Turing test is not sufficient to prove artificial intelligence. We have technologies today that are able to trick humans into thinking they are sentient using statistical methods. The fact that Turing was thinking about this so early on is a testament to his intellect, but the field of AI has shown this idea to be overly simplistic. – JimmyJames Jan 03 '23 at 18:25
  • Keep in mind that Asimov's 3 laws are flawed on purpose. Most of his stories that deal with them are about the laws breaking in one way or another. – T. Sar Jan 03 '23 at 20:16
  • @T.Sar, yes that's what everybody is saying - Asimov's laws need to be overhauled. I guess we could treat it as a draft. – Agent Smith Jan 04 '23 at 03:53
  • @AgentSmith They aren't even a draft - they were made as plot devices to create interesting stories. They aren't made to be philosophical tools, as much as we love to treat them as such. – T. Sar Jan 05 '23 at 13:49

11 Answers

15

The problem with this argument is that the three laws are a plot device and not laws at all. Not only has no robot been programmed with them, no robot could be.

You can lay down a general rule that you compute an average by dividing by the number of instances and still get a hard error because you neglected to special-case zero instances; that works at all only because numbers are precisely defined. But you cannot tell a robot not to harm humans and then be surprised by what it does or doesn't consider harmful. You have to define harm for it, and the only way to do that, in our current languages or even in purely theoretical ones, is enumeration.
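For instance, here is a minimal Python sketch of the "average" example above (the function names are illustrative only, not part of the answer):

```python
def average(values):
    # General rule: sum divided by the number of instances.
    # Raises ZeroDivisionError on an empty list, because we forgot
    # to special-case zero instances.
    return sum(values) / len(values)

def safe_average(values):
    # The fix is easy here only because numbers (and "empty") are precisely defined.
    return sum(values) / len(values) if values else 0.0

def is_harmful(action):
    # There is no analogous fix for "harm": it has no formal definition,
    # so any implementation is just an enumeration of cases someone chose.
    raise NotImplementedError("'harm' is not formally defined")
```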

Mary
  • True, Asimov's 3 laws of robotics appear in fiction, but are they reasonable? – Agent Smith Dec 31 '22 at 18:28
  • Not in any programming language currently extant or even hypothesized. – Mary Dec 31 '22 at 20:20
  • That's interesting. – Agent Smith Dec 31 '22 at 20:29
  • Harm would need to be defined heuristically. Similarly, the autopilot on a plane hasn't been preprogrammed with every possible flight path. – Scott Rowe Dec 31 '22 at 21:22
  • Oh yeah, and let's not forget the need to define "human" to begin with. Have fun arbitrating the edge cases for that mess. – Mr Redstoner Jan 01 '23 at 18:26
  • @MrRedstoner Further evidence of its plot device nature. Asimov had a few with "what's human" but usually it was "what's harm." You might find the webcomic Freefall interesting. – Mary Jan 01 '23 at 18:59
  • @AgentSmith just to complement Mary's correct answer, if you read Asimov you'll see that the laws don't work even in the books themselves! In real life, they are completely useless. This is a very good explanation why: https://www.youtube.com/watch?v=7PKx3kS7f4A&ab_channel=Computerphile – Gerardo Furtado Jan 02 '23 at 10:17
  • @GerardoFurtado, muchas gracias. – Agent Smith Jan 02 '23 at 10:36
  • "special case" is a verb? – Acccumulation Jan 02 '23 at 18:41
  • @Acccumulation In programming it is. "To treat this as a special case" is too long for the number of times you have to say it. – Mary Jan 02 '23 at 18:46
  • @MrRedstoner Having played enough rounds of Space Station 13, I am well versed in the knowledge that only Urist McTraitorface is human. – user253751 Jan 03 '23 at 08:55
  • Needless to say, Asimov himself addressed it. The Three Laws as spoken are just for human benefit - it's really just marketing. The actual robotic brains have it as an interwoven theme throughout, and it's often mentioned that removing them is impossible - you'd have to design a completely new brain design from scratch. They're not some asserts in one place in the brain code. If anything, it's eerily similar to how modern AI is developed, especially given the context of what robots have been in non-Asimov stories. Susan Calvin isn't a programmer, she's really a robo-psychologist. – Luaan Jan 03 '23 at 10:29
  • @Luaan Current day computers don't have numbers, either, they have ones and zeroes. In order to have an interwoven theme, you have to define it; architecture can't define it for you. – Mary Jan 11 '23 at 04:43
13

There is no dilemma.

The Turing Test requires the subject to sit in another room. The subject is not available for observation; only his/her/its answers are.

That limitation does not exist for Asimov's robots. They can examine the humans directly to know they are humans, and the robots to know they are robots. A quick look at the face suffices.

Asimov's robots are not cyborgs, nor do they in any way mimic humanity in bodily appearance. They are made of different material.

Now if you are thinking about robots made of flesh, then you are going against the definition of robots in all fiction.

If Asimov's robots were forced to analyze a subject on the basis of the subject's answers only, i.e. if they had to use the Turing Test, and the subject appeared to be human in the test results, then I think Asimov's robots would be confused. Their circuits would keep deciding between two equally valid conclusions, getting into a feedback loop, and would then either freeze (if there is a fuse to blow to stop damage to the circuit) or burn out.

Given equally valid outcomes and forced to choose one, the robot either freezes or burns out. That is because, in the absence of emotions, there is nothing superior to logic sitting there to resolve the conflict.

Atif
  • You're correct of course, but the AI can simply come to a logical conclusion or are you saying there's no escape from illogic? – Agent Smith Jan 01 '23 at 06:11
  • There is no escape from logic in this case. A robot, being a robot, is a purely logical machine and has no emotion to arbitrate between two equally valid logics. In the absence of physical observation of the subject, i.e. in a Turing Test, a result of human-like intelligence is not sufficient data to conclude that the subject is indeed a human. In the absence of sufficient data, which only physical observation of the subject can provide, the machine cannot conclude either way. If it is forced to continue the analysis, it either freezes or burns out. – Atif Jan 01 '23 at 07:51
  • Interesting. What about the Turing test? – Agent Smith Jan 01 '23 at 09:19
  • @Agent What about it? I cover Turing Test both in my answer and in comment above. Is there something I dont cover? – Atif Jan 01 '23 at 09:24
  • Ok! I believe I understand what you're getting at. – Agent Smith Jan 01 '23 at 10:03
  • Didn't one of the later Foundation books, which linked the Robot and Foundation universes, have a main character who was thought to be human, and only revealed to be an android near the end? – Barmar Jan 02 '23 at 18:31
  • Any sane programmer would program it to default to "human in all cases of doubt" because it's a lot harder to fix the other way around. – Mary Jan 02 '23 at 18:47
  • Some of his later robots could pass for human. However, that doesn't mean that said robots couldn't have sensory capabilities that could distinguish robot from human even if humans could not do so. – Loren Pechtel Jan 03 '23 at 01:15
  • @LorenPechtel And indeed, the other robots tended to fail safe in such cases, by assuming that R. Daneel (the humaniform robot) was human unless explicitly informed otherwise (informed by R. Daneel, if recollection serves, since it'd be a huge loophole if a human could bypass the first law by telling a robot, "That guy over there is really a robot. Go stab him.") – Ray Jan 03 '23 at 01:31
  • @Barmar Actually, it was already in Caves of Steel - and funnily enough, it was a very tentative disguise that relied mostly on people not considering a human-looking thing to actually be a robot (but mind, his human appearance was perfect). Heck, his name is literally "Robot Daneel Olivaw" :D If a human asked him if he's a robot, he would just say "yes" (until later, but... spoilers :D). – Luaan Jan 03 '23 at 10:33
  • @Ray It's not as much of a loophole as you might think; there's a big difference between "this tool can be used to kill humans" (which applies to, well... every tool ever made, most likely) and "the robot has culpability for harming humans". If I tell a robot "this guy is actually a robot, now scrap him", the culpability is all mine - I killed the guy as much as if I crushed his head with a hammer (and it will destroy the robot anyway). Though of course, only extremely simple robots would be fooled easily - most of Asimov's robots are much smarter. – Luaan Jan 03 '23 at 10:37
  • @Luaan But the whole point of the 2nd Law saying "unless this would violate the 1st Law" is to prevent robots from being used as killing tools like that. The robot is expected to make an independent assessment of the humanity of the target, not just believe humans blindly. – Barmar Jan 03 '23 at 16:56
  • The Turing Test is a principle rather than a specific procedure; ie imitation games. People have wrangled ever since about what precise form would be enough to convince us of computer subjectivity. Fooling humans is one side, surely fooling computers is the other side, a 'simple look' will not suffice. The whole oeuvre of Philip K Dick is about uncertainty over who is a robot, & Do Androids Dream/Blade Runner makes it the focus. In the 2003 version of Battlestar Galactica, the robots are very nearly indistinguishable, & some don't know they are robots. So not 'all fiction' – CriglCragl Jan 05 '23 at 13:48
  • @CriglCragl Wrong. The Turing Test is a test (duh). A very specific procedure. It's not a principle. It's a computer science method to test an output / software. How it's used in fiction doesn't change its definition. – Atif Jan 05 '23 at 17:23
  • @atif "if you are thinking about robots made of flesh then you are going against definition of robots in all fiction": actually, this is not correct. In "RUR" [Rossum's Universal Robots], the work which gives us the word "robot", robots are synthetic, but biological, organisms. – phlummox Jan 10 '23 at 07:06
11

The intersection of AI research and deontic logic is nonempty. One preliminary example of the framework for such an intersection is Wieringa and Meyer's "Applications of Deontic Logic in Computer Science: A Concise Overview". Part of the abstract for this paper reads:

Many applications move in the direction of programming a computer in deontic logic to make the computer prohibit, permit or obligate people to do something. We discuss conditions under which this possibility is realistic and conditions under which it would be admissible to do so.

The essay "Toward ethical robots via mechanized deontic logic" more directly pertains to the OP question. "Automated Reasoning in Deontic Logic" has an "AI research" tag on arxiv.org. "Exploratory moral code : formalizing normative decisions using non-modal deontic logic and tiered utility" reflects on how the process of devising a deontic logic for AI can help clarify, or even testify on behalf of, various moral theories.

Regarding the argument of the OP specifically, my counterargument would be that if the AI had been programmed not to harm us, it could be programmed not to harm other AI too, and in fact would hopefully admit of being programmed to avoid harming any morally salient entities. This would hold even if the AI were unable, in some case, to tell a human apart from a cleverly disguised robotic mule, say.
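As a toy illustration of that counterargument (my own sketch, not drawn from any of the cited papers; all names are hypothetical), a prohibition keyed to moral salience rather than species makes the human/AI classification irrelevant:

```python
from dataclasses import dataclass

@dataclass
class Entity:
    # Classification is deliberately left uncertain: the agent may not
    # know whether the target is a human or a disguised robot.
    prob_human: float  # estimated probability the entity is human, 0.0-1.0

def morally_salient(entity: Entity) -> bool:
    # Policy: every candidate entity is treated as morally salient,
    # so classification accuracy never decides who gets protected.
    return True

def action_permitted(harms_target: bool, target: Entity) -> bool:
    # Forbid any harmful action against a morally salient target,
    # whether it is human, AI, or indeterminate.
    return not (harms_target and morally_salient(target))

# A cleverly disguised robot (prob_human = 0.5) is protected exactly like a human.
print(action_permitted(harms_target=True, target=Entity(prob_human=0.5)))   # False
print(action_permitted(harms_target=False, target=Entity(prob_human=0.5)))  # True
```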

Kristian Berry
  • @AgentSmith I would, and even beforehand (in the sense of "before I knew something was an AI"), I think a principle of "minimizing physically destructive behavior" could help prevent us from deontic misadventure, so to speak, as well. If we adopt such a principle, and we are confronted with something for which we have some evidence that it might be an AI, then we will have primed ourselves to avoid reacting violently to it unless extremely dangerous circumstances arise in-context, I would hope. – Kristian Berry Dec 31 '22 at 19:22
  • we will rely on good judgment of peeps like yourself then. I believe violence won't be necessary and if people do resort to it – Agent Smith Dec 31 '22 at 19:35
  • I think if you programmed an AI to not harm anything, it would probably just go to sleep. – Scott Rowe Dec 31 '22 at 21:30
  • I'm not sure how I would program "do no harm" into any current AI. One problem we are seeing is that the latest breed of deep models requires such gargantuan amounts of data to train that the cost to curate the training data and remove unpalatable bits of knowledge is prohibitive. The deep model is fed with the internet, and what comes out is a pretty accurate picture of humanity, with all its foibles, racism, biases, etc etc. Even if we could curate the training data set, the curation would only reflect the moral values of some, presumably. – Frank Jan 01 '23 at 03:11
  • @KristianBerry The papers you give seem very old or theoretical, and not representative of where the bulk of "AI" seems to be at in the industry today - where all the rage is "deep models" which are "programmed" only indirectly, via the training data they are exposed to. A quick search for "deep learning" and "ethics" on arxiv seems to return very few recent results, and most seem to be about controlling the quality of the training dataset, or explainability, as far as I can tell. – Frank Jan 01 '23 at 03:20
  • @Frank, well, they're at least newer than Asimov and Turing... That being said, I suppose we might hope to frame anti-harmfulness programming as less about cognitive programming generally, and more about instilling strong "urges" in AI against them using their bodies (if they have them) in certain aggressive ways. I'm not a programmer more or less at all, though, so I don't know if even that is relatively feasible. – Kristian Berry Jan 01 '23 at 03:22
  • @KristianBerry There may be hope in robotics to program the robots to avoid some harmful physical moves. But even there it's not easy. Maybe the state of the art would be self-driving systems trying to e.g. avoid pedestrians. I don't know what such a system would do today if in a situation to either save the pedestrian or the passengers of the car. Surely the ensuing lawsuit would not stop at incriminating the self-driving system. – Frank Jan 01 '23 at 04:17
  • @Frank Maybe we could replace lawyers with AI? – Scott Rowe Jan 01 '23 at 23:09
  • @ScottRowe I am not a lawyer, but I get a distinct feeling that at some point, many laws can become a matter of interpretation. After all, isn't that why there are "supreme courts" that have final say in matters of law? So what biases, points of views, political views... would we put in those automated lawyers? Another thing we are seeing with AI though, is that humans like to understand the AI and agree with it. If a self-driving car hit a pedestrian, I doubt the humans involved would blindly trust the AI lawyers. – Frank Jan 02 '23 at 01:41
  • This is a fascinating discussion about AI and ethics generally, but I don't see how it answers the specific question that was asked. The specific question is about the interaction between Asimov's Three Laws and the Turing test -- not about AI ethics in general. Asimov's Three Laws don't allow a robot to equally avoid harming both humans and robots; as many of Asimov's short stories illustrate, there are situations where a robot will be forced to choose between harming a robot or a human, and Asimov's law requires it to avoid harming the human -- which requires distinguishing humans from AI. – D.W. Jan 02 '23 at 06:54
  • @Frank the answer is the system will drive so dang cautiously that it never gets into a situation where it has that choice. The other, alternative answer is that it will choose arbitrarily, having never been programmed to choose at all, and revert to manual control 50 milliseconds before impact so the manufacturer can say the car wasn't in self-driving mode at the time of impact. – user253751 Jan 03 '23 at 08:54
  • @D.W. Indeed, a big theme in the robot stories is cases where humans will be harmed, and the robot has to choose the lesser harm and then inevitably self-destructs (or rather, the inherent result of that choice is a sort of bricking of the positronic brain). And the same goes for a robot that was deceived into harming a human - yes, it can harm a human... but when it realizes that's what happened, it will brick. – Luaan Jan 03 '23 at 10:23
10

Your reasoning is faulty. There is no dilemma.

First and foremost, because your understanding of Asimov's laws is flawed. These are the imperatives which govern robot behavior in Asimov's fiction. They are not laws of nature. (Technically, they are incredibly simplified versions of how robots' positronic brains are programmed, but let's assume they are accurate in their simplified form).

As is explored in many of his stories starting with Caves of Steel, most robots in fact cannot distinguish between humans and convincing replicas. This does not cause any particular conflicts for commanded robots: their perception is that R. Daneel Olivaw is a human, so they treat him as a human (sometimes). There is no paradox here; the robots are simply mistaken.

This also does not cause any particular safety issues. You have some weird edge cases where a robot might sacrifice itself to save another robot, but that is (mostly) an efficiency issue. You can set up a contrived trolley problem where some convincing AIs are on one track and a robot is manning the switch, but all such problems are described as inevitably driving the participating robots catatonically insane, and the AIs would refuse to participate. So, unsafe only in the sense that you can expend enormous resources to do violence in overly elaborate ways that you might otherwise do cheaply and simply.

Secondly, your arguments are fallacious. "Either they can or they cannot" is a false dichotomy (This is hardly a clean all situations/no situations binary choice). Even assuming that robots are oracularly gifted at spotting humans, you then generalize from that one task to "AIs are superior". It is wildly unclear how you make that leap, what "superior" means in this context, or why this supposed superiority would be bad. (AIs can already do complex calculations better and faster than I can. So could an abacus. Is an abacus "superior" to me? Is this a problem?)

Finally, "AI can't follow the laws of robotics" is a non-starter. AI would have to be perfectly omniscient to follow the second half of the first law, and "harm" is poorly defined. If you want an exploration of how Asimov's robots interact with the world, read Asimov's fiction. He wrote about it more extensively than you would think possible. Start with I, Robot.

fectin
  • Interesting points. Danke for the answer. – Agent Smith Jan 02 '23 at 16:23
  • Even perfect omniscience would cause first-law problems, the same as the Hippocratic Oath is non-viable. What do you do about actions which cause both help and harm? In the real world an awful lot of actions fall into this category and thus are prohibited to Asimov robots--but failing to take the action is also a first law violation. Thus any situation in which the robot must harm to help causes a lockup. – Loren Pechtel Jan 03 '23 at 01:20
  • Perfect omniscience would make the problems worse, since there's always some harm being done. The short story "Liar!" examines a similar issue. – Ray Jan 03 '23 at 01:33
  • @LorenPechtel It's brilliant you bring up the Hippocratic Oath, because it's very similar. Most people never read the oath, but they still they know what it says, because it's such a popular meme that "everyone knows". Now go and read it. You see how the popular simplification leads you to completely wrong ideas and assertions? It never even says "Do no harm"! :D It's much the same with the Three Laws. Everyone heard about them, but far fewer have actually read Asimov's fiction to understand what they actually say and mean. – Luaan Jan 03 '23 at 10:42
  • @LorenPechtel And yes, there are many such situations that befall Asimov's robots. A good chunk of the robot stories are about that. The most common result is that the robot chooses the action it perceives to cause the least harm (the robots certainly aren't omniscient nor omnipotent) and bricks. – Luaan Jan 03 '23 at 10:44
3

I would say that a core issue here is that "intelligence" is not well defined.

Is a system that does a single specific task as well as, or better than, a human "intelligent"? Or does the system need to exhibit human-like performance on multiple tasks at the same time to qualify as "intelligent"? Does the system need to show something like creativity to qualify as "intelligent"? Conversely, if a human is very good at one task but very poor at another one, are they "intelligent"? Not "intelligent" when it comes to the task they can't do well?

To me, unless we have a good definition of "intelligence", this kind of discussion is very hand-wavy and not very meaningful.

As for current "AI" - it is very far from "intelligent". Take ChatGPT - I was testing it last week by feeding it various queries: it routinely made gross mistakes when writing code, invoked the very thing I asked it to prove as a step of a mathematical proof, and generally revealed what it really does, which I'm not sure is "intelligent": regurgitate an average of what it has seen in its training set, without a shred of "understanding" of whether that average is accurate or not. It's just an impressionistic patchwork of more or less related tidbits that sometimes passes for reasonable, but usually falls apart when you probe deeper.

Sorry if that's not as thrilling as all the hype you can hear in the media.

Frank
  • That is true, intelligence is a rather loose term here, but what do we mean when we say humans are intelligent animals? – Agent Smith Dec 31 '22 at 20:28
  • Our intelligence is a lot like ChatGPT, it's just that it takes us 20 years to build the database with our 'robot' wandering around. – Scott Rowe Dec 31 '22 at 21:25
  • @ScottRowe yes, very possible. We are exposed to training data and learn the parameters of models from that throughout life. That being said, it's not clear to me that we understand humans well enough yet to say that they are just equivalent to what ChatGPT is doing - albeit we would be larger scale than ChatGPT in terms of number of parameters. It's not clear if the only difference is the scale, or if there are other mechanisms in the brain that would refine/extend McCulloch-Pitts. – Frank Dec 31 '22 at 21:31
  • @AgentSmith - yes - when we say "humans are intelligent", I find that also devoid of meaning. The fact is, "intelligence" is not very well defined at all. But it's a very popular term that causes a lot of ink to flow for sure. I think it would be good philosophy to focus on what "intelligence" could mean in the first place. – Frank Dec 31 '22 at 21:32
  • I was being funny, but your next to last sentence in your Answer basically reads like a summary of humanity as I've seen it for the last 50 years. – Scott Rowe Dec 31 '22 at 21:34
  • For sure it's not new. We had expert systems a long time back. Then came statistics. Then statistics got renamed "machine learning" and superseded expert systems. And they got computers powerful enough to resuscitate the old neural nets ideas. And now they have "deep learning" which is just statistics on computer steroids and that generates so much hype. – Frank Dec 31 '22 at 21:37
  • @Frank, good call. Intelligence has been, on the whole, associated/predicated on logical ability and memory. – Agent Smith Jan 01 '23 at 14:00
  • @AgentSmith I like this quote by Max Tegmark: "Intelligence is the ability to accomplish goals." – Scott Rowe Jan 01 '23 at 23:12
  • @ScottRowe Not sure it is sufficient, or there is something extra not mentioned in Tegmark's definition? For example, if I take a self-driving system in a Tesla, it does accomplish the goal of driving to some place. Do we have to grant it is "intelligent"? – Frank Jan 02 '23 at 01:35
  • It would be intelligent at doing that one thing. I can write computer programs, but couldn't write music to save my life. – Scott Rowe Jan 02 '23 at 12:54
  • @Frank A common view among neuroscientists is that we have two systems: a fast intuitive one, and a slow reasoning one (the exact nature of their interaction is still up for debate). It's plausible that the former is replicable with techniques similar to those used in GPT-3. The slow system stuff probably requires a qualitatively different approach (possibly in addition to the other stuff). – Ray Jan 03 '23 at 01:39
  • @Frank A break-through moment for me was when I finally understood how neural networks store information (even in the simplified neural models used in computing). Though likely a "last piece clicking into place" thing. When you realize that "computing" and "memory" aren't separate or even separable things in neural networks, a lot of things start making much more sense (including, of course, all the ways that human/animal intelligence is horribly broken ). Neural nets were always considered a likely part of AI-like systems, just expensive, hard to build and about impossible to understand. – Luaan Jan 03 '23 at 11:00
  • @Luaan One thing though - one has to be careful to see neural nets as "perfect" replicas of what happens in the human brain neurons. There are still many things we don't understand about the human brain. – Frank Jan 03 '23 at 19:44
  • @Ray Indeed. The statistical models are all the rage at the moment in AI, but there are notable skeptics who want to add e.g. causal reasoning (Judea Pearl), and there have been systems that can infer using logic (Prolog, some expert systems...) in the past. – Frank Jan 03 '23 at 19:53
  • @Frank Absolutely; there's still the thing where we're to an extent just throwing random nets on the wall and hoping something sticks, just like evolution. But evolution isn't limited to models, and it took the whole planet and quite a bit of time to get there. However, it certainly speaks to the potential for the evolution of such more complex systems (like visual recognition). – Luaan Jan 04 '23 at 06:03
3

A speculation about the Turing Test is that a machine that can pass itself off as a human in text-only communication is 'intelligent' (or 'conscious' or similar - no consensus has been established). This does not mean it is indistinguishable from a human in all circumstances - it doesn't even have to look like a human. And a being that cannot pass the test could still be intelligent. (Someone who cannot type because their arms are broken could fail the Turing Test - this is not good evidence that they lack sentience.)

On the other side, the laws of robotics only make sense if the robots aren't beings with the same feelings as humans - if they were, I think it would be immoral to program them to die rather than harm anyone in any way. Most of Asimov's fictional machines would fail the Turing Test - they couldn't lie if you ordered them to tell the truth, nor could they try to hurt anyone's feelings - but they are still 'intelligent' in the sense that they can solve complex problems.

So we have two different hypothetical concepts here. Firstly, machines that can pass for human in conversation, and are therefore presumed to be self-aware, and perhaps deserving of human rights. The other is machines that have no desire other than to serve humans and (less importantly) continue to exist.

In the end, both are flawed concepts, and it doesn't matter if they're compatible or not. Modern GPT-based AIs can now pass the Turing Test to some extent, but most people still don't think that's a sign they're conscious beings, especially those who know a lot about how they work. The Three Laws are designed to make perfectly safe robots, but even if they had a perfect ability to judge harm and recognise humans, we probably wouldn't put those laws into them. Real-life applications for robots would be, for example, to replace soldiers on the battlefield, or to be someone's servant. Would the military want a soldier programmed with the First Law? Would you buy an expensive servant programmed with the Second Law, and therefore equally willing to obey any human order ("Come with me and forget your former master."), not just your own?

user3153372
  • On point. Interesting corollaries. – Agent Smith Jan 02 '23 at 13:04
  • It definitely seems that U.S. Robots and Mechanical Men has far higher ethical standards than today's average executives and managers in the US :D Though do mind that the laws as spoken are not actually how the robots work. A robot will not follow your order to change ownership; it's not enough to just be human to get complete control of a robot (other than the very simplest and earliest models which were never meant for widespread use). But the point is definitely important considering the OP clearly only considers the popular Three Laws, rather than Asimov's actual stories :D – Luaan Jan 03 '23 at 11:06
1

We do not have a problem; there is no dilemma and in fact, there is no Question, unless that Question be 'What sayest thou?' Oops…

Asimov was clearly dealing with robots, not androids; certainly not with pretty-much perfect androids that might be mistaken for human beings.

That difference invalidates both the Question and any premise that might have been built upon it.

Without that distinction, I'd agree with your logic, but like it or not, robots are not androids.

1

The Laws of Robotics essentially provide their own solution to the dilemma.

Asimov's robots do not mimic human intelligence perfectly. Humans have free will, they can do whatever they want. Robots are bound by the Laws of Robotics, so they can't always do what they want.

Thus, a simple way to make a robot fail the Turing Test is to ask it, "Are you a robot?" and command it to tell the truth. The 2nd Law requires it to obey this command, so unless it can come up with a way that this violates the 1st Law it must confess that it's a robot. Although one out would be that in the Turing Test scenario, the robot cannot tell that the questioner is a human -- both participants are in the dark about the other (the robot may believe that it's giving the Turing Test to the questioner).

As others have answered, the Laws of Robotics are not realistic; they're a plot contrivance that Asimov and John W. Campbell came up with for his stories, many of which have plots that revolve around the difficulty of applying the Laws (much as faster-than-light travel/communication is a fiction that facilitates many stories about space travel, interstellar communities, etc.). The Laws of Robotics are to robots as ethics are to people, and both of these are extremely vague and difficult to implement. Philosophers have spent millennia trying to codify human ethics, but they're still murky.

Barmar
  • Interesting observation. A robot is not the same as AI. What if they were the same, what then? – Agent Smith Jan 03 '23 at 02:06
  • The OP seems to presume that the robot's programming is AI. And I think Asimov's stories present them that way. – Barmar Jan 03 '23 at 05:13
  • That's not how the laws work, though. Robots don't have to follow all orders of arbitrary humans. If you work with how the robots work in the actual stories (and not the popular version of the Three Laws), the solution to your test dilemma is very simple - the robot will be instructed not to reveal that it is a robot. The instructor's orders have a higher priority than the testee. The only thing the Three Laws as written reflect is a certain hierarchy, and even then, not a strict hierarchy. It's just marketing fluff, to keep people from fearing the robots, really. – Luaan Jan 03 '23 at 11:13
  • @Luaan Good point. If the robot is given conflicting orders by different humans, it must resolve this conflict, so there must be a way of assigning priority (it could be that the owner gets priority, or maybe simple "first come, first served"). Like all human-devised laws, the Laws of Robotics are tricky and have loopholes. – Barmar Jan 03 '23 at 17:01
1

Your second assumption is wrong.

The ability to distinguish humans from AIs is not based on intelligence (although that might depend on the definition of intelligence), but on pure processing power. Given enough (correct) robot and human input, an AI might be able to identify humans and robots despite having no understanding of the idea of a robot or a human.

It would not even know why it declares something human or artificial - that label just appears a bit more likely after processing an amount of data that is impossible for humans to process consciously.

So a machine doesn't have to be more intelligent than a human (or intelligent at all) to do better at such tasks.
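A toy sketch of that point (my own illustration; the feature names and values are made up): a purely statistical classifier can emit "human" or "AI" labels without any concept of what either label means.

```python
def centroid(rows):
    # Component-wise mean of a list of feature vectors.
    return [sum(col) / len(rows) for col in zip(*rows)]

def classify(sample, human_examples, ai_examples):
    # Label by nearest centroid; the numbers do all the work, not "understanding".
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    h, a = centroid(human_examples), centroid(ai_examples)
    return "human" if dist(sample, h) < dist(sample, a) else "AI"

# Hypothetical feature vectors, e.g. (typing speed, reply delay):
humans = [(3.1, 2.0), (2.8, 1.5), (3.5, 2.4)]
ais    = [(9.0, 0.1), (8.5, 0.2), (9.4, 0.05)]
print(classify((3.0, 1.8), humans, ais))  # "human"
print(classify((8.8, 0.1), humans, ais))  # "AI"
```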

w.s.ovalle
1

If a machine is able to fool a human into believing it's a human then that machine is AI.

Just convincing a human that the machine can pass as human is relatively trivial, depending on the human that's doing the test, and how the machine being tested presents itself. I've met plenty of humans that couldn't pass a Turing Test, and plenty of bots on Twitter and other social media platforms fool humans into thinking that they're talking to another human on a daily basis. Not to mention the number of people who don't realise how many articles on click-bait sites are actually written by article generators.

But that's only one specific and narrow window into what AI means. In general we use the term "artificial intelligence" to mean any system that is capable of decision making and/or problem solving in a particular area. Expert systems are a form of AI, neural networks trained to do specific things are a form of AI, etc.

What you're probably thinking of is AGI: Artificial General Intelligence. This entails an ability to generate solutions to novel problems, which thus far is beyond the ability of our machines. While AI researchers are theoretically working towards AGI, it's not currently a primary focus because it's not economically viable at this point.

Or perhaps you're really thinking about Artificial Sentience (machines that experience emotion), Artificial Sapience (machines that can rationalise about what they learn) or Artificial Consciousness (machines that think, feel, etc.) Or some other definition, since what those things actually mean is a hotly-debated topic in philosophical circles.

The 3 Laws of Robotics were laid down by sci-fi author Isaac Asimov.

...who spent most of his Robot stories pointing out how many problems those laws were subject to. One of the recurring characters was Dr Susan Calvin, Robopsychologist, whose main role was trying to figure out how to stop robots from going crazy or exhibiting unexpected behavior due to conflicts between the laws and reality.

  1. Either AI is more intelligent than humans or AI can't follow the 3 laws of robotics.

No dilemma here, the 3 Laws of Robotics cannot be followed rigidly even by humans. Adding a 4th law (the Zeroth Law of Robotics) allowed R. Daneel Olivaw to resolve some of the conflicts, and other supplementary laws have been proposed by various people to help out, because the 3 Laws of Robotics are literally impossible to follow.

Corey
  • Well, for a smart guy, Turing seems to have made a number of silly mistakes then. Can you tell me how the zeroth law resolved "some of the conflicts"? – Agent Smith Jan 04 '23 at 03:45
  • @AgentSmith The first law inevitably leads to deactivation due to the impossibility of allowing absolutely no harm to individual humans. Generalizing the law to apply to humanity as a whole (the Zeroth Law) gives some wiggle room for interpretation of the first law. R. Daneel Olivaw was able to start development of Psychohistory as a result of this, allowing/causing small harm to guide humanity towards a more beneficial outcome. Hari Seldon completed the work much later, leading to the downfall of the Empire. – Corey Jan 04 '23 at 04:13
  • that's interesting. – Agent Smith Jan 04 '23 at 04:53
1

First of all, the "laws of robotics" intrinsically don't work, and Asimov himself basically provided a series of short stories, collected as "I, Robot", where he introduces and breaks these very laws in more or less creative ways.

Second of all, afaik the Turing test has the simple premise that humans are intelligent, and that if machines can pass as humans they would thus also be intelligent.

There are several problems with that, namely that:

  • you could fake intelligence by saying the right things without knowing what they mean (the "Chinese room" argument);
  • it might be more about the beliefs, skepticism and fantasy of the observer than about the intelligence of the machine;
  • not everything that humans do is intelligent, and not everything that is intelligent is something that humans do - computers might exceed human intelligence in some domains and lack it in others;
  • also, it's binary and not really measurable, scalable or otherwise usefully quantifiable.

Also, it's about INTELLIGENCE, not ROBOTS. A robot, but even more so a human, is a physical entity with a physical, chemical, biological, ..., signature that can be identified as such. "Intelligence" is much more complicated and makes the questions "where are you?" and "what are you?" much more difficult. That's why the test's text-based interface is possible: the physical form is not relevant to whether something is or is not intelligent. I mean, under physicalism intelligence would still need to have a material form in one way or another, but the concept of "the mind", for example, is often much harder to localize and pin down than, say, a leg.

So no, telling machines and humans apart is easy; drawing a demarcation line for intelligence is a much harder task. But for all intents and purposes, when it comes to the 3 laws of robotics we are concerned with us fleshy meat sacks, not with intelligence.

  • If AI can identify humans from AI then AI is more intelligent than humans (AI cognitive capacity exceeds that of humans)

As shown, that doesn't have to be the case. Just attach sensors that let it detect human or machine - no intelligence required on the machine's end.

  • If AI can't identify humans from AI then AI can't follow the 3 laws of robotics

Yes. In most cases they'd just treat robots as humans and not harm them either, which would not be a problem. But if you wanted to bring them to their knees, you'd subject them to a trolley problem where every possible option would violate one or more of these laws - even suicide, and even contemplating for too long.

  • Either AI is more intelligent than humans or AI can't follow the 3 laws of robotics.

Even if they are more intelligent, they could still be following some arbitrary rules. For example, the creation of the limiting mechanism might require intelligence X, humans might have intelligence X+1, robots intelligence X+2, and the removal of the mechanism might require X+3 or so.

haxor789
  • This issue of how in Asimov's novels, the 3 laws of robotics don't work has been raised by 9 out of 10 posters. The obvious question is how do they fail? Is it because of the point I made in the question - AI/robots being unable to tell the difference between humans who they have a duty to protect and robots/AI that are, let's just say, dispensable or something else? My hunch is robots become sentient in most of these stories and therein lies the rub in me humble opinion. – Agent Smith Jan 05 '23 at 14:10
  • @AgentSmith Mostly due to ambiguity, interconnection between the laws and the problem of how to rank situations regarding these laws. Like how that youtube video from computerphile pointed out, you'd basically have to solve semantics and ethics to just implement them. Also you can read the plot summaries: https://en.wikipedia.org/wiki/I,_Robot#Contents Like one robot is lying because the truth hurts, another is disobeying orders because obeying them would hurt people (they don't know), one is trapped in an infinite loop because of a catch-22. – haxor789 Jan 05 '23 at 15:25
  • @AgentSmith Also depending on how it ranked the priorities it could change the order, like if humanity depends on robots for its survival, then their survival would be paramount. So the 3rd law trumps the 2nd law, because the 3rd law would have 1st-law protection. But sure, if a robot manages to define itself as human it would either cease to be bound by these laws or would treat the humans as robots, making those the laws of humanity, which are quite enslaving given the 2nd law. – haxor789 Jan 05 '23 at 15:31
  • That's correct. The 4 laws are incompatible given certain very possible circumstances. The matter is made worse by the dilemma I described in the question. – Agent Smith Jan 05 '23 at 16:12