
DT Debates: Should robots be held to a human moral compass?

[Image: a robot revealing a heart, symbolizing the human moral compass]

The rise of A.I. is upon us. Google Glass, self-driving cars, and our continued attempts at robotic life are not to be ignored. And a recent study found that we’re beginning to think about the morals and ethics we humans will hold our droid friends to. So we had to pit staff writers Andrew Couts and Amir Iliaifar against each other to ask…

Should robots be held to a human moral compass?

Andrew

 

This is a highly complex question to which I do not claim to know all the answers. But I can definitively say that we must hold robots to at least the same moral and ethical standards that we hold ourselves. Obviously, we cannot let robots do things to humans that we do not allow amongst one another — that’s something science fiction god Isaac Asimov wisely concluded all the way back in 1942.

The big question here, I think, is whether we have a moral imperative to treat robots the same way we treat people. Fortunately, some have already begun to answer this question. In 2007, the South Korean government drafted a code of ethics to prevent humans from “abusing” robots (and vice versa). This, too, is in line with Asimov’s “Three Laws of Robotics,” the third of which states that “a robot must protect its own existence” unless doing so means injuring a human or disobeying a human’s orders.

Of course, some might say that a robot, no matter how complex or lifelike, is really nothing more than a fancy computer — which is technically true — and that since no moral code prohibits smashing your computer or tossing it off a building, there should be no rule against damaging or destroying robots. That view is short-sighted. Once we reach a point where robots closely mimic the physical attributes and/or “mental” wherewithal of humans, it will become increasingly difficult to distinguish between the two. So I believe it is important that we grant these machines the same respect we grant our fellow man — if only to hold back the most savage instincts of human nature.

 

Amir

 

We meet again, Mr. Couts. Last time you proved a more than worthy adversary, and I imagine I can only expect more of the same this go-around. That being said, while I agree with your take on instilling a modicum of morality toward our eventual robotic overlords, I think you are missing the point: any question of ethics and morality should be the sole enterprise of the human beings behind the machines, and not the other way around. We may one day inhabit a world where robots exist autonomously, but the reality is that no matter how advanced robots become, they will never be considered “real” (and no, I’m not going to get into a metaphysical argument over this), and they should always be the responsibility of their “creators.” Therefore, any legal or ethical standards you suggest are moot as applied to the robots themselves; they need to apply to whoever develops, builds, and operates them.

As for there being a “moral imperative to treat robots in the same way we treat people,” again, I don’t think that really matters. Sure, we can pass laws and whatnot, and it’s certainly doable, but I question whether that would be adequate given the increasingly destitute conditions of people all over the world. Not to be glib, but we need to focus on the legal and ethical standards we place upon ourselves before we try to codify or promote any sort of “robotic equal rights.” There are plenty of living, breathing humans out there who don’t enjoy even the most basic human rights, so I simply suggest we concentrate our attention there. On a side note: Asimov was a brilliant man, and I wholeheartedly agree with his Three Laws.

 

Andrew

 

While I completely agree with you that, at the moment, the problem of people treating each other badly is far more pressing than robot morality, we must accept that the day when robots inhabit every nook and cranny of our lives is quickly approaching. And there must be at least a minimal code of ethics that guides how we treat and interact with these mechanical “beings.”

I agree with you that the scientists, engineers, and corporations that create robots should be held responsible for the actions of their contraptions. Just because robot-makers must follow certain ethical guidelines, however, doesn’t mean that those of us who interact with robots cannot also follow a code. Of course, I don’t believe this code should be, or even can be, the same as the moral code that guides our interactions with fellow humans. But I do believe it is possible for humans to act immorally toward robots, even if a robot can never be conscious of those actions in the way a human (or even a dog, ape, or alpaca) is.

Imagine this scenario: Say you purchase a robot butler. Your robot butler serves you well, day after day. Then, one cloudy May afternoon, it accidentally trips on the carpet and spills a giant glass of grape Kool-Aid all over your sheepskin R2-D2 rug. You lash out and chop the head off your robot butler — let’s call him “Chris” — rendering him, well, headless, and completely useless.

Now, Chris hasn’t the faintest clue about what just happened. But you do. You know you let your negative emotions get the better of you, and you acted out with violence. In my mind, that is morally wrong simply because you had a violent reaction and let the evil, wicked part of your soul dictate your actions. That may not be the same as setting your step-brother on fire because he put snot in your comic books (yeah, I know all about that, Amir; don’t try to hide it), but it is at least ever so slightly wrong. So, you know, there should be rules against that.

 

Amir

 

There is only one rule: There are no rules! Now that that’s out of my system… I see what you’re getting at, and I just don’t agree. Why would we need to treat these “mechanical beings” as anything other than property? Yes, they might be intricate and infinitely cool, but beyond the mechanized components that make up their rusty innards, there is nothing about a robot that makes it real or intrinsically human. Unless it’s a living, breathing organism, I don’t see the need to advocate for any sort of laws or code of ethics toward machines.

I’m glad you agree with me that the ultimate responsibility for a robot’s actions lies with its creators/owners, but that’s the only moral imperative I see here. A machine is a machine. It has no emotions or feelings. If I want to rough up my machine, then so be it; I don’t see the problem. If I break it, I will need to buy another one, and if I can’t afford to, well, then I’m up chocolate creek without a popsicle stick, aren’t I? Now, that doesn’t mean I wish to go around decapitating my robotic manservant (whom I would totally dress up to resemble you, glasses and all, FYI), but who the heck cares? It’s a robot! If I let my emotions get the best of me, I’m out an expensive robot, and that is going to be more of an impetus for me not to mistreat my property than any sort of moral code shoved upon me. We have plenty of machines right now that perform crazy awesome tasks; would you exclude them from your robo-crusade just because they don’t resemble a human? The fact is, there is no distinction: a machine is a machine. Now come with me if you want to live.

 

Andrew

 

I completely understand your logic, but I feel as though your argument is woefully short-sighted. At some point in the future, there will be machines capable of acting more human than some actual humans. Yes, it will still be a machine, technically, but that’s like saying humans are still just animals. Which, incidentally, we are.

Robots will not just be “machines” in the way that my smartphone is a machine, or my lawnmower is a machine. These will be fully functioning members of society, some of them capable of amazing feats, both physical and mental. We will be able to talk to them, confide in them, and perhaps even hang out and watch movies together. They will become companions, confidants, even friends. Just as Google is now capable of “learning” what types of information each of us is looking for in our searches, so too will these artificially intelligent beings be able to learn our wants, our needs, and our emotions. Or, at the very least, our most likely response to a piece of data or stimulus. I can all but guarantee that many of us will not view these next-generation robots as “just machines.” And when that happens, we will not be able to justify damaging them, or violently knocking them out of existence, without feeling as though we’re doing something wrong. Which is precisely why we need to decide on an ethical code of conduct now, before things get messy.

 

Amir

 

I empathize with your position, Andrew, really I do, but while my viewpoint might be “woefully short-sighted,” I happen to think you too are missing the bigger picture. If you start passing laws protecting robots’ existence, then you’re going to need to recognize them as full-fledged members of society, cognizant of their rights and privileges within our social fabric. Of course, that leads to the very tricky endeavor of actually getting people to recognize robots as legitimate beings in the first place, which I don’t think will ever happen, especially among religious folk.

But let’s say, for argument’s sake, we made that technological leap into the future and robots are barely distinguishable from humans. What happens if I accidentally run over a robot? Should I be charged with second-degree manslaughter? No, of course not; that’s absurd. And if we are going to treat robots as living beings, what happens when they start demanding rights, or worse, taking them?

No matter how much technological wizardry you put into a robot, it’s going to be a robot. Everything else is veneer. If anything, I think the more humanity we instill upon a machine, the more we detach ourselves from our own.

I’ll leave you with this: Right now, the U.S. military uses drones (highly sophisticated unmanned machines) to kill military targets. We happily program these machines to do our dirty work. Why? Because we can. They can’t feel, and they have no moral or ethical code to live by; they just obey. So much for Asimov’s laws…

[Image courtesy of Dvpodt/Shutterstock]
