Can We Teach Robots Ethics?
Have you ever thought about a future with flying cars? What about cars that drive themselves? This informational text explores the possibility of self-driving cars and the ethical questions they raise. As you read, make note of the details that support the author's purpose throughout the text.
We are not used to the idea of machines making ethical1 decisions, but the day when they will routinely do this -
by themselves - is fast approaching. So how, asks the BBC's David Edmonds, will we teach them to do the right
thing?
The car arrives at your home bang on schedule at 8:00 am to take you to work. You climb into the back seat and
remove your electronic reading device from your briefcase to scan the news. There has never been trouble on
the journey before: there's usually little congestion.2 But today something unusual and terrible occurs: two
children, wrestling playfully on a grassy bank, roll on to the road in front of you. There's no time to brake. But if
the car skidded to the left it would hit an oncoming motorbike.
The year is 2027, and there's something else you should know. The car has no driver.
I'm in the passenger seat and Dr. Amy Rimmer is sitting behind the steering wheel.
Amy pushes a button on a screen, and, without her touching any more controls, the car drives us smoothly
down a road, stopping at a traffic light, before signaling, turning a sharp left, navigating a roundabout3 and
pulling gently into a lay-by.4
The journey's nerve-jangling for about five minutes. After that, it already seems humdrum.5 Amy, a 29-year-old
with a Cambridge University PhD, is the lead engineer on the Jaguar Land Rover autonomous6 car. She is
responsible for what the car sensors see, and how the car then responds.
She says that this car, or something similar, will be on our roads in a decade.
1. Ethical (adjective) relating to ethics, the moral principles that influence a person’s behavior
2. traffic
3. a circular intersection where traffic flows around a central island
4. an area beside a road where vehicles can pull over and stop
5. boring
6. Autonomous (adjective) able to operate without human control
The dilemma prompted by the children who roll in front of the car is a variation on the famous (or notorious)
“trolley problem” in philosophy. A train (or tram, or trolley) is hurtling down a track. It's out of control. The
brakes have failed. But disaster lies ahead - five people are tied to the track. If you do nothing, they'll all be
killed. But you can flick the points and redirect the train down a side-track - so saving the five. The bad news is
that there's one man on that side-track and diverting the train will kill him. What should you do?
This question has been put to millions of people around the world. Most believe you should divert the train.
But now take another variation of the problem. A runaway train is hurtling towards five people. This time you
are standing on a footbridge overlooking the track, next to a man with a very bulky backpack. The only way to
save the five is to push Backpack Man to his death: the backpack will block the path of the train. Once again, it's
a choice between one life and five, but most people believe that Backpack Man should not be killed.
This puzzle has been around for decades, and still divides philosophers. Utilitarians, who believe that we should
act so as to maximize happiness, or well-being, think our intuitions7 are wrong about Backpack Man. Backpack
Man should be sacrificed: we should save the five lives.
Trolley-type dilemmas are wildly unrealistic. Nonetheless, in the future there may be a few occasions when the driverless car does have to make a choice - which way to swerve, who to harm, or who to risk harming. These
questions raise many more. What kind of ethics should we program into the car? How should we value the life
of the driver compared to bystanders or passengers in other cars? Would you buy a car that was prepared to
sacrifice its driver to spare the lives of pedestrians? If so, you're unusual.
Then there's the thorny matter of who's going to make these ethical decisions. Will the government decide how
cars make choices? Or the manufacturer? Or will it be you, the consumer? Will you be able to walk into a
showroom and select the car's ethics as you would its color? “I'd like to purchase a Porsche utilitarian ‘kill-one-
to-save-five’ convertible in blue please…”
One way to approach these problems involves what is known as “machine learning.”
Susan Anderson is a philosopher, Michael Anderson a computer scientist. The best way to teach a robot ethics,
they believe, is to first program in certain principles (“avoid suffering”, “promote happiness”), and then have the
machine learn from particular scenarios how to apply the principles to new situations.
Take carebots - robots designed to assist the sick and elderly, by bringing food or a book, or by turning on the
lights or the TV. The carebot industry is expected to burgeon8 in the next decade. Like driverless cars, carebots
will have choices to make. Suppose a carebot is faced with a patient who refuses to take his or her medication.
That might be all right for a few hours, and the patient's autonomy9 is a value we would want to respect. But
there will come a time when help needs to be sought, because the patient's life may be in danger.
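To make this idea concrete, here is a minimal sketch in Python of the kind of principle-weighing the Andersons describe. The principle names, weights, and numbers are invented for illustration, and a real system would learn the weights from example scenarios judged by ethicists rather than have them hard-coded.

def score_action(action, weights):
    # Sum the weighted degree to which an action satisfies each principle.
    return sum(weights[p] * action["effects"].get(p, 0.0) for p in weights)

def choose_action(actions, weights):
    # Pick the action whose overall score is highest.
    return max(actions, key=lambda a: score_action(a, weights))

# Hypothetical principles and weights; a learning system would tune these.
weights = {"respect_autonomy": 1.0, "prevent_harm": 2.0}

# Early on, waiting respects the patient's autonomy and the risk is small.
early = [
    {"name": "wait", "effects": {"respect_autonomy": 1.0, "prevent_harm": -0.1}},
    {"name": "notify_doctor", "effects": {"respect_autonomy": -1.0, "prevent_harm": 0.2}},
]
# Hours later, the risk of harm has grown and outweighs autonomy.
late = [
    {"name": "wait", "effects": {"respect_autonomy": 1.0, "prevent_harm": -1.5}},
    {"name": "notify_doctor", "effects": {"respect_autonomy": -1.0, "prevent_harm": 2.0}},
]

print(choose_action(early, weights)["name"])  # prints "wait"
print(choose_action(late, weights)["name"])   # prints "notify_doctor"

The same weighing produces different answers as the situation changes: the carebot waits while the risk is low, then seeks help once the danger to the patient outweighs the value of respecting his or her refusal.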
7. Intuition (noun) something that someone knows from instinctive feeling and not conscious reasoning
8. Burgeon (verb) to grow rapidly
9. Autonomy (noun) the ability to make one's own decisions
10. Scrutinize (verb) to examine something closely
11. Emission (noun) something given off or released, such as exhaust gas from a vehicle
A fundamental challenge is that if the machine evolves through a learning process we may be unable to predict
how it will behave in the future; we may not even understand how it reaches its decisions. This is an unsettling
possibility, especially if robots are making crucial choices about our lives. A partial solution might be to insist
that if things do go wrong, we have a way to audit the code - a way of scrutinizing10 what's happened. Since it
would be both silly and unsatisfactory to hold the robot responsible for an action (what's the point of punishing
a robot?), a further judgement would have to be made about who was morally and legally culpable for a robot's
bad actions.
However, one big advantage of robots is that they will operate in the same way in similar situations. The
autonomous car won't get drunk, or tired, it won't shout at the kids on the back seat. Around the world, more
than a million people are killed in car accidents each year - most by human error. Reducing those numbers is a
big prize.
Amy Rimmer is excited about the prospect of the driverless car. It's not just the lives saved. The car will reduce
congestion and emissions11 and will be “one of the few things you will be able to buy that will give you time”.
What would it do in our trolley conundrum? Crash into two kids, or veer in front of an oncoming motorbike?
Jaguar Land Rover hasn't yet considered such questions, but Amy is not convinced that matters: “I don't have to
answer that question to pass a driving test, and I'm allowed to drive. So why would we dictate that the car has
to have an answer to these unlikely scenarios before we're allowed to get the benefits from it?”
That's an excellent question. If driverless cars save lives overall, why not allow them on to the road before we
resolve what they should do in very rare circumstances? Ultimately, though, we'd better hope that our machines
can be ethically programmed - because, like it or not, in the future more and more decisions that are currently
taken by humans will be delegated to robots.
Copyright © BBC News at bbc.co.uk/news. Used with permission, all rights reserved.
Unless otherwise noted, this content is licensed under the CC BY-NC-SA 4.0 license.