The ethical dilemmas of robotics
If the idea of robot ethics
sounds like something out
of science fiction, think
again, writes Dylan Evans.
Scientists are already beginning
to think seriously about the
new ethical problems posed by
current developments in
robotics.

This week, experts in South Korea said they were drawing up an ethical code to prevent humans abusing robots, and vice versa. And a group of leading roboticists called the European Robotics Network (Euron) has even started lobbying governments for legislation.
At the top of their list of concerns is safety. Robots were once
confined to specialist applications in industry and the military,
where users received extensive training on their use, but they
are increasingly being used by ordinary people.
Robot vacuum cleaners and lawn mowers are already in many
homes, and robotic toys are increasingly popular with children.
As these robots become more intelligent, it will become harder
to decide who is responsible if they injure someone. Is the
designer to blame, or the user, or the robot itself?
Decisions
Software robots - basically, just complicated computer
programmes - already make important financial decisions. Whose
fault is it if they make a bad investment?
Isaac Asimov was already
thinking about these problems
back in the 1940s, when he
developed his famous "three
laws of robotics".
He argued that intelligent robots
should all be programmed to
obey the following three laws:
• A robot may not injure a human being, or, through inaction, allow a human being to come to harm
• A robot must obey the orders given it by human beings except where such orders would conflict with the First Law
• A robot must protect its own existence as long as such protection does not conflict with the First or Second Law

Robots have become a lot more intelligent over the decades
These three laws might seem like a good way to keep robots
from harming people. But to a roboticist they pose more
problems than they solve. In fact, programming a real robot to
follow the three laws would itself be very difficult.
For a start, the robot would need to be able to tell humans apart
from similar-looking things such as chimpanzees, statues and
humanoid robots.
This may be easy for us humans, but it is a very hard problem
for robots, as anyone working in machine vision will tell you.
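To see how much is hidden inside those three short sentences, it can help to imagine the First Law written as code. The sketch below is purely illustrative and is not taken from Asimov or from Euron's proposals: the rule itself reduces to a one-line check, while the hypothetical helper functions is_human() and predicts_harm() stand in for exactly the unsolved machine-vision and prediction problems described above.

```python
# Purely illustrative sketch: the logic of Asimov's First Law is trivial to
# write down; the perception it depends on is not.

def is_human(obj) -> bool:
    # Hypothetical placeholder: deciding whether something is a human rather
    # than a chimpanzee, a statue or a humanoid robot is an open
    # machine-vision problem.
    return obj.get("kind") == "human"

def predicts_harm(action, obj) -> bool:
    # Hypothetical placeholder: predicting whether an action (or inaction)
    # would let a human come to harm is at least as hard as recognising one.
    return action.get("risky", False)

def first_law_permits(action, nearby_objects) -> bool:
    """Allow an action only if it would harm no human (Asimov's First Law)."""
    return not any(is_human(o) and predicts_harm(action, o) for o in nearby_objects)

# The one-line rule is the easy part; the hard problems hide in the helpers.
print(first_law_permits({"risky": True}, [{"kind": "statue"}]))  # True
print(first_law_permits({"risky": True}, [{"kind": "human"}]))   # False
```

Everything ethically interesting in the sketch is buried in the two placeholder functions, which is precisely the roboticists' objection.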
Robot 'rights'
Similar problems arise with rule two, as the robot would have to
be capable of telling an order apart from a casual request, which
would involve more research in the field of natural language
processing.
Asimov's three laws only
address the problem of making
robots safe, so even if we could
find a way to program robots to
follow them, other problems
could arise if robots became
sentient.
If robots can feel pain, should they be granted certain rights? If robots develop emotions, as some experts think they will, should they be allowed to marry humans? Should they be allowed to own property?

Nasa's Robonaut is designed to work on Mars
These questions might sound far-fetched, but debates over
animal rights would have seemed equally far-fetched to many
people just a few decades ago. Now, however, such questions
are part of mainstream public debate.
And the technology is progressing so fast that it is probably wise
to start addressing the issues now.
One area of robotics that raises some difficult ethical questions,
and which is already developing rapidly, is the field of emotional
robotics.
This is the attempt to endow robots with the ability to recognise human expressions of emotion, and to engage in behaviour that humans readily perceive as emotional. Humanoid heads with expressive features have become alarmingly lifelike.
David Hanson, an American scientist who once worked for
Disney, has developed a novel form of artificial skin that bunches
and wrinkles just like human skin, and the robot heads he covers
in this can smile, frown, and grimace in very human-like ways.
These robots are specifically designed to encourage human
beings to form emotional attachments to them. From a
commercial point of view, this is a perfectly legitimate way of
increasing sales. But the ethics of robot-human interaction are
more murky.
Jaron Lanier, an internet
pioneer, has warned of the
dangers such technology poses
to our sense of our own
humanity. If we see machines as
increasingly human-like, will we
come to see ourselves as more
machine-like?
David Hanson's K bot can mimic human expressions

Lanier talks of the dangers of "widening the moral circle" too much.
If we grant rights to more and more entities besides ourselves,
will we dilute our sense of our own specialness?
This kind of speculation may miss the point, however. More
pressing moral questions are already being raised by the
increasing use of robots in the military.
The US military plans to have a fifth of its combat units fully
automated by the year 2020. Asimov's laws don't apply to
machines which are designed to harm people. When an army can
strike at an enemy with no risk to lives on its own side, it may be
less scrupulous in using force.
If we are to provide intelligent answers to the moral and legal
questions raised by the developments in robotics, lawyers and
ethicists will have to work closely alongside the engineers and
scientists developing the technology. And that, of course, will be
a challenge in itself.