Criminal Liability of Robots
Technological advancement across the globe is causing human numbness, as we have started
operating on auto mode when using gadgets, equipment or robotic machinery. A 'robot' is not
to be understood as a mere material manifestation; rather, it is the ability of a machine to
interact with its immediate environment. In a plethora of sci-fi movies, robots are seen armed
with guns and automated weapons, ruthlessly killing humans to the point of making them
extinct on this planet. But do we ever ponder what will happen if this fiction turns into
reality, when robots engage in misconduct and criminal activities? Recall the incident
narrated in the novel 2001: A Space Odyssey, where the AI, HAL 9000, murdered crew
members on a spaceship because of its inability to resolve a conflict between different
commands. This situation leads us to an engaging debate: can robots be made criminally
liable? If yes, under what conditions can they be held responsible for misconduct? Can they
be punished like humans, or should we make the owners and the manufacturers liable? Are
morality and ethical values embedded in the algorithms of these machines? Will it be a new
concept to label morally corrupt robots as 'Criminal Robots'? This paper attempts to explain
the concept behind the idea of making robots criminally liable and how their very existence is
entwined with human life. The paper will also focus on the emotional harm suffered by the
victims and, lastly, will discuss the counter-arguments put forward against the criminal
liability of robots.
Introduction
The tech-race in this mad world can cause havoc, and it may require a technological halt or
manufacturing minimalism. Who needs a human when Siri and Alexa can talk to you and your
Google Assistant obeys your every verbal command? But what if they misbehave or sound
indecent: to whom should we turn? Artificial intelligence is still in its infancy, and we are
already dealing with legal issues pertaining to self-driving cars, robotic armies, automated
weapons, security robots and the like. For instance, Tay, a chatbot, made racist remarks on
Twitter, and in 2016 we heard of the death of a person in an accident caused by a self-driving
car.
So, if we consider the above-mentioned cases, who do we think can be held responsible? On
shortlisting, we find certain categories: the AI machines themselves; the persons who
manufactured these machines, who can be held liable for defects and malfunctions; and the
software programmers or users of these robots and machines, whether they use the machines
directly or operate them through a subordinate or an acquaintance. An incident was reported
at a motorcycle factory in Japan in 1981, where a robot pushed an employee into an adjacent
operating machine, mistakenly identifying him as a threat to its task. The employee died
instantly. This example comes neither from the movies nor from the novels but from a real
incident. Who was liable? AI follows cognitive processes, and that is the reason AI-driven
machines are termed intelligent beings. Today's technologies breach and contradict the three
laws of robotics stated by Isaac Asimov in 1950, viz., a robot must not injure any human; a
robot must obey the commands of a human, except where such obedience would conflict with
the first law; and, lastly, a robot must protect its own existence unless doing so contravenes
the other two laws. But perhaps these laws were meant for robots in general and not for
robots with artificial intelligence.
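Read as a rule system, the three laws carry a strict precedence structure, which a short,
purely illustrative sketch can make explicit. The predicate functions below are hypothetical
placeholders; no deployed system can reliably evaluate notions such as 'harm to a human',
which is itself part of the difficulty this paper discusses:

```python
# Illustrative sketch only: Asimov's three laws as an ordered series of vetoes,
# where each law yields to the ones checked before it. The three predicates are
# invented placeholders, not evaluable properties of real machines.

def permitted(action, harms_human, disobeys_order, endangers_self):
    """Check an action against the three laws in strict priority order."""
    if harms_human(action):       # First Law: never injure a human being
        return False
    if disobeys_order(action):    # Second Law: obey humans, save where that breaks the First Law
        return False
    if endangers_self(action):    # Third Law: self-preservation, save where it breaks Laws 1-2
        return False
    return True

# Example: an order to proceed that would harm a human is vetoed by the First Law.
print(permitted("proceed",
                harms_human=lambda a: a == "proceed",
                disobeys_order=lambda a: False,
                endangers_self=lambda a: False))  # False
```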
The essential purpose of making robots liable is to identify the culpable individual hiding
under the garb of the manufacturer, supplier or trainer. The opacity surrounding these
machines requires proper cooperation between regulatory and enforcement agencies on one
side and the manufacturers of these products and machines on the other. This will help in
understanding the real reason behind, and the responsibility for, an overt act or a
malfunction. There may be arguments that deciding the criminal liability of a machine is
redundant, but there are still a few reasons to justify the exercise. For instance, we tend to
look only at liability for 'acts', where a malfunction, indecision or defect has caused harm to
an individual. What we are not considering is the 'omission' part, where the manufacturer
lapsed in adding a moral component or in training the robot to act in a situation of crisis.
Sometimes, the manufacturers, or anyone dealing with the functioning of robots, may not be
directly associated with the wrongdoing of the machine at a particular point of time, but
because they are causally connected, they can be held vicariously responsible. They are
responsible for providing the algorithms and would thus be liable for any combination the
robot chooses to use. But is that fair? A manufacturer should not be made responsible merely
on the basis of having designed learning algorithms; a robot may choose from an indefinite
number of computations and combinations in its independent decision-making, so how can
manufacturers be made liable for each combination?
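A minimal sketch may make this point concrete. Assuming a toy perceptron (all data and
names below are invented for illustration), the manufacturer ships only the generic learning
rule; the weights that actually drive a runtime decision emerge from whatever data the
deployed robot later encounters, which the manufacturer never saw:

```python
# Toy sketch of the argument above: identical shipped code, different field
# experience, opposite decisions. Everything here is invented for illustration.

def train(examples, epochs=10, lr=0.1):
    """The code the manufacturer actually writes: a generic learning rule."""
    w = [0.0] * len(examples[0][0])
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            w = [wi + lr * (label - pred) * xi for wi, xi in zip(w, x)]
    return w

def decide(w, x):
    """The robot's runtime choice, driven by weights no human selected."""
    return "act" if sum(wi * xi for wi, xi in zip(w, x)) > 0 else "refrain"

w1 = train([([1.0, 0.0], 1), ([0.0, 1.0], 0)])   # one robot's history
w2 = train([([1.0, 0.0], 0), ([0.0, 1.0], 1)])   # another robot's history
print(decide(w1, [0.0, 1.0]))  # refrain
print(decide(w2, [0.0, 1.0]))  # act
```

The design choice in question is exactly this split: the learning rule is fixed at the factory,
while the decision boundary is not, which is why tracing a single harmful 'combination' back
to the manufacturer is so contested.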
Counter Arguments
Though we justify our stance of making robots criminally liable, there are still
counter-arguments that keep the question unresolved. The most common argument is that a
machine cannot form any mens rea, and without a guilty mind no act can attract criminal
blame. Machines cannot be agents of any moral or immoral action in the absence of an
intention. The next argument relates to morality: since when have we started merging law
and morality, and how will machines identify whether an act is moral or not? Will it be
justified to blame an actor who does not realize that the act was wrongful? It is contended
that robots do not have a conscience to distinguish between good and bad. However hard one
may programme a robot, a certain percentage of expected error will remain. Comparing the
liability of robots with the liability of corporations at every turn is not a wise decision. The
norms explaining the liability of corporations are not of recent origin; the jurisprudence, law
and procedural intricacies are well settled, whereas the liability of robots is still an ongoing
concern. Also, can the viruses, worms and bugs injected into a system be used as a defence of
insanity, on the ground that the machine could not use its mind because of an external
impact? Will the constitutions of the world consider AI entities a part of 'we the people'?
The standard of due attention and care can be followed by humans, not by machines. Even if
we decide to hold a robot liable, which court will have jurisdiction to try the case? Adding
one more concern on the horizon is the very question whether robots can be arrested. The
question of liability, as we are discussing it, is not limited in applicability to self-driving
cars; it also extends to situations where unmanned automated drones fire missiles at wrong
targets due to malfunction.
Another critical issue pertains to identifying the individual on whom culpability rests. When
we say that the manufacturer and the trainers are responsible, it is not one person but
numerous hands that make, maintain or repair the hardware and the software. When we refer
to the forensic principle of legal neutrality, which means that an AI system's criminal
behavior must be evaluated and investigated in the same way as a human's, one must note
that any external data sent through a 'back door' into an AI system makes it difficult for
investigators to find the real person behind it. Another important aspect is that a decision
made by a robot is not an instant response; every decision is the outcome of a long causation
chain involving various steps.
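That causation chain can be pictured as a composition of stages, each typically maintained by
a different party. The sketch below is hypothetical; every stage name and threshold is
invented for illustration:

```python
# Hypothetical sketch of a robotic causation chain: the final act is the
# composition of stages usually maintained by different hands, so culpability
# cannot be read off the last step alone.

def sense(raw):                        # sensor firmware: hardware vendor
    return {"obstacle": raw > 0.5}

def perceive(reading):                 # perception model: data supplier / trainer
    return "threat" if reading["obstacle"] else "clear"

def plan(label):                       # planning logic: software manufacturer
    return "brake" if label == "threat" else "proceed"

def actuate(command):                  # deployment settings: operator / user
    return f"executing: {command}"

def decision(raw):
    """No single link acts alone; the act emerges from the whole chain."""
    return actuate(plan(perceive(sense(raw))))

print(decision(0.9))  # executing: brake
print(decision(0.1))  # executing: proceed
```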