
Criminal Liability of AI Entities

Technological advancement across the globe is causing a certain human numbness: we have started operating on auto mode when using gadgets, equipment or robotic machinery. ‘Robot’ here is to be understood not as a material manifestation but as the ability of a machine to interact with its immediate environment. In a plethora of sci-fi movies, robots are seen armed with guns and automated weapons, ruthlessly killing humans to the point of driving them extinct on this planet. But do we ever ponder what would happen if this fiction turned into reality and robots engaged in misconduct and criminal activity? Recall the incident narrated in the novel 2001: A Space Odyssey, where the AI killed the crew members of a spaceship because it could not resolve a conflict between different commands. This situation leads us to an engaging debate: can robots be held criminally liable? If so, under what conditions can they be held responsible for misconduct? Can they be punished like humans, or should we make the owners and manufacturers liable? Are morality and ethical values embedded in the algorithms of these machines? Would it be a new concept to label morally corrupt robots as ‘criminal robots’? This paper attempts to explain the concept behind making robots criminally liable, an idea entwined with their coexistence with humans. The paper also focuses on the emotional harm suffered by victims and, lastly, discusses the counter-arguments put forward against the criminal liability of robots.

Introduction
The tech race in today’s world can cause havoc, and it may call for a technological halt or for manufacturing minimalism. Who needs a human when Siri and Alexa can talk to you and Google Assistant obeys your every verbal command? But if they misbehave or sound indecent, whom should we turn to? Artificial intelligence is still in its infancy, yet we are already dealing with legal issues around self-driving cars, robotic armies, automated weapons, security robots and the like. For instance, Tay, a chatbot, made racist remarks on Twitter, and in 2016 we heard of a person’s death in an accident caused by a self-driving car. If we consider these cases, who can be held responsible? On shortlisting, we find certain categories: the AI machines themselves; the persons who manufactured these machines, who can be held liable for defects and malfunctions; the software programmers; and the users of these robots or machines, operating them directly or through a subordinate or an acquaintance. An incident was reported at a motorcycle factory in Japan in 1981, where a robot pushed an employee into an adjacent operating machine after mistakenly identifying him as a threat to its mission; the employee died instantly. This example comes from neither the movies nor the novels but from a real incident. Who was liable? AI follows cognitive processes, and that is why AI-driven machines are termed intelligent beings. Today’s world breaches and contradicts the three laws of robotics stated by Isaac Asimov in 1950: a robot must not injure a human; a robot must obey human commands, except where doing so conflicts with the first law; and a robot must protect its own existence unless doing so contravenes the other two laws. But perhaps these laws were meant for robots in general and not for robots with artificial intelligence.

The “Moral Algorithms”


From the various reported incidents of AI machines malfunctioning, one can easily make out that the general principles of criminal law applied to human criminals and delinquents cannot simply be transplanted into the robotic ‘Star Wars’ world. Hence, certain conditions are identified that can help us examine the possibility of making robots liable. First, the robot’s algorithm must be advanced enough to make moral decisions; this very condition separates robots from mere mechanical tools, as only the former can possess the capability of making distinct moral decisions. The condition may sound comical, but consider a situation where a self-driving car can either crash into an old man on the left or a schoolboy on the right: which choice is moral? Two models are suggested. One is the rule-based model, where all moral commands and instructions are pre-fed into the system and the robot takes no decision beyond what is configured. The other is the utility-maximization model, based on reinforcement learning, where the machine learns through trial and error; robots of this kind act only after analyzing the present moment. The latter model may prove more relevant for building ‘robots with ethics’ (a minimal sketch of the two models follows this paragraph). Second, the robot must not only make moral decisions but also be well equipped to communicate them to humans; there is no point in a machine taking decisions that humans cannot decipher. This condition helps humans identify the pattern on which a bot operates, since its decisions will reflect their moral components, and it lets us understand why the bot acted in a particular way and in what situation. In case of a malfunction, we will find a ‘reason’ that helps us identify the cause of the incident. Third, and most important, is the activity of the robot in the absence of human supervision: its behavior when left unguided, unsupervised, free and unrestrained. This is the hardest test to pass, and the robot’s behavior and actions at this stage are the most crucial factor in fixing liability. These conditions will not only help us answer the liability questions but also trigger a self-policing regime, making manufacturers conscious of their products and pushing them to be more careful about defects and malfunctions. They would demand more from manufacturers in identifying defects, understanding the economics of potential harm and upgrading procedural safeguards.
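To make the contrast concrete, the following is a minimal Python sketch of the two decision models. The class names, rules and utility numbers are hypothetical illustrations of the idea, not an implementation of any real system.

```python
# A minimal sketch of the two moral-decision models described above.
# All names, rules and numbers here are hypothetical.

class RuleBasedAgent:
    """Rule-based model: moral commands are pre-fed, and the agent
    never decides beyond what is configured."""

    def __init__(self, rules):
        self.rules = rules  # pre-fed situation -> action table

    def decide(self, situation):
        # Outside the configured rules, fall back to a safe default.
        return self.rules.get(situation, "stop_and_request_human")


class UtilityMaximizingAgent:
    """Utility-maximization model: the agent scores each available
    action (in reinforcement learning these scores are learned by
    trial and error) and picks the highest-valued one."""

    def __init__(self, utility_estimates):
        self.utility = utility_estimates  # learned action -> value map

    def decide(self, actions):
        return max(actions, key=lambda a: self.utility.get(a, 0.0))


rule_agent = RuleBasedAgent({"pedestrian_ahead": "brake"})
print(rule_agent.decide("pedestrian_ahead"))   # -> brake
print(rule_agent.decide("novel_situation"))    # -> stop_and_request_human

learning_agent = UtilityMaximizingAgent({"brake": 0.9, "swerve_left": 0.4})
print(learning_agent.decide(["brake", "swerve_left"]))  # -> brake
```

The rule-based agent is predictable but helpless outside its table, while the utility-maximizing agent generalizes but inherits whatever its trial-and-error estimates happen to be, which is precisely the trade-off the two models describe.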

Robots as ‘Legal Persons’


Once the conditions stated above are met, the next obstacle to making a robot liable is its ‘legal personality’: only a natural person or a legal person can be held liable. Robots are not natural persons, but whether they are legal persons in the eyes of the law is an important question. Most countries have no laws on artificial intelligence, and their laws pertaining to technology are not up to date. In the absence of law and norms, it is difficult to debate or conceptualize the idea of robotic criminality. Conflict will arise at the level of definition: what kinds of machines would be tagged as legal persons, and what characteristics would form the basic qualification for attaining legal personhood? Whatever the global community agrees upon, automated machines with the ability to make independent decisions away from human scrutiny may fall within its ambit. Another conflict zone is determining actus reus and mens rea, the core essentials for considering an act a crime. Actus reus is straightforward, but how does one prove mens rea? Here we may defend ourselves by pointing to strict-liability offences or to situations where penalties are levied on corporations. The fault elements may be intention, negligence, wilfulness, recklessness and so on, and the liability may lie not with the robot alone but also with the manufacturer, programmer and others. Would it be fair to label these machines ‘criminal’? In my opinion the answer is no: they are nothing but defective tools, and creating unnecessary terminology would undermine the very objective of identifying liability. Another issue concerns constant updates from cloud servers, which automatically modify the system. What if it is raised as a defence that the conditions were fulfilled only under the previous operating system? With every update, the same set of verifications would have to be repeated to keep the certification of legal personhood valid (a minimal sketch of this idea follows this paragraph). The deliberations we undertake today may not suit forthcoming technology, but it is essential that jurisprudence and basic principles lay a strong foundation for dealing with unforeseen developments and incidents.
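The re-certification idea can be sketched in a few lines of Python: suppose legal personhood were certified against one exact software version, so that any automatic cloud update invalidates the certificate until the verifications are re-run. The class and field names are hypothetical.

```python
# A hedged sketch of version-bound personhood certification.
# Names are illustrative; no such legal mechanism currently exists.

class PersonhoodCertificate:
    def __init__(self, certified_version: str):
        self.certified_version = certified_version

    def is_valid_for(self, running_version: str) -> bool:
        # The certificate covers only the version that was verified.
        return running_version == self.certified_version


cert = PersonhoodCertificate(certified_version="os-1.4.2")
print(cert.is_valid_for("os-1.4.2"))  # True: still the certified system
print(cert.is_valid_for("os-1.5.0"))  # False: updated, must re-verify
```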

Criminal Liability of Robots


Three common models are identified for an Artificial Intelligence (AI) entity’s criminal liability: the Perpetration-via-Another liability model (the AI entity possesses no human attribute and is an innocent agent), the Natural-Probable-Consequence liability model (deep involvement of programmers and users in the AI’s activity) and the Direct liability model (the focus is on the AI entity itself). It is suggested that the three models be combined to obtain a panoramic view of criminal liability (a sketch of such a combination follows this paragraph). Another point to consider is that it is not just the wrongful behavior that must be weighed when fixing liability but also the harm suffered by the victim at the hands of a non-human entity; the damage can be physical, psychological or even financial. The anger and frustration of being wronged must be weighed as it would be for any other offence listed under traditional penal laws. We often observe people’s possessiveness about, or obsession with, the material things they own; a defect in or damage to a device they use regularly can result in anxiety and panic. Imagine the psychological impact a machine may have on the human mind, and whether such emotional setbacks are reason enough to count the victims too when fixing liability.
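As a minimal sketch of combining the three models, one could triage an incident with a few predicates. The predicate names and the example facts are hypothetical; only the three models themselves come from the literature the text refers to.

```python
# A hedged sketch: which of the three liability models plausibly
# apply to a given incident. Predicate names are illustrative.

def applicable_models(ai_is_innocent_agent: bool,
                      programmer_or_user_involved: bool,
                      ai_acted_independently: bool) -> list[str]:
    models = []
    if ai_is_innocent_agent:
        # The AI was merely a tool in human hands.
        models.append("Perpetration-via-Another")
    if programmer_or_user_involved:
        # Harm was a natural, probable consequence of their involvement.
        models.append("Natural-Probable-Consequence")
    if ai_acted_independently:
        # The focus shifts to the AI entity itself.
        models.append("Direct Liability")
    return models


# Example: a partly supervised system that deviated from its programming.
print(applicable_models(False, True, True))
# -> ['Natural-Probable-Consequence', 'Direct Liability']
```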

The essential purpose of making robots liable is to identify the culpable individual hiding behind the garb of manufacturer, supplier or trainer. The limited transparency of these machines requires proper cooperation between regulatory and enforcement agencies on one side and the manufacturers of these products and machines on the other; such cooperation helps in understanding the real reason for, and the responsibility behind, an overt act or a malfunction. There may be arguments that deciding the criminal liability of a machine is redundant, but there are still reasons to justify the exercise. For instance, we look only at liability for ‘acts’, where malfunction, indecision, defect and the like have caused harm to an individual; what we do not consider is the ‘omission’ side, where the manufacturer lapsed in adding a moral component or in training the robot to act in a situation of crisis. Sometimes the manufacturers, or anyone dealing with the functioning of robots, may not be directly associated with the machine’s wrongdoing at a particular point in time, but because they are causally connected they can be held vicariously responsible: they provided the algorithms and would thus be liable for any combination the robot chooses to use. But is that fair? A manufacturer should not be made responsible merely for designing learning algorithms; a robot may choose among an indefinite number of computations and combinations in its independent decision-making, so how can manufacturers be made liable for each combination?

Can ‘Robots’ be punished?


Before discussing this component, two queries must be considered: first, will punishments have any impact on an AI entity? Second, what is the significance of imposing such punishments on AI entities? If we assume and confirm that robots can be made criminally liable, the remaining question, and the one demanding out-of-the-box thinking, is what punishments can be imposed. Some of the interesting punishments suggested in this scenario include a death sentence (physically dismantling and destroying the bot), a hospital sanction (re-writing the robot’s moral algorithm), a prison sentence (prohibiting use of that robot), a fine (payment out of insurance funds) and correctional service (re-training to teach the correct course of action). Compensation can be awarded from an insurance scheme or a state fund for immediate relief, as done in Europe, where members introduced ‘a mandatory insurance scheme and a fund’ for victims. Sanctions like destroying the robot are said to give psychological relief to the victims of technology. Before any decision is taken to impose a sanction, certain points must be consciously considered: first, where human involvement is proportionally higher than the level of automation, there is certainly a need to identify the liability of the human user; second, where there is full automation and no human intervention, the criminal liability may not lie with any human at all.
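The suggested analogues can also be restated as a simple mapping. This is only the text above rendered as data; none of these sanctions exists in any current legal system.

```python
# The punishment analogues suggested above, restated as data.
ROBOT_SANCTIONS = {
    "death sentence": "physically dismantle and destroy the robot",
    "hospital sanction": "re-write the robot's moral algorithm",
    "prison sentence": "prohibit further use of that robot",
    "fine": "pay the victim out of insurance funds",
    "correctional service": "re-train the robot in the correct course of action",
}

for human_sanction, robot_analogue in ROBOT_SANCTIONS.items():
    print(f"{human_sanction} -> {robot_analogue}")
```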

Counter Arguments
Though we have been justifying the stance of making robots criminally liable, there are counter-arguments that keep the query unresolved. The most common argument is that a machine cannot form any mens rea, and without a guilty mind an act cannot be condemned as a crime; machines cannot be agents of any moral or immoral action in the absence of an ability to form intention. The next argument concerns morality: since when have we begun merging law and morality, and how will machines identify whether an act is moral or not? For an actor who does not realize that the act was wrongful, is it justified to assign blame? It is contended that robots do not have a conscience with which to distinguish between good and bad. However hard one may program a robot, a certain margin of expected error will remain. Constantly comparing the liability of robots with the liability of corporations is also not wise: the norms explaining corporate liability are not of the recent past, and their jurisprudence, law and procedural intricacies are well settled, whereas deciding the liability of robots is still an ongoing concern. Can viruses, worms and bugs injected into a system be used as a defence of insanity, since the machine could not use its mind because of an external influence? Will the constitutions of the world consider AI entities a part of ‘we the people’? The standard of due attention and care can be followed by humans, not by machines. Even if we decide to hold a robot liable, which court will have the jurisdiction to try the case? One more concern on the horizon is the very question of whether robots can be arrested. The question of liability under discussion is not limited in applicability to self-driving cars; it extends to situations where unmanned automated drones fire missiles at the wrong targets due to malfunction. Another critical issue pertains to identifying the individual on whom culpability should rest: when we say that the manufacturer and trainers are responsible, that is not one person but the numerous hands that make, maintain or repair the hardware and the software. When we refer to the forensic principle of legal neutrality, which means that an AI system’s criminal behavior must be evaluated and investigated like a human’s, one must note that any external data sent through a ‘back door’ into an AI system makes it difficult for investigators to find the real person behind it. Finally, no decision made by a robot is an instant response; every decision is the outcome of a long causal chain involving various steps.

Conclusion and Suggestions


It is very difficult to conclude whether robots can or cannot be made criminally liable. The one point that draws consensus is that some liability must attach to automated and AI-driven machines, since the criminal liability of robots encompasses two concepts: the liability of the machine itself and the liability of the makers of the machine. After analyzing the arguments and facts discussed above, I would like to put forward the following suggestions. First, from a policy point of view, one must consider the applicability of criminal law in the first instance; this can be done by weighing the safety measures taken by those who deploy such systems, setting up a protocol or standard for the use of AI-driven machines and condemning unacceptable behavior. Second, AI-driven machines operate across a diversified range of settings and applications, with different risk levels and benefits; a similar criminal sanction may not be a justified response to every act or omission. Not every act committed by a robot needs criminal enforcement or a penal sanction; regulatory controls and sanctions could instead be resorted to, focusing on limiting recurrence rather than on blame. Third, certain parameters and benchmarks are required for assessing criminal liability, for example the level of risk and potential harm, the capacity for automation and the proportion of human oversight. Fourth, where the operator of an AI robot is involved at every stage, one must, on a system failure or mishap, consciously assess and distinguish between intentional criminal harm and unintentional criminal harm. It is convenient to make the manufacturer or user responsible for direct and intentional harm caused by robots, but it is difficult to assess culpability when the human user took due care and caution. Also, we must not simply leave the burden of checking safety on system manufacturers but must establish a safety-assurance system: protection by design coupled with third-party scrutiny. The fifth suggestion is a reference to Singapore’s Model AI Governance Framework, which provides three approaches: human-in-the-loop, where a human makes the final decision; human-over-the-loop, where the human is not directly engaged but plays a supervisory role; and human-out-of-the-loop, where, as the name suggests, the AI decides and executes decisions without human interference. This categorization can help us determine the level of human involvement and its effect on criminal law (a minimal sketch follows this paragraph). Though one may not yet be clear in deciding the criminal liability of AI robots, a debate has surely been triggered to fix the ‘Responsibility’ of these machines.
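As a closing sketch, the three oversight levels can be encoded as data. The levels come from Singapore’s Model AI Governance Framework as described above; the mapping from oversight level to where liability emphasis falls is this paper’s reading, not part of the Framework itself.

```python
# A minimal sketch of the three oversight levels and the liability
# emphasis this paper associates with each. Wording is illustrative.

from enum import Enum


class Oversight(Enum):
    HUMAN_IN_THE_LOOP = "human makes the final decision"
    HUMAN_OVER_THE_LOOP = "human supervises and can intervene"
    HUMAN_OUT_OF_THE_LOOP = "AI decides and executes on its own"


def liability_emphasis(level: Oversight) -> str:
    if level is Oversight.HUMAN_IN_THE_LOOP:
        return "primarily the human operator"
    if level is Oversight.HUMAN_OVER_THE_LOOP:
        return "shared between supervisor and manufacturer/programmer"
    return "system side: manufacturer, programmer, perhaps the AI entity"


for level in Oversight:
    print(f"{level.name}: {level.value} -> {liability_emphasis(level)}")
```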
