
Ilma Khan
Roll No: 39
B.A.LLB 2nd Year
Criminal Law

Topic: Criminal Liability of Artificial Intelligence

Submitted to: Dr. Owais Farooqi


Acknowledgement
I would like to express my special gratitude to my teacher, Dr. Owais Farooqi, who gave me the golden opportunity to work on this project on the topic "Criminal Liability of Artificial Intelligence." It also led me to do a great deal of research, through which I came to know many new things, and I am truly thankful.

Secondly, I would also like to thank my parents and friends, who helped me a great deal in finalizing this project within the limited time frame.
Introduction
Artificial intelligence is the capability of a machine to imitate intelligent behavior. It is the simulation of human behavior and cognitive processes on a computer, and hence the study of the nature of the whole space of intelligent minds. Artificial-intelligence research began in the 1940s and early 1950s. Since then, artificial-intelligence entities have become an integral part of modern human life, functioning far more sophisticatedly than other daily tools. Could they become dangerous? In fact, they already are.
What if a man orders a robot to hurt another person for that person's own good? What if the robot is in police service and the commander of a mission orders it to arrest a suspect, and the suspect resists arrest? Or what if the robot is in medical service and is ordered to perform a surgical procedure on a patient, the patient objects, but the doctor insists that the procedure is for the patient's own good and repeats the order to the robot?
Systems that use artificial-intelligence technologies are becoming increasingly autonomous in terms of the complexity of the tasks they can perform, their potential impact on the world, and the diminishing ability of humans to understand, predict and control their functioning. Most people underestimate the real level of automation of these systems, which can learn from their own experience and perform actions beyond the scope of those intended by their creators. This raises a number of ethical and legal difficulties.
Artificial intelligence is a machine process that makes predictions or takes actions based on data, and often attempts to imitate human behavior. However, computers and machines are unable to interpret why specific data is important, and they lack the emotional intelligence to determine which result is most relevant. Most experts agree that artificial intelligence in its current form should be used for tasks that are repetitive and inefficient, as opposed to those that require human understanding. Within the pharmaceutical industry, these tasks are usually associated with big data.
History of artificial intelligence
The term artificial intelligence was coined in 1956, but AI has become more popular today
thanks to increased data volumes, advanced algorithms, and improvements in computing power
and storage.
Early AI research in the 1950s explored topics like problem solving and symbolic methods. In
the 1960s, the US Department of Defense took interest in this type of work and began training
computers to mimic basic human reasoning. For example, the Defense Advanced Research
Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA
produced intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were
household names.
This early work paved the way for the automation and formal reasoning that we see in computers
today, including decision support systems and smart search systems that can be designed to
complement and augment human abilities.
While Hollywood movies and science fiction novels depict AI as human-like robots that take
over the world, the current evolution of AI technologies isn’t that scary – or quite that smart.
Instead, AI has evolved to provide many specific benefits in every industry. Keep reading for
modern examples of artificial intelligence in health care, retail and more.
What is artificial intelligence?
Artificial intelligence (AI) is an area of computer science that emphasizes the creation of
intelligent machines that work and react like humans. Some of the activities computers with
artificial intelligence are designed for include:
• Speech recognition
• Learning
• Planning
• Problem solving

Artificial intelligence is a branch of computer science that aims to create intelligent machines. It
has become an essential part of the technology industry.
Research associated with artificial intelligence is highly technical and specialized. The core
problems of artificial intelligence include programming computers for certain traits such as:
• Knowledge
• Reasoning
• Problem solving
• Perception
• Learning
• Planning
• Ability to manipulate and move objects
Knowledge engineering is a core part of AI research. Machines can often act and react like humans only if they have abundant information about the world. Artificial intelligence must have access to objects, categories, properties and the relations between them to implement knowledge engineering. Instilling common sense, reasoning and problem-solving power in machines is a difficult and tedious task.
Machine learning is also a core part of AI. Learning without any kind of supervision requires the ability to identify patterns in streams of inputs, whereas learning with adequate supervision involves classification and numerical regression.
Classification determines which category an object belongs to, while regression works from a set of numerical input-output examples to discover functions that generate suitable outputs from the respective inputs. The mathematical analysis of machine learning algorithms and their performance is a well-defined branch of theoretical computer science often referred to as computational learning theory.
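The contrast between classification and regression described above can be illustrated with a small sketch. This is purely illustrative: the toy data, the function names and the nearest-neighbour rule are assumptions of mine, not something the text prescribes.

```python
# Toy contrast between the two supervised-learning tasks named above:
# classification assigns a category; regression discovers a numerical function.

def classify(temperature):
    """1-nearest-neighbour classification: pick the label of the closest example."""
    labelled = [(0, "cold"), (15, "mild"), (30, "hot")]  # hypothetical training data
    return min(labelled, key=lambda pair: abs(pair[0] - temperature))[1]

def fit_line(points):
    """Least-squares regression: recover a, b in y = a*x + b from (x, y) examples."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

print(classify(27))                         # → hot
print(fit_line([(0, 1), (1, 3), (2, 5)]))   # → (2.0, 1.0), i.e. y = 2x + 1
```

The classifier outputs a category from a fixed set, while the regression returns a function (here, a line) that can generate an output for any new input, matching the distinction drawn in the paragraph above.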
Machine perception deals with the capability to use sensory inputs to deduce different aspects of the world, while computer vision is the power to analyze visual inputs, with sub-problems such as facial, object and gesture recognition.

Essentials for the Imposition of Criminal Liability


The basic question of criminal law is that of criminal liability: whether a specific entity (human) bears criminal liability for a specific offense committed at a specific point in time and space. In order to impose criminal liability upon a person, two main elements must exist. The first is the external or factual element, criminal conduct (actus reus), while the other is the internal or mental element, knowledge or general intent vis-à-vis the conduct element (mens rea). If either element is missing, no criminal liability can be imposed.
Gabriel Hallevy discusses how, and whether, artificially intelligent entities might be held criminally liable. Criminal laws normally require both an actus reus (an action) and a mens rea (a mental intent), and Hallevy helpfully classifies offences as follows:
1. Those where the actus reus consists of an action, and those where it consists of a failure to act.
2. Those where the mens rea requires knowledge or being informed; those where the mens rea requires only negligence ("a reasonable person would have known"); and strict liability offences, for which no mens rea needs to be demonstrated.
No other criteria or capabilities are required in order to impose criminal liability, whether on humans or on any other kind of entity, including corporations. In order to impose criminal liability on any kind of entity, it must be proven that the above two elements existed.
Three Models of the Criminal Liability of Artificial Intelligence
The way humans cope with breaches of the legal order is through criminal law, operated by the criminal justice system. Accordingly, human societies define criminal offenses and operate social mechanisms to apply them. This is how criminal law works. Originally, it was designed by humans and for humans. However, as technology has developed, criminal offenses are committed not only by humans. A major development on this issue occurred in the seventeenth century. In the twenty-first century, criminal law is required to supply adequate solutions for the commission of criminal offenses through artificially intelligent (AI) systems. Basically, there are three models for coping with this phenomenon within the current definitions of criminal law. These models are:
1. The Perpetration-by-Another Liability Model.
2. The Natural Probable Consequence Liability Model.
3. The Direct Liability Model.

• The Perpetration-by-Another Liability Model (innocent agent)
This first model does not consider the artificial-intelligence entity as possessing any human attributes. The entity is considered an innocent agent. A machine is a machine, and is never human. However, one cannot ignore an artificial-intelligence entity's capabilities. These capabilities are insufficient to deem it the perpetrator of an offense; they resemble the parallel capabilities of a mentally limited person, such as a child, a person who is mentally incompetent, or one who lacks a criminal state of mind. Legally, when an offense is committed through an innocent agent, the person who orchestrated it is criminally liable as a perpetrator-via-another.
If an offence is committed by a mentally deficient person, a child or an animal, the actor is held to be an innocent agent because they lack the mental capacity to form a mens rea. However, if the innocent agent was instructed by another person (for example, if the owner of a dog instructed the dog to attack somebody), then the instructor is held criminally liable.
For example, a programmer designs software for an operating robot. The robot is intentionally placed in a factory, and its software is designed to torch the factory at night when no one is there. The robot commits the arson, but the programmer is deemed the perpetrator. The second person who might be considered the perpetrator-via-another is the user of the AI entity. The user did not program the software, but he uses the artificial-intelligence entity, including its software, for his own benefit.
• The Natural Probable Consequence Liability Model: foreseeable offences committed by an artificial-intelligence entity activated inappropriately
In this model, part of the artificial-intelligence program which was intended for good purposes is
activated inappropriately and performs a criminal action. Example: a Japanese employee of a
motorcycle factory was killed by an artificially intelligent robot working near him. The robot
erroneously identified the employee as a threat to its mission, and calculated that the most
efficient way to eliminate this threat was by pushing him into an adjacent operating machine.
Using its very powerful hydraulic arm, the robot smashed the surprised worker into the machine,
killing him instantly, and then resumed its duties.
The normal legal use of “natural or probable consequence” liability is to prosecute accomplices
to a crime.
This model of criminal liability assumes deep involvement of the programmers or users in the AI entity's daily activities, but without any intention of committing an offense via the artificial-intelligence entity.
Originally, the natural-probable-consequence liability model was used to impose criminal liability upon accomplices when one of them committed an offense that had not been planned by all of them and was not part of a conspiracy. The established rule prescribed by courts and commentators is that accomplice liability extends to acts of a perpetrator that were a "natural and probable consequence" of a criminal scheme the accomplice encouraged or aided. Natural-probable-consequence liability has been widely accepted in accomplice liability statutes and codifications.
• The Direct Liability Model (artificial-intelligence entity as tantamount to a human offender)
The third model does not assume any dependence on a specific programmer or user. Any entity to which both elements of the specific offense can be attributed is held criminally accountable for that offense. "No other criteria are required in order to impose criminal liability." An entity might possess further capabilities, but, in order to impose criminal liability, the existence of the external element and the internal element required for the specific offense is quite enough.

It is relatively simple to attribute an actus reus to an artificial-intelligence system. If a system takes an action that results in a criminal act, or fails to take an action when there is a duty to act, then the actus reus of an offence has occurred. Assigning a mens rea is much harder, and so it is here that the three levels of mens rea become important. For strict liability offences, where no intent to commit an offence is required, it may indeed be possible to hold AI programs criminally liable. Consider the example of self-driving cars: speeding is a strict liability offence, so if a self-driving car is found to be breaking the speed limit on the road it is on, the law may well assign criminal liability to the AI program that was driving the car at that time.
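The two-element rule and the strict-liability exception discussed above can be sketched as a toy decision rule. This is a minimal illustration of the legal logic only; the function names, speeds and limit are hypothetical, and real legal attribution is of course far more involved.

```python
# Toy sketch of the reasoning above: liability normally requires both actus reus
# and mens rea, but for a strict-liability offence (like speeding) the actus reus
# alone suffices. All names and values here are hypothetical.

def speeding_actus_reus(measured_speed_kmh, speed_limit_kmh):
    """Is the actus reus of speeding made out? (The act alone is checked.)"""
    return measured_speed_kmh > speed_limit_kmh

def liable(actus_reus, mens_rea, strict_liability):
    """General rule: both elements required, unless the offence is strict liability."""
    return actus_reus and (mens_rea or strict_liability)

# A self-driving car logged at 72 km/h in a hypothetical 60 km/h zone:
actus = speeding_actus_reus(72, 60)
print(liable(actus, mens_rea=False, strict_liability=True))   # → True: no intent needed
print(liable(actus, mens_rea=False, strict_liability=False))  # → False: mens rea missing
```

The contrast between the two calls captures why strict-liability offences are the easiest case for the direct liability model: the hard problem of attributing a mental state to the AI simply never arises.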
Challenges of artificial intelligence
Predicting and analysing legal issues and their solutions, however, is not that simple. Criminal law, for instance, is going to face drastic challenges. What if an AI-based driverless car gets into an accident that causes harm to humans or damages property? Whom should the courts hold liable? Can AI be thought to have knowingly or carelessly caused bodily injury to another? Can robots act as witnesses, or as tools for committing various crimes?
Apart from Isaac Asimov's 'three laws of robotics', discussed in his short story 'Runaround', published in 1942, interest in developing laws on smart technologies has emerged across the world only recently. In the U.S., there is much discussion about the regulation of AI. Germany has come up with ethical rules for autonomous vehicles, stipulating that human life should always have priority over property or animal life. China, Japan and Korea are following Germany in developing laws on self-driven cars.
In India, NITI Aayog released a policy paper, 'National Strategy for Artificial Intelligence', in June 2018, which considered the importance of AI in different sectors. The 2019 Budget also proposed to launch a national programme on AI. While these developments are taking place on the technological front, no comprehensive legislation to regulate this growing industry has been formulated in the country to date.
Legal personality of AI
First, we need a legal definition of AI. Also, given the importance of intention in India's criminal law jurisprudence, it is essential to establish the legal personality of AI (meaning AI would have a bundle of rights and obligations) and whether any sort of intention can be attributed to it. On the question of liability, since AI is considered inanimate, a strict liability scheme that holds the producer or manufacturer of the product liable for harm, regardless of fault, might be an approach to consider. Since privacy is a fundamental right, rules to regulate the usage of data possessed by an AI entity should be framed as part of the Personal Data Protection Bill, 2018.
Traffic accidents lead to about 400 deaths a day in India, 90% of which are caused by preventable human
errors. Autonomous vehicles that rely on AI can reduce this significantly, through smart warnings and
preventive and defensive techniques. Patients sometimes die due to non-availability of specialised doctors.
AI can reduce the distance between patients and doctors. But as futurist Gray Scott says, “The real question
is, when will we draft an artificial intelligence bill of rights? What will that consist of? And who will get to
decide that?”

Conclusion
The rapid development of artificial-intelligence technology requires current legal solutions in order to protect society from the possible dangers inherent in technologies not subject to the law, especially criminal law. Criminal law has a very important social function: to preserve social order for the benefit and welfare of society. Threats to that social order may be posed by humans or by corporations. Traditionally, humans have been subject to criminal law, except when otherwise decided by international consensus. Thus, minors and mentally ill persons are not subject to criminal law in most legal systems around the world. Although corporations in their modern form have existed since the fourteenth century, it took hundreds of years to subordinate corporations to the law, and especially to criminal law.
Thus, there is no substantive legal difference between the idea of criminal liability imposed on corporations and on AI entities. It would be outrageous not to subordinate AI entities to human laws, as corporations have been. The models of criminal liability described above exist as general paths to impose punishment.
