Ethical Safe Lawful A Toolkit For Artificial Intelligence Projects
FOREWORD
Artificial intelligence is starting to come of age. Businesses looking to exploit this technology are
having to confront new practical, legal and ethical challenges.
This toolkit provides a brief overview of the legal issues that arise when rolling out artificial intelligence within your business. It is based partly on our experience of using artificial intelligence within Linklaters. The toolkit focuses on the position in the United Kingdom, but much of its content is equally relevant in other jurisdictions. However, it does not consider autonomous vehicles or robotics, which raise their own regulatory and commercial issues.

The toolkit starts with a short technical primer. This describes some of the recent advances in artificial intelligence and the likely limitations of this technology. It then works through collaboration and "ownership", developing AI and data, liability and regulation, and the safe use of artificial intelligence, before outlining some of the specific concerns for financial services firms, including lessons to be learnt from the rules on algorithmic trading.

Finally, we address the broader ethical challenges raised by this technology and the way in which the government and regulators are addressing these challenges.

We hope this toolkit helps you to engage with this exciting new technology ethically, safely and lawfully.
A summary of the key items you should consider in relation to the legal aspects of AI programmes.
TECHNICAL PRIMER

The history of the development of artificial intelligence has been cyclical: periods of great interest and investment followed by periods of disappointment and criticism (the so-called "AI winters").

We are now in an AI summer. Developments in areas such as language translation, facial recognition and driverless cars have led to interest in the sector from investors, and also from regulators and governments.

Artificial intelligence is unlikely to fulfil all the extravagant predictions about its potential, but the technology has made, and is likely to continue to make, significant advances.
Definitions of artificial intelligence typically fall into four categories:

>> Thinking humanly – Technology that uses thought processes and reasoning in the same way as a human.
>> Thinking rationally – Technology that uses logical processes to create rational thought.
>> Acting humanly – The use of technology that mimics human behaviour, regardless of its internal thought processes.
>> Acting rationally – The use of technology that acts to achieve the best outcome.

"Artificial intelligence is a key part of the fourth industrial revolution. Strong general artificial intelligence is still some way off, but narrow domain-specific tools will be important in a range of sectors from healthcare to finance."

For the purpose of this toolkit, we adopt a less abstract definition and use "artificial intelligence" to refer to the use of technology to carry out a task that would normally need human intelligence. 2

The term is also widely misused. Simple decision trees or expert systems are often labelled "artificial intelligence". While these tools can be extremely useful, they cannot sensibly be described as intelligent.
What is artificial intelligence good at?

Many of the misconceptions about artificial intelligence arise out of anthropomorphism: the attribution of human traits to this technology. In other words, the assumption that an artificially intelligent machine will think and feel as we do.

This is not likely to happen in the short to medium term. There is no current prospect of a general human-like intelligence capable of dealing with multiple different and unconnected problems. Artificially intelligent systems do many wonderful things, but still cannot deal with the wide variety of unconnected intellectual puzzles we all have to grapple with on a daily basis.

What artificial intelligence systems can do is master specific tasks or domains. The capabilities of these systems have improved over time. For example, chess playing computers first appeared in the late 1970s, but it was not until 1997 that IBM's Deep Blue stunned the world by beating Garry Kasparov. More recently, AlphaZero, the game-playing system created by DeepMind, taught itself to play chess to a superhuman standard in hours.

The table below provides a broad overview of developments in AI over the years.

[Chart: "So what can artificial intelligence do?" – AI performance, rated from sub-human to optimal, across tasks including draughts, chess, facial recognition, driving, translation, voice recognition and legal advice, with the approximate dates at which each level was reached (chess, for example, in 1997).]

What types of tasks can artificial intelligence take on?

There are a range of factors that determine whether artificial intelligence is suited to a particular task. They include:

>> Closed or open context – A crucial factor is whether the task exists in a closed or open environment. Games such as chess are easier to tackle because the position of each player is clearly described (by the position of each piece) and rules are easy to define. In contrast, open context tasks, such as imitating a human in natural language conversation 3, are much harder to solve.

>> Known or unknown state – It is also more difficult for artificial intelligence to tackle a task if there are missing facts. At one end of the spectrum is chess, in which the whole state is known, i.e. both players know where all the pieces are and whose turn is next. It is also possible to deal with some "known unknowns", for example by making probabilistic assumptions, but this is more difficult. For example, a machine could play a mean hand of poker by making an assessment of the possible cards held by the other players. However, it is unlikely to be able to deal well with "unknown unknowns", where it is difficult to even know what information is missing.

>> Evaluation of success – Some artificial intelligence tools learn by taking decisions based on an assessment of which is most successful, or by trying multiple courses of action and reinforcing approaches that achieve the most success. For example, a machine playing chess will typically make moves to maximise the chance of winning. This technology is harder to use where there is no easy way to evaluate success. (A short illustrative sketch of this trial-and-reinforce approach follows below.)

>> Data, data, data – Underlying many of these recent developments is the use of data to allow the artificially intelligent system to learn. Getting hold of the right data is key to solving many artificial intelligence challenges.
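By way of illustration of the "evaluation of success" factor above, the short sketch below is our own illustrative example: the action names, success rates and exploration rate are invented and are not taken from this toolkit. It shows a machine trying several courses of action and gradually reinforcing whichever has produced the best outcomes so far.

```python
import random

# Hypothetical "courses of action" and the (unknown to the machine) chance each one succeeds.
actions = {"action_a": 0.2, "action_b": 0.5, "action_c": 0.8}
totals = {name: 0.0 for name in actions}   # accumulated successes per action
counts = {name: 0 for name in actions}     # how often each action has been tried

def choose(epsilon=0.1):
    # Mostly pick the action with the best average outcome so far,
    # but occasionally explore a random one.
    if random.random() < epsilon or all(c == 0 for c in counts.values()):
        return random.choice(list(actions))
    return max(actions, key=lambda a: totals[a] / max(counts[a], 1))

for _ in range(1000):
    action = choose()
    reward = 1.0 if random.random() < actions[action] else 0.0  # did this attempt succeed?
    totals[action] += reward
    counts[action] += 1

# After enough trials the machine settles on the action with the highest success rate.
print({name: counts[name] for name in actions})
```

In a real system the "actions" might be moves in a game or steps in a workflow, and the reward would come from whatever measure of success the business has defined; where no such measure exists, this style of learning is much harder to apply.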
01 COLLABORATION & "OWNERSHIP"

>> Determine what intellectual property rights will arise for each element, as this will determine what rights you can assert against third parties.
>> Agree on who will own the intellectual property rights to reflect your commercial aims and put in place appropriate documentation to achieve that.
>> Include other contractual protections to achieve your commercial aims, e.g. rights to use, exclusivity arrangements and confidentiality obligations.
>> Assess other forms of collaboration, e.g. taking an equity stake or entering into a joint venture.
Artificial intelligence projects will often involve a collaboration between those with expertise in artificial intelligence and an industry partner. In this section, we consider the key issues to address as part of that collaboration. In particular:

>> Contribution – Why do these collaborations take place and what does each person bring to that collaboration?
>> "Ownership" – Discussions about the relationship often focus on who "owns" the technology and the data. This is not always a helpful way to frame the commercial relationship, especially given intellectual property rights do not always allow for "ownership".
>> Contract and beyond contracts – We consider other contractual mechanisms to support each party's commercial aims and other means to structure the collaboration.

What do you bring to the party?

The development of artificial intelligence may require collaboration, typically between those with expertise in artificial intelligence and an industry partner. Why do these collaborations take place and what does each side contribute?

>> The tech company – The technology company will bring expertise in artificial intelligence. This may mean its own technology but also expertise in the use of existing tools in the market. The technology company might also bring faster and more agile ways of working.
>> The industry partner – The industry partner will have sector-specific knowledge of the issue and may hold the data needed for the machine to learn to solve the problem. Perhaps most importantly, they are a customer, both in hypothetical terms by specifying what "good looks like", but also as an actual paying end user of the technology. Credentials from a marquee customer in a particular field will be invaluable for a young and untested technology company.

Nakhoda

At Linklaters, we have developed our own AI and technology platform, Nakhoda, which began as a collaboration with the London-based artificial intelligence company Eigen Technologies Limited and eventually experimented with technologies from companies such as RAVN and Kira.

These collaborations allowed us to fuse our legal expertise with the technical expertise of these companies to create highly customised and flexible artificial intelligence solutions which can considerably enhance the efficiency of many legal processes. Examples include:

>> working with a leading financial institution to experiment with the application of artificial intelligence to the review of non-disclosure agreements
>> applying artificial intelligence to the large scale review of English land registry documents in loan portfolio transactions, delivering substantial improvements in efficiency and accuracy for our clients
Who pays?

The contribution of each party might also be financial. The question is: who pays and how much?

The starting assumption might be that the tech company has unique skills and expertise in artificial intelligence and so should be more richly remunerated for its contribution. This assumption should be challenged, particularly where the tech company is just deploying open source artificial intelligence tools.

In contrast, the industry partner might well make an equally, if not more, valuable contribution by providing access to and use of its own data and sector-specific knowhow, something it has a monopoly over.

This debate has been particularly acute in some sectors, such as healthcare. For example, how should the NHS ensure that it is adequately compensated for the value of any data it provides as part of a collaboration? This is one issue that the data trusts set up in response to the Hall-Pesenti Report are tasked with addressing (see box: How can I get the data?).

Each party's financial contribution might, of course, need to reflect simpler commercial realities. Smaller tech companies might not have the means to support themselves without financial support from their industry partner. In this situation, it may be worth considering other ways to collaborate (see box: Get involved or get left behind).

What does "ownership" mean?

This issue is often one of the major stumbling blocks in any collaboration, sometimes marked by a dogged attachment to "ownership" without properly understanding what it means.

When non-lawyers assess ownership, they usually bring to that discussion their own experience in everyday life of owning things: houses, dogs, bananas and so on. Ownership of a physical thing gives two commercially important rights:

>> an "exploitation right" – to use and exploit the asset as they see fit; and
>> a "monopoly/property right" – to stop others from using and exploiting the asset.

That experience of ownership does not read across to the world of data and other intangible rights in a simple manner, or at all. This often results in a difficult and unhelpful debate. There are three levels of analysis.

Stuart Bedford
Partner, Technology M&A

"Businesses across the globe are looking at whether AI can help them innovate and adapt their business models and bring them a critical competitive advantage. We have already seen this lead to a significant number of acquisitions of AI start-ups by the major tech companies, as established players strive to bring in new technologies that complement and evolve their existing offerings. This M&A activity is not, though, the preserve of the tech giants and we continue to see companies in the financial services, energy, consumer and other sectors undertaking acquisitions and collaborations with start-ups to enable them to access the benefits that AI can bring."
01 COLLABORATION & “OWNERSHIP”
The Emotional The emotional discussions typically revolve Typically, a party seeking ownership wants the
around who will “own it” without a proper analysis exploitation right plus the monopoly/property right.
of what “it” is or what “ownership” means. The claim “I want ownership” is just shorthand for
this position but fails to recognise that ownership
Vague claims to ownership can result in heated is a fluid spectrum of options, not a narrow binary
and unproductive arguments. They can also lead outcome.
to positions that are neither clear nor helpful.
The Legal In contrast, the legal analysis is likely to narrowly to exceptions and limitations. On the other
focus on the individual rights generated by the hand, they do not comprise exploitation rights;
project. Because the components of an artificial ownership of certain intellectual property rights in
collaboration are not physical things, like bananas, technology or data carries with it no guarantee of
legal ownership will mainly be through intellectual your legal right to exploit it as you see fit.
property rights such as copyright or database rights.
Any legal rights in technology and data may also
Intellectual property rights are inherently negative apply in a fragmented and piecemeal way and
rights, i.e. the right to stop other people from may not provide a comprehensive response to
doing certain things in relation to protected works. challenges of ownership.
So, typically, they do provide a monopoly/property
right in respect of protected works, although The box Does anyone own data? provides an
this may be narrower in scope than monopoly/ example of these limitations.
property rights for physical things and subject
The Commercial The most productive discussion is a commercial >> Monopoly/property rights: Do I want to control
analysis of the rights each party wants. By this or prevent others, including my collaboration
we mean: partner, from using the output from the project
or from adapting and modifying it?
>> Exploitation rights: What rights do I want in
respect of the output of the project? Do I just Once there is agreement on the commercial
want the right to use the output? Or do I also position, it should be possible to support it with
want the right to adapt and modify it? Or to appropriate ownership and licensing of intellectual
license it to third parties? property rights, combined with other contractual
rights. This then allows a substantive assessment of
whether it reflects the emotional need for ownership.
Applying this approach, the main elements of an artificial intelligence project, and the rights likely to arise in them, include:

>> The AI algorithm 15 – The core of the project is likely to be the artificial intelligence algorithm. This will normally be provided by the tech company, who will want to retain ownership of it. However, the value of this contribution may sometimes be overstated. The tech company may well be repurposing an existing open source algorithm.
>> The data – The industrial partner may provide data to train and test the artificial intelligence algorithm. The industrial partner will normally want to retain ownership of its own data (but see box: Does anyone own data?) and will need to be alert to any regulatory, contractual or other constraints over the use of that data (see Developing AI & data).
>> Database rights 20 – EU database rights arise in a database (i.e. a collection of data that is arranged in a systematic or methodical way and individually accessible by electronic means) where there has been a substantial investment in obtaining, verifying or presenting the contents. "Investment" refers to resources used to find and collate existing, independently created materials. Investment in creating the data does not count. Database rights are most likely to protect collections of data, but there are significant limitations on the scope of their protection (see box: Does anyone own data?).
>> Patents – A patent grants a national monopoly right in relation to an invention that is new and inventive. While this is a powerful form of protection, obtaining a patent can be expensive and time consuming. In the EU, patents are also not available for a computer program as such 21 or a way of doing business (though the position is different in other jurisdictions). For some collaborations, it may be worth discussing whether any patent applications will be made and, if so, who will make them and in which jurisdiction(s).

"Intellectual property issues are important in any collaboration, but the interaction with artificial intelligence technology is not straightforward. This needs a clear, structured approach that focuses on each party's commercial objectives."

The approach to intellectual property rights can also be supplemented with cruder commercial tools. For example:

>> Confidentiality – The agreement might specifically require the other party to keep certain materials confidential or to only use them for specific purposes.
>> Exclusivity – The parties might also include obligations to provide data on an exclusive basis, to deal with each other on an exclusive basis or not to deal with competitors. These would need to be carefully assessed to ensure they are enforceable.

In some cases, the complexity and potential fragility of a purely contractual relationship may mean that a broader and deeper relationship is more appropriate. See Get involved or get left behind for other models to kick-start innovation within your organisation.
Get involved or get left behind: models for kick-starting innovation in your organisation

Incubation
What? Ideas generated inside the business are developed by an internal entity that enjoys complete autonomy from the rest of the business.
Why? The technology under development is not mature enough to be integrated into your business as a whole; an incubator allows the ideas to be developed on a stand-alone basis, even whilst the entity itself remains part of the corporate group.

Acceleration
What? A group of start-ups are selected to participate in a limited-time programme run by the company and then returned to the outside economy (or are acquired by the company).
Why? You are not ready to invest and wish to explore different options before a potential future investment.

Other forms of collaboration

Why? You wish to access technology without making an investment or exposing yourself to the risk of the start-up failing.
Pros:
>> Limited financial investment or risk
>> Getting to know the products/services
>> Getting to know the team
Cons:
>> Very limited control
>> Assumes a certain level of organisational maturity from the start-up
>> Others may benefit from the same products/services
>> Contractual complexity

Why? You can team up with other investors/corporates to develop the technology. Collaborations between (potential) competitors should be carefully considered under the applicable antitrust laws.
Pros:
>> Share innovation/combine technologies
>> Expansion from established business lines
>> Risk sharing and cost savings
Cons:
>> Complexity of establishment and decision-making (and of unwind in case of failure)
>> Slow and delicate implementation
>> High rate of failure
Does anyone own data?

There are limits on the extent to which intellectual property rights can protect various aspects of an artificial intelligence project. One example is data.

It is easiest to use a hypothetical example. An employer wants to improve its graduate recruitment process. To do this it collects the following information:

>> Application Information: The organisation collects biographical details for each applicant consisting of university attended, degree class and A-Level results. It also collects the scores it awards to each applicant for aptitude and psychometric tests taken as part of the recruitment process.

>> Database rights: Database rights will also only provide limited protection here. It is likely that data on the scores in aptitude tests or performance grading will not be protected, as that data is created by the employer – i.e. the investment is not the right sort of investment (obtaining, verification or presentation of pre-existing data).

This means the employer's desire for "ownership" is not necessarily supported by significant protection under intellectual property law. The employer will need to rely on duties of confidence and other contractual protections to prevent misuse.
The protection for the output of an artificial intelligence algorithm 22 is also potentially limited. The key intellectual property right is likely to be copyright. There is a specific provision in UK law addressing this issue:

"In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken." 23

However, this provision poses as many questions as it answers. In particular:

>> Who is making the arrangements? Is it the person who supplied the artificial intelligence algorithm, the person who trained the algorithm or the person who runs the algorithm? 24
>> Is the "arrangement" substantive enough? For a work to acquire copyright, it seems likely that the human involvement in the arrangements must have some substantive content – i.e. drawing a portrait with creative assistance from an electronic paint program would be substantive; just pressing a button would not. This point has been made under US (though not UK) law by the US Copyright Office: "the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author".
>> Is the output itself a protectable work? Is it sufficiently substantial or creative to be a literary work? This is important because, in the UK at least, computer-generated data is unlikely to be protected by copyright.

"Monkey selfies"

A useful analogy comes from the nature photographer David Slater, who travelled to Indonesia to photograph macaque monkeys. The macaques would not let him get near enough for a close-up, so he set up his camera to allow them to take selfies.

The US Copyright Office ruled the photographs could not be copyrighted as protection does not extend to "photographs and artwork created by animals or by machines without human intervention". This need for human input into the work is also applicable to works created by artificial intelligence.

A separate, and more bizarre, aspect of the dispute is still outstanding. In 2015, the campaign group People for the Ethical Treatment of Animals filed a lawsuit claiming that the monkey, who they called Naruto, owns the copyright. The dispute focused on the standing of animals to seek legal action. However, it seems unlikely the English courts will allow an artificially intelligent entity to own property any time soon.
02 DEVELOPING AI & DATA

Recent developments in artificial intelligence are largely driven by data. This section looks at how you can obtain that data and the constraints on its use. It also looks at the use of regulatory and development sandboxes.

>> Ensure your use of personal data complies with the GDPR.
>> Consider the impact of third parties accessing the data and ensure you have either a data sharing or data processing agreement.
>> Where possible, anonymise the data to avoid these concerns.
>> Consider the use of regulatory or development sandboxes.
>> Conduct a data protection impact assessment where necessary.
The key advances in artificial intelligence over the past few years have been driven by machine
learning which, in turn, is fuelled by data. In many cases, businesses are free to use the data they
hold for whatever purpose they want, including developing artificial intelligence algorithms. However,
the following issues should be considered carefully:
>> Data quality – It is vital you use sufficient high-quality, well formatted data to train and test your artificial intelligence tool.
>> Confidentiality – If the data relates to a third party, it might be confidential or provided under a limited licence.
>> Data protection – Where personal data is used to develop, train or test artificial intelligence algorithms, that processing will need to be fair and lawful and otherwise comply with data protection law.
>> Third-party involvement – If third parties will have access to the data, that may complicate the data protection and confidentiality issues.

These constraints are discussed below. One solution is the use of development "sandboxes" to provide a safe means to conduct development work (see box: Playing safely – Secure development sandboxes). It may also be worth considering use of the regulatory sandboxes provided by the Information Commissioner and the Financial Conduct Authority.

Data quality

It is essential that you use sufficient high-quality and consistently formatted data to train and test your artificial intelligence tool. Poor-quality or inappropriate data raises the following concerns:

>> Quality – It is essential that the data is accurate, complete and properly formatted. An algorithm trained on poor-quality data will only ever deliver poor-quality decisions.
>> Coverage – The data should cover the range of scenarios the system will face in the live environment. If the system has not been trained to recognise and deal with particular scenarios it might act unpredictably.
>> Bias – The data may itself contain decisions that reflect human biases. These biases may then be picked up by the artificial intelligence system. One example is facial recognition systems which have been shown to perform badly on darker-skinned women. This is thought to be because they have been trained on datasets that predominantly contain pictures of white men. 26 (A simple illustrative check for this kind of skew follows this list.)
>> Discriminatory data – The data used to train the artificial intelligence should also be appropriate and not likely to lead to discriminatory outcomes. For example, there would be no basis for using data on an applicant's race or sexual orientation to train an artificial intelligence tool to decide on mortgage applications.
>> Inappropriate data – Even where use of the data might be objectively justified, it may not be socially acceptable or appropriate on public policy grounds. Imagine there was clear evidence that the most successful trainee solicitors were themselves the children of lawyers. A tool used to shortlist employment applicants for interview might make "better" decisions by factoring this in, but it would clearly be wrong to do so. It is important that the training set is sifted appropriately to remove the risk of discriminatory or inappropriate decision-making.
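To make the bias point above concrete: before any training takes place, the historical decisions in the proposed training set can be checked for skewed outcomes across a protected characteristic. The sketch below is purely illustrative – the column names, figures and 80% threshold are our own assumptions rather than anything prescribed by this toolkit or by any legal test – and the protected characteristic is used here only to audit the data, not as an input to the model.

```python
import pandas as pd

# Hypothetical historical shortlisting decisions proposed as training data.
history = pd.DataFrame({
    "sex":         ["M", "F", "M", "F", "F", "M", "F", "M", "F", "M"],
    "shortlisted": [0,    1,   0,   1,   0,   0,   1,   0,   1,   0],
})

# Shortlisting rate for each group. A large gap suggests the human decisions
# already embed a bias that a model trained on them would simply reproduce.
rates = history.groupby("sex")["shortlisted"].mean()
print(rates)

# Illustrative rule of thumb only: flag the dataset if one group's rate is
# less than 80% of the other's, so it can be investigated before training.
if rates.min() < 0.8 * rates.max():
    print("Warning: outcome rates differ markedly between groups - investigate before training.")
```

In the hypothetical shortlisting example below, a check of this kind would have flagged the gap between the 5% shortlisting rate for male candidates and the 15% rate for female candidates before the system was ever deployed.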
The testing of the system is a great success, with 99% accuracy – i.e. it matches the human decisions ninety-nine times out of a hundred. However, when the system is rolled out two problems arise.

Sex discrimination

The system only shortlists 5% of male candidates, compared to 15% of female candidates. 27

On further investigation, this reflects the proportion of male and female candidates shortlisted by the human review process. This may well be sex discrimination. The male applicant may have been treated less favourably simply on the grounds of his sex, which could give rise to a claim under the Equality Act 2010. The artificial intelligence has adopted biases in the underlying data it was trained and tested on.

More fundamentally, why was the sex of the candidate fed into the algorithm in the first place? Given the potentially discriminatory outcomes, it would be better not to have used this data at all.

New types of data

The employer expands the pool of universities from which it will accept applications. This is to attract a wider range of graduates. However, none of the candidates from that wider pool are shortlisted.

In our hypothetical example, this might be because the system does not recognise, or does not allocate any value to, those universities. The artificial intelligence may not react predictably where there are changes in the types of data it is having to process.

Confidentiality

You may also need to consider contractual and equitable duties of confidence. The scope of those duties will vary. In many cases, they should not prevent the internal use of confidential information for testing and development work. However, this depends on the context. For example, if a contract limits your use of that information to a particular purpose, that might prevent use for development purposes. In any event, sharing confidential information with third parties as part of a collaboration may be problematic.

Duties of confidence are particularly relevant when using medical information. This is likely to be subject not only to the so-called 'common law duty of confidence' 28 but also to various guidance and codes. 29 In practice, the use of confidential medical information for the development of artificial intelligence may need a section 251 application to the relevant Confidentiality Advisory Group. 30

Data protection

Where personal data is used to develop, train or test the system, you must ensure that use is fair and lawful under data protection laws, including the GDPR – see Key obligations under the GDPR.

These rules apply even if you are only using that personal data within your own development environment. In particular, you must satisfy a statutory processing condition, which in many cases will be the so-called legitimate interest test. A crucial factor in determining whether that test is satisfied is the safeguards applied to that personal data; the use of a development sandbox may help (see box: Playing safely – Secure development sandboxes).
You may also need to document this evaluation. This will be through either a data protection impact assessment or a legitimate interests assessment – see Impact assessments. If the testing and development is part of an ongoing programme, you may want to conduct a framework assessment, rather than an individual assessment for each project.

Third-party involvement

These issues are likely to be more difficult if a third party is involved: for example, either to provide the technology or in a more substantive role, such as a commercial data-sharing collaboration.

Anonymisation – Pitfall or silver bullet?

>> Combination with other data sources – Finally, the original dataset could be combined with other data sources to identify someone. Determining whether this is the case is a very difficult exercise and there is often not a bright line test to determine when data is identifiable. The UK Information Commissioner recommends a "motivated intruder" test – i.e. considering whether a person who starts without any prior knowledge but wants to identify the persons in the original dataset could identify anyone. As a result, true anonymisation can be very hard. Statements by engineers or business people that the data is "anonymised" should be treated with caution and challenged: "What do you mean by that?". The Netflix example (opposite) demonstrates how difficult full anonymisation can be.

That said, even partial anonymisation will normally be a useful exercise and will be a powerful factor justifying this use of the information. There are also statutory protections under the Data Protection Act 2018 which make re-identification of anonymised personal data a criminal offence in some circumstances. 35 However, anonymisation is not always a silver bullet.

Ed Chan
Partner, Head of AI Working Group

"We have spent a lot of time looking at the use of artificial intelligence at Linklaters to streamline repetitive processes. Artificial intelligence will change the very nature of our work, and we need to mould this technology so that it truly supports what our lawyers do."
Playing safely – Secure development sandboxes

One way to help manage confidentiality and data protection issues is to conduct your testing and development in a secure development sandbox. The sandbox would typically involve:

Clean data sources: The data placed in the sandbox should be reviewed carefully to ensure that use within the sandbox complies with data protection and confidentiality laws, and the use of data is consistent with any licence attaching to that data. The data in the sandbox might be:

>> Fully anonymised data. If the anonymisation process has been carried out correctly, use of this data should not be subject to data protection or confidentiality laws (see Anonymisation – Pitfall or silver bullet?).
>> Pseudonymised data. This is data that has been manipulated so that it is only possible to identify the relevant individual using other information, for example by replacing the individual's name with a key code (a short illustrative sketch follows this box). Pseudonymised data is still personal data but it is generally easier to justify its use.
>> Raw data. This is data that has not been anonymised or redacted.

Technical protections: Those controls will depend on the aim of the sandbox but might include a one-way data gate – i.e. data can flow into the sandbox but cannot generally be taken out of the sandbox. This would help to minimise any privacy intrusion.

Behavioural protections: The technical controls should be backed up with appropriate behavioural controls over those using the sandbox. This might involve additional confidentiality obligations and/or an express prohibition on attempting to identify individuals from pseudonymised data.

Controls on third-party access: If third parties have access to the sandbox, they should normally be subject to appropriate contractual and confidentiality obligations. If the third party acts as a data processor, it must be subject to data processor clauses.

Genomics England: A good example of the use of a sandbox-type environment is the approach of Genomics England, which is in the process of sequencing 100,000 genomes from around 70,000 people. Third parties wanting to access Genomics England's data services must first pass a rigorous ethical review and have their research proposal approved by an Ethics Advisory Committee. In addition, no raw genome data can be taken away. The genome data is always kept within Genomics England's data centres and can only be accessed remotely. In other words, third parties are provided with a reading library and not a lending library.
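As a purely illustrative sketch of the pseudonymisation step described above – the file layout, the 'name' column and the key-handling are our own assumptions rather than anything prescribed by the toolkit – names can be replaced with random key codes, with the lookup table kept outside the sandbox:

```python
import csv
import secrets

def pseudonymise(in_path: str, out_path: str, key_path: str) -> None:
    """Replace the 'name' column with a random key code.

    The pseudonymised file can go into the development sandbox;
    the key file should be stored separately, outside the sandbox."""
    key_map = {}
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            name = row["name"]
            if name not in key_map:
                key_map[name] = secrets.token_hex(8)  # random key code, meaningless on its own
            row["name"] = key_map[name]
            writer.writerow(row)
    with open(key_path, "w", newline="") as keyfile:
        key_writer = csv.writer(keyfile)
        key_writer.writerow(["name", "key_code"])
        key_writer.writerows(key_map.items())
```

For example, pseudonymise("applicants.csv", "sandbox/applicants_pseudo.csv", "secure/key_map.csv") would produce a file suitable for the sandbox and a separate key file to be held securely elsewhere. As noted above, the result is still personal data, but keeping the key outside the sandbox is one of the safeguards that makes its use easier to justify.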
How can I get the data?

Access to data is essential to the development of artificial intelligence tools. Without data it is very difficult to compete, particularly for small and medium sized companies who may not be able to pay for access to data or be able to create their own data at scale.

The problem. There are three principal challenges:

>> Privacy and confidentiality – The sharing of information about identified individuals or companies must respect those persons' privacy and confidentiality rights. This can be a significant barrier, as evidenced by the controversy over the arrangements between The Royal Free Hospital and DeepMind in relation to health records (see box: Digital Health: Apps, GDPR and Confidentiality).
>> Competitive incentives – There may be no commercial incentive on those holding data to provide others with access. Companies with very large datasets may well want to keep that data to themselves in order to improve their own products and services. There may be little benefit in providing potential competitors with that data.
>> Market fragmentation – In some markets there is significant fragmentation, with data being held by multiple different entities each of which may take a different approach to providing access to third parties and store the data in different formats (such as the NHS).

The solution. This problem is recognised and is being addressed in various ways:

>> Data trusts – The Hall-Pesenti Report suggests the creation of "data trusts". These would not create a legal trust as such, and are instead a framework for data sharing. This includes: (i) template terms and conditions; (ii) helping the parties define the purposes for which the data will be used; (iii) agreeing the technical mechanisms for transfer and storage; and (iv) helping determine the financial arrangements, including the value of the data being provided.
>> Public data – The UK Government is also taking steps to make public data available for reuse, including use to develop artificial intelligence. For example, various health datasets are available from Public Health England. The Office for National Statistics and bodies such as the UK Data Service also make a number of datasets available for reuse. It might also be possible to obtain information from the Government using the Freedom of Information Act 2000.
>> Data mining rights – The EU's proposed Directive on copyright in the digital single market contains a proposed right to allow research organisations to carry out text and data mining of copyright works to which they have lawful access. This is similar to the existing rights in the UK for non-commercial research. 37
>> Competition law remedies – In some circumstances, competition law could be used to obtain data. Where the holder of the data has a dominant position, it might be possible to compel the holder to provide a licence of that data if it is an "essential facility" – i.e. the refusal: (i) is preventing the emergence of a new product for which there is a potential consumer demand; (ii) is "unjustified"; and (iii) excludes competition in the secondary market. 38 Similarly, discriminatory access terms or exclusive data supply arrangements could also raise competition issues. However, using competition law to get access to data will likely be expensive and uncertain.
>> Data portability – In certain circumstances, individuals have the right to data portability under the GDPR, i.e. to be provided with a copy of their information in a machine-readable form. However, this is unlikely to generate sufficient volumes of data to support the development of artificial intelligence.
DATA PROTECTION – A QUICK OVERVIEW
Data protection laws in the EU are mainly set out in the General Data Protection Regulation
(“GDPR”). 39 This is supplemented in the UK by the Data Protection Act 2018.
The GDPR applies to the processing of personal data. This is information that relates to identified or identifiable living individuals. It does not protect information about companies or other non-personal data, e.g. share prices or weather reports.

Those processing personal data do so as either a controller (who determines the purpose and means of the processing) or as a processor (who simply acts on the controller's instructions). The majority of the obligations in the GDPR apply to controllers. If you are rolling out an artificial intelligence system for your own purposes, you are likely to do so as a controller. 40

Key obligations under the GDPR

The GDPR imposes a wide range of obligations. Those specifically relevant to artificial intelligence systems are set out below. 41

>> Look after your data – Your use of personal data should be fair and lawful, and you should only use personal data for purposes for which it was collected or for other compatible purposes. You should ensure the personal data you use is accurate, not excessive and not kept for longer than necessary.
>> Tell people what you are doing – You should normally tell individuals if you are processing their personal data. There are additional obligations if you are using a system to carry out automated decision making. These obligations are discussed in the section on Liability & regulation.
>> Respect individual rights – Individuals have rights not to be subject to automated decision making (see Liability & regulation). Individuals also have rights to object to processing or to ask that their data is quarantined or erased. These other rights are complex and may need to be factored into your project. 42
>> Keep personal data secure – This is a particular concern for some artificial intelligence algorithms and big data projects which use large amounts of personal data. Some security breaches must be notified to regulators and individuals (see Cyber threats).
>> Sensitive personal data – Additional restrictions apply if you are using information about criminal offences or certain sensitive characteristics (known as special personal data 43). This type of personal data can only be used where specific statutory conditions are satisfied. 44 There is also an increased risk of discrimination when using this type of information.

The GDPR also includes an "accountability" principle, meaning that you must not only comply with the law but also be able to demonstrate how you comply.

Data processing conditions

When you use personal data, you must also satisfy at least one statutory processing condition. 45 There are six different processing conditions:

>> Consent – This applies where the individual has given consent. Under the GDPR, a consent will only be valid if there is a clear and specific request to the individual, and the individual actively agrees to that use. It is not possible to imply consent and you cannot rely on legalese buried deep within your terms and conditions. This high threshold means consent will rarely be appropriate to justify processing in an artificial intelligence project.
>> Necessary for performance of a contract – This applies where the processing is necessary for the performance of a contract with the individual or in order to take steps at the request of the individual prior to entering into a contract. It is not relevant where the contract is with a third party.
>> Legal obligation – This applies where the processing is necessary for compliance with a legal obligation under EU or Member State law.
>> Vital interests – This applies where the processing is necessary in order to protect the vital interests of the individual or of another natural person. This is typically limited to processing needed for medical emergencies.
>> Public functions – This applies where the processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller under EU or Member State law.
>> Legitimate interests – This applies where the processing is necessary for the purposes of the legitimate interests of the controller or a third party, unless those interests are overridden by the interests of the individual. Assessing this condition involves three questions:

>> What is the legitimate interest? The law recognises that businesses have a legitimate interest in a wide range of activities, such as marketing or increasing the internal efficiency of the business. However, where the purpose serves an obvious public interest (e.g. detecting fraud or cyber-attacks) that interest will carry greater weight.
>> Is the processing necessary for that purpose? This may be a bigger challenge. For development work, the Information Commissioner may well want to know why it could not be conducted with pseudonymised or anonymised data (especially if that personal data is private in nature).
>> Do the individual's interests override those legitimate interests? This will depend on a range of factors including the sensitivity of the personal data, the reasonable expectations of the individual and the public interest in the underlying purpose. Safeguards will be an important part of this balancing exercise.
Fair and lawful processing generally means that personal data should
only be used for the purpose for which it was originally collected.
However, the GDPR allows use for new purposes (such as development
of new technology) if the new purpose is compatible. This requires an
assessment of a range of factors including: (i) any link between the
original and new purpose and the context of the new purpose; (ii) the type
of personal data being processed; (iii) the consequences for individuals;
and (iv) the safeguards used.46
Impact assessments
The use of personal data for the development of artificial intelligence
is likely to engage a range of relatively complex issues that require a
number of value judgements to be made. In most cases, you will need to
document this evaluation. This will be through either a data protection impact assessment or a legitimate interests assessment.
03 LIABILITY & REGULATION

The aim of an artificially intelligent system is to be intelligent – to analyse, decide, and potentially act, with a degree of independence from its maker.
This is a potential concern. The algorithm at the heart of the artificially intelligent system may be opaque and, unlike a human, there is no common-sense safety valve. Delegating decisions to a machine which you do not control or even understand raises interesting issues. You should consider:

>> Liability – Liability will primarily be determined by contract. In the absence of a contract, liability could arise in tort (though this may be subject to the restrictive rules around pure economic loss) or under product liability regimes.
>> Fair use of personal data – Data protection issues arise not only when developing artificial intelligence (as previously discussed) but also when deploying that technology, including restrictions on automated decision making.
>> Competition law assessment – There is a risk that an artificial intelligence solution, particularly a pricing bot, could lead to anti-competitive behaviour.

We consider these issues below. In many cases, you will need to include suitable measures to supervise the operation of the artificially intelligent system (see Safe use) to mitigate these liabilities and meet your regulatory responsibilities.

Contractual liability

Where you provide an artificially intelligent system or service to a third party under contract, there is a risk of contractual liability if the system fails to perform. However, that liability can be regulated in two important ways.

First, and most importantly, the contract can define the basis on which you provide the system. For example, this might: (a) impose an "output duty" to ensure the output of the system meets specified standards, such as percentage accuracy; (b) impose a lesser "supervision duty" to take reasonable care developing or deploying the system; or (c) make it clear that the system is provided "as is" and that use is entirely at the third party's risk.

Where you are dealing with a consumer, you would need to ensure that your terms are consistent with the statutory implied terms that digital products are of satisfactory quality and fit for purpose under the Consumer Rights Act 2015.

Secondly, the contract can exclude or limit your liability. These protections would, however, be subject to the normal statutory controls. In business contracts, that means the Unfair Contract Terms Act 1977 and, in a consumer contract, the Consumer Rights Act 2015.
Is there a duty to treat job applicants fairly?

Imagine a large employer uses an artificial intelligence system to automatically shortlist candidates for interview. The solution uses information extracted from the candidate's CV and performance in aptitude tests.

Question: Does an applicant have any remedy in tort if the system "wrongly" rejects their application?

Answer: It is unlikely that the employer has a duty of care to properly consider the applicant for shortlisting. This is not a relationship in which there is an established duty of care. While there is clearly proximity between the employer and the applicant, there would be strong arguments that it is not reasonable, as a matter of public policy, to impose this duty on employers given they might receive thousands of applications for positions and they cannot reasonably be expected to review all of them in detail.

Finally, this would be a significant extension to the law of negligence and so would fail the incremental test. Such a duty would have significant and wide-ranging effects. Put differently, is it reasonable to require an employer to carefully review every CV it is presented with?

While the applicant is unlikely to have a remedy in tort, their interests are likely to be protected under data protection law, particularly because of the controls placed on automated decision making. The applicant might also have a remedy if the decision is discriminatory.

Product liability

Strict liability could arise under product liability laws. In the UK, businesses supplying products to consumers have strict liability obligations to ensure their safety. 52

The term "product" includes "all movables … even though incorporated into another movable or into an immovable". This term would not cover a business's internal use of artificial intelligence tools (as they are not provided to a consumer) or web-based access to an artificial intelligence tool (as there is no product). However, product liability will be relevant if the artificial intelligence were embedded in a product sold to a consumer.

One such example is an autonomous vehicle. The application of product liability law to autonomous vehicles raises a number of interesting issues including what standard of safety a person might reasonably expect of the product 53 and the application of various defences, such as where the defect could not have been discovered at the time the product was put on the market. However, these issues are outside the scope of this toolkit and are largely superseded by the specific insurance and liability provisions in the UK Automated and Electric Vehicles Act 2018.

Unfairness, bias and discrimination

One of the key principles under data protection law is that personal data must be processed fairly and lawfully. This is a broad common-sense concept.

Importantly, data protection laws do not regulate the minds of humans. Human decisions cannot generally be challenged under data protection law unless based on inaccurate or unlawfully processed data. 54 In contrast, a decision made by the mind of a machine may well be open to challenge on general grounds of unfairness.

Moreover, under the accountability principle in the GDPR you are obliged not just to ensure your processing is fair but also to be able to demonstrate that this is the case. The challenge is to square this with the use of an opaque algorithm.
Similarly, if the algorithm is opaque there is a risk that it will make decisions that are either discriminatory or reflect bias in the underlying dataset. This is not just a potential breach of data protection law but might also breach the Equality Act 2010.

The solution will depend greatly on the context and the impact on the individual; the inner workings of an AI-generated horoscope 55 require much less scrutiny than an algorithm used to decide whether to grant someone a mortgage.

There are various options to address the use of opaque algorithms, including properly testing the algorithm, filleting the input data to avoid discrimination, or producing counterfactuals. We consider these issues in the Safe use section.

These are all issues you should address in your data protection impact assessment or legitimate interests assessment – see Impact assessments.

Automated decisions – "The computer says no"

The GDPR contains controls on the use of automated decision making, i.e. decisions based solely on automated processing that have legal or similarly significant effects on an individual. Guidance from regulators suggests that this will include a range of different activities, such as deciding on loan applications or changing credit card limits. Automated decision making is only permitted in the following situations:

>> Human involvement – If a human is involved in the decision-making process it will not be a decision based solely on automated processing. However, that involvement would have to be meaningful and substantive. It must be more than just rubber-stamping the machine's decision.
>> Consent – Automated decision making is permitted where the individual has provided explicit consent. While this sounds like an attractive option, the GDPR places a very high threshold on consent and this will only be valid where the relevant decision-making process has been clearly explained and agreed to.
>> Performance of contract – Automated decision making is also permitted where it is necessary for the performance of a contract or in order to enter into a contract. An example might be carrying out credit checks on a new customer.
>> Authorised by law – Finally, automated decision making is permitted where it is authorised by law.

Even where automated decisions are permitted, you must put suitable safeguards in place to protect the individual's interests. This means notifying the individual (see below) and giving them the right to a human evaluation of the decision and to contest the decision. The Information Commissioner also recommends the use of counterfactuals to help the individual understand how the decision was made (see Verification and counterfactuals).

Beyond the technical requirements of data protection law, there is a wider ethical question of whether it is appropriate to delegate the final decision about an individual to a machine. As the human rights organisation Liberty submitted to a recent Select Committee hearing: "where algorithms are used in areas that would engage human rights, they should at best be advisory". 57

A large employer uses an artificial intelligence solution to automatically shortlist candidates for interview. This constitutes automated decision making as the decision is made solely by automated means and significantly affects applicants.

The employer is permitted under the GDPR to make these automated decisions as they are taken with a view to entering an employment contract with the individual. 58 However, in addition to the general steps described above to ensure fair and lawful processing, the employer must:

>> notify applicants that the decision not to shortlist them was taken using automated means; and
>> allow the applicant to contest the decision and ask for human evaluation.
03 LIABILITY & REGULATION
In addition, you must tell individuals:

>> about the significance of the automated decision making; and

>> how the automated decision making operates.

The obligation is to provide "meaningful information about the logic involved". This can be challenging if the algorithm is opaque. The logic used may not be easy to describe and might not even be understandable in the first place. These difficulties are recognised by regulators who do not expect organisations to provide a complex explanation of how the algorithm works or disclosure of the full algorithm itself. 59 However, you should provide as full a description about the data used in the decision-making process as possible, the broad aim of the processing and counterfactual scenarios (see Safe use) as an alternative.

Cyber threats

The security of the system will be essential. A breach could have serious consequences including:

>> Uncontrolled behaviour – The security breach could allow the hacker to take control of the system. One visceral example is someone hijacking a driverless car, which could result in personal injury or death. 60 However, it is easy to imagine other situations in which an out of control artificial intelligence could cause serious damage.

>> Unauthorised disclosure – The security breach could also result in the unauthorised disclosure of personal or confidential information.

Anti-competitive pricing bots

You should also address the risk that an algorithm might result in anti-competitive behaviour. There are four areas of concern. 61

The first and least controversial is the messenger scenario; where the technology is intended to monitor or implement a cartel – i.e. it is a tool to execute a human intention to act anti-competitively. One example is two poster sellers who agreed not to undercut each other's prices on Amazon's UK website. That agreement was implemented using automated repricing software. 62

The second concern arises where more than one business is relying on the same pricing algorithm, a so-called (inadvertent) hub and spoke arrangement. In Eturas, 63 the administrator of an online travel booking system sent out a notice to travel agents informing them of a restriction on discount rates which had been built into the system. The Court of Justice decided that those travel agents could be liable if, knowing of this message, they failed to distance themselves from it. Neither this, nor the first scenario, necessarily involve artificial intelligence or stretch the boundaries of competition law.

The third, predictable agent, scenario is more interesting. This arises where a number of parties across an industry unilaterally deploy their own artificially intelligent systems based on fast, predictive and similar algorithms.
The fourth situation is the digital eye. An all-seeing and fully intelligent
artificial intelligence is able to survey the market and extend tacit collusion
beyond oligopolistic markets to non-price factors. This type of artificial
intelligence envisages users of the algorithm being able to tell it to “make
me money” and, through a process of “learning by doing”, the algorithm
reaches an optimal solution for achieving this aim. However, such
advanced technology does not seem likely in the short term.
It is important to put the right systems and controls in place to ensure that live use of artificial
intelligence systems is properly supervised.
As part of your normal risk management framework, you should put appropriate safeguards in place. We consider these safeguards below.
The answer to this is, partly, more data. The more data you have to train and test the system, the more confident you can be that it is working properly.

Dynamic and complex systems

An added complication is that the situations faced by the artificial intelligence system change over time. The system's reaction to a change in environment may not be predictable.

Similarly, as artificial intelligence systems become more prevalent, it will be necessary to consider the potential interactions between these different systems. A system may well operate properly in an insulated test environment but generate complex and undesirable behaviours when combined with other systems (see The $23 million textbook).

Verification and counterfactuals

This testing process should, in some cases, be accompanied by some form of human verification of the artificial intelligence's decision-making process.

For simple tasks, there might be easy ways to do this. For example, a picture classification algorithm might highlight the pixels that strongly influence the classification decision. This might help a human to gain some comfort that the artificial intelligence is working properly.

Another way to assess the operation of an artificially intelligent system is to produce counterfactuals. For example, where a loan application is rejected by an artificially intelligent system it could provide the applicant not just with a rejection but also with an assessment of the minimum change needed for the application to be successful (e.g. the loan would be granted if it were for £2,000 less or the borrower's income £5,000 more). These counterfactuals could be produced by varying the input data until a positive result was achieved.

These counterfactuals can be used to create a series of edge cases, i.e. situations in which there is a tipping point between a positive and negative decision. The edge cases will provide some insight into the decision-making process and will help to test the soundness of those decisions. Analysing these edge cases may need visualisation tools given the likely complex dependencies between the inputs to the system.

The UK Information Commissioner advocates the use of counterfactuals when conducting automated decision making about individuals. Appropriate counterfactuals should be provided to help individuals understand the basis for the decision. Another means recommended by the Information Commissioner is qualified transparency. This would involve the use of an expert to test the quality, validity and reliability of the machine's decision making.
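To illustrate how such counterfactuals might be generated, the sketch below varies one input feature at a time until the model's decision flips. It is a minimal, hypothetical example: predict_approval stands in for whatever model is actually deployed, and the feature names, threshold and step sizes are invented purely for illustration.

```python
# Minimal sketch of counterfactual generation for a rejected loan application.
# The model, feature names and step sizes are hypothetical placeholders.

def predict_approval(application: dict) -> bool:
    """Stand-in for the deployed model; returns True if the loan would be granted."""
    # Toy rule purely for illustration: approve if the loan is small relative to income.
    return application["loan_amount"] <= 0.5 * application["income"]

def find_counterfactual(application: dict, feature: str, step: float, max_steps: int = 100):
    """Vary a single input feature in fixed steps until the decision flips."""
    candidate = dict(application)
    for i in range(1, max_steps + 1):
        candidate[feature] = application[feature] + i * step
        if predict_approval(candidate):
            return {feature: candidate[feature]}  # the minimum change found
    return None  # no counterfactual within the search range

rejected = {"loan_amount": 12_000, "income": 20_000}
print(find_counterfactual(rejected, "loan_amount", step=-500))  # smaller loan
print(find_counterfactual(rejected, "income", step=1_000))      # higher income
```

A real system would need to search over several features at once and respect plausibility constraints (for example, not suggesting a change to a date of birth), but the underlying idea of perturbing the inputs until the outcome tips is the same.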
aborted if the system has gone rogue) or reserving the right to revoke the contract in certain circumstances. So long as that framework is clear, it is very likely it would be enforceable under English law.

Our paper on Smart Contracts and Distributed Ledger – A Legal Perspective, co-authored with the International Swaps and Derivatives Association, contains a detailed assessment of some of these issues. 70

Understanding and interpreting outputs

Much of the discussion in this section assumes the artificial intelligence system is, at least in part, the decision maker.

However, for many practical applications of the technology the artificial intelligence system will simply provide assistance to a human decision maker. For example, highlighting patients who may be developing a medical condition or transactions on a bank account that appear to be fraudulent.

Where the artificial intelligence system is providing this input, it is vital that the human understands the limits on that information and can interpret that information correctly. The example below illustrates the risk of misinterpreting the data.

Take a hypothetical example. Consider an insurance company that develops an artificial intelligence tool to detect fraudulent claims. Assume that:

>> the tool is 98% accurate. This means 98% of fraudulent claims are picked up and 98% of valid claims are determined not to be fraudulent; and

>> one in 500 claims is actually fraudulent.

Question: What is the chance that a claim flagged by the tool as fraudulent is in fact fraudulent?

Answer: The answer is not 98%. In fact, it is only 9%. 72

In other words, the majority of the claims flagged as fraudulent will actually be valid. Knowing this is important to ensure not only that those claims are dealt with without an automatic assumption of guilt, but also that the large number of non-fraudulent claims being flagged is not necessarily a failure by the artificial intelligence system.
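The arithmetic behind that 9% figure is set out in footnote 72. For readers who prefer to see it worked through, the short calculation below simply restates that footnote; the one million claims is the illustrative volume used there, not real data.

```python
# Reproduces the calculation in footnote 72: what share of flagged claims is genuinely fraudulent?
total_claims = 1_000_000      # illustrative volume from footnote 72
fraud_rate = 1 / 500          # one in 500 claims is actually fraudulent
accuracy = 0.98               # 98% of fraudulent claims flagged; 98% of valid claims cleared

fraudulent = total_claims * fraud_rate        # 2,000 fraudulent claims
valid = total_claims - fraudulent             # 998,000 valid claims

true_positives = fraudulent * accuracy        # 1,960 fraudulent claims correctly flagged
false_positives = valid * (1 - accuracy)      # 19,960 valid claims wrongly flagged

share = true_positives / (true_positives + false_positives)
print(f"Chance a flagged claim is actually fraudulent: {share:.1%}")  # roughly 8.9%
```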
05 FINANCIAL SERVICES

>> Ensure appropriate systems and controls are in place.

>> Consider how the use of artificial intelligence fits into the senior manager regime.

>> Comply with the rules on algorithmic trading and high-frequency trading.
Financial services firms must ensure that their approach to artificial intelligence reflects the additional
regulatory requirements placed upon them. This toolkit does not provide an exhaustive review of the
implications of financial services regulation on artificial intelligence but simply highlights some of the
more important considerations.
Risk management framework

The starting point is that the use of artificial intelligence will need to be factored into the firm's overall risk management framework. This means ensuring that it takes reasonable care to organise and control its affairs responsibly and effectively, with adequate risk management and appropriate systems and controls put in place. 73 This will include:

>> Governance: Putting in place a clear and formalised governance framework.

>> Compliance: Ensuring sufficient appropriately trained technical, legal, monitoring, risk and compliance staff with at least a general understanding of the artificially intelligent systems deployed.

>> Outsourcing: Where part of the artificial intelligence project is outsourced, the firm remains fully responsible for its regulatory obligations.

This is likely to require an assessment of the various issues addressed in the previous chapter on Safe use.

Financial services firms should consider whether their Compliance and Audit functions have the right skills and experience in order to undertake that supervision. Similarly, they would need to consider what documentation they need to demonstrate they have undertaken appropriate testing and supervision.

Finally, financial services firms should consider how they ensure that artificial intelligence used for trading only trades within the approved framework of the firm, and how they can ensure transactions entered into by artificial intelligence are legally enforceable (see Contractual agents and circuit breakers).

Senior Managers and Certification Regime

Similarly, it is important to identify where ultimate responsibility for the use of artificial intelligence should lie. The Senior Managers and Certification Regime is intended to enhance individual accountability within firms. Documentation must be provided to the regulators stating the responsibilities of Senior Managers. Certain firms must also provide a responsibilities map showing that there are no gaps in the allocation of responsibilities.

For firms subject to the Senior Managers and Certification Regime, senior management will need to consider how they intend to allocate responsibility for managing the risks associated with artificial intelligence. Depending on the type of firm, this may sit with the Senior Manager who performs the Chief Operations function and is responsible for managing the internal operations, systems and technology of a firm.
Regulatory sandbox

While the uncontrolled deployment of new technology could be harmful, regulators also appreciate the benefit that innovation could bring to firms and consumers.

One of the measures that the Financial Conduct Authority has taken to support innovation is the creation of a regulatory sandbox to give a range of businesses (not just authorised firms) the ability to test products and services in a controlled environment.

These sandbox tests are intended for projects that provide a public benefit and are conducted on a small scale, e.g. for a limited duration with a limited number of customers. The sandbox is also closely overseen by the Financial Conduct Authority and appropriate safeguards will be needed to protect consumers.

However, in return for these restrictions a number of tools are on offer, such as restricted authorisation, individual guidance, informal steers, waivers and no enforcement action letters. These help to reduce time-to-market.

Recent sandbox projects include:

>> Veridu Labs, which has a KYC and AML solution backed by machine learning and network analyses to facilitate onboarding and access to business banking; and

FCA and Big Data

The Financial Conduct Authority has been considering artificial intelligence and data analytics for some time. In 2016, it carried out a review of Big Data in retail general insurance, issuing a feedback statement in September 2016 (FS16/5).

Overall, it found broadly positive consumer outcomes. Big Data provides a means to transform how consumers deal with firms, encourages innovation and streamlines the sales and claims processes. On that basis it decided not to launch an in-depth market study. However, there were two areas of concern:

>> Big Data allows increased risk segmentation, so categories of customers may find it harder to obtain insurance.

>> Big Data could enhance firms' ability to identify opportunities to charge certain types of customer more, for example charging customers more if they have a low sensitivity to prices and are less likely to shop around.
The Markets in Financial Instruments Directive (2014/65/EU) ("MiFID II") introduced specific rules for algorithmic trading and high-frequency trading to avoid the risk of rapid and significant market distortion. These restrictions are relevant to some artificial intelligence tools deployed by financial services firms and are also an interesting illustration of the sorts of legislative controls that might be placed on this technology.

Under these rules, algorithmic trading is defined as trading where a computer algorithm automatically determines parameters of orders (e.g. initiation, timing, quantity or price) subject to certain exemptions. Where a firm conducts algorithmic trading, it must comply with the general MiFID II requirements and notify the relevant competent authorities. In addition:

>> Controls: Firms must put in place effective systems and risk controls to ensure that their trading systems are resilient and have sufficient capacity, are subject to appropriate trading thresholds and limits and prevent the sending of erroneous orders. This should include real-time monitoring of all activity under their trading code for signs of disorderly trading.

>> Market Abuse: Firms must put in place effective systems and risk controls to ensure the trading systems cannot be used for market abuse or in breach of the rules of a trading venue.

>> Kill functionality: Firms must have emergency 'kill functionality', allowing them to cancel all unexecuted orders with immediate effect.

>> Testing: The systems must be properly tested and deployed only with proper controls and authority.

Beyond these requirements, the regime does not regulate the outcome of the algorithmic trading strategy as such. In other words, the aim is not to ensure that the algorithms make good profitable decisions, rather it is to ensure an orderly market.
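By way of illustration only, the sketch below shows the rough shape that two of the controls described above, a pre-trade order-size limit and an emergency kill switch, might take in code. The class and method names are hypothetical; real controls sit inside a firm's order management infrastructure and are considerably more sophisticated.

```python
# Illustrative sketch of two controls described above: a crude pre-trade size limit
# and an emergency "kill" that blocks new orders and cancels unexecuted ones.
# Names are hypothetical; this is not a production trading control.

class TradingGateway:
    def __init__(self, max_order_size: int):
        self.max_order_size = max_order_size
        self.kill_switch_engaged = False
        self.open_orders = []

    def submit_order(self, symbol: str, quantity: int) -> bool:
        """Reject orders when the kill switch is engaged or the size limit is breached."""
        if self.kill_switch_engaged:
            return False
        if quantity > self.max_order_size:  # basic erroneous-order check
            return False
        self.open_orders.append({"symbol": symbol, "quantity": quantity})
        return True

    def kill(self) -> int:
        """Engage the kill switch and cancel all unexecuted orders; returns the number cancelled."""
        self.kill_switch_engaged = True
        cancelled = len(self.open_orders)
        self.open_orders.clear()
        return cancelled
```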
Financial advice powered by artificial intelligence (or any form of automation) is subject to the same regulatory obligations as more traditional financial advice delivered by humans, and the obligations will fall on the firm offering the system rather than (for instance) a third-party provider who creates the relevant artificial intelligence. It is up to regulated firms to ensure that any advice offered by them using artificial intelligence is "suitable" for the client.

"Artificial intelligence is a key ingredient in the Fintech sector. We are seeing more and more clients looking to exploit this technology."
ETHICS AND GOVERNMENT RESPONSES

This toolkit considers the legal issues associated with artificial intelligence. However, the law sometimes focuses on past problems and only provides a narrow view of the issues.
In particular, you should consider:

>> Values – How will your approach to artificial intelligence reflect your company's values and approach to corporate social responsibility?

>> Employees – What impact will artificial intelligence have on your workforce, both in terms of ensuring your employees have the right skill set and in terms of changes to your employees' working environment?

>> Transparency – How will your use of artificial intelligence affect your reputation and how can you be transparent about your use of this technology?

This section also provides a brief overview of the various UK and EU initiatives to respond to, and regulate, artificial intelligence.

Your values – More than just a legal issue

Any business using artificial intelligence should be mindful of the wider ethical implications of using that technology. This means taking a broad future-looking view of the likely implications of artificial intelligence on your business, your employees, the environment, communities and countries.

Large businesses may also want to consider how this fits into their wider accountability framework. For example, which board committee should be tasked with assessing the wider impacts of artificial intelligence and how can it ensure that it can access the right expertise to supervise this area?

Some organisations have taken stronger and more innovative steps to provide accountability and transparency.

For example, DeepMind, the artificial intelligence company, appointed a number of public figures to independently review its healthcare business. These Independent Reviewers meet four times a year to assess and scrutinise DeepMind's operation and issue a publicly available annual report outlining their findings. The Independent Reviewers' latest report is available here 75 and sets out 12 ethical principles with which they consider DeepMind and other healthcare technology companies should comply.

Similarly, SAP has created an external artificial intelligence ethics board. The five-person committee includes technical experts and a theologian. It will ensure the adoption of artificial intelligence principles in collaboration with the AI steering committee at SAP.
Those values will be different for every business, but the UN Guiding Principles on Business and Human Rights, "Protect, Respect and Remedy" provide a useful framework from which to conduct this analysis. For example, they mandate the use of impact assessments, transparency and remedies which can all be used when assessing the use of artificial intelligence.

AI at Microsoft

Microsoft has issued a set of AI principles to ensure its work is built on ethical foundations. 76 There are four key principles:

1. Fairness. AI must maximise efficiencies without destroying dignity and guard against bias.

2. Accountability. AI must have algorithmic accountability.

3. Transparency. AI must be transparent.

4. Ethics. AI must assist humanity and be designed for intelligent privacy.

This is supported by five design principles:

1. Humans are the heroes. People first, technology second. Design experiences that augment and unlock human potential.

2. Know the context. Context defines meaning. Design for where and how people work, play, and live.

3. Balance EQ and IQ. Design experiences that bridge emotional and cognitive intelligence.

4. Evolve over time. Design for adaptation. Tailor experiences for how people use technology.

5. Honor societal values. Design to respect differences and celebrate a diversity of experiences.

Your employees – Robots in the workplace

The introduction of artificial intelligence into the workplace may have an impact on your workforce. This might include:

>> Workforce displacement – There are predictions that artificial intelligence will replace many white-collar jobs, in much the same way as the automation of manufacturing has greatly reduced the number of blue-collar manufacturing jobs. For example, the chief economist of the Bank of England has warned that "large swathes" of people may become "technologically unemployed" as artificial intelligence makes many jobs obsolete. 77 Alternatively, those who are displaced by artificial intelligence will move to lower skilled jobs that still need human intelligence, but against the background of a depressed labour market. This could lead to a "minimum wage economy" for many with much greater inequality. Employers should be mindful of the opportunities to retrain and redeploy displaced employees and the overall impact on employee morale.

>> Skill sets – It will also be necessary to ensure that your workforce has the right skill set to adapt to a changing environment. This might involve reskilling your existing employees or hiring employees with different skill sets in the future.
You should also consider the public's perception of the use of artificial intelligence, given increasing sensitivity to this and other data-heavy technologies.

The NHS has also issued a new code of conduct for artificial intelligence and other data-driven technologies to allow NHS patients to benefit from the latest innovations. The code has 10 principles setting out how the government will make it easier for companies to work with the NHS and what the NHS expects in return. 81
GDPR – The EU General Data Protection Regulation, see Data protection – A quick overview.

Hall/Pesenti Report – Growing the artificial intelligence industry in the UK by Dame Wendy Hall & Dr Jerome Pesenti.
FOOTNOTES

1 Artificial Intelligence: A Modern Approach, Stuart Russell and Peter Norvig.

2 This might be a dynamic definition. Historically, once a task is easily accomplished by a computer it often ceases to be considered artificial intelligence (i.e. artificial intelligence is "anything computers still can't do"). See What Computers Still Can't Do: A Critique of Artificial Reason by Hubert L Dreyfus.

3 This challenge was identified in 1950 by Alan Turing. He proposed what has come to be known as the "Turing test", in which a human would evaluate natural language conversations between a human and a machine. The machine will pass the test if the evaluator cannot tell machine from human.

4 Miller v Jackson [1977] QB 966: "In summertime village cricket is the delight of everyone. Nearly every village has its own cricket field where the young men play and the old men watch. In the village of Lintz in County Durham they have their own ground, where they have played these last 70 years. They tend it well."

5 Donoghue v Stevenson [1932] A.C. 562: "For a manufacturer of aerated water to store his empty bottles in a place where snails can get access to them, and to fill his bottles without taking any adequate precautions by inspection or otherwise to ensure that they contain no deleterious foreign matter, may reasonably be characterised as carelessness without applying too exacting a standard."

6 With apologies to Sir Martin Nourse (Tektrol Ltd v International Insurance Co of Hanover [2005] EWCA Civ 845).

7 Texts such as the Bible are used for non-European languages given it has been widely translated.

8 AlphaZero AI beats champion chess program after teaching itself in four hours, The Guardian, 7 December 2017.

9 What Artificial Experts Can and Cannot Do, Hubert L. Dreyfus & Stuart E. Dreyfus, 1992. This classic example features in many undergraduate computer science courses and demonstrates the problem is not new. However, it is also worth noting there is an ongoing debate as to whether this actually happened or is just an apocryphal story.

10 See Building safe artificial intelligence: specification, robustness and assurance, Pedro Ortega and Vishal Maini, 27 September 2018.

11 See footnote 10.

12 While not discussed in the paper, one assumes this particular problem could easily be fixed by not repopulating the waypoints.

13 See The quality of live subtitling, Ofcom, 17 May 2013.

14 See 2001: A Space Odyssey, Ex Machina and Avengers: Age of Ultron, respectively.

15 In some cases, the project could involve the creation of specialised hardware on which to run the algorithm, though this is likely to be rare.

16 Recital 7 and Article 1(3) of the Software Directive 2009/24/EC.

17 Copyright will also protect the object code for that software and any preparatory works. However, it does not protect the underlying ideas or any programming interfaces.

18 "Neural weights": the relative importance ascribed to items within a dataset for the purpose of analysis or decision making.

19 The weightings within a neural net will be computer generated as part of the training process. Copyright does protect computer generated works but only if the work is a literary, dramatic, musical or artistic work (section 9(3) of the Copyright, Designs and Patents Act 1988). It is questionable if something like weightings in a neural network could be called a literary work.

20 Database copyright may also subsist in a database in which the selection or arrangement of the data constitutes the author's own intellectual creation (for example, a large index of My favourite 20th century poems).

21 Though a computer program might be patentable if there is some technical contribution over and above that provided by the program itself. See the EPO's Guidelines for Examination on Artificial Intelligence and Machine Learning.

22 Where the software is simply used as a tool, for example Microsoft Word, the person using that tool will be the author. Word does not supply any element of "originality". In contrast, where an artificial intelligence algorithm creates a work, it may have a creative role and help provide the necessary ingredient of originality.

23 Section 9(3), Copyright, Designs and Patents Act 1988.

24 For example, the designer of a pool game was the person who made the arrangements for the creation of each individual frame of the game. The player of the game is not "an author of any of the artistic works created in the successive frame images. His input is not artistic in nature and he has contributed no skill or labour of an artistic kind. Nor has he undertaken any of the arrangements necessary for the creation of the frame images. All he has done is to play the game." Nova v Mazooma [2006] EWHC 24.

25 Such as "Collective redress across the globe: a review in 19 jurisdictions" or "FAQs on the ISDA Benchmarks Supplement".

26 Study finds gender and skin-type bias in commercial artificial-intelligence systems, MIT News, 11 February 2018.

27 This type of discrimination would be atypical: see Amazon scrapped 'Sexist AI' tool, BBC News, 10 October 2018.

28 In practice, these confidentiality duties are likely to arise in equity.

29 Such as guidance issued by the National Data Guardian and the various codes of practice issued by the NHS and HSCIC.

30 See section 251 of the National Health Service Act 2006 and the associated Health Service (Control of Patient Information) Regulations 2002.

31 See Article 28 of the GDPR.

32 For example, a licence agreement stated that each party "agrees to keep the terms of this Agreement confidential". In addition, either party could terminate for a material breach and for this "purpose… breach of the confidentiality obligations...constitutes a non-remediable material breach". One of the parties disclosed the agreement to a potential purchaser so the other party terminated the agreement. The Court of Appeal decided that the strict wording of the agreement applied, and the termination was justified. See Kason Kek-Gardner v Process Components [2017] EWCA Civ 2132.

33 The anonymisation process itself is a processing that must be justified under the GDPR but will normally be permitted so long as the personal data is truly anonymised.

34 A postcode identifies around 15 households (though some postcodes relate to a single property) so the combination of a postcode with other information, such as date of birth, will normally identify an individual. In rare cases, a postcode alone will identify an individual.

35 See section 171 of the DPA 2018.

36 Source: "How To Break Anonymity of the Netflix Prize Dataset" by Arvind Narayanan and Vitaly Shmatikov, https://arxiv.org/abs/cs/0610105v2.

37 See section 29A of the Copyright, Designs and Patents Act 1988.

38 IMS Health v. NDC Health (C-418/01).

39 The GDPR is supplemented by the ePrivacy Directive (2002/58/EC) which, amongst other things, imposes additional limitations on the use of traffic and location data. The EU is currently planning to replace this Directive with a new Regulation.
40 There has been some discussion about whether the artificial intelligence system might itself be an independent data controller. As such systems do not have legal or natural personality, and are not really "intelligent", this seems unlikely for the time being.

41 This is not an exhaustive list. See our GDPR Survival Guide for more information - https://www.linklaters.com/en/insights/publications/2016/june/guide-to-the-general-data-protection-regulation.

42 Our GDPR Survival Guide contains a detailed summary of these rights, see footnote above.

43 This consists of racial or ethnic origin, political opinions, religious beliefs, trade union membership, genetic information, biometric information, health information or information about sex life or sexual orientation.

44 See Article 9 of the GDPR and Schedule 1 of the Data Protection Act 2018.

45 Article 6 of the GDPR.

46 Article 6(4) of the GDPR.

47 Guidelines on Data Protection Impact Assessment and determining whether processing is "likely to result in a high risk", The Article 29 Working Party (WP 248 rev 01), October 2017.

48 Information Commissioner's Examples of processing 'likely to result in high risk' read in light of EDPB Opinion 22/2018.

49 This report does not consider driverless cars.

50 In a medical scenario, the doctor's use of an artificial intelligence tool would be subject to a duty of care. That duty is likely to be defined by the Bolam test. The key questions would likely be: (i) would a responsible professional use an artificially intelligent tool in this situation? (ii) what reliance would that professional place on the output of that tool?

51 Customs & Excise Commissioners v Barclays Bank plc [2007] 1 AC 181.

52 Primary liability falls on the manufacturer, "own brander" or importer, but distributors can also have liability in more limited circumstances. See the Product Liability Directive 85/374 implemented by the Consumer Protection Act 1987.

53 This is the test to determine if a product is defective. The court might consider that, for a car, those expectations are high, see Boston Scientific v AOK (C‑503/13).

54 See Peter Nowak v Data Protection Commissioner, Case C-434/16 and Johnson v Medical Defence Union [2007] EWCA Civ 262.

55 This inherently contradictory system may be impossible to develop.

56 Article 22(1), GDPR.

57 See statement by Silkie Carlo of Liberty in the House of Commons Science and Technology Committee's report on Algorithms in decision making.

58 Guidelines on automated decision making and profiling, Article 29 Working Party (WP251 rev 01), February 2018.

59 See Guidelines on automated individual decision making and profiling, Article 29 Working Party (WP 251 rev 01).

60 For example, see Fiat Chrysler recalls 1.4 million cars after Jeep hack, BBC News, 24 July 2015.

61 See Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy (2016), Maurice E. Stucke and Ariel Ezrachi.

62 Online seller admits breaking competition law, Competition and Markets Authority, July 2016.

63 "Eturas" UAB and Others v Lietuvos Respublikos konkurencijos taryba (C-74/14).

64 See footnote 61.

65 Bundeskartellamt 18th Conference on Competition, Berlin, 16 March 2017.

66 Commissioner Vestager. See footnote above.

67 See Neural networks, explained, Physics World, 9 July 2018.

68 For example, see Thornton v Shoe Lane Parking Ltd [1971] 1 All ER 686 in relation to a parking ticket machine: "The customer pays his money and gets a ticket. He cannot refuse it. He cannot get his money back. He may protest to the machine, even swear at it; but it will remain unmoved. He is committed beyond recall. He was committed at the very moment when he put his money into the machine. The contract was concluded at that time."

69 For completeness, an artificial intelligence would likely be treated as a "mere tool" for contracting, and not a distinct agent under agency law.

70 See https://www.linklaters.com/en/about-us/news-and-deals/news/2017/smart-contracts-and-distributed-ledger--a-legal-perspective.

71 Example adapted from Decision time for AI: Sometimes accuracy is not your friend, The Register, 6 July 2018.

72 Assume you have a million claims. Of those, 998,000 will be valid and 2,000 will be fraudulent (one in five hundred). Of the fraudulent claims, 1,960 will be flagged (2,000 x 98%). Of the valid claims, 19,960 will be flagged (998,000 x 2%). Thus the percentage of flagged claims that are actually fraudulent is 8.9% (1,960 ÷ (1,960 + 19,960)).

73 See FCA Principle 3 and SYSC 8.

74 See https://www.fca.org.uk/publications/multi-firm-reviews/automated-investment-services-our-expectations.

75 See https://deepmind.com/applied/deepmind-health/transparency-independent-reviewers/independent-reviewers/.

76 See Microsoft AI principles, https://www.microsoft.com/en-us/ai/our-approach-to-ai.

77 Bank of England chief economist warns on AI jobs threat, BBC News, 20 August 2018.

78 See Driven to despair — the hidden costs of the gig economy, Financial Times, 22 September 2017.

79 Big Data, AI, Machine Learning, and Data Protection, Information Commissioner's Office, September 2017.

80 FCA publishes feedback statement on Big Data Call for Input, Financial Conduct Authority, September 2016.

81 New guidance to help NHS patients benefit from digital technology, 5 September 2018.

82 Artificial Intelligence for Europe, European Commission, April 2018.

83 Revealed: Google AI has access to huge haul of NHS patient data, New Scientist, 29 April 2016.

84 See http://s3-eu-west-1.amazonaws.com/files.royalfree.nhs.uk/Reporting/Streams_Report.pdf.
CONTACTS
8424_INT_F/10.18
Linklaters LLP is a limited liability partnership registered in England and Wales with registered number OC326345. It is a law firm authorised and regulated by the Solicitors Regulation Authority.
The term partner in relation to Linklaters LLP is used to refer to a member of Linklaters LLP or an employee or consultant of Linklaters LLP or any of its affiliated firms or entities with equivalent standing and qualifications.
A list of the names of the members of Linklaters LLP and of the non-members who are designated as partners and their professional qualifications is open to inspection at its registered office, One Silk Street, London EC2Y 8HQ,
England or on www.linklaters.com and such persons are either solicitors, registered foreign lawyers or European lawyers.
Please refer to www.linklaters.com/regulation for important information on Linklaters LLP's regulatory position.