
House of Lords Library

Artificial intelligence: Development, risks and regulation
In Focus
Published Tuesday, 18 July, 2023
James Tobin

Artificial intelligence (AI) is developing at a rapid pace. From generative language models like ChatGPT to advances in medical screening technology, policymakers and the developers of the technology alike believe that it could deliver fundamental change across almost every area of our lives. But such change is not without risk. Debate is ongoing on how best to regulate these innovative technologies, and differences of approach have already emerged internationally as countries across the world examine how best to adapt.

Table of contents
1. What is artificial intelligence?

2. Ongoing development of AI: Potential benefits and risks


2.1 Current contribution of AI to the UK economy

2.2 Potential benefits and risks of AI

2.3 Potential impact on the UK employment market

2.4 Case study: Potential impact on the knowledge and creative industries
(House of Lords Communications and Digital Committee report, January
2023)
3. Calls for rapid regulatory adaptation
3.1 Letter from key figures in AI, science and technology calling for a pause
in AI development (March 2023)

3.2 Joint report by Sir Tony Blair and Lord Hague of Richmond (June 2023)

4. Proposed regulatory approaches: UK


4.1 UK government approach to artificial intelligence

4.2 Current regulatory environment for AI in the UK

4.3 Proposals for future regulatory reform: Government white paper

4.4 Individual sectoral guidance on the use of artificial intelligence

5. Other regulatory approaches: Examples from around the world


5.1 European Union

5.2 United States of America

5.3 Regulatory approaches compared: Potential lessons for the UK?

On 24 July 2023, the House of Lords is due to debate the following motion:

“Lord Ravensdale (Crossbench) to move that this House takes note of the ongoing
development of advanced artificial intelligence, associated risks and potential
approaches to regulation within the UK and internationally.”

1. What is artificial intelligence?


Artificial intelligence (AI) can take many forms. As such, there is no agreed single
definition of what it encompasses. In broad terms, it can be regarded as the theory and
development of computer systems able to perform tasks normally requiring human
intelligence, such as visual perception, speech recognition, decision-making, and
translation between languages. According to IBM, the current real-world applications of
AI include:

extracting information from pictures (computer vision)
transcribing or understanding spoken words (speech to text and natural language
processing)
pulling insights and patterns out of written text (natural language understanding)
speaking what has been written (text to speech, natural language processing)
autonomously moving through spaces based on its senses (robotics)
generally looking for patterns in large amounts of data (machine learning)

In banking, for example, AI is currently used to detect and flag suspicious activity, such as unusual debit card usage and large account deposits, to a bank’s fraud department. The NHS also reports that AI is being used to benefit people in health and care: by analysing X-ray images to support radiologists in making assessments and helping clinicians read brain scans more quickly; by supporting people in ‘virtual wards’ who would otherwise be in hospital to receive the care and treatment they need; and through remote monitoring technology, such as apps and medical devices, which can assess patients’ health while they are being cared for at home.

To achieve this, AI systems rely upon large datasets from which they can decipher patterns and correlations, thereby enabling the system to ‘learn’ how to anticipate future events. It does this by relying upon and/or creating algorithms based on the dataset, which it can then use to interpret new data. This data can be structured, such as bank transactions, or unstructured, such as the sensor data a driverless car uses to respond to the environment around it.
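As a concrete illustration of this kind of pattern-learning, the short Python sketch below uses scikit-learn’s IsolationForest, a standard anomaly-detection model, to learn what ‘normal’ card activity looks like from a small structured dataset and then flag an unusual transaction, in the spirit of the banking example above. This is a minimal sketch, not drawn from the briefing itself, and the data values are invented for the example.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Structured data: one row per card transaction (amount in pounds, hour of day)
    history = np.array([
        [12.50, 9], [30.00, 13], [8.99, 19], [45.00, 11], [22.40, 17],
        [15.75, 10], [60.00, 12], [9.99, 20], [27.30, 14], [33.10, 16],
    ])

    # Fit a model that learns what 'normal' activity looks like from the dataset
    model = IsolationForest(contamination=0.1, random_state=0)
    model.fit(history)

    # Score new, unseen transactions: -1 flags a likely outlier, 1 looks normal
    new_transactions = np.array([[25.00, 12], [4500.00, 3]])
    print(model.predict(new_transactions))  # expected: [ 1 -1], flagging the £4,500 payment at 3am

The model is never told which transactions are fraudulent; it infers what is unusual from the patterns in the data it was given, which is the essence of the ‘learning’ described above.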

The different forms that AI can take range from so-called ‘narrow’ AI designed to perform
specific tasks to what is known as ‘strong’ or ‘general’ AI with the capacity to learn and
reason. The House of Commons Library recently drew upon research from Stanford
University and other sources to offer the following definitions:

Narrow AI is designed to perform a specific task (such as speech recognition), using information from specific datasets, and cannot adapt to perform another task. These are often tools that aim to assist, rather than replace, the work of humans.

Artificial general intelligence (AGI, also referred to as ‘strong’ AI) is an AI system that can undertake any intellectual task or problem that a human can. AGI is a system that can reason, analyse and achieve a level of understanding that is on a par with humans; something that has yet to be achieved by AI. The US computer scientist Nils John Nilsson, for example, proposed that one way to test if a system had achieved AGI was if it could successfully learn the skills to perform the different jobs “ordinarily performed by humans”, from “knowledge work” (such as a library assistant) to “manual labour” (such as a roofer).

Machine learning is a method that can be used to achieve narrow AI; it allows a
system to learn and improve from examples, without all its instructions being
explicitly programmed. It does this by finding patterns in large amounts of data,
which it can then use to make predictions (for example what film or TV programme
you might like to watch next on a streaming platform). The AI can then
independently amend its algorithm based on the accuracy of its predictions.
Deep learning is a type of machine learning whose design has been informed by
the structure and function of the human brain and the way it transmits
information. The application of deep learning can be seen in ‘foundation models’,
of which ‘large language models’ (LLMs), such as ChatGPT, are one example. The
term refers to those models that are trained on very large, unlabelled datasets and
which can be adapted to do a wide range of tasks, despite not having been trained
explicitly to do those tasks. In other words, the model can take information it has
learnt about in one situation and apply it to another, different situation. Sometimes
LLMs are refined or ‘fine-tuned’ (trained using additional data) to achieve a
specific goal. ChatGPT, for example, has been fine-tuned to allow users to ask it a
question, or make a request, and for it to generate “human-like text” in response.
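To make this concrete, the sketch below shows how a fine-tuned model of this kind can be queried programmatically. It assumes the openai Python package in the form it took in mid-2023 (the 0.x series) and a valid API key, shown here only as a placeholder; the fine-tuning itself is carried out by the model’s provider, so the sketch covers only the final step of sending a prompt and reading the generated text.

    import openai  # the OpenAI Python package, as published in mid-2023 (0.x series)

    openai.api_key = "sk-..."  # placeholder; a real key would come from an account or environment variable

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # one of the models behind ChatGPT at the time of writing
        messages=[{"role": "user", "content": "Define the key forms of AI."}],
    )
    print(response["choices"][0]["message"]["content"])  # the model's "human-like text" reply

The response to exactly this kind of request is reproduced in the illustrative example below.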
Illustrative example: Text generated directly by AI
As a basis for comparison, when ChatGPT—known as a large language model or LLM,
which is designed to use a wide knowledge base to provide responses to user prompts
—is asked to define the key forms of AI it gives the following response:
1 Narrow AI: Narrow AI, also known as weak AI, refers to AI systems designed to
perform specific tasks or solve specific problems. These systems are limited in
their scope and cannot exhibit general intelligence. Examples of narrow AI
include voice assistants like Siri or Alexa, image recognition algorithms, and
recommendation systems.

2 General AI: General AI, also referred to as strong AI or human-level AI, is an AI system that possesses the ability to understand, learn, and perform any
intellectual task that a human being can do. General AI would possess a broad
range of cognitive abilities and be capable of reasoning, understanding natural
language, learning new tasks, and adapting to different situations. While
significant progress has been made in AI, true general AI remains a theoretical
concept and has not yet been achieved.

3 Machine learning: Machine learning is a subset of AI that focuses on the development of algorithms and models that enable computer systems to learn
and make predictions or decisions without being explicitly programmed.
Machine learning algorithms are trained on large datasets and can recognize
patterns, make predictions, and continuously improve their performance over
time. Popular machine learning techniques include supervised learning,
unsupervised learning, and reinforcement learning.

4 Deep learning: Deep learning is a subfield of machine learning that employs artificial neural networks, inspired by the structure and function of the human
brain, known as deep neural networks. Deep learning models are capable of
learning hierarchical representations of data and can process large amounts of
complex information, such as images, audio, and text. Deep learning has been
highly successful in tasks like image and speech recognition, natural language
processing, and autonomous driving.

5 Natural language processing (NLP): Natural language processing is a branch of AI that focuses on enabling computers to understand, interpret, and generate
human language. NLP algorithms analyse and process text and speech data,
enabling tasks such as language translation, sentiment analysis, chatbots, and
voice assistants.

6 Robotics and automation: Robotics combines AI and physical systems to create machines or robots capable of performing physical tasks and interacting
with the physical world. AI-powered robots can be designed for various
applications, such as industrial automation, healthcare assistance, exploration,
and even domestic tasks.
As this example demonstrates, these systems are capable of generating sophisticated responses. However, LLMs such as ChatGPT and Google Bard have sometimes been found to “hallucinate” inaccurate information and provide it in response to a user prompt.

As noted by the UK government in its ‘National AI strategy’, AI is the fastest growing deep
technology in the world, with “huge potential to rewrite the rules of entire industries,
drive substantial economic growth and transform all areas of life”. Such transformative
technology brings both risks and benefits, as explored below.

2. Ongoing development of AI: Potential benefits and risks
2.1 Current contribution of AI to the UK economy
The AI sector already makes a significant contribution to the UK economy. The ‘Artificial intelligence sector study 2022’, published by the Office for Artificial Intelligence and the Department for Science, Innovation and Technology, found that there were 3,170 AI companies operating in the UK, which generated £10.6bn in AI-related revenue. The sector also employed more than 50,000 people in AI-related roles, generating £3.7bn in gross value added (GVA). Further, it had secured £18.8bn in private investment since 2016.

Taken from the same publication, figure 1 illustrates the size profile of AI companies in the
UK:

Figure 1. Size profile of UK AI companies, 2022

(Source: Department for Science, Innovation and Technology and Office for Artificial Intelligence, ‘Artificial intelligence sector study 2022’, March 2023)

The categories in the graph are defined as follows: large businesses have more than 250 employees; medium, 50–249; small, 10–49; and micro, fewer than 10.
Other key findings from the report included (emphasis in original):

Of the 3,170 active companies identified through the study, 60% are dedicated AI businesses and 40% are diversified (ie they have AI activity as part of a broader product or service offer).

Compared to similar studies into other emerging technology sectors, a greater proportion of diversified AI companies have been identified, highlighting the broad scope for development of AI technology applications by established technology companies across sectors.

On average, 269 new AI companies have been registered each year since 2011, with a peak of 429 new registrations in 2018, the year of the AI sector deal.

Together, the data on company size and business model suggest that dedicated AI
companies are both smaller and more dependent on AI products for revenue.
Diversified AI companies are typically larger and likely to generate a greater
proportion of revenues from less capital-intensive provision of AI-related services.

London, the South East and the East of England account for 75% of registered
AI office addresses, and also for 74% of trading addresses. Just under one-third
of AI companies with a registered address outside of London, the South East and
the East of England still have a trading presence in those regions, highlighting the
apparent significance of those regions to development of the UK AI sector to date.
[These findings are illustrated in figure 4 below.]

While absolute numbers are smaller, the study has identified more notable
proportions of wider regional AI activity in automotive, industrial automation
and machinery; energy, utilities and renewables; health, wellbeing and
medical practice; and agricultural technology.

In the most recent financial year (2021/22), annual revenues generated


specifically from AI-related activity by UK AI companies totalled an estimated
£10.6bn, split approximately 50/50 between dedicated and diversified companies.

Across both dedicated and diversified AI companies, study estimates suggest that
there are 50,040 full time equivalents (FTEs) employed in AI-related roles, 53%
of which are within dedicated AI companies.
Based on a combination of official company data, survey responses and associated modelling, AI companies are estimated to contribute £3.7bn in GVA to the UK economy. For large companies the GVA-to-turnover ratio is 0.6:1 (ie, for every £1 of revenue, large AI companies generate 60p in direct GVA; a short worked example follows this list). GVA-to-turnover ratios among small and medium sized enterprises (SMEs) are much lower (0.2:1 for medium-sized companies and negative for small and micro businesses), which reflects the capital-intensive, high R&D nature of deep technology development.
Since 2016, AI companies have secured a total of £18.8bn in private investment.
2021 was a record year for AI investment, with over £5bn raised across 768 deals,
representing an average deal size of £6.7mn. Further, AI investment increased
almost five-fold between 2019 and 2021.
In 2022 dedicated AI companies secured a higher average deal value than
diversified companies for the first time. However, data on AI investment by stage
of evolution may also be signalling some tightening of investment available to
seed and venture stage companies and, given the significance of private
investment for AI technology development evidenced by data on revenues and
GVA, this could pose a risk to realising the potential within early-stage AI
companies.

The study highlighted a notable opportunity for companies operating in the AI implementation space to build teams of AI implementation experts that can support AI adoption opportunities across sectors. This adoption opportunity is supported by investment data, which highlights that in 2022 investments were made in 52 unique industry sectors, compared to investments across just 35 different sectors in 2016.
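As flagged above, the GVA-to-turnover ratios reported by the study are straightforward multipliers. The tiny Python sketch below is this briefing’s own illustration (the function name and the £1mn turnover figure are arbitrary); it simply shows how the 0.6:1 and 0.2:1 ratios translate revenue into direct GVA.

    def direct_gva(turnover_pounds, gva_to_turnover_ratio):
        """Direct gross value added implied by a GVA-to-turnover ratio."""
        return turnover_pounds * gva_to_turnover_ratio

    print(direct_gva(1_000_000, 0.6))  # large company: £1mn of revenue -> 600000.0, ie £600,000 direct GVA
    print(direct_gva(1_000_000, 0.2))  # medium-sized company: the same revenue -> 200000.0, ie £200,000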

A selection of this data is illustrated in the charts below, again taken from the ‘Artificial intelligence sector study 2022’:
Figure 2. Breakdown of machine learning companies by industry
and sub-industry in the UK, 2022

Figure 3. UK AI revenue by firm size, 2022

Figure 4. Regional AI activity in the UK (by registered addresses), 2022
2.2 Potential benefits and risks of AI
The variety of potential applications of AI means that there is a similarly wide range of potential benefits and risks to using such technology. The European Parliament summarises the potential benefits of AI at a societal level as follows:

AI could help people with improved health care, safer cars and other transport
systems, tailored, cheaper and longer-lasting products and services. It can also
facilitate access to information, education and training. […] AI can also make
workplace safer as robots can be used for dangerous parts of jobs, and open new
job positions as AI-driven industries grow and change.

For businesses, AI can enable the development of a new generation of products and services, and it can boost sales, improve machine maintenance, increase production output and quality, improve customer service, as well as save energy.

AI used in public services can reduce costs and offer new possibilities in public
transport, education, energy and waste management and could also improve the
sustainability of products.

Democracy could be made stronger by using data-based scrutiny, preventing disinformation and cyber attacks and ensuring access to quality information […].

AI is predicted to be used more in crime prevention and the criminal justice system, as massive datasets could be processed faster, prisoner flight risks assessed more accurately, crime or even terrorist attacks predicted and prevented. In military matters, AI could be used for defence and attack strategies in hacking and phishing or to target key systems in cyberwarfare.
However, the article also noted some of the risks of AI. These included liability: who is responsible for any harms or damage caused by the use of AI? Similarly, in an article on Forbes’ website, the futurist Bernard Marr suggested that the biggest risks of AI at a broad level are:

A lack of transparency, particularly regarding the development of deep learning models (including the so-called ‘black box’ issue, where AI generates unexpected outputs and human scientists and developers are not clear why it has done so).

Bias and discrimination, particularly where AI systems inadvertently perpetuate or amplify societal bias.

Privacy concerns, particularly given the capacity of AI to analyse large amounts of personal data.

Ethical concerns, especially regarding the challenges inherent in instilling moral and ethical values in AI systems.

Security risks, including the development of AI-driven autonomous weaponry.

Concentration of power, given the risk of AI development being dominated by a small number of corporations.

Dependence on AI, including the risk that an overreliance on AI leads to a loss of creativity, critical thinking skills and human intuition.

Job displacement, the potential for AI to abolish the need for some jobs (whilst
potentially generating the need for others).

Economic inequality, and the risk that AI will disproportionally benefit wealthy
individuals and corporations.

Legal and regulatory challenges, and the need for regulation to keep pace with
the rapid development of innovation.

An AI arms race, where companies and countries compete to be first to generate new capabilities at the expense of ethical and regulatory concerns.

Loss of human connection, and the fear that a reliance on AI-driven communication and interactions could lead to diminished empathy, social skills and human connections.

Misinformation and manipulation, and the risk that AI-generated content drives
the spread of false information and the manipulation of public opinion.

Unintended consequences, particularly around the complexity of AI systems and a lack of human oversight leading to undesirable outcomes.

Existential risks, including the rise of artificial general intelligence (AGI) that
surpasses human intelligence and raises long term risks for the future of humanity.
On the risk of misinformation and manipulation, several commentators have already suggested that elections in 2024, particularly the US presidential election, may be the first elections where the campaigning process is significantly affected by AI.

2.3 Potential impact on the UK employment market

A report commissioned by the government from consultancy firm PwC in 2021 found that 7 percent of jobs in the UK labour market were at high risk of being automated in the next five years. This rose to 30 percent after 20 years:

“Our base case estimate is that around 7 percent of existing UK jobs could face a high
(over 70 percent) probability of automation over the next five years, rising to around
18 percent after 10 years and just under 30 percent after 20 years. This is within the
range of estimates from previous studies and draws on views from an expert
workshop on the automatability of occupations and detailed analysis of OECD
[Organisation for Economic Co-operation and Development] and ONS [Office for
National Statistics] data on how this is related to the task composition and skills
required for different occupations.”

The manufacturing sector was highlighted in the report as being at risk of losing the most
jobs over the next 20 years, with job losses also expected in transport and logistics, public
administration and defence, and the wholesale and retail sectors. In contrast, the health
and social work sector was highlighted as most likely to see the largest job gains, while
gains were also expected in the professional and scientific, education, and information
and communications sectors. Jobs in lower paid clerical and process-orientated roles
were most likely to be at risk of being lost. In contrast, the report suggested there would
be job gains in managerial and professional occupations.

The report concluded that the most plausible assumption is that the long-term impact of AI on employment levels in the UK would be broadly neutral, but that the distribution of gains and losses within that overall picture was unclear.

More recent analyses of AI, particularly since the release of LLMs such as ChatGPT and Google Bard, have questioned whether the impact of AI will indeed be felt most in lower-paid/manual occupations. Analysis published in March 2023 by OpenAI itself, the creator of ChatGPT, suggested that higher-wage occupations tend to be more exposed to LLMs.
Its analysis also found that within this there will be differentiation depending on the
nature of the tasks involved:

“[T]he importance of science and critical thinking skills are strongly negatively
associated with exposure, suggesting that occupations requiring these skills are less
likely to be impacted by current LLMs. Conversely, programming and writing skills
show a strong positive association with exposure, implying that occupations involving
these skills are more susceptible to being influenced by LLMs.”

On 21 April 2023, the House of Commons Business, Energy and Industrial Strategy Committee published a report on post-pandemic economic growth and the UK labour markets. This report highlighted the impact that AI could have on productivity within the
UK. It refers to research from Deloitte that found that “by 2035 AI could boost UK labour
market productivity by 25%”, and that “four out of five UK organisations said that use of AI
tools had made their employees more productive, improved their decision-making, and
made their process more efficient”.

It also argued that AI and related technologies may have a positive impact by helping people who have otherwise found it difficult to find and stay in employment, such as disabled people, to access the labour market.

Estimates of the impact of AI on the UK and world economies continue to be published regularly as these products develop. Recent examples include research from McKinsey which suggested that generative AI could add value equivalent to the UK’s entire GDP to the world economy over the coming years:

“Generative AI’s impact on productivity could add trillions of dollars in value to the
global economy. Our latest research estimates that generative AI could add the
equivalent of $2.6tn to $4.4tn annually across the 63 use cases we analysed—by
comparison, the United Kingdom’s entire GDP in 2021 was $3.1tn. This would increase
the impact of all artificial intelligence by 15 to 40 percent. This estimate would
roughly double if we include the impact of embedding generative AI into software
that is currently used for other tasks beyond those use cases.”

2.4 Case study: Potential impact on the knowledge and creative industries (House of Lords Communications and Digital Committee report, January 2023)
There are potential applications of AI across almost every sphere of human life and as a
result it would be impossible to examine them all here. However, in January 2023, the
House of Lords Communications and Digital Committee examined the potential impact of
AI on the creative industries in the UK as part of a wider examination of the sector, which
provides an illustrative example.

The committee heard evidence that new technologies and the rise of digitised culture will
change the way creative content is developed, distributed and monetised over the next
five to 10 years. The committee drew particular attention to intellectual property (IP), the
protection of which it said was vital to much of the creative industries, and the impact
upon it of the rise of AI technologies, particularly text and data mining of existing
materials used by generative AI models to learn and develop content.

The committee also drew attention to recently proposed reforms to IP law:

“The government’s proposed changes to IP law provided an illustrative example of
the tension between developing new technologies and supporting rights holders in
the creative industries. In 2021 the Intellectual Property Office (IPO) consulted on the
relationship between IP and AI. In 2022 the IPO set out its conclusions, which included
“a new copyright and database right exception which allows text and data mining for
any purpose”.”

The committee suggested that such proposals were “misguided” and took “insufficient
account of the potential harm to the creative industries”. Arguing that the development of
AI was important, but “should not be pursued at all costs”, the committee argued that the
IPO should pause its proposed changes to the text and data mining regime “immediately”.
The committee added that the IPO should conduct and publish an impact assessment on
the implications for the creative industries, and if this assessment found negative effects
on businesses in the creative industries, it should then pursue alternative approaches,
such as those employed by the European Union. (The European Union’s approach is
examined in section 5.1 of this briefing.)

The committee also warned against the use of AI to generate, reproduce and distribute
creative works and image likenesses which went against the rights of performers and the
original creators of the work.

In its response to the committee, the government said that, “in light of additional
evidence” of the impact on the creative sector, the government “will not be proceeding”
with the proposals for an exception for text and data mining of copyrighted works.
Instead, the government said it would work with users and rights holders to produce a
“code of practice by the summer [2023]” on text and data mining by AI.

There are several court challenges underway on the use of existing written content and images to train generative AI. For example, the authors Paul Tremblay and Mona Awad have launched legal action in the United States against OpenAI, alleging that copies of their work were used without authorisation to develop its ChatGPT LLM. The debate on how best to protect copyright and creative careers such as writing and illustrating continues. The Creators’ Rights Alliance (CRA), which is formed of bodies from across the UK cultural sector, contends that current AI technology is accelerating and being implemented without enough consideration of issues around ethics, accountability, and economics for creative human endeavour.

The CRA argues that there should be clear definition and labelling of what constitutes solely AI-generated work and work made with the intervention of creators, and that the distinct characteristics of individual performers and artists should be protected. At the same time, it said that copyright should be protected, including no data mining of existing work without consent, and that there should be transparency over the data used to create generative AI. The CRA also called for increased protection for creative roles such as visual artists, translators and journalists, warning that these roles could otherwise be lost to AI systems.
3. Calls for rapid regulatory adaptation
The potential benefits and harms of AI have led to calls for governments to adapt quickly to the changes AI is already delivering and the potentially transformative changes to come. These include calls to pause AI development and for countries including the UK to deliver a step-change in regulation before the technology passes the point at which such regulation can still be effective. The chief executive of Google, Sundar Pichai, is one example of a leading technology figure who has warned about the potential harms of AI and called for a suitable regulatory framework.

This is a fast-moving area and this briefing concentrates on reports published since the
beginning of 2023. For an exploration of publications and milestones before this time,
including the work of the House of Lords Committee on Artificial Intelligence, see the
earlier House of Lords Library briefing ‘Artificial intelligence policy in the UK: Liaison Committee report’, published in May 2022.

3.1 Letter from key figures in AI, science and technology calling for a pause in AI development (March 2023)
In March 2023, more than 1,000 artificial intelligence experts, researchers and backers signed an open letter calling for an immediate pause on the creation of “giant” AIs for at least six months, so that the capabilities and dangers of such systems can be properly studied and mitigated.

The signatories included engineers from Amazon, DeepMind, Google, Meta and Microsoft,
as well as academics and prominent industry figures such as Elon Musk, who co-founded
OpenAI (the research lab responsible for ChatGPT and GPT-4), Emad Mostaque, who
founded London-based Stability AI, and Steve Wozniak, the co-founder of Apple.

The letter noted that, given the potential of advanced AI, it “should be planned for and
managed with commensurate care and resources”. It said that “unfortunately, this level of
planning and management is not happening”, even though recent months have seen “AI
labs locked in an out-of-control race to develop and deploy ever more powerful digital
minds that no one—not even their creators—can understand, predict, or reliably control”.

The letter added (emphasis in the original):


“Contemporary AI systems are now becoming human-competitive at general tasks,
and we must ask ourselves: Should we let machines flood our information channels
with propaganda and untruth? Should we automate away all the jobs, including the
fulfilling ones? Should we develop nonhuman minds that might eventually outnumber,
outsmart, obsolete and replace us? Should we risk loss of control of our civilization?
Such decisions must not be delegated to unelected tech leaders. Powerful AI
systems should be developed only once we are confident that their effects will be
positive and their risks will be manageable. This confidence must be well justified
and increase with the magnitude of a system’s potential effects. OpenAI’s recent
statement regarding artificial general intelligence, states that “at some point, it may
be important to get independent review before starting to train future systems, and
for the most advanced efforts to agree to limit the rate of growth of compute used for
creating new models”. We agree. That point is now.”

The Future of Life Institute, which coordinated the production of the letter, also published a policy paper entitled ‘Policymaking in the pause’, which offered the following recommendations to govern the future of AI development:

1 mandate robust third-party auditing and certification for specific AI systems

2 regulate access to computational power

3 establish capable AI agencies at the national level

4 establish liability for AI-caused harms

5 introduce measures to prevent and track AI model leaks

6 expand technical AI safety research funding

7 develop standards for identifying and managing AI-generated content and recommendations

However, not everyone shares the perspective that a pause is either necessary or practical. For example, BCS, the chartered institute for IT, has said this would only result in an “asymmetrical pause”, as bad actors would ignore it and seize the advantage. Instead, the institute said that humanity would benefit from ethical guardrails around AI rather than a halt to development. The chief executive of BCS, Rashik Parmar, said: “we can’t be certain every country and company with the power to develop AI would obey a pause, when the rewards for breaking an embargo are so rich”.

Instead, BCS has issued a report outlining how AI can be helped to “grow up responsibly”, such as by making it part of public education campaigns and ensuring it is clearly labelled whenever it is used. Among the paper’s key recommendations are:

Organisations should be transparent about their development and deployment of AI, comply fully with applicable laws (eg in relation to data protection, privacy and intellectual property) and allow independent third parties to audit their processes and systems.
There should be clear and unambiguous health warnings, labelling and
opportunities for individuals to give informed consent prior to being subject to AI
products and services.
AI should be developed by communities of competent, ethical, and inclusive
information technology professionals, supported by professional registration.
AI should be supported by a programme of increased emphasis on computing
education and adult digital skills and awareness programmes to help the general
public understand and develop trust in the responsible use of AI, driven by
government and industry.
AI should be tested robustly within established regulatory ‘sandboxes’ (as
proposed in the government white paper examined in section 4 below).
The use of sandboxes should be encouraged beyond a purely regulatory need: for example, to test the correct skills and registration requirements for AI assurance professionals, and how best to engage with civic societies and other stakeholders on the challenges and opportunities presented by AI.

3.2 Joint report by Sir Tony Blair and Lord Hague of Richmond (June 2023)
On 13 June 2023, Sir Tony Blair, the former Labour Prime Minister, and William Hague (Lord Hague of Richmond), the former leader of the Conservative Party, released a joint report, ‘A new national purpose: AI promises a world-leading future of Britain’, which described AI as “the most important technology of our generation”.

The authors said that getting policy right on this issue was therefore “fundamental” and
contended that it could “define Britain’s future”. The report noted that the potential
opportunities were “vast”, including the potential to “change the shape of the state, the
nature of science and augment the abilities of citizens”. However, like others, the two
former party leaders also noted that the risks were “profound”.

As a result, the report called for urgent action, including a “radical new policy agenda
and a reshaping of the state, with science and technology at its core”. Noting that AI is
already having an impact and that the pace of change is only likely to accelerate in the
coming years, the authors contended that “our institutions are not configured to deal with
science and technology, particularly their exponential growth”. They said that it was
“absolutely vital that this changes”. This included a reorientation in the way government is
organised, works with the private sector, promotes research, draws on expertise and
receives advice.

To achieve this, the report offered specific recommendations including:

Securing multi-decade investment in science-and-technology infrastructure as well as talent and research programmes by reprioritising large amounts of capital expenditure to this task.
Boosting how Number 10 operates, dissolving the AI Council and empowering the
Foundation Model Taskforce by having it report directly to the prime minister.
Sharpening the Office for Artificial Intelligence so that it provides a better foresight function and greater agility for government to deal with technological change.

The report also contended that the UK could become a leader in the development of safe,
reliable and cutting-edge AI, in collaboration with its allies. The authors contended that
the UK has an “opportunity to construct effective regulation that goes well beyond existing
proposals yet is also more attractive to talent and firms than the approach being adopted
by the European Union”.

Again, the report offered recommendations on how this could be achieved, including:

“Creating Sentinel, a national laboratory effort focused on researching and testing safe AI, with the aim of becoming the “brain” for both a UK and an international AI
regulator. Sentinel would recognise that effective regulation and control is and will
likely remain an ongoing research problem, requiring an unusually close combination
of research and regulation.”

Finally, the report contended that the UK could pioneer the deployment and use of AI
technology in the real world, “building next-generation companies and creating a 21st
century strategic state”. To achieve this, the report recommended:

Launching major AI talent programmes, including international recruitment and the creation of polymath fellowships to allow top non-AI researchers to learn AI as well as leading AI researchers to learn non-AI fields and cross-fertilise ideas.
Requiring a tiered-access approach to compute provision under which access to
larger amounts of compute comes with additional requirements to demonstrate
responsible use.
Requiring generative-AI companies to label the synthetic media they produce as
deepfakes and social-media platforms to remove unlabelled deepfakes.
Building AI-era infrastructure, including compute capacity and remodelling data,
as a public asset with the creation of highly valuable, public-good datasets.

The report added that it was “critical to engage the public throughout all of these
developments” to ensure AI development is accountable and give people the skills and
chance to adapt.
4. Proposed regulatory approaches: UK
4.1 UK government approach to artificial intelligence
On 22 September 2021, the government published its ‘National AI strategy’, setting out its
ten-year plan on AI. The strategy set out three high-level aims:

invest and plan for the long-term needs of the AI ecosystem to continue our
leadership as a science and AI superpower

support the transition to an AI-enabled economy, capturing the benefits of innovation in the UK, and ensuring AI benefits all sectors and regions

ensure the UK gets the national and international governance of AI technologies right to encourage innovation, investment, and protect the public and our fundamental values

The Office for Artificial Intelligence, a unit within the Department for Science, Innovation
and Technology (DSIT), is responsible for overseeing the implementation of the national AI
strategy. There is also an AI Council, a non-statutory expert committee of independent
members set up to provide advice to the government.

In July 2022, the government published a consultation paper on establishing a “pro-innovation” approach to AI. This was followed in March 2023 by a white paper and further consultation exercise, which contain several principles and proposals for regulatory reform which are discussed in detail in section 4.3 of this briefing.

4.2 Current regulatory environment for AI in the UK


The government argues that the UK is in a strong position to benefit from the development
of AI “due to our reputation for high-quality regulators and our strong approach to the
rule of law, supported by our technology-neutral legislation and regulations”.

Ministers contend that UK laws, regulators and courts already address some of the
emerging risks posed by AI technologies. However, they also concede that, while AI is
currently regulated through existing legal frameworks like financial services regulation,
some AI risks have arisen and will arise across, or in the gaps between, existing regulatory
remits.

The government provides the following evaluation of where such risks might exist and how
they could potentially be mitigated:
“Example of legal coverage of AI in the UK and potential gaps

Discriminatory outcomes that result from the use of AI may contravene the
protections set out in the Equality Act 2010. AI systems are also required by data
protection law to process personal data fairly. However, AI can increase the risk of
unfair bias or discrimination across a range of indicators or characteristics. This
could undermine public trust in AI.

Product safety laws ensure that goods manufactured and placed on the market in the
UK are safe. Product-specific legislation (such as for electrical and electronic
equipment, medical devices, and toys) may apply to some products that include
integrated AI. However, safety risks specific to AI technologies should be monitored
closely. As the capability and adoption of AI increases, it may pose new and
substantial risks that are unaddressed by existing rules.

Consumer rights law may protect consumers where they have entered into a sales
contract for AI-based products and services. Certain contract terms (for example,
that goods are of satisfactory quality, fit for a particular purpose, and as described)
are relevant to consumer contracts. Similarly, businesses are prohibited from
including certain terms in consumer contracts. Tort law provides a complementary
regime that may provide redress where a civil wrong has caused harm. It is not yet
clear whether consumer rights law will provide the right level of protection in the
context of products that include integrated AI or services based on AI, or how tort law
may apply to fill any gap in consumer rights law protection.”

In response to the 2022 consultation exercise cited above, the government reported that
those working in the AI sector said that “conflicting or uncoordinated requirements from
regulators create unnecessary burdens and that regulatory gaps may leave risks
unmitigated, harming public trust and slowing AI adoption”.

Further, the government said that respondents to the consultation had highlighted that, if
regulators were not proportionate and aligned in their regulation of AI, “businesses may
have to spend excessive time and money complying with complex rules instead of creating
new technologies”. Noting both that small businesses and start-ups often do not have the resources to do both and that such firms are prevalent in the sector, the government argued that it was “important to ensure that regulatory burdens do not fall disproportionately on smaller companies, which play an essential role in the AI innovation ecosystem and act as engines for economic growth and job creation”.

4.3 Proposals for future regulatory reform: Government white paper

The government’s proposals for future regulatory reform were set out in the March 2023 white paper, ‘A pro-innovation approach to AI regulation’.
Noting that across the world countries and regions were beginning to draft the rules for
AI, the white paper said that the UK “needs to act quickly to continue to lead the
international conversation on AI governance and demonstrate the value of our pragmatic,
proportionate regulatory approach”.

The white paper said that the government recognised both the rewards and risks of AI:

“While we should capitalise on the benefits of these technologies, we should also not
overlook the new risks that may arise from their use, nor the unease that the
complexity of AI technologies can produce in the wider public. We already know that
some uses of AI could damage our physical and mental health, infringe on the
privacy of individuals and undermine human rights.

Public trust in AI will be undermined unless these risks, and wider concerns about the
potential for bias and discrimination, are addressed. By building trust, we can
accelerate the adoption of AI across the UK to maximise the economic and social
benefits that the technology can deliver, while attracting investment and stimulating
the creation of high-skilled AI jobs. In order to maintain the UK’s position as a global
AI leader, we need to ensure that the public continues to see how the benefits of AI
can outweigh the risks.”

The white paper said that responding to risk and building public trust were important
drivers for regulation, but that clear and consistent regulation could also support
business investment and build confidence in innovation.

Consequently, it said that the government would put in place a new framework to bring
“clarity and coherence” to the AI regulatory landscape, which will harness AI’s ability to
drive growth and prosperity and increase public trust in its use and application. In taking
a “deliberately agile and iterative approach”, the government said that its framework was
“designed to build the evidence base so that we can learn from experience and
continuously adapt to develop the best possible regulatory regime”.

That framework is underpinned by five principles to “guide and inform the responsible
development and use of AI in all sectors of the economy”. These are:

safety, security and robustness

appropriate transparency and explainability

fairness

accountability and governance

contestability and redress

On whether new legislation would be introduced to support these aims, the white paper
said:
“We will not put these principles on a statutory footing initially. New rigid and
onerous legislative requirements on businesses could hold back AI innovation and
reduce our ability to respond quickly and in a proportionate way to future
technological advances. Instead, the principles will be issued on a non-statutory
basis and implemented by existing regulators. This approach makes use of
regulators’ domain-specific expertise to tailor the implementation of the principles to
the specific context in which AI is used. During the initial period of implementation,
we will continue to collaborate with regulators to identify any barriers to the
proportionate application of the principles, and evaluate whether the non-statutory
framework is having the desired effect.”

However, the paper also added that “following this initial period of implementation”, the
government anticipated introducing a statutory duty on regulators requiring them to have
due regard to the principles. The paper added:

“Some feedback from regulators, industry and academia suggested we should implement further measures to support the enforcement of the framework. A duty
requiring regulators to have regard to the principles should allow regulators the
flexibility to exercise judgement when applying the principles in particular contexts,
while also strengthening their mandate to implement them. In line with our proposal
to work collaboratively with regulators and take an adaptable approach, we will not
move to introduce such a statutory duty if our monitoring of the framework shows
that implementation is effective without the need to legislate.”

Regarding the potential gaps between the remits of various regulators identified above,
the white paper noted that the 2022 AI consultation paper proposed a small coordination
layer within the regulatory architecture. However, the white paper noted that, while
industry and civil society were reportedly supportive of the intention to ensure coherence
across the AI regulatory framework, “feedback often argued strongly for greater central
coordination to support regulators on issues requiring cross-cutting collaboration and
ensure that the overall regulatory framework functions as intended”.

Consequently, the white paper said that the government had identified several central
support functions required to make sure that the overall framework offers a
“proportionate but effective” response to risk while promoting innovation across the
regulatory landscape. These were:

Monitoring and evaluation of the overall regulatory framework’s effectiveness and the implementation of the principles, including the extent to which implementation supports innovation. This will allow us to remain responsive and adapt the framework if necessary, including where it needs to be adapted to remain effective in the context of developments in AI’s capabilities and the state of the art.
Assessing and monitoring risks across the economy arising from AI.

Conducting horizon-scanning and gap analysis, including by convening industry, to inform a coherent response to emerging AI technology trends.
Supporting testbeds and sandbox initiatives to help AI innovators get new
technologies to market.
Providing education and awareness to give clarity to businesses and empower
citizens to make their voices heard as part of the ongoing iteration of the
framework.
Promoting interoperability with international regulatory frameworks.

The white paper said that these functions would not entail the creation of a new AI
regulator:

“The central support functions will initially be provided from within government but
will leverage existing activities and expertise from across the broader economy. The
activities described above will neither replace nor duplicate the work undertaken by
regulators and will not involve the creation of a new AI regulator.”

The white paper included a consultation exercise on the proposals, which ran for 12 weeks
until 21 June 2023. The government is yet to publish an analysis of the responses received.

The government has already moved to dissolve the AI Council, as reported in the Times on 19 June 2023. It will be replaced by a new foundation model taskforce led by technology entrepreneur Ian Hogarth, which will spearhead the adoption and regulation of the technology in the UK. A statement released on 7 July 2023 by the Department for Science, Innovation and Technology said that, with the terms of the current council members coming to an end, it would establish a wider group of expert advisers:

“Since it was established in 2019, the AI Council has advised government on AI policy
with regards to national security, defence, data ethics, skills, and regulation, which
has played a key role in developing landmark policies including the National AI
Strategy, and the recent AI regulation white paper. The council also supported the
government’s early Covid-19 efforts, highlighting the immediate needs of the AI
startup ecosystem and facilitating rapid intelligence-gathering that shaped
government support for the tech sector in its pandemic response.

With the terms of the AI Council members coming to an end, the Department for
Science, Innovation and Technology is establishing a wider group of expert advisers
to input on a range of priority issues across the department, including artificial
intelligence. This will complement the recently established foundation model
taskforce, which will drive forward critical work on AI safety and research.”

The taskforce has been given £100mn to develop a British foundational generative AI model, akin to ChatGPT, to be used in the health service and elsewhere.

Prime Minister Rishi Sunak has also announced that the UK will host a global summit on safety in artificial intelligence in the autumn. Writing in the Guardian in June 2023, Dan Milmo and Kiran Stacey argued that this marked a distinct “change of tone” from the government, with ministers going from talking predominantly about the benefits of AI to the risks of such innovation. In addition to the different regulatory regimes discussed in section 5 of this briefing, the Guardian article also reports that the G7 have agreed to create an intergovernmental forum called the ‘Hiroshima AI process’ to debate issues around these fast-growing tools.

4.4 Individual sectoral guidance on the use of artificial intelligence
Organisations within the public and private sectors are evaluating how to respond to AI
and some have produced guidance on that approach. For example, the Cabinet Office
published guidance on the use of generative AI by civil servants, particularly LLMs, on 29
June 2023.

Similarly, in March 2023, the Department for Education issued guidance on the use of generative AI in pre-university education. Noting that the technology provided both risks and opportunities for the sector, the key principles outlined in that document stated that educational institutions must continue to guard against misuse whilst seeking to take advantage of the benefits.

This was followed in June 2023 by the Russell Group of universities publishing a guidance note on the use of generative AI in higher education. Again, the perspective of the Russell Group was not that generative AI tools should be banned, but that universities would support staff and students to become AI literate whilst using these technologies ethically:

“Our universities wish to ensure that generative AI tools can be used for the benefit of
students and staff—enhancing teaching practices and student learning experiences,
ensuring students develop skills for the future within an ethical framework, and
enabling educators to benefit from efficiencies to develop innovative methods of
teaching.”

5. Other regulatory approaches: Examples from around the world
5.1 European Union
In contrast to the UK, the European Commission is proposing a ‘horizontal’ and ‘risk-based’ means of regulating, meaning that it plans to provide rules for AI across all sectors and applications, focused on the anticipated risk of such innovations.

In April 2021, the European Commission proposed the AI Act, draft legislation setting out rules for governing AI within the EU. The AI Act would establish four levels of risk for AI: unacceptable risk, high risk, limited risk, and minimal risk. Different rules would apply depending on the level of risk a system poses to fundamental rights.
The European Commission suggests that those risk categories would work in the following ways:
“Unacceptable risk

All AI systems considered a clear threat to the safety, livelihoods and rights of people
will be banned, from social scoring by governments to toys using voice assistance
that encourages dangerous behaviour.

High risk

AI systems identified as high-risk include AI technology used in:

critical infrastructures (eg transport), that could put the life and health of
citizens at risk
educational or vocational training, that may determine the access to
education and professional course of someone’s life (eg scoring of exams)
safety components of products (eg AI application in robot-assisted surgery)
employment, management of workers and access to self-employment (eg CV-
sorting software for recruitment procedures)
essential private and public services (eg credit scoring denying citizens
opportunity to obtain a loan)
law enforcement that may interfere with people’s fundamental rights (eg
evaluation of the reliability of evidence)
migration, asylum and border control management (eg verification of
authenticity of travel documents)

administration of justice and democratic processes (eg applying the law to a concrete set of facts)

High-risk AI systems will be subject to strict obligations before they can be put on the
market:

adequate risk assessment and mitigation systems
high quality of the datasets feeding the system to minimise risks and discriminatory outcomes
logging of activity to ensure traceability of results
detailed documentation providing all information necessary on the system
and its purpose for authorities to assess its compliance
clear and adequate information to the user
appropriate human oversight measures to minimise risk
high level of robustness, security and accuracy

All remote biometric identification systems are considered high risk and subject to
strict requirements. The use of remote biometric identification in publicly accessible
spaces for law enforcement purposes is, in principle, prohibited.

Narrow exceptions are strictly defined and regulated, such as when necessary to
search for a missing child, to prevent a specific and imminent terrorist threat or to
detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal
offence.

Such use is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the data bases searched.

Limited risk

Limited risk refers to AI systems with specific transparency obligations. When using AI
systems such as chatbots, users should be aware that they are interacting with a
machine so they can take an informed decision to continue or step back.

Minimal or no risk

The proposal allows the free use of minimal-risk AI. This includes applications such as
AI-enabled video games or spam filters. The vast majority of AI systems currently
used in the EU fall into this category.”

These proposals are intended to provide a “future proof” approach, allowing rules to
adapt to technological change. The European Commission also said that all “AI
applications should remain trustworthy even after they have been placed on the market”,
requiring ongoing quality and risk management by providers.

However, the draft legislation has been amended by the EU Council and European Parliament, reportedly after concerns that technology such as ChatGPT, which has a large number of potential uses, could have a correspondingly large variety of risk thresholds. Politico notes efforts by the European Parliament’s lead lawmakers on the AI Act, MEPs Brando Benifei and Dragoș Tudorache, to reform the draft legislation, and reports on resistance in some quarters to those changes:
“In February [2023] the lead lawmakers on the AI Act, Benifei and Tudorache,
proposed that AI systems generating complex texts without human oversight should
be part of the “high-risk” list—an effort to stop ChatGPT from churning out
disinformation at scale.

The idea was met with scepticism by right-leaning political groups in the European
Parliament, and even parts of Tudorache’s own Liberal group. Axel Voss, a prominent
centre-right lawmaker who has a formal say over Parliament’s position, said that the
amendment “would make numerous activities high-risk, that are not risky at all”.

In contrast, activists and observers feel that the proposal was just scratching the
surface of the general-purpose AI conundrum. “It’s not great to just put text-making
systems on the high-risk list: you have other general-purpose AI systems that present
risks and also ought to be regulated”, said Mark Brakel, a director of policy at the
Future of Life Institute, a non-profit focused on AI policy.”

In May 2023, the European Parliament reported that MEPs had amended the list of systems posing an unacceptable level of risk to people’s safety, adding bans on intrusive and discriminatory uses of AI such as “real-time” remote biometric identification systems in publicly accessible spaces. They also expanded the classification of high-risk areas to include harm to people’s health, safety, fundamental rights or the environment, and added AI systems used to influence voters in political campaigns, as well as the recommender systems used by social media platforms, to the high-risk list.

In addition, the changes included obligations for providers of foundation models, who would have to guarantee the protection of fundamental rights, health and safety, the environment, democracy and the rule of law. They would need to assess and mitigate risks, comply with design, information and environmental requirements, and register in the EU database. Generative foundation models, such as ChatGPT, would have to comply with additional transparency requirements, such as disclosing that content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of copyrighted data used for training.

The final text of the AI Act is expected to be agreed by late 2023 or early 2024. In addition to the AI Act, the EU has already passed several related pieces of legislation, such as the Digital Services Act (DSA) and the Digital Markets Act (DMA). Alongside the AI Act, it is also developing a civil liability framework adapting liability rules to the digital age and AI, and revising sectoral safety legislation (such as the regulations governing the use of machinery, artificial intelligence and autonomous robots).

In its own assessment of the differences between the UK and EU regimes, the UK government’s white paper said:
“The EU has grounded its approach in the product safety regulation of the single
market, and as such has set out a relatively fixed definition in its legislative
proposals. Whilst such an approach can support efforts to harmonise rules across
multiple countries, we do not believe this approach is right for the UK. We do not
think that it captures the full application of AI and its regulatory implications. Our
concern is that this lack of granularity could hinder innovation.”

5.2 United States of America


Writing in April 2023, Alex Engler at the Brookings Institution contends that the US federal government’s approach to AI risk management can broadly be characterised as risk-based, sectorally specific, and highly distributed across federal agencies. Mr Engler
suggests that, while there are advantages to this approach, it also contributes to the
uneven development of AI policies. He argues that, while there are several guiding federal
documents from the White House on AI harms, “they have not created an even or
consistent federal approach to AI risks”.

At the same time, Mr Engler notes that the US has invested in non-regulatory
infrastructure, such as a new AI risk management framework, evaluations of facial
recognition software, and extensive funding of AI research.

Comparing the approach taken by the US and the European Union, Mr Engler notes that
the EU approach to AI risk management, as outlined in section 5.1, is characterised by a
more comprehensive range of legislation tailored to specific digital environments. He
adds that this has led to more differences than similarities between the two approaches:

“The EU and US strategies share a conceptual alignment on a risk-based approach, agree on key principles of trustworthy AI, and endorse an important role for
international standards. However, the specifics of these AI risk management regimes
have more differences than similarities. Regarding many specific AI applications,
especially those related to socioeconomic processes and online platforms, the EU
and US are on a path to significant misalignment.

The EU-US Trade and Technology Council has demonstrated early success working on
AI, especially on a project to develop a common understanding of metrics and
methodologies for trustworthy AI. Through these negotiations, the EU and US have
also agreed to work collaboratively on international AI standards, while also jointly
studying emerging risks of AI and applications of new AI technologies.”

For Mr Engler, more collaboration between international partners will be crucial as governments implement the policies that will be foundational to the democratic governance of AI.
5.3 Regulatory approaches compared: Potential lessons for the UK?
The report authored by Sir Tony Blair and Lord Hague, cited in section 3.2 of this briefing, evaluated the differing regulatory approaches taken by the EU and US, and offered recommendations on how the UK’s own approach should proceed. It argued that both the EU and US approaches pose challenges, and that the UK should seek to diverge from them over time.

For example, the report noted that representatives of the Large-scale Artificial Intelligence Open Network have written to the European Parliament warning that the EU’s draft AI Act and its “one-size-fits-all” approach would entrench large firms to the detriment of open-source developers, limit academic freedom and reduce competition. If the EU over-regulates AI, the report argued, it risks repeating earlier failures with other technology families and becoming a less relevant global market due to declining growth rates.

Meanwhile, the report suggested that a “modern aversion” on the part of the US to
investing directly in state capabilities could hamper its ability to lead on setting
international standards and norms. It noted that, at the height of the space race, the US
spent $42bn in today’s money on NASA funding in one year alone. By comparison, in 2022
the US spent $1.73bn on non-defence AI research and development, much of which was
contracted out to industry and academic researchers. The report argued that, without
sovereign-state capabilities, the US federal government could become overly reliant on
private expertise and less able to set or enforce standards.

As a result, Sir Tony and Lord Hague contended that both the US and EU approaches risked locking in the current reality, in which the leading developers of AI are drawn from industry and lack clear incentives to align with democratic control and governance.

They argued that the UK should aim to fill the niche of a relatively lightly regulated AI ecosystem overseen by a highly agile, technologically literate regulator tied closely to Sentinel, their proposed national AI laboratory, and its research in this space.

However, they noted that this approach would take time. The authors suggested that, by combining flexible regulation with public investment in sovereign-state capabilities, the UK could attract private AI start-ups while building the sovereign-state technical expertise required to set and enforce standards.

As a result, the report made the following recommendations:

The UK should diverge from EU regulation on AI, but ensure its own regulatory
systems allow UK companies and AI models to be assessed voluntarily at EU
standards to enable exports.
In the near term, the UK should broadly align with US regulatory standards, while
building a coalition of countries through Sentinel. This position may then diverge
over time as UK regulatory expertise, the technology landscape and international
approaches mature.
In the medium term, the UK should establish an AI regulator in tandem with
Sentinel.

Cover image by Freepik.


Current page URL
https://lordslibrary.parliament.uk/artificial-intelligence-development-risks-and-regulation/

Links on this page

https://www.oxfordreference.com/display/10.1093/oi/authority.20110803095426960
https://www.ibm.com/design/ai/basics/ai/
https://www.gbgplc.com/en/blog/ai-a-key-player-in-financial-institutions-fight-against-fraud/
https://transform.england.nhs.uk/information-governance/guidance/artificial-intelligence
https://commonslibrary.parliament.uk/research-briefings/cdp-2023-0152/
https://commonslibrary.parliament.uk/research-briefings/CDP-2023-0152/
https://www.cnbc.com/2023/05/31/openai-is-pursuing-a-new-way-to-fight-ai-hallucinations.html
https://www.gov.uk/government/publications/national-ai-strategy/national-ai-strategy-html-version
https://dictionary.cambridge.org/dictionary/english/deep-tech
https://www.gov.uk/government/publications/artificial-intelligence-sector-study-2022
https://www.gov.uk/government/publications/artificial-intelligence-sector-study-2022
https://www.gov.uk/government/publications/artificial-intelligence-sector-study-2022
https://www.europarl.europa.eu/news/en/headlines/society/20200918STO87404/artificial-intelligence-threats-and-opportunities
https://www.forbes.com/sites/bernardmarr/2023/06/02/the-15-biggest-risks-of-artificial-intelligence/?sh=2b6e146b2706
https://umdearborn.edu/news/ais-mysterious-black-box-problem-explained
https://thehill.com/opinion/technology/3969647-the-first-ai-enhanced-presidential-election/
https://www.gov.uk/government/publications/the-potential-impact-of-ai-on-uk-employment-and-the-demand-for-skills
https://arxiv.org/pdf/2303.10130.pdf
https://committees.parliament.uk/work/6729/postpandemic-economic-growth-uk-labour-markets/publications/
https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier
https://committees.parliament.uk/publications/33536/documents/182541/default/
https://committees.parliament.uk/publications/39303/documents/192860/default/
https://www.theguardian.com/books/2023/jul/05/authors-file-a-lawsuit-against-openai-for-unlawfully-ingesting-their-books
https://www.creatorsrightsalliance.org/ai-and-creative-work
https://www.theguardian.com/technology/2023/apr/17/google-chief-ai-harmful-sundar-pichai
https://lordslibrary.parliament.uk/artificial-intelligence-policy-in-the-uk-liaison-committee-report/
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
https://openai.com/blog/planning-for-agi-and-beyond
https://futureoflife.org/wp-content/uploads/2023/04/FLI_Policymaking_In_The_Pause.pdf
https://news.sky.com/story/elon-musk-wrong-to-call-for-pause-in-development-of-ai-warns-new-report-12871977
https://www.bcs.org/articles-opinion-and-research/helping-ai-grow-up-without-pressing-pause/
https://www.institute.global/insights/politics-and-governance/new-national-purpose-ai-promises-world-leading-future-of-britain
https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1020402/National_AI_Strategy_-_PDF_version.pdf
https://www.gov.uk/government/groups/ai-council
https://www.gov.uk/government/publications/establishing-a-pro-innovation-approach-to-regulating-ai/establishing-a-pro-innovation-approach-to-regulating-ai-policy-statement
https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper
https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper#part-2-the-current-regulatory-environment
https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper#part-2-the-current-regulatory-environment
https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper
https://www.thetimes.co.uk/article/no-10-reboots-ai-council-blindsided-by-chatgpt-and-deepmind-mcjknmt32
https://www.gov.uk/government/news/ai-council
https://www.gov.uk/government/news/initial-100-million-for-expert-taskforce-to-help-uk-build-and-adopt-next-generation-of-safe-ai
https://www.gov.uk/government/news/uk-to-host-first-global-summit-on-artificial-intelligence
https://www.theguardian.com/technology/2023/jun/09/rishi-sunak-ai-summit-what-is-its-aim-and-is-it-really-necessary
https://www.gov.uk/government/publications/guidance-to-civil-servants-on-use-of-generative-ai/guidance-to-civil-servants-on-use-of-generative-ai
https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1146540/Generative_artificial_intelligence_in_education_.pdf
https://russellgroup.ac.uk/media/6137/rg_ai_principles-final.pdf
https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
https://www.politico.eu/article/eu-plan-regulate-chatgpt-openai-artificial-intelligence-act/
https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence
https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package
https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12979-Civil-liability-adapting-liability-rules-to-the-digital-age-and-artificial-intelligence_en
https://ec.europa.eu/docsroom/documents/45508
https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper
https://www.brookings.edu/articles/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/
https://www.institute.global/insights/politics-and-governance/new-national-purpose-ai-promises-world-leading-future-of-britain
https://www.freepik.com/free-vector/gradient-brain-background_44416640.htm#query=AI&position=30&from_view=search&track=sph

© House of Lords 2024. Re-use our content freely and flexibly with only a few conditions under the Open Parliament Licence.
