Artificial intelligence: Development, risks and regulation
Table of contents
1. What is artificial intelligence?
2.4 Case study: Potential impact on the knowledge and creative industries (House of Lords Communications and Digital Committee report, January 2023)
3. Calls for rapid regulatory adaptation
3.1 Letter from key figures in AI, science and technology calling for a pause in AI development (March 2023)
3.2 Joint report by Sir Tony Blair and Lord Hague of Richmond (June 2023)
On 24 July 2023, the House of Lords is due to debate the following motion:
“Lord Ravensdale (Crossbench) to move that this House takes note of the ongoing
development of advanced artificial intelligence, associated risks and potential
approaches to regulation within the UK and internationally.”
1. What is artificial intelligence?
In banking, for example, AI is currently used to detect and flag suspicious activity to a
bank’s fraud department, such as unusual debit card usage and large account deposits.
The NHS also reports that AI is being used to benefit people in health and care: by
analysing X-ray images to support radiologists in making assessments and helping
clinicians read brain scans more quickly; by supporting people in ‘virtual wards’, who
would otherwise be in hospital to receive the care and treatment they need; and through
remote monitoring technology, such as apps and medical devices, which can assess
patients’ health and care while they are being cared for at home.
To achieve this, AI systems rely upon large datasets from which they can decipher
patterns and correlations, thereby enabling the system to ‘learn’ how to anticipate future
events. It does this by relying upon and/or creating algorithms based on the dataset,
which it can then use to interpret new data. This data can be structured, such as bank
transactions, or unstructured, such as the sensor and image data that enables a
driverless car to respond to the environment around it.
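As a deliberately minimal sketch of the kind of pattern-finding described above, the following Python snippet flags a transaction that sits far outside the pattern ‘learned’ from an account’s past activity. The data, threshold and method are invented for illustration; real fraud-detection systems use far richer models and features.

```python
# Minimal sketch: flagging an unusual transaction in structured data.
# Illustrative only; the history, threshold and method are invented.
from statistics import mean, stdev

def flag_unusual(history, new_amount, z_threshold=3.0):
    """Flag a transaction that deviates sharply from past amounts."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > z_threshold

past_spend = [12.50, 30.00, 8.99, 22.40, 15.00, 41.25]  # past debit card amounts
print(flag_unusual(past_spend, 18.00))   # False: fits the learned pattern
print(flag_unusual(past_spend, 950.00))  # True: flagged for the fraud team
```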
The different forms that AI can take range from so-called ‘narrow’ AI designed to perform
specific tasks to what is known as ‘strong’ or ‘general’ AI with the capacity to learn and
reason. The House of Commons Library recently drew upon research from Stanford
University and other sources to offer the following definitions:
Machine learning is a method that can be used to achieve narrow AI; it allows a
system to learn and improve from examples, without all its instructions being
explicitly programmed. It does this by finding patterns in large amounts of data,
which it can then use to make predictions (for example what film or TV programme
you might like to watch next on a streaming platform). The AI can then
independently amend its algorithm based on the accuracy of its predictions.
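As a toy illustration of ‘learning from examples’ rather than explicit programming, the sketch below uses the widely available scikit-learn library to learn a viewer’s taste from past ratings and predict whether they might like a new film. All data and features are invented; real streaming recommenders are far more sophisticated.

```python
# Toy 'learning from examples': predict whether a viewer will like a
# film from past ratings. Invented data; requires scikit-learn.
from sklearn.linear_model import LogisticRegression

# Each past film: [runtime_hours, is_comedy, is_sci_fi]; label 1 = liked
X_train = [[2.0, 1, 0], [1.5, 1, 0], [2.5, 0, 1],
           [3.0, 0, 1], [1.8, 0, 0], [2.2, 1, 1]]
y_train = [1, 1, 0, 0, 0, 1]  # this viewer tends to like comedies

model = LogisticRegression()
model.fit(X_train, y_train)      # find patterns in the examples

new_film = [[1.9, 1, 0]]         # a short comedy
print(model.predict(new_film))   # [1]: predicted 'liked'
```

In a production system, the accuracy of such predictions would be fed back so the model can amend its algorithm, as described above.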
Deep learning is a type of machine learning whose design has been informed by
the structure and function of the human brain and the way it transmits
information. The application of deep learning can be seen in ‘foundation models’,
of which ‘large language models’ (LLMs), such as ChatGPT, are one example. The
term refers to those models that are trained on very large, unlabelled datasets and
which can be adapted to do a wide range of tasks, despite not having been trained
explicitly to do those tasks. In other words, the model can take information it has
learnt about in one situation and apply it to another, different situation. Sometimes
LLMs are refined or ‘fine-tuned’ (trained using additional data) to achieve a
specific goal. ChatGPT, for example, has been fine-tuned to allow users to ask it a
question, or make a request, and for it to generate “human-like text” in response.
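The pretrain-then-fine-tune pattern described above can be sketched conceptually in a few lines of Python with NumPy: a model is first trained on a large, broad dataset, then trained briefly on a small task-specific dataset, starting from the weights it has already learned. The model here is a two-parameter toy, not an LLM; real fine-tuning adjusts billions of parameters, and the datasets are invented.

```python
# Conceptual sketch of pretraining followed by fine-tuning (toy scale).
import numpy as np

def train(w, X, y, lr=0.1, steps=200):
    """Gradient descent on squared error for the model y ~ X @ w."""
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(0)

# 'Pretraining' on a large, broad dataset
X_pre = rng.normal(size=(500, 2))
y_pre = X_pre @ np.array([1.0, -1.0]) + 0.1 * rng.normal(size=500)
w = train(np.zeros(2), X_pre, y_pre)

# 'Fine-tuning' on a small dataset for a specific goal
X_ft = rng.normal(size=(20, 2))
y_ft = X_ft @ np.array([1.5, -1.0])          # slightly different pattern
w = train(w, X_ft, y_ft, lr=0.05, steps=50)  # start from pretrained weights

print(w)  # weights nudged towards the task-specific pattern
```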
Illustrative example: Text generated directly by AI
As a basis for comparison, when ChatGPT—known as a large language model or LLM,
which is designed to use a wide knowledge base to provide responses to user prompts
—is asked to define the key forms of AI, it gives the following response:
1. Narrow AI: Narrow AI, also known as weak AI, refers to AI systems designed to
perform specific tasks or solve specific problems. These systems are limited in
their scope and cannot exhibit general intelligence. Examples of narrow AI
include voice assistants like Siri or Alexa, image recognition algorithms, and
recommendation systems.
As noted by the UK government in its ‘National AI strategy’, AI is the fastest growing deep
technology in the world, with “huge potential to rewrite the rules of entire industries,
drive substantial economic growth and transform all areas of life”. Such transformative
technology brings both risks and benefits, as explored below.
Taken from the same publication, figure 1 illustrates the size profile of AI companies in the
UK:
(Source: Department for Science, Innovation and Technology and Office for Artificial Intelligence,
‘Artificial intelligence sector study 2022’, March 2023)
The categories in the graph are defined as follows: large businesses have 250 or more
employees; medium, 50–249; small, 10–49; and micro, fewer than 10.
Other key findings from the report included (emphasis in original):
Of the 3,170 active companies identified through the study, 60% are dedicated AI
businesses and 40% are diversified, ie have AI activity as part of a broader
diversified product or service offer.
On average 269 new AI companies have been registered each year since 2011, with
a peak in new company registrations in the same year as the AI sector deal in
2018 (429 companies).
Together, the data on company size and business model suggest that dedicated AI
companies are both smaller and more dependent on AI products for revenue.
Diversified AI companies are typically larger and likely to generate a greater
proportion of revenues from less capital-intensive provision of AI-related services.
London, the South East and the East of England account for 75% of registered
AI office addresses, and also for 74% of trading addresses. Just under one-third
of AI companies with a registered address outside of London, the South East and
the East of England still have a trading presence in those regions, highlighting the
apparent significance of those regions to development of the UK AI sector to date.
[These findings are illustrated in figure 4 below.]
While absolute numbers are smaller, the study has identified more notable
proportions of wider regional AI activity in automotive, industrial automation
and machinery; energy, utilities and renewables; health, wellbeing and
medical practice; and agricultural technology.
Across both dedicated and diversified AI companies, study estimates suggest that
there are 50,040 full time equivalents (FTEs) employed in AI-related roles, 53%
of which are within dedicated AI companies.
Based on a combination of official company data, survey responses and
associated modelling, AI companies are estimated to contribute £3.7bn in GVA
to the UK economy. For large companies the GVA-to-turnover ratio is 0.6:1 (ie, for
every £1 of revenue, large AI companies generate 60p in direct GVA; a worked example
follows this list). GVA-to-
turnover ratios among small and medium sized enterprises (SMEs) are much lower
(0.2:1 for medium-sized companies and negative for small and micro businesses),
which reflects the capital-intensive, high R&D nature of deep technology
development.
Since 2016, AI companies have secured a total of £18.8bn in private investment.
2021 was a record year for AI investment, with over £5bn raised across 768 deals,
representing an average deal size of £6.7mn. Further, AI investment increased
almost five-fold between 2019 and 2021.
In 2022 dedicated AI companies secured a higher average deal value than
diversified companies for the first time. However, data on AI investment by stage
of evolution may also be signalling some tightening of investment available to
seed and venture stage companies and, given the significance of private
investment for AI technology development evidenced by data on revenues and
GVA, this could pose a risk to realising the potential within early-stage AI
companies.
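As a worked example of the GVA-to-turnover ratios reported above, the snippet below applies the reported ratios to a hypothetical £1mn of turnover. The turnover figure is invented for illustration; only the ratios come from the study.

```python
# Applying the study's reported GVA-to-turnover ratios (illustrative turnover).
def direct_gva(turnover_gbp, ratio):
    """Direct GVA implied by a given GVA-to-turnover ratio."""
    return turnover_gbp * ratio

print(direct_gva(1_000_000, 0.6))  # large companies: £600,000 per £1mn turnover
print(direct_gva(1_000_000, 0.2))  # medium-sized companies: £200,000 per £1mn
```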
A selection of this data is illustrated in the charts below, again taken from the ‘Artificial
intelligence sector study 2022’:
Figure 2. Breakdown of machine learning companies by industry
and sub-industry in the UK, 2022
AI could help people with improved health care, safer cars and other transport
systems, tailored, cheaper and longer-lasting products and services. It can also
facilitate access to information, education and training. […] AI can also make the
workplace safer as robots can be used for dangerous parts of jobs, and open new
job positions as AI-driven industries grow and change.
AI used in public services can reduce costs and offer new possibilities in public
transport, education, energy and waste management and could also improve the
sustainability of products.
Job displacement, the potential for AI to abolish the need for some jobs (whilst
potentially generating the need for others).
Economic inequality, and the risk that AI will disproportionally benefit wealthy
individuals and corporations.
Legal and regulatory challenges, and the need for regulation to keep pace with
rapid innovation.
Misinformation and manipulation, and the risk that AI-generated content drives
the spread of false information and the manipulation of public opinion.
Existential risks, including the rise of artificial general intelligence (AGI) that
surpasses human intelligence and raises long term risks for the future of humanity.
On the risk of misinformation and manipulation, several commentators have already
suggested that elections in 2024, particularly the US presidential election, may be the
first elections where the campaigning process is significantly affected by AI.
“Our base case estimate is that around 7 percent of existing UK jobs could face a high
(over 70 percent) probability of automation over the next five years, rising to around
18 percent after 10 years and just under 30 percent after 20 years. This is within the
range of estimates from previous studies and draws on views from an expert
workshop on the automatability of occupations and detailed analysis of OECD
[Organisation for Economic Co-operation and Development] and ONS [Office for
National Statistics] data on how this is related to the task composition and skills
required for different occupations.”
The manufacturing sector was highlighted in the report as being at risk of losing the most
jobs over the next 20 years, with job losses also expected in transport and logistics, public
administration and defence, and the wholesale and retail sectors. In contrast, the health
and social work sector was highlighted as most likely to see the largest job gains, while
gains were also expected in the professional and scientific, education, and information
and communications sectors. Jobs in lower paid clerical and process-orientated roles
were most likely to be at risk of being lost. In contrast, the report suggested there would
be job gains in managerial and professional occupations.
The report concluded that the most plausible assumption is that the long-term impact of
AI on employment levels in the UK would be broadly neutral, but that the potential
impacts within that umbrella were unclear.
More recent analyses of AI, particularly since the release of LLMs such as ChatGPT and
Google Bard, have questioned whether the impact of AI will indeed be felt most in lower-
paid/manual occupations. Analysis published in March 2023 by OpenAI itself, the creator
of ChatGPT, suggested that higher wage occupations tend to be more exposed to LLMs.
Its analysis also found that within this there will be differentiation depending on the
nature of the tasks involved:
“[T]he importance of science and critical thinking skills are strongly negatively
associated with exposure, suggesting that occupations requiring these skills are less
likely to be impacted by current LLMs. Conversely, programming and writing skills
show a strong positive association with exposure, implying that occupations involving
these skills are more susceptible to being influenced by LLMs.”
On 21 April 2023, the House of Commons Business, Energy and Industrial Strategy
Committee published a report on post-pandemic economic growth and the UK labour
markets. This report highlighted the impact that AI could have on productivity within the
UK. It refers to research from Deloitte that found that “by 2035 AI could boost UK labour
market productivity by 25%”, and that “four out of five UK organisations said that use of AI
tools had made their employees more productive, improved their decision-making, and
made their process more efficient”.
It also argued that AI and related technologies may have a positive impact by helping
people who have otherwise found it difficult to find and stay in employment, such as
disabled people, to access the labour market.
Estimates of the impact of AI on the UK and the world economy continue to be published
regularly as these products develop. Recent examples include research from McKinsey
which suggested that generative AI could add value equivalent to the UK’s entire GDP to
the world economy over the coming years:
“Generative AI’s impact on productivity could add trillions of dollars in value to the
global economy. Our latest research estimates that generative AI could add the
equivalent of $2.6tn to $4.4tn annually across the 63 use cases we analysed—by
comparison, the United Kingdom’s entire GDP in 2021 was $3.1tn. This would increase
the impact of all artificial intelligence by 15 to 40 percent. This estimate would
roughly double if we include the impact of embedding generative AI into software
that is currently used for other tasks beyond those use cases.”
2.4 Case study: Potential impact on the knowledge and creative industries (House of Lords Communications and Digital Committee report, January 2023)
The committee heard evidence that new technologies and the rise of digitised culture will
change the way creative content is developed, distributed and monetised over the next
five to 10 years. The committee drew particular attention to intellectual property (IP), the
protection of which it said was vital to much of the creative industries, and the impact
upon it of the rise of AI technologies, particularly text and data mining of existing
materials used by generative AI models to learn and develop content.
The committee suggested that proposals from the Intellectual Property Office (IPO) to
permit text and data mining of copyrighted works were “misguided” and took “insufficient
account of the potential harm to the creative industries”. Arguing that the development of
AI was important but “should not be pursued at all costs”, the committee said that the
IPO should pause its proposed changes to the text and data mining regime “immediately”.
The committee added that the IPO should conduct and publish an impact assessment on
the implications for the creative industries, and if this assessment found negative effects
on businesses in the creative industries, it should then pursue alternative approaches,
such as those employed by the European Union. (The European Union’s approach is
examined in section 5.1 of this briefing.)
The committee also warned against the use of AI to generate, reproduce and distribute
creative works and image likenesses in ways that infringe the rights of performers and the
original creators of the work.
In its response to the committee, the government said that, “in light of additional
evidence” of the impact on the creative sector, the government “will not be proceeding”
with the proposals for an exception for text and data mining of copyrighted works.
Instead, the government said it would work with users and rights holders to produce a
“code of practice by the summer [2023]” on text and data mining by AI.
There are several court challenges underway on the use of existing written content and
images to train generative AI. For example, authors Paul Tremblay and Mona Awad have
launched legal action in the United States against OpenAI, alleging that copies of their
work were used without authorisation to develop its ChatGPT LLM. The debate on how
best to protect copyright and creative careers such as writing and illustrating continues.
The Creators’ Rights Alliance (CRA), which is formed of bodies from across the UK cultural
sector, contends that AI technology is accelerating and being implemented without
enough consideration of the ethical, accountability and economic issues it raises for
creative human endeavour.
The CRA argues that there should be a clear definition and labelling of what constitutes
solely AI generated work and work made with the intervention of creators, and that the
distinct characteristics of individual performers and artists should be protected. At the
same time, it said that copyright should be protected, including no data mining of existing
work without consent, and that there should be transparency over the data used to create
generative AI. The CRA also called for increased protection for creative roles such as
visual artists, translators and journalists, warning that these roles could otherwise be
lost to AI systems.
3. Calls for rapid regulatory adaptation
The potential benefits and harms of AI have led to calls for governments to adapt quickly
to the changes AI is already delivering and the potentially transformative changes to
come. These include calls to pause AI development and for countries including the UK to
deliver a step-change in regulation, potentially before the technology passes a point
when such regulation can be effective. The chief executive of Google, Sundar Pichai, is one
example of a leading technology figure who has warned about the potential harms of AI
and called for a suitable regulatory framework.
This is a fast-moving area and this briefing concentrates on reports published since the
beginning of 2023. For an exploration of publications and milestones before this time,
including the work of the House of Lords Committee on Artificial Intelligence, see the
earlier House of Lords Library briefing ‘Artificial intelligence policy in the UK: Liaison
Committee report’, published in May 2022.
3.1 Letter from key figures in AI, science and technology calling for a pause in AI development (March 2023)
The signatories included engineers from Amazon, DeepMind, Google, Meta and Microsoft,
as well as academics and prominent industry figures such as Elon Musk, who co-founded
OpenAI (the research lab responsible for ChatGPT and GPT-4), Emad Mostaque, who
founded London-based Stability AI, and Steve Wozniak, the co-founder of Apple.
The letter noted that, given the potential of advanced AI, it “should be planned for and
managed with commensurate care and resources”. It said that “unfortunately, this level of
planning and management is not happening”, even though recent months have seen “AI
labs locked in an out-of-control race to develop and deploy ever more powerful digital
minds that no one—not even their creators—can understand, predict, or reliably control”.
The Future of Life Institute, which coordinated the production of the letter, also published
a policy paper entitled ‘Policy making in the pause’, which offered a series of
recommendations to govern the future of AI development.
However, not everyone shares the perspective that a pause is either necessary or
practical. For example, BCS, the chartered institute for IT, has said this would only result
in an “asymmetrical pause” as bad actors would ignore it and seize the advantage.
Instead, the institute said that humanity would benefit from ethical guardrails around AI
rather than a halt to development. The chief executive of BCS, Rashik Parmar, said:
“we can’t be certain every country and company with the power to develop AI would obey
a pause, when the rewards for breaking an embargo are so rich”.
Instead, BCS has issued a report outlining how AI can be helped to “grow up responsibly”,
for example by making it part of public education campaigns and ensuring it is clearly
labelled whenever it is used.
3.2 Joint report by Sir Tony Blair and Lord Hague of Richmond (June 2023)
The authors said that getting policy right on this issue was “fundamental” and
contended that it could “define Britain’s future”. The report noted that the potential
opportunities were “vast”, including the potential to “change the shape of the state, the
nature of science and augment the abilities of citizens”. However, like others, the two
former party leaders also noted that the risks were “profound”.
As a result, the report called for urgent action, including a “radical new policy agenda
and a reshaping of the state, with science and technology at its core”. Noting that AI is
already having an impact and that the pace of change is only likely to accelerate in the
coming years, the authors contended that “our institutions are not configured to deal with
science and technology, particularly their exponential growth”. They said that it was
“absolutely vital that this changes”. This included a reorientation in the way government is
organised, works with the private sector, promotes research, draws on expertise and
receives advice.
The report also contended that the UK could become a leader in the development of safe,
reliable and cutting-edge AI, in collaboration with its allies. The authors contended that
the UK has an “opportunity to construct effective regulation that goes well beyond existing
proposals yet is also more attractive to talent and firms than the approach being adopted
by the European Union”.
Again, the report offered recommendations on how this could be achieved.
Finally, the report contended that the UK could pioneer the deployment and use of AI
technology in the real world, “building next-generation companies and creating a 21st
century strategic state”. To achieve this, the report set out a further series of
recommendations.
The report added that it was “critical to engage the public throughout all of these
developments” to ensure AI development is accountable and give people the skills and
chance to adapt.
4. Proposed regulatory approaches: UK
4.1 UK government approach to artificial intelligence
On 22 September 2021, the government published its ‘National AI strategy’, setting out its
ten-year plan on AI. The strategy set out three high-level aims:
invest and plan for the long-term needs of the AI ecosystem to continue our
leadership as a science and AI superpower
support the transition to an AI-enabled economy, capturing the benefits of
innovation in the UK, and ensuring AI benefits all sectors and regions
ensure the UK gets the national and international governance of AI technologies
right to encourage innovation, investment, and protect the public and our
fundamental values
The Office for Artificial Intelligence, a unit within the Department for Science, Innovation
and Technology (DSIT), is responsible for overseeing the implementation of the national AI
strategy. There is also an AI Council, a non-statutory expert committee of independent
members set up to provide advice to the government.
Ministers contend that UK laws, regulators and courts already address some of the
emerging risks posed by AI technologies. However, they also concede that, while AI is
currently regulated through existing legal frameworks like financial services regulation,
some AI risks have arisen and will arise across, or in the gaps between, existing regulatory
remits.
The government provides the following evaluation of where such risks might exist and how
they could potentially be mitigated:
“Example of legal coverage of AI in the UK and potential gaps
Discriminatory outcomes that result from the use of AI may contravene the
protections set out in the Equality Act 2010. AI systems are also required by data
protection law to process personal data fairly. However, AI can increase the risk of
unfair bias or discrimination across a range of indicators or characteristics. This
could undermine public trust in AI.
Product safety laws ensure that goods manufactured and placed on the market in the
UK are safe. Product-specific legislation (such as for electrical and electronic
equipment, medical devices, and toys) may apply to some products that include
integrated AI. However, safety risks specific to AI technologies should be monitored
closely. As the capability and adoption of AI increases, it may pose new and
substantial risks that are unaddressed by existing rules.
Consumer rights law may protect consumers where they have entered into a sales
contract for AI-based products and services. Certain contract terms (for example,
that goods are of satisfactory quality, fit for a particular purpose, and as described)
are relevant to consumer contracts. Similarly, businesses are prohibited from
including certain terms in consumer contracts. Tort law provides a complementary
regime that may provide redress where a civil wrong has caused harm. It is not yet
clear whether consumer rights law will provide the right level of protection in the
context of products that include integrated AI or services based on AI, or how tort law
may apply to fill any gap in consumer rights law protection.”
In response to the 2022 consultation exercise cited above, the government reported that
those working in the AI sector said that “conflicting or uncoordinated requirements from
regulators create unnecessary burdens and that regulatory gaps may leave risks
unmitigated, harming public trust and slowing AI adoption”.
Further, the government said that respondents to the consultation had highlighted that, if
regulators were not proportionate and aligned in their regulation of AI, “businesses may
have to spend excessive time and money complying with complex rules instead of creating
new technologies”. Noting that small businesses and start-ups often do not have the
resources to do both and the prevalence of such firms in the sector, the government
argued that it was “important to ensure that regulatory burdens do not fall
disproportionately on smaller companies, which play an essential role in the AI innovation
ecosystem and act as engines for economic growth and job creation”.
The white paper said that the government recognised both the rewards and risks of AI:
“While we should capitalise on the benefits of these technologies, we should also not
overlook the new risks that may arise from their use, nor the unease that the
complexity of AI technologies can produce in the wider public. We already know that
some uses of AI could damage our physical and mental health, infringe on the
privacy of individuals and undermine human rights.
Public trust in AI will be undermined unless these risks, and wider concerns about the
potential for bias and discrimination, are addressed. By building trust, we can
accelerate the adoption of AI across the UK to maximise the economic and social
benefits that the technology can deliver, while attracting investment and stimulating
the creation of high-skilled AI jobs. In order to maintain the UK’s position as a global
AI leader, we need to ensure that the public continues to see how the benefits of AI
can outweigh the risks.”
The white paper said that responding to risk and building public trust were important
drivers for regulation, but that clear and consistent regulation could also support
business investment and build confidence in innovation.
Consequently, it said that the government would put in place a new framework to bring
“clarity and coherence” to the AI regulatory landscape, which will harness AI’s ability to
drive growth and prosperity and increase public trust in its use and application. In taking
a “deliberately agile and iterative approach”, the government said that its framework was
“designed to build the evidence base so that we can learn from experience and
continuously adapt to develop the best possible regulatory regime”.
That framework is underpinned by five principles to “guide and inform the responsible
development and use of AI in all sectors of the economy”. These are:
safety, security and robustness
appropriate transparency and explainability
fairness
accountability and governance
contestability and redress
On whether new legislation would be introduced to support these aims, the white paper
said:
“We will not put these principles on a statutory footing initially. New rigid and
onerous legislative requirements on businesses could hold back AI innovation and
reduce our ability to respond quickly and in a proportionate way to future
technological advances. Instead, the principles will be issued on a non-statutory
basis and implemented by existing regulators. This approach makes use of
regulators’ domain-specific expertise to tailor the implementation of the principles to
the specific context in which AI is used. During the initial period of implementation,
we will continue to collaborate with regulators to identify any barriers to the
proportionate application of the principles, and evaluate whether the non-statutory
framework is having the desired effect.”
However, the paper also added that “following this initial period of implementation”, the
government anticipated introducing a statutory duty on regulators requiring them to have
due regard to the principles.
Regarding the potential gaps between the remits of various regulators identified above,
the white paper noted that the 2022 AI consultation paper proposed a small coordination
layer within the regulatory architecture. However, the white paper noted that, while
industry and civil society were reportedly supportive of the intention to ensure coherence
across the AI regulatory framework, “feedback often argued strongly for greater central
coordination to support regulators on issues requiring cross-cutting collaboration and
ensure that the overall regulatory framework functions as intended”.
Consequently, the white paper said that the government had identified several central
support functions required to make sure that the overall framework offers a
“proportionate but effective” response to risk while promoting innovation across the
regulatory landscape.
The white paper said that these functions would not entail the creation of a new AI
regulator:
“The central support functions will initially be provided from within government but
will leverage existing activities and expertise from across the broader economy. The
activities described above will neither replace nor duplicate the work undertaken by
regulators and will not involve the creation of a new AI regulator.”
The white paper included a consultation exercise on the proposals, which ran for 12 weeks
until 21 June 2023. The government is yet to publish an analysis of the responses received.
The government has already moved to dissolve the AI Council, as reported in the Times on
19 June 2023. It will be replaced by a new foundation model taskforce led by technology
entrepreneur Ian Hogarth, which will spearhead the adoption and regulation of the
technology in the UK. A statement released on 7 July 2023 by the Department for Science,
Innovation and Technology said that, with the terms of the current council members
finishing, it would establish a wider group of expert advisers:
“Since it was established in 2019, the AI Council has advised government on AI policy
with regards to national security, defence, data ethics, skills, and regulation, which
has played a key role in developing landmark policies including the National AI
Strategy, and the recent AI regulation white paper. The council also supported the
government’s early Covid-19 efforts, highlighting the immediate needs of the AI
startup ecosystem and facilitating rapid intelligence-gathering that shaped
government support for the tech sector in its pandemic response.
With the terms of the AI Council members coming to an end, the Department for
Science, Innovation and Technology is establishing a wider group of expert advisers
to input on a range of priority issues across the department, including artificial
intelligence. This will complement the recently established foundation model
taskforce, which will drive forward critical work on AI safety and research.”
The taskforce has been given £100mn to develop a British foundational generative AI
model, akin to ChatGPT, to be used in the health service and elsewhere.
Prime Minister Rishi Sunak has also announced that the UK will host a global summit on
safety in artificial intelligence in the autumn. Writing in the Guardian in June 2023, Dan
Milmo and Kiran Stacey argued that this marked a distinct “change of tone” from the
government, with ministers going from talking predominantly about the benefits of AI to
the risks of such innovation. In addition to the different regulatory regimes discussed in
section 5 of this briefing, the Guardian article also reports that the G7 have agreed to
create an intergovernmental forum called the ‘Hiroshima AI process’ to debate issues
around these fast-growing tools.
In March 2023, the Department for Education issued guidance on the use of
generative AI in pre-university education. Noting that the technology provided both risks
and opportunities for the sector, the key principles outlined in that document stated that
educational institutions must continue to guard against misuse whilst seeking to take
advantage of these benefits.
This was followed in June 2023 by the Russell Group of universities publishing a guidance
note on the use of generative AI in higher education. Again, the perspective of the Russell
Group was not that generative AI tools should be banned, but that universities would
support staff and students to become AI literate whilst using these technologies ethically:
“Our universities wish to ensure that generative AI tools can be used for the benefit of
students and staff—enhancing teaching practices and student learning experiences,
ensuring students develop skills for the future within an ethical framework, and
enabling educators to benefit from efficiencies to develop innovative methods of
teaching.”
5.1 European Union
In April 2021, the European Commission proposed the AI Act, draft legislation setting out
rules for governing AI within the EU. The AI Act would establish four levels of risk for AI:
unacceptable risk, high risk, limited risk, and minimal risk. Different rules apply
depending on the level of risk a system poses to fundamental rights.
The European Commission suggests that those risk categories would work in the following
ways:
“Unacceptable risk
All AI systems considered a clear threat to the safety, livelihoods and rights of people
will be banned, from social scoring by governments to toys using voice assistance
that encourages dangerous behaviour.
High risk
AI systems identified as high-risk include AI technology used in:
critical infrastructures (eg transport), that could put the life and health of
citizens at risk
educational or vocational training, that may determine the access to
education and professional course of someone’s life (eg scoring of exams)
safety components of products (eg AI application in robot-assisted surgery)
employment, management of workers and access to self-employment (eg CV-
sorting software for recruitment procedures)
essential private and public services (eg credit scoring denying citizens
opportunity to obtain a loan)
law enforcement that may interfere with people’s fundamental rights (eg
evaluation of the reliability of evidence)
migration, asylum and border control management (eg verification of
authenticity of travel documents)
High-risk AI systems will be subject to strict obligations before they can be put on the
market.
All remote biometric identification systems are considered high risk and subject to
strict requirements. The use of remote biometric identification in publicly accessible
spaces for law enforcement purposes is, in principle, prohibited.
Narrow exceptions are strictly defined and regulated, such as when necessary to
search for a missing child, to prevent a specific and imminent terrorist threat or to
detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal
offence.
Limited risk
Limited risk refers to AI systems with specific transparency obligations. When using AI
systems such as chatbots, users should be aware that they are interacting with a
machine so they can take an informed decision to continue or step back.
Minimal or no risk
The proposal allows the free use of minimal-risk AI. This includes applications such as
AI-enabled video games or spam filters. The vast majority of AI systems currently
used in the EU fall into this category.”
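To make the tiered structure easier to see, the sketch below maps the illustrative use cases from the quoted Commission text onto the four risk levels. It is a schematic summary of the quotation only, not a legal classification tool.

```python
# Schematic summary of the AI Act's four risk tiers (illustrative only).
from enum import Enum

class Risk(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations before market entry"
    LIMITED = "transparency obligations (eg disclose it is a machine)"
    MINIMAL = "free use"

EXAMPLES = {  # use cases drawn from the quoted Commission text
    "social scoring by governments": Risk.UNACCEPTABLE,
    "CV-sorting software for recruitment": Risk.HIGH,
    "credit scoring for loan applications": Risk.HIGH,
    "customer-facing chatbot": Risk.LIMITED,
    "spam filter": Risk.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```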
These proposals are intended to provide a “future proof” approach, allowing rules to
adapt to technological change. The European Commission also said that all “AI
applications should remain trustworthy even after they have been placed on the market”,
requiring ongoing quality and risk management by providers.
However, the draft legislation has been amended by the EU Council and European
Parliament, reportedly after concerns that technology such as ChatGPT, which has a large
number of potential uses, could have a correspondingly large variety of risk thresholds.
Politico notes efforts to amend the draft AI Act by its lead negotiators in the European
Parliament, MEPs Brando Benifei and Dragoș Tudorache, and reports on the resistance in
some areas to those changes:
“In February [2023] the lead lawmakers on the AI Act, Benifei and Tudorache,
proposed that AI systems generating complex texts without human oversight should
be part of the “high-risk” list—an effort to stop ChatGPT from churning out
disinformation at scale.
The idea was met with scepticism by right-leaning political groups in the European
Parliament, and even parts of Tudorache’s own Liberal group. Axel Voss, a prominent
centre-right lawmaker who has a formal say over Parliament’s position, said that the
amendment “would make numerous activities high-risk, that are not risky at all”.
In contrast, activists and observers feel that the proposal was just scratching the
surface of the general-purpose AI conundrum. “It’s not great to just put text-making
systems on the high-risk list: you have other general-purpose AI systems that present
risks and also ought to be regulated”, said Mark Brakel, a director of policy at the
Future of Life Institute, a non-profit focused on AI policy.”
In May 2023, the European Parliament reported that MEPs had amended the list of those
systems which pose an unacceptable level of risk to people’s safety to include bans on
intrusive and discriminatory uses of AI, such as “real-time” remote biometric
identification systems in publicly accessible spaces. They also expanded the classification
of high-risk areas to include harm to people’s health, safety, fundamental rights or the
environment, and added AI systems used to influence voters in political campaigns, and
recommender systems used by social media platforms, to the high-risk list.
In addition, changes included obligations for providers of foundation models, who would
have to guarantee protection of fundamental rights, health and safety and the
environment, democracy and rule of law. They would need to assess and mitigate risks,
comply with design, information and environmental requirements and register in the EU
database. Generative foundation models, like ChatGPT, would have to comply with
additional transparency requirements, like disclosing that the content was generated by
AI, designing the model to prevent it from generating illegal content and publishing
summaries of copyrighted data used for training.
The final text of the AI Act is set to be agreed by late 2023 or early 2024. In addition to the
AI Act, the EU has already passed several related pieces of legislation, such as the Digital
Services Act (DSA) and the Digital Markets Act (DMA). Alongside the AI Act, it is also
developing a civil liability framework adapting liability rules to the digital age and AI, and
revising sectoral safety legislation (such as regulations governing the use of machinery,
artificial intelligence and autonomous robots).
In its own assessment of the differences between the UK and EU regime, the UK
government’s white paper said:
“The EU has grounded its approach in the product safety regulation of the single
market, and as such has set out a relatively fixed definition in its legislative
proposals. Whilst such an approach can support efforts to harmonise rules across
multiple countries, we do not believe this approach is right for the UK. We do not
think that it captures the full application of AI and its regulatory implications. Our
concern is that this lack of granularity could hinder innovation.”
At the same time, Alex Engler of the Brookings Institution notes that the US has invested
in non-regulatory
infrastructure, such as a new AI risk management framework, evaluations of facial
recognition software, and extensive funding of AI research.
Comparing the approach taken by the US and the European Union, Mr Engler notes that
the EU approach to AI risk management, as outlined in section 5.1, is characterised by a
more comprehensive range of legislation tailored to specific digital environments. He
adds that this has led to more differences than similarities between the two approaches:
“The EU-US Trade and Technology Council has demonstrated early success working on
AI, especially on a project to develop a common understanding of metrics and
methodologies for trustworthy AI. Through these negotiations, the EU and US have
also agreed to work collaboratively on international AI standards, while also jointly
studying emerging risks of AI and applications of new AI technologies.”
For example, the report noted that representatives of the Large-scale Artificial
Intelligence Open Network have written to the European Parliament warning that the EU’s
draft AI Act and its “one-size-fits-all” approach will entrench large firms to the detriment
of open-source developers, limit academic freedom and reduce competition. If the EU
overly regulates AI, the report argued, it will repeat earlier failures with other technology
families and become a less relevant global market due to declining growth rates.
Meanwhile, the report suggested that a “modern aversion” on the part of the US to
investing directly in state capabilities could hamper its ability to lead on setting
international standards and norms. It noted that, at the height of the space race, the US
spent $42bn in today’s money on NASA funding in one year alone. By comparison, in 2022
the US spent $1.73bn on non-defence AI research and development, much of which was
contracted out to industry and academic researchers. The report argued that, without
sovereign-state capabilities, the US federal government could become overly reliant on
private expertise and less able to set or enforce standards.
As a result, Sir Tony and Lord Hague contended that both the US and EU approaches
risked locking in the current reality and current leaders of AI: development led by
industry and lacking clear incentives for alignment with democratic control and
governance.
They argued that the UK should aim to fill the niche of having a relatively less regulated AI
ecosystem, but with a highly agile, technologically literate regulator tied closely to
Sentinel, their proposed national AI laboratory, and its research in this space.
However, they noted that this approach will take time. The authors suggest that by
combining flexible regulation with public investment in sovereign-state capacities, the UK
can attract private AI start-ups while building the sovereign-state technical expertise
required to set and enforce standards.
The UK should diverge from EU regulation on AI, but ensure its own regulatory
systems allow UK companies and AI models to be assessed voluntarily at EU
standards to enable exports.
In the near term, the UK should broadly align with US regulatory standards, while
building a coalition of countries through Sentinel. This position may then diverge
over time as UK regulatory expertise, the technology landscape and international
approaches mature.
In the medium term, the UK should establish an AI regulator in tandem with
Sentinel.
https://transform.england.nhs.uk/information-governance/guidance/artificial-intelligency.
https://commonslibrary.parliament.uk/research-briefings/cdp-2023-0152/
https://commonslibrary.parliament.uk/research-briefings/CDP-2023-0152/
https://www.cnbc.com/2023/05/31/openai-is-pursuing-a-new-way-to-fight-ai-hallucinations.html
https://www.gov.uk/government/publications/national-ai-strategy/national-ai-strategy-html-version
https://dictionary.cambridge.org/dictionary/english/deep-tech
https://www.gov.uk/government/publications/artificial-intelligence-sector-study-2022
https://www.gov.uk/government/publications/artificial-intelligence-sector-study-2022
https://www.gov.uk/government/publications/artificial-intelligence-sector-study-2022
https://www.europarl.europa.eu/news/en/headlines/society/20200918STO87404/artificial-intelligence-threats-and-opportunities
https://www.forbes.com/sites/bernardmarr/2023/06/02/the-15-biggest-risks-of-artificial-intelligence/?sh=2b6e146b2706
https://umdearborn.edu/news/ais-mysterious-black-box-problem-explained
https://thehill.com/opinion/technology/3969647-the-first-ai-enhanced-presidential-election/
https://www.gov.uk/government/publications/the-potential-impact-of-ai-on-uk-employment-and-the-demand-for-skills
https://arxiv.org/pdf/2303.10130.pdf
https://committees.parliament.uk/work/6729/postpandemic-economic-growth-uk-labour-markets/publications/
https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier
https://committees.parliament.uk/publications/33536/documents/182541/default/
https://committees.parliament.uk/publications/39303/documents/192860/default/
https://www.theguardian.com/books/2023/jul/05/authors-file-a-lawsuit-against-openai-for-unlawfully-ingesting-their-books
https://www.creatorsrightsalliance.org/ai-and-creative-work
https://www.theguardian.com/technology/2023/apr/17/google-chief-ai-harmful-sundar-pichai
https://lordslibrary.parliament.uk/artificial-intelligence-policy-in-the-uk-liaison-committee-report/
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
https://openai.com/blog/planning-for-agi-and-beyond
https://futureoflife.org/wp-content/uploads/2023/04/FLI_Policymaking_In_The_Pause.pdf
https://news.sky.com/story/elon-musk-wrong-to-call-for-pause-in-development-of-ai-warns-new-report-12871977
https://www.bcs.org/articles-opinion-and-research/helping-ai-grow-up-without-pressing-pause/
https://www.institute.global/insights/politics-and-governance/new-national-purpose-ai-promises-world-leading-future-of-britain
https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1020402/National_AI_Strategy_-_PDF_version.pdf
https://www.gov.uk/government/groups/ai-council
https://www.gov.uk/government/publications/establishing-a-pro-innovation-approach-to-regulating-ai/establishing-a-pro-innovation-approach-to-regulating-ai-policy-statement
https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper
https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper#part-2-the-current-regulatory-environment
https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper#part-2-the-current-regulatory-environment
https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper
https://www.thetimes.co.uk/article/no-10-reboots-ai-council-blindsided-by-chatgpt-and-deepmind-mcjknmt32
https://www.gov.uk/government/news/ai-council
https://www.gov.uk/government/news/initial-100-million-for-expert-taskforce-to-help-uk-build-and-adopt-next-generation-of-safe-ai
https://www.gov.uk/government/news/uk-to-host-first-global-summit-on-artificial-intelligence
https://www.theguardian.com/technology/2023/jun/09/rishi-sunak-ai-summit-what-is-its-aim-and-is-it-really-necessary
https://www.gov.uk/government/publications/guidance-to-civil-servants-on-use-of-generative-ai/guidance-to-civil-servants-on-use-of-generative-ai
https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1146540/Generative_artificial_intelligence_in_education_.pdf
https://russellgroup.ac.uk/media/6137/rg_ai_principles-final.pdf
https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
https://www.politico.eu/article/eu-plan-regulate-chatgpt-openai-artificial-intelligence-act/
https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence
https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package
https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12979-Civil-liability-adapting-liability-rules-to-the-digital-age-and-artificial-intelligence_en
https://ec.europa.eu/docsroom/documents/45508
https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper
https://www.brookings.edu/articles/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/
https://www.institute.global/insights/politics-and-governance/new-national-purpose-ai-promises-world-leading-future-of-britain
https://www.freepik.com/free-vector/gradient-brain-background_44416640.htm#query=AI&position=30&from_view=search&track=sph
© House of Lords 2024. Re-use our content freely and flexibly with only a few conditions
under the Open Parliament Licence .