Artificial Intelligence Introduction
27 March 2018
In the words of the late Stephen Hawking, ‘AI [Artificial Intelligence] could be the biggest event in the history of our
civilisation. Or the worst. We just don’t know.’ As the world stands at the cusp of this transformative technology,
much is at stake. Deployed wisely, AI holds the promise of addressing some of the world’s most intractable
challenges, from climate change and poverty to disease. Used in bad faith, it can lead the world on a downward
spiral of totalitarianism and war, endangering – according to Hawking – the very survival of humankind itself.
Finding a policy response to what is undoubtedly ‘the next big thing’ is both urgent and challenging. Europe needs
an ambitious and rapid deployment strategy, covering both business and public administration. This
must go hand in hand with a world-class research and science strategy, as well as an international drive to claim
its stake in what is for now a heated race between the United States and China for global dominance. In addition
to creating an enabling environment for AI, Europe must use its widely recognised values and principles to build
global regulatory norms and frameworks that ensure a human-centric and ethical development of this technology.
EPSC Strategic Notes are analytical papers on topics chosen by the President of the European Commission. They are produced by the European Political
Strategy Centre (EPSC), the European Commission’s in-house think tank.
Disclaimer
The views expressed in the EPSC Strategic Notes series are those of the authors and do not necessarily correspond to those of the European Commission.
Figure: AI strategy milestones (recovered from a three-column table)
• Gather the world’s leading AI talents together; establish initial frameworks for AI laws, regulations, ethics and policy; finalise AI laws, regulations, ethical norms, policies and safety mechanisms
• Use AI in a wide range of fields (manufacturing, medicine, national defence); become a leading player in AI research and development; create leading AI innovation and personnel training bases
• Develop major breakthroughs in research and development; expand the use of AI through social governance and national defence
Source: ‘Sizing the prize – What’s the real value of AI for your business and how can you capitalise?’, PwC, 2017
Where does Europe stand?

Broadly speaking, Europe faces two major challenges, an internal and an external one.

The internal challenge relates to the uptake of AI technologies by companies and the public sector, and to putting in place a regulatory framework that is flexible enough to adapt to future technological progress, while respecting key fundamental principles. Such principles include social and institutional considerations – for example the defence of democracy, protection of vulnerable persons (e.g. children), and data privacy – as well as economic ones, such as fostering innovation and competition.

Companies across the continent are slow to adopt digital technologies in general. Only 4% of world data is stored in the EU, and a mere 25% of large EU enterprises and 10% of EU SMEs used big data analytics in 2017.22 Data scientists account for far less than 1% of total employment in most EU Member States.23 While large corporations are able to adopt AI technologies in order to improve their own systems (voice, face recognition, personal assistants, bot-to-bot communication), smaller companies face significant constraints, including the lack of qualified staff, higher cost of investment, difficulty in assessing economic returns, or simply doubts about the possible integration of AI in the company.

Furthermore, Europe has the potential to leverage the backbone of its economy – its high value-added manufacturing and industry base, which currently accounts for roughly 23% of GDP – to get ahead in the Internet of Things (IoT) and in Artificial Intelligence. However, today, much of this sector still largely operates in an analogue world. Missing the digitalisation boat (again) would not only put EU companies at a significant disadvantage vis-à-vis their competitors. It would also over time significantly impact the wider economy, be it in terms of growth, tax revenues or employment.

By creating an interconnected system of machines and adopting AI-powered technologies, European companies would obtain an ‘AI-multiplier’ effect. They would not only become more efficient, they would also be able to capture and analyse massive amounts of machine-generated data as a by-product of operations. A ‘smart factory’,24 for example, would generate data from automated manufacturing processes and warehouses.

Europe’s external challenge is the uneven pace at which AI is being developed around the world, with other jurisdictions enjoying structural advantages. Places like Silicon Valley, for example, have a unique economic framework geared to support disruptive innovations with strong commercial applications. It is also a place where the quintessential ingredient for AI, data, is more easily available. This is also the case in China, where the regulatory environment offers little in terms of privacy or control of personal data, and where major public and private investments continue to flow into AI development. This underscores the key role of cultural factors, giving China a strong advantage: 93% of Chinese customers are willing to share location data with their car manufacturer, compared to 65% of Germans and 72% of Americans, suggesting that China is more likely to become the hotbed of the ‘car data revolution’.25 While China continues to be an opportunity for European companies wanting to invest and expand abroad, it will also increasingly become a major competitor if Chinese firms can implement more advanced AI technologies and work with larger volumes of granular data.

China’s efforts on the corporate side are mirrored in academia, where Chinese researchers are currently publishing more journal articles on deep learning than their US or European counterparts (Figure 2). While Europe’s science and research base is comparatively strong, it suffers from a long-standing inability to turn promising inventions into genuine innovations, resulting in a scarcity of globally successful, sizable digital companies.26

Europe also lags behind the US and China on patent submissions27 and investments. Between 2002 and 2015, while the number of ICT patents submitted in India more than doubled and increased by as much as 50% in China, average submissions in the EU28 actually decreased over the same period.28

Figure 2: China leads the way on deep learning research (number of publications per year, 2007-2015, for China, the United States, Canada, Japan, Germany and South Korea)
In 2016, external investors poured between 900 million and 1.3 billion euro into European firms. But they invested between 1.2 and 2 billion euro in Asian companies, and 4 to 6.4 billion euro in North American ones. And, although some big European companies are investing in AI (ABB, Bosch, BMW, Siemens), internal corporate investments in AI were also much lower in Europe in 2016 (Figure 3).30

Figure 3: AI investments are lower in Europe. Internal corporate investments and external investments (venture capital, private equity and mergers & acquisitions), 2016: North America 12.2-18.8 billion euro; Asia 6.5-9.8 billion euro; Europe 2.5-3.3 billion euro. Source: McKinsey, 2017

Even if some European AI companies are performing well and succeeding in developing new AI technologies (DeepMind, Skype, etc.), they tend to be acquired by non-European companies at a later stage of development. The European continent at times functions as a de facto ‘incubator’ for others, unable to build up sizable, internationally-operating tech companies of its own. That hasn’t stopped tech companies – particularly US ones – from setting up new AI hubs in European countries to tap into the strong research base and highly qualified professionals.29

Europe should respond to its internal and external AI challenges by pursuing two goals: first, creating an enabling framework favouring investment in AI, and second, setting global AI quality standards.

Europe should emerge as a quality brand for AI

A lax approach to citizens’ digital rights may give short-term economic advantages. The easier it is to collect and process data, the lower the cost for companies to develop AI-powered solutions.31 Yet, pursuing a ‘Chinese’ model is neither possible nor desirable. Successive waves of technological advancement have essentially revolved around the empowerment of individuals. In the long run, there will not be digital ‘prosperity’ for countries that do not address issues related to the effect of technology on citizens’ well-being. If not addressed early on in the development of technology, the tension between users and misuse of technology might escalate at a later stage, when it can be more difficult to handle.

Conversely, Europe has an opportunity to set global standards to reach the highest level of welfare for citizens, gaining trust and thereby setting the ground for a stable and broad level of acceptance of the new technology, not only in Europe but, over time, also in other parts of the world. In the short term this can imply additional hurdles for companies willing to invest in Europe. However, in the long run it is likely that higher standards will prevail, so the companies that gain early trust among users could have a competitive advantage.
Steering AI to augment rather than substitute humans

Artificial Intelligence will not lead to the end of jobs. But this does not mean that no one will lose their job to machines. Rather, the expectations around jobs will be transformed. For example, Microsoft is deploying a technology to refine radiologists’ capacity to identify the boundaries of tumour cells and monitor their progress. However, this does not mean radiologists will be replaced by machines in any foreseeable future.32 There will be a place for humans in an AI-augmented society, but the focus must be on facilitating the transition and on providing support and security to those who are more likely to bear its costs. In the past, technological change has often meant resistance to change, which has only compounded job losses, without the upside of job gains that early technology leadership might have afforded.

Public policy should encourage the development of Artificial Intelligence aimed at establishing a symbiosis between humans and machines. Artificial Intelligence should be conceived as a complement to humans, not a substitute. The goal should be a society where people feel empowered, not threatened by AI. That is why skills-oriented actions, including retraining, as well as robust safety nets that accompany citizens during times of transition, are of utmost importance.
These principles may at first seem to limit the scope for AI development in Europe. Yet, the GDPR also
creates opportunities: companies will be incentivised to find innovative solutions in order to be able to
process data while remaining within the legal remit of the GDPR. Data could be kept ‘close’ to the data
subject with local processing on their devices, as envisaged in the ‘GoFair’ project.34 A UK start-up called
Anon AI is winning the trust of investors on its promises to use Artificial Intelligence to ‘share data securely
using a workflow tool that automatically anonymises and adapts changing datasets’. More generally, the
principle of accountability enshrined in the GDPR is set to foster the accuracy of data; it implies increasing
trust in the source of the data and the reliability of results. Studies show that mature information
governance is a determinant of business success and data protection can be seen as an enabler, not
a barrier, to innovation.35 Google Flu Trends’ ‘epic failure’36 shows that massive amounts of data do not
guarantee accurate outcomes; data quality, as fostered by the GDPR, is crucial too.
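To make the idea of privacy-preserving data handling more concrete, the sketch below shows, in Python, one simple way a firm might pseudonymise and generalise a dataset before analysis. It is a minimal illustration under invented assumptions: the column names and age bands are hypothetical, it is not the workflow of the ‘GoFair’ project or of Anon AI, and genuine GDPR compliance would require a proper legal and re-identification risk assessment.

```python
# Minimal sketch of an automated anonymisation step for a tabular dataset.
# Illustrative only: column names are hypothetical, and real GDPR-grade
# anonymisation requires a careful re-identification risk assessment.
import hashlib
import pandas as pd

def pseudonymise(value: str, salt: str = "rotate-me-regularly") -> str:
    """Replace a direct identifier with a salted, non-reversible hash."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:16]

def anonymise(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Drop direct identifiers that the analytics task does not need.
    out = out.drop(columns=["name", "email"])
    # Pseudonymise the customer ID so records can still be linked over time.
    out["customer_id"] = out["customer_id"].map(pseudonymise)
    # Generalise a quasi-identifier (age) to reduce re-identification risk.
    out["age_band"] = pd.cut(out.pop("age"), bins=[0, 30, 50, 120],
                             labels=["<30", "30-49", "50+"])
    return out

if __name__ == "__main__":
    raw = pd.DataFrame({
        "customer_id": ["c1", "c2"],
        "name": ["Alice", "Bob"],
        "email": ["a@example.eu", "b@example.eu"],
        "age": [28, 57],
        "spend_eur": [120.5, 80.0],
    })
    print(anonymise(raw))
```

Even such a basic step illustrates the trade-off the GDPR encourages: the analytical signal (spending by age band) is retained, while direct identifiers are dropped or hashed before the data is processed further.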
By respecting the legitimate right to privacy of users, AI technologies would be more readily accepted by
society at large, and can rapidly emerge as global standards, granting Europe a first-mover advantage. As
recently confirmed by Facebook’s Chief Operating Officer Sheryl Sandberg,37 big multinational companies
are likely to adopt GDPR-compliant business models worldwide, rather than inefficiently operating multiple
models in different regions. People around the world are becoming more, rather than less, concerned about
the potential misuse of their data. A recent study finds that 84% of US consumers are concerned about the
security of their personally identifiable information and 70% of them stated that their concern is greater
today than a few years ago.38
Much, however, will depend on implementation by Member States. With just a short time to go before
the entry into force of the GDPR, only Austria and Germany have adopted the necessary measures to make
their national systems compatible, including setting up national data protection authorities and designating
accreditation bodies. Diversity in enforcement by Member States or even regions risks erasing one of the
most important benefits of the GDPR for citizens and business: the creation of a uniform and predictable
approach to data protection across Europe.
The inherent AI bias

While augmenting humans’ capabilities, AI can also exacerbate existing power asymmetries and biases. The AI Now 2017 report, for instance, highlights how AI technologies enhance employers’ ability to oversee, monitor and assess the work of employees. Sophisticated automated software can be used to grasp sentiment in the text of e-mails and attach a ‘productivity risk’ to employees who are deemed to be likely to leave the company, for example.39

Prudence is called for because Artificial Intelligence increasingly powers the technologies that are rapidly becoming the essential analytical, communicational, and even legal, infrastructure for our societies. Algorithms are affecting the hiring processes of companies, communication between smartphone applications, and what content users see on Google, Twitter or Facebook.

Yet these technologies invariably reflect the background and bias of the source that programmed them. As such, the composition of the Artificial Intelligence development ecosystem is heavily skewed towards a population group with specific characteristics: developers are mostly white males, well-off, well-educated, with a strong inclination for high-tech.40

A non-diverse environment cannot design the new paradigm on which society will run without inevitably replicating its own bias. In the early 1970s, male developers introduced the first airbags and tested them with male-sized test dummies; the result was a 47% higher chance of serious injuries for seat-belted women drivers than for belted male drivers. It took the US national transport safety authority and automakers more than thirty years to introduce tests with women and child dummies. When it comes to digital technologies, we do not have that much time. Digital markets evolve very rapidly. A typical traditional Fortune-500 company would take twenty years to get to a market valuation of one billion dollars. Google, Uber and Snapchat got there in slightly more than eight, four and two years respectively.41

It seems unlikely that market forces alone would be able to generate the necessary response to effectively handle the issue of bias. Features like extremely strong network and scale economies imply that competition often happens for the market rather than in the market. Established online platforms that are subject to very low competitive pressure, both from inside the market and from potential new entrants, are unlikely to pay a price in terms of loss of users if they increase participation costs, change their privacy settings, or even if their reputation is compromised. In such a context, companies have too few incentives to swiftly correct the implicit bias of their algorithms.42

Even assuming that all bias could be corrected at the development stage by well-intentioned tech companies and programmers, AI technologies generate even deeper concerns. Accurate algorithms are of little use if the source of the bias is not only in the composition of the data sample or of the developing ecosystem, but rather is a by-product of the way people think or, more broadly, the structure of the society they live in. Google’s sentiment analysis attaches a neutral value to words such as ‘straight’ but a negative value to ‘homosexual’, because it draws from the environment in which those words are placed, and it seems it is more likely that negative connotations are attached to minorities in online chats.43 Microsoft’s Tay chatbot ‘became’ racist a few hours after its launch, because it learned to do so from interacting with other users on Twitter.44
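The mechanism described above, a model absorbing the prejudices of the text it learns from, can be reproduced in a few lines of Python. The ‘corpus’ below is deliberately tiny and invented, and the scoring rule is a toy log-odds measure rather than Google’s actual system; it simply shows that a neutral identity term ends up with a negative score purely because of the documents in which it happens to appear.

```python
# Toy illustration of how a sentiment model inherits bias from its corpus.
# The 'corpus' below is invented: identity terms appear disproportionately
# in negative documents, so a purely data-driven score marks them negative
# even though the words themselves are neutral.
from collections import Counter
from math import log

corpus = [
    ("the film was great and heartfelt", +1),
    ("a straight love story, warm and funny", +1),
    ("brilliant acting, a joy to watch", +1),
    ("awful plot about a homosexual character, boring", -1),
    ("the homosexual subplot was dull and terrible", -1),
    ("tedious, grim and badly written", -1),
]

pos_counts, neg_counts = Counter(), Counter()
for text, label in corpus:
    (pos_counts if label > 0 else neg_counts).update(text.split())

def word_sentiment(word: str) -> float:
    """Smoothed log-odds of a word appearing in positive vs negative texts."""
    p = pos_counts[word] + 1  # Laplace smoothing
    n = neg_counts[word] + 1
    return log(p / n)

for term in ["great", "terrible", "straight", "homosexual"]:
    print(f"{term:12s} score = {word_sentiment(term):+.2f}")
# 'homosexual' receives a negative score purely because of where it appears
# in the training data, not because of any intrinsic meaning.
```

Scaled up to billions of web documents, the same dynamic produces the skewed associations reported for real sentiment tools.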
AI can indeed bring the consequences of power asymmetries to an extreme, with discrimination as a key risk. While discrimination as such is not always bad (cinemas ‘discriminate’ in favour of students by offering them discounted tickets, for example), discrimination enabled by massive amounts of data and sophisticated algorithms can challenge the very fundamentals of our societies.

Predictive analysis could enable the circumvention of laws that prevent discrimination on the basis of race or sexual orientation, for example, or even impose higher insurance premiums on people with a higher likelihood of falling ill. It is therefore of utmost importance that the task of defining the fundamentals of the new ‘digital society’ is not left in the hands of developers alone. Instead, it has to be, at least partly, a function of public policy. And policymakers should set the necessary framework conditions before AI advances much further.

A ‘Hippocratic Oath’ for AI?

A Hippocratic Oath has traditionally been sworn by physicians, requiring them to uphold specific ethical standards, such as non-maleficence. In recent times, there have been calls to formulate a Hippocratic Oath for developers and technologists, in particular those working on AI. As the remit of technology now covers areas that go to the very heart of human well-being, many feel that a more robust ethical compass is needed to guide development and ensure accountability. Even industry leaders have made this demand.45
Commission’s planned initiative on ‘Industrial Data Spaces’51 – modelled after the ‘German Industrial Data Space’ – as a platform favouring secure data exchange in business ecosystems on the basis of standards and by using collaborative governance models.52

• Enabling infrastructure investment and designing a favourable regulatory framework for AI inputs. New regulation should unlock investment and access to the key infrastructures needed for developing AI solutions, namely telecom infrastructure and high-performance computing (HPC) facilities. A number of initiatives of the European Commission’s Digital Single Market Strategy have exactly that purpose. The newly proposed Telecoms code lays down measures to incentivise investment in fast and ultra-fast broadband connections as well as a rapid uptake of advanced wireless 5G technologies.53 Fifteen EU Member States have signed an agreement in support of the European Commission’s plans to establish a multi-government cooperation framework for acquiring and deploying an integrated next-generation supercomputing infrastructure.54 These measures now require urgent political prioritisation because without competitive high-performance computing capabilities and high-speed connectivity, Europe will quite literally not have the necessary speeds and bandwidths to build and support the business models of the future.55

• Promoting the development of AI hubs and excellence in AI research. Innovation ecosystems bring together researchers and scientists with businesses and private investors, stimulating growth through the aggregation of complementary skills and resources, such as start-up incubators, fablabs, co-working spaces and three-dimensional printing. The European Commission is investing 100 million euro per year from 2016 to 2020 to create digital innovation hubs across the EU in several business fields.56 In order to boost the development of AI in Europe, a significant part of that funding should be geared to support AI innovation within these digital hubs. Germany’s Max Planck Society, for example, is creating the world’s leading hub for AI around Stuttgart and Tübingen, bringing together academic institutions (two technical universities) and industry (six leading companies) to boost research in Artificial Intelligence. Recently, Amazon decided to open an AI research centre in this region to benefit from the existing ecosystem.57

Europe does not lack centres of research excellence in Artificial Intelligence, as it in fact accounts for the largest share of top AI research institutions worldwide.58 However, universities are often left in a vacuum, lacking connections with other research institutions, without significant backing from public funding, and in unsystematic relationships with companies. European universities’ AI labs do not have the resources to scale up and become interconnected powerhouses capable of working on ambitious large-scale research projects or commercial applications.

To address this, the European Commission should foster the creation of a permanent network of AI research institutions and back the scaling-up of AI labs with appropriate public and private funding, as is done elsewhere. The US government, for instance, invested 800 million euro in unclassified AI research and development in 2015. The government of South Korea is investing jointly with Korean companies some 730 million euro to support the creation of public-private AI research centres.59 Top European AI research needs to have the means to compete with these initiatives – and do so quickly, before others pull ahead in markets in which first-mover advantage is real. Policy action needs to foster a conducive ecosystem, centred around a strong and mutually beneficial relationship between EU universities and AI companies.

Funding should also be used to incentivise the creation of richer, more diverse development and research environments for AI, to minimise the risk of biased outcomes. This could be done through scholarships for researchers from different disciplines and of different gender or ethnicity.

• Supporting the creation of a European Artificial Intelligence Platform. Such a pan-European platform could play a role as advisory body, bringing together different stakeholders (representatives from top universities and research institutions; EU, national and regional public authorities; enterprises, investors and local communities) from multiple sectors (ICT, services, manufacturing, financial, etc.) to identify bottlenecks in the AI ecosystem and advise on possible public policy measures to enable faster growth in the development of AI technologies in Europe. The stakeholder platform would have an instrumental role in spelling out the obstacles, be they financial, institutional or regulatory, that slow down adoption of AI technologies, especially by SMEs and the public sector.
Directive66 which exempts online intermediaries from liabilities related to content they unknowingly host on their platforms. That status was originally granted in order to promote growth in the sector, which held – and still holds – the promise of bringing huge benefits to society. However, it is time to acknowledge that those technologies have now become so ubiquitous that, on many counts, they serve a quasi-public purpose. It would be naïve to expect that mismatches between the objectives of these privately-run businesses and the public interest would not lead to serious consequences for society. While questioning the neutrality principle enshrined in the e-Commerce Directive would seem premature at this stage, all these elements suggest that public authorities need urgently to adapt traditional institutional and policy tools to the digital age.

• Address market distortions and power asymmetries. The digital age has on occasion created asymmetries between providers and users. On balance, traditional policy tools developed in the analogue age appear fit for dealing with such instances, but they need to be adapted to the new digital environment in order to be effective. That, for instance, has been the logic underlying a number of initiatives launched by the European Commission in recent years, including the General Data Protection Regulation, but also parts of the Digital Single Market Strategy, such as the e-Privacy Regulation, consumer protection for digital goods and services, and the ban on geo-blocking.67

• Among the most effective traditional tools is competition policy. A proper enforcement of merger control, antitrust and state aid rules can prevent market distortions and the creation of bottlenecks in the digital value chain. By forcing companies to compete on the basis of merit, competition policy contributes to ensuring that market rewards are distributed to players that innovate and offer the best quality to their customers. It also empowers users. Competition reduces the ability of suppliers to glean value from customers through algorithm-empowered discrimination. For example, strong competition in the insurance market lowers premiums for users, limiting the ability of insurers to extract value from their clients through the use of predictive analysis. Adapting competition policy, however, requires catching up with a fast-evolving business environment. Antitrust enforcement, for instance, needs to accelerate, while antitrust tools must be refined in order to stop AI from being used by companies to break the law, for example by coordinating prices. Likewise, merger control should take into account the implications of a reduction in market competition, which might allow merged companies to use AI technologies to discriminate against their users or induce them to hand over more personal data to access their services. Importantly, merger control should be fine-tuned to capture acquisitions which may have a significant impact on competition in the future but that today escape the scrutiny of authorities because they are below notification turnover thresholds.

• Encouraging the public sector to lead by example. Europe’s public sector, including the European Commission itself, can play an important role in demonstrating leadership and incentivising businesses to follow suit. This would require access to modern computing facilities and revising human resources policies in order to attract and retain people with AI, machine learning and data analytics skills. It would also suppose a thorough restructuring of internal processes and hierarchical structures, which experience suggests has a higher chance of success if led by external actors, so-called ‘digital architects’, or by internal ad-hoc task forces with a mandate for innovation and disruption. AI-powered decisions taken by ‘augmented public officials’, be they high-court judges, police officials or European Commission employees, should not stem from ‘black boxes’. They must be available for public auditing, testing and review, as well as subject to accountability standards.68

4) Steer: Guarantee a human-centric approach to AI

In order to succeed in the AI race while preserving its own cultural preferences, Europe needs to address potential social risks and establish an EU AI quality branding distinguishing it from the lax approach exhibited by other jurisdictions. This could be achieved through a well-defined action plan, led by the European Commission, to steer AI towards compatibility with EU principles. Such a plan should focus on building the necessary expertise at EU level to monitor the evolution of AI technologies in Europe, as well as on gaining the legitimacy to establish quality standards and the authority to enforce them (Figure 7).

Figure 7: An EU action plan for a human-centric Artificial Intelligence. A European Action Plan for Human-Centric AI comprising: monitoring; social-system analysis; AI quality standards; enforcement; and global multilateral engagement on AI. Source: European Political Strategy Centre
The central elements of this action plan should include:

• Monitoring and periodically reporting on the general evolution of AI technology. Sophisticated statistical indicators should be developed at EU level to quantify the uptake of AI technology in all its forms, not only robotics, but also the use of automated services based on machine learning. Based on this, areas of concern or improvement could be identified, prompting a discussion around the most effective public policy measures to address them.

• Introducing social-system analysis.69 Researchers and experts from different disciplines, government and business representatives should assess the social and economic impact of the introduction of new AI technologies, for different communities, through different dimensions of analysis, be they economic, social, historical, ethical, or anthropological. The potential effect of biased algorithms and the implications of discriminatory practices should be assessed, and the results of the assessment should inform public opinion, as well as the definition of potentially corrective regulatory measures.

Setting Universal Ethical Standards

In 2016, the global engineering association IEEE launched an initiative to recommend policy guidelines to foster an ethically aligned design of Artificial Intelligence. An explicit goal of IEEE is to have an inclusive approach to cultures, for example drawing insights from Buddhism or Confucianism to address the risk of designing an ethical code resting only on Western values and principles.70

• Defining AI quality standards, including the necessary levels of transparency for algorithmic processes, as well as obligations for private and public entities using AI-powered technologies to ensure the absence of bias. Principles such as the need for AI to be ‘lawful by design’ should be promoted, so that the respect of laws is embedded in AI technologies such as algorithms when they are designed by developers.71 ‘Lawful by design’ algorithms would overcome an intrinsic problem of machine learning and neural network technologies, particularly where these are able to ‘learn’ and evolve by themselves and often escape the control of their initial creators. Standards should also embed a ‘human in the loop’ principle in new AI technologies so that they are conceived to augment human abilities – not to fully substitute them but rather to complement them. A ‘human in the loop’ principle would include periodic tests and retraining to ensure that humans would still be able to perform the task in question in case of a technology breakdown (see the illustrative sketch after this list).

• Enforcement. The EU should be empowered with the necessary tools to effectively enforce the quality standards it defines on AI. Mechanisms should be developed to determine when technologies deviate from those standards and to verify that requirements are met at the moment of deployment and launch. There should be public reporting of identified violations of quality standards. Where appropriate, identified potential infringements of applicable regulations – such as privacy, consumer protection, or competition laws – should be redirected to the relevant enforcing authority.
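As a purely illustrative reading of the ‘human in the loop’ principle referred to in the quality-standards point above, the Python sketch below gates automated decisions behind a confidence threshold and a list of sensitive attributes, and keeps an audit log so that every decision can later be reviewed. The names, thresholds and categories are invented assumptions, not a prescribed EU standard.

```python
# Minimal sketch of a 'human in the loop' gate: the model only acts alone
# when it is both confident and outside a sensitive category, and every
# decision is logged so that it can be audited later. All thresholds and
# field names below are invented for illustration.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Decision:
    case_id: str
    prediction: str
    confidence: float
    decided_by: str  # "model" or "human"

@dataclass
class HumanInTheLoopGate:
    model: Callable[[dict], tuple]          # returns (prediction, confidence)
    human_review: Callable[[dict], str]     # fallback human reviewer
    confidence_threshold: float = 0.9
    sensitive_fields: tuple = ("health", "ethnicity")
    audit_log: List[Decision] = field(default_factory=list)

    def decide(self, case: dict) -> Decision:
        prediction, confidence = self.model(case)
        needs_human = (
            confidence < self.confidence_threshold
            or any(f in case for f in self.sensitive_fields)
        )
        if needs_human:
            prediction, decided_by = self.human_review(case), "human"
        else:
            decided_by = "model"
        decision = Decision(case["id"], prediction, confidence, decided_by)
        self.audit_log.append(decision)  # retained for external auditing
        return decision

# Example usage with stub model and reviewer.
gate = HumanInTheLoopGate(
    model=lambda case: ("approve", 0.72),
    human_review=lambda case: "approve with conditions",
)
print(gate.decide({"id": "A-17", "income": 35_000}))
```

The design choice worth noting is that the human reviewer and the audit trail are part of the system’s normal operation rather than an exception path, which is what keeps humans able to perform the task if the technology fails.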
Military applications of Artificial Intelligence

Advances in the fields of Artificial Intelligence are changing the face of defence and warfare. Robots with autonomous weapons capabilities have been deployed in the Korean Demilitarized Zone, for example, and the US and China are currently in the lead in developing these weapon systems. But many see the prospect of autonomous lethal weapons, or killer robots, with great concern.

In August 2017, Elon Musk and Alphabet’s Mustafa Suleyman led an initiative of 116 of the world’s leading robotics and artificial intelligence pioneers to call for a ban on the development and use of killer robots. Stuart Russell, founder and Vice-President of Bayesian Logic, commented: ‘Unless people want to see new weapons of mass destruction – in the form of vast swarms of lethal microdrones – spreading around the world, it’s imperative to step up and support the United Nations’ efforts to create a treaty banning lethal autonomous weapons’. A similar initiative is the Campaign to Stop Killer Robots (https://www.stopkillerrobots.org), which is supported by a number of non-profit organisations. However, on balance, these efforts are too marginal given the formidable threat that AI-powered weapons systems pose.

That is why it is urgent for the European Union to take the lead in an international multilateral discussion around the use of Artificial Intelligence for military purposes, and to promote global solutions, including blanket bans. For a vivid illustration of the potentially destructive power of lethal drones, watch this video: http://autonomousweapons.org/slaughterbots/.
Notes
1. Nilsson, N. J., ‘The quest for artificial intelligence’, Cambridge University Press, 2009.
2. Ng, L. S. A., ‘Why AI is the new electricity’, in Nikkei Asian Review Online, 27 October 2016.
3. Bitvore Corporation, Applications of Material Intelligence. See https://bitvore.com/competitive-intelligence-gathering/.
4. Gartner, ‘Top 10 Strategic Predictions for 2017 and Beyond. The Storm Winds of Digital Disruption’, October 2016.
5. International Data Corporation (IDC), ‘Worldwide Semiannual Cognitive/Artificial Intelligence Systems Spending Guide’, October 2016.
6. PwC, ‘Sizing the prize – What’s the real value of AI for your business and how can you capitalise?’, June 2017.
7. Accenture, ‘Why Artificial Intelligence is the Future of Growth’, June 2016.
8. European Political Strategy Centre, ‘Enter the data economy’, 11 January 2017.
9. Venture Scanner, ‘Artificial Intelligence Startup Highlights’, Q4 2017.
10. CB Insights, ‘The Race for AI: Google, Baidu, Intel, Apple in a Rush to Grab Artificial Intelligence Startups’, July 2017.
11. MIT, ‘Using Artificial Intelligence to improve early breast cancer’, October 2017.
12. Artificial Intelligence for Digital Response. See http://aidr.qcri.org/.
13. BBC News, ‘What a Facebook experiment did to news in Cambodia’, 31 October 2017, explains the power of social media as publishers and businesses, using the example of Facebook’s ‘Explore’ experience, whereby six countries had their posts restricted to the Explore Feed unless they paid a fee to appear in the general timeline.
14. The State Council of The People’s Republic of China, ‘Guidelines on Artificial Intelligence Development’, July 2017.
15. Government of Singapore, Prime Minister’s Office, Singapore AI Strategy (AI.SG), National Research Foundation, ‘Artificial Intelligence R&D Programme’, October 2017.
16. Government of Japan, ‘AI Research Centre’.
17. Zastrow, M., ‘South Korea’s Nobel Dream’, Nature, 534(7605), 19-22, 2016.
18. MIT Technology Review, ‘China has a new three year plan to rule AI’, 15 December 2017.
19. Government of Canada, Canadian Institute for Advanced Research, ‘Pan-Canadian Artificial Intelligence Strategy’, March 2017.
20. Executive Office of the President, National Science and Technology Council Committee on Technology, ‘Preparing for the Future of Artificial Intelligence’, October 2016.
21. République Française, ‘#FranceIA: the national artificial intelligence strategy is underway’, 26 January 2017.
22. European Commission, ‘Europe’s Digital Progress Report’, 2017.
23. Organisation for Economic Cooperation and Development, ‘Data-Driven Innovation: Big Data for Growth and Well-Being’, October 2015.
24. Deloitte, ‘The smart factory: responsive, adaptive, connected manufacturing’, August 2017.
25. McKinsey & Company, ‘Car data: paving the way to value-creating mobility – Perspectives on a new automotive business model’, March 2016.
26. McKinsey Global Institute, briefing note prepared for the EU Leaders Tallinn Digital Summit, ‘10 imperatives for Europe in the age of AI and automation’, September 2017.
27. The Economist, Wuzhen Institute, ‘China may match or beat America in Artificial Intelligence’, July 2017. See https://www.economist.com/news/business/21725018-its-deep-pool-data-may-let-it-lead-artificial-intelligence-china-may-match-or-beat-america.
28. Organisation for Economic Cooperation and Development (OECD), Science, Technology and Industry Scoreboard 2017, ‘The Digital Transformation’, 2017.
29. Wired, ‘Europe is leading the way in Artificial Intelligence and Machine Learning’. See http://www.wired.co.uk/article/deep-tech-europe-hubs.
30. McKinsey Global Institute, ‘10 imperatives for Europe in the age of AI and automation’, September 2017, and ‘Europe’s economy: Three pathways to rebuilding trust and sustaining momentum’, January 2018.
31. Shanghai News, ‘Shanghai will be at the heart of China’s artificial intelligence’. See further: http://www.shanghai.gov.cn/shanghai/node27118/node27818/u22ai87316.html.
32. Reiner, B. & Siegel, E., ‘Radiology reporting: returning to our image-centric roots’, in American Journal of Roentgenology, 187(5), 1151-1155, 2006.
33. European Commission, ‘Stronger protection, new opportunities – Commission guidance on the direct application of the GDPR Regulation as of 25 May 2018’, January 2018.
34. GOFAIR Initiative, ‘A bottom-up international approach’, see https://www.go-fair.org/.
35. Information Commissioner’s Office (UK ICO), ‘Big data, artificial intelligence, machine learning and data protection’, September 2017.
36. Wired, ‘What we can learn from the epic failure of Google flu trends’. See https://www.wired.com/2015/10/can-learn-epic-failure-google-flu-trends/.
37. Techcrunch, ‘Facebook to roll out global privacy settings hub – thanks to GDPR’. See https://techcrunch.com/2018/01/24/facebook-to-roll-out-global-privacy-settings-hub-thanks-to-gdpr/.
38. Pew Research Center, ‘Americans and Cybersecurity’. See http://www.pewinternet.org/2017/01/26/americans-and-cybersecurity/.
39. AI Now Institute, ‘AI Now Report’, November 2017.
40. Atlassian, ‘State of Diversity Report 2017’, March 2017.
41. World Economic Forum, ‘White Paper – Digital Transformation of Industries’, January 2016.
42. For example, see Techcrunch on Facebook’s failure to enforce its own anti-discriminatory policy: https://techcrunch.com/2017/11/22/facebooks-ad-system-shown-failing-to-enforce-its-own-anti-discriminatory-policy/?utm_medium=TCnewsletter.
43. The Inquirer, ‘Google’s AI is already associating ethnic minorities with negative sentiment’. See https://www.theinquirer.net/inquirer/news/3019938/googles-ai-is-already-associating-ethnic-minorities-with-negative-sentiment.
44. The Verge, ‘Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day’. See https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist.
45. Smith, B., ‘The Future Computed: Artificial Intelligence and its role in society’, 2018. See https://blogs.microsoft.com/blog/2018/01/17/future-computed-artificial-intelligence-role-society/.
46. AI Now Institute, ‘AI Now Report’, November 2017.
47. European Commission, Communication on Building a European Data Economy, January 2017.
48. European Commission, ‘Regulation on the Free Flow of non-personal Data’, September 2017.
49. Inria Saclay – Île-de-France Research Centre, ‘Marc Schoenauer to work with Cédric Villani to define an AI strategy for France’. See https://www.inria.fr/en/centre/saclay/news/marc-schoenauer-to-work-with-cedric-villani-to-define-an-ai-strategy-for-france.
50. Federal Government of Germany, Federal Ministry for Economic Affairs and Energy, Federal Ministry of the Interior and Federal Ministry of Transport and Digital Infrastructure, ‘Digital Agenda’.
51. European Commission, ‘Legislative priorities for 2018-2019’, 14 December 2017.
52. Industrial Data Space Association. See http://www.industrialdataspace.org/en/.
53. European Commission, ‘Directive establishing the European Electronic Communications Code’, September 2016.
54. European Commission, ‘EU Ministers commit to digitising Europe with high-performance computing power’, March 2017.
55. European Political Strategy Centre, ‘Connected-Continent for a Future-Proof Europe’, July 2016.
56. European Commission, Smart Specialisation Platform. See http://s3platform.jrc.ec.europa.eu/digital-innovation-hubs-tool.
57. Financial Times, ‘Germany’s Cyber Valley aims to become the leading Artificial Intelligence hub’. See https://www.ft.com/content/1d0b2770-7226-11e7-93ff-99f383b09ff9; and TechCrunch, ‘Amazon to open visually focused Artificial Intelligence research hub in Germany’. See https://techcrunch.com/2017/10/23/amazon-to-open-visually-focused-ai-research-hub-in-germany/.
58. Of the top 100 AI research institutions, Europe has 32, the US has 30 and China has 15. For a full list see Atomico’s ‘State of European Tech 2017’.
59. McKinsey Discussion Paper, ‘Artificial Intelligence, the Next Digital Frontier?’, 2017.
60. Venture Beat, ‘The biggest roadblock to Artificial Intelligence adoption is a lack of skilled workers’. See https://venturebeat.com/2017/11/04/the-biggest-roadblock-in-ai-adoption-is-a-lack-of-skilled-workers.
61. The New York Times, ‘Tech giants are paying huge salaries for scarce AI talent’. See https://www.nytimes.com/2017/10/22/technology/artificial-intelligence-experts-salaries.html.
62. European Commission, Europe’s Digital Progress Report 2017, ‘Human Capital: Digital Inclusion and Skills’, 2017.
63. European Commission, ‘A New Skills Agenda for Europe’, June 2016.
64. The Guardian, ‘Our minds can be hijacked’. See https://www.theguardian.com/technology/2017/oct/05/smartphone-addiction-silicon-valley-dystopia.
65. The Telegraph, ‘Facebook should identify children at risk of mental health problems, Jeremy Hunt says’. See https://www.telegraph.co.uk/news/2018/02/12/facebook-should-identify-children-risk-mental-health-problems/.
66. Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (‘Directive on electronic commerce’).
67. European Commission, ‘Shaping the Digital Single Market’, March 2015.
68. Crawford, K. and R. Calo, ‘There is a blind spot in AI research’, in Nature, 20 October 2016, doi: 10.1038/538311a.
69. AI Now Institute, ‘AI Now Report’, November 2017.
70. IEEE Standards Association, ‘The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems’. See https://standards.ieee.org/develop/indconn/ec/autonomous_systems.html.
71. An illustrative example could be that algorithms should be prevented from fostering collusion by helping companies to converge on the same (non-competitive) price levels.