White Paper On AI - AK Europe
June 2020
Digital
White Paper
On Artificial Intelligence – a European
approach to excellence and trust
Executive summary
The European Commission presented a White Paper on Artificial Intelligence (AI), which outlines key points for the future approach to these technologies.

The Austrian Federal Chamber of Labour (AK) welcomes the fact that this issue is being addressed at the European level, and we would like to use this position paper to make a contribution to this important discussion, which will have far-reaching effects on future life and work:

Artificial Intelligence will play an important role in the future and will be even more pervasive in all parts of life than it already is. These technologies have already been integrated into numerous applications, often without us even noticing.

They can support us, they can solve tasks independently that would not be manageable without AI and the use of data, and they can make our everyday lives more comfortable and secure. However, they can also keep us under surveillance, they can make decisions with far-reaching consequences for us – without us being aware of it – and they can sometimes make mistakes.

This makes it all the more important to find a framework that, on the one hand, makes the advantages of Artificial Intelligence available and distributes the benefits fairly, but, on the other hand, minimises the risks and disadvantages and creates transparency at the same time.

The AK would also like to emphasise that accompanying measures in the field of education are also necessary in the context of AI: on the one hand to counteract the shortage of skilled workers, and on the other hand to deal with any structural changes in the labour market that may be caused by the use of AI if jobs are lost as a result. Therefore, measures in the field of education and further training (especially for women) should be taken right from the beginning.

In this position paper, the AK would like to highlight three areas that are of particular importance and deserve special attention in the context of Artificial Intelligence:

• AI and employees

• AI and consumers

• AI and the environment

The use of Artificial Intelligence is particularly sensitive in these areas. While the White Paper addresses these issues, further discussion is needed to ensure that AI is properly managed.
The AK’s position
1. AI in the working environment

The European Commission's White Paper on Artificial Intelligence discusses a series of structured European approaches to achieve "excellent" scientific breakthroughs and European technological leadership in the development of Artificial Intelligence in the European Union on the one hand, and to establish broad confidence – mainly through the consideration of legislative measures – in the use and application of Artificial Intelligence in the European Union on the other. A number of existing programmes and approaches are brought together and, in some cases, restated in this report. As welcome as European initiatives in this certainly critical area are, it is regrettable that the White Paper does not adequately address the issues of the working world and workers' participation at company and industry level – also in terms of a comprehensive European and national approach involving the most important stakeholders – in the development, introduction and application of Artificial Intelligence. The social partners are presented as essential "stakeholders" in a "human-centred approach to AI", which is undoubtedly to be welcomed. However, neither the approach itself nor the type of involvement is further specified.

The White Paper outlines some issues regarding work and workers which should be positively highlighted. For example, the EC speaks of a "human-centred approach to AI at work" (which should, however, be defined more precisely), points to the danger of AI-powered monitoring of workers, as well as discrimination (for example in the selection of personnel), and it mentions the need to develop the skills necessary to work with AI. The role of the social partners is also appreciated in some measures. In addition, "the use of AI applications for recruitment processes as well as in situations impacting workers' rights would always be considered 'high-risk'". In this context, it would be preferable that the use of certain AI applications should not be allowed in the working world at all. Not only applications affecting workers' rights, but rather all applications affecting the realities of work and working conditions should be treated as "high risk". In this respect, interest groups should always be given the right to have a say or a veto. In the following, the AK would like to briefly outline the fundamental challenges from the point of view of workers and workers' representatives:

• Artificial Intelligence will dramatically change working conditions for employees, and the first signs of this are already visible (e.g. use of application or career tools, automated bonus calculations and similar). The economic opportunities arising from the use of Artificial Intelligence should be utilised, but – because the technical possibilities for monitoring at the workplace and the use of employee data are increasing significantly – workers' rights must also be protected.

• Due to the increasing use of Artificial Intelligence in the working world, this task will become more important than ever in the future, and the corresponding challenges are enormous: IT systems (laptops, smartphones, computerised and networked machines and equipment) are becoming increasingly diverse and complex. The amount of usable worker data generated is growing exponentially, and the technical possibilities for linking and analysing these data are becoming more and more sophisticated and predictive – even including the creation of behavioural predictions of workers (profiling) and the use of automated decision-making in the area of personnel, e.g. personnel administration, personnel and career planning, personnel information systems etc.

• The protection of workers from data processing that jeopardises their interests depends above all on their rights of co-determination: these can be rights to consent or veto for individual workers with regard to data processing, but above all – in light of the position of inferiority of individual workers in negotiations vis-à-vis the employer (see also the next point) – rights to information, co-determination, consent and veto for works councils, unions and other interest groups for workers.
• Furthermore, it should be noted that any consent of employees to the use of their personal data cannot be considered voluntary, because there are no equal contractual partners in an employment relationship. Workers in an employment relationship hardly ever claim their rights – due to the imbalance of power – i.e. they do not make use of the opportunity to lodge a complaint with the data protection authority (for example due to a violation of data protection laws) or to take legal action (for example because personal rights have been interfered with or compromised too much). To counteract this, strong co-determination rights for works councils and unions are needed to enhance the representation of workers' interests (as well as an explicit right to class action in order to facilitate the assertion of rights).

• It should also be pointed out that information provided on data processing is often inadequate and usually formulated in a manner that is not easily understood, so that transparency and information for employees and their representative bodies are rarely sufficient in practice (for example, surveys of the employees affected and their representative bodies within the framework of the data protection impact assessment under the GDPR are also very inadequate or are omitted completely in most cases known from experience).

Three specific points are presented in the following where the above-mentioned topics could be introduced in the sense of a co-determined approach at company and industry level with the involvement of all relevant interest groups:

1. Co-determined national centres: The Commission proposes to facilitate the creation of excellence and testing centres (Action 2), using the resources of the "Digital Europe" programme. The Digital Europe programme provides funding for the establishment of "digital innovation hubs". Reference could be made here to the European Economic and Social Committee's opinion on the Proposal for a Regulation of the European Parliament and of the Council establishing the Digital Europe Programme 2021-2027, which states that social partners and civil society should be involved in the establishment of the centres/hubs and should be given access to them. Furthermore, it would also be necessary to establish more far-reaching transparency and information obligations towards the public. This would ensure a direct dialogue with stakeholders in the working world and a "bottom-up approach" from the outset, in the sense of a "people-centred approach".

In Action 3 ("Establish and support ... networks of leading universities and higher education institutes ..."), involvement of the social partners is described as "crucial" and the EC explicitly mentions a human-centred approach, which is welcome and should also be the case for Action 2. In addition, it should be critically noted that – as in the draft "Digital Europe" programme – competence requirements relate mainly to "high-level skills" in connection with AI professionals, and no focus is placed on application skills across the whole of the workforce or among future employees who are to work with AI (this refers to "other actions" and "other areas" in section 1 of the consultation questionnaire).

2. Use of Artificial Intelligence in the working world should by definition be "critical": The European Commission proposes two cumulative criteria for assessing whether an AI application is critical:

• use in a "critical sector", and
• use in such a manner that significant risks are likely to arise.

For applications used in "recruitment processes as well as in situations impacting workers' rights", the EC considers that these are associated with considerable risks in any case. This basic attitude is welcome, but the question arises as to how to assess in advance whether an application will affect workers' rights. Instead, an approach should be adopted in which every application used in working life is in principle regarded as carrying considerable risk, because it can have a massive impact on the quality of work, working conditions and, not least, the quantity of work, as demonstrated several times in the White Paper. Here, too, these applications must be subject to co-determination at company and industry level. This means that the assessment of whether an application can be introduced, or whether it should be considered as not significantly risky after all, should necessarily be carried out in a structured way involving the social partners (also in the sense of a precautionary principle). It would be conceivable, for example, that this assessment could be reserved for national social partners or for co-determination at company level, depending on the legal situation in the relevant country. Moreover, certain applications in employment relationships, namely automated decisions in individual cases and profiling, should be banned altogether because of their particularly drastic impact on working conditions.
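To make the difference between the two assessment logics concrete, here is a minimal sketch in illustrative Python. The type and function names are our own shorthand, not terminology from the White Paper; the sketch only makes explicit that the Commission's criteria are cumulative (both must be met), whereas the approach argued for above would treat every application used in working life as high-risk from the outset.

```python
# Minimal sketch of the two assessment rules; all names are illustrative only.
from dataclasses import dataclass

@dataclass
class AIApplication:
    in_critical_sector: bool     # e.g. health, transport
    significant_risk_use: bool   # used in a manner likely to create significant risks
    used_in_working_life: bool   # deployed in an employment context

def high_risk_commission(app: AIApplication) -> bool:
    """White Paper proposal: the two criteria are cumulative (logical AND)."""
    return app.in_critical_sector and app.significant_risk_use

def high_risk_ak(app: AIApplication) -> bool:
    """AK position: any application used in working life is high-risk per se."""
    return app.used_in_working_life or high_risk_commission(app)

# A workplace tool outside a "critical sector" escapes the Commission's test,
# but not the AK's:
tool = AIApplication(in_critical_sector=False, significant_risk_use=False,
                     used_in_working_life=True)
print(high_risk_commission(tool), high_risk_ak(tool))  # False True
```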
Automated decisions in individual cases and profiling in employment relationships are considered particularly critical by the AK: According to Article 8 of the EU Charter of Fundamental Rights, everyone has the right to the protection of his or her data. Restrictions of this right are only permissible to protect the predominant legitimate interests of another. Even then, however, only the most minimal form of interference with fundamental rights is allowed. In practice, however, this fundamental right seldom receives the status it deserves in daily working life. Trends over the last five years in personnel administration and business organisation are moving in the direction of a data economy that requires more and more data for more and more purposes. The situation for employees under data protection law has not improved significantly, even as a result of the GDPR. This is further reinforced by the EU-wide promotion of a data-driven economy. This refers to data with and without personal reference, as well as data from which personal reference has been removed and which have thus been anonymised. However, regarding the latter category, experts admit that algorithms can trace back almost any process of anonymisation through progressive machine learning. In other words: workers become re-identifiable.

Profiling, scoring and behaviour forecasting, as well as automated decision-making using algorithms, machine learning and Artificial Intelligence, can certainly pose a massive threat to workers' interests. Employee behaviour, personal characteristics, etc. should only be analysed, classified or forecasted under stricter security measures. The two-step test proposed in the White Paper only partially addresses this concern. In our opinion, automated decisions in individual cases and profiling in the employment relationship are not necessary and must therefore not be permitted. Employees fear for the appreciation of their human work: are workers still perceived as individual persons, or will human work in the future be increasingly defined and evaluated as a set of automated and (easily) automatable processes? From this arises the danger that, with the translation of all areas of work into a "data world" in which faith in technology is given the highest priority and primary consideration, one arrives at an absolutely technology-centred and thus inhumane concept of the human being in the work process. The job performance of workers is increasingly expressed in figures, measured, compared and analysed, and decisions and forecasts are made automatically as a result. The human being in the workplace is degraded to a mere measurable production and cost factor; both the immaterial value of work and the dignity of the workers are lost. Protection of human dignity and personal rights must also be ensured in the performance of work. The European Union must make a clear commitment to this and therefore prohibit both automated decisions in individual cases and profiling in the employment relationship. This must also apply to "merely" preparatory, "semi-automated" evaluations for human decisions.

3. Transparency AND traceability/intelligibility as key requirements: In its White Paper the European Commission refers to the seven key requirements for Artificial Intelligence, developed by the expert group it set up and welcomed by the Commission. These are:

• Human agency and oversight,
• Technical robustness and safety,
• Privacy and data protection,
• Transparency,
• Diversity, non-discrimination and fairness,
• Societal and environmental wellbeing, and
• Accountability.

All these requirements are in principle welcome, but the aim should in any case be to translate these principles into a specific and binding legal framework, also covering the use of Artificial Intelligence in companies. Particularly in the field of the working world, there is a need for more specific instructions and for opportunities for social partners and works councils to comprehend the effects of Artificial Intelligence on working conditions from the outset. Therefore, it would be advisable to extend the concept of transparency to include the concepts of traceability and intelligibility for those affected. This, in turn, requires a "bottom-up approach" in the specific implementation, in which not only the technical principles and applications are subjected to co-determination processes, but in which the effects of the use of Artificial Intelligence on people at work and on working conditions are examined from the outset from the point of view of a human-centred approach, and final deployment is only decided on the basis of the experience gained. At the very least, a clear commitment by the European Commission to such an approach, and a clear recommendation to the Member States and the social partners to implement such an approach where possible, would be desirable.

2. AI and consumers

It is of particular concern to us to ensure a high level of consumer protection for all applications based on algorithms and Artificial Intelligence (AI) with which consumers come into contact. The AK stresses the importance of ensuring that consumers are given the best protection against the erosion of their basic rights and freedoms, lack of transparency, discrimination and other risks caused by immature analysis software.
Summary and evaluation of the White Paper from a consumer perspective:

Unregulated, AI can be a black box in which data use, logic and decisions remain non-transparent and incomprehensible. Without a strict legal framework, AI can not only be of benefit to society, but can also serve to monitor the everyday behaviour of consumers on a mass scale, to recognise individuals from already anonymised data sets, to threaten diversity and freedom of opinion when used as an information filter, and to expose individuals to forecasts and classifying assignments so that they may suffer a disadvantage. Supervisory authorities that are supposed to decide whether fundamental rights, product safety and consumer rights are being observed are challenged by complex algorithms and machine learning. The EU Commission does not ignore these risks in its White Paper, and it also lists a number of scenarios that illustrate threats (personal injury, misuse for criminal purposes, etc.). However, since no disproportionate burdens should be placed on developers and users, the EU Commission, with its proposal for a risk test, is opting exclusively for the regulation of particularly high-risk applications and against (graduated) mandatory requirements for all AI applications. In the AK's view, this is not appropriate. Consumer trust cannot be won with a regulation that leaves out essential areas of everyday consumer life.

The European Commission's White Paper is, therefore, a first positive step, but falls far short of consumer expectations in terms of fundamental rights and discrimination protection, transparency, product safety and liability, and monitoring and traceability of AI procedures. Consumers want prevention rather than mere claims for compensation in the event of damage. What is needed are prohibitions and instructions that guarantee protection in all areas of life where the use of AI is conceivable (the working world, education, financial services, health, use of the Internet and the Internet of Things, media, public safety, transport, etc.). In order to protect human dignity and civil liberties (data protection, privacy, freedom of information and expression), preventive protection through strict regulation, expert market surveillance and effective enforcement measures is needed in all areas where consumers come into contact with AI. This is possible without preventing innovation.

Individual concerns of consumers:

"AI must be trustworthy!" The AK welcomes this postulate of the EU Commission. In order to comply with this guiding principle, however, the level of consumer protection must be raised significantly.

The EU Commission makes no secret of the fact that benefits and risks are closely related. Yet as accurate as its findings about the enormous risks are, the legal instruments that the EU Commission is ultimately considering are, in our opinion, equally weak, especially when it comes to personal data.

The AK is against a system with two classes of protection. Graduated but binding rules are needed for all classes of risk.

The proposed risk-based approach does not offer adequate protection for consumers. According to this concept, binding obligations would only exist for "high risk" applications. For all other areas, the acquisition of a voluntary quality mark would be sufficient. Consumers' confidence and trust will not be gained with such a two-class system of protection. This is because many consumer-relevant applications would simply be left to self-regulation by the sector.

Additional standards of protection, going beyond the current EU legislation on consumer protection, data protection and security, product liability, etc., will only be available for particularly risky AI. Not only must the scope of application be particularly risky (e.g. health, transport), but also the application itself: the use of AI must have legal consequences or similar effects for consumers, entailing the risk of "legal infringements, loss of life or substantial material or immaterial harm". Only AI affecting employees and surveillance technologies such as facial recognition would be considered risky in any case. In these areas, there will be new rules on data quality, documentation and information requirements, verifiability of procedures, troubleshooting, human supervision and authorisation procedures.

The proposal does not offer consumers sufficient protection against a lack of transparency, discrimination and manipulation.

Our lives are influenced to a much greater extent by automated procedures than the examples in the White Paper illustrate. It is not just air traffic and financial markets that depend on complex algorithms. Consumers, too, are categorised and evaluated algorithmically when they search on the Internet, in targeted online advertising, in news and film recommendations, and in credit checks that determine the conditions for concluding a contract. The EU Commission is trivialising the risks if it merely recommends a voluntary undertaking by industry. After all, even when using "smart" consumer goods and digital services, consumers can be faced with a lack of transparency, violations of fundamental rights, discrimination and manipulation of behaviour.
"Unfair" automated decisions, for example, are difficult to prove and ward off. AK studies have shown the consequences of unintelligible credit scores (assessments of willingness and ability to pay), the risk of manipulation that arises from algorithmic recommendations – for example from voice assistants like Alexa – and the unfair effects of individualised online prices based on profiling. Whether these are all high-risk applications in the sense of the White Paper is unclear or can be doubted. Even in lower risk classes, there is a need for binding rules and prohibitions that ensure transparency, freedom from discrimination and respect for fundamental rights. Here, too, it must be easy to check compliance with requirements, and a supervisory authority should be able to gain insight into the technical processes and, through an approval procedure, guarantee that no discriminatory decision criteria are used and that no decisions are taken on the basis of such criteria that would impair freedom of information and diversity of opinion or violate data protection.

Loopholes in the GDPR for the use of non-transparent algorithms must be closed.

Currently, Article 22 of the GDPR only prohibits fully automated individual decisions which have legal consequences or significantly affect consumers. Protection should also be extended to "semi-automated" decisions. Companies often argue that machines do not make decisions themselves, but "only" prepare human decisions. Yet automated evaluations are hardly ever changed by employees afterwards (simply because of the high level of effort required to justify doing so). Furthermore, data subjects should be informed about any algorithm that works with consumer data – regardless of the legal consequences or severity of interference with consumers, as is currently required by the GDPR. The permissions granted under Article 22 go much too far: algorithms are permitted, for example, if they are necessary for the conclusion or performance of contracts and the consumer affected is given the opportunity to explain his or her point of view and challenge the decision. Their use in consumer contracts must be reserved for specially justified cases (such as a high risk of non-repayment of loans). The use of the technology and the data and logic involved are still extremely difficult for those affected to understand, because even in the case of requests for information, much remains a business secret.

Call for more training data for AI requires more effective data protection:

78% of people interviewed in a Eurobarometer survey thought that online providers had far too much customer data, and 73% always wanted to be asked for their express consent to the use of their data. The requests of a large majority of consumers have so far remained unfulfilled. Personal data are still being "sucked" from the net millions of times over – de facto unnoticed and uncontrolled. Development towards a data economy – insofar as personal or unreliably anonymised data are affected – is opposed by both the principle of data minimisation and the dictates of privacy by design or default. There is no legal regulation as to when exactly data is considered to be non-traceably anonymised. Many experts assume that particular persons can also be individually identified from anonymised data records by using advanced techniques. It must be defined when one can (still) speak of data without personal reference. With regard to other concerns, the AK refers to its detailed statement published at https://www.akeuropa.eu/de/evaluation-der-datenschutz-grundverordnung-dsgvo on the occasion of the EU Commission's obligation to evaluate the GDPR two years after it came into effect.

No exemption from data protection for science and research:

The opening clause for science, research and statistics in Art 89 of the GDPR is too far-reaching. What, for example, a privileged scientific institution is supposed to be in terms of data protection law is not determined. There is no clear boundary for commercial activities (Internet giants also do market research). In order to keep the exemption from the GDPR for the purposes of science and research within reasonable limits, there is a need to define who the privileged institutions are. It must be certified that the object of research is important in terms of the public interest. Those responsible must not be released from the obligation to observe the rights of the persons affected. The consent of those affected must always be obtained. Ideas of a "broad consent" (consent to use without limiting the purpose) or of protected "playgrounds" not subject to data protection must be rejected. Individual consent should only be replaceable by approval from the data protection authority (DPA), and only if there is an outstanding public interest in the research subject and consent is difficult to obtain. In these cases, those affected must at least be given the right to object.

Call for more training data for AI requires effective assertion of rights:

The GDPR contains general principles which cannot directly resolve the many legal conflicts between confidentiality and exploitation interests. The pros and cons – whether data are required for legitimate purposes to the extent necessary, or whether there are overriding secrecy interests preventing their use – must be weighed up anew by courts and data protection authorities in each case. It is not just consumers, but also – because of the effort involved – supervisory
authorities that are overburdened, not only with the task of investigating illegal processing practices, but also with the legal obligation to assess them. Such processes take too long. In light of the effort and duration involved, they are often not even initiated. This is detrimental to legal certainty. It also reduces confidence in the advantages of AI, as the EU Commission itself confirms. The resources of the data protection authorities do not meet the need to carry out multiple supervisory tasks in a rapid, careful, technical and investigative manner. Consumers and workers experience it as a step backwards that the prior authorisation requirements of the previous Directive 95/46/EC have been abolished. The alternative requirements (purely ex-post controls and more personal responsibility on the part of data processors) have not proven to be effective compared to the precautionary protection of the previous directive. The shift from an obligation to perform ex-ante verification in sensitive cases to ex-post follow-up of infringements, including claims for damages, opens up serious gaps in protection if the process to assert rights does not function quickly and smoothly.

AI is difficult to reconcile with key data protection principles. This immanent conflict must be addressed openly.

Contradictions between fundamental rights and the actual handling of data are hardly resolvable. Machine learning requires the system to be fed with vast amounts of training data. Self-learning systems search for unrecognised patterns and connections in huge datasets. In the opinion of many experts, they can also re-identify individuals from allegedly anonymous data sets. Anyone who develops, distributes or uses AI must respect the right to the protection of privacy and personal data enshrined in Articles 7 and 8 of the EU Charter of Fundamental Rights. Principles of the General Data Protection Regulation (GDPR), such as data minimisation, strict limitation of purpose, no further processing for other purposes incompatible with the original purpose, data protection-friendly default settings, etc., apply to all players in the AI value chain. If personal data is used, it is difficult to reconcile AI with the obligation of data minimisation, strict limitation of purpose and the requirement to ensure privacy by design and by default.

There is a need for maximum transparency, human oversight and clear guidelines for attribution, responsibility and liability for incorrect actions of AI – even for applications outside the "high risk" category.

The self-learning ability of systems is accompanied by a growing problem: the software developers themselves can no longer understand which logical path algorithms take to reach a certain result. Transparency requirements reach their limits when producers themselves cannot understand and explain their product. However, as the "data custodian", a data controller must be able to control data processing at any time. If AI itself decides which data it uses for which purposes, and decides this outside the channels determined by developers, this is fundamentally contrary to the legal principle of "accountability" (attribution, responsibility, liability). Such conduct also conflicts with the obligation to indicate the exact purpose of use at the time the data is collected.

This crosses a red line for the AK: all decisions, products and services based on algorithms must remain explainable and verifiable, especially with regard to discrimination, disadvantage, manipulation of behaviour or fraud. Responsibility and liability must therefore be regulated in a clear and dissuasive manner. In light of the large number of parties involved (developers, producers, users, service providers), consumers must not become playthings of blurred responsibilities. In terms of joint and several liability, they should be able to bring injunctive relief and claims for damages against any supplier in the value chain (with recourse options on the supplier side).

An explicit ban on the use of AI-based facial recognition:

With regard to the analysis of biometric features (e.g. facial recognition software in public places), we would like to express our disappointment. Working papers of the EU Commission initially contained a multi-year prohibition of AI analysis of biometric features for both private and public players, in order to develop in the meantime a "sound methodology for assessing the consequences of the technology and possible risk management measures". It would send the wrong signal for the protection of fundamental rights in the EU if the White Paper now published merely triggers a debate instead of advocating an (at least temporary) ban on use. AlgorithmWatch states that the majority of European police authorities that took part in a survey already use facial recognition software or plan to do so. Different countries use the technology differently, but almost everywhere there is a lack of transparency (https://algorithmwatch.org/story/polizei-gesichtserkennung-europa).

Automated facial recognition is used, for example, to find missing children or to identify violent fans in football stadiums. The technology raises massive data protection concerns, which have also been described in detail by organisations such as Privacy International and Bits of Freedom. An error rate of 1% means, roughly speaking, that if 10,000 people who are not even wanted by the police are subjected to facial recognition, 100 of them will still be flagged as wanted. A test conducted in London in 2018 revealed 104 matches, only two of which were correct – all the others were false positives.
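The arithmetic behind these figures can be made explicit. The following minimal Python sketch merely restates the base-rate calculation above; the function name and structure are ours, purely for illustration, and only the numbers cited in the text are used.

```python
# Minimal sketch of the false-positive arithmetic cited above.

def expected_false_positives(people_scanned: int, error_rate: float) -> float:
    """Innocent people wrongly flagged when nobody scanned is actually wanted."""
    return people_scanned * error_rate

# 10,000 people scanned, none of them wanted, 1% error rate:
print(expected_false_positives(10_000, 0.01))  # -> 100.0 people wrongly flagged

# London 2018 trial figures cited above: 104 matches, only 2 correct.
matches, correct = 104, 2
print(f"Share of correct matches: {correct / matches:.1%}")  # -> 1.9%
```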
Involvement of stakeholders in interferences with fundamental rights:

Data and privacy protection should always take precedence over economic interests. But what happens if interference with these rights is justified by the vital interests of individuals, groups or society as a whole? Conflicts of interest are inevitable whenever AI applications in the health sector promise to improve the detection, treatment and cure of diseases, or when they promise better crime prevention or intelligence for the security police. The price of this (potential) progress is high: it can jeopardise the interests of large sections of the population. In such situations, the majority of AI applications affecting fundamental rights require ex-ante approval by an independent body. In addition to data protection authorities and technical experts, representatives of the respective groups affected (workers, consumers, patients, road users, etc.) must also be involved, because careful consideration must be given to the different interests, proportionality, values, etc., also when clarifying legal issues. The decisions taken can vary significantly, depending on how far the relevant party is affected and on the respective ideological background. The social acceptability of decisions for or against individual AI applications and their accompanying conditions is higher if these decisions are made with the broad participation of all affected groups.

Updating product liability rules:

The Product Liability Directive from 1985 has no answers for digital trends such as AI. Many offline products have been replaced by smart, digital goods that generate a continuous flow of data that can be analysed by AI. A revised Directive must be applicable to all physical and non-physical goods, digital services and digital content. The term "defective" should also include products that pose cyber security risks, do not receive required updates or are not GDPR-compliant. The ability to learn and make autonomous decisions should be considered a "defect" if it causes harm to users or third parties. Data that is misused, otherwise unlawfully used or stolen should be considered as subject to reimbursement. The deductible of 500 euros and the maximum liability value of 70 million euros should no longer apply. Injured persons should only have to credibly demonstrate the damage and a causal connection with the product; establishing which component from which provider is responsible must not be part of the consumers' burden of proof. All companies involved in the value chain (hardware and software manufacturers, AI developers, intermediary platform providers, commercial AI users, etc.) should be subject to joint liability, whereby consumers can bring their claims against any one of the parties involved. The parties with joint liability can then clarify the exact attribution of responsibility in the recourse procedure. If consumers are not to be treated as guinea pigs, the objection of typical development risks or compliance with technical standards must be dropped as grounds for exclusion of liability. All defective products must be identified through a publicly accessible register. The BEUC position paper "Product Liability 2.0" contains excellent detailed proposals for the revision of the Directive (http://www.beuc.eu/publications/product-liability-20-how-make-eu-rules-fit-consumers-digital-age/html).

Extend outdated product safety directives to new risk situations (hacker attacks, damage from faulty AI, failed updates, etc.):

The 2001 Directive contains key consumer protection standards stipulating that manufacturers may only place safe products on the market. They must provide consumers with the relevant information to enable them to assess, and protect themselves against, risks arising from a product which are not immediately apparent without appropriate warnings. Manufacturers also need to take measures enabling them to identify risks posed by their products and to take precautions, including withdrawing the product from the market and recalling it from consumers. It should be clarified that the Directive applies to all products, services and software containing algorithms/AI. All risks associated with AI must be covered by the Product Safety Directive. The European Commission wants to strengthen confidence in ICT products and services through a future EU certification scheme for cyber security. In our view, this approach is a welcome first step. However, a binding legal framework must also ensure that AI-based applications are tested for cyber security before they are even allowed on the market.

3. AI and the environment

The White Paper refers to the (positive) effects that Artificial Intelligence can have on the careful use of resources and on climate protection, and thus to its potential to contribute to tackling the most pressing problems such as climate change and environmental degradation.

It is disappointing, however, that the concrete measures and demands in the White Paper fall far short of expectations. On the contrary, the ball is simply batted back to the technology itself:

"AI can and should itself critically examine resource usage and energy consumption and be trained to make choices that are positive for the environment" (White Paper, p. 5)
Using AI for the environment, climate protection and sustainable transport:

There is an urgent need to analyse the interactions between the objectives of the Green Deal and the ambitions of the White Paper on Artificial Intelligence more systematically.

This appears necessary for two main reasons. On the one hand, technologies that operate with Artificial Intelligence and large amounts of data often have an enormous demand for resources and energy themselves. On the other hand, AI systems that are used sensibly and purposefully in the environmental sector can also help to meet the challenges of today.

Further consideration of this issue therefore appears to be urgently required in the White Paper and must not be left to market forces alone.

The example of autonomous vehicles:

The White Paper deals only with technical aspects and risk assessment in relation to autonomous vehicles. A targeted, environmentally friendly transport concept, which would also be possible, is not even mentioned. It is precisely here that the discussion of environmentally relevant issues would be particularly important. Artificial Intelligence will play a major role in the future not only in driverless vehicles, but also in all traffic flows and means of transport.

The use of Artificial Intelligence in private transport and in driver-assistance systems does indeed have the potential to ensure greater safety and smoother traffic flows; on the other hand, however, it can also lead to an increase in private transport and a rise in the volume of traffic due to automated vehicles.

It would therefore be a matter of urgency to link all measures on Artificial Intelligence with a commitment to environmental and climate protection. In the context of driverless vehicles and transport, it would make sense, for example, to emphasise the importance of public transportation and to set an example in the White Paper by placing Artificial Intelligence technology at the service of comprehensive resource- and environment-friendly transport concepts that cover all modes of transport. Technology-driven innovations should therefore also be given a European orientation in the context of Artificial Intelligence policy, with a focus on (automated) transport based on public transport (trains, buses, robot taxis) and car sharing.

However, the White Paper completely lacks specific goals and ideas on other issues related to environmental and climate protection. An emphasis on resource-saving AI technology and climate-friendly application scenarios in the context of energy, transport, agriculture and production would therefore be essential.

Conclusion

The basic orientation of the White Paper is the creation of a European strategy for AI excellence with the aim of strengthening the industrial and commercial markets. To this end, in addition to the creation of European data pools, the EU funding and research programmes in the field of AI are to be significantly increased. One focus of the AI strategy is on SMEs, which the "Digital Europe" programme aims to support with innovation centres; there should be at least one such centre in each Member State. The "Digital Europe" programme is to provide funds of around EUR 4 billion to promote high-performance and quantum computing, cloud computing, etc.

The Commission discusses in detail possible amendments to the existing EU legal framework to take AI into account. However, these should be "not excessively prescriptive" and, in order to avoid a "disproportionate burden" on SMEs, a risk-based approach should be adopted. This is where the excessive emphasis on market orientation becomes apparent, at the expense of a critical examination of the opportunities and effects on society as a whole. In principle, the AK welcomes a joint European strategy for this important technology of the future: if each Member State has its own regulations with regard to data protection, consumer rights, transparency, etc., this will lead to a competition-induced downgrading of regulations on the use of AI – at the expense of workers, consumers and the environment. However, the yardstick should not be a "risk-based" approach that follows the logic of profit, but rather the protection of workers and consumers.

Nevertheless, concrete commitments and measures are lacking in some sensitive areas. Particularly in the area of co-determination by company and inter-company workers' councils and unions in the use of Artificial Intelligence in the working world, in consumer protection and fundamental rights, and in aspects of climate change and the environment, the White Paper omits much or is too vague.

With regard to the workers' point of view, particular attention is drawn to the aspect of possible "monitoring of the behaviour of employees" by employers and to the risks of discrimination, for example in the context of technology-supported personnel selection. However, the effects on human-machine interaction of the increased use of
AI and issues of alienation are omitted. There is a danger that algorithms or AI will increasingly define and dictate the nature of work, determine work priorities and lead to a completely new form of dependency on, and control through, technology. There is therefore a lack of provision for participation at company and industry level to co-determine and actively manage the effects of AI on work and employees, especially viewed in the light of the problem areas of AI addressed by the Commission: complexity, opacity, unpredictability and semi-autonomous behaviour.

AI and the algorithms on which it is based will be among the technologies shaping the coming years and decades. The Commission advocates a European AI strategy based on European values. However, this is precisely where the basic problem of this White Paper becomes evident. Many risks and problems are flagged, and a possible adaptation of the EU regulatory framework is also discussed. However, the overall picture reflects a strong emphasis on competition, the internal market and free entrepreneurship. This is manifested in the frequent emphasis on a "risk-based approach" and the establishment of an "ecosystem of trust". The social component and the values of a social union are not given enough attention here, as can be seen from the underemphasis on the effects on the working world and on workers, as well as from the unsatisfactory results from a consumer perspective.

Moreover, the White Paper has missed an opportunity to put more emphasis on the role that AI should also play in environmental and climate protection. Unfortunately, the White Paper does not address the connection between policy objectives on Artificial Intelligence and the European "Green Deal".

From the AK's point of view, the following problem areas in the White Paper must therefore be supplemented or clarified:

Employees:

• Ensuring co-determination rights for employees at both company and industry level in the use of AI, and involvement of the social partners – including transparency and traceability as core requirements for AI.

• Stressing the need for accompanying education and training (especially for women).

Consumers:

• A graduated, mandatory legal framework for all AI risk classes instead of a two-tier level of protection. Soft law and voluntary self-commitments are unsuitable for protecting consumer rights and strengthening trust. Even in lower-risk classes, binding measures for transparency, non-discrimination and the protection of fundamental rights are needed.

• Closing loopholes in the General Data Protection Regulation and strengthening its provisions with regard to AI.

• All algorithm-based decisions, services and products must remain explainable and verifiable, especially with regard to undue discrimination, disadvantage, manipulation of behaviour or fraud.

• An explicit ban on the use of AI-based face recognition.

• No exemption from data protection for science and research.

• Institutional involvement of the parties concerned in weighing up interests in decisions regarding the (in)admissibility of specific AI applications.

• Rules for product liability and product safety fit for AI.
Contact us!
In Vienna:

Daniela Zimmer
T +43 (0) 1 501 651 2722
[email protected]

Michael Heiling
T +43 (0) 1 501 651 2665
[email protected]

Martina Chlestil
T +43 (0) 1 501 651 2729
[email protected]

www.arbeiterkammer.at

In Brussels:

www.akeuropa.eu
About us
The Austrian Federal Chamber of Labour (AK) represents, by law, the interests of about 3.8 million employees and consumers in Austria. It acts for the interests of its members in the fields of social, educational, economic and consumer issues, both at the national level and at the EU level in Brussels. Furthermore, the Austrian Federal Chamber of Labour is part of the Austrian social partnership. The Austrian Federal Chamber of Labour is registered in the EU Transparency Register under the number 23869471911-54.

The main objectives of the AK EUROPA office in Brussels, established in 1991, are the representation of the AK vis-à-vis the European institutions and interest groups, the monitoring of EU policies, the transfer of relevant information from Brussels to Austria, and the promotion in Brussels of the expertise and positions developed in Austria by the Austrian Federal Chamber of Labour.