
Acting as the voice of the security industry

Confederation of European Security Services

Charter
on the ethical and responsible use of Artificial
Intelligence in the European private security services

11 October 2024
Copyright:

Unless stated to the contrary, all materials and information are copyrighted materials owned by CoESS (Confederation of European Security Services). All rights are reserved. Duplication or sale of all or any part of it is not permitted. Permission for any other use must be obtained from CoESS. Any unauthorised use of any materials may violate copyright laws, trademark laws, the laws of privacy and publicity, and communications regulations and statutes. To the fullest extent possible at law we (and all our sister, parent, subsidiary and member companies and organisations) exclude all liability for any loss or damage (including direct, indirect, economic or consequential loss or damage) suffered by you as a result of using the contents of this manual.

Special thanks to the active contributors to this Charter:

Carolina Garcia Cortés, Innovation Manager, Prosegur
Cornelius Toussaint, CEO, Condor Group
Daniel Sandberg, Director of Artificial Intelligence, Securitas Group
Graham Evans, Technical Officer, BSIA
Helena Eriksvik, Head of Global Legal Data & Privacy, Securitas Group
Pauline Norstrom, CEO Anekanta®AI, and representative of BSIA
Victoria Ferrera Lopez, Regulatory Affairs Group Senior Manager, Verisure
Wim Bartsoen, Chief Digital Security Officer, Securitas Group

Disclaimer:

This Charter was developed by a dedicated Expert Group of CoESS for the European private security industry. It focuses on guidance for deployers of AI systems only.

This document shall provide security companies with a first understanding of the EU AI Act and important codes of conduct prior to and during the use of an AI system. The information provided in this Charter does not replace system and use case specific risk and regulatory assessments that should be conducted by the deployer to ensure compliance with the EU AI Act.

About CoESS:

The Confederation of European Security Services (CoESS) acts as the voice of the private security industry, covering 22 countries in Europe and representing 45,000 companies with 2 million security officers. Private security services provide a wide range of services, both for private and public clients, ranging from Critical Infrastructure to public spaces, supply chains and government facilities. CoESS is recognised by the European Commission as the European employers’ organisation representative. We are actively involved in European Sectoral Social Dialogue and multiple EU Expert Groups – including SAGAS, SAGMAS, LANDSEC, the EU Operators Forum for the Protection of Public Spaces and the EU Ports Alliance.

EU Transparency Register Number: 61991787780-18

Design & graphics:


https://blog.acapella.be/

Photo credits:
© AdobeStock: 713733409: Milan, 913052673*: suratin, 874669432*: Andres
Mejia, 588772865: NicoElNino, 846542130*: ImageFlow, 802446835*: ERiK,
728100169*: Miumzlik, 355680792 and 516647240: .shock, 794014906*:
Natanong, 720464800*: inthasone, 725689364*: sandsun, 777449543*:
Bartek, 185898613: Kadmy, 732479109*: ALL YOU NEED studio, 861484312*:
ALEXSTUDIO, 824935213: pressmaster, 326350464: PX Media
* Generated with AI

© iStock: 2130201321: Suriya Phosri, 1472578503: Pakpoom Makpan, 1428421517: Galeanu Mihai, 1168365129: metamorworks
Executive Summary

This Charter, developed in alignment with the EU Artificial Intelligence Act and the core values of CoESS, establishes a framework of 10 essential requirements for the responsible and ethical deployment of Artificial Intelligence (AI) by European private security companies.

RISK MANAGEMENT: adopt appropriate and targeted risk management measures.

DATA GOVERNANCE: uphold diligent data governance, ensuring the use of trustworthy data and strict compliance with GDPR.

HUMAN OVERSIGHT: equip staff with the necessary training and policies to meet human oversight requirements, proportionate to the specific AI use case.

RESILIENCE MEASURES: achieve robust physical and cyber protection for company assets, AI systems, and associated infrastructure.

RECORD-KEEPING: document the AI systems’ operational performance.

TRANSPARENCY AND EXPLAINABILITY: set in place adequate transparency measures that guarantee compliance with GDPR and the EU AI Act, and aim for adequate levels of explainability.

FUNDAMENTAL RIGHTS IMPACT ASSESSMENT: conduct assessments of the potential impact on fundamental rights, even where this is not a legal obligation, whenever there are plausible concerns about unlikely but possible impacts on rights.

DUE DILIGENCE: follow due diligence policies when buying AI systems.

WORKERS’ INVOLVEMENT: foster awareness among workers on the use of AI in your company and set in place mechanisms for addressing concerns, particularly if using high-risk AI.

ENGAGE WITH PUBLIC AUTHORITIES: work actively with competent authorities to get additional guidance and to clarify legal uncertainties and compliance requirements.

Executive Summary 3

Introduction 6

Chapter I: Definition of AI and use cases in the European private security services 8
I. What is AI? Striving for a definition 8
II. Cross-cutting criteria to differentiate between low-risk and high-risk AI 10
III. The EU AI Act and legal compliance: low-risk vs. high-risk AI 11
IV. Examples of possible low-risk and high-risk AI use cases 14

Chapter II: Opportunities and risks of AI deployment in security services 18
I. Opportunities 18
II. Risks 20

Chapter III: Values and Requirements 25
I. CoESS’ transversal values for the ethical and responsible use of AI 25
II. First steps to ensure an ethical and responsible use of AI 26
III. Requirements for the ethical and responsible use of AI 27

Chapter IV: Checklist 33

Annex: Repository of useful guidelines and standards 34
Introduction

The integration of Artificial Intelligence (AI) into security services is expected to play an important role in the ever-evolving transformation of the security industry.

From data-enabled risk analysis to integrated video surveillance, AI systems can be deployed in many different use cases in the security services, bringing benefits to workers, customers, companies, and public security. Nevertheless, some use cases come with risks.

The European Union (EU) has therefore regulated the development and deployment of so-called “high-risk” AI in the EU AI Act. Security companies that operate in the EU and integrate AI systems into their services will have to start complying with most aspects of the Regulation as of 02 August 2026, but with some provisions applying as soon as 02 February 2025 (see timeline graphic).

It is important to help companies understand the impact of the EU AI Act on their business operations. Many companies may not even be aware if they use AI today in their services. But as of now, every security company, big and small, will have to know if they deploy an AI system and what to do. But there is more.

The Confederation of European Security Services (CoESS) and its members stand for human-centric innovation for the public good and a steadfast commitment to ethics, responsibility, and compliance.

Timeline: Application of the EU AI Act

This Charter shall therefore not only help companies
comply with the EU AI Act, but also integrate AI systems
in a responsible and ethical way that goes beyond
compliance. To this end, this Charter is structured in
four chapters:

Æ CHAPTER I provides orientation for companies to help them identify AI systems and “high-risk” use cases in the security services, based on legal and other cross-cutting criteria.

Æ CHAPTER II offers an overview of opportunities and risks that are associated with the use of AI in the security services.

Æ CHAPTER III establishes requirements for AI deployers in the security industry that address pertinent risks, both according to the legal obligations of the EU AI Act and CoESS’ value set.

Æ CHAPTER IV sets out an easy-to-understand checklist for companies on the steps to take when planning to deploy an AI system today.

DISCLAIMER

This Charter was developed by a dedicated Expert Group of CoESS for the European private security industry.
It focuses on guidance for deployers of AI systems only.

This document shall provide security companies with a first understanding of the EU AI Act and important
codes of conduct prior to and during the use of an AI system. The information provided in this Charter does
not replace system and use case specific risk and regulatory assessments that should be conducted by
the deployer to ensure compliance with the EU AI Act.

Chapter I: Definition of AI and use cases in
the European private security services

I. What is AI? Striving for a definition

The first question every deployer will have to answer is: am I using AI? The legal definition of AI is, however, a complex exercise with different approaches around the globe. This Charter promotes compliance in particular with the EU AI Act, so we will refer in this document to the AI definition in EU law.

In the EU AI Act, the EU strongly aligns the legal definition of AI with the one of the Organisation for Economic Cooperation and Development (OECD). As per the EU AI Act’s Article 3.1, an AI system is defined as:

“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”

This legal definition is quite complex. It is therefore helpful to have a look at the OECD AI Principles¹, an intergovernmental Standard on AI, and explain its different aspects:

AUTONOMY²: an AI system can accomplish a task with a varying range of human involvement, from being partly to fully autonomous. This is both the core asset and risk associated with the use of AI. Depending on the output of the task and level of human oversight, even simple, fully autonomous systems can pose a substantial risk to fundamental rights.

ADAPTIVENESS: Another core asset, but also risk, of many AI systems is their ability to self-learn and to adapt or evolve. They evolve based on input from users, i.e. after the design and deployment phase. AI therefore has an inherent risk that the system is processing data in a way that is often referred to as a “blackbox”, reducing the explainability of the AI system’s output.

¹ R. Stuart, K. Perset, M. Grobelnik (2023): Updates to the OECD’s definition of an AI system explained. Available here: https://oecd.ai/en/wonk/ai-system-definition-update
² Additional references available for the definition of autonomy in AI systems: EN ISO/IEC 22989 and Recital 12 of the EU AI Act: https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.pdf

OBJECTIVES: an AI system can have different objectives. Explicit objectives are usually the result of the rules set by the developer (and possibly also deployer) – e.g. a drone autonomously transporting an object from A to B. But there are also AI systems, which have only implicit objectives, such as General Purpose AI based on Large Language Models (LLM).

INPUT: AI systems are based on input, which is necessary for them to generate an output. Input can include sets of rules and algorithms defined by the developer, training data used by the developer to further evolve the AI system, additional instructions by the deployer, and data received by the environment, which may further contribute to the self-learning of the system.

OUTPUT: The developer (and potentially those who deploy the system) determines the intended functionalities of the AI system and the types of outputs it will generate – such as predictions, content, recommendations, or decisions. To reach an output, an AI system processes its input based on rules, instructions, and algorithms created by its developers and potentially refined by those who deploy it. High-risk applications typically involve outputs that have significant real-world impacts and operate with a high level of automation, leaving little human oversight.

ENVIRONMENT: Environments, which feed the AI systems with input, and are subject to its output, can be both physical (e.g. detection and verification of objects and natural persons) and virtual (e.g. in the analysis of business operations).

II. Cross-cutting criteria to differentiate between low-risk and high-risk AI

The different aspects that can explain the functioning of AI systems can also be used as interdependent, cross-cutting criteria, which may provide a very first orientation for deployers on:

Æ how to know whether a system is itself an AI product or an AI safety component of a product.

Æ how to distinguish between low-risk and high-risk AI systems and use cases.

However, every assessment of an AI system and use case is unique and, within the EU, subject to the definition of low-risk and high-risk AI in the EU AI Act.

Another, more extensive approach to define different criteria and characteristics of AI can be found in EN ISO/IEC Standard 23053:2022 (Framework for Artificial Intelligence Systems Using Machine Learning).

“Every assessment of an AI system and use case is unique”

AUTONOMY: IF the AI system produces autonomous outputs in a physical environment, it is likely to be classified as high-risk. The EU AI Act therefore makes human oversight mandatory for high-risk AI systems and use cases.

ADAPTIVENESS: IF the AI system’s decision-making process is based on logical self-learning in a “blackbox” and evolves over time, it can lead to an increased lack of explainability and is more likely to be categorised as high-risk.

OBJECTIVES: IF the objectives of the AI system have an impact on natural persons or are implicit, it becomes more likely that the AI system and use case are categorised as high-risk.

INPUT: IF the input is based on personal data of natural persons, then compliance with GDPR is key and the risk of being categorised as high-risk increases.

OUTPUT: IF the output of the AI system poses a risk of harm to the health, safety or fundamental rights of natural persons, including by materially influencing the outcome of a decision-making process, it is likely to be classified as high-risk.

ENVIRONMENT: IF the output affects an environment that includes a natural person, the likelihood that the AI system and use case are high-risk is higher.
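Purely as an illustration of how a deployer might turn these criteria into an internal screening step, the minimal Python sketch below flags a use case for closer review when any criterion is met. The field names and triggering logic are assumptions made for this example; it is an orientation aid only, not a legal classification under the EU AI Act.

```python
from dataclasses import dataclass, fields

@dataclass
class UseCaseProfile:
    """Answers to the six cross-cutting criteria for one concrete AI use case (illustrative)."""
    autonomous_output_in_physical_environment: bool
    self_learning_black_box: bool
    objectives_implicit_or_affecting_persons: bool
    input_contains_personal_data: bool
    output_may_harm_health_safety_or_rights: bool
    environment_includes_natural_persons: bool

def preliminary_screen(profile: UseCaseProfile) -> tuple[str, list[str]]:
    """Return a rough orientation and the criteria that triggered it.
    Screening aid only – not a legal assessment under the EU AI Act."""
    triggered = [f.name for f in fields(profile) if getattr(profile, f.name)]
    if triggered:
        orientation = "treat as potentially high-risk: run a full Article 6 / Annex I & III assessment"
    else:
        orientation = "likely lower risk: still verify against the EU AI Act and document the assessment"
    return orientation, triggered

# Example: AI-enabled CCTV counting visitors at an event, without retaining personal data
profile = UseCaseProfile(False, False, False, False, False, True)
print(preliminary_screen(profile))
```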

III. The EU AI Act and legal compliance: low-risk vs. high-risk AI

Different AI systems and use cases bring different risks. The EU AI Act follows a risk-based approach and regulates mostly high-risk AI systems and use cases. With this Chapter, we aim to provide deployers of AI systems in European private security services with an initial understanding of the approach taken by the EU AI Act. Please note that the European Commission’s EU AI Office will develop guidelines for AI system definition, prohibitions and high-risk classification.

To start with: every deployer of AI systems in the EU has to comply with the EU AI Act.

That is, however, the only simple part, because legal obligations for deployers of AI systems (see as of page 27) differ depending on the risk associated with the system and use case in question. The EU AI Act distinguishes between the following categories of AI systems and use cases:

1. LOW-RISK AI SYSTEMS AND USE CASES
2. PROHIBITED AI PRACTICES
3. HIGH-RISK AI SYSTEMS AND USE CASES

“Every deployer of AI systems in the EU has to comply with the EU AI Act”

1. Low-risk AI

Low-risk are generally those AI systems and use cases which do not fall in the ‘prohibited’ and ‘high-risk’ categories. The EU AI Act further clarifies that an AI system is also generally classified as low-risk if it is intended to:

Æ perform a narrow procedural task or mere preparatory task of high-risk AI use cases;

Æ improve the result of a previously completed human activity;

Æ not replace or influence the previously completed human assessment, without proper human review.

Within the low-risk category, the EU AI Act further distinguishes between systems with minimal risk, without legal obligations, and certain AI systems with a transparency risk, which have to comply with certain transparency obligations³.

³ AI systems that interact with natural persons, but would qualify as low-risk (such as LLM and chatbots), have to comply with certain transparency obligations which are set out in Article 50 of the EU AI Act. For example, the concerned natural person must be informed that it is interacting with an AI system. All AI systems that are neither prohibited nor qualify as “high-risk” (see page 13) or systems with a transparency risk are considered to be of minimal risk. Their deployers do not have to comply with extensive legal obligations of the EU AI Act, but they are encouraged to apply voluntary codes of conduct – to be developed by EU bodies, Member States, or representative bodies.

2. Prohibited AI practices

The EU AI Act defines in its Article 5 those AI systems and use cases, which bring an unacceptable risk to the fundamental rights of European citizens. These are therefore prohibited and can neither be marketed nor used in the EU as of 02 February 2025. These include:

Æ Profiling to assess or predict the risk of a person committing a crime;

Æ Creation of facial recognition databases through automated image searches on CCTV or the internet;

Æ Social scoring leading to unfavourable treatment of natural persons;

Æ Biometric categorisation of unlawfully acquired datasets;

Æ Real-time remote biometric identification in public spaces, such as use of Facial Recognition Technology (FRT), with important exemptions in the area of law enforcement⁴;

Æ Emotion recognition at the workplace or educational institutions (except use for medical and safety reasons).

The EU AI Office will publish further guidelines on AI system definition and prohibitions.

⁴ Exemptions exist for the use of real-time remote biometric identification systems in publicly accessible spaces by law enforcement authorities or on their behalf. These are systems that automatically identify a natural person without its consent. Member States can fully or partially allow the use of such technologies in public spaces, within limits outlined in the EU AI Act (such as judicial authorisations), if they are used for the targeted search of a victim of abduction or a person that is suspected of having committed a criminal offence (as defined in Annex II of the EU AI Act), as well as for the prevention of a specific, substantial and imminent threat to the life or physical safety of citizens, such as a terrorist attack. Member States can however also set in place more restrictive rules. Regulation can hence differ from one EU Member State to another.

3. High-risk AI

This is the most important category, because the EU AI Act defines rules for the deployment of high-risk AI systems.

“The EU AI Act defines rules for the deployment of high-risk AI systems”

Once the deployer has determined whether an AI system is at use, it is important to assess whether it classifies as high-risk as per the EU AI Act’s definition in Article 6:

1. the AI system is itself a product or a safety component of a product which is (1) covered by pre-existing legislation listed in Annex I of the EU AI Act and (2) required to go through a third-party assessment.

Examples: As per Annex I, this concerns AI-enabled drones covered by Regulation 2018/1139, AI systems used in Aviation Security equipment covered by Regulation 300/2008, as well as AI-enabled wireless devices subject to Directive 2014/53⁵.

2. and/or it is deployed in high-risk sectors as defined in Annex III of the EU AI Act.

Examples include AI systems that are intended to be used:

Š for biometric identification and emotion recognition, and which do not fall in the scope of prohibited practices. This includes biometric identification systems, which identify a natural person with a time delay (not in real-time) and without their active involvement through the comparison of a person’s biometric data with the biometric data contained in a reference database (see page 16)⁶. Biometric verification and authentication systems (e.g. as part of access control or to unlock a mobile device – as described as of page 15) are not high-risk AI.
Š as safety components in the management and operation of critical infrastructure.
Š to evaluate a person’s access to (vocational) education and training.
Š in employment and workers management, such as for recruitment purposes or for decision-making related to working conditions, task allocation, performance evaluation and contractual relationships.
Š to evaluate and classify emergency calls.
Š for risk assessments, such as evaluations of law enforcement authorities or on their behalf to assess the risk of a natural person to become a victim of criminal offence.

“Deployers, but also developers and distributors of high-risk AI systems, will have to comply with the different provisions of the EU AI Act as of 02 August 2026”

These provisions are further outlined in Chapter III of this Charter as of page 27.

But how to know for sure whether an AI system is high-risk or falls into a respectively regulated use case?

It is the responsibility of an AI system provider to document the assessment whether an AI system is high-risk or not before the system is placed on the market or put into service.

But it also depends on the use case. High-risk products and use cases are defined for deployers in the EU AI Act in Article 6 and Annexes I & III, but the Regulation is complex.

The European Commission will therefore develop guidance on the implementation of high-risk classification. Meanwhile, our cross-cutting criteria can provide deployers with a first orientation.

⁵ The EU AI Office is expected to publish further guidelines on the interplay between the EU AI Act’s high-risk definition and existing product-related legislation.
⁶ The EU AI Office is expected to publish further guidelines on AI system definition, prohibitions and high-risk classification.
6

IV. Examples of possible low-risk and high-risk AI use cases

DISCLAIMER

This document shall provide security companies with a first understanding of possible low-risk and high-risk systems and use cases. The information provided in this Charter does not replace system and use case specific risk assessments that should be conducted by the deployer to ensure compliance with the EU AI Act.

low-risk

Many AI use cases in security services can be expected not to qualify as high-risk. Keeping the EU AI Act and our cross-cutting criteria in mind, the following use cases in security services can be expected to rather fall into the low-risk category:

1. Risk analysis

By scraping through huge amounts of non-personal data from existing security infrastructure, such as video cameras, AI systems can provide clients with concrete intelligence about their security measures and recommendations for improvement. When deploying such AI applications, security companies can provide very rapidly concrete, data-based intelligence and predictive analysis on trends and patterns such as:

Æ Historical patterns of visitor flows and movement.

Æ Peak times / days / months of visitors and criminal offences in the facility or neighbourhood.

Æ Respective vulnerability assessments of a facility, based on current security plans in place.

Such risk analysis can drive informed decision-making and help make security services more effective. Security companies can provide recommendations to offer a targeted package of solutions, such as staffing and deployment of specific technologies.

2. Analysis of business operations

AI applications can also analyse the efficiency of internal business operations, based on data such as:

Æ Peak usage periods of certain business services.

Æ Facility management, for example energy-efficiency levels of buildings, products and car fleets.

Æ Visitor tracking.

Æ Data on occupational health and safety incidents.

Æ Impact of services provided for marketing purposes.

Internal business operations, and hence services offered to clients, can be adapted to be more cost-efficient, ecological and safer. Service impact assessments can be used for marketing purposes.

3. Crowd management

AI-enabled CCTV can be used to track the number of people present at an event, automatically identify locations with a high density of visitors, analyse movement patterns of a crowd, as well as bottlenecks that can create risks to the safety and security of visitors. Staff on the ground can then take respective measures – e.g. in access control or directing crowd movement.

These systems can be highly useful at mass events, such as football games or festivals, and are valuable to guide first responders and emergency aid during an incident. However, the deployment of such systems is likely to
qualify as high-risk use cases if they start including personal data in their analysis and output (for example, if the system starts collecting biometric data and combines it with non-personal data to produce certain outputs, e.g. biometric identification or categorisation).

4. Biometric verification

Biometric verification systems are very distinct from biometric identification systems:

Æ Verification systems confirm that a specific person is who they claim to be by comparing biometric data of that individual to previously provided biometric data (Question: Is it you?).

Æ Identification systems identify an unknown person without their consent (Question: Who is it? See page 16).

The EU AI Act’s Annex III therefore classifies biometric identification systems as “high-risk”, and not verification systems. Biometric verification can present a substantial enhancement of effectiveness and efficiency of access control, particularly at sensitive facilities such as Critical Infrastructure.
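To make the 1:1 versus 1:N distinction concrete, the hypothetical sketch below contrasts the two operations. The cosine-similarity matching, threshold and function names are illustrative assumptions, not a description of any real product; actual deployments rely on dedicated biometric engines and must satisfy GDPR and, for identification, the EU AI Act’s high-risk or prohibition regime.

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two biometric feature vectors (templates)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(enrolled_template: np.ndarray, live_sample: np.ndarray, threshold: float = 0.85) -> bool:
    """Verification (1:1) – 'Is it you?': compare the live sample only against the
    template the person enrolled with their consent (e.g. for access control)."""
    return similarity(enrolled_template, live_sample) >= threshold

def identify(reference_db: dict[str, np.ndarray], live_sample: np.ndarray, threshold: float = 0.85):
    """Identification (1:N) – 'Who is it?': search a whole reference database for the best
    match; under the EU AI Act this is high-risk or, in real time in public spaces, largely prohibited."""
    best_id, best_score = None, 0.0
    for person_id, template in reference_db.items():
        score = similarity(template, live_sample)
        if score > best_score:
            best_id, best_score = person_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)
```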
5. Other use cases for AI-enabled analytics of non-personal data

The list of possible use cases of AI-enabled data analytics could be continued endlessly. Particularly as part of video surveillance services, the potential to provide more effective, accurate and quicker services is tremendous:

Æ Alarm triage: AI-enabled video cameras can be used to help triage real from false alarms in monitoring and alarm receiving centres (MARC). For instance, a camera may distinguish, based on shape and movement patterns, whether a reindeer just entered a facility’s surveilled perimeter, or a human being. The system can hence vet the alarm, determine whether it is likely false, and provide a respective recommendation to the security officer in the MARC (see the sketch after this section). This can reduce the time to respond to actual incidents, and improve both business operation efficiency and the level of protection provided to a client.

Æ Object analysis: similar to the alarm triage, AI systems can quickly analyse and classify certain detected objects. For example, they can analyse whether a certain vehicle is allowed to be in a restricted zone (e.g. based on a database/whitelist of “approved” license plates). In a more critical case, an AI-enabled drone detection system may quickly analyse whether a drone carries a potentially hazardous payload and analyse its speed and probable time of impact. Such intelligence can tremendously improve response time and decision-making in counter-drone measures.

Æ Behaviour detection: CCTV can be empowered by AI applications to identify suspicious behaviours, which are associated with criminal offences – e.g. movement patterns and other activities. AI systems can then provide a respective alarm, to be evaluated by security personnel on the ground or in a MARC before taking preventive action. Such use cases enhance security measures and enable rapid responses in case of critical situations.

The use of AI in video surveillance systems can however quickly fall in the category of high-risk depending on the data that is used, the level of human oversight / autonomy, and the output that is provided by the system.
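The alarm triage bullet above follows an “AI recommends, human decides” pattern. The minimal sketch below illustrates one possible way to route a detection to a MARC operator instead of acting on it automatically; the labels, threshold and handling paths are assumptions made for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "human", "animal", "vehicle" - produced by the video analytics model
    confidence: float  # 0.0 to 1.0

def triage_alarm(detection: Detection, escalate_at: float = 0.8) -> str:
    """Suggest a handling path for a perimeter alarm; the MARC operator always takes the final decision."""
    if detection.label == "human" and detection.confidence >= escalate_at:
        return "escalate: probable intrusion - present to the operator with priority"
    if detection.confidence < escalate_at:
        return "review: low-confidence detection - operator verifies the footage"
    return "log as probable false alarm (non-human cause), keep the clip available for audit"

print(triage_alarm(Detection(label="animal", confidence=0.95)))  # -> probable false alarm, logged
```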

High-risk AI use cases can provide substantial value in security services, but must guarantee compliance with the EU AI Act and this Charter.

The EU AI Act lays down many product categories and use cases, which automatically qualify as high-risk. Based on this legal definition (see page 13), which can also be mirrored against our cross-cutting criteria, the following use cases in security services can be expected to fall into the high-risk category:

high-risk

1. Biometric identification

The EU AI Act clearly identifies biometric identification technologies as either largely prohibited (real-time identification in public spaces) or high-risk AI (post-remote identification) systems. FRT systems are a typical biometric identification technology. For the purpose of identification, FRT systems make a comparison between an identified facial map of a natural person and a database of biometric data to which the natural person may not have given consent (in contrast with biometric verification or authentication systems, which are based on data subject consent and are used in access control or when opening your mobile phone).

Deploying biometric identification systems in public spaces can bring substantial value for the search of terrorists and other specific persons of interest, and is hence of great benefit to law enforcement authorities and public security. These systems can, however, also come with substantial risks to EU citizens’ fundamental rights – particularly because they can be used without the data subject’s explicit consent.

Looking at our cross-cutting criteria, they provide autonomous recommendations that are developed in a “black-box” based on the comparison with biometric data. The input is based on personal data, possibly without the data subject’s consent. Their objectives are explicit, but their output can have legal consequences for natural persons. This makes the human oversight of this technology, which addresses all these risks, particularly important.

Real-time biometric identification deployments are therefore to a large extent prohibited by the EU AI Act, and the use of time-delayed identification has been made subject to additional safeguards in comparison with other high-risk AI⁷. For additional guidance, the British Security Industry Association (BSIA) has published an ethical and legal use guide for FRT, available at https://www.bsia.co.uk/, which is highly useful for a deployment that does not only guarantee mere compliance with the law, but also adheres to important ethical values⁸.

2. Emotion recognition

Emotion recognition systems identify emotions or intentions of natural persons based on their biometric data. Such technologies function in a similar way to other behaviour detection systems, but their use is based on the evaluation of biometric data of a natural person, who may not have given consent to its use. Their deployment can be more efficient than low-risk AI-enabled behaviour detection systems. But similar to biometric identification systems, they fulfil all cross-cutting high-risk AI criteria and are clearly identified in the EU AI Act as either prohibited (e.g. at the workplace and in educational institutions) or high-risk AI systems with enhanced transparency obligations.

3. Prohibited items detection in Aviation Security

A typical example for AI-enabled systems in aviation security are “Automated Prohibited Item Detection Systems” (APIDS). These systems automatically identify prohibited items in aviation security based on images and data they have been fed with by the developers.

⁷ For example, no decision shall be taken based solely on the output of these systems, and the output shall always be reviewed by at least two adequately qualified and authorised natural persons, except if Member States believe this requirement to be disproportionate in law enforcement use cases.
⁸ Further to the BSIA’s Guide, a new British Standard (BS 9347) is to be released which guides the security industry user towards safe and trustworthy policies for verification and identification, throughout the supply chain for facial recognition technology.
“High-risk AI use cases can provide substantial value in security services, but must guarantee compliance with the EU AI Act and this Charter”

The deployment of AI-enabled aviation security equipment can significantly enhance security measures and operational effectiveness at airports – if coupled with adequate human oversight. But AI-enabled detection systems like APIDS can also pose substantial risks, particularly due to the environment they operate in: failing to detect a prohibited item in the aviation security environment can have substantial consequences for public security. The EU AI Act therefore classifies them automatically as high-risk⁹, which also appears logical if we apply our cross-cutting criteria.

4. AI-enabled drones

AI is today an essential safety component in unmanned vehicles – especially in the case of autonomous drones. AI algorithms enable drones to operate autonomously, reducing the need for human intervention. Drones can further include AI-enabled sensors and detection systems that provide real-time intelligence to a security officer or to the autonomously flying drone itself. AI-enabled drones can help remote security officers take informed decisions in real-time. Officers can monitor large areas with several drones at the same time, without having to pilot them all, and thereby make surveillance tasks much more efficient. Also, the integration of AI systems can make drone operations safer, as they can help the drone adapt to changing flight conditions, such as entrance into no-fly zones and weather. But the enhanced level of autonomy and the risks to the physical environment make it necessary to subject the deployment of AI-enabled drones to specific rules. For example, a malfunctioning autonomous drone poses a risk both to the people on the ground and to vehicles in the air. The EU AI Act therefore categorises as “high-risk” all AI systems that are a safety component of a product, or which are themselves a product, subject to the EU Drone Regulation and required to go through a third-party assessment¹⁰.

5. HR management

The use of so-called algorithmic management at the workplace can substantially support task allocation and recruitment – especially in companies with a high number of employees.

Æ Task allocation: AI-enabled analysis of business operations can provide recommendations to management on worker allocation in different services and shifts. There is hence a big potential to optimise the organisation of work, which can contribute to productivity gains benefiting companies and workers. At the same time, AI systems cannot consider the work done by workers from a human performance perspective like a human manager can, taking into account soft skills and inter-personal relationships. Human oversight is therefore key.

Æ Recruitment: If they are based on trustworthy data, AI-enabled analytics can help better match job profiles with prospective candidates – for the benefit of companies, job seekers and inclusive workplaces.

Such use cases and associated opportunities bring likewise risks and have a potential impact on the worker, both in terms of task allocation as well as opportunities on the job market. Particularly in recruitment, depending on the programming of the system, AI can also lead to a systemic discrimination of certain worker groups. Outputs of the AI system can impact future career prospects, livelihoods of these persons and workers’ rights. The use of AI for HR management is therefore automatically categorised as high-risk AI when these systems are intended to be used for the recruitment or selection of natural persons, or for management decisions affecting the workers’ contractual relationship. Importantly, deployers of these AI systems must ensure, as per the EU AI Act, information of affected workers and their representatives prior to its deployment.

⁹ Like all AI systems which are part of products regulated by Regulation No 300/2008 on common rules in the field of civil aviation security and required to go through a third-party assessment. The EU AI Office is expected to publish further guidelines on the interplay between the EU AI Act’s high-risk definition and existing product-related legislation.
¹⁰ The EU AI Office is expected to publish further guidelines on the interplay between the EU AI Act’s high-risk definition and existing product-related legislation.

Chapter II: Opportunities and risks of AI
deployment in security services

Our use case examples show that the deployment of AI can bring many benefits to public security and European citizens. The integration of AI into security services transforms security concepts, improves operational resilience of companies, makes security workers’ missions safer, and leads to more effective security services. But while AI technology holds great potential to empower security actors to better identify and counter criminal activity, its deployment also requires diligent risk assessments. This Chapter looks into the most important opportunities that AI-enabled services can bring to European citizens, businesses and workers, but also into main risk drivers and undesired outcomes.

“When integrating AI into services, it shall bring added value by ensuring complementarity and synergy of people and technologies”

I. Opportunities

1. Higher security performance through synergy with human-centric services

The integration of AI in security solutions is not an end in itself. When integrating AI into services, it shall bring added value by ensuring complementarity and synergy of people and technologies – providing security workers with a “sixth sense” and translating it into an unprecedented level of security.

1.1. Data-enabled identification, triage, and mitigation of security risks in real time

New capabilities for detection, triage and response time to suspicious movements, intrusions or anomalies stand out as a fundamental advantage of integrating AI in security services:

Š Object-analysis and prohibited item detection systems based on AI can quickly analyse and classify detected objects or hazards.
Š Behaviour detection and emotion recognition systems (optic, acoustic) can help identify unusual behaviours and speed up intervention for a better protection of public spaces and Critical Infrastructure.
Š AI-enabled drones and C-UAS technology provide a highly valuable additional tool in the guarding and remote surveillance of Critical Infrastructure and public spaces, especially in large perimeters (e.g. train tracks, pipelines, offshore energy infrastructure, etc.).
Š Remote biometric identification systems can bring substantial value for the targeted search of
terrorists, other specific persons of interest and vulnerable people.
Š AI can help triage real from false alarms in MARCs and maintain consistent service quality levels.

All these use cases offer useful intelligence in real time that can provide security officers with an additional “sense”, improve security measures through enhanced decision-making, and shorten response time in case of an incident. Security services become more sophisticated and reach a new level of “intelligence”.

1.2. Enhanced adaptiveness of security solutions

AI renders security services more agile and can adapt services to clients’ needs in real time.

AI-enabled risk analysis can drive informed decision-making and target security solutions to the specific needs of a client. They ultimately strengthen the resilience of a client’s facility, prevent future incidents through predictive analysis, and enhance safety of workers and security officers.

In crowd management, AI can provide security service providers with rapid intelligence in challenging environments and informed decision-making in real time. They make crowd management more effective, enable quick decisions and adaptiveness of security and safety measures, and substantially enhance event security.

1.3. Workers’ empowerment through automation

AI empowers security workers with new insights and information thanks to the automation of tasks.

Biometric verification systems support workers in authenticating people and enforcing access control, particularly at sensitive facilities such as Critical Infrastructure.

Automated drones can help remote security officers take informed decisions in real time. Officers can monitor large and/or multiple areas with drone swarms without having to pilot all. Operations safety is supported through AI-enabled consideration of external perimeters, such as weather conditions.

Alarm triage prevents security officers from repeatedly validating disturbing false alarms (e.g. in certain weather conditions such as snow fall) and helps them focus on their main tasks – avoiding information overload. Any kind of AI-enabled data analytics and optical/acoustic sensors can reduce burden in security workers’ standard tasks and support decision-making.

1.4. Enhanced data protection and cybersecurity

AI can enhance data protection and cybersecurity through supporting analysts in accelerated threat and data access anomaly detection – saving valuable response time. AI can substantially support cyber risk assessments and help detect phishing, malware, and other malicious activities.

2. Benefits to companies and workers

Benefits of AI in security services are not mutually exclusive: many use cases are not only an opportunity for public security and the protection of clients, but also for security companies and workers.

2.1. Better safety and protection of workers

A key asset of the use of AI is an enhanced level of protection of security workers. AI-enabled risk analysis can take into account occupational hazards of security workers. AI enables workers to increasingly detect and validate risks remotely. AI-enabled drones and robots do not only provide security officers with a better overview of potential risks, but prevent them from entering hazardous environments.

2.2. Promotion of inclusive workplaces

The OECD¹¹ underlines that the use of algorithmic management at the workplace can help increase diversity, inclusion, equality, and non-discrimination. Trustworthy data is thereby crucial. The use of AI at the workplace must be based on relevant and high-quality data to fight bias or discrimination in the workplace. Algorithmic management can then promote more objective assessments of job applications, performance evaluations and bring better opportunities for recognition and promotion for workers who have traditionally suffered from bias in the labour market.

2.3. New job opportunities

The enhanced empowerment and protection of workers in AI-enabled services can make the security services profession more attractive. In its research, the OECD found that in sectors such as manufacturing or finance, the reduction of workers’ time spent on repetitive tasks gave them a greater opportunity of spending time on more strategic tasks¹². Furthermore, tasks related to AI-enabled services may attract new worker groups, which are currently underrepresented in the European security services, including women and young people.

2.4. Optimisation of business operations and competitiveness

Data-driven business operation optimisation can enhance operational resilience, help maintain quality in service, and allow intelligence-driven investments – increasing competitiveness in the industry. AI can support operational processes to become more cost-efficient, ecological and safer, with benefits for workers, security companies and clients, e.g. by better crew scheduling or patrol route planning.

II. Risks

Alongside these opportunities, it is important to recognise that the deployment of AI can entail risks. Striking a balance between leveraging AI’s potential and mitigating its risks requires careful consideration of ethical, legal, societal, and security-related implications of the deployment in question.

This Chapter gives a short overview of important categories of risks that are associated with the use of AI in security services.

¹¹ OECD (2023), OECD Employment Outlook 2023: Artificial Intelligence and the Labour Market, OECD Publishing, Paris, https://doi.org/10.1787/08785bba-en.
¹² OECD (2023), OECD Employment Outlook 2023: Artificial Intelligence and the Labour Market, OECD Publishing, Paris, https://doi.org/10.1787/08785bba-en.

Risk drivers

When we talk about risks related to the deployment of AI, the public discourse often focuses on the possible negative impact of its use. We should however first focus on the risk drivers.

For deployers, there exist five main risk drivers, which reinforce each other and should be addressed holistically before and during the deployment of AI:

1. Lack of diligent risk management processes: AI is not any kind of technology. The absence of a diligent, use case specific risk management process during the life-cycle of an AI system’s use can lead to non-compliance with relevant law (such as the EU AI Act or GDPR), and unexpected risks to the health, safety or fundamental rights of citizens and security staff.

2. Use of untrustworthy and biased data sets: Using untrustworthy data in AI deployments can lead to amplified biases in AI decisions, unreliable outputs and fundamental rights risks. It undermines explainability, accountability and trust in AI systems, potentially resulting in substantial reputational risks for deployers.

3. Lack of human oversight: Human oversight is central to the deployment of AI in security services. A lack thereof can be a consequence of understaffing or staff that isn’t adequately trained or managed to effectively operate the system in the specific use case. Inadequate human oversight can lead to a loss of explainability of the AI system’s functioning and output. Staff can over-rely on the system’s output such as false positives or negatives. A lack of human oversight does not only limit, misguide and/or undermine human autonomy, but is a significant driver of risks to the health, safety and fundamental rights of citizens.

4. Lack of resilience: AI systems and their algorithms shall be resilient against physical manipulation and cyberattacks. Otherwise, their functioning and outputs can be influenced and disabled – leading to substantial risks, particularly in security service use cases.

5. Lack of AI governance: A dedicated AI governance policy should assign the accountable officer and set out a clear chain of processes and responsibilities – with the ultimate responsibility and accountability for good or misuse of AI to sit with the Board or governing body of the deployer. Without such a policy, there is a risk that those legally accountable may claim ‘plausible deniability’ and that management levels in the deployers’ company took action without Board involvement.

Risk categories

These risk drivers can translate in manifold, use case specific, material and immaterial risks. We summarise them here in different categories.

1. Risks to fundamental rights of citizens

In the public debate, the general use of AI raises fears that fundamental rights may be disrespected due to:

Š a loss of human autonomy and explainability of AI systems;
Š mistrust against different use cases and their intended purposes / objectives;
Š concerns about data privacy and data sets used for the input of the AI system;
Š infringement of important fundamental rights due to the system’s output.

Violations of fundamental rights can be material or immaterial, including physical, psychological, societal, or economic harm. Risks associated with the use of AI for law enforcement purposes often include fears of mass surveillance and intrusive surveillance; breaches of data privacy; discriminatory security practices; and accountability in case of a system’s malfunctioning and related consequences¹³. Safeguards against these risks are addressed in the EU AI Act and crucial for companies’ ethical and legal use of AI in the security services (see Chapter III).

¹³ https://www.europarl.europa.eu/doceo/document/TA-9-2022-0140_EN.html

2. Risks to workers’ rights and occupational health and safety

The use of AI can also present risks for workers. This particularly concerns risks associated with algorithmic worker management tools¹⁴ ¹⁵ and includes:

Š Discrimination of workers due to the use of biased datasets in the deployment of algorithmic worker management systems during recruitment processes, contract renewals, task allocation, and access to training.
Š Perception of a high-stress work environment, including of a more intense work pace, increased complexity of tasks and information flows, and the feeling of constant monitoring, surveillance and evaluations.
Š Reduction of human interaction with colleagues and supervisors if workers are asked to increasingly work in isolation.

The EU AI Act and other European legislation set in place safeguards against these risks.

3. Reputational risks for companies

Trust is central to the work of security services and to the use of AI. The EU has published in 2019 Ethics Guidelines on the trustworthy use of AI, highlighting that each deployment must be legally sound, ethical, and robust in order to build trust¹⁶. Data from 2023 however confirms that public trust in the technology is still low¹⁷.

In Europe, public and media interest in AI has been high in the past years, also due to several incidents (see page 23). Often the spotlight falls on organisations that get things wrong. If a company lacks transparency about its use of AI, deploys inadequately skilled staff for AI oversight, and badly manages risks, leading to incidents, it can quickly be singled out as unethical and uncaring towards workers, customers, and citizens. As with any new technology, a single incident may be used to generalise the potential risk and jeopardise or slow down its further development.

4. Security risks

The use of AI in security services inherently holds security risks, especially if risk drivers are not adequately addressed. Inadequate human oversight, as well as a lack in physical and cyber resilience of the system, can lead to important malfunctioning of the system with repercussions on public security:

Š Inadequately qualified staff may not be aware of false negatives with important consequences, e.g. in airport security.
Š Malicious actors may manipulate an AI system, both physically and by means of cyberattacks, to introduce a malfunction of the system and prepare a criminal action¹⁸.
Š Inaccuracy and bias of training models as well as complexity of systems can lead to incorrect claims and outputs of the AI system (hallucinations¹⁹), leading to undesirable consequences and undermining trust in the system²⁰.
Š AI provides malicious actors with new tools and can be used for deep fakes²¹ and cyberattacks²². They could further hack data in the AI system, such as biometric data, to undertake socially engineered cyberattacks and circumvent security protocols.

5. Secondary effects

The use of AI systems can have secondary effects, which are easy to overlook prior to their deployment without proper risk management procedures. For example, the use of smart GPS systems can significantly improve traffic flow in a city but may likewise lead to an increased and undesired use of minor roads in residential areas. The same goes for data-enabled security risk analysis. While it can significantly enhance protection and security of a certain premise, it could likewise have an impact on rents and insurance premiums in certain neighbourhoods.

¹⁴ Baiocco, S., Fernández-Macías, E., Rani, U. and Pesole, A., The Algorithmic Management of work and its implications in different contexts, Seville: European Commission, 2022, JRC129749.
¹⁵ OECD (2023), OECD Employment Outlook 2023: Artificial Intelligence and the Labour Market, OECD Publishing, Paris, https://doi.org/10.1787/08785bba-en.
¹⁶ https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
¹⁷ https://kpmg.com/xx/en/home/insights/2023/09/trust-in-artificial-intelligence.html
¹⁸ https://csrc.nist.gov/pubs/ai/100/2/e2023/final
¹⁹ AI hallucinations define a situation where an AI system creates nonsensical, bizarre and inaccurate output. This may occur due to imperfect system modelling, complex interactions in deep learning systems, or if the system perceives patterns or objects that are either non-existent or imperceptible to a human.
²⁰ https://www.economist.com/science-and-technology/2024/02/28/ai-models-make-stuff-up-how-can-hallucinations-be-controlled
²¹ https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf
²² https://www.wired.com/story/here-come-the-ai-worms/

HOW WE CAN LEARN FROM PAST INCIDENTS AND EXISTING RISKS

Clearview AI

In 2020, the New York Times23 disclosed that Clearview AI, a US-based facial recognition software firm, amassed
more than 3 billion facial images from social media, including additional data such as individuals’ names, and
stored them in a database. Access to the database was sold to law enforcement agencies, which allowed them
to instantly identify an individual through a photo. Data Protection Agencies (DPA) raised important ethical and
GDPR-related concerns about this business model, with DPAs in France and Germany instructing the company
to cease its business activities and erase all personal data24.

Childcare benefits scandal in the Netherlands

From 2013 to 2019, Dutch tax authorities utilized a self-learning algorithm to develop risk profiles aimed at
detecting childcare benefits fraud. Acting upon the system’s recommendations, authorities penalized families even
on mere suspicion of fraud. Consequently, tens of thousands of families, often from lower-income backgrounds
or ethnic minorities, were plunged into poverty due to substantial debts owed to the tax authority. The Dutch
DPA outlined several breaches of the EU data protection regulations and imposed a fine of €3.7 million
on the tax authority25.

Physical intervention or cyberattacks manipulating behaviour of AI systems

Computer scientists from the US National Institute of Standards and Technology warn that AI systems can
malfunction if an adversary finds a physical or cyber way to manipulate their decision-making26. Autonomous
vehicles learn from street images where and how to drive, while chatbots analyse conversation records to
predict responses. However, training data can be poisoned by corrupted data. Malicious actors can conduct
a cyberattack on AI systems to access sensitive information in order to misuse it. Input to the system can be
physically altered to confuse or manipulate the system.

The UK Post Office Scandal

The UK Post Office’s IT system, Horizon, erroneously accused hundreds of post office operators of financial
discrepancies between 2000 and 2014, which had not been caused by human negligence but by faults in the
IT software. Over 900 employees were convicted of theft, fraud and false accounting – leading to innocent
individuals facing false accusations and prosecutions. Numerous postmasters had reported issues with the
software to management, and even the software provider was aware of bugs. Nevertheless, concerns were
not heard by the UK Post Office. Although the Horizon software was not an AI system, this particular incident
showcases the importance of human oversight, qualitative data and system algorithms, AI governance policies
and risk management processes.

23 https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html
24 https://www.law.kuleuven.be/citip/blog/clearview-ai-illegally-collecting-and-selling-our-faces-in-total-impunity-part-ii/
25 https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/
26 https://csrc.nist.gov/pubs/ai/100/2/e2023/final

“Trust is central to the
work of security services
and to the use of AI”
WHAT WE CAN LEARN

These examples show how quickly the use of AI can derail into concrete risks. It is therefore important to address risk drivers holistically from the start:

1. Risk management processes throughout the life cycle of an AI system’s use are key to identify and address use case specific risks and ensure compliance. For the use of FRT, the BSIA published a helpful “Guide to the ethical and legal use of Automated Facial Recognition”27.

2. The use of AI systems that are based on trustworthy data and algorithms is critical to trustworthy output and ruling out fundamental rights violations.

3. Qualitative human oversight with adequately skilled staff is key to ensure that a human can always evaluate the AI system’s recommendation and take a final, independent decision.

4. High levels of physical protection and cyber resilience throughout the AI system’s life cycle are crucial to protect citizens, users, and clients from the malfunctioning of an AI system.

5. Clear reporting lines, processes and responsibilities as part of an AI governance policy are crucial for deployers to take action in case of malfunctioning or misuse of an AI system.

27 https://www.bsia.co.uk/zappfiles/bsia-front/public-guides/form_347_automated_facial%20recognition_a_guide_to_ethical_and_legal_use-compressed.pdf

Chapter III: Values and Requirements

CoESS’ MISSION

Our mission is to support growth in the security services industry through the promotion of high-quality solutions and professionalism, based on the selection and development of skilled staff and technology. This goal is achieved through the promotion of qualitative training and working conditions of workers; compliance with regulation and industry standards; highest safety levels for workers, clients, and citizens; as well as trust of citizens and authorities in the industry.

CoESS promotes the integration of AI systems into security services as part of its mission. The integration of AI is not only about adding technology to security concepts, but about ensuring complementarity and optimisation of people and technology to achieve new levels of quality and efficiency in security services.

The use of AI solutions should abide by the principle of technological neutrality, aiming for the best and most reliable security performance, whether it be reached by humans alone or in combination with AI.

The deployment of AI should follow a value-based code of conduct. This Chapter defines these values and gives an overview of the main requirements to help AI deployers in security services fulfil them – based on the EU AI Act, the OECD AI Principles28, and the EU Ethics Guidelines for Trustworthy AI29.

I. CoESS’ transversal values for the ethical and responsible use of AI

CoESS defines the following eight transversal, interdependent values for the ethical and responsible use of AI in the security services:

1. Respect of Fundamental Rights: the European security services industry shall be a leader in the ethical and responsible use of AI. Deployers of AI shall at all times ensure the full respect of the EU Charter of Fundamental Rights30.

2. Promoting Diversity, Equality, Inclusion and Non-Discrimination: the use of AI by security service providers shall advance diversity, equality, inclusion and non-discrimination31.

3. Human-centric AI: the deployment of AI systems shall always be overseen by adequately trained staff, proportionate to the use case. The use of AI systems in security services shall empower workers, allowing them to make informed decisions based on new insights and information. Human-centric AI also means a better protection of people, fostering their

28 https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
29 https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
30 The Charter outlines the fundamental rights protected within the EU. It covers civil, political, economic, and social rights, including the right to the integrity of the person, right to liberty and security, protection of personal data, respect of private life, and freedom of movement, expression and information. The Charter prohibits discrimination based on various grounds such as race, gender, religion, and sexual orientation. It guarantees rights such as fair working conditions, workers’ right to information, right of collective bargaining and action, right to good administration, and high levels of consumer protection. The Charter is available at: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:12012P/TXT
31 In line with the EU Charter of Fundamental Rights and the EU Sectoral Social Partner Statement on Diversity, Equality, Inclusion and Non-Discrimination in the Private Security Services, CoESS and UNI Europa (2024), available at: https://coess.org/download.php?down=Li9kb2N1bWVudHMvdW5pLWV1cm9wYS1jb2Vzcy1qb2ludC1zdGF0ZW1lbnQtZGl2ZXJzaXR5LWVxdWFsaXR5LWluY2x1c2lvbi1maW5hbC5wZGY.

fundamental rights and principles related to non-discrimination, transparency and privacy. For CoESS, the human-centric deployment of AI also means that AI shall become a public good and serve European citizens in every dimension.

4. Transparency and explainability: the deployment of AI shall be transparent to the stakeholders concerned, as appropriate to its use case. The explainability of the functioning and output of AI systems is crucial, not only for the ethical and responsible use of AI, but also for public understanding and trust.

5. Data privacy: data governance along the deployment’s value chain shall ensure the protection of European citizens’ data privacy rights enshrined in the EU Charter of Fundamental Rights and GDPR.

6. Physical and cyber resilience and safety: AI systems and their use in security services shall be safe, resilient and secure to prevent, withstand and overcome incidents. Systems shall work in a repeatable and predictable way, and a consistent level of quality services shall be ensured throughout the AI system’s deployment. Unintentional material and immaterial harm to workers and affected stakeholders shall be minimised and prevented.

7. Accountability: the entire value chain of the development and deployment of AI systems shall be accountable, according to their roles and legal requirements, for the proper functioning of AI systems.

8. Sustainability: the deployment of AI shall holistically contribute to the United Nations’ Sustainable Development Goals, promoting inclusive growth, (ecologically) sustainable development and well-being. It shall be ensured that AI solutions are sustainable and environmentally friendly – duly considering operations’ impact on the environment.

“The deployment of AI should follow a value-based code of conduct”

II. First steps to ensure an ethical and responsible use of AI

The core purpose of this Charter is to provide deployers of AI systems in the security services with guidance on legal and voluntary requirements for the ethical and responsible use of AI which address the risks identified in Chapter II, based on CoESS’ transversal value-set. Before deciding upon deploying an AI system and setting in place adequate measures to guarantee its responsible and ethical use, the deployer should take three preparatory steps in a multi-stakeholder approach32:

Step 1: Identify the AI system

As a first step, the deployer should identify whether they are actually planning to use an AI system before purchasing it. AI systems should be labelled as such by the provider, but this may not always be the case, especially for systems that were marketed before enforcement of the EU AI Act. The deployer should therefore consider examining with the provider, internally and/or externally, the underlying technology that is applied in a certain use case. Based on the EU AI Act’s “AI Definition” (see page 8), it likely qualifies as an AI system if it incorporates machine learning algorithms or deep learning models to produce outputs such as predictions, content, recommendations and decisions based on data input. A review of our cross-cutting criteria (see page 10) can help.

Step 2: Assess applicable legal requirements and evaluate whether the AI system and the use case qualify as low-risk or high-risk as per the EU AI Act

Before each use, the deployer shall define the purpose and intended outcome of the AI system’s use and conduct an assessment to understand if the AI system and use case qualify as low- or high-risk. This assessment is crucial to comply with the EU AI Act and to use the AI responsibly and ethically. It should start with the following questions:

32 CoESS recommends to assess and implement these requirements in a multi-stakeholder approach, including (among others) the responsible project managers, regulatory affairs managers and compliance experts, technical AI experts, data protection officers, HR, security experts both in physical and cybersecurity, as well as business unit managers of the service segment concerned. Such teams should be diverse, also to detect potential biases throughout the operations of an AI system. AI policies and codes of conduct must be a company board priority.

1. Is the AI system or use case prohibited as per the EU AI Act (see page 12)?

2. Does my company qualify only as a deployer or also as a provider of the AI system? If the deployer adds their name on the AI system or makes modifications to the system or its intended use, then they would be qualified as an AI system provider33 as per the EU AI Act, making them subject to additional legal obligations in the case of high-risk AI systems.

3. Does the AI system or use case qualify as high-risk? Our cross-cutting criteria can help to make a first assessment. To have legal certainty, the deployer should check:

a. Whether the AI system in question is CE marked and registered in an official, publicly available EU database (available by 02 August 2026).

b. Whether the use case falls into any of the high-risk categories defined in Annex III of the EU AI Act (see page 13).

4. Are different AI systems combined in one use case (e.g. installation of crowd management systems on an AI-enabled drone) and, if so, what is the impact on the low-risk vs. high-risk categorisation of the use case?

5. If the AI system or use case does not qualify as high-risk, does the system in the use case interact with natural persons and therefore bear transparency risks (see page 12)?

Step 3: Assess the added value of deploying the AI system

Before using the AI system, the deployer should evaluate if the integration of AI in the specific use case adds any value, and identify its specific purpose and intended outcome. Next, the deployer should evaluate potential advantages, disadvantages and unintended outcomes of integrating AI into the service in question, assessing these against the overarching goal of improving its quality and effectiveness. This assessment should consider factors such as field-effectiveness, impact on working conditions and decision-making, qualification of workers and the need for upskilling, as well as cost-effectiveness.

III. Requirements for the ethical and responsible use of AI

DISCLAIMER

This document shall provide security companies with a first understanding of the EU AI Act and important codes of conduct prior to and during the use of an AI system. The information provided in this Charter does not replace system and use case specific risk and regulatory assessments that should be conducted by the deployer to ensure compliance with the EU AI Act.

AI actors are accountable for the proper functioning of AI systems, based on their roles, the context, and consistent with the state of the art. To ensure compliance with our value-set, the EU AI Act and other relevant legislation, this Charter recommends that deployers set out an AI governance policy that assigns legal responsibility and accountability for the good use or misuse of AI to the Board or governing body of the deployer. Furthermore, the deployer shall develop in a multi-stakeholder approach34 an internal code of conduct which sets in place the following measures:

33 If the deployer adds their name or trademark on an AI system, makes a substantial modification to it or changes the intended purpose of the AI system (in comparison to the provider’s instructions of use), the deployer may furthermore be classified as a provider of a high-risk AI system as per the EU AI Act’s Article 25, and hence have to comply with a much larger range of legal obligations than outlined in this Charter.
34 CoESS recommends to assess and implement these requirements in a multi-stakeholder approach, including (among others) the responsible project managers, regulatory affairs managers and compliance experts, technical AI experts, data protection officers, HR, security experts both in physical and cybersecurity, as well as business unit managers of the service segment concerned. Such teams should be diverse, also to detect potential biases throughout the operations of an AI system. AI policies and codes of conduct must be a company board priority.

RISK MANAGEMENT
Risk management systems are a legal obligation for high-risk AI as per EU AI Act, Art. 9

Risks associated with the deployment of an AI system shall be adequately managed throughout its entire lifecycle and according to its intended purpose and context of use. To this end, the deployer shall establish a risk management system to:

Æ ensure compliance with relevant law, including GDPR and the EU AI Act;

Æ identify and analyse known, reasonably foreseeable, and other possible risks that the AI system can pose to health, safety or fundamental rights of EU citizens and workers, as well as the security of clients, when it is used according to its intended purpose, but also under conditions of reasonably foreseeable misuse;

Æ adopt appropriate and targeted, technically and physically feasible risk management measures designed to minimise the identified risks to a reasonably acceptable level;

Æ establish fall-back procedures and other adequate mitigation and control measures addressing risks that cannot be eliminated.

These measures shall consider all requirements listed in this Charter and adequately address the risk drivers and risk categories identified in Chapter II. Many international Standards35 and Guidelines36 exist to help operators conduct risk management processes.

DATA GOVERNANCE
Data governance is a legal obligation for high-risk AI as per EU AI Act, Art. 10 & 26

Deployers shall set in place diligent data handling practices:

Æ Data governance must guarantee full compliance with GDPR.

Æ Adequate cyber and physical security of datasets, including personal data and sensitive data, must be ensured – including of relevant datacentres.

Æ The deployment of AI in security services must be based on trustworthy, robust, and qualitative data. To this end, due diligence policies and procedures in selecting AI systems shall ensure that providers have trained, validated and tested the AI system with input data that meets certain quality criteria and excludes potential bias, compliant with the EU AI Act. To the extent that the deployer exercises control over the input data, they shall ensure that it is relevant in view of the intended purpose of the high-risk AI system.

Data governance shall further ensure traceability of data processing, explainability of the AI system’s output, auditability and accountability.

HUMAN OVERSIGHT
Many measures related to human oversight are a legal obligation for deployers of high-risk AI systems as per the EU AI Act, Art. 4, 14 and 26

Adequate human oversight of any AI system is central to the fulfilment of the values set out in this Charter. The EU AI Act therefore also rightly foresees in its Article 4 that, already as of 02 February 2025, deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff. For the European security services, CoESS underlines that responsible staff must be enabled to fulfil the requirements set out in this Charter, appropriate and proportionate to the specific use case.
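To make the risk management and data governance requirements above more tangible, the following minimal sketch (in Python) shows one possible way a deployer could record an individual risk entry for a specific use case. It is purely illustrative: all field names, scales and values are hypothetical and are not prescribed by the EU AI Act or by this Charter; the standards and guidelines referenced in this section remain the authoritative starting point.

# Illustrative sketch only: a structured risk-register entry for one AI use case.
# Field names are hypothetical; the required content of a risk management system
# follows from the EU AI Act, Art. 9, and the deployer's own assessment.
from dataclasses import dataclass
from typing import List

@dataclass
class RiskEntry:
    use_case: str                   # e.g. "AI-supported alarm verification"
    risk_description: str           # known or reasonably foreseeable risk
    affected_groups: List[str]      # e.g. workers, clients, citizens
    severity: str                   # qualitative scale chosen by the deployer
    likelihood: str
    mitigation_measures: List[str]  # targeted, technically/physically feasible measures
    fallback_procedure: str         # what happens if the risk cannot be eliminated
    residual_risk_accepted: bool    # decision taken by the accountable governance body
    review_date: str                # risks are managed throughout the lifecycle

    def needs_escalation(self) -> bool:
        """Flag entries that still lack mitigation measures or an accepted residual risk."""
        return not self.mitigation_measures or not self.residual_risk_accepted

# Example with illustrative values only:
entry = RiskEntry(
    use_case="AI-supported alarm verification",
    risk_description="False negatives during night shifts",
    affected_groups=["clients", "security officers"],
    severity="high",
    likelihood="medium",
    mitigation_measures=["second human verification", "periodic accuracy review"],
    fallback_procedure="revert to manual CCTV verification",
    residual_risk_accepted=True,
    review_date="2025-06-30",
)
print(entry.needs_escalation())  # False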

35 Such as ISO/IEC 23894 “Artificial Intelligence – Guidance on Risk Management” and ISO/IEC 42001 “Artificial Intelligence Management System”.
36 These include the EU Self-Assessment List for Trustworthy AI, available at: https://digital-strategy.ec.europa.eu/en/library/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment; the UK Portfolio of AI assurance techniques, including the Anekanta AI Risk Intelligence System for biometric and high-risk AI, available at https://www.gov.uk/ai-assurance-techniques; and the US National Institute of Standards and Technology AI Risk Management Framework, available at https://www.nist.gov/itl/ai-risk-management-framework

Deployers shall take appropriate technical and organisational measures to ensure they use AI systems in accordance with the instructions for use accompanying the systems.

The deployer shall further ensure that staff overseeing AI systems is empowered through adequate training, qualification, technical and operational measures, including delegation of authority, to:

Æ comprehend instructions for use, the intended purpose and use case of an AI system, and conclusions drawn from risk management;

Æ understand the relevant capacities and limitations of the AI system;

Æ be aware of the level of accuracy, robustness and cybersecurity of the AI system – including any known and foreseeable circumstances that may have an impact on accuracy, robustness and cybersecurity;

Æ know conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety or fundamental rights of affected persons due to the system’s output;

Æ be able to explain the intended purpose of data collection to affected persons and to provide information on how the AI’s output was reached;

Æ be aware of changes to the AI system which may impact its performance37;

Æ be able to duly monitor the AI system’s operation, including the detection and management of anomalies, dysfunctions and unexpected performance;

Æ remain aware of the possible tendency of automatically relying or over-relying on the output produced by an AI system;

Æ correctly interpret the high-risk AI system’s output and take autonomous decisions;

Æ decide, in any particular situation, not to use the AI system or to otherwise disregard, override or reverse its output;

Æ intervene in the operation of the AI system and set in place relevant fall-back procedures and other adequate mitigation and control measures in case of an incident;

Æ inform the provider and relevant public authorities in case of an incident.

In the case of high-risk AI deployments, the EU AI Act’s Article 14 requires diligent policies, processes and procedures of the deployer to ensure legal compliance. Special human oversight provisions exist for biometric identification use cases38.

Deployers shall strongly take into account the potential need to upskill workers prior to the deployment of AI. The security industry should work closely with public authorities to prepare for the integration of AI systems into services and, if needed, adapt training frameworks to reflect requirements for AI literacy, skills, qualification, and licensing. Social Dialogue can play an important role to lead this process and to ensure the responsible use of AI in the workplace, for the benefit of occupational health, safety and job quality.

“Social Dialogue can play an important role to ensure the responsible use of AI in the workplace.”
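Purely as an illustration of what empowered oversight can look like in day-to-day operations, the sketch below records the final decision a trained operator takes on an AI recommendation, including the possibility to override or disregard the output and to flag an incident for reporting. Field and function names are hypothetical and are not taken from the EU AI Act or from this Charter.

# Illustrative sketch only: logging the human decision taken on an AI output,
# so that the final, independent decision always remains with a trained operator.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class OversightDecision:
    ai_output: str        # recommendation produced by the AI system
    operator_id: str      # trained staff member with delegated authority
    action: str           # "accept" | "override" | "disregard" | "suspend_system"
    justification: str    # why the operator followed or rejected the output
    incident: bool = False
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def record_decision(decision: OversightDecision) -> Optional[str]:
    """Return a reporting reminder when an incident is flagged; storage is out of scope here."""
    if decision.incident:
        return "Inform the provider and the relevant public authorities."
    return None

reminder = record_decision(OversightDecision(
    ai_output="Access alarm classified as false positive",
    operator_id="officer-042",
    action="override",
    justification="Live video shows forced entry; intervention team dispatched",
    incident=True,
))
print(reminder)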

37 People trained in human oversight can also reduce the risk of biased decisions arising from previously unbiased AI systems that became biased during their use. As appropriate in the use case, workers should be trained to align the AI deployment with human-centred values throughout the operation.
38 In the case of biometric identification use cases, the EU AI Act Article 14.5 prescribes that no action or decision is taken by the deployer on the basis of the system’s output unless it has been separately verified and confirmed by at least two natural persons with the necessary competence, training and authority. Exemptions from this provision exist for use cases that fulfil law enforcement purposes.

RESILIENCE
Measures related to AI system accuracy, robustness and cybersecurity are a legal obligation for deployers of high-risk AI as per the EU AI Act, Art. 15

In line with the EU AI Act and other relevant legislation, the AI system provider has the responsibility to design and develop their products in a way that appropriately ensures their accuracy, resilience and cybersecurity.

Deployers shall, however, also take the necessary technical, operational and organisational measures to respond to pertinent physical and cybersecurity risks, in line with a previous risk assessment, intended purpose and use case environment. AI systems shall be robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately, repeatably and predictably, and do not pose unreasonable safety risks. It is therefore important that physical and cyber risks be addressed in a holistic way39.

Æ Physical manipulation of AI systems can lead to erroneous outputs and, respectively, significant security and safety risks. Physical protection measures can include access control and access monitoring to physical AI hardware, infrastructure and data storage; maintenance of optimal environmental conditions for the AI systems’ functioning; and implementation of secure procedures for AI system disposal. Staff should be properly trained to implement these measures. A special focus should further be given to data centre security and resilience.

Æ Cyberattacks on the AI system during its operation can “poison” the training data set or models or present important data privacy and protection risks. Standards40 and guidelines41 exist in the field of cyber resilience, which can be useful for deployers of AI systems.

Especially in the security services sector, exemplary physical and cyber resilience of the AI system is crucial to avoid incidents that put the reputation of the deployer at risk. To overcome incidents and ensure business continuity, deployers shall set in place fall-back procedures and contingency plans. Security officers may have to replace the AI system’s operation in the respective use case. Data can be stored across geographically dispersed locations to minimize the impact of localized physical disruptions and cyberattacks.

RECORD-KEEPING
Record-keeping is a legal obligation for high-risk AI as per EU AI Act, Art. 12 & 26

Automatic record-keeping is, as per the EU AI Act, a mandatory technical feature of high-risk AI systems and an obligation for deployers, as far as it is under the latter’s control. It is important for deployers to ensure the respect of values related to traceability, explainability, and accountability. In proportion to the intended purpose of the system, the documentation of AI systems’ operational performance ensures that the deployer can trace, explain and justify how decisions are made. Importantly, accountability demands clear records to establish who is responsible for AI operations (and potential modifications to it), ensuring transparency and (voluntary) compliance with regulatory requirements.

The EU AI Act foresees a record-keeping period of at least six months when high-risk AI systems are deployed and sets additional obligations when deploying remote biometric identification systems.
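The following minimal sketch, assuming a simple in-memory list as the log store, illustrates the kind of automatically generated, time-stamped records and minimum retention logic that this requirement points to. Only the six-month minimum mirrors the period referenced above; all field names and the storage approach are hypothetical, and other legislation (such as the GDPR’s storage limitation principle) may require a different handling of the underlying data.

# Illustrative sketch only: time-stamped operation logs with a minimum retention period.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import List

RETENTION = timedelta(days=183)  # at least six months, as referenced in the EU AI Act

@dataclass
class LogRecord:
    timestamp: datetime
    system_id: str
    input_reference: str   # reference to the input data, not the raw personal data
    output_summary: str
    operator_id: str

def prune_expired(records: List[LogRecord], now: datetime) -> List[LogRecord]:
    """Keep every record younger than the retention period; deployers may keep logs longer."""
    return [r for r in records if now - r.timestamp < RETENTION]

logs = [
    LogRecord(datetime.now(timezone.utc) - timedelta(days=200), "va-01", "clip-991", "no alert", "officer-007"),
    LogRecord(datetime.now(timezone.utc) - timedelta(days=10), "va-01", "clip-992", "alert raised", "officer-007"),
]
print(len(prune_expired(logs, datetime.now(timezone.utc))))  # 1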

39 See for further information the White Paper of CoESS and the International Security Ligue on “Cyber-Physical Security and Critical Infrastructure”, available at https://www.coess.eu/.
40 ISO Standard “ISO/IEC CD 27090 Cybersecurity — Artificial Intelligence — Guidance for addressing security threats to artificial intelligence systems” addresses AI system cybersecurity risks.
41 Such as the EU Joint Research Centre’s “Guiding Principles to address cybersecurity requirements for high-risk AI systems”, available at https://op.europa.eu/en/publication-detail/-/publication/7d0a4007-51dd-11ee-9220-01aa75ed71a1/language-en. CoESS and Euralarm have published more general Cybersecurity Guidelines for the Security Industry, available at https://www.coess.eu/.

TRANSPARENCY AND EXPLAINABILITY
Diverse transparency measures are a legal obligation for high-risk and limited-risk AI as per EU AI Act, Art. 13, 26, 49, 50 and 71

Compliance with GDPR is a key aspect of the ethical and responsible use of AI. But there is more: AI deployers shall commit to transparency, explainability and responsible disclosure regarding AI systems.

Æ Transparency towards persons that are exposed to the AI system: humans must always be aware that they are interacting with an AI system and/or are subject to its output, in a way that is lawful and in proportion to the specific use case. Transparency information should be meaningful, appropriate to the context, consistent with the state of the art, and accessible for people with disabilities. Affected persons must be enabled to understand and, if necessary, challenge the output and related decisions. Worker representatives must be informed about the deployment of AI systems in workers’ management. As appropriate to the use case and AI system, the deployer should set in place transparent and accessible complaint mechanisms that respect specific rights and remedies for individuals unlawfully impacted by AI systems.

Æ Transparency towards the greater public: when using high-risk AI systems on behalf of public authorities, the deployer must register them in a European publicly accessible database42 as per Art. 49 and 71 of the EU AI Act. To increase transparency and public trust in the use of AI in security services, and if deemed adequate and safe in the specific use case, deployers may voluntarily register any high-risk AI deployment, also if they are not deploying it on behalf of a public authority.

Æ Transparency towards authorities: deployers should make AI documentation available for inspection by competent authorities to ensure compliance with legal requirements. Deployers of post-remote biometric identification systems must submit annual reports to relevant market surveillance and data protection authorities.

Æ Explainability: it is further important that deployers are in the position to explain the intended purpose of data collection to affected persons and to provide clear and simple information on how the AI’s decisions were reached, in proportion to the use case and with respect to intellectual property, privacy and security. To this end, deployers should, if needed, request the developer to provide model cards or otherwise human-readable guidance on how the AI system makes decisions.

Deployers should further promote understanding among the public and policymakers on the use of AI in security services. This can be achieved by promoting this Charter.

FUNDAMENTAL RIGHTS IMPACT ASSESSMENT
Fundamental Rights Impact Assessments are a legal obligation for deployers of high-risk AI as per the EU AI Act, Art. 27

In addition to the legally mandatory data protection impact assessment as per the GDPR, Article 35, deployers of high-risk AI who are public authorities or who provide services on behalf of public authorities must conduct a Fundamental Rights Impact Assessment as per Article 27 of the EU AI Act before using a high-risk AI system for the first time. Such an assessment must cover:

Æ a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose;

Æ a description of the period of time within which, and the frequency with which, the high-risk AI system is intended to be used;

Æ the categories of persons likely to be affected by its use in the specific context;

Æ the specific risks of harm likely to impact affected persons;

Æ a description of human oversight measures in line with instructions for use;

42 This EU database is regulated as per Article 71 of the EU AI Act. Information to be entered by deployers of AI systems is listed in Annex VIII. The database shall be established and managed by the European Commission in the course of xxx.

Æ risk management measures, incl. internal governance and complaint mechanisms.

Deployers of high-risk AI systems who provide services on behalf of public authorities must notify their national authorities about this assessment and shall repeat the assessment if they consider that any of these elements are not up to date during use. Deployers who do not deploy high-risk AI systems on behalf of public authorities should also consider doing such an assessment if they have reasons to believe that their use case may come with unlikely, but possible, fundamental rights impacts. The EU AI Office may develop guidelines to help deployers fulfil their legal obligations.

As appropriate to each use case, deployers may consult affected stakeholder groups on such Fundamental Rights Impact Assessments.

DUE DILIGENCE

Deployers of AI systems should follow due diligence policies when buying AI systems:

Æ verify that the AI system is trained on high-quality, diverse, and representative datasets;

Æ confirm the AI system’s conformity with important cybersecurity requirements, at least those set out in relevant law such as the EU AI Act and the EU Cyber Resilience Act;

Æ use only AI systems that are transparent in their decision-making processes and include adequate instructions for use43 which allow, among others, an easy understanding of the system’s intended purposes and necessary human oversight measures; the level of accuracy, including its metrics, robustness and cybersecurity; as well as foreseeable circumstances and misuses that can lead to fundamental rights risks.

Providers of high-risk AI systems must register their products in the EU Database referred to in the EU AI Act’s Article 71. Deployers shall use only high-risk systems that are duly registered.

INVOLVE WORKERS IN THE INTEGRATION OF AI INTO SERVICES

In addition to legal obligations to inform workers about the use of AI at the workplace, the deployer should actively involve security officers in the deployment of AI systems into services.

This could include awareness-raising activities, including seminars, webcasts and other information material, to provide transparency on which AI systems are used and on why and how they are intended to be used. As part of human oversight measures, and appropriate to each use case, workers must receive an adequate understanding of what can and cannot be expected from the system, in order not to overwhelm workers and to avoid complacency. Benefits of AI risk analysis should reach each employee, e.g. through the sharing of statistics and recommendations with regards to operational health and safety.

Deployers can establish dedicated contact points where workers can raise ethical concerns over the functioning and use of certain AI systems, protected as per relevant labour law and possibly via an ethics or internal review committee.

IN CASE OF DOUBT: REACH OUT TO COMPETENT AUTHORITIES

Internal codes of conduct and AI policies should foresee that deployers work actively with competent authorities in case of doubts about legal certainty and requirements set out in this Charter. Also, if the deployer has reason to consider that the use of the AI system may present any material or immaterial risk to affected persons, they should inform the system provider and relevant market surveillance authority, and suspend the deployment of the system. In case of an incident with a high-risk AI system, competent authorities must be informed.

“This Charter recommends that deployers set out an AI governance policy”
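As a purely illustrative aid (not a legal checklist), the sketch below shows how the due diligence points of this Chapter could be tracked before purchasing an AI system. The wording of the items is paraphrased from this Charter, and the structure is an assumption made for illustration only.

# Illustrative sketch only: tracking due diligence checks before buying an AI system.
from typing import Dict, List

def open_items(checks: Dict[str, bool]) -> List[str]:
    """Return the due diligence points that are not yet confirmed."""
    return [item for item, done in checks.items() if not done]

due_diligence: Dict[str, bool] = {
    "training data quality and representativeness confirmed by provider": True,
    "cybersecurity conformity (EU AI Act / Cyber Resilience Act) confirmed": True,
    "adequate instructions for use received (purpose, oversight, accuracy metrics)": False,
    "high-risk system registered in the EU database (EU AI Act, Art. 71)": False,
}

for item in open_items(due_diligence):
    print("Outstanding before purchase:", item)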

43 For high-risk AI systems, compliant with the EU AI Act provisions in Article 13.

Chapter IV: Checklist

The EU AI Act will become applicable in a stepped approach, with most provisions applying as of 02 August 2026. Much of the implementation of, and compliance with, the EU AI Act depends however on Guidelines to be published by the European Commission’s EU AI Office, Standards to be developed by CEN/CENELEC, and the enforcement framework to be set up by national authorities.

But if you are already deploying AI systems in your services, or are planning to do so, there are ways to be on top of that wave and use this Charter to set in place AI governance frameworks that guarantee the ethical and responsible use of AI in your services.

Here’s a checklist that can help you:

Set up an internal AI governance & leadership team, processes and responsibilities


in a multi-stakeholder approach.

Identify the possible AI systems in question as well as their intended use and purpose
in your service offering.

Get an understanding of the legal frameworks and standards applicable to your use
case and confirm compliance deadlines.

Assess the possible risk profile of your AI system and use case, respective legal
obligations and the added value of deploying the AI system in the specific use case.

Engage with your national authorities and / or legal experts and confirm your
internal assessment.

Get inspired by this Charter and build an internal code of conduct.

Purchase your AI system in a due diligence approach and upskill your AI leadership
team.

Conduct a risk assessment and set in place a risk management process for each
individual use case.

Prepare adequate measures for each individual use case according to the values and
requirements set out in this Charter AND THE EU AI ACT, and upskill your workforce
accordingly, if needed.

Continuously review your AI governance, monitor the regulatory environment and
incidents involving high-risk AI, and engage with the EU AI Office, regulatory bodies in
your country, Standardisation bodies, industry associations and the AI community to
stay informed about the latest trends, guidelines, standards and legal developments.
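Deployers who prefer to track this checklist in a machine-readable form could keep it as a simple structure that the governance team reviews periodically. The sketch below is illustrative only; the step wording is abbreviated from the checklist above and the status values are hypothetical.

# Illustrative sketch only: the Chapter IV checklist as a reviewable structure.
from typing import Dict

checklist: Dict[str, str] = {  # status: "todo" | "in_progress" | "done"
    "AI governance and leadership team set up": "done",
    "AI systems and intended purposes identified": "done",
    "applicable legal frameworks and deadlines confirmed": "in_progress",
    "risk profile and added value assessed per use case": "in_progress",
    "assessment confirmed with authorities / legal experts": "todo",
    "internal code of conduct drafted": "todo",
    "AI system purchased under due diligence, leadership upskilled": "todo",
    "risk management process in place per use case": "todo",
    "measures and workforce upskilling prepared": "todo",
    "continuous review and engagement organised": "todo",
}

done = sum(1 for status in checklist.values() if status == "done")
print(f"{done}/{len(checklist)} checklist items completed")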

Annex: Repository of
useful guidelines
and standards

Æ Legal text of the EU AI Act:
https://eur-lex.europa.eu/eli/reg/2024/1689/oj

Æ EU Guidance
Š Follow the EU AI Office:
https://digital-strategy.ec.europa.eu/en/policies/ai-office
Š Follow the EU AI Pact:
https://digital-strategy.ec.europa.eu/en/policies/ai-pact
Š Follow the European AI Alliance:
https://digital-strategy.ec.europa.eu/en/policies/european-ai-alliance
Š EU Guidelines on Trustworthy AI:
https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
Š EU Joint Research Centre “Guiding Principles to address cybersecurity requirements for high-risk AI systems”:
https://op.europa.eu/en/publication-detail/-/publication/7d0a4007-51dd-11ee-9220-01aa75ed71a1/language-en
Š EU Self-Assessment List for Trustworthy AI:
https://digital-strategy.ec.europa.eu/en/library/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment

Æ International Guidance
Š OECD’s definition of an AI system:
https://oecd.ai/en/wonk/ai-system-definition-update
Š International Association of Privacy Professionals (IAPP) AI Resource Center:
https://iapp.org/
Š UK Portfolio of AI assurance techniques:
https://www.gov.uk/ai-assurance-techniques
Š US National Institute of Standards and Technology AI Risk Management Framework:
https://www.nist.gov/itl/ai-risk-management-framework

Æ Security Industry Guidance
Š British Security Industry Association (BSIA): Automated Facial Recognition – A guide to ethical and legal use:
https://www.bsia.co.uk/
Š CoESS & Euralarm Cybersecurity Guidelines for the Security Industry:
https://www.coess.eu

34 CoESS Charter on the ethical and responsible use of Artificial Intelligence in the European private security services coess.eu
Æ European Standards
Follow the CEN-CENELEC Joint Technical Committee 21 “Artificial Intelligence”:
https://www.cencenelec.eu/areas-of-work/cen-cenelec-topics/artificial-intelligence/

Æ International Standards
Š ISO/IEC 5339:2024 “Information technology — Artificial intelligence — Guidance for AI applications”:
https://www.iso.org/standard/81120.html
Š ISO/IEC TS 8200:2024 “Information technology — Artificial intelligence — Controllability of automated artificial intelligence systems”:
https://www.iso.org/standard/83012.html
Š ISO/IEC 22989:2022 “Information technology — Artificial intelligence — Artificial intelligence concepts and terminology”:
https://www.iso.org/standard/74296.html
Š ISO/IEC 23894 “Artificial Intelligence – Guidance on Risk Management”:
https://www.iso.org/standard/77304.html
Š ISO/IEC TR 24028:2020 “Information technology — Artificial intelligence — Overview of trustworthiness in artificial intelligence”:
https://www.iso.org/standard/77608.html
Š ISO/IEC TR 24030:2024 “Information technology — Artificial intelligence — Use cases”:
https://www.iso.org/standard/84144.html
Š ISO/IEC TR 24368:2022 “Information technology — Artificial intelligence — Overview of ethical and societal concerns”:
https://www.iso.org/standard/78507.html
Š ISO/IEC TR 27563:2023 “Security and privacy in artificial intelligence use cases — Best practices”:
https://www.iso.org/standard/80396.html
Š ISO 30434:2023 “Human resource management — Workforce allocation”:
https://www.iso.org/standard/68711.html
Š ISO/IEC 38507:2022 “Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations”:
https://www.iso.org/standard/56641.html
Š ISO/IEC 42001 “Artificial Intelligence Management System”:
https://www.iso.org/standard/81230.html
Acting as the voice of the security industry
Confederation of European Security Services

Confederation of European Security Services


Avenue des Arts 56
B-1000 Brussels
Belgium
