ISO 42001 Summary

ISO/IEC 42001 is a new international standard published jointly by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) in December 2023. It is a voluntary standard. It is also an auditable and certifiable standard, which means that if you choose to implement or adopt it, someone from outside your organization, usually called a certification body (CB), can come into your organization to assess whether your AI management system meets the controls set by the standard.

Why is certification of AI systems important?

If a third party can attest that your AI system conforms to a set of principles agreed upon by industry peers and experts (expressed in the form of an international standard), then that is good news for you. It sends a positive message about your AI system, your management practices, your responsible approach, and your overall AI culture. More importantly, it helps your team develop responsible practices that, over time, enhance stakeholders' trust, reduce legal exposure, and shield your products and services from risks that are inherent to ML technology.

What does “required” mean?

As I said, the standard is voluntary. But if you choose to implement it, then you have to conform to all the controls listed in the standard. In ISO language, a required standard calls for a third-party assessment to determine whether the controls required by 42001 are present in your organization. This is different from non-required documents, called technical reports, which are simply sets of guidelines that you may or may not choose to implement.

The EU AI Act: A New Legal Requirement

The EU AI Act has introduced a new certification requirement for certain AI systems. While certification in general is driven primarily by market forces, in the EU and for certain AI systems it is now a legal requirement. The Act classifies AI systems into four categories:

• Unacceptable risk: Systems that cross certain risk thresholds are simply prohibited. Because they violate fundamental human rights and core EU values, these systems are not allowed and cannot be developed. Examples include social credit scoring systems, real-time remote biometric identification (such as facial recognition) in publicly accessible spaces, and systems known in law enforcement as predictive policing. These systems are illegal under Article 5 of the Act.
• High-risk AI systems: Systems that present a higher risk of harm must undergo a conformity assessment showing that their providers and users have identified and addressed those risks. Examples are systems used to make decisions about access to education and employment, such as HR decisions. Experts broadly agree that most AI systems used in business functions such as HR, finance, and marketing, as well as systems used in certain sectors such as public safety and law enforcement, fall under the high-risk category. Certification for these systems is required by law (Article 6 of the Act).
• Limited risk: AI systems that have a limited harmful impact on EU citizens' rights and safety do not require a conformity assessment, but providers and users of these systems must make sure that they meet transparency requirements.
• Minimal risk: AI systems with minimal risks are subject to voluntary industry codes. This category includes all AI systems that do not fall into the three categories above.

So practically speaking, you are required to conduct a conformity assessment for high-risk AI systems, a category that commonly covers AI applications used in management functions such as HR, accounting, finance, customer service, IT support, and legal. Even for AI with limited or minimal risk (categories 3 and 4), you should consider certification: while it is not legally required, it is suggested as a means to boost transparency and trustworthiness.
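
To make this triage concrete, here is a minimal Python sketch that maps each of the Act's four tiers to the headline obligation described above. It is an illustrative simplification for internal discussion, not a legal classification tool, and the names (`RiskCategory`, `obligations`) are my own, not taken from the Act or the standard.

```python
from enum import Enum, auto


class RiskCategory(Enum):
    """The four risk tiers of the EU AI Act, as summarized above."""
    UNACCEPTABLE = auto()
    HIGH = auto()
    LIMITED = auto()
    MINIMAL = auto()


def obligations(category: RiskCategory) -> str:
    """Return the headline obligation for a tier, per this article's summary."""
    return {
        RiskCategory.UNACCEPTABLE: "Prohibited (Article 5): may not be developed or deployed.",
        RiskCategory.HIGH: "Conformity assessment required by law (Article 6).",
        RiskCategory.LIMITED: "Transparency requirements; certification optional but advisable.",
        RiskCategory.MINIMAL: "Voluntary industry codes of conduct.",
    }[category]


# Example: an AI tool that screens job applicants is commonly treated as high risk.
print(obligations(RiskCategory.HIGH))
```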

Who can certify my organization?

The first thing you need to know is that ISO/IEC is currently working on another standard that will clarify who can conduct certification and how it should be done: ISO/IEC 42006, Requirements for bodies providing audit and certification of artificial intelligence management systems. This standard, which is under development, will define the certification process and clarify the conditions under which an AI management system can be audited. Most likely, it will address who can certify AI systems, the type of knowledge and skills required of the people conducting the certification, and the rules and processes under which a certificate can be issued. Traditionally, and for other required ISO standards, such as ISO 9001 (quality management systems), ISO 45001 (occupational health and safety management systems), and ISO/IEC 27001 (information security management systems), traditional certification bodies have dominated the certification market. For AI, this will most likely change as more consulting companies get into the certification game.

How do I certify my company?

Before I answer this question, I need to mention that, outside of the ISO/IEC standards, there are many national and regional organizations developing their own standards. In the USA, for instance, the National Institute of Standards and Technology (NIST) has published the NIST AI Risk Management Framework 1.0. In Europe, two organizations are currently developing standards to address the new requirements of the EU AI Act: the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC). The good news is that there is a tremendous amount of coordination between ISO/IEC and CEN-CENELEC to define a common certification process, so if you start with 42001, you should be able to meet future European standards requirements.

So if you want to start certifying your organization, here are the four steps I suggest:

1. Appoint a team to lead the certification process

Most organizations would start by appointing a compliance officer and an audit officer, but I would suggest that you go beyond that. Appoint a cross-functional team that is as diverse as possible so it can cover the full range of issues. AI is not an IT issue. It is not a business issue either. It is an everything issue that goes beyond traditional organizational boundaries.

2. Run an inventory of your current ML systems

This is a critical step. As a starting point, you need to know where the traps are. Some of the risks may come from your own AI applications, but some may come from vendor applications. As a user, you also have legal exposure, so you must know about the risks of your providers' applications as well. Ideally, your vendor-selection process should have already identified those risks; if it has not, then this is the place to start. Your AI procurement contracts must identify all foreseeable risks to safety and fundamental rights, and ways of mitigating those risks. Here is a good document to start with on how to design an AI procurement strategy. If you are in the U.S. and work with the government sector, here is a useful procurement framework that helps you promote responsible AI principles. And if you are based in the UK or do business with the UK government, here is the latest update on AI procurement policy from the Cabinet Office. A minimal sketch of what an inventory record might look like follows below.
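
To make the inventory step concrete, here is a minimal Python sketch of what a single inventory record might capture. The fields are my own illustrative suggestions, not a schema prescribed by ISO/IEC 42001 or the EU AI Act, and the system and vendor names are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class AISystemRecord:
    """One row in an AI system inventory (illustrative fields, not an ISO schema)."""
    name: str
    owner: str                 # accountable person or team
    vendor: str | None         # None if developed in-house
    purpose: str               # what decisions the system informs or makes
    eu_risk_category: str      # e.g. "high", "limited", "minimal"
    known_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)


# Example entry for a vendor-supplied resume-screening tool (all details invented).
inventory = [
    AISystemRecord(
        name="Resume screener",
        owner="HR operations",
        vendor="Acme HR Tech",
        purpose="Rank job applicants for recruiter review",
        eu_risk_category="high",
        known_risks=["potential bias against protected groups"],
        mitigations=["human review of all rejections", "annual bias audit"],
    ),
]
```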

3. Engage your senior leadership.

Without leadership support, the certification process will stall. AI certification is new to everyone, and it is complex because the technology itself is complex. Very few organizations meet all the requirements, so you are going to need to work on many things that are beyond your immediate control and for which senior leadership's early involvement and commitment are critical.

4. Adopt responsible AI practices

Explainability, fairness, nondiscriminatory treatment, robustness, transparency, privacy protection, and accountability are some of the most widely accepted responsible AI practices. Take advantage of the certification process to embed these principles into your AI management practices. Teach those in charge of your AI systems about the importance of these principles and their long-term benefits. Show your data scientists the return these principles deliver for the long-term sustainability of your AI products; the sketch below gives one concrete example for fairness.
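
As one concrete example of operationalizing fairness, a common first check is the demographic parity difference: the gap in positive-outcome rates between two groups. The Python sketch below uses invented data; a single metric like this is a starting point for discussion, not a complete fairness audit.

```python
import numpy as np


def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-outcome rates between two groups.

    A value near 0 means the model selects members of both groups at similar
    rates; larger values flag a disparity worth investigating.
    """
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(float(rate_a - rate_b))


# Hypothetical screening decisions (1 = shortlisted) for two applicant groups.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.6 - 0.4 = 0.2
```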

A Concluding Thought

ISO/IEC 42001 is a new standard. Many organizations will rush into the certification process just to check the box. But I suggest that you take your time to build the foundations of a sound AI governance system. Buy the standard and read the requirements yourself. Work with certification bodies (or with those who prepare you for certification) to help your team understand the complexity of AI governance. Genuinely engage with them to build a strong foundation. The certification process is just the beginning.
