
CERTIFIED ISO 42001:2023 LEAD AUDITOR (BOK)
A comprehensive guide to the knowledge and skills required for ISO 42001:2023

This material is exclusively for GSDC members and cannot be distributed, sold, or reproduced.
Certified ISO 42001:2023 Lead Auditor Book of Knowledge © Global Skillup Certification Pvt. Ltd.
100D Pasir Panjang RD, #05 - 03 Meissa, Singapore 118520
TABLE OF CONTENTS
INTRODUCTION
1. SCOPE
2. NORMATIVE REFERENCES
3. TERMS AND DEFINITIONS
4. CONTEXT OF THE ORGANIZATION
4.1 UNDERSTANDING THE ORGANIZATION AND ITS CONTEXT
4.2 UNDERSTANDING THE NEEDS AND EXPECTATIONS OF INTERESTED PARTIES
4.3 DETERMINING THE SCOPE OF THE AI MANAGEMENT SYSTEM
4.4 AI MANAGEMENT SYSTEM
5. LEADERSHIP
5.1 LEADERSHIP AND COMMITMENT
5.2 AI POLICY
5.3 ROLES, RESPONSIBILITIES AND AUTHORITIES
6. PLANNING
6.1 ACTIONS TO ADDRESS RISKS AND OPPORTUNITIES
6.2 AI OBJECTIVES AND PLANNING TO ACHIEVE THEM
6.3 PLANNING OF CHANGES
7. SUPPORT
7.1 RESOURCES
7.2 COMPETENCE
7.3 AWARENESS
7.4 COMMUNICATION
7.5 DOCUMENTED INFORMATION
8. OPERATION
8.1 OPERATIONAL PLANNING AND CONTROL
8.2 AI RISK ASSESSMENT
8.3 AI RISK TREATMENT
8.4 AI SYSTEM IMPACT ASSESSMENT
9. PERFORMANCE EVALUATION
9.1 MONITORING, MEASUREMENT, ANALYSIS AND EVALUATION
9.2 INTERNAL AUDIT
9.3 MANAGEMENT REVIEW
10. IMPROVEMENT
10.1 CONTINUAL IMPROVEMENT
10.2 NONCONFORMITY AND CORRECTIVE ACTION
ANNEX A
ANNEX B
ANNEX C
ANNEX D
BIBLIOGRAPHY

INTRODUCTION
Artificial intelligence (AI) is increasingly applied across all sectors utilizing
information technology and is expected to be one of the main economic drivers. A
consequence of this trend is that certain applications can give rise to societal
challenges over the coming years.
This document intends to help organizations responsibly perform their role with
respect to AI systems (e.g. to use, develop, monitor or provide products or services
that utilize AI). AI potentially raises specific considerations such as:
• The use of AI for automatic decision-making, sometimes in a non-
transparent and non-explainable way, can require specific management
beyond the management of classical IT systems.
• The use of data analysis, insight and machine learning, rather than human-
coded logic to design systems, both increases the application opportunities
for AI systems and changes the way that such systems are developed,
justified and deployed.
• AI systems that perform continuous learning change their behaviour during
use. They require special consideration to ensure their responsible use
continues with changing behaviour.
This document provides requirements for establishing, implementing,
maintaining and continually improving an AI management system within the
context of an organization. Organizations are expected to focus their application of
requirements on features that are unique to AI. Certain features of AI, such as the
ability to continuously learn and improve or a lack of transparency or
explainability, can warrant different safeguards if they raise additional concerns
compared to how the task would traditionally be performed. The adoption of an AI
management system to extend the existing management structures is a strategic
decision for an organization.
The organization’s needs and objectives, processes, size and structure as well as
the expectations of various interested parties influence the establishment and
implementation of the AI management system. Another set of factors that
influence the establishment and implementation of the AI management system are
the many use cases for AI and the need to strike the appropriate balance between
governance mechanisms and innovation. Organizations can elect to apply these
requirements using a risk-based approach to ensure that the appropriate level of
control is applied for the particular AI use cases, services or products within the
organization’s scope. All these influencing factors are expected to change and be
reviewed from time to time.

The AI management system should be integrated with the organization’s processes
and overall management structure. Specific issues related to AI should be
considered in the design of processes, information systems and controls. Crucial
examples of such management processes are:
• determination of organizational objectives, involvement of interested
parties and organizational policy;
• management of risks and opportunities;
• processes for the management of concerns related to the trustworthiness of
AI systems such as security, safety, fairness, transparency, data quality and
quality of AI systems throughout their life cycle;
• processes for the management of suppliers, partners and third parties that
provide or develop AI systems for the organization.
This document provides guidelines for the deployment of applicable controls to
support such processes.
This document avoids specific guidance on management processes. The
organization can combine generally accepted frameworks, other International
Standards and its own experience to implement crucial processes such as risk
management, life cycle management and data quality management which are
appropriate for the specific AI use cases, products or services within the scope.
An organization conforming with the requirements in this document can generate
evidence of its responsibility and accountability regarding its role with respect to
AI systems.
The order in which requirements are presented in this document does not reflect
their importance or imply the order in which they are implemented. The list items
are enumerated for reference purposes only.
Compatibility with other management system standards
This document applies the harmonized structure (identical clause numbers, clause
titles, text and common terms and core definitions) developed to enhance
alignment among management system standards (MSS). The AI management
system provides requirements specific to managing the issues and risks arising
from using AI in an organization. This common approach facilitates
implementation and consistency with other management system standards, e.g.
related to quality, safety, security and privacy.

1. SCOPE
This document specifies the requirements and provides guidance for establishing,
implementing, maintaining and continually improving an AI (artificial intelligence)
management system within the context of an organization.
This document is intended for use by an organization providing or using products
or services that utilize AI systems. This document is intended to help the
organization develop, provide or use AI systems responsibly in pursuing its
objectives and meet applicable requirements, obligations related to interested
parties and expectations from them.
This document is applicable to any organization, regardless of size, type and
nature, that provides or uses products or services that utilize AI systems.

2. NORMATIVE REFERENCES
The following documents are referred to in the text in such a way that some or all
of their content constitutes requirements of this document. For dated references,
only the edition cited applies. For undated references, the latest edition of the
referenced document (including any amendments) applies.
ISO/IEC 22989:2022, Information technology — Artificial intelligence — Artificial
intelligence concepts and terminology

3. TERMS AND DEFINITIONS


For the purposes of this document, the terms and definitions given in ISO/IEC
22989 and the following apply.
ISO and IEC maintain terminology databases for use in standardization at the
following addresses:
• ISO Online browsing platform: available at https://www.iso.org/obp
• IEC Electropedia: available at https://www.electropedia.org/
3.1 organization
person or group of people that has its own functions with responsibilities,
authorities and relationships to achieve its objectives (3.6)
Note 1 to entry: The concept of organization includes, but is not limited to, sole-
trader, company, corporation, firm, enterprise, authority, partnership, charity or

institution or part or combination thereof, whether incorporated or not, public or
private.
Note 2 to entry: If the organization is part of a larger entity, the term
“organization” refers only to the part of the larger entity that is within the scope of
the AI management system (3.4).
3.2 interested party
person or organization (3.1) that can affect, be affected by, or perceive itself to be
affected by a decision or activity
Note 1 to entry: An overview of interested parties in AI is provided in ISO/IEC
22989:2022, 5.19.
3.3 top management
person or group of people who directs and controls an organization (3.1) at the
highest level
Note 1 to entry: Top management has the power to delegate authority and provide
resources within the organization.
Note 2 to entry: If the scope of the management system (3.4) covers only part of
an organization, then top management refers to those who direct and control that
part of the organization.
3.4 management system
set of interrelated or interacting elements of an organization (3.1) to establish
policies (3.5) and objectives (3.6), as well as processes (3.8) to achieve those
objectives
Note 1 to entry: A management system can address a single discipline or several
disciplines.
Note 2 to entry: The management system elements include the organization’s
structure, roles and responsibilities, planning and operation.
3.5 policy
intentions and direction of an organization (3.1) as formally expressed by its top
management (3.3)
3.6 objective
result to be achieved
Note 1 to entry: An objective can be strategic, tactical, or operational.

Note 2 to entry: Objectives can relate to different disciplines (such as finance,
health and safety, and environment). They can be, for example, organization-wide
or specific to a project, product or process (3.8).
Note 3 to entry: An objective can be expressed in other ways, e.g. as an intended
result, as a purpose, as an operational criterion, as an AI objective or by the use of
other words with similar meaning (e.g. aim, goal, or target).
Note 4 to entry: In the context of AI management systems (3.4), AI objectives are
set by the organization (3.1), consistent with the AI policy (3.5), to achieve specific
results.
3.7 risk
effect of uncertainty
Note 1 to entry: An effect is a deviation from the expected — positive or negative.
Note 2 to entry: Uncertainty is the state, even partial, of deficiency of information
related to, understanding or knowledge of, an event, its consequence, or likelihood.
Note 3 to entry: Risk is often characterized by reference to potential events (as
defined in ISO Guide 73) and consequences (as defined in ISO Guide 73), or a
combination of these.
Note 4 to entry: Risk is often expressed in terms of a combination of the
consequences of an event (including changes in circumstances) and the associated
likelihood (as defined in ISO Guide 73) of occurrence.
3.8 process
set of interrelated or interacting activities that uses or transforms inputs to deliver
a result
Note 1 to entry: Whether the result of a process is called an output, a product or
a service depends on the context of the reference.
3.9 competence
ability to apply knowledge and skills to achieve intended results
3.10 documented information
information required to be controlled and maintained by an organization (3.1) and
the medium on which it is contained
Note 1 to entry: Documented information can be in any format and media and
from any source.
Note 2 to entry: Documented information can refer to:
• the management system (3.4), including related processes (3.8);
• information created in order for the organization to operate (documentation);
• evidence of results achieved (records).
3.11 performance
measurable result
Note 1 to entry: Performance can relate either to quantitative or qualitative
findings.
Note 2 to entry: Performance can relate to managing activities, processes (3.8),
products, services, systems or organizations (3.1).
Note 3 to entry: In the context of this document, performance refers both to
results achieved by using AI systems and results related to the AI management
system (3.4). The correct interpretation of the term is clear from the context of its
use.
3.12 continual improvement
recurring activity to enhance performance (3.11)
3.13 effectiveness
extent to which planned activities are realized and planned results are achieved
3.14 requirement
need or expectation that is stated, generally implied or obligatory
Note 1 to entry: “Generally implied” means that it is custom or common practice
for the organization (3.1) and interested parties (3.2) that the need or expectation
under consideration is implied.
Note 2 to entry: A specified requirement is one that is stated, e.g. in documented
information (3.10).
3.15 conformity
fulfilment of a requirement (3.14)
3.16 nonconformity
non-fulfilment of a requirement (3.14)
3.17 corrective action
action to eliminate the cause(s) of a nonconformity (3.16) and to prevent
recurrence
3.18 audit

systematic and independent process (3.8) for obtaining evidence and evaluating it
objectively to determine the extent to which the audit criteria are fulfilled
Note 1 to entry: An audit can be an internal audit (first party) or an external audit
(second party or third party), and it can be a combined audit (combining two or
more disciplines).
Note 2 to entry: An internal audit is conducted by the organization (3.1) itself, or
by an external party on its behalf.
Note 3 to entry: “Audit evidence” and “audit criteria” are defined in ISO 19011.
3.19 measurement
process (3.8) to determine a value
3.20 monitoring
determining the status of a system, a process (3.8) or an activity
Note 1 to entry: To determine the status, there can be a need to check, supervise
or critically observe.
3.21 control
<risk> measure that maintains and/or modifies risk (3.7)
Note 1 to entry: Controls include, but are not limited to, any process, policy, device,
practice or other conditions and/or actions which maintain and/or modify risk.
Note 2 to entry: Controls may not always exert the intended or assumed
modifying effect.
[SOURCE: ISO 31000:2018, 3.8, modified — Added <risk> as application domain ]
3.22 governing body
person or group of people who are accountable for the performance and
conformance of the organization
Note 1 to entry: Not all organizations, particularly small organizations, will have a
governing body separate from top management.
Note 2 to entry: A governing body can include, but is not limited to, board of
directors, committees of the board, supervisory board, trustees or overseers.
[SOURCE: ISO/IEC 38500:2015, 2.9, modified — Added Notes to entry.]
3.23 information security
preservation of confidentiality, integrity and availability of information

Note 1 to entry: Other properties such as authenticity, accountability, non-
repudiation and reliability can also be involved.
[SOURCE: ISO/IEC 27000:2018, 3.28]
3.24 AI system impact assessment
formal, documented process by which the impacts on individuals, groups of
individuals, or both, and societies are identified, evaluated and addressed by an
organization developing, providing or using products or services utilizing artificial
intelligence
3.25 data quality
characteristic of data that the data meet the organization’s data requirements for a specific context
[SOURCE: ISO/IEC 5259-1:—, 3.4]
3.26 statement of applicability
documentation of all necessary controls (3.21) and justification for inclusion or exclusion of controls
Note 1 to entry: Organizations may not require all controls listed in Annex A or
may even exceed the list in Annex A with additional controls established by the
organization itself.
Note 2 to entry: All identified risks shall be documented by the organization
according to the requirements of this document. All identified risks and the risk
management measures (controls) established to address them shall be reflected in
the statement of applicability.

4. CONTEXT OF THE ORGANIZATION


4.1 Understanding the organization and its context
The organization shall determine external and internal issues that are relevant to
its purpose and that affect its ability to achieve the intended result(s) of its AI
management system.
The organization shall determine whether climate change is a relevant issue.
The organization shall consider the intended purpose of the AI systems that are
developed, provided or used by the organization. The organization shall determine
its roles with respect to these AI systems.

NOTE 1 To understand the organization and its context, it can be helpful for the
organization to determine its role relative to the AI system. These roles can include,
but are not limited to, one or more of the following:
• AI providers, including AI platform providers, AI product or service providers;
• AI producers, including AI developers, AI designers, AI operators, AI testers and evaluators, AI deployers, AI human factor professionals, domain experts, AI impact assessors, procurers, AI governance and oversight professionals;
• AI customers, including AI users;
• AI partners, including AI system integrators and data providers;
• AI subjects, including data subjects and other subjects;
• relevant authorities, including policymakers and regulators.
A detailed description of these roles is provided by ISO/IEC 22989. Furthermore,
the types of roles and their relationship to the AI system life cycle are also
described in the NIST AI risk management framework.[29] The organization’s
roles can determine the applicability and extent of applicability of the
requirements and controls in this document.
NOTE 2 External and internal issues to be addressed under this clause can vary
according to the organization’s roles and jurisdiction and their impact on its ability
to achieve the intended outcome(s) of its AI management system. These can
include, but are not limited to:
a) external context related considerations such as:
1) applicable legal requirements, including prohibited uses of AI;
2) policies, guidelines and decisions from regulators that have an impact on the
interpretation or enforcement of legal requirements in the development and
use of AI systems;
3) incentives or consequences associated with the intended purpose and the
use of AI systems;
4) culture, traditions, values, norms and ethics with respect to development
and use of AI;
5) competitive landscape and trends for new products and services using AI
systems;
b) internal context related considerations such as:
1) organizational context, governance, objectives (see 6.2), policies and
procedures;

2) contractual obligations;
3) intended purpose of the AI system to be developed or used.
NOTE 3 Role determination can be informed by obligations related to categories of
data the organization processes (e.g. personally identifiable information (PII)
processor or PII controller when processing PII). See ISO/IEC 29100 for PII and
related roles. Roles can also be informed by legal requirements specific to AI
systems.
4.2 Understanding the needs and expectations of interested
parties
The organization shall determine:
• the interested parties that are relevant to the AI management system;
• the relevant requirements of these interested parties;
• which of these requirements will be addressed through the AI management system.
NOTE Relevant interested parties can have requirements related to climate
change.
4.3 Determining the scope of the AI management system
The organization shall determine the boundaries and applicability of the AI
management system to establish its scope.
When determining this scope, the organization shall consider:
• the external and internal issues referred to in 4.1;
• the requirements referred to in 4.2.
The scope shall be available as documented information.
The scope of the AI management system shall determine the organization’s
activities with respect to this document’s requirements on the AI management
system, leadership, planning, support, operation, performance, evaluation,
improvement, controls and objectives.
4.4 AI management system
The organization shall establish, implement, maintain, continually improve and
document an AI management system, including the processes needed and their
interactions, in accordance with the requirements of this document.

5. LEADERSHIP
5.1 Leadership and commitment
Top management shall demonstrate leadership and commitment with respect to
the AI management system by:
• ensuring that the AI policy (see 5.2) and AI objectives (see 6.2) are established and are compatible with the strategic direction of the organization;
• ensuring the integration of the AI management system requirements into the organization’s business processes;
• ensuring that the resources needed for the AI management system are available;
• communicating the importance of effective AI management and of conforming to the AI management system requirements;
• ensuring that the AI management system achieves its intended result(s);
• directing and supporting persons to contribute to the effectiveness of the AI management system;
• promoting continual improvement;
• supporting other relevant roles to demonstrate their leadership as it applies to their areas of responsibility.
NOTE 1 Reference to “business” in this document can be interpreted broadly to
mean those activities that are core to the purposes of the organization’s existence.
NOTE 2 Establishing, encouraging and modelling a culture within the organization that takes a responsible approach to the use, development and governance of AI systems can be an important demonstration of commitment and leadership by top management. Ensuring awareness of, and compliance with, such a responsible approach in support of the AI management system can aid the success of the AI management system.
5.2 AI policy
Top management shall establish an AI policy that:
a) is appropriate to the purpose of the organization;
b) provides a framework for setting AI objectives (see 6.2);
c) includes a commitment to meet applicable requirements;
d) includes a commitment to continual improvement of the AI management
system.
The AI policy shall:

• be available as documented information;
• refer as relevant to other organizational policies;
• be communicated within the organization;
• be available to interested parties, as appropriate.
Control objectives and controls for establishing an AI policy are provided in A.2 in
Table A.1. Implementation guidance for these controls is provided in B.2.
NOTE Considerations for organizations when developing AI policies are provided
in ISO/IEC 38507.
5.3 Roles, responsibilities and authorities
Top management shall ensure that the responsibilities and authorities for relevant
roles are assigned and communicated within the organization.
Top management shall assign the responsibility and authority for:
a) ensuring that the AI management system conforms to the requirements of this
document;
b) reporting on the performance of the AI management system to top management.
NOTE A control for defining and allocating roles and responsibilities is provided in
A.3.2 in Table A.1. Implementation guidance for this control is provided in B.3.2.

6. PLANNING
6.1 Actions to address risks and opportunities
6.1.1 General
When planning for the AI management system, the organization shall consider the
issues referred to in 4.1 and the requirements referred to in 4.2 and determine the
risks and opportunities that need to be addressed to:
• give assurance that the AI management system can achieve its intended result(s);
• prevent or reduce undesired effects;
• achieve continual improvement.
The organization shall establish and maintain AI risk criteria that support:
• distinguishing acceptable from non-acceptable risks;
• performing AI risk assessments;
• conducting AI risk treatment;
• assessing AI risk impacts.
NOTE 1 Considerations to determine the amount and type of risk that an
organization is willing to pursue or retain are provided in ISO/IEC 38507 and
ISO/IEC 23894.

The organization shall determine the risks and opportunities according to:
• the domain and application context of an AI system;
• the intended use;
• the external and internal context described in 4.1.
NOTE 2 More than one AI system can be considered in the scope of the AI
management system. In this case the determination of opportunities and uses is
performed for each AI system or groupings of AI systems.
The organization shall plan:
a) actions to address these risks and opportunities;
b) how to:
1) integrate and implement the actions into its AI management system
processes;
2) evaluate the effectiveness of these actions.
The organization shall retain documented information on actions taken to identify
and address AI risks and AI opportunities.
NOTE 3 Guidance on how to implement risk management for organizations
developing, providing or using AI products, systems and services is provided in
ISO/IEC 23894.
NOTE 4 The context of the organization and its activities can have an impact on
the organization’s risk management activities.
NOTE 5 The way of defining risk and therefore of envisioning risk management
can vary across sectors and industries. The definition of risk in 3.7 allows a broad
vision of risk adaptable to any sector, such as the sectors mentioned in Annex D. In
any case, it is the role of the organization, as part of risk assessment, to first adopt
a vision of risk adapted to its context. This can include approaching risk through
definitions used in sectors where the AI system is developed for and used, such as
the definition from ISO/IEC Guide 51.
6.1.2 AI risk assessment
The organization shall define and establish an AI risk assessment process that:
a) is informed by and aligned with the AI policy (see 5.2) and AI objectives (see
6.2);
NOTE When assessing the consequences as part of 6.1.2 d) 1), the
organization can utilize an AI system impact assessment as indicated in 6.1.4.

b) is designed such that repeated AI risk assessments can produce consistent, valid
and comparable results;
c) identifies risks that aid or prevent achieving its AI objectives;
d) analyses the AI risks to:
1) assess the potential consequences to the organization, individuals and
societies that would result if the identified risks were to materialize;
2) assess, where applicable, the realistic likelihood of the identified risks;
3) determine the levels of risk;
e) evaluates the AI risks to:
1) compare the results of the risk analysis with the risk criteria (see 6.1.1);
2) prioritize the assessed risks for risk treatment.
The organization shall retain documented information about the AI risk
assessment process.
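For illustration only (this sketch is not part of the standard's text), the following Python fragment shows one way the analysis and evaluation steps above can be recorded: each risk carries an assessed consequence and likelihood, a risk level is derived from them, the level is compared against the acceptance criteria established under 6.1.1, and non-acceptable risks are prioritized for treatment. The 5-point scales, the multiplication rule and the acceptance threshold are assumptions chosen for the example, not requirements of ISO/IEC 42001.

```python
from dataclasses import dataclass

# Illustrative acceptance threshold (an assumption for this sketch, per 6.1.1 criteria).
RISK_ACCEPTANCE_THRESHOLD = 8  # risk levels above this are treated as non-acceptable

@dataclass
class AIRisk:
    risk_id: str
    description: str
    consequence: int   # 1 (negligible) .. 5 (severe), per 6.1.2 d) 1)
    likelihood: int    # 1 (rare) .. 5 (almost certain), per 6.1.2 d) 2)

    @property
    def level(self) -> int:
        # 6.1.2 d) 3): determine the level of risk (here simply consequence x likelihood)
        return self.consequence * self.likelihood

    @property
    def acceptable(self) -> bool:
        # 6.1.2 e) 1): compare the analysis result with the risk criteria (see 6.1.1)
        return self.level <= RISK_ACCEPTANCE_THRESHOLD

def prioritise_for_treatment(risks: list[AIRisk]) -> list[AIRisk]:
    """6.1.2 e) 2): prioritise the assessed risks for risk treatment."""
    return sorted((r for r in risks if not r.acceptable),
                  key=lambda r: r.level, reverse=True)

if __name__ == "__main__":
    register = [
        AIRisk("R-01", "Unexplainable automated decision affecting loan applicants", 4, 3),
        AIRisk("R-02", "Training data drift degrades model accuracy", 3, 4),
        AIRisk("R-03", "Minor mislabelling of AI-generated content in the UI", 2, 2),
    ]
    for risk in prioritise_for_treatment(register):
        print(risk.risk_id, risk.level, "-> treat")
```

Retaining the populated register in the organization's documentation system is one way to provide the documented information the clause requires.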
6.1.3 AI risk treatment
Taking the risk assessment results into account, the organization shall define an AI
risk treatment process to:
a) select appropriate AI risk treatment options;
b) determine all controls that are necessary to implement the AI risk treatment
options chosen and compare the controls with those in Annex A to verify that no
necessary controls have been omitted;
NOTE 1 Annex A provides reference controls for meeting organizational objectives
and addressing risks related to the design and use of AI systems.
c) consider the controls from Annex A that are relevant for the implementation of
the AI risk treatment options;
d) identify if additional controls are necessary beyond those in Annex A in order to
implement all risk treatment options;
e) consider the guidance in Annex B for the implementation of controls determined
in b) and c);
NOTE 2 Control objectives are implicitly included in the controls chosen. The
organization can select an appropriate set of control objectives and controls from
Annex A. The Annex A controls are not exhaustive and additional control objectives
and controls can be needed. If different or additional controls are necessary
beyond those in Annex A, the organization can design such controls or take them

from existing sources. AI risk management can be integrated in other management
systems, if applicable.
f) produce a statement of applicability that contains the necessary controls [see b),
c) and d)] and provide justification for inclusion and exclusion of controls.
Justification for exclusion can include where the controls are not deemed
necessary by the risk assessment and where they are not required by (or are
subject to exceptions under) applicable external requirements.
NOTE 3 The organization can provide documented justifications for excluding any control objectives in general or for specific AI systems, whether those listed in Annex A or established by the organization itself.
g) formulate an AI risk treatment plan.
The organization shall obtain approval from the designated management for the
AI risk treatment plan and for acceptance of the residual AI risks. The necessary
controls shall be:
• aligned to the objectives in 6.2;
• available as documented information;
• communicated within the organization;
• available to interested parties, as appropriate.
The organization shall retain documented information about the AI risk treatment
process.
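As a hedged illustration of item f) above, the sketch below models a statement of applicability as a list of entries, each recording whether a control is included or excluded, the justification, and the risks it addresses. The specific control references, justification wording and helper function are assumptions made for the example, not decisions prescribed by Annex A.

```python
from dataclasses import dataclass

@dataclass
class SoAEntry:
    control_id: str        # e.g. an Annex A reference or an organization-defined control
    title: str
    included: bool
    justification: str     # required for both inclusion and exclusion (6.1.3 f)
    related_risks: tuple[str, ...] = ()

# Illustrative entries only; the selection and wording are assumptions for this sketch.
statement_of_applicability = [
    SoAEntry("A.2.2", "AI policy", True,
             "Needed to direct development and use of AI systems", ("R-01", "R-02")),
    SoAEntry("A.7.4", "Quality of data for AI systems", True,
             "Addresses the data drift risk identified in the risk assessment", ("R-02",)),
    SoAEntry("A.4.4", "Tooling resources", False,
             "No AI development tooling in scope; only a procured AI service is used"),
]

def unjustified(entries: list[SoAEntry]) -> list[str]:
    """Flag entries that lack the justification required by 6.1.3 f)."""
    return [e.control_id for e in entries if not e.justification.strip()]

print(unjustified(statement_of_applicability))  # expect [] when every entry is justified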
6.1.4 AI system impact assessment
The organization shall define a process for assessing the potential consequences
for individuals or groups of individuals, or both, and societies that can result from
the development, provision or use of AI systems.
The AI system impact assessment shall determine the potential consequences an
AI system’s deployment, intended use and foreseeable misuse has on individuals
or groups of individuals, or both, and societies.
The AI system impact assessment shall take into account the specific technical and
societal context where the AI system is deployed and applicable jurisdictions.
The result of the AI system impact assessment shall be documented. Where
appropriate, the result of the system impact assessment can be made available to
relevant interested parties as defined by the organization.
The organization shall consider the results of the AI system impact assessment in
the risk assessment (see 6.1.2). A.5 in Table A.1 provides controls for assessing
impacts of AI systems.

NOTE In some contexts (such as safety or privacy critical AI systems), the
organization can require that discipline-specific AI system impact assessments
(e.g. safety, privacy or security impact) be performed as part of the overall risk
management activities of an organization.
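The clause leaves the form of the documented result to the organization. Purely as an illustration, the sketch below records individual consequences (affected party, consequence, context, severity) and derives a summary value that the risk assessment in 6.1.2 can use when analysing consequences. The field names, the example AI system and the 1 to 5 severity scale are assumptions made for this sketch.

```python
from dataclasses import dataclass, field

@dataclass
class ImpactRecord:
    """One documented consequence of developing, providing or using an AI system (6.1.4)."""
    ai_system: str
    affected_party: str   # individual, group of individuals, or society
    consequence: str      # description of the potential consequence
    context: str          # technical and societal context, applicable jurisdiction
    severity: int         # illustrative 1..5 scale; not a scale defined by the standard

@dataclass
class AISystemImpactAssessment:
    ai_system: str
    records: list[ImpactRecord] = field(default_factory=list)

    def worst_case_severity(self) -> int:
        """Summary value the risk assessment (6.1.2) can use when analysing consequences."""
        return max((r.severity for r in self.records), default=0)

assessment = AISystemImpactAssessment(
    ai_system="credit-scoring-model",
    records=[
        ImpactRecord("credit-scoring-model", "loan applicants",
                     "Systematic denial of credit due to biased training data",
                     "Consumer lending, EU jurisdiction", 4),
        ImpactRecord("credit-scoring-model", "society",
                     "Erosion of trust in automated financial decisions",
                     "General public", 3),
    ],
)
print(assessment.worst_case_severity())  # 4 -> feeds the consequence analysis in 6.1.2 d) 1)
```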
6.2 AI objectives and planning to achieve them
The organization shall establish AI objectives at relevant functions and levels.
The AI objectives shall:
a) be consistent with the AI policy (see 5.2);
b) be measurable (if practicable);
c) take into account applicable requirements;
d) be monitored;
e) be communicated;
f) be updated as appropriate;
g) be available as documented information.
When planning how to achieve its AI objectives, the organization shall determine:
• what will be done;
• what resources will be required;
• who will be responsible;
• when it will be completed;
• how the results will be evaluated.
NOTE A non-exclusive list of AI objectives relating to risk management is provided
in Annex C. Control objectives and controls for identifying objectives for
responsible development and use of AI systems and measures to achieve them are
provided in A.6.1 and A.9.3 in Table A.1. Implementation guidance for these
controls is provided in B.6.1 and B.9.3.
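As a small illustration of the planning bullets above (not part of the standard's text), the sketch below stores one AI objective together with the five planning elements the clause asks for and flags any element left empty. The example objective, field names and dates are assumptions.

```python
from dataclasses import dataclass, asdict

@dataclass
class AIObjectivePlan:
    objective: str          # consistent with the AI policy (5.2), measurable if practicable
    what_will_be_done: str
    resources_required: str
    responsible: str
    completion_date: str
    evaluation_method: str  # how the results will be evaluated

def missing_elements(plan: AIObjectivePlan) -> list[str]:
    """Return the planning elements of 6.2 that have not been filled in."""
    return [name for name, value in asdict(plan).items() if not value.strip()]

plan = AIObjectivePlan(
    objective="Reduce unexplained automated decisions to below 1% of cases",
    what_will_be_done="Add a model explanation step to the decision workflow",
    resources_required="One ML engineer, explainability tooling licence",
    responsible="Head of Data Science",
    completion_date="2025-06-30",
    evaluation_method="Monthly report of explained vs. unexplained decisions",
)
print(missing_elements(plan))  # [] -> all planning elements of 6.2 are documented
```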
6.3 Planning of changes
When the organization determines the need for changes to the AI management
system, the changes shall be carried out in a planned manner.

7. SUPPORT
7.1 Resources
The organization shall determine and provide the resources needed for the
establishment, implementation, maintenance and continual improvement of the AI
management system.
NOTE Control objectives and controls for AI resources are provided in A.4 in Table
A.1. Implementation guidance for these controls is provided in Clause B.4.
7.2 Competence
The organization shall:
• determine the necessary competence of person(s) doing work under its control that affects its AI performance;
• ensure that these persons are competent on the basis of appropriate education, training or experience;
• where applicable, take actions to acquire the necessary competence, and evaluate the effectiveness of the actions taken.
Appropriate documented information shall be available as evidence of
competence.
NOTE 1 Implementation guidance for human resources including consideration of
necessary expertise is provided in B.4.6.
NOTE 2 Applicable actions can include, for example: the provision of training to,
the mentoring of, or the reassignment of currently employed persons; or the hiring
or contracting of competent persons.
7.3 Awareness
Persons doing work under the organization’s control shall be aware of:
• the AI policy (see 5.2);
• their contribution to the effectiveness of the AI management system, including the benefits of improved AI performance;
• the implications of not conforming with the AI management system requirements.
7.4 Communication
The organization shall determine the internal and external communications
relevant to the AI management system including:
• what it will communicate;
• when to communicate;
• with whom to communicate;
• how to communicate.
7.5 Documented information
7.5.1 General
The organization’s AI management system shall include:
a) documented information required by this document;
b) documented information determined by the organization as being necessary
for the effectiveness of the AI management system.
NOTE The extent of documented information for an AI management system can
differ from one organization to another due to:
• the size of organization and its type of activities, processes, products and services;
• the complexity of processes and their interactions;
• the competence of persons.
7.5.2 Creating and updating documented information
When creating and updating documented information, the organization shall
ensure appropriate:
• identification and description (e.g. a title, date, author or reference number);
• format (e.g. language, software version, graphics) and media (e.g. paper, electronic);
• review and approval for suitability and adequacy.
7.5.3 Control of documented information
Documented information required by the AI management system and by this
document shall be controlled to ensure:
a) it is available and suitable for use, where and when it is needed;
b) it is adequately protected (e.g. from loss of confidentiality, improper use or loss
of integrity).
For the control of documented information, the organization shall address the
following activities, as applicable:
• distribution, access, retrieval and use;
• storage and preservation, including preservation of legibility;
• control of changes (e.g. version control);
• retention and disposition.

Documented information of external origin determined by the organization to be
necessary for the planning and operation of the AI management system shall be
identified as appropriate and controlled.
NOTE Access can imply a decision regarding the permission to view the
documented information only, or the permission and authority to view and change
the documented information.

8. OPERATION
8.1 Operational planning and control
The organization shall plan, implement and control the processes needed to meet
requirements, and to implement the actions determined in Clause 6, by:
• establishing criteria for the processes;
• implementing control of the processes in accordance with the criteria.
The organization shall implement the controls determined according to 6.1.3 that
are related to the operation of the AI management system (e.g. AI system
development and usage life cycle related controls).
The effectiveness of these controls shall be monitored and corrective actions shall
be considered if the intended results are not achieved. Annex A lists reference
controls and Annex B provides implementation guidance for them.
Documented information shall be available to the extent necessary to have
confidence that the processes have been carried out as planned.
The organization shall control planned changes and review the consequences of
unintended changes, taking action to mitigate any adverse effects, as necessary.
The organization shall ensure that externally provided processes, products or
services that are relevant to the AI management system are controlled.
8.2 AI risk assessment
The organization shall perform AI risk assessments in accordance with 6.1.2 at
planned intervals or when significant changes are proposed or occur.
The organization shall retain documented information of the results of all AI risk
assessments.
8.3 AI risk treatment
The organization shall implement the AI risk treatment plan according to 6.1.3 and
verify its effectiveness.

When risk assessments identify new risks that require treatment, a risk treatment
process in accordance with 6.1.3 shall be performed for these risks.
When risk treatment options as defined by the risk treatment plan are not
effective, these treatment options shall be reviewed and revalidated following the
risk treatment process according to 6.1.3 and the risk treatment plan shall be
updated.
The organization shall retain documented information of the results of all AI risk
treatments.
8.4 AI system impact assessment
The organization shall perform AI system impact assessments according to 6.1.4
at planned intervals or when significant changes are proposed or occur.
The organization shall retain documented information of the results of all AI
system impact assessments.

9. PERFORMANCE EVALUATION
9.1 Monitoring, measurement, analysis and evaluation
The organization shall determine:
• what needs to be monitored and measured;
• the methods for monitoring, measurement, analysis and evaluation, as applicable, to ensure valid results;
• when the monitoring and measuring shall be performed;
• when the results from monitoring and measurement shall be analysed and evaluated.
Documented information shall be available as evidence of the results.
The organization shall evaluate the performance and the effectiveness of the AI
management system.
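For illustration only, the sketch below captures the four determinations above for each metric (what is measured, the method, when it is measured, when results are evaluated) and keeps the collected readings as evidence. The example metrics and intervals are assumptions, not requirements of the standard.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Metric:
    name: str                 # what needs to be monitored and measured
    method: str               # method intended to ensure valid, comparable results
    measure_every_days: int   # when the monitoring and measuring is performed
    evaluate_every_days: int  # when the results are analysed and evaluated
    readings: list[tuple[date, float]] = field(default_factory=list)

    def record(self, when: date, value: float) -> None:
        """Retain the result as documented information (evidence of monitoring)."""
        self.readings.append((when, value))

    def latest(self) -> float | None:
        return self.readings[-1][1] if self.readings else None

metrics = [
    Metric("model accuracy on holdout set", "automated evaluation pipeline", 7, 30),
    Metric("open AI nonconformities", "review of the corrective action log", 30, 90),
]
metrics[0].record(date(2025, 1, 7), 0.93)
print(metrics[0].latest())
```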
9.2 Internal audit
9.2.1 General
The organization shall conduct internal audits at planned intervals to provide
information on whether the AI management system:
a) conforms to:
1) the organization’s own requirements for its AI management system;
2) the requirements of this document;

b) is effectively implemented and maintained.
9.2.2 Internal audit programme
The organization shall plan, establish, implement and maintain (an) audit
programme(s), including the frequency, methods, responsibilities, planning
requirements and reporting.
When establishing the internal audit programme(s), the organization shall
consider the importance of the processes concerned and the results of previous
audits.
The organization shall:
a) define the audit objectives, criteria and scope for each audit;
b) select auditors and conduct audits to ensure objectivity and the impartiality of
the audit process;
c) ensure that the results of audits are reported to relevant managers.
Documented information shall be available as evidence of the implementation of
the audit programme(s) and the audit results.
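One hedged way to reflect "the importance of the processes concerned and the results of previous audits" when building the programme is sketched below; the scoring rule and the 6/12/24-month intervals are assumptions chosen for illustration, not guidance from the standard.

```python
from dataclasses import dataclass

@dataclass
class AuditArea:
    name: str
    importance: int             # 1 (low) .. 3 (high) importance of the process concerned
    prior_nonconformities: int  # findings from previous audits

def audit_interval_months(area: AuditArea) -> int:
    """Illustrative rule: important or problem-prone areas are audited more often."""
    if area.importance >= 3 or area.prior_nonconformities > 2:
        return 6
    if area.importance == 2 or area.prior_nonconformities > 0:
        return 12
    return 24

programme = [
    AuditArea("AI risk assessment and treatment (6.1, 8.2, 8.3)", 3, 1),
    AuditArea("Control of documented information (7.5)", 1, 0),
]
for area in programme:
    print(area.name, "-> audit every", audit_interval_months(area), "months")
```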
9.3 Management review
9.3.1 General
Top management shall review the organization’s AI management system, at
planned intervals, to ensure its continuing suitability, adequacy and effectiveness.
9.3.2 Management review inputs
The management review shall include:
a) the status of actions from previous management reviews;
b) changes in external and internal issues that are relevant to the AI management
system;
c) changes in needs and expectations of interested parties that are relevant to the
AI management system;
d) information on the AI management system performance, including trends in:
1) nonconformities and corrective actions;
2) monitoring and measurement results;
3) audit results;
e) opportunities for continual improvement.
9.3.3 Management review results

The results of the management review shall include decisions related to continual
improvement opportunities and any need for changes to the AI management
system.
Documented information shall be available as evidence of the results of
management reviews.

10. IMPROVEMENT
10.1 Continual improvement
The organization shall continually improve the suitability, adequacy and
effectiveness of the AI management system.
10.2 Nonconformity and corrective action
When a nonconformity occurs, the organization shall:
a) react to the nonconformity and as applicable:
1) take action to control and correct it;
2) deal with the consequences;
b) evaluate the need for action to eliminate the cause(s) of the nonconformity, so
that it does not recur or occur elsewhere, by:
1) reviewing the nonconformity;
2) determining the causes of the nonconformity;
3) determining if similar nonconformities exist or can potentially occur;
c) implement any action needed;
d) review the effectiveness of any corrective action taken;
e) make changes to the AI management system, if necessary.
Corrective actions shall be appropriate to the effects of the nonconformities
encountered.
Documented information shall be available as evidence of:
• the nature of the nonconformities and any subsequent actions taken;
• the results of any corrective action.

ANNEX A
(normative)
Reference control objectives and controls
A.1 General
The controls detailed in Table A.1 provide the organization with a reference for
meeting organizational objectives and addressing risks related to the design and
operation of AI systems. Not all the control objectives and controls listed in Table
A.1 are required to be used, and the organization can design and implement its own controls (see 6.1.3).
Annex B provides implementation guidance for all the controls listed in Table A.1.
Table A.1 — Control objectives and controls
A.2 Policies related to AI
Objective: To provide management direction and support for AI systems according to
business requirements.
A.2.2 AI policy: The organization shall document a policy for the development or use of AI systems.
A.2.3 Alignment with other organizational policies: The organization shall determine where other policies can be affected by, or apply to, the organization’s objectives with respect to AI systems.
A.2.4 Review of the AI policy: The AI policy shall be reviewed at planned intervals or additionally as needed to ensure its continuing suitability, adequacy and effectiveness.
A.3 Internal organization
Objective: To establish accountability within the organization to uphold its responsible approach for the implementation, operation and management of AI systems.
A.3.2 AI roles and responsibilities: Roles and responsibilities for AI shall be defined and allocated according to the needs of the organization.
A.3.3 Reporting of concerns: The organization shall define and put in place a process to report concerns about the organization’s role with respect to an AI system throughout its life cycle.
A.4 Resources for AI systems
Objective: To ensure that the organization accounts for the resources (including AI
system components and assets) of the AI system in order to fully understand and
address risks and impacts.
A.4.2 Resource documentation: The organization shall identify and document relevant resources required for the activities at given AI system life cycle stages and other AI-related activities relevant for the organization.
A.4.3 Data resources: As part of resource identification, the organization shall document information about the data resources utilized for the AI system.
A.4.4 Tooling resources: As part of resource identification, the organization shall document information about the tooling resources utilized for the AI system.
A.4.5 System and computing resources: As part of resource identification, the organization shall document information about the system and computing resources utilized for the AI system.
A.4.6 Human resources: As part of resource identification, the organization shall document information about the human resources and their competences utilized for the development, deployment, operation, change management, maintenance, transfer and decommissioning, as well as verification and integration, of the AI system.
A.6 AI system life cycle
A.6.1 Management guidance for AI system development
Objective: To ensure that the organization identifies and documents objectives and
implements processes for the responsible design and development of AI systems.
A.6.1.2 Objectives for responsible development of AI system: The organization shall identify and document objectives to guide the responsible development of AI systems, and take those objectives into account and integrate measures to achieve them in the development life cycle.
A.6.1.3 Processes for responsible AI system design and development: The organization shall define and document the specific processes for the responsible design and development of the AI system.
A.6.2 AI system life cycle
Objective: To define the criteria and requirements for each stage of the AI system life
cycle
Topic Control
A.6.2.2 I system requirements he organization shall specify and document
and specification requirements for new AI systems or material
enhancements to existing systems.
A.6.2.3 Documentation of AI The organization shall document the AI system
system design and design and development based on
development organizational objectives, documented
requirements and specification criteria.

This material is exclusively for GSDC members and cannot be distributed, sold, or reproduced.
Certified ISO 42001:2023 Lead Auditor Book of Knowledge © Global Skillup Certification Pvt. Ltd.
100D Pasir Panjang RD, #05 - 03 Meissa, Singapore 118520
27
A.6.2.4 AI system verification and validation
Control: The organization shall define and document verification and validation measures
for the AI system and specify criteria for their use.

A.6.2.5 AI system deployment
Control: The organization shall document a deployment plan and ensure that appropriate
requirements are met prior to deployment.

A.6.2.6 AI system operation and monitoring
Control: The organization shall define and document the necessary elements for the
ongoing operation of the AI system. At the minimum, this should include system and
performance monitoring, repairs, updates and support.

A.6.2.7 AI system technical documentation
Control: The organization shall determine what AI system technical documentation is
needed for each relevant category of interested parties, such as users, partners and
supervisory authorities, and provide the technical documentation to them in the
appropriate form.

A.6.2.8 AI system recording of event logs
Control: The organization shall determine at which phases of the AI system life cycle
record keeping of event logs should be enabled, but at the minimum when the AI system
is in use.

A.7 Data for AI systems
Objective: To ensure that the organization understands the role and impacts of data in AI
systems in the application and development, provision or use of AI systems throughout
their life cycles.

A.7.2 Data for development and enhancement of AI system
Control: The organization shall define, document and implement data management
processes related to the development of AI systems.

A.7.3 Acquisition of data
Control: The organization shall determine and document details about the acquisition
and selection of the data used in AI systems.

A.7.4 Quality of data for AI systems
Control: The organization shall define and document requirements for data quality and
ensure that data used to develop and operate the AI system meet those requirements.

A.7.5 Data provenance
Control: The organization shall define and document a process for recording the
provenance of data used in its AI systems over the life cycles of the data and the AI
system.

A.7.6 Data preparation
Control: The organization shall define and document its criteria for selecting data
preparations and the data preparation methods to be used.

A.8 Information for interested parties of AI systems
Objective: To ensure that relevant interested parties have the necessary information to
understand and assess the risks and their impacts (both positive and negative).
A.8.2 System documentation and information for users
Control: The organization shall determine and provide the necessary information to users
of the AI system.

A.8.3 External reporting
Control: The organization shall provide capabilities for interested parties to report
adverse impacts of the AI system.

A.8.4 Communication of incidents
Control: The organization shall determine and document a plan for communicating
incidents to users of the AI system.

A.8.5 Information for interested parties
Control: The organization shall determine and document their obligations for reporting
information about the AI system to interested parties.

A.9 Use of AI systems
Objective: To ensure that the organization uses AI systems responsibly and per
organizational policies.

A.9.2 Processes for responsible use of AI systems
Control: The organization shall define and document the processes for the responsible
use of AI systems.

A.9.3 Objectives for responsible use of AI system
Control: The organization shall identify and document objectives to guide the responsible
use of AI systems.

A.9.4 Intended use of the AI system
Control: The organization shall ensure that the AI system is used according to the
intended uses of the AI system and its accompanying documentation.

A.10 Third-party and customer relationships
Objective: To ensure that the organization understands its responsibilities and remains
accountable, and risks are appropriately apportioned when third parties are involved at
any stage of the AI system life cycle.

A.10.2 Allocating responsibilities
Control: The organization shall ensure that responsibilities within their AI system life
cycle are allocated between the organization, its partners, suppliers, customers and third
parties.

A.10.3 Suppliers
Control: The organization shall establish a process to ensure that its usage of services,
products or materials provided by suppliers aligns with the organization’s approach to
the responsible development and use of AI systems.

A.10.4 Customers
Control: The organization shall ensure that its responsible approach to the development
and use of AI systems considers their customer expectations and needs.
ANNEX B
(normative)
Implementation guidance for AI controls
B.1 General
The implementation guidance documented in this annex relates to the controls
listed in Table A.1. It provides information to support the implementation of the
controls listed in Table A.1 and to meet the control objective, but organizations do
not have to document or justify inclusion or exclusion of implementation guidance
in the statement of applicability (see 6.1.3).
The implementation guidance is not always suitable or sufficient in all situations
and does not always fulfil the organization’s specific control requirements. The
organization can extend or modify the implementation guidance or define their
own implementation of a control according to their specific requirements and risk
treatment needs.
This annex is to be used as guidance for determining and implementing controls
for AI risk treatment in the AI management system defined in this document.
Additional organizational and technical controls other than those included in this
annex can be determined (see AI system management risk treatment in 6.1.3). This
annex can be regarded as a starting point for developing organization-specific
implementation of controls.
B.2 Policies related to AI
B.2.1 Objective
To provide management direction and support for AI systems according to
business requirements.
B.2.2 AI policy

Control
The organization should document a policy for the development or use of AI
systems.
Implementation guidance
The AI policy should be informed by:
x business strategy;
x organizational values and culture and the amount of risk the organization is
willing to pursue or retain;
x the level of risk posed by the AI systems;
x legal requirements, including contracts;
x the risk environment of the organization;
x impact to relevant interested parties (see 6.1.4).
The AI policy should include (in addition to requirements in 5.2):
x principles that guide all activities of the organization related to AI;
x processes for handling deviations and exceptions to policy.
The AI policy should consider topic-specific aspects where necessary to provide
additional guidance or provide cross-references to other policies dealing with
these aspects. Examples of such topics include:
x AI resources and assets;
x AI system impact assessments (see 6.1.4);
x AI system development.
Relevant policies should guide the development, purchase, operation and use of AI
systems.
B.2.3 Alignment with other organizational policies
Control
The organization should determine where other policies can be affected by or
apply to, the organization’s objectives with respect to AI systems.
Implementation guidance
Many domains intersect with AI, including quality, security, safety and privacy. The
organization should perform a thorough analysis to determine whether and where
current policies intersect with its AI objectives, and either update those policies if
updates are required or include provisions in the AI policy.
Other information
The policies that the governing body sets on behalf of the organization should
inform the AI policy. ISO/IEC 38507 provides guidance for members of the
governing body of an organization to enable and govern the AI system throughout
its life cycle.
B.2.4 Review of the AI policy
Control
The AI policy should be reviewed at planned intervals or additionally as needed to
ensure its continuing suitability, adequacy and effectiveness.
Implementation guidance
A role approved by management should be responsible for the development,
review and evaluation of the AI policy, or the components within. The review
should include assessing opportunities for improvement of the organization’s
policies and approach to managing AI systems in response to changes to the
organizational environment, business circumstances, legal conditions or technical
environment.
The review of AI policy should take the results of management reviews into
account.
B.3 Internal organization
B.3.1 Objective
To establish accountability within the organization to uphold its responsible
approach for the implementation, operation and management of AI systems.
B.3.2 AI roles and responsibilities
Control
Roles and responsibilities for AI should be defined and allocated according to the
needs of the organization.
Implementation guidance
Defining roles and responsibilities is critical for ensuring accountability
throughout the organization for its role with respect to the AI system throughout
its life cycle. The organization should consider AI policies, AI objectives and
identified risks when assigning roles and responsibilities, in order to ensure that
all relevant areas are covered. The organization can prioritize how the roles and
responsibilities are assigned. Examples of areas that can require defined roles and
responsibilities can include:
x risk management;
x AI system impact assessments;
x asset and resource management;
x security;
x safety;
x privacy;
x development;
x performance;
x human oversight;
x supplier relationships;
x ability to consistently fulfil legal requirements;
x data quality management (during the whole life cycle).
Responsibilities of the various roles should be defined to the level appropriate for
the individuals to perform their duties.
B.3.3 Reporting of concerns
Control
The organization should define and put in place a process to report concerns about
the organization’s role with respect to an AI system throughout its life cycle.
Implementation guidance
The reporting mechanism should fulfil the following functions:
a) options for confidentiality or anonymity or both;
b) available and promoted to employed and contracted persons;
c) staffed with qualified persons;
d) stipulates appropriate investigation and resolution powers for the persons
referred to in c);
e) provides for mechanisms to report and to escalate to management in a timely
manner;
f) provides for effective protection from reprisals for both the persons concerned
with reporting and investigation (e.g. by allowing reports to be made
anonymously and confidentially);
g) provides reports according to 4.4 and, if appropriate, e); while maintaining
confidentiality and anonymity in a), and respecting general business
confidentiality considerations;
h) provides response mechanisms within an appropriate time frame.
NOTE The organization can utilize existing reporting mechanisms as part of this
process.
Other information
In addition to the implementation guidance provided in this clause, the
organization should further consider ISO 37002.
B.4 Resources for AI systems
B.4.1 Objective
To ensure that the organization accounts for the resources (including AI system
components and assets) of the AI system in order to fully understand and address
risks and impacts.
B.4.2 Resource documentation
Control
The organization should identify and document relevant resources required for
the activities at given AI system life cycle stages and other AI-related activities
relevant for the organization.
Implementation guidance
Documentation of resources of the AI system is critical for understanding risks, as
well as potential AI system impacts (both positive and negative) to individuals or
groups of individuals, or both, and societies. The documentation of such resources
(which can utilize, for instance, data flow diagrams or system architecture
diagrams) can inform the AI system impact assessments (see B.5).
Resources can include, but are not limited to:
x AI system components;
x data resources, i.e. data used at any stage in the AI system life cycle;
x tooling resources (e.g. AI algorithms, models or tools);
x system and computing resources (e.g. hardware to develop and run AI
models, storage for data and tooling resources);
x human resources, i.e. people with the necessary expertise (e.g. for the
development, sales, training, operation and maintenance of the AI system)
in relation to the organization’s role throughout the AI system life cycle.
Resources can be provided by the organization itself, by its customers or by third
parties.
Other information
Documentation of resources can also help to determine if resources are available
and, if they are not available, the organization should revise the design
specification of the AI system or its deployment requirements.
B.4.3 Data resources
Control
As part of resource identification, the organization should document information
about the data resources utilized for the AI system.
Implementation guidance
Documentation on data should include, but is not limited to, the following topics:
— the provenance of the data;
— the date that the data were last updated or modified (e.g. date tag in metadata);
— for machine learning, the categories of data (e.g. training, validation, test and
production data);
— categories of data (e.g. as defined in ISO/IEC 19944-1);
— process for labelling data;
— intended use of the data;
— quality of data (e.g. as described in the ISO/IEC 5259 series);
— applicable data retention and disposal policies;
— known or potential bias issues in the data;
— data preparation.
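As a purely illustrative aid (not part of ISO/IEC 42001), the documentation items above can be captured in a machine-readable record; the field names and example values in the following Python sketch are assumptions an organization would replace with its own.

from dataclasses import dataclass, field
from typing import List

@dataclass
class DataResourceRecord:
    """Illustrative record for documenting one data resource of an AI system."""
    name: str                      # e.g. an internal data set identifier
    provenance: str                # where the data originated and how it was obtained
    last_updated: str              # ISO 8601 date of last update or modification
    ml_category: str               # training, validation, test or production data
    labelling_process: str         # how labels were produced and reviewed
    intended_use: str              # purpose the data may be used for
    quality_notes: str             # e.g. completeness, accuracy (cf. ISO/IEC 5259 series)
    retention_policy: str          # applicable retention and disposal policy
    known_bias_issues: List[str] = field(default_factory=list)
    preparation_steps: List[str] = field(default_factory=list)

# Example entry (values are illustrative only)
record = DataResourceRecord(
    name="customer_churn_training_set_v3",
    provenance="Exported from internal CRM; consent recorded per privacy policy",
    last_updated="2024-11-02",
    ml_category="training",
    labelling_process="Manual labelling by two analysts with adjudication",
    intended_use="Training of churn-prediction model only",
    quality_notes="0.4% missing values; class balance 1:9",
    retention_policy="Delete 24 months after model retirement",
    known_bias_issues=["Under-representation of customers aged 65+"],
    preparation_steps=["deduplication", "imputation of missing tenure values"],
)
print(record.name, record.ml_category)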
B.4.4 Tooling resources
Control
As part of resource identification, the organization should document information
about the tooling resources utilized for the AI system.
Implementation guidance
Tooling resources for an AI system, and particularly for machine learning, can
include but are not limited to:
x algorithm types and machine learning models;
x data conditioning tools or processes;
x optimization methods;
x evaluation methods;
x provisioning tools for resources;
x tools to aid model development;
x software and hardware for AI system design, development and deployment.
Other information
ISO/IEC 23053 provides detailed guidance on the types, methods and approaches
for various tooling resources for machine learning.
B.4.5 System and computing resources
Control
As part of resource identification, the organization should document information
about the system and computing resources utilized for the AI system.
Implementation guidance
Information about system and computing resources for an AI system can include
but is not limited to:
x resource requirements of the AI system (i.e. to help ensure the system can
run on constrained resource devices);
x where the system and computing resources are located (e.g. on-premises,
cloud computing or edge computing);
x processing resources (including network and storage);
x the impact of the hardware used to run the AI system workloads (e.g. the
impact to the environment either through use or the manufacturing of the
hardware or cost of using the hardware).
The organization should consider that different resources can be required to allow
continual improvement of AI systems. Development, deployment and operation of
the system can have different system needs and requirements.
NOTE ISO/IEC 22989 describes various system resource considerations.
B.4.6 Human resources
Control
As part of resource identification, the organization should document information
about the human resources and their competences utilized for the development,
deployment, operation, change management, maintenance, transfer and
decommissioning, as well as verification and integration of the AI system.
Implementation guidance
The organization should consider the need for diverse expertise and include the
types of roles necessary for the system. For example, the organization can include
specific demographic groups related to data sets used to train machine learning
models, if their inclusion is a necessary component of the system design. Necessary
human resources can include but are not limited to:
x data scientists;
x roles related to human oversight of AI systems;
x experts on trustworthiness topics such as safety, security and privacy;
x AI researchers and specialists, and domain experts relevant to the AI
systems.
Different resources can be necessary at different stages of the AI system life cycle.

B.5 Assessing impacts of AI systems

B.5.1 Objective
To assess AI system impacts to individuals or groups of individuals, or both, and
societies affected by the AI system throughout its life cycle.
B.5.2 AI system impact assessment process
Control
The organization should establish a process to assess the potential consequences
for individuals or groups of individuals, or both, and societies that can result from
the AI system throughout its life cycle.
Implementation guidance
Because AI systems potentially generate significant impact to individuals, groups
of individuals, or both, and societies, the organization that provides and uses such
systems should, based on the intended purpose and use of these systems, assess
the potential impacts of these systems on these groups.
The organization should consider whether an AI system affects:
x the legal position or life opportunities of individuals;
x the physical or psychological well-being of individuals;
x universal human rights;
x societies.
The organization’s procedures should include, but are not limited to:
a) circumstances under which an AI system impact assessment should be
performed, which can include, but are not limited to:
1) criticality of the intended purpose and context in which the AI system is
used or any significant changes to these;
2) complexity of AI technology and the level of automation of AI systems or any
significant changes to that;
3) sensitivity of data types and sources processed by the AI system or any
significant changes to that;
b) elements that are part of the AI system impact assessment process, which can
include:
1) identification (e.g. sources, events and outcomes);
2) analysis (e.g. consequences and likelihood);
3) evaluation (e.g. acceptance decisions and prioritization);
4) treatment (e.g. mitigation measures);
5) documentation, reporting and communication (see 7.4, 7.5 and B.3.3);
c) who performs the AI system impact assessment;
d) how the AI system impact assessment can be utilized [e.g. how it can inform the
design or use of the system (see B.6 and B.9), whether it can trigger reviews and
approvals];
e) individuals and societies that are potentially impacted based on the system’s
intended purpose, use and characteristics (e.g. assessment for individuals,
groups of individuals or societies).
Impact assessment should take various aspects of the AI system into account,
including the data used for the development of the AI system, the AI technologies
used and the functionality of the overall system.
The processes can vary based on the role of the organization and the domain of AI
application and depending on the specific disciplines for which the impact is
assessed (e.g. security, privacy and safety).
Other information
For some disciplines or organizations, detailed consideration of the impact on
individuals or groups of individuals, or both, and societies is part of risk
management, particularly in disciplines such as information security, safety and
environmental management. The organization should determine if discipline-
specific impact assessments performed as part of such a risk management process
sufficiently integrate AI considerations for those specific aspects (e.g. privacy).
NOTE ISO/IEC 23894 describes how an organization can perform impact analyses
for the organization itself, along with individuals or groups of individuals, or both,
and societies, as part of an overall risk management process.
B.5.3 Documentation of AI system impact assessments
Control
The organization should document the results of AI system impact assessments
and retain results for a defined period.
Implementation guidance
The documentation can be helpful in determining information that should be
communicated to users and other relevant interested parties.
AI system impact assessments should be retained and updated, as needed, in
alignment with the elements of an AI system impact assessment documented in
B.5.2. Retention periods can follow organization retention schedules or be
informed by legal requirements or other requirements. Items that the organization
should consider documenting can include, but are not limited to:
x the intended use of the AI system and any reasonable foreseeable misuse of
the AI system;
x positive and negative impacts of the AI system to the relevant individuals or
groups of individuals, or both, and societies;
x predictable failures, their potential impacts and measures taken to mitigate
them;
x relevant demographic groups the system is applicable to;
x complexity of the system;
x the role of humans in relation to the system, including human oversight
capabilities, processes and tools available to avoid negative impacts;
x employment and staff skilling.
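For illustration only, such documentation items can be retained as a structured, versioned record; the following Python sketch uses assumed field names and example values and is not a format prescribed by the standard.

import json
from datetime import date

# Illustrative only: one way to capture the documentation items listed above
# as a retained record. Field names are assumptions, not requirements.
impact_assessment = {
    "ai_system": "resume-screening-assistant",
    "assessment_date": date(2024, 11, 2).isoformat(),
    "retention_until": "2029-11-02",          # per organizational retention schedule
    "intended_use": "Rank applications for recruiter review",
    "reasonably_foreseeable_misuse": ["Fully automated rejection without human review"],
    "positive_impacts": ["Faster feedback to applicants"],
    "negative_impacts": ["Potential unfair ranking of under-represented groups"],
    "predictable_failures": [
        {"failure": "Parser drops non-Latin characters",
         "impact": "Systematic down-ranking of some applicants",
         "mitigation": "Unicode-aware parsing and targeted test set"},
    ],
    "relevant_demographic_groups": ["Applicants over 60", "Non-native speakers"],
    "system_complexity": "Gradient-boosted ranking model with NLP features",
    "human_oversight": "Recruiter reviews every ranked shortlist before decisions",
    "employment_and_skilling_effects": "Recruiters retrained on oversight procedure",
}

# Persist the record so it can be retained and updated over the defined period.
with open("impact_assessment_resume_screening.json", "w", encoding="utf-8") as f:
    json.dump(impact_assessment, f, indent=2)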
B.5.4 Assessing AI system impact on individuals or groups of individuals
Control
The organization should assess and document the potential impacts of AI systems
to individuals or groups of individuals throughout the system’s life cycle.
Implementation guidance
When assessing the impacts on individuals or groups of individuals, or both, and
societies, the organization should consider its governance principles, AI policies
and objectives. Individuals using the AI system or whose PII are processed by the
AI system, can have expectations related to the trustworthiness of the AI system.
Specific protection needs of groups such as children, impaired persons, elderly
persons and workers should be taken into account. The organization should
evaluate these expectations and consider the means to address them as part of the
system impact assessment.
Depending on the scope of AI system purpose and use, areas of impact to consider
as part of the assessment can include, but are not limited to:
x fairness;
x accountability;
x transparency and explainability;
x security and privacy;
x safety and health;
x financial consequences;
x accessibility;
x human rights.
Other information
Where necessary, the organization should consult experts (e.g. researchers,
subject matter experts and users) to obtain a full understanding of potential
impacts of the AI system on individuals or groups of individuals, or both, and
societies.
B.5.5 Assessing societal impacts of AI systems
Control
The organization should assess and document the potential societal impacts of
their AI systems throughout their life cycle.
Implementation guidance
Societal impacts can vary widely depending on the organization’s context and the
types of AI systems. The societal impacts of AI systems can be both beneficial and
detrimental. Examples of these potential societal impacts can include:
x environmental sustainability (including the impacts on natural resources and
greenhouse gas emissions);
x economic (including access to financial services, employment opportunities,
taxes, trade and commerce);
x government (including legislative processes, misinformation for political
gain, national security and criminal justice systems);
x health and safety (including access to healthcare, medical diagnosis and
treatment, and potential physical and psychological harms);
x norms, traditions, culture and values (including misinformation that leads
to biases or harms to individuals or groups of individuals, or both, and
societies).
Other information
Development and use of AI systems can be computationally intensive with related
impacts to environmental sustainability (e.g. greenhouse gas emissions due to
increased power usage, impacts on water, land, flora and fauna). Likewise, AI
systems can be used to improve the environmental sustainability of other systems
(e.g. reduce greenhouse gas emissions related to buildings and transportation).
The organization should consider the impacts of its AI systems in the context of its
overall environmental sustainability goals and strategies.
The organization should consider how its AI systems can be misused to create
societal harms and how they can be used to address historical harms. For example,
can AI systems prevent access to financial services such as loans, grants, insurance
and investments and likewise can AI systems improve access to these instruments?
AI systems have been used to influence the outcomes of elections and to create
misinformation (e.g. deepfakes in digital media) that can lead to political and social
unrest. Government’s use of AI systems for criminal-justice purposes has exposed
the risk of biases to societies, individuals or groups of individuals. The organization
should analyse how actors can misuse AI systems and how the AI systems can
reinforce unwanted historical social biases.
AI systems can be used to diagnose and treat illnesses and to determine
qualifications for health benefits. AI systems are also deployed in scenarios where
malfunctions can result in death or injury to humans (e.g. self-driving automobiles,
human-machine teaming). The organization should consider both the positive and
negative outcomes when using AI systems, such as in health and safety related
scenarios.
NOTE ISO/IEC TR 24368 provides a high-level overview of ethical and societal
concerns related to AI systems and applications.
B.6 AI system life cycle
B.6.1 Management guidance for AI system development
B.6.1.1 Objective
To ensure that the organization identifies and documents objectives and
implements processes for the responsible design and development of AI systems.
B.6.1.2 Objectives for responsible development of AI system
Control
The organization should identify and document objectives to guide the responsible
development of AI systems, and take those objectives into account and integrate
measures to achieve them in the development life cycle.
Implementation guidance
The organization should identify objectives (see 6.2) that affect the AI system
design and development processes. These objectives should be taken into account
in the design and development processes. For example, if an organization defines
“fairness” as one objective, this should be incorporated in the requirements
specification, data acquisition, data conditioning, model training, verification and
validation, etc. The organization should provide requirements and guidelines as
necessary to ensure that measures are integrated into the various stages (e.g. the
requirement to use a specific testing tool or method to address unfairness or
unwanted bias) to achieve such objectives.
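As a hedged illustration of integrating such a measure into the development life cycle, the following Python sketch computes a simple demographic parity difference on validation outputs; the group labels, example predictions and acceptance threshold are assumptions an organization would define for itself.

from typing import Sequence

def demographic_parity_difference(preds: Sequence[int], groups: Sequence[str]) -> float:
    """Largest difference in positive-prediction rates between any two groups."""
    rates = {}
    for g in set(groups):
        selected = [p for p, gg in zip(preds, groups) if gg == g]
        rates[g] = sum(selected) / len(selected)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Illustrative validation outputs: 1 = positive decision, 0 = negative decision
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
group_labels = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(predictions, group_labels)
MAX_ACCEPTABLE_GAP = 0.2   # assumed organizational threshold for this objective

print(f"Demographic parity difference: {gap:.2f}")
if gap > MAX_ACCEPTABLE_GAP:
    print("Fairness objective not met - investigate data and model before release.")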
Other information
AI techniques are being used to augment security measures such as threat
prediction, detection and prevention of security attacks. This is an application of AI
techniques that can be used to reinforce security measures to protect both AI
systems and conventional non-AI based software systems. Annex C provides
examples of organizational objectives for managing risk, which can be useful in
determining the objectives for AI system development.
B.6.1.3 Processes for responsible design and development of AI systems
Control
The organization should define and document the specific processes for the
responsible design and development of the AI system.
Implementation guidance
Responsible development for AI system processes should include consideration of,
without limitation, the following:
x life cycle stages (a generic AI system life cycle model is provided by ISO/IEC
22989, but the organization can specify their own life cycle stages);
x testing requirements and planned means for testing;
x human oversight requirements, including processes and tools, especially
when the AI system can impact natural persons;
x at what stages AI system impact assessments should be performed;
x training data expectations and rules (e.g. what data can be used, approved
data suppliers and labelling);
x expertise (subject matter domain or other) required or training for
developers of AI systems or both;
x release criteria;
x approvals and sign-offs necessary at various stages;
x change control;
x usability and controllability;
x engagement of interested parties.
The specific design and development processes depend on the functionality and
the AI technologies that are intended to be used for the AI system.
B.6.2 AI system life cycle
B.6.2.1 Objective
To define the criteria and requirements for each stage of the AI system life cycle.
B.6.2.2 AI system requirements and specification
Control
The organization should specify and document requirements for new AI systems
or material enhancements to existing systems.
Implementation guidance
The organization should document the rationale for developing an AI system and
its goals. Some of the factors that should be considered, documented and
understood can include:
a) why the AI system is to be developed, for example, is this driven by a business
case, customer request or by government policy;
b) how the model can be trained and how data requirements can be achieved.
AI system requirements should be specified and should span the entire AI system
life cycle. Such requirements should be revisited in cases where the developed AI
system is unable to operate as intended or new information arises that can be used
to change and to improve the requirements. For instance, it can become unfeasible
from a financial perspective to develop the AI system.
Other information
The processes for describing the AI system life cycle are provided by ISO/IEC 5338.
For more information about human-centred design for interactive systems, see ISO
9241-210.
B.6.2.3 Documentation of AI system design and development
Control
The organization should document the AI system design and development based
on organizational objectives, documented requirements and specification criteria.
Implementation guidance
There are many design choices necessary for an AI system, including, but not
limited to:
x machine learning approach (e.g. supervised vs. unsupervised);
x learning algorithm and type of machine learning model utilized;
x how the model is intended to be trained and the required data quality (see B.7);
x evaluation and refinement of models;
x hardware and software components;
x security threats considered throughout the AI system life cycle; security
threats specific to AI systems include data poisoning, model stealing or
model inversion attacks;
x interface and presentation of outputs;
x how humans can interact with the system;
x interoperability and portability considerations.
There can be multiple iterations between design and development, but
documentation on each stage should be maintained and a final system architecture
documentation should be available.
Other information
For more information about human-centred design for interactive systems, see ISO
9241-210.
B.6.2.4 AI system verification and validation
Control
The organization should define and document verification and validation
measures for the AI system and specify criteria for their use.
Implementation guidance
The verification and validation measures can include, but are not limited to:
x testing methodologies and tools;
x selection of test data and their representation of the intended domain of use;
x release criteria requirements.
The organization should define and document evaluation criteria such as, but not
limited to:
x a plan to evaluate the AI system components and the whole AI system for
risks related to impacts on individuals or groups of individuals, or both, and
societies;
x the evaluation plan can be based on, for example:
x reliability and safety requirements of the AI system, including acceptable
error rates for the AI system performance;
x responsible AI system development and use objectives such as those in
B.6.1.2 and B.9.3;
x operational factors such as quality of data, intended use, including
acceptable ranges of each operational factor;
x any intended uses which can require more rigorous operational factors to
be defined, including different acceptable ranges for operational factors or
lower error rates;
x the methods, guidance or metrics to be used to evaluate whether relevant
interested parties who make decisions or are subject to decisions based on
the AI system outputs can adequately interpret the AI system outputs. The
frequency of evaluation should be determined and can be based upon
results from an AI system impact assessment;
x any acceptable factors that can account for an inability to meet a target
minimum performance level, especially when the AI system is evaluated for
impacts on individuals or groups of individuals, or both, and societies (e.g.
poor image resolution for computer vision systems or background noise
affecting speech recognition systems). Mechanisms to deal with poor AI
system performance as a result of these factors should also be documented.
The AI system should be evaluated against the documented criteria for evaluation.
Where the AI system cannot meet the documented criteria for evaluation,
especially against responsible AI system development and use objectives (see
B.6.1.2 and B.9.3), the organization should reconsider or manage the deficiencies
of the intended use of the AI system, its performance requirements and how the
organization can effectively address the impacts to individuals or groups of
individuals, or both, and societies.
NOTE Further information on how to deal with robustness of neural networks can
be found in ISO/IEC TR 24029-1.
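The following Python sketch is an illustrative example only of checking documented evaluation criteria before release; the metrics, thresholds and test data are assumptions standing in for an organization's own verification and validation measures.

# Illustrative sketch only: checking an AI system against documented
# verification and validation criteria before release.

def error_rate(y_true, y_pred):
    """Fraction of test cases where the prediction does not match the label."""
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

# Documented evaluation criteria (example values chosen by the organization)
criteria = {
    "max_overall_error_rate": 0.05,
    "max_subgroup_error_rate": 0.08,   # stricter scrutiny for impacted groups
}

# Held-out test data selected to represent the intended domain of use
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1]
y_pred = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1]
subgroup = y_true[:10], y_pred[:10]     # e.g. cases from one demographic group

results = {
    "overall_error_rate": error_rate(y_true, y_pred),
    "subgroup_error_rate": error_rate(*subgroup),
}

passed = (results["overall_error_rate"] <= criteria["max_overall_error_rate"]
          and results["subgroup_error_rate"] <= criteria["max_subgroup_error_rate"])

print(results, "-> release criteria met" if passed else "-> criteria not met")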
B.6.2.5 AI system deployment
Control
The organization should document a deployment plan and ensure that appropriate
requirements are met prior to deployment.
Implementation guidance
AI systems can be developed in various environments and deployed in others (such
as developed on premises and deployed using cloud computing) and the
organization should take these differences into account for the deployment plan.
The organization should also consider whether components are deployed
separately (e.g. software and model can be deployed independently). Additionally,
the organization should have a set of requirements to be met prior to release and
deployment (sometimes referred to as “release criteria”). This can include
verification and validation measures that are to be passed, performance metrics
that are to be met, user testing to be completed, as well as management approvals
and sign-offs to be obtained. The deployment plan should take into account the
perspectives of and impacts to relevant interested parties.
B.6.2.6 AI system operation and monitoring
Control
The organization should define and document the necessary elements for the
ongoing operation of the AI system. At the minimum this should include system
and performance monitoring, repairs, updates and support.
Implementation guidance
Each minimum activity for operation and monitoring can take account of various
considerations. For example:
x System and performance monitoring can include monitoring for general
errors and failures, as well as for whether the system is performing as
expected with production data. Technical performance criteria can include
success rates in resolving problems or in achieving tasks, or confidence
rates. Other criteria can be related to meeting commitment or expectation
and needs of interested parties, including, for example, ongoing monitoring
to ensure compliance with customer requirements or applicable legal
requirements.
x Some deployed AI systems evolve their performance as a result of ML, where
production data and output data are used to further train the ML model.
Where continuous learning is used, the organization should monitor the
performance of the AI system to ensure that it continues to meet its design
goals and operates on production data as intended.
x The performance of some AI systems can change even if such systems do not
use continuous learning, usually due to concept or data drift in production
data. In such cases, monitoring can identify the need for retraining to ensure
that the AI system continues to meet its design goals and operates on
production data as intended. More information can be found in ISO/IEC
23053.
x Repairs can include responses to errors and failures in the system. The
organization should have processes in place for the response and repair of
these issues. Additionally, updates can be necessary as the system evolves
or as critical issues are identified, or as the result of externally identified
issues (e.g. non-compliance with customer expectations or legal
requirement). There should be processes in place for updating the system
including components affected, update schedule, information to users on
what is included in the update.
x System updates can also include changes in the system operations, new or
modified intended uses, or other changes in system functionality. The
organization should have procedures in place to address operational
changes, including communication to users.
x Support for the system can be internal, external or both, depending on the
needs of the organization and how the system was acquired. Support
processes should consider how users can contact the appropriate help, how
issues and incidents are reported, support service level agreements and
metrics.
x Where AI systems are being used for purposes other than those for which
they were designed or in ways that were not anticipated, the
appropriateness of such uses should be considered.
x AI-specific information security threats related to the AI systems applied
and developed by the organization should be identified. AI-specific
information security threats include, but are not limited to data poisoning,
model stealing and model inversion attacks.
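As a minimal sketch of such monitoring (assuming a single numeric input feature, a recorded training baseline and an organization-defined alerting threshold), the following Python example flags a possible data drift condition:

import statistics

# Illustrative monitoring sketch: compare a production feature distribution
# against its training baseline to flag possible data drift. The feature,
# baseline values and threshold are assumptions, not values from the standard.

training_baseline = {"mean": 42.0, "stdev": 5.0}       # recorded at release time
production_window = [44.1, 47.3, 51.2, 49.8, 52.6, 50.4, 48.9, 53.1]  # recent inputs

prod_mean = statistics.mean(production_window)
# Express the shift in units of the training standard deviation
shift = abs(prod_mean - training_baseline["mean"]) / training_baseline["stdev"]

DRIFT_THRESHOLD = 1.0   # assumed alerting threshold set by the organization

print(f"Production mean {prod_mean:.1f}, shift {shift:.2f} sigma")
if shift > DRIFT_THRESHOLD:
    print("Possible data drift - consider retraining or review per operating procedures.")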
Other information
The organization should consider operational performance that can affect
interested parties and consider this when designing and determining performance
criteria.
Performance criteria for AI systems in operation should be determined by the task
under consideration, such as classification, regression, ranking, clustering or
dimensionality reduction.
Performance criteria can include statistical aspects such as error rates and
processing duration. For each criterion, the organization should identify all
relevant metrics as well as interdependences between metrics. For each metric,
the organization should consider acceptable values based on, for example, domain
expert’s recommendations and analysis of expectations of interested parties
relative to existing non-AI practices.
For example, an organization can determine that the F1 score is an appropriate
performance metric based on its assessment of the impact of false positives and
false negatives, as described in ISO/IEC TS 4213. The organization can then
establish an F1 value that the AI system is expected to meet. It should be evaluated
if these issues can be handled by existing measures. If that is not the case, changes
to existing measures should be considered or additional measures should be
defined to detect and handle these issues.
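For illustration, the following Python sketch computes the F1 score from a small set of example labels and predictions and compares it against an assumed target value; the example data and the target are not taken from ISO/IEC TS 4213.

# Illustrative only: computing the F1 score from true/false positives and
# false negatives, then comparing it with an assumed target value.

y_true = [1, 1, 1, 0, 0, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1, 0, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

F1_TARGET = 0.80   # assumed value the organization expects the system to meet

print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
print("meets target" if f1 >= F1_TARGET else "below target - review measures")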
The organization should consider the performance of non-AI systems or processes
in operation and use them as potentially relevant context when establishing
performance criteria.
The organization should additionally ensure that the means and processes used to
evaluate the AI system, including, where applicable, the selection and management
of evaluation data, improve the completeness and the reliability in assessment of
its performance with respect to the defined criteria.
Development of performance assessment methodologies can be based on criteria,
metrics and values. These should inform the amount of data and the types of
processes used in the assessment and the roles and expertise of personnel that
carries out the assessment.
Performance assessment methodologies should reflect attributes and
characteristics of operation and use as closely as possible to ensure that
assessment results are useful and relevant. Some aspects of performance
assessment can require controlled introduction of erroneous or spurious data or
processes to assess impact on performance.
The quality model in ISO/IEC 25059 can be used to define performance criteria.
B.6.2.7 AI system technical documentation
Control
The organization should determine what AI system technical documentation is
needed for each relevant category of interested parties, such as users, partners,
supervisory authorities, and provide the technical documentation to them in the
appropriate form.
Implementation guidance
The AI system technical documentation can include, but is not limited to the
following elements:
x a general description of the AI system including its intended purpose;
x usage instructions;
x technical assumptions about its deployment and operation (run-time
environment, related software and hardware capabilities, assumptions
made on data, etc.);
x technical limitations (e.g. acceptable error rates, accuracy, reliability,
robustness);
x monitoring capabilities and functions that allow users or operators to
influence the system operation.
Documentation elements related to all AI system life cycle stages (as defined in
ISO/IEC 22989) can include, but are not limited to:
x design and system architecture specification;
x design choices made and quality measures taken during the system
development process;
x information about the data used during system development;
x assumptions made and quality measures taken on data quality (e.g.
assumed statistical distributions);
x management activities (e.g. risk management) taken during development or
operation of the AI system;
x verification and validation records;
x changes made to the AI system when it is in operation;
x impact assessment documentation as described in B.5.
The organization should document technical information related to the
responsible operation of the AI system. This can include, but is not limited to:
x documenting a plan for managing failures. This can include for example, the
need to describe a rollback plan for the AI system, turning off features of the
AI system, an update process or a plan for notifying customers, users, etc. of
changes to the AI system, updated information on system failures and how
these can be mitigated;
x documenting processes for monitoring the health of the AI system (i.e. the
AI system operates as intended and within its normal operating margins,
also referred to as observability) and processes for addressing AI system
failures;
x documenting standard operating procedures for the AI system, including
which events should be monitored and how event logs are prioritized and
reviewed. It can also include how to investigate failures and the prevention
of failures;
x documenting the roles of personnel responsible for operation of the AI
system as well as those responsible for accountability of the system use,
especially in relation to handling the effects of AI system failures or
managing updates to the AI system;
x documenting system updates like changes in the system operations, new or
modified intended uses, or other changes in system functionality.
The organization should have procedures in place to address operational changes,
including communication to users and internal evaluations on the type of change.
Documentation should be up to date and accurate. Documentation should be
approved by the relevant management within the organization.
When provided as part of the user documentation, the controls provided in Table
A.1 should be taken into account.

B.6.2.8 AI system recording of event logs

Control
The organization should determine at which phases of the AI system life cycle,
record keeping of event logs should be enabled, but at the minimum when the AI
system is in use.
Implementation guidance
The organization should ensure logging for AI systems it deploys to automatically
collect and record event logs related to certain events that occur during operation.
Such logging can include but is not limited to:
x traceability of the AI system’s functionality to ensure that the AI system is
operating as intended;
x detection of the AI system’s performance outside of the AI system’s intended
operating conditions that can result in undesirable performance on
production data or impacts to relevant interested parties through
monitoring of the operation of the AI system.
AI system event logs can include information such as the time and date each time
the AI system is used, the production data on which the AI system operates, the
outputs that fall outside the range of the intended operation of the AI system, etc.
Event logs should be kept for as long as required for the intended use of the AI
system and within the data retention policies of the organization. Legal
requirements related to data retention can apply.
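A minimal logging sketch is shown below for illustration; the event fields, the out-of-range rule and the log destination are assumptions, not requirements of the standard.

import json
import logging
from datetime import datetime, timezone

# Illustrative sketch of structured event logging while an AI system is in use.

logging.basicConfig(filename="ai_system_events.log", level=logging.INFO,
                    format="%(message)s")

def log_inference_event(input_summary: str, output_value: float,
                        expected_range: tuple) -> None:
    """Record one inference event, flagging outputs outside the intended range."""
    low, high = expected_range
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": "inference",
        "input_summary": input_summary,        # avoid logging raw personal data
        "output": output_value,
        "out_of_intended_range": not (low <= output_value <= high),
    }
    logging.info(json.dumps(event))

# Example calls during operation (values are illustrative)
log_inference_event("loan application, 12 features, no missing values", 0.37, (0.0, 1.0))
log_inference_event("loan application, 12 features, 2 imputed values", 1.42, (0.0, 1.0))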
Other information
Some AI systems, such as biometric identification systems, can have additional
logging requirements depending on jurisdiction. Organizations should be aware of
these requirements.
B.7 Data for AI systems
B.7.1 Objective
To ensure that the organization understands the role and impacts of data in AI
systems in the application and development, provision or use of AI systems
throughout their life cycles.
B.7.2 Data for development and enhancement of AI system
Control
The organization should define, document and implement data management
processes related to the development of AI systems.
Implementation guidance
Data management can include various topics such as, but not limited to:
x privacy and security implications due to the use of data, some of which can
be sensitive in nature;
x security and safety threats that can arise from data dependent AI system
development;
x transparency and explainability aspects including data provenance and the
ability to provide an explanation of how data are used for determining an AI
system’s output if the system requires transparency and explainability;
x representativeness of training data compared to operational domain of use;
x accuracy and integrity of the data.
NOTE Detailed information of AI system life cycle and data management concepts
is provided by ISO/IEC 22989.
B.7.3 Acquisition of data
Control
The organization should determine and document details about the acquisition
and selection of the data used in AI systems.
Implementation guidance
The organization can need different categories of data from different sources
depending on the scope and use of their AI systems. Details for data acquisition
can include:
x categories of data needed for the AI system;
x quantity of data needed;
x data sources (e.g. internal, purchased, shared, open data, synthetic);
x characteristics of the data source (e.g. static, streamed, gathered, machine
generated);
x data subject demographics and characteristics (e.g. known or potential
biases or other systematic errors);
x prior handling of the data (e.g. previous uses, conformity with privacy and
security requirements);
x data rights (e.g. PII, copyright);
x associated meta data (e.g. details of data labelling and enhancing);
x provenance of the data.
Other information
The data categories and a structure for the data use in ISO/IEC 19944-1 can be
used to document details about data acquisition and use.
B.7.4 Quality of data for AI systems
Control
The organization should define and document requirements for data quality and
ensure that data used to develop and operate the AI system meet those
requirements.
Implementation guidance
The quality of data used to develop and operate AI systems potentially has
significant impacts on the validity of the system’s outputs. ISO/IEC 25024 defines
data quality as the degree to which the characteristics of data satisfy stated and
implied needs when used under specified conditions. For AI systems that use
supervised or semi-supervised machine learning, it is important that the quality of
training, validation, test and production data are defined, measured and improved
to the extent possible, and the organization should ensure that the data are suitable
for its intended purpose. The organization should consider the impact of bias on
system performance and system fairness and make such adjustments as necessary
to the model and data used to improve performance and fairness so they are
acceptable for the use case.
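For illustration only, the following Python sketch checks a toy data set against assumed data quality requirements (completeness and class balance); real requirements would be those the organization has documented.

# Illustrative sketch only: checking a training data set against documented
# data quality requirements. The requirements and the toy data are assumptions.

rows = [
    {"age": 34, "income": 52000, "label": 0},
    {"age": None, "income": 61000, "label": 1},
    {"age": 45, "income": None, "label": 0},
    {"age": 29, "income": 48000, "label": 0},
    {"age": 52, "income": 75000, "label": 1},
]

requirements = {
    "max_missing_fraction": 0.10,      # per field
    "min_minority_class_fraction": 0.20,
}

def missing_fraction(field: str) -> float:
    return sum(1 for r in rows if r[field] is None) / len(rows)

checks = {
    "age_missing": missing_fraction("age") <= requirements["max_missing_fraction"],
    "income_missing": missing_fraction("income") <= requirements["max_missing_fraction"],
}
minority = min(sum(r["label"] == c for r in rows) for c in (0, 1)) / len(rows)
checks["class_balance"] = minority >= requirements["min_minority_class_fraction"]

for name, ok in checks.items():
    print(f"{name}: {'OK' if ok else 'requirement not met'}")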
Other information
Additional information regarding data quality is available in the ISO/IEC 5259
series on data quality for analytics and ML. Additional information regarding
different forms of bias in data used in AI systems is available in ISO/IEC TR 24027.
B.7.5 Data provenance
Control
The organization should define and document a process for recording the
provenance of data used in its AI systems over the life cycles of the data and the AI
system.
Implementation guidance
According to ISO 8000-2, a record of data provenance can include information
about the creation, update, transcription, abstraction, validation and transferring
of the control of data. Additionally, data sharing (without transfer of control) and
data transformations can be considered under data provenance. Depending on
factors such as the source of the data, its content and the context of its use,
organizations should consider whether measures to verify the provenance of the
data are needed.
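As an illustrative sketch (not a prescribed format), the following Python example keeps an append-only provenance trail covering creation, transformation and transfer-of-control events of the kinds mentioned above; all names and values are assumptions.

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List

@dataclass
class ProvenanceEvent:
    timestamp: str
    event_type: str      # e.g. "creation", "update", "transformation", "transfer"
    actor: str           # person, team or system responsible
    description: str

provenance_log: List[ProvenanceEvent] = []

def record(event_type: str, actor: str, description: str) -> None:
    """Append one provenance event; the log is never edited in place."""
    provenance_log.append(ProvenanceEvent(
        timestamp=datetime.now(timezone.utc).isoformat(),
        event_type=event_type,
        actor=actor,
        description=description,
    ))

# Example events over the life cycle of one data set (illustrative only)
record("creation", "data-engineering", "Extracted claims data from core system")
record("transformation", "data-science", "Removed direct identifiers; normalized amounts")
record("transfer", "data-science", "Control transferred to model-development team")

for e in provenance_log:
    print(e.timestamp, e.event_type, e.actor, "-", e.description)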

B.7.6 Data preparation

Control
The organization should define and document its criteria for selecting data
preparations and the data preparation methods to be used.
Implementation guidance
Data used in an AI system ordinarily needs preparation to make it usable for a
given AI task. For example, machine learning algorithms are sometimes intolerant
of missing or incorrect entries, non-normal distributions and widely varying scales.
Preparation methods and transforms can be used to increase the quality of the
data. Failure to properly prepare the data can potentially lead to AI system errors.
Common preparation methods and transformations for data used in AI systems
include:
x statistical exploration of the data (e.g. distribution, mean, median, standard
deviation, range, stratification, sampling) and statistical metadata (e.g. data
documentation initiative (DDI) specification[28]);
x cleaning (i.e. correcting entries, dealing with missing entries);
x imputation (i.e. methods for filling in missing entries);
x normalization;
x scaling;
x labelling of the target variables;
x encoding (e.g. converting categorical variables to numbers).
For a given AI task, the organization should document its criteria for selecting specific data preparation methods and transforms, as well as the specific methods and transforms used in the AI task; a brief illustrative sketch of several common steps is given after the note below.
NOTE For additional information on data preparation specific to machine learning
see the ISO/IEC 5259 series and ISO/IEC 23053.
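The following minimal sketch (Python, using scikit-learn) illustrates several of the preparation steps listed above: imputation of missing entries, scaling of numeric features and encoding of categorical variables. The column names and method choices are illustrative assumptions only; the organization's documented criteria determine the actual methods.

    # Minimal sketch of several preparation steps listed above (imputation,
    # scaling, encoding); column names and method choices are illustrative.
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.impute import SimpleImputer
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    numeric_cols = ["age", "income"]            # assumed numeric features
    categorical_cols = ["employment_type"]      # assumed categorical feature

    preparation = ColumnTransformer([
        ("numeric", Pipeline([
            ("impute", SimpleImputer(strategy="median")),   # fill missing entries
            ("scale", StandardScaler()),                    # bring features to a common scale
        ]), numeric_cols),
        ("categorical", Pipeline([
            ("impute", SimpleImputer(strategy="most_frequent")),
            ("encode", OneHotEncoder(handle_unknown="ignore")),  # categorical -> numeric
        ]), categorical_cols),
    ])

    raw = pd.DataFrame({
        "age": [34, None, 51],
        "income": [42000.0, 58000.0, None],
        "employment_type": ["salaried", "self-employed", "salaried"],
    })
    print(preparation.fit_transform(raw))   # document the selected methods alongside the code

Recording the selected transformers, their parameters and the rationale for choosing them satisfies the documentation expectation of this control.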
B.8 Information for interested parties
B.8.1 Objective
To ensure that relevant interested parties have the necessary information to
understand and assess the risks and their impacts (both positive and negative).
B.8.2 System documentation and information for users
Control
The organization should determine and provide the necessary information to
users of the system.

Implementation guidance
Information about the AI system can include both technical details and
instructions, as well as general notifications to users that they are interacting with
an AI system, depending on the context. This can also include the system itself, as

well as potential outputs of the system (e.g. notifying users that an image is created
by AI).
Although AI systems can be complex, it is critical that users are able to understand when they are interacting with an AI system and how the system works. Users also need to understand its intended purpose and intended uses, and its potential to cause harm or to benefit the user. Some system documentation can necessarily be targeted at more technical users (e.g. system administrators), and the organization should understand the needs of different interested parties and what understandability can mean to them. The information should also be accessible, both in terms of the ease of finding it and for users who can need additional accessibility features.
Information that can be provided to users includes, but is not limited to:
- purpose of the system;
- that the user is interacting with an AI system;
- how to interact with the system;
- how and when to override the system;
- technical requirements for system operation, including the computational resources needed, and limitations of the system as well as its expected lifetime;
- needs for human oversight;
- information about accuracy and performance;
- relevant information from the impact assessment, including potential benefits and harms, particularly if they are applicable in specific contexts or certain demographic groups (see B.5.2 and B.5.4);
- revisions to claims about the system's benefits;
- updates and changes in how the system works, as well as any necessary maintenance measures, including their frequency;
- contact information;
- educational materials for system use.
Criteria used by the organization to determine whether and what information is to
be provided should be documented. Relevant criteria include but are not limited
to the intended use and reasonably foreseeable misuse of the AI system, the
expertise of the user and the specific impact of the AI system.
Information can be provided to users in numerous ways, including documented
instructions for use, alerts and other notifications built into the system itself,
information on a web page, etc. Depending on which methods the organization
uses to provide information, it should validate that the users have access to this

information, and that the information provided is complete, up to date and
accurate.
B.8.3 External reporting
Control
The organization should provide capabilities for interested parties to report
adverse impacts of the system.
Implementation guidance
While the system operation should be monitored for reported issues and failures,
the organization should also provide capabilities for users or other external parties
to report adverse impacts (e.g. unfairness).
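A minimal sketch of such a reporting capability is shown below (Python). The fields, categories and in-memory storage are illustrative placeholders; in practice reports would typically feed an existing ticketing or case-management process.

    # Minimal sketch: recording an adverse-impact report from a user or other
    # external party. Fields, categories and in-memory storage are placeholders.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from uuid import uuid4

    @dataclass
    class AdverseImpactReport:
        system: str
        description: str
        reporter_contact: str = "anonymous"
        category: str = "unspecified"   # e.g. "unfairness", "safety", "privacy"
        received_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
        report_id: str = field(default_factory=lambda: uuid4().hex)

    reports = []   # stand-in for a ticketing or case-management system

    def submit_report(system, description, **kwargs):
        report = AdverseImpactReport(system=system, description=description, **kwargs)
        reports.append(report)          # downstream: triage, impact assessment, response
        return report

    print(submit_report("loan-triage-assistant",
                        "Scores appear lower for applicants from region X",
                        category="unfairness"))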
B.8.4 Communication of incidents
Control
The organization should determine and document a plan for communicating
incidents to users of the system.
Implementation guidance
Incidents related to the AI system can be specific to the AI system itself, or related
to information security or privacy (e.g. a data breach). The organization should
understand its obligations around notifying users and other interested parties about
incidents, depending on the context in which the system operates. For example, an
incident with an AI component that is part of a product that affects safety can have
different notification requirements than other types of systems. Legal
requirements (such as contracts) and regulatory activity can apply, which can
specify requirements for:
- types of incidents that must be communicated;
- the timeline for notification;
- whether and which authorities must be notified;
- the details required to be communicated.
The organization can integrate incident response and reporting activities for AI
into its broader organizational incident management activities, but should be
aware of unique requirements related to AI systems, or individual components of
AI systems (e.g. a PII data breach in training data for the system can have different
reporting requirements related to privacy).
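The following minimal sketch (Python) shows one way the requirement categories above could be recorded per incident type. The incident types, timelines and recipients are placeholders only and are not legal guidance; actual values depend on applicable contracts and regulations.

    # Minimal sketch: mapping incident types to notification requirements.
    # Incident types, timelines and recipients are placeholders, not legal advice.
    notification_plan = {
        "pii_breach_in_training_data": {
            "notify": ["affected users", "privacy regulator"],
            "deadline_hours": 72,       # placeholder; set per applicable law or contract
            "details_required": ["nature of breach", "data categories", "mitigations"],
        },
        "safety_relevant_malfunction": {
            "notify": ["affected users", "product safety authority"],
            "deadline_hours": 24,       # placeholder
            "details_required": ["incident description", "interim measures"],
        },
        "model_performance_degradation": {
            "notify": ["internal AI governance board"],
            "deadline_hours": None,     # internal escalation only in this example
            "details_required": ["metrics affected", "suspected cause"],
        },
    }

    def notification_requirements(incident_type):
        # Unknown incident types default to internal escalation for assessment.
        return notification_plan.get(incident_type,
                                     {"notify": ["internal AI governance board"]})

    print(notification_requirements("pii_breach_in_training_data"))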
Other information
ISO/IEC 27001 and ISO/IEC 27701 provide additional details on incident
management for security and privacy respectively.

B.8.5 Information for interested parties
Control
The organization should determine and document its obligations for reporting
information about the AI system to interested parties.
Implementation guidance
In some cases, a jurisdiction can require information about the system to be shared
with authorities such as regulators. Information can be reported to interested
parties such as customers or regulatory authorities within the appropriate
timeframe. The information shared can include, for example:
- technical system documentation, including, but not limited to, data sets for training, validation and testing, as well as justifications for algorithmic choices and verification and validation records;
- risks related to the system;
- results of impact assessments;
- logs and other system records.
The organization should understand its obligations in this respect and ensure
that the appropriate information is shared with the correct authorities.
Additionally, it is presupposed that the organization is aware of jurisdictional
requirements related to information shared with law enforcement authorities.
B.9 Use of AI systems
B.9.1 Objective
To ensure that the organization uses AI systems responsibly and per
organizational policies.
B.9.2 Processes for responsible use of AI systems
Control
The organization should define and document the processes for the responsible
use of AI systems.

Implementation guidance
Depending on its context, the organization can have many considerations for
determining whether to use a particular AI system. Whether the AI system is
developed by the organization itself or sourced from a third party, the organization
should be clear on what these considerations are and develop policies to address
them. Some examples are:

- required approvals;
- cost (including for ongoing monitoring and maintenance);
- approved sourcing requirements;
- legal requirements applicable to the organization.
Where the organization has accepted policies for the use of other systems, assets,
etc., these policies can be incorporated if desired.
B.9.3 Objectives for responsible use of AI system
Control
The organization should identify and document objectives to guide the responsible
use of AI systems.
Implementation guidance
Organizations operating in different contexts can have different expectations and objectives for what constitutes the responsible use of AI systems.
Depending on its context, the organization should identify its objectives related to
responsible use. Some objectives include:
- fairness;
- accountability;
- transparency;
- explainability;
- reliability;
- safety;
- robustness and redundancy;
- privacy and security;
- accessibility.
Once defined, the organization should implement mechanisms to achieve its
objectives within the organization. This can include determining if a third-party
solution fulfils the organization’s objectives or if an internally developed solution
is applicable for the intended use. The organization should determine at which
stages of the AI system life cycle meaningful human oversight objectives should be
incorporated. This can include:
- involving human reviewers to check the outputs of the AI system, including having authority to override decisions made by the AI system;
- ensuring that human oversight is included if required for acceptable use of the AI system according to instructions or other documentation associated with the intended deployment of the AI system;

- monitoring the performance of the AI system, including the accuracy of the AI system outputs;
- reporting concerns related to the outputs of the AI system and their impact to relevant interested parties;
- reporting concerns with changes in the performance or ability of the AI system to make correct outputs on the production data;
- considering whether automated decision-making is appropriate for a responsible approach to the use of an AI system and the intended use of the AI system.
The need for human oversight can be informed by the AI system impact
assessments (see B.5). The personnel involved in human oversight activities
related to the AI system should be informed of, trained on and understand the instructions and other documentation for the AI system and the duties they carry
out to satisfy human oversight objectives. When reporting performance issues,
human oversight can augment automated monitoring.
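The following minimal sketch (Python) illustrates one of the oversight mechanisms listed above: routing low-confidence AI outputs to a human reviewer who can override them. The confidence threshold and the stand-in prediction and review functions are assumptions for illustration only.

    # Minimal sketch of one oversight mechanism listed above: routing
    # low-confidence AI outputs to a human reviewer with authority to override.
    # The threshold and the stand-in predict/review functions are illustrative.
    CONFIDENCE_THRESHOLD = 0.80   # illustrative value, informed by the impact assessment

    def decide(features, model_predict, human_review):
        prediction, confidence = model_predict(features)
        if confidence < CONFIDENCE_THRESHOLD:
            final = human_review(features, prediction, confidence)  # reviewer can override
            source = "human_override" if final != prediction else "human_confirmed"
        else:
            final, source = prediction, "automated"
        # Log enough detail to monitor accuracy and report concerns (see B.9.4).
        return {"decision": final, "source": source, "model_confidence": confidence}

    print(decide({"income": 42000},
                 model_predict=lambda f: ("approve", 0.63),
                 human_review=lambda f, p, c: "refer"))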
Other information
Annex C provides examples of organizational objectives for managing risk, which
can be useful in determining the objectives for AI system use.
B.9.4 Intended use of the AI system
Control
The organization should ensure that the AI system is used according to the
intended uses of the AI system and its accompanying documentation.
Implementation guidance
The AI system should be deployed according to the instructions and other
documentation associated with the AI system (see B.8.2). The deployment can
require specific resources to support the deployment, including the need to ensure
that human oversight is applied as required (see B.9.3). It can be necessary that for
acceptable use of the AI system, the data that the AI system is used on aligns with
the documentation associated with the AI system to ensure that the AI system
performance is accurate.
The operation of the AI system should be monitored (see B.6.2.6). Where the
correct deployment of the AI system according to its associated instructions causes
concern regarding the impact to relevant interested parties or the organization’s
legal requirements, the organization should communicate its concerns to the
relevant personnel inside the organization as well as to any third-party suppliers
of the AI system.

The organization should keep event logs or other documentation related to the
deployment and operation of the AI system which can be used to demonstrate that
the AI system is being used as intended or to help with communicating concerns
related to the intended use of the AI system. The time period during which event
logs and other documentation are kept depends on the intended use of the AI
system, the organization’s data retention policies and relevant legal requirements
for data retention.
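As an illustration, the following minimal sketch (Python) writes structured event log entries for the deployment and operation of an AI system. The field names and example values are assumptions; retention of the resulting logs is governed by the organization's policies and legal requirements.

    # Minimal sketch: structured event logging for the deployment and operation
    # of an AI system. Field names and example values are illustrative only.
    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO, format="%(message)s")
    log = logging.getLogger("ai_system_events")

    def log_event(system, event, **details):
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system": system,
            "event": event,   # e.g. "deployment", "inference", "override", "maintenance"
            **details,
        }
        log.info(json.dumps(record))   # retention is governed by policy and legal requirements

    log_event("loan-triage-assistant", "inference",
              input_ref="case-10432", output="refer", model_version="1.4.2")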
B.10 Third-party and customer relationships
B.10.1 Objective
To ensure that the organization understands its responsibilities and remains
accountable, and that risks are appropriately apportioned when third parties are
involved at any stage of the AI system life cycle.
B.10.2 Allocating responsibilities
Control
The organization should ensure that responsibilities within its AI system life
cycle are allocated between the organization, its partners, suppliers, customers
and third parties.
Implementation guidance
In an AI system life cycle, responsibilities can be split between parties providing
data, parties providing algorithms and models, parties developing or using the AI
system and being accountable with regard to some or all interested parties. The
organization should document all parties intervening in the AI system life cycle
and their roles and determine their responsibilities.
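One simple way to document this allocation is sketched below (Python); the parties, stages and responsibilities shown are illustrative placeholders only.

    # Minimal sketch: documenting parties in the AI system life cycle and their
    # responsibilities. Party names and allocations are illustrative placeholders.
    life_cycle_responsibilities = [
        {"stage": "data provision", "party": "DataCo (supplier)",
         "responsibilities": ["data quality", "provenance records"]},
        {"stage": "model development", "party": "internal ML team",
         "responsibilities": ["design choices", "verification and validation"]},
        {"stage": "operation", "party": "business unit (deployer)",
         "responsibilities": ["monitoring", "human oversight", "incident reporting"]},
        {"stage": "use of outputs", "party": "customer",
         "responsibilities": ["use within documented limits"]},
    ]
    for entry in life_cycle_responsibilities:
        print(f"{entry['stage']}: {entry['party']} -> {', '.join(entry['responsibilities'])}")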
Where the organization supplies an AI system to a third party, the organization
should ensure that it takes a responsible approach to developing the AI system. See
the controls and guidance in B.6. The organization should be able to provide the
necessary documentation (see B.6.2.7 and B.8.2) for the AI system to relevant interested parties and to the third party that the organization is
supplying the AI system to.
When processed data includes PII, responsibilities are usually split between PII
processors and controllers. ISO/IEC 29100 provides further information on PII
controllers and PII processors. Where the privacy of PII is to be preserved, controls
such as those described in ISO/IEC 27701 should be considered. Based on the
organization’s and AI system’s data processing activities on PII and the
organization’s role in application and development of the AI system through their

life cycle, the organization can take on the role of a PII controller (or joint PII
controller), PII processor or both.
B.10.3 Suppliers
Control
The organization should establish a process to ensure that its usage of services,
products or materials provided by suppliers aligns with the organization’s
approach to the responsible development and use of AI systems.
Implementation guidance
Organizations developing or using an AI system can utilize suppliers in a number
of ways, from sourcing datasets, machine learning algorithms or models, or other
components of a system such as software libraries, to an entire AI system itself for
use on its own or as part of another product (e.g. a vehicle).
Organizations should consider different types of suppliers, what they supply, and
the varying level of risk this can pose to the system and organization as a whole in
determining the selection of suppliers, the requirements placed on those
suppliers, and the levels of ongoing monitoring and evaluation needed for the
suppliers.
Organizations should document how the AI system and AI system components are
integrated into AI systems developed or used by the organization.
Where the organization considers that the AI system or AI system components
from a supplier do not perform as intended or can result in impacts to individuals
or groups of individuals, or both, and societies that are not aligned with the
responsible approach to AI systems taken by the organization, the organization
should require the supplier to take corrective actions. The organization can decide
to work with the supplier to achieve this objective.
The organization should ensure that the supplier of an AI system delivers
appropriate and adequate documentation related to the AI system (see B.6.2.7 and
B.8.2).
B.10.4 Customers
Control
The organization should ensure that its responsible approach to the development
and use of AI systems considers its customers' expectations and needs.
Implementation guidance
The organization should understand customer expectations and needs when it is
supplying a product or service related to an AI system (i.e. when it is itself a

supplier). These can come in the form of requirements for the product or service
itself during a design or engineering phase, or in the form of contractual
requirements or general usage agreements. One organization can have many
different types of customer relationships, and these can all have different needs
and expectations.
The organization should particularly understand the complex nature of supplier
and customer relationships and understand where responsibility lies with the
provider of the AI system and where it lies with the customer, while still meeting
needs and expectations.
For example, the organization can identify risks related to the use of its AI products
and services by the customer and can decide to treat the identified risks by giving
appropriate information to its customer, so that the customer can then treat the
corresponding risks.
As an example of appropriate information, when an AI system is valid for a certain
domain of use, the limits of the domain should be communicated to the customer.
See B.6.2.7 and B.8.2.

ANNEX C
(informative)
Potential AI-related organizational objectives and risk sources

C.1 General
This annex outlines potential organizational objectives, risk sources and
descriptions that can be considered by the organization when managing risks. This
annex is not intended to be exhaustive or applicable for every organization. The
organization should determine the objectives and risk sources that are relevant.
ISO/IEC 23894 provides more detailed information on these objectives and risk
sources, and their relationship to risk management. Evaluation of AI systems,
initially, regularly and when warranted, provides evidence that an AI system is
being assessed against organizational objectives.
C.2 Objectives
C.2.1 Accountability
The use of AI can change existing accountability frameworks. Where previously
persons would be held accountable for their actions, their actions can now be
supported by or based on the use of an AI system.
C.2.2 AI expertise
A selection of dedicated specialists with interdisciplinary skill sets and expertise
in assessing, developing and deploying AI systems is needed.
C.2.3 Availability and quality of training and test data
AI systems based on ML need training, validation and test data in order to train
and verify the systems for the intended behaviour.
C.2.4 Environmental impact
The use of AI can have positive and negative impacts on the environment.
C.2.5 Fairness
The inappropriate application of AI systems for automated decision-making can be
unfair to specific persons or groups of persons.
C.2.6 Maintainability
Maintainability is related to the ability of the organization to handle modifications
of the AI system in order to correct defects or adjust to new requirements.

C.2.7 Privacy
The misuse or disclosure of personal and sensitive data (e.g. health records) can
have harmful effects on data subjects.
C.2.8 Robustness
In AI, robustness properties demonstrate the ability (or inability) of the system to
have comparable performance on new data as on the data on which it was trained
or the data of typical operations.
C.2.9 Safety
Safety relates to the expectation that a system does not, under defined conditions,
lead to a state in which human life, health, property or the environment is
endangered.
C.2.10 Security
In the context of AI and in particular with regard to AI systems based on ML
approaches, new security issues should be considered beyond classical
information and system security concerns.
C.2.11 Transparency and explainability
Transparency relates both to characteristics of an organization operating AI
systems and to those systems themselves. Explainability relates to explanations of
important factors influencing the AI system results that are provided to interested
parties in a way understandable to humans.
C.3 Risk sources
C.3.1 Complexity of environment
When AI systems operate in complex environments, where the range of situations is broad, there can be uncertainty about performance and therefore a source of risk (e.g. the complex environment of autonomous driving).
C.3.2 Lack of transparency and explainability
The inability to provide appropriate information to interested parties can be a
source of risk (i.e. in terms of trustworthiness and accountability of the
organization).
C.3.3 Level of automation
The level of automation can have an impact on various areas of concerns, such as
safety, fairness or security.
C.3.4 Risk sources related to machine learning

The quality of data used for ML and the process used to collect data can be sources
of risk, as they can impact objectives such as safety and robustness (e.g. due to
issues in data quality or data poisoning).
C.3.5 System hardware issues
Risk sources related to hardware include hardware errors based on defective
components or transferring trained ML models between different systems.
C.3.6 System life cycle issues
Sources of risk can appear over the entire AI system life cycle (e.g. flaws in design,
inadequate deployment, lack of maintenance, issues with decommissioning).
C.3.7 Technology readiness
Risk sources can be related to less mature technology due to unknown factors (e.g. system limitations and boundary conditions, performance drift), but also to more mature technology due to technology complacency.

ANNEX D
(informative)
Use of the AI management system across domains or sectors

D.1 General
This management system is applicable to any organization developing, providing
or using products or services that utilize an AI system. Therefore, it is potentially applicable to a great variety of products and services, in different sectors, which are subject to obligations, good practices, expectations or contractual commitments towards interested parties. Examples of sectors are:
- health;
- defence;
- transport;
- finance;
- employment;
- energy.
Various organizational objectives (see Annex C for possible objectives) can be
considered for the responsible development and use of an AI system. This
document provides requirements and guidance from an AI technology specific
view. For several of the potential objectives, generic or sector-specific management
system standards exist. These management system standards usually consider the objective from a technology neutral point of view, while the AI management system provides AI technology specific considerations.
AI systems consist not only of components using AI technology, but can use a
variety of technologies and components. Responsible development and use of an
AI system therefore requires taking into account not only AI-specific
considerations, but also the system as a whole with all the technologies and
components that are used. Even for the AI technology specific part, other aspects
besides AI specific considerations should be taken into account. For example, as AI
is an information processing technology, information security applies generally to
it. Objectives such as safety, security, privacy and environmental impact should be
managed holistically and not separately for AI and the other components of the
system. Integration of the AI management system with generic or sector-specific
management system standards for relevant topics is therefore essential for
responsible development and use of an AI system.

When providing or using AI systems, the organization can have objectives or
obligations related to aspects which are topics of other management system
standards. These can include, for example, security, privacy and quality, which are covered in ISO/IEC 27001, ISO/IEC 27701 and ISO 9001 respectively.
When providing, using or developing AI systems, potentially relevant generic management system standards include, but are not limited to:
- ISO/IEC 27001: In most contexts, security is key to achieving the objectives
of the organization with the AI system. The way an organization pursues
security objectives depends on its context and its own policies. If an
organization identifies the need to implement an AI management system
and to address security objectives in a similar thorough and systematic way,
it can implement an information security management system in conformity
with ISO/IEC 27001. Given that both ISO/IEC 27001 and the AI management
systems use the high-level structure, their integrated use is facilitated and
of great benefit for the organization. In this case, the way to implement
controls which (partly) relate to information security in this document (see
B.6.1.2) can be integrated with the organization’s implementation of
ISO/IEC 27001.
- ISO/IEC 27701: In many contexts and application domains, PII is processed by AI systems. The organization can then comply with the applicable obligations for privacy and with its own policies and objectives.
Similarly, as for ISO/IEC 27001, the organization can benefit from the
integration of ISO/IEC 27701 with the AI management system. Privacy-
related objectives and controls of the AI management system (see B.2.3 and
B.5.4) can be integrated with the organization’s implementation of ISO/IEC
27701.
- ISO 9001: For many organizations, conformity to ISO 9001 is a key sign that
they are customer oriented and genuinely concerned about internal
effectiveness. Independent conformity assessment to ISO 9001 facilitates
business across organizations and inspires customer confidence in products
or services. The level of customer’s confidence in an organization or AI
system can be highly reinforced when an AI management system is
implemented jointly with ISO 9001 when AI technologies are involved. The
AI management system can be complementary to the ISO 9001
requirements (risk management, software development, supply chain
coherence, etc.) in helping the organization meet its objectives.
Besides the generic management system standards mentioned above, an AI
management system can also be used jointly with a management system dedicated
to a sector. For example, both ISO 22000 and an AI management system are

relevant for an AI system that is used for food production, preparation and
logistics. Another example is ISO 13485. The implementation of an AI management
system can support requirements related to medical device software in ISO 13485
or requirements from other International Standards from the medical sector such
as IEC 62304.

BIBLIOGRAPHY
[1] ISO 8000-2, Data quality — Part 2: Vocabulary
[2] ISO 9001, Quality management systems — Requirements
[3] ISO 9241-210, Ergonomics of human-system interaction — Part 210:
Human-centred design for interactive systems
[4] ISO 13485, Medical devices — Quality management systems —
Requirements for regulatory purposes
[5] ISO 22000, Food safety management systems — Requirements for any
organization in the food chain
[6] IEC 62304, Medical device software — Software life cycle processes
[7] ISO/IEC Guide 51, Safety aspects — Guidelines for their inclusion in
standards
[8] ISO/IEC TS 4213, Information technology — Artificial intelligence —
Assessment of machine learning classification performance
[9] ISO/IEC 5259 (all parts), Data quality for analytics and machine learning
(ML)
[10] ISO/IEC 5338, Information technology — Artificial intelligence — AI system
life cycle process
[11] ISO/IEC 17065, Conformity assessment — Requirements for bodies
certifying products, processes and services
[12] ISO/IEC 19944-1, Cloud computing and distributed platforms — Data flow,
data categories and data use — Part 1: Fundamentals
[13] ISO/IEC 23053, Framework for Artificial Intelligence (AI) Systems Using
Machine Learning (ML)
[14] ISO/IEC 23894, Information technology — Artificial intelligence —
Guidance on risk management
[15] ISO/IEC TR 24027, Information technology — Artificial intelligence (AI) —
Bias in AI systems and AI aided decision making
[16] ISO/IEC TR 24029-1, Artificial Intelligence (AI) — Assessment of the
robustness of neural networks — Part 1: Overview
[17] ISO/IEC TR 24368, Information technology — Artificial intelligence —
Overview of ethical and societal concerns

[18] ISO/IEC 25024, Systems and software engineering — Systems and software
Quality Requirements and Evaluation (SQuaRE) — Measurement of data quality
[19] ISO/IEC 25059, Software engineering — Systems and software Quality
Requirements and Evaluation (SQuaRE) — Quality model for AI systems
[20] ISO/IEC 27000:2018, Information technology — Security techniques —
Information security management systems — Overview and vocabulary
[21] ISO/IEC 27701, Security techniques — Extension to ISO/IEC 27001 and
ISO/IEC 27002 for privacy information management — Requirements and
guidelines
[22] ISO/IEC 27001, Information security, cybersecurity and privacy protection
— Information security management systems — Requirements
[23] ISO/IEC 29100, Information technology — Security techniques — Privacy
framework
[24] ISO 31000:2018, Risk management — Guidelines
[25] ISO 37002, Whistleblowing management systems — Guidelines
[26] ISO/IEC 38500:2015, Information technology — Governance of IT for the
organization
[27] ISO/IEC 38507, Information technology — Governance of IT — Governance
implications of the use of artificial intelligence by organizations
[28] Data Documentation Initiative (DDI) Lifecycle 3.3, 2020-04-15. DDI Alliance. [viewed 2022-02-19]. Available at: https://ddialliance.org/Specification/DDI-Lifecycle/3.3/
[29] NIST AI Risk Management Framework (AI RMF 1.0), 2023-01-26. National Institute of Standards and Technology (NIST). [viewed 2023-04-17]. Available at: https://www.nist.gov/itl/ai-risk-management-framework

Scenario-Based MCQ for Certified ISO 42001:2023 Lead Auditor (BOK)

1. Scenario:

You are an auditor for an organization implementing ISO 42001:2023 for AI management systems. During your audit, you discover that the organization's AI
system frequently uses personal data to improve its algorithms. However, you
notice that the organization lacks a documented process for data quality
management and does not perform regular data quality assessments.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Ignore the issue since the AI system is performing well and achieving its
intended results.

B. Recommend the organization to establish a data quality management process and perform regular data quality assessments.

C. Advise the organization to discontinue using personal data in their AI systems.

D. Suggest that the organization outsource its data quality management to a third-party vendor without any further oversight.

Answer Explanation:

Correct Answer: B. Recommend the organization to establish a data quality management process and perform regular data quality assessments.

Explanation:

ISO 42001:2023 emphasizes the importance of managing data quality as part of an AI management system. Ensuring that data used in AI systems is of high quality is
crucial for the reliability and trustworthiness of AI outcomes. A documented
process for data quality management and regular assessments can help identify
and mitigate risks associated with poor data quality, ensuring the AI system
continues to function effectively and responsibly.

Incorrect Answers:

● A. Ignore the issue since the AI system is performing well and achieving
its intended results.

○ Ignoring the issue is not compliant with ISO 42001:2023.
Performance does not negate the need for proper data quality
management, which is essential for maintaining long-term reliability
and compliance.
● C. Advise the organization to discontinue using personal data in their
AI systems.
○ While this could mitigate some risks, it is not a practical or
comprehensive solution. Personal data is often critical for AI systems,
and discontinuing its use could impair the system’s functionality.
Proper data quality management is the correct approach.
● D. Suggest that the organization outsource its data quality management to a third-party vendor without any further oversight.
○ Outsourcing can be part of a solution, but without proper oversight,
it does not ensure compliance with ISO 42001:2023. The
organization must maintain accountability and ensure the third party
adheres to the necessary standards and practices.

2. Scenario:

During an internal audit of your organization's AI management system, you find that the organization has not conducted any AI system impact assessments as
required by ISO 42001:2023. The AI systems in use include automated decision-
making tools that significantly affect customer creditworthiness assessments.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Conduct an AI system impact assessment immediately and document the results.

B. Ignore the requirement since the AI system has not had any reported issues.

C. Discontinue the use of AI systems for customer creditworthiness assessments.

D. Perform a cost-benefit analysis to determine if AI system impact assessments are financially viable.

Answer Explanation:

Correct Answer: A. Conduct an AI system impact assessment immediately
and document the results.

Explanation:

ISO 42001:2023 mandates that organizations conduct AI system impact assessments to identify, evaluate, and address potential consequences of AI
systems on individuals and societies. This is especially crucial for systems that
significantly impact individuals, such as those used for creditworthiness
assessments. Conducting and documenting the impact assessment ensures
compliance and helps identify any risks or negative impacts that need to be
mitigated.

Incorrect Answers:

● B. Ignore the requirement since the AI system has not had any reported
issues.
○ Ignoring the requirement is non-compliant with ISO 42001:2023.
The absence of reported issues does not eliminate the need for an
impact assessment, which is a proactive measure to ensure
responsible AI use.
● C. Discontinue the use of AI systems for customer creditworthiness
assessments.
○ Discontinuing the use of AI systems is not a practical solution.
Instead, ensuring compliance through proper impact assessments is
the correct approach to responsibly manage AI systems.
● D. Perform a cost-benefit analysis to determine if AI system impact
assessments are financially viable.
○ Financial viability should not determine compliance with ISO
42001:2023. Impact assessments are a mandatory requirement to
ensure the responsible and ethical use of AI systems.

3. Scenario:

You are conducting an audit for an organization that has recently implemented an
AI system for automated hiring processes. During your review, you find that the
organization has not established clear roles and responsibilities for managing the
AI system, leading to confusion and inefficiencies.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Allow the current process to continue as it is since the AI system is functioning correctly.

B. Recommend that the organization define and allocate roles and responsibilities
specific to the AI system.

C. Suggest hiring external consultants to manage the AI system without internal involvement.

D. Advise the organization to disable the AI system until roles and responsibilities
are clearly defined.

Answer Explanation:

Correct Answer: B. Recommend that the organization define and allocate roles and responsibilities specific to the AI system.

Explanation:

ISO 42001:2023 requires organizations to establish clear roles and responsibilities to ensure accountability and effective management of AI systems. Defining and
allocating roles specific to the AI system ensures that responsibilities are clear,
which helps in maintaining the system’s performance, addressing issues promptly,
and ensuring compliance with organizational policies.

Incorrect Answers:

● A. Allow the current process to continue as it is since the AI system is functioning correctly.
○ Allowing the current process to continue without defining roles and
responsibilities is non-compliant with ISO 42001:2023. Functionality
does not replace the need for proper management and accountability
structures.
● C. Suggest hiring external consultants to manage the AI system without
internal involvement.
○ While external consultants can provide expertise, completely
outsourcing without internal involvement does not ensure

compliance or accountability. Internal roles and responsibilities must
still be defined and managed.
● D. Advise the organization to disable the AI system until roles and
responsibilities are clearly defined.
○ Disabling the system is an extreme measure. Instead, the organization
should promptly define and allocate roles to ensure continued use
and compliance with ISO 42001:2023.

4. Scenario:

While auditing an organization's AI management system, you observe that the organization has not integrated the AI management system requirements into
their overall business processes. This lack of integration is causing misalignment
and inefficiencies in managing AI-related risks and objectives.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Suggest that the organization handle AI management as a separate, standalone process without integrating it into business processes.

B. Recommend that the organization integrate AI management system requirements into their overall business processes.

C. Advise the organization to focus solely on AI system performance and ignore integration with business processes.

D. Propose that the organization conduct a cost-benefit analysis to decide whether to integrate AI management system requirements.

Answer Explanation:

Correct Answer: B. Recommend that the organization integrate AI management system requirements into their overall business processes.

Explanation:

ISO 42001:2023 requires that AI management system requirements be integrated into the organization's overall business processes. This integration ensures that
AI-related risks and objectives are managed consistently and effectively across the
organization, promoting alignment and efficiency. Proper integration helps in

achieving strategic goals and maintaining compliance with AI management
standards.

Incorrect Answers:

● A. Suggest that the organization handle AI management as a separate, standalone process without integrating it into business processes.
○ Handling AI management as a standalone process contradicts ISO
42001:2023 requirements. Integration into overall business
processes is essential for comprehensive risk and objective
management.
● C. Advise the organization to focus solely on AI system performance
and ignore integration with business processes.
○ Focusing only on AI system performance while ignoring process
integration can lead to mismanagement and inefficiencies.
Comprehensive integration is necessary for overall effectiveness.
● D. Propose that the organization conduct a cost-benefit analysis to
decide whether to integrate AI management system requirements.
○ Integration of AI management system requirements is a mandatory
aspect of ISO 42001:2023 and should not be contingent on a cost-
benefit analysis. Compliance and strategic alignment are essential
regardless of costs.

5. Scenario:

During an audit of an organization's AI management system, you discover that the organization has implemented several AI systems. However, they have not
established a process for regular internal audits of these AI systems, which is
affecting their ability to ensure ongoing compliance and effectiveness.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend the organization to establish and maintain an internal audit program specifically for AI systems.

B. Advise the organization to conduct internal audits only when a problem with an
AI system is reported.

C. Suggest the organization hire an external auditor to conduct a one-time audit of
all AI systems.

D. Ignore the lack of an internal audit process since the AI systems are currently
functioning well.

Answer Explanation:

Correct Answer: A. Recommend the organization to establish and maintain an internal audit program specifically for AI systems.

Explanation:

ISO 42001:2023 requires organizations to conduct regular internal audits to ensure the AI management system's compliance and effectiveness. Establishing
and maintaining an internal audit program specific to AI systems allows the
organization to identify and address any issues proactively, ensuring continuous
improvement and adherence to the standards.

Incorrect Answers:

● B. Advise the organization to conduct internal audits only when a problem with an AI system is reported.
○ Conducting audits only when problems are reported is reactive and
does not align with ISO 42001:2023 requirements for regular
internal audits to ensure ongoing compliance and effectiveness.
● C. Suggest the organization hire an external auditor to conduct a one-
time audit of all AI systems.
○ While external audits can be beneficial, relying solely on a one-time
external audit does not meet the requirement for regular internal
audits, which are essential for continuous monitoring and
improvement.
● D. Ignore the lack of an internal audit process since the AI systems are
currently functioning well.
○ Ignoring the lack of an internal audit process is non-compliant with
ISO 42001:2023. Regular internal audits are necessary to ensure
ongoing compliance, effectiveness, and continuous improvement of
AI systems.

6. Scenario:

In your audit of an organization's AI management system, you find that the organization has not documented the provenance of the data used in their AI
systems. This lack of documentation makes it difficult to trace the origin of data
and assess its quality and suitability for AI applications.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization continue using the data without documenting
its provenance since it has been working fine so far.

B. Advise the organization to document the provenance of data used in AI systems and establish processes for maintaining this documentation.

C. Suggest the organization replace all current data with new data whose
provenance can be easily documented.

D. Ignore the issue of data provenance documentation and focus on other aspects
of the AI management system.

Answer Explanation:

Correct Answer: B. Advise the organization to document the provenance of data used in AI systems and establish processes for maintaining this
documentation.

Explanation:

ISO 42001:2023 emphasizes the importance of data quality and provenance in AI systems. Documenting the provenance of data is essential to ensure its quality,
traceability, and suitability for AI applications. Establishing and maintaining
documentation processes helps in assessing data quality and addressing any
issues related to data sources, thereby ensuring the reliability and compliance of
AI systems.

Incorrect Answers:

● A. Recommend that the organization continue using the data without documenting its provenance since it has been working fine so far.

○ Continuing to use data without documenting its provenance is non-
compliant with ISO 42001:2023 and poses risks to data quality and
traceability.
● C. Suggest the organization replace all current data with new data
whose provenance can be easily documented.
○ Replacing all current data is impractical and unnecessary.
Establishing processes to document the provenance of existing data
is a more effective and feasible solution.
● D. Ignore the issue of data provenance documentation and focus on
other aspects of the AI management system.
○ Ignoring data provenance documentation overlooks a critical
requirement of ISO 42001:2023. Ensuring data provenance is
essential for maintaining data quality and the integrity of AI systems.

7. Scenario:

While auditing an organization's AI management system, you discover that the organization has implemented an AI system that continuously learns and evolves.
However, they have not established a process for monitoring the AI system's
behavior over time, which is critical for ensuring that the system remains aligned
with its intended purpose.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization deactivate the continuous learning feature to avoid potential issues.

B. Advise the organization to establish and implement a process for ongoing monitoring and evaluation of the AI system's behavior.

C. Suggest that the organization only review the AI system's behavior annually to
save resources.

D. Ignore the need for monitoring as long as the AI system performs its tasks
correctly.

Answer Explanation:

Correct Answer: B. Advise the organization to establish and implement a
process for ongoing monitoring and evaluation of the AI system's behavior.

Explanation:

ISO 42001:2023 requires organizations to have processes in place for the ongoing
monitoring and evaluation of AI systems, especially those that continuously learn
and evolve. Continuous monitoring ensures that the AI system remains aligned
with its intended purpose, performs as expected, and does not introduce new risks
over time. Implementing such a process helps maintain compliance and addresses
any deviations promptly.

Incorrect Answers:

● A. Recommend that the organization deactivate the continuous learning feature to avoid potential issues.
○ Deactivating the continuous learning feature is not a practical
solution. Continuous learning is often essential for the AI system's
effectiveness. Proper monitoring is the correct approach to manage
potential issues.
● C. Suggest that the organization only review the AI system's behavior
annually to save resources.
○ Annual reviews are insufficient for continuously learning AI systems.
Ongoing monitoring is necessary to promptly identify and address
any deviations or risks that arise as the system evolves.
● D. Ignore the need for monitoring as long as the AI system performs its
tasks correctly.
○ Ignoring the need for monitoring is non-compliant with ISO
42001:2023. Ongoing monitoring is crucial to ensure the AI system
continues to perform correctly and remains aligned with its intended
purpose.

8. Scenario:

During an audit, you find that the organization uses an AI system for automated
decision-making in loan approvals. The organization has not established a process
for communicating incidents or system failures to the users affected by the AI
system's decisions.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization continue operating as usual since there have
been no major complaints from users.

B. Advise the organization to establish and document a process for communicating incidents or system failures to users affected by the AI system's decisions.

C. Suggest that the organization only communicate incidents internally and not to
the users to avoid unnecessary concerns.

D. Propose that the organization temporarily halt the use of the AI system until a
communication process is established.

Answer Explanation:

Correct Answer: B. Advise the organization to establish and document a process for communicating incidents or system failures to users affected by
the AI system's decisions.

Explanation:

ISO 42001:2023 requires organizations to have processes in place for effectively communicating incidents or system failures to relevant interested parties,
including users affected by the AI system's decisions. Establishing and
documenting such a process ensures transparency, maintains user trust, and
allows the organization to address any issues promptly and appropriately.

Incorrect Answers:

● A. Recommend that the organization continue operating as usual since there have been no major complaints from users.
○ Continuing without a communication process is non-compliant with
ISO 42001:2023. Proactive communication is necessary to address
any potential issues and maintain user trust.
● C. Suggest that the organization only communicate incidents internally
and not to the users to avoid unnecessary concerns.
○ Communicating only internally fails to provide transparency and
does not meet the requirements of ISO 42001:2023. Users affected by

the AI system's decisions should be informed about incidents or
failures.
● D. Propose that the organization temporarily halt the use of the AI
system until a communication process is established.
○ Temporarily halting the AI system may not be necessary. Instead, the
organization should establish and implement the required
communication process promptly to ensure compliance and
continued operation.

9. Scenario:

During your audit of an organization's AI management system, you observe that the organization uses several third-party AI components in its products. However,
there is no formal process in place to manage and review these third-party AI
components, leading to potential risks related to quality, security, and compliance.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization trust the third-party providers to manage the
quality and security of their AI components.

B. Advise the organization to establish a formal process for managing and reviewing third-party AI components, including conducting regular assessments
and audits.

C. Suggest that the organization replace all third-party AI components with internally developed ones to ensure control over quality and security.

D. Ignore the issue since the third-party AI components have not caused any
problems so far.

Answer Explanation:

Correct Answer: B. Advise the organization to establish a formal process for managing and reviewing third-party AI components, including conducting
regular assessments and audits.

Explanation:

ISO 42001:2023 requires organizations to manage and review third-party AI
components to ensure their quality, security, and compliance with organizational
policies and standards. Establishing a formal process for managing and reviewing
these components, including conducting regular assessments and audits, helps
mitigate risks and ensures the reliability and integrity of the AI systems.

Incorrect Answers:

● A. Recommend that the organization trust the third-party providers to
manage the quality and security of their AI components.
○ Relying solely on third-party providers without a formal
management and review process is non-compliant with ISO
42001:2023 and poses significant risks.
● C. Suggest that the organization replace all third-party AI components
with internally developed ones to ensure control over quality and
security.
○ Replacing all third-party components is impractical and unnecessary.
Proper management and review processes are sufficient to ensure
quality and security.
● D. Ignore the issue since the third-party AI components have not
caused any problems so far.
○ Ignoring the issue is non-compliant with ISO 42001:2023. Proactive
management and review are necessary to prevent potential problems
and ensure compliance.

10. Scenario:

During an audit of an organization’s AI management system, you find that the
organization has a well-documented AI policy and risk management process.
However, there is no evidence of training programs to ensure that employees
involved in AI development and deployment are aware of the AI policy and
understand their roles and responsibilities.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization hire external consultants to manage AI
development and deployment instead of training current employees.

B. Advise the organization to establish and implement regular training programs
for employees involved in AI development and deployment to ensure they are
aware of the AI policy and understand their roles and responsibilities.

C. Suggest that the organization distribute a memo detailing the AI policy and
employee responsibilities without any further training.

D. Ignore the lack of training programs since the organization has documented its
AI policy and risk management process.

Answer Explanation:

Correct Answer: B. Advise the organization to establish and implement
regular training programs for employees involved in AI development and
deployment to ensure they are aware of the AI policy and understand their
roles and responsibilities.

Explanation:

ISO 42001:2023 requires that organizations ensure their employees are
competent and aware of their roles and responsibilities related to AI systems.
Regular training programs help employees understand the AI policy, their specific
roles, and the importance of adhering to the established processes. This ensures
effective implementation and compliance with the AI management system.

Incorrect Answers:

● A. Recommend that the organization hire external consultants to
manage AI development and deployment instead of training current
employees.
○ Hiring external consultants does not address the need for internal
employee competence and awareness. Training current employees is
essential for maintaining internal knowledge and compliance.
● C. Suggest that the organization distribute a memo detailing the AI
policy and employee responsibilities without any further training.
○ Distributing a memo is insufficient for ensuring understanding and
competence. Regular training programs are necessary to ensure
thorough awareness and adherence to the AI policy.
● D. Ignore the lack of training programs since the organization has
documented its AI policy and risk management process.

○ Ignoring the lack of training programs is non-compliant with ISO
42001:2023. Proper training is essential to ensure employees
understand and can effectively implement the AI policy and risk
management processes.

11. Scenario:

During an audit of an organization's AI management system, you find that the
organization has established comprehensive AI risk assessment and treatment
processes. However, there is no documented procedure for handling
nonconformities related to AI systems, leading to unresolved issues and potential
compliance risks.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization ignore nonconformities if they are minor and
do not significantly impact the AI system’s performance.

B. Advise the organization to establish and document a procedure for handling
nonconformities related to AI systems, including corrective actions and preventive
measures.

C. Suggest that the organization rely on ad-hoc measures to address
nonconformities as they arise, without formal documentation.

D. Propose that the organization perform a complete overhaul of their AI systems
to eliminate any possibility of nonconformities.

Answer Explanation:

Correct Answer: B. Advise the organization to establish and document a
procedure for handling nonconformities related to AI systems, including
corrective actions and preventive measures.

Explanation:

ISO 42001:2023 requires organizations to have documented procedures for
handling nonconformities to ensure issues are promptly identified, addressed, and
prevented from recurring. Establishing and documenting such procedures ensures
a systematic approach to managing nonconformities, contributing to the
continuous improvement and reliability of AI systems.

Incorrect Answers:

● A. Recommend that the organization ignore nonconformities if they
are minor and do not significantly impact the AI system’s performance.
○ Ignoring nonconformities, even if minor, is non-compliant with ISO
42001:2023. All nonconformities should be addressed to ensure
overall system integrity and compliance.
● C. Suggest that the organization rely on ad-hoc measures to address
nonconformities as they arise, without formal documentation.
○ Ad-hoc measures lack consistency and reliability. Formal,
documented procedures are necessary to systematically address and
prevent nonconformities.
● D. Propose that the organization perform a complete overhaul of their
AI systems to eliminate any possibility of nonconformities.
○ Overhauling AI systems is an impractical and extreme solution.
Properly handling nonconformities through documented procedures
is the appropriate and feasible approach.

12. Scenario:

While auditing an organization's AI management system, you discover that the
organization has developed a robust AI system for automated medical diagnoses.
However, there is no documented evidence of an AI system impact assessment
being conducted, which is crucial given the sensitive nature and potential risks
associated with medical AI applications.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization discontinue the AI system until an impact
assessment is conducted.

B. Advise the organization to conduct an AI system impact assessment immediately
and document the results.

C. Suggest that the organization rely on the existing medical guidelines instead of
conducting an AI system impact assessment.

D. Ignore the lack of an impact assessment since the AI system is performing well
and providing accurate diagnoses.

Answer Explanation:

Correct Answer: B. Advise the organization to conduct an AI system impact
assessment immediately and document the results.

Explanation:

ISO 42001:2023 requires organizations to conduct AI system impact assessments
to identify, evaluate, and address potential impacts on individuals and society,
especially for sensitive applications like medical diagnoses. Conducting and
documenting an AI system impact assessment ensures that the organization
understands and mitigates any risks associated with the AI system, ensuring
compliance and responsible use.
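
For illustration only, a minimal sketch of how the results of such an impact assessment might be recorded in a structured, retrievable form is shown below; the record fields, class names, and example entries are hypothetical and are not prescribed by ISO 42001:2023.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class Impact:
    description: str   # potential impact on individuals or society
    severity: str      # e.g. "low", "medium", "high"
    mitigation: str    # planned or implemented mitigation

@dataclass
class ImpactAssessment:
    system_name: str
    assessed_on: date
    assessor: str
    impacts: list = field(default_factory=list)

    def add_impact(self, description, severity, mitigation):
        self.impacts.append(Impact(description, severity, mitigation))

# Hypothetical example entries for a medical-diagnosis AI system
assessment = ImpactAssessment("diagnosis-support-model", date(2024, 5, 1), "Clinical AI lead")
assessment.add_impact("Misdiagnosis of rare conditions", "high",
                      "Mandatory clinician review of low-confidence outputs")
assessment.add_impact("Unequal accuracy across patient groups", "medium",
                      "Quarterly subgroup performance review")
for i in assessment.impacts:
    print(f"[{i.severity}] {i.description} -> {i.mitigation}")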

Incorrect Answers:

● A. Recommend that the organization discontinue the AI system until an
impact assessment is conducted.
○ Discontinuing the AI system may be overly disruptive. Instead,
conducting the impact assessment promptly while continuing to
monitor the system's use can address compliance and risk
management needs.
● C. Suggest that the organization rely on the existing medical guidelines
instead of conducting an AI system impact assessment.
○ Existing medical guidelines do not replace the need for a specific AI
system impact assessment. The assessment is crucial for
understanding the AI system's unique risks and impacts.
● D. Ignore the lack of an impact assessment since the AI system is
performing well and providing accurate diagnoses.
○ Ignoring the lack of an impact assessment is non-compliant with ISO
42001:2023. Performance alone does not ensure that all potential
risks are identified.

13. Scenario:

During an audit, you find that the organization’s AI system used for customer
service interactions has a high error rate, resulting in frequent customer
complaints. Despite this, the organization has not reviewed or updated their AI
objectives and performance metrics since the system was implemented.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization ignore the high error rate since customer
complaints are not a major concern.

B. Advise the organization to review and update their AI objectives and
performance metrics to better align with current performance and customer
expectations.

C. Suggest that the organization continue using the existing AI objectives and
performance metrics without any changes.

D. Propose that the organization temporarily disable the AI system until all
customer complaints are resolved.

Answer Explanation:

Correct Answer: B. Advise the organization to review and update their AI
objectives and performance metrics to better align with current
performance and customer expectations.

Explanation:

ISO 42001:2023 requires organizations to regularly review and update AI
objectives and performance metrics to ensure they remain relevant and aligned
with the system's performance and stakeholder expectations. By updating these
objectives and metrics, the organization can better address the high error rate and
improve customer satisfaction, ensuring compliance and continuous
improvement.
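
As a purely illustrative sketch (the interaction records, field names, and 10% threshold are hypothetical, not values taken from the standard), an updated objective could be monitored automatically along these lines:

# Hypothetical log of AI customer-service interactions
interactions = [
    {"id": 1, "resolved_correctly": True},
    {"id": 2, "resolved_correctly": False},
    {"id": 3, "resolved_correctly": True},
    {"id": 4, "resolved_correctly": False},
]

TARGET_MAX_ERROR_RATE = 0.10  # documented objective: at most 10% erroneous responses

errors = sum(1 for i in interactions if not i["resolved_correctly"])
error_rate = errors / len(interactions)
print(f"Observed error rate: {error_rate:.0%} (target <= {TARGET_MAX_ERROR_RATE:.0%})")
if error_rate > TARGET_MAX_ERROR_RATE:
    print("Objective not met: trigger a review of AI objectives and performance metrics.")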

Incorrect Answers:

● A. Recommend that the organization ignore the high error rate since
customer complaints are not a major concern.
○ Ignoring the high error rate and customer complaints is non-
compliant with ISO 42001:2023. Addressing these issues through
updated objectives and metrics is essential for maintaining system
effectiveness and customer trust.
● C. Suggest that the organization continue using the existing AI
objectives and performance metrics without any changes.
○ Continuing with outdated objectives and metrics does not address
the high error rate or improve system performance. Regular reviews
and updates are necessary to ensure relevance and effectiveness.
● D. Propose that the organization temporarily disable the AI system
until all customer complaints are resolved.
○ Temporarily disabling the system is not a practical solution. Instead,
reviewing and updating the objectives and metrics can provide a
sustainable way to address and resolve the issues while maintaining
system operations.

14. Scenario:

During an audit of an organization's AI management system, you discover that the
organization has not established a process for regularly communicating with
interested parties about the AI system's performance and impact. This lack of
communication has led to misunderstandings and mistrust among stakeholders.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization continue without any formal communication
process since misunderstandings are minimal.

B. Advise the organization to establish and implement a regular communication
process to keep interested parties informed about the AI system's performance
and impact.

C. Suggest that the organization communicate only when there are significant
changes or issues with the AI system.

D. Propose that the organization hire a public relations firm to handle all
communications related to the AI system.

Answer Explanation:

Correct Answer: B. Advise the organization to establish and implement a
regular communication process to keep interested parties informed about
the AI system's performance and impact.

Explanation:

ISO 42001:2023 requires organizations to maintain open and regular
communication with interested parties regarding the performance and impact of
AI systems. Establishing a formal communication process helps build trust,
ensures transparency, and keeps stakeholders informed, which is crucial for
managing expectations and addressing concerns effectively.

Incorrect Answers:

● A. Recommend that the organization continue without any formal
communication process since misunderstandings are minimal.
○ Continuing without a formal communication process is non-
compliant with ISO 42001:2023. Regular communication is
necessary to maintain transparency and trust among stakeholders.
● C. Suggest that the organization communicate only when there are
significant changes or issues with the AI system.
○ Communicating only during significant changes or issues is
insufficient. Regular updates are essential to keep stakeholders
continuously informed and engaged.
● D. Propose that the organization hire a public relations firm to handle
all communications related to the AI system.
○ While a public relations firm can assist with communication, the
organization must still establish a formal process to ensure
consistent and ongoing communication with interested parties. This
process should be integrated into the overall AI management system.

15. Scenario:

During an audit of an organization's AI management system, you find that the
organization is using AI systems to process sensitive personal data. However, the
organization has not implemented any specific measures to ensure the privacy and
security of this data.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization ignore the need for privacy and security
measures since there have been no data breaches.

B. Advise the organization to implement specific measures to ensure the privacy
and security of sensitive personal data processed by their AI systems.

C. Suggest that the organization anonymize all personal data to avoid the need for
privacy and security measures.

D. Propose that the organization reduce the use of sensitive personal data in their
AI systems to minimize privacy and security concerns.

Answer Explanation:

Correct Answer: B. Advise the organization to implement specific measures
to ensure the privacy and security of sensitive personal data processed by
their AI systems.

Explanation:

ISO 42001:2023 requires organizations to implement measures to protect the
privacy and security of sensitive personal data processed by AI systems. This
includes ensuring data confidentiality, integrity, and availability, as well as
compliance with relevant data protection regulations. Implementing specific
privacy and security measures helps mitigate risks and ensures the responsible
and ethical use of AI.
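
One example of a technical control (illustrative only; the key handling, field names, and values are hypothetical, and pseudonymisation alone does not satisfy the standard or data protection law) is replacing direct identifiers with keyed hashes before the data reaches the AI pipeline:

import hashlib
import hmac

SECRET_KEY = b"replace-with-a-key-from-a-secrets-manager"  # hypothetical key management

def pseudonymise(identifier: str) -> str:
    """Return a stable pseudonym for a direct identifier such as an email address."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane.doe@example.com", "purchase_total": 120.50}
safe_record = {"customer_ref": pseudonymise(record["email"]),
               "purchase_total": record["purchase_total"]}
print(safe_record)  # the AI system sees the pseudonym, not the raw identifier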

Incorrect Answers:

● A. Recommend that the organization ignore the need for privacy and
security measures since there have been no data breaches.
○ Ignoring privacy and security measures is non-compliant with ISO
42001:2023 and poses significant risks. Preventative measures are
essential to protect sensitive data.
● C. Suggest that the organization anonymize all personal data to avoid
the need for privacy and security measures.
○ While anonymizing data can help, it is not always practical or
sufficient. Specific privacy and security measures are necessary to
protect data that cannot be anonymized and to ensure compliance.
● D. Propose that the organization reduce the use of sensitive personal
data in their AI systems to minimize privacy and security concerns.
○ Reducing the use of sensitive personal data is not a comprehensive
solution. Proper measures must be implemented to ensure privacy
and security for any sensitive data used.

16. Scenario:

During an audit, you discover that the organization’s AI system used for employee
performance evaluations has not been tested for biases. This has led to several
complaints from employees about unfair evaluations.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization ignore the complaints and continue using the
AI system as it is.

B. Advise the organization to conduct a thorough bias assessment of the AI system
and implement corrective measures based on the findings.

C. Suggest that the organization manually review all AI-generated evaluations to
check for fairness.

D. Propose that the organization stop using AI for employee performance
evaluations altogether.

Answer Explanation:

Correct Answer: B. Advise the organization to conduct a thorough bias
assessment of the AI system and implement corrective measures based on
the findings.

Explanation:

ISO 42001:2023 emphasizes the importance of ensuring AI systems are fair and
unbiased. Conducting a thorough bias assessment helps identify and address any
potential biases in the AI system, ensuring that evaluations are fair and compliant
with the standard. Implementing corrective measures based on the assessment
findings is crucial for maintaining trust and fairness in the AI system's outcomes.
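
For illustration only, one very simple bias check is to compare positive-outcome rates across groups; the records and interpretation below are hypothetical, and a real assessment would use richer fairness metrics and statistical testing:

# Hypothetical sample of AI-generated evaluation outcomes
evaluations = [
    {"group": "A", "recommended": True},
    {"group": "A", "recommended": False},
    {"group": "A", "recommended": True},
    {"group": "B", "recommended": False},
    {"group": "B", "recommended": False},
    {"group": "B", "recommended": True},
]

rates = {}
for group in {e["group"] for e in evaluations}:
    members = [e for e in evaluations if e["group"] == group]
    rates[group] = sum(e["recommended"] for e in members) / len(members)

print("Positive-outcome rate by group:", rates)
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate impact ratio: {ratio:.2f} (values well below 1.0 warrant investigation)")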

Incorrect Answers:

● A. Recommend that the organization ignore the complaints and
continue using the AI system as it is.
○ Ignoring the complaints and continuing with the biased system is
non-compliant with ISO 42001:2023 and can lead to further
unfairness and dissatisfaction among employees.
● C. Suggest that the organization manually review all AI-generated
evaluations to check for fairness.
○ While manual reviews can help in the short term, they are not a
sustainable solution. A thorough bias assessment and corrective
measures are necessary for long-term fairness and compliance.
● D. Propose that the organization stop using AI for employee
performance evaluations altogether.
○ Stopping the use of AI for performance evaluations is not necessary if
the system can be corrected for biases. Ensuring the AI system is fair
and unbiased through proper assessments and adjustments is the
appropriate approach.

17. Scenario:

During an audit of an organization's AI management system, you find that the
organization has not established clear procedures for documenting and retaining
evidence of the AI system's compliance with legal and regulatory requirements.
This has led to difficulties in demonstrating compliance during external reviews.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization focus on improving the AI system's
performance instead of documenting compliance.

B. Advise the organization to establish clear procedures for documenting and
retaining evidence of the AI system's compliance with legal and regulatory
requirements.

C. Suggest that the organization outsource compliance documentation to a third-
party service provider.

D. Propose that the organization document compliance only when requested by
external reviewers.

Answer Explanation:

Correct Answer: B. Advise the organization to establish clear procedures for
documenting and retaining evidence of the AI system's compliance with legal
and regulatory requirements.

Explanation:

ISO 42001:2023 requires organizations to have clear procedures for documenting
and retaining evidence of compliance with legal and regulatory requirements. This
documentation is essential for demonstrating compliance during external reviews
and ensuring that the AI system operates within the legal framework. Establishing
these procedures helps maintain transparency, accountability, and compliance.

Incorrect Answers:

● A. Recommend that the organization focus on improving the AI
system's performance instead of documenting compliance.
○ Focusing solely on performance without ensuring compliance
documentation is non-compliant with ISO 42001:2023. Both
performance and compliance are crucial.
● C. Suggest that the organization outsource compliance documentation
to a third-party service provider.
○ While outsourcing can be helpful, the organization must still have
clear internal procedures to ensure comprehensive and accurate
documentation of compliance.
● D. Propose that the organization document compliance only when
requested by external reviewers.
○ Documenting compliance only upon request is insufficient. Regular
and proactive documentation is necessary to ensure continuous
compliance and readiness for any review.

18. Scenario:

During an audit, you find that the organization has an AI policy in place, but the
policy does not include any commitments to continual improvement or integration
with the organization's strategic direction. This oversight has led to stagnation in
the development and optimization of AI systems.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization continue with the existing AI policy since it
addresses the basic requirements.

B. Advise the organization to revise their AI policy to include commitments to
continual improvement and alignment with the organization's strategic direction.

C. Suggest that the organization establish a separate policy for continual
improvement of AI systems without integrating it into the main AI policy.

D. Propose that the organization focus on immediate performance improvements
rather than policy changes.

Answer Explanation:

Correct Answer: B. Advise the organization to revise their AI policy to include
commitments to continual improvement and alignment with the
organization's strategic direction.

Explanation:

ISO 42001:2023 requires that the AI policy include commitments to continual
improvement and be aligned with the organization's strategic direction. This
ensures that the AI systems evolve and improve over time, staying relevant and
effective in meeting organizational goals. Revising the AI policy to incorporate
these elements is essential for compliance and long-term success.

Incorrect Answers:

● A. Recommend that the organization continue with the existing AI
policy since it addresses the basic requirements.

○ Continuing with a policy that lacks commitments to continual
improvement and strategic alignment is non-compliant with ISO
42001:2023 and hinders the organization's progress.
● C. Suggest that the organization establish a separate policy for
continual improvement of AI systems without integrating it into the
main AI policy.
○ A separate policy can create fragmentation and inconsistency.
Integrating continual improvement into the main AI policy ensures a
cohesive and comprehensive approach.
● D. Propose that the organization focus on immediate performance
improvements rather than policy changes.
○ Immediate performance improvements are important, but without
policy changes, they may not be sustainable. A revised AI policy
ensures ongoing and systematic improvement.

19. Scenario:

During an audit, you discover that the organization has defined roles and
responsibilities for AI management. However, there is no documented evidence
that these roles include accountability for ensuring the ethical use of AI systems,
leading to potential ethical concerns.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization ignore ethical considerations since they are
not explicitly required by the standard.

B. Advise the organization to update the roles and responsibilities to include
accountability for ensuring the ethical use of AI systems and document this change.

C. Suggest that the organization focus only on technical performance and leave
ethical considerations to external regulators.

D. Propose that the organization establish a separate committee to oversee ethical
issues without involving the current AI management roles.

Answer Explanation:

Correct Answer: B. Advise the organization to update the roles and
responsibilities to include accountability for ensuring the ethical use of AI
systems and document this change.

Explanation:

ISO 42001:2023 emphasizes the importance of ethical considerations in the use of
AI systems. Including accountability for ethical use in the defined roles and
responsibilities ensures that ethical concerns are actively managed and addressed.
Documenting these changes ensures transparency and accountability within the
organization.

Incorrect Answers:

● A. Recommend that the organization ignore ethical considerations
since they are not explicitly required by the standard.
○ Ignoring ethical considerations is non-compliant with the spirit of
ISO 42001:2023, which emphasizes responsible and ethical AI use.
● C. Suggest that the organization focus only on technical performance
and leave ethical considerations to external regulators.
○ Ethical considerations are an integral part of AI management and
should not be left solely to external regulators. Internal accountability
is crucial.
● D. Propose that the organization establish a separate committee to
oversee ethical issues without involving the current AI management
roles.
○ While a separate committee can help, integrating ethical
accountability into the current AI management roles ensures
comprehensive and consistent oversight.

20. Scenario:

During an audit, you find that the organization has implemented several AI
systems but has not established a process for evaluating the impact of AI system
failures on their overall business operations. This lack of evaluation has led to
significant disruptions whenever an AI system fails.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization ignore AI system failures and focus on other
business priorities.

B. Advise the organization to establish and implement a process for evaluating the
impact of AI system failures on business operations and document the findings.

C. Suggest that the organization manually address AI system failures as they occur
without any formal evaluation process.

D. Propose that the organization replace all AI systems with manual processes to
avoid system failures.

Answer Explanation:

Correct Answer: B. Advise the organization to establish and implement a
process for evaluating the impact of AI system failures on business
operations and document the findings.

Explanation:

ISO 42001:2023 requires organizations to have processes in place for evaluating
the impact of AI system failures on their business operations. This ensures that the
organization can identify and mitigate potential disruptions, maintaining business
continuity and minimizing risks. Documenting these evaluations helps in
understanding the root causes and implementing corrective measures effectively.

Incorrect Answers:

● A. Recommend that the organization ignore AI system failures and
focus on other business priorities.
○ Ignoring AI system failures is non-compliant with ISO 42001:2023
and can lead to significant business disruptions and risks.
● C. Suggest that the organization manually address AI system failures as
they occur without any formal evaluation process.
○ Manually addressing failures without a formal process is inefficient
and does not provide a systematic approach to identify and mitigate
risks.
● D. Propose that the organization replace all AI systems with manual
processes to avoid system failures.
○ Replacing AI systems with manual processes is impractical and
counterproductive. Implementing a proper evaluation process is a
more effective solution.

21. Scenario:

During an audit, you find that the organization has been using AI systems for
automated customer service responses. However, the organization has not defined
or documented any criteria for measuring the effectiveness and efficiency of these
AI systems.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization discontinue the use of AI systems for
customer service until effectiveness and efficiency criteria are defined.

B. Advise the organization to define and document specific criteria for measuring
the effectiveness and efficiency of their AI systems used in customer service.

C. Suggest that the organization continue using the AI systems without any defined
criteria, as long as there are no customer complaints.

D. Propose that the organization focus on expanding their AI capabilities rather
than defining criteria for current systems.

Answer Explanation:

Correct Answer: B. Advise the organization to define and document specific
criteria for measuring the effectiveness and efficiency of their AI systems
used in customer service.

Explanation:

ISO 42001:2023 requires organizations to establish criteria for measuring the
effectiveness and efficiency of AI systems. Defining and documenting these criteria
ensures that the organization can systematically evaluate and improve their AI
systems, ensuring they meet desired performance standards and provide value to
the organization and its customers.
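
To make this concrete, the sketch below shows one way documented effectiveness and efficiency criteria might be expressed and checked; the metric names, targets, and observed values are hypothetical, not requirements of the standard:

# Documented criteria: each metric has a target and whether higher or lower is better
criteria = {
    "first_contact_resolution_rate": {"target": 0.80, "higher_is_better": True},   # effectiveness
    "escalation_to_human_rate":      {"target": 0.15, "higher_is_better": False},  # effectiveness
    "avg_response_time_seconds":     {"target": 5.0,  "higher_is_better": False},  # efficiency
}

observed = {
    "first_contact_resolution_rate": 0.76,
    "escalation_to_human_rate": 0.18,
    "avg_response_time_seconds": 4.2,
}

for name, rule in criteria.items():
    value = observed[name]
    met = value >= rule["target"] if rule["higher_is_better"] else value <= rule["target"]
    print(f"{name}: observed={value} target={rule['target']} -> {'met' if met else 'NOT met'}")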

Incorrect Answers:

● A. Recommend that the organization discontinue the use of AI systems
for customer service until effectiveness and efficiency criteria are
defined.

○ Discontinuing the use of AI systems may be unnecessarily disruptive.
Defining and documenting criteria while continuing to use the
systems is a more practical approach.
● C. Suggest that the organization continue using the AI systems without
any defined criteria, as long as there are no customer complaints.
○ Using AI systems without defined criteria does not ensure
compliance with ISO 42001:2023 and can lead to unmanaged
performance and efficiency issues.
● D. Propose that the organization focus on expanding their AI
capabilities rather than defining criteria for current systems.
○ Expanding AI capabilities without ensuring current systems are
effective and efficient is not compliant with ISO 42001:2023. Proper
evaluation criteria are essential for responsible AI management.

22. Scenario:

During an audit of an organization's AI management system, you discover that
while the organization has a well-documented AI policy, they have not established
a clear process for handling changes to AI systems. This has resulted in several
unauthorized changes that negatively impacted the system's performance.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization revert all changes made to the AI systems
until a process is established.

B. Advise the organization to establish and document a formal process for
managing changes to AI systems, including authorization and review procedures.

C. Suggest that the organization allow changes to AI systems to be made informally
as long as they are communicated to the team afterward.

D. Propose that the organization only implement changes that are deemed critical,
without the need for a formal process.

Answer Explanation:

Correct Answer: B. Advise the organization to establish and document a
formal process for managing changes to AI systems, including authorization
and review procedures.

Explanation:

ISO 42001:2023 requires organizations to have formal processes in place for
managing changes to AI systems. This includes documenting the procedures for
authorization, review, and implementation of changes to ensure that all
modifications are properly controlled and do not negatively impact system
performance. Establishing such a process helps maintain system integrity and
compliance.
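
As a minimal, purely illustrative sketch of the kind of record such a change process might keep (the field names and example change are hypothetical):

from dataclasses import dataclass
from typing import Optional

@dataclass
class AIChangeRequest:
    change_id: str
    system: str
    description: str
    requested_by: str
    authorized_by: Optional[str] = None   # must be completed before implementation
    review_outcome: Optional[str] = None  # recorded at post-implementation review

    def ready_to_implement(self) -> bool:
        return self.authorized_by is not None

change = AIChangeRequest("CR-042", "recommendation-model", "Retrain with Q2 data", "ML engineer")
print("May implement?", change.ready_to_implement())   # False: not yet authorized
change.authorized_by = "AI system owner"
print("May implement?", change.ready_to_implement())   # True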

Incorrect Answers:

● A. Recommend that the organization revert all changes made to the AI
systems until a process is established.
○ Reverting all changes may be disruptive and impractical. Instead,
establishing a formal process to manage future changes is a more
balanced approach.
● C. Suggest that the organization allow changes to AI systems to be made
informally as long as they are communicated to the team afterward.
○ Informal change management lacks control and accountability,
leading to potential risks and non-compliance with ISO 42001:2023.
● D. Propose that the organization only implement changes that are
deemed critical, without the need for a formal process.
○ Even critical changes need to be managed formally to ensure they are
properly evaluated and implemented. A formal process is essential
for all changes to maintain compliance and system integrity.

23. Scenario:

During an audit, you discover that the organization uses an AI system for fraud
detection in financial transactions. However, the organization has not documented
any procedures for periodically reviewing the AI system's effectiveness and
accuracy, leading to potential undetected fraud or false positives.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization ignore periodic reviews since the AI system
is currently detecting fraud effectively.

B. Advise the organization to establish and document procedures for periodically
reviewing the AI system's effectiveness and accuracy.

C. Suggest that the organization review the AI system only when there is a
significant increase in fraud cases.

D. Propose that the organization rely on external audits to assess the AI system's
effectiveness and accuracy instead of conducting internal reviews.

Answer Explanation:

Correct Answer: B. Advise the organization to establish and document
procedures for periodically reviewing the AI system's effectiveness and
accuracy.

Explanation:

ISO 42001:2023 requires organizations to regularly review and assess the
performance of their AI systems to ensure they remain effective and accurate.
Periodic reviews help identify and address any issues with the system, such as
undetected fraud or false positives, ensuring the AI system continues to meet its
intended purpose and complies with relevant standards.
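
For illustration only, a periodic review might include re-checking precision and recall on a manually investigated sample of transactions; the sample records and any thresholds applied to the results below are hypothetical:

# Hypothetical manually reviewed sample: AI flag vs. investigator's conclusion
reviewed_sample = [
    {"flagged_by_ai": True,  "confirmed_fraud": True},
    {"flagged_by_ai": True,  "confirmed_fraud": False},   # false positive
    {"flagged_by_ai": False, "confirmed_fraud": True},    # missed fraud
    {"flagged_by_ai": False, "confirmed_fraud": False},
    {"flagged_by_ai": True,  "confirmed_fraud": True},
]

tp = sum(1 for r in reviewed_sample if r["flagged_by_ai"] and r["confirmed_fraud"])
fp = sum(1 for r in reviewed_sample if r["flagged_by_ai"] and not r["confirmed_fraud"])
fn = sum(1 for r in reviewed_sample if not r["flagged_by_ai"] and r["confirmed_fraud"])

precision = tp / (tp + fp) if (tp + fp) else 0.0
recall = tp / (tp + fn) if (tp + fn) else 0.0
print(f"Precision: {precision:.2f}  Recall: {recall:.2f}")
# Results below the documented thresholds would trigger corrective action under the procedure.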

Incorrect Answers:

● A. Recommend that the organization ignore periodic reviews since the
AI system is currently detecting fraud effectively.
○ Ignoring periodic reviews is non-compliant with ISO 42001:2023 and
can lead to undetected issues over time. Regular assessments are
necessary to maintain system effectiveness.
● C. Suggest that the organization review the AI system only when there
is a significant increase in fraud cases.
○ Waiting for a significant increase in fraud cases before reviewing the
system is reactive and does not align with the proactive approach
required by ISO 42001:2023.
● D. Propose that the organization rely on external audits to assess the
AI system's effectiveness and accuracy instead of conducting internal
reviews.

○ While external audits can be valuable, internal reviews are essential
for ongoing monitoring and quick identification of issues. A
combination of both is often the best practice.

24. Scenario:

During an audit, you find that the organization uses an AI system to make decisions
about employee promotions. However, there is no transparency in the decision-
making process, and employees do not understand how the AI system reaches its
conclusions, leading to concerns about fairness.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization keep the AI decision-making process
confidential to maintain its competitive advantage.

B. Advise the organization to establish and implement procedures to ensure
transparency in the AI system's decision-making process, including clear
communication with employees about how decisions are made.

C. Suggest that the organization only disclose decision-making criteria when
employees file formal complaints.

D. Propose that the organization switch to a manual decision-making process for
employee promotions to avoid transparency issues.

Answer Explanation:

Correct Answer: B. Advise the organization to establish and implement
procedures to ensure transparency in the AI system's decision-making
process, including clear communication with employees about how
decisions are made.

Explanation:

ISO 42001:2023 emphasizes the importance of transparency in AI systems,
especially when making decisions that significantly impact individuals, such as
employee promotions. Ensuring transparency helps build trust and allows
employees to understand and trust the AI system's decisions. Clear communication
about the decision-making process is crucial for maintaining fairness and
compliance.
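
Purely as an illustration of what decision transparency can look like in practice (the scoring factors, weights, and wording are hypothetical and far simpler than a real promotion model), a system can return the contribution of each factor alongside its output:

# Hypothetical, deliberately simple weighted score with a per-factor explanation
WEIGHTS = {"performance_rating": 0.5, "tenure_years": 0.2, "training_hours": 0.3}

def score_with_explanation(candidate: dict):
    contributions = {k: WEIGHTS[k] * candidate[k] for k in WEIGHTS}
    total = sum(contributions.values())
    explanation = ", ".join(f"{k} contributed {v:.1f}" for k, v in contributions.items())
    return total, explanation

total, explanation = score_with_explanation(
    {"performance_rating": 4.0, "tenure_years": 3.0, "training_hours": 2.0}
)
print(f"Promotion score: {total:.1f} ({explanation})")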

Incorrect Answers:

● A. Recommend that the organization keep the AI decision-making
process confidential to maintain its competitive advantage.
○ Keeping the decision-making process confidential can lead to
mistrust and concerns about fairness, which is non-compliant with
the transparency requirements of ISO 42001:2023.
● C. Suggest that the organization only disclose decision-making criteria
when employees file formal complaints.
○ Disclosing criteria only when complaints are filed is reactive and
insufficient. Proactive transparency is necessary to ensure ongoing
trust and compliance.
● D. Propose that the organization switch to a manual decision-making
process for employee promotions to avoid transparency issues.
○ Switching to a manual process is not necessary if transparency can be
achieved with the AI system. Proper procedures and communication
can ensure the AI system is fair and transparent.

25. Scenario:

During an audit, you find that the organization has implemented multiple AI
systems for various functions. However, there is no centralized governance
framework to oversee these AI systems, leading to inconsistencies in their
management and potential risks.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that each department independently manage their AI systems
without a centralized framework.

B. Advise the organization to establish a centralized governance framework to
oversee the management of all AI systems, ensuring consistency and compliance.

C. Suggest that the organization only focus on high-risk AI systems and leave other
systems unmanaged.

D. Propose that the organization periodically review AI systems without
implementing a centralized governance framework.

Answer Explanation:

Correct Answer: B. Advise the organization to establish a centralized
governance framework to oversee the management of all AI systems,
ensuring consistency and compliance.

Explanation:

ISO 42001:2023 emphasizes the importance of a centralized governance
framework for managing AI systems. This framework ensures consistency, proper
risk management, and compliance across all AI systems within the organization. A
centralized approach helps in aligning AI systems with organizational policies and
standards, reducing risks associated with decentralized and inconsistent
management.

Incorrect Answers:

● A. Recommend that each department independently manage their AI
systems without a centralized framework.
○ Independent management by each department can lead to
inconsistencies and increased risks. A centralized governance
framework ensures uniformity and compliance.
● C. Suggest that the organization only focus on high-risk AI systems and
leave other systems unmanaged.
○ Focusing only on high-risk systems is insufficient. All AI systems
should be managed under a centralized framework to ensure
comprehensive oversight and risk management.
● D. Propose that the organization periodically review AI systems
without implementing a centralized governance framework.
○ Periodic reviews without a centralized framework can still result in
inconsistencies. A centralized governance framework provides a
structured and consistent approach to managing all AI systems.

26. Scenario:

An organization is implementing an AI system for predictive maintenance in its
manufacturing plant. However, the implementation team has not involved any
stakeholders from the maintenance department, leading to a lack of buy-in and
potential usability issues.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Proceed with the implementation without involving the maintenance
department to avoid delays.

B. Advise the implementation team to include stakeholders from the maintenance
department in the planning and implementation phases to ensure the AI system
meets their needs and gains their buy-in.

C. Suggest that the implementation team conduct training for the maintenance
department after the AI system is fully deployed.

D. Propose that the organization hire external consultants to manage the
implementation without involving internal stakeholders.

Answer Explanation:

Correct Answer: B. Advise the implementation team to include stakeholders
from the maintenance department in the planning and implementation
phases to ensure the AI system meets their needs and gains their buy-in.

Explanation:

ISO 42001:2023 emphasizes the importance of stakeholder involvement in the
implementation of AI systems. Including stakeholders from the maintenance
department ensures that their needs and concerns are addressed, leading to better
usability and acceptance of the AI system. Involvement of relevant stakeholders is
crucial for successful implementation and compliance with the standard.

Incorrect Answers:

● A. Proceed with the implementation without involving the
maintenance department to avoid delays.
○ Ignoring stakeholder involvement can lead to usability issues and
lack of acceptance, which is non-compliant with ISO 42001:2023.

● C. Suggest that the implementation team conduct training for the
maintenance department after the AI system is fully deployed.
○ While training is important, involving stakeholders from the
beginning ensures that the system is designed to meet their needs,
leading to better acceptance and effectiveness.
● D. Propose that the organization hire external consultants to manage
the implementation without involving internal stakeholders.
○ External consultants can provide expertise, but internal stakeholder
involvement is crucial for ensuring the AI system aligns with
organizational needs and practices.

27. Scenario:

An organization is developing an AI system to assist with customer service
inquiries. However, the organization has not conducted any risk assessments to
identify potential impacts on customer data privacy and security.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Proceed with the development without conducting risk assessments to avoid
delays.

B. Advise the organization to conduct a comprehensive risk assessment to identify
and mitigate potential impacts on customer data privacy and security.

C. Suggest that the organization implement the AI system and monitor for any
privacy and security issues as they arise.

D. Propose that the organization outsource the risk assessment to an external
company without internal oversight.

Answer Explanation:

Correct Answer: B. Advise the organization to conduct a comprehensive risk
assessment to identify and mitigate potential impacts on customer data
privacy and security.

Explanation:

ISO 42001:2023 requires organizations to conduct risk assessments to identify,
evaluate, and mitigate potential risks associated with AI systems. This is
particularly important for systems that handle sensitive data, such as customer
service inquiries. Conducting a comprehensive risk assessment ensures that the
organization can address any privacy and security concerns before they become
issues, ensuring compliance and protecting customer data.

Incorrect Answers:

● A. Proceed with the development without conducting risk assessments
to avoid delays.
○ Skipping risk assessments is non-compliant with ISO 42001:2023
and can lead to significant privacy and security issues.
● C. Suggest that the organization implement the AI system and monitor
for any privacy and security issues as they arise.
○ Reactive monitoring is insufficient. Proactive risk assessments are
necessary to identify and mitigate issues before they occur.
● D. Propose that the organization outsource the risk assessment to an
external company without internal oversight.
○ While outsourcing can be beneficial, internal oversight is essential to
ensure that the risk assessment aligns with the organization’s specific
context and requirements. Internal stakeholders should be involved
in the risk assessment process.

28. Scenario:

An organization is deploying an AI system to optimize its supply chain operations.
However, the organization has not established any performance metrics to
evaluate the system's effectiveness or its impact on operational efficiency.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Deploy the AI system without performance metrics and evaluate its
effectiveness based on general feedback.

B. Advise the organization to establish specific performance metrics to evaluate
the AI system's effectiveness and its impact on operational efficiency before
deployment.

C. Suggest that the organization monitor the AI system's performance informally
and make adjustments as needed.

D. Propose that the organization rely on the AI system vendor's default metrics for
evaluation.

Answer Explanation:

Correct Answer: B. Advise the organization to establish specific performance
metrics to evaluate the AI system's effectiveness and its impact on
operational efficiency before deployment.

Explanation:

ISO 42001:2023 requires organizations to define and document performance
metrics for AI systems. These metrics help evaluate the system's effectiveness and
its impact on operational efficiency. Establishing specific metrics before
deployment ensures that the organization can systematically assess the AI
system's performance and make informed decisions about its optimization and
improvement.
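
For illustration only, once metrics are defined a simple baseline-versus-post-deployment comparison can evidence the evaluation; the metric names and values below are hypothetical:

baseline = {"avg_order_lead_time_days": 6.2, "stockout_rate": 0.08, "forecast_error_pct": 14.0}
post_deployment = {"avg_order_lead_time_days": 5.1, "stockout_rate": 0.05, "forecast_error_pct": 11.5}

for metric, before in baseline.items():
    after = post_deployment[metric]
    change_pct = (after - before) / before * 100
    print(f"{metric}: {before} -> {after} ({change_pct:+.1f}%)")
# Documented targets for each metric determine whether the AI system is judged effective.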

Incorrect Answers:

● A. Deploy the AI system without performance metrics and evaluate its
effectiveness based on general feedback.
○ Evaluating effectiveness based on general feedback is insufficient and
non-compliant with ISO 42001:2023. Specific metrics are necessary
for systematic evaluation.
● C. Suggest that the organization monitor the AI system's performance
informally and make adjustments as needed.
○ Informal monitoring lacks the rigor and consistency required by ISO
42001:2023. Formal metrics provide a structured approach to
performance evaluation.
● D. Propose that the organization rely on the AI system vendor's default
metrics for evaluation.
○ Vendor's default metrics may not align with the organization's
specific goals and requirements. Custom performance metrics ensure
relevance and comprehensiveness in evaluation.

29. Scenario:

An organization is integrating an AI system to enhance its marketing strategies by
analyzing customer behavior data. However, the organization has not established
any ethical guidelines or frameworks to ensure the responsible use of customer
data within the AI system.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Proceed with the AI integration without ethical guidelines to expedite the
marketing strategy enhancement.

B. Advise the organization to establish and implement ethical guidelines and
frameworks to ensure the responsible use of customer data within the AI system.

C. Suggest that the organization rely on existing general business ethics without
specific AI-related guidelines.

D. Propose that the organization only consider ethical guidelines if a customer data
misuse incident occurs.

Answer Explanation:

Correct Answer: B. Advise the organization to establish and implement
ethical guidelines and frameworks to ensure the responsible use of customer
data within the AI system.

Explanation:

ISO 42001:2023 emphasizes the importance of ethical considerations in the use of
AI systems, particularly when dealing with sensitive data such as customer
behavior. Establishing and implementing ethical guidelines and frameworks
ensures that the AI system operates responsibly, protecting customer data and
maintaining trust. This proactive approach is essential for compliance and ethical
AI use.

Incorrect Answers:

● A. Proceed with the AI integration without ethical guidelines to
expedite the marketing strategy enhancement.

○ Ignoring ethical guidelines compromises compliance with ISO
42001:2023 and risks unethical use of customer data, potentially
leading to trust issues and legal consequences.
● C. Suggest that the organization rely on existing general business
ethics without specific AI-related guidelines.
○ General business ethics may not adequately address the specific
challenges and responsibilities associated with AI systems. Specific
AI-related ethical guidelines are necessary.
● D. Propose that the organization only consider ethical guidelines if a
customer data misuse incident occurs.
○ Reactive consideration of ethics is insufficient. Proactively
establishing ethical guidelines ensures responsible AI use from the
outset, preventing misuse incidents.

30. Scenario:

An organization has developed an AI system for financial forecasting. The
organization has not conducted any stakeholder engagement sessions to gather
input and feedback from key stakeholders, including financial analysts and
decision-makers.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Proceed with the deployment of the AI system without stakeholder engagement to save time.

B. Advise the organization to conduct stakeholder engagement sessions to gather input and feedback from financial analysts and decision-makers before finalizing the AI system.

C. Suggest that the organization deploy the AI system and collect feedback from
stakeholders after implementation.

D. Propose that the organization focus solely on the technical development of the
AI system and not involve stakeholders.

Answer Explanation:

Correct Answer: B. Advise the organization to conduct stakeholder
engagement sessions to gather input and feedback from financial analysts
and decision-makers before finalizing the AI system.

Explanation:

ISO 42001:2023 emphasizes the importance of stakeholder engagement in the development and deployment of AI systems. Engaging stakeholders such as
financial analysts and decision-makers ensures that the AI system meets their
needs and expectations, and that any potential issues are identified and addressed
early. This approach enhances the system's effectiveness and fosters acceptance
and trust among users.

Incorrect Answers:

● A. Proceed with the deployment of the AI system without stakeholder engagement to save time.
○ Skipping stakeholder engagement can lead to unmet needs, potential
issues, and lack of trust, which is non-compliant with ISO
42001:2023.
● C. Suggest that the organization deploy the AI system and collect
feedback from stakeholders after implementation.
○ Collecting feedback after deployment is reactive and can result in
significant rework and disruptions. Proactive engagement is essential
for compliance and effective system design.
● D. Propose that the organization focus solely on the technical
development of the AI system and not involve stakeholders.
○ Ignoring stakeholder involvement overlooks crucial insights and
feedback, leading to a system that may not align with user needs and
organizational goals. Stakeholder engagement is a key component of
compliance with ISO 42001:2023.

31. Scenario:

An organization is using an AI system for automated decision-making in the hiring process. However, there is no clear documentation or process for handling and
investigating complaints from job applicants who believe they were unfairly
treated by the AI system.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization ignore complaints from job applicants to avoid additional workload.

B. Advise the organization to establish and document a clear process for handling and investigating complaints from job applicants about the AI system.

C. Suggest that the organization handle complaints informally without a documented process.

D. Propose that the organization discontinue the use of AI in hiring to prevent any
potential complaints.

Answer Explanation:

Correct Answer: B. Advise the organization to establish and document a clear process for handling and investigating complaints from job applicants about the AI system.

Explanation:

ISO 42001:2023 requires organizations to have documented procedures for handling and investigating complaints related to AI systems. Establishing a clear
process ensures that complaints are addressed fairly and transparently,
maintaining trust in the AI system and ensuring compliance with ethical and legal
standards.
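
This requirement is procedural rather than technical, but a minimal sketch can show what a documented, trackable complaint-handling record might look like in practice. The field names, status values, and 30-day response deadline below are illustrative assumptions, not terms defined by ISO 42001:2023.

# Illustrative complaint register for AI-driven hiring decisions (Python).
# Field names, statuses, and the response deadline are assumptions, not ISO requirements.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Complaint:
    complaint_id: str
    applicant_ref: str                  # reference to the affected job applicant
    description: str                    # what the applicant believes was unfair
    received_at: datetime
    status: str = "received"            # received -> under_investigation -> resolved
    resolution: Optional[str] = None
    resolved_at: Optional[datetime] = None

class ComplaintRegister:
    """Keeps every complaint on record so handling stays transparent and auditable."""

    def __init__(self, response_due: timedelta = timedelta(days=30)):
        self.response_due = response_due
        self._complaints: dict[str, Complaint] = {}

    def log(self, complaint: Complaint) -> None:
        self._complaints[complaint.complaint_id] = complaint

    def start_investigation(self, complaint_id: str) -> None:
        self._complaints[complaint_id].status = "under_investigation"

    def resolve(self, complaint_id: str, resolution: str) -> None:
        c = self._complaints[complaint_id]
        c.status, c.resolution, c.resolved_at = "resolved", resolution, datetime.now()

    def overdue(self) -> list[Complaint]:
        """Complaints still open past the documented response deadline."""
        now = datetime.now()
        return [c for c in self._complaints.values()
                if c.status != "resolved" and now - c.received_at > self.response_due]

# Example: every complaint is logged, investigated, and tracked to resolution.
register = ComplaintRegister()
register.log(Complaint("C-001", "applicant-7421", "Rejected without interview", datetime.now()))
register.start_investigation("C-001")
print(len(register.overdue()))          # 0 - nothing has passed the deadline yet

An auditor would look for evidence that such records exist, that each complaint reaches a documented resolution, and that overdue cases are escalated.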

Incorrect Answers:

● A. Recommend that the organization ignore complaints from job applicants to avoid additional workload.
○ Ignoring complaints is non-compliant with ISO 42001:2023 and can
lead to significant ethical and legal issues.
● C. Suggest that the organization handle complaints informally without
a documented process.

○ Handling complaints informally lacks the transparency and
consistency required by ISO 42001:2023. A documented process is
essential.
● D. Propose that the organization discontinue the use of AI in hiring to
prevent any potential complaints.
○ Discontinuing the use of AI is not necessary if proper complaint
handling procedures are in place. AI systems can provide significant
benefits when managed correctly.

32. Scenario:

An organization has implemented an AI system for dynamic pricing in its e-commerce platform. However, the organization has not conducted any audits to
ensure that the AI system's pricing decisions comply with relevant legal and
regulatory requirements.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization trust the AI system’s decisions without conducting audits to avoid disrupting operations.

B. Advise the organization to conduct regular audits of the AI system to ensure its pricing decisions comply with relevant legal and regulatory requirements.

C. Suggest that the organization only conduct audits if customers file complaints about unfair pricing.

D. Propose that the organization disable the AI system temporarily until compliance can be verified through a one-time audit.

Answer Explanation:

Correct Answer: B. Advise the organization to conduct regular audits of the AI system to ensure its pricing decisions comply with relevant legal and regulatory requirements.

Explanation:

ISO 42001:2023 requires organizations to regularly audit their AI systems to ensure compliance with legal and regulatory requirements. Regular audits help
identify and rectify any non-compliance issues, ensuring the AI system’s decisions
are fair, legal, and ethical. This proactive approach maintains trust and minimizes
the risk of legal repercussions.
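
As a rough illustration of what a recurring audit check over logged pricing decisions might involve, the sketch below applies two assumed rules (no below-cost pricing and no protected attributes among the model inputs). The rule set shown is an assumption; the actual rules must come from the legal and regulatory requirements that apply to the organization.

# Illustrative audit check over logged dynamic-pricing decisions (Python).
from dataclasses import dataclass

@dataclass
class PricingDecision:
    order_id: str
    unit_cost: float
    quoted_price: float
    features_used: list[str]            # model inputs recorded for traceability

PROTECTED_ATTRIBUTES = {"gender", "ethnicity", "age"}   # illustrative list
MINIMUM_MARGIN = 0.0                                    # assumed rule: no below-cost pricing

def audit_pricing(decisions: list[PricingDecision]) -> list[str]:
    """Return human-readable findings for the audit report."""
    findings = []
    for d in decisions:
        if d.quoted_price < d.unit_cost * (1 + MINIMUM_MARGIN):
            findings.append(f"{d.order_id}: price below documented minimum margin")
        used = PROTECTED_ATTRIBUTES.intersection(d.features_used)
        if used:
            findings.append(f"{d.order_id}: protected attributes used as inputs: {sorted(used)}")
    return findings

# Example: one compliant decision and one that produces audit findings.
sample = [
    PricingDecision("A-100", unit_cost=10.0, quoted_price=12.5, features_used=["demand", "stock"]),
    PricingDecision("A-101", unit_cost=10.0, quoted_price=9.0, features_used=["demand", "gender"]),
]
for finding in audit_pricing(sample):
    print(finding)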

Incorrect Answers:

● A. Recommend that the organization trust the AI system’s decisions without conducting audits to avoid disrupting operations.
○ Trusting the AI system without audits is non-compliant with ISO
42001:2023 and poses significant risks of non-compliance with legal
and regulatory requirements.
● C. Suggest that the organization only conduct audits if customers file
complaints about unfair pricing.
○ Conducting audits only in response to complaints is reactive and
insufficient. Regular audits are necessary to proactively ensure
compliance.
● D. Propose that the organization disable the AI system temporarily
until compliance can be verified through a one-time audit.
○ Disabling the AI system is unnecessarily disruptive. Regular, ongoing
audits ensure continuous compliance and system reliability.

33. Scenario:

An organization has implemented an AI system for monitoring and managing energy consumption across its facilities. However, the organization has not
established any procedures for regularly updating the AI system based on new
data and advancements in technology.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization continue using the AI system without updates
to avoid operational disruptions.

B. Advise the organization to establish and implement procedures for regularly updating the AI system based on new data and technological advancements.

C. Suggest that the organization update the AI system only when significant issues
arise.

D. Propose that the organization rely on the initial setup and configuration without
considering future updates.

Answer Explanation:

Correct Answer: B. Advise the organization to establish and implement procedures for regularly updating the AI system based on new data and technological advancements.

Explanation:

ISO 42001:2023 requires organizations to ensure their AI systems are regularly updated based on new data and technological advancements. Regular updates help
maintain the AI system's effectiveness, accuracy, and compliance with evolving
standards and requirements. Establishing procedures for updates ensures the AI
system remains relevant and performs optimally.
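
One way such an update procedure can be triggered is by comparing recent model error against the baseline recorded at deployment, as sketched below. The metric, tolerance, and numbers are assumptions used only to illustrate the idea.

# Illustrative update trigger: flag the model for retraining when drift exceeds
# a documented tolerance (Python). Threshold and metric choice are assumptions.
from statistics import mean

def mean_absolute_error(actual: list[float], predicted: list[float]) -> float:
    return mean(abs(a - p) for a, p in zip(actual, predicted))

def needs_update(baseline_mae: float, recent_mae: float, tolerance: float = 0.20) -> bool:
    """True when recent error has degraded beyond the allowed tolerance."""
    return recent_mae > baseline_mae * (1 + tolerance)

# Example: energy-consumption forecasts vs. meter readings from the latest period.
baseline = 4.2                           # MAE (kWh) recorded at deployment
recent = mean_absolute_error(actual=[120.0, 98.5, 110.2], predicted=[112.0, 104.0, 125.0])
if needs_update(baseline, recent):
    print("Drift beyond tolerance - schedule model update and revalidation")
else:
    print("Within tolerance - record the check and continue monitoring")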

Incorrect Answers:

● A. Recommend that the organization continue using the AI system without updates to avoid operational disruptions.
○ Avoiding updates can lead to outdated and less effective AI systems,
which is non-compliant with ISO 42001:2023.
● C. Suggest that the organization update the AI system only when
significant issues arise.
○ Waiting for significant issues before updating is reactive and can lead
to performance degradation and non-compliance. Regular updates
are necessary.
● D. Propose that the organization rely on the initial setup and
configuration without considering future updates.
○ Relying solely on the initial setup without updates does not align with
ISO 42001:2023, which emphasizes the need for continuous
improvement and adaptation to new data and technology.

34. Scenario:

An organization uses an AI system for predictive analytics in its supply chain
management. However, the organization has not provided any training to its
employees on how to interpret and act on the AI system's predictions, leading to
misinformed decisions and inefficiencies.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization allow employees to learn about the AI system
on their own through trial and error.

B. Advise the organization to develop and implement a comprehensive training program for employees on how to interpret and act on the AI system's predictions.

C. Suggest that the organization rely solely on the AI system's recommendations without involving employees in the decision-making process.

D. Propose that the organization only provide training to a select few employees in
senior management positions.

Answer Explanation:

Correct Answer: B. Advise the organization to develop and implement a comprehensive training program for employees on how to interpret and act on the AI system's predictions.

Explanation:

ISO 42001:2023 requires that organizations ensure their employees are competent in using AI systems, which includes understanding how to interpret and
act on AI predictions. A comprehensive training program ensures that all relevant
employees are equipped with the necessary skills and knowledge, leading to more
informed decisions and improved efficiency.

Incorrect Answers:

● A. Recommend that the organization allow employees to learn about the AI system on their own through trial and error.
○ Relying on trial and error is inefficient and can lead to mistakes and
misinformed decisions. Structured training is necessary for
compliance and effective use of AI.

● C. Suggest that the organization rely solely on the AI system's
recommendations without involving employees in the decision-
making process.
○ Employee involvement is crucial for interpreting AI
recommendations within the context of the business. Solely relying
on AI without human oversight can lead to errors and non-
compliance.
● D. Propose that the organization only provide training to a select few
employees in senior management positions.
○ Training should be provided to all relevant employees who interact
with the AI system to ensure comprehensive understanding and
effective use across the organization. Limiting training to senior
management is insufficient.

35. Scenario:

An organization has implemented an AI system to streamline its customer support operations. However, there is no process in place to regularly evaluate and improve
the system based on customer feedback and performance metrics.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization continue using the AI system without evaluations to avoid disruptions.

B. Advise the organization to establish and implement a process for regularly evaluating and improving the AI system based on customer feedback and performance metrics.

C. Suggest that the organization rely solely on periodic technical maintenance without considering customer feedback.

D. Propose that the organization conduct a one-time evaluation and improvement cycle, then continue using the AI system without further assessments.

Answer Explanation:

Correct Answer: B. Advise the organization to establish and implement a
process for regularly evaluating and improving the AI system based on
customer feedback and performance metrics.

Explanation:

ISO 42001:2023 emphasizes the importance of continual improvement and stakeholder feedback. Regularly evaluating the AI system using customer feedback
and performance metrics ensures the system remains effective, meets user needs,
and can adapt to changing requirements. This approach helps in maintaining high
standards of service and compliance with the standard.
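
A minimal sketch of such a periodic evaluation might combine customer-satisfaction feedback with operational metrics and turn any shortfall into an improvement action, as below. The metric names and target values are assumptions, not figures taken from the standard.

# Illustrative periodic evaluation of the customer-support AI (Python).
def evaluate_period(csat_scores: list[int], resolved: int, total_tickets: int,
                    csat_target: float = 4.0, resolution_target: float = 0.85) -> list[str]:
    """Return improvement actions when the period misses its documented targets."""
    actions = []
    avg_csat = sum(csat_scores) / len(csat_scores)
    resolution_rate = resolved / total_tickets
    if avg_csat < csat_target:
        actions.append(f"CSAT {avg_csat:.2f} below target {csat_target} - review low-scoring transcripts")
    if resolution_rate < resolution_target:
        actions.append(f"Resolution rate {resolution_rate:.0%} below target {resolution_target:.0%} - analyse escalations")
    return actions

# Example evaluation for one review period.
for action in evaluate_period(csat_scores=[5, 4, 3, 2, 4], resolved=160, total_tickets=200):
    print(action)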

Incorrect Answers:

● A. Recommend that the organization continue using the AI system without evaluations to avoid disruptions.
○ Avoiding evaluations is non-compliant with ISO 42001:2023 and can
lead to a decline in system performance and user satisfaction over
time.
● C. Suggest that the organization rely solely on periodic technical
maintenance without considering customer feedback.
○ Technical maintenance alone is insufficient. Customer feedback is
crucial for understanding real-world performance and areas needing
improvement.
● D. Propose that the organization conduct a one-time evaluation and
improvement cycle, then continue using the AI system without further
assessments.
○ A one-time evaluation does not ensure ongoing effectiveness. Regular
assessments are necessary for continual improvement and
compliance with the standard.

36. Scenario:

An organization uses an AI system to analyze large sets of data for market trend
predictions. However, the organization has not documented any procedures for
data governance, including data quality, integrity, and security measures.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization continue without documented data governance procedures since the AI system is functioning well.

B. Advise the organization to establish and document comprehensive data governance procedures to ensure data quality, integrity, and security.

C. Suggest that the organization only focus on data security and ignore data quality
and integrity.

D. Propose that the organization rely on the data sources' inherent quality without
establishing internal governance procedures.

Answer Explanation:

Correct Answer: B. Advise the organization to establish and document comprehensive data governance procedures to ensure data quality, integrity, and security.

Explanation:

ISO 42001:2023 requires organizations to have documented data governance procedures to ensure the quality, integrity, and security of the data used in AI
systems. Comprehensive data governance helps maintain the reliability and
accuracy of AI predictions, supports compliance with legal and regulatory
requirements, and protects against data breaches and corruption.
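
A small sketch of what documented data-governance checks could look like in code is shown below: a quality check on required fields and value ranges, and an integrity check on the dataset hash. The column names, rules, and hash-verification approach are assumptions for illustration only.

# Illustrative governance checks before market-trend data reaches the AI system (Python).
import hashlib

REQUIRED_COLUMNS = {"date", "segment", "sales"}          # assumed schema

def quality_issues(rows: list[dict]) -> list[str]:
    """Flag missing columns and out-of-range values."""
    issues = []
    for i, row in enumerate(rows):
        missing = REQUIRED_COLUMNS - row.keys()
        if missing:
            issues.append(f"row {i}: missing columns {sorted(missing)}")
        elif row["sales"] is None or row["sales"] < 0:
            issues.append(f"row {i}: invalid sales value {row['sales']}")
    return issues

def integrity_ok(raw_bytes: bytes, expected_sha256: str) -> bool:
    """Verify the dataset has not been altered in transit or storage."""
    return hashlib.sha256(raw_bytes).hexdigest() == expected_sha256

# Example usage with a small in-memory dataset.
rows = [{"date": "2024-01-01", "segment": "retail", "sales": 1200},
        {"date": "2024-01-02", "segment": "retail", "sales": -5}]
print(quality_issues(rows))              # flags the negative sales value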

Incorrect Answers:

● A. Recommend that the organization continue without documented data governance procedures since the AI system is functioning well.
○ Good current performance does not guarantee future reliability. Documented governance procedures are necessary for sustained compliance and performance.
● C. Suggest that the organization only focus on data security and ignore
data quality and integrity.
○ Focusing solely on data security while ignoring quality and integrity
is insufficient. All aspects of data governance are critical for ensuring
reliable and compliant AI system operation.

● D. Propose that the organization rely on the data sources' inherent
quality without establishing internal governance procedures.
○ Relying on data sources without internal governance procedures is
risky. Internal governance ensures that all data, regardless of source,
meets the necessary standards for quality, integrity, and security.

37. Scenario:

An organization has implemented an AI system to automate its financial auditing processes. However, there is no formal process for logging and tracking AI system
errors and incidents, which has led to unresolved issues affecting the system's
reliability.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization ignore minor errors and incidents to focus on
major issues.

B. Advise the organization to establish a formal process for logging and tracking all
AI system errors and incidents to ensure they are resolved promptly.

C. Suggest that the organization rely on manual checks to identify errors without a
formal logging process.

D. Propose that the organization replace the AI system with manual auditing to
avoid errors.

Answer Explanation:

Correct Answer: B. Advise the organization to establish a formal process for logging and tracking all AI system errors and incidents to ensure they are resolved promptly.

Explanation:

ISO 42001:2023 requires organizations to have processes in place for identifying, logging, and tracking AI system errors and incidents. This ensures that all issues
are documented and addressed promptly, maintaining the system's reliability and
compliance. A formal logging and tracking process helps in continuous
improvement and risk management.
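
The sketch below illustrates one way a formal error and incident log might be implemented, with every incident recorded, given a severity, and kept open until resolved. Severity levels, field names, and the register structure are assumptions, not elements prescribed by ISO 42001:2023.

# Illustrative error and incident log for the auditing AI system (Python).
import logging
from dataclasses import dataclass, field
from datetime import datetime

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit_system")

@dataclass
class Incident:
    incident_id: str
    severity: str                        # e.g. "minor", "major", "critical"
    description: str
    detected_at: datetime = field(default_factory=datetime.now)
    resolved: bool = False

incident_register: list[Incident] = []

def record_incident(incident: Incident) -> None:
    """Log the incident and keep it on the register until it is resolved."""
    incident_register.append(incident)
    logger.warning("Incident %s (%s): %s", incident.incident_id,
                   incident.severity, incident.description)

def open_incidents() -> list[Incident]:
    return [i for i in incident_register if not i.resolved]

# Example: even minor errors are recorded rather than ignored.
record_incident(Incident("INC-042", "minor", "Duplicate ledger entry flagged twice"))
print(f"Open incidents awaiting resolution: {len(open_incidents())}")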

Incorrect Answers:

● A. Recommend that the organization ignore minor errors and incidents to focus on major issues.
○ Ignoring minor errors can lead to larger, unresolved issues over time.
All errors and incidents should be logged and addressed to ensure
system reliability.
● C. Suggest that the organization rely on manual checks to identify
errors without a formal logging process.
○ Manual checks without formal logging are insufficient and non-
compliant with ISO 42001:2023. A structured logging process
ensures systematic tracking and resolution of issues.
● D. Propose that the organization replace the AI system with manual
auditing to avoid errors.
○ Replacing the AI system with manual auditing is not a practical
solution. Proper error logging and tracking can help maintain the AI
system’s reliability and benefits.

38. Scenario:

An organization has deployed an AI system to optimize its inventory management. However, the organization has not established any procedures for ensuring the
system's compliance with applicable data protection regulations, leading to
potential risks in handling sensitive supplier and customer data.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization ignore data protection regulations since the
AI system is performing well.

B. Advise the organization to establish and implement procedures to ensure the AI system's compliance with applicable data protection regulations.

C. Suggest that the organization rely on the AI system vendor's data protection
measures without internal procedures.

D. Propose that the organization limit the AI system's use of sensitive data to
minimize potential risks.

Answer Explanation:

Correct Answer: B. Advise the organization to establish and implement procedures to ensure the AI system's compliance with applicable data protection regulations.

Explanation:

ISO 42001:2023 requires organizations to comply with relevant data protection regulations when implementing AI systems. Establishing and implementing
procedures ensures that the AI system handles sensitive data responsibly and
legally, mitigating risks and maintaining compliance. These procedures are
essential for protecting supplier and customer data and maintaining trust.
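
As an illustration of one such procedure, the sketch below pseudonymizes direct identifiers and drops fields the inventory model does not need before data reaches the AI system. The field names and salted-hash approach are assumptions; actual controls must be derived from the data protection regulations that apply.

# Illustrative data-minimization step before inventory data reaches the AI system (Python).
import hashlib

DIRECT_IDENTIFIERS = {"customer_name", "supplier_contact_email"}    # assumed fields
FIELDS_NEEDED_BY_MODEL = {"sku", "quantity", "region"}              # assumed fields

def pseudonymize(value: str, salt: str = "rotate-per-policy") -> str:
    """Replace an identifier with a stable, non-reversible reference."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

def prepare_record(record: dict) -> dict:
    """Return only what the model needs, with identifiers replaced by references."""
    cleaned = {"customer_ref": pseudonymize(record.get("customer_name", ""))}
    for key, value in record.items():
        if key in FIELDS_NEEDED_BY_MODEL:
            cleaned[key] = value         # direct identifiers are never passed through
    return cleaned

print(prepare_record({"customer_name": "Acme Ltd", "supplier_contact_email": "x@y.z",
                      "sku": "P-100", "quantity": 40, "region": "EU"}))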

Incorrect Answers:

● A. Recommend that the organization ignore data protection regulations since the AI system is performing well.
○ Ignoring data protection regulations is non-compliant with ISO
42001:2023 and can lead to significant legal and ethical issues.
● C. Suggest that the organization rely on the AI system vendor's data
protection measures without internal procedures.
○ While vendor measures can help, the organization must have its own
internal procedures to ensure comprehensive and consistent
compliance.
● D. Propose that the organization limit the AI system's use of sensitive
data to minimize potential risks.
○ Limiting the use of sensitive data alone is insufficient. Proper
procedures are necessary to ensure that any data used is handled in
compliance with regulations.

39. Scenario:

An organization has implemented an AI system for automated decision-making in loan approvals. The organization has not conducted any impact assessments to
evaluate how the AI system's decisions affect different demographic groups,
leading to concerns about potential biases and unfair treatment.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization continue using the AI system without impact
assessments since it is already in place.

B. Advise the organization to conduct comprehensive impact assessments to evaluate and address potential biases and unfair treatment in the AI system's decisions.

C. Suggest that the organization only address biases if there are formal complaints
from affected individuals.

D. Propose that the organization replace the AI system with manual decision-
making to avoid biases.

Answer Explanation:

Correct Answer: B. Advise the organization to conduct comprehensive impact assessments to evaluate and address potential biases and unfair treatment in the AI system's decisions.

Explanation:

ISO 42001:2023 requires organizations to conduct impact assessments to ensure that AI systems operate fairly and do not discriminate against any demographic
groups. Comprehensive impact assessments help identify and mitigate potential
biases, ensuring that the AI system's decisions are fair and ethical.
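
A small, concrete piece of such an impact assessment could compare approval rates across demographic groups, as sketched below. The 80% threshold is a commonly used disparate-impact heuristic chosen here as an assumption, not a figure taken from ISO 42001:2023.

# Illustrative fairness check over a reviewed sample of loan decisions (Python).
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group label, approved?) pairs from a review sample."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_findings(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Flag groups whose approval rate falls below the threshold relative to the highest group."""
    best = max(rates.values())
    return [f"{group}: approval rate {rate:.0%} is below {threshold:.0%} of the highest group"
            for group, rate in rates.items() if best > 0 and rate / best < threshold]

# Example review sample: group_b is approved markedly less often than group_a.
sample = ([("group_a", True)] * 80 + [("group_a", False)] * 20
          + [("group_b", True)] * 50 + [("group_b", False)] * 50)
rates = approval_rates(sample)
print(rates, disparate_impact_findings(rates))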

Incorrect Answers:

● A. Recommend that the organization continue using the AI system without impact assessments since it is already in place.
○ Continuing without impact assessments is non-compliant with ISO
42001:2023 and risks perpetuating biases and unfair treatment.
● C. Suggest that the organization only address biases if there are formal
complaints from affected individuals.
○ Waiting for formal complaints is reactive and insufficient. Proactive
impact assessments are necessary to identify and address biases
before they cause harm.
● D. Propose that the organization replace the AI system with manual
decision-making to avoid biases.
○ Replacing the AI system with manual decision-making is not
practical. Addressing biases through impact assessments ensures the
AI system can be used effectively and fairly.

40. Scenario:

An organization uses an AI system to automate the process of reviewing and approving expense reports. However, the organization has not established any
protocols for regularly validating the accuracy and reliability of the AI system's
decisions.

Question:

What is the most appropriate action to take to ensure compliance with ISO
42001:2023?

A. Recommend that the organization trust the AI system's decisions without validation to avoid disruptions.

B. Advise the organization to establish and implement protocols for regularly validating the accuracy and reliability of the AI system's decisions.

C. Suggest that the organization only validate the AI system's decisions when
discrepancies are reported by employees.

D. Propose that the organization manually review all expense reports in addition
to using the AI system.

Answer Explanation:

Correct Answer: B. Advise the organization to establish and implement protocols for regularly validating the accuracy and reliability of the AI system's decisions.

Explanation:

ISO 42001:2023 emphasizes the importance of regularly validating the performance of AI systems to ensure their decisions are accurate and reliable.
Establishing and implementing validation protocols helps maintain the system's
integrity, improves trust in its outcomes, and ensures compliance with
organizational standards and regulations.
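
A minimal sketch of such a validation protocol is shown below: the AI system's decisions on a sample of expense reports are compared with a manual reviewer's decisions and the agreement rate is recorded against a target. The sample and the 95% target are assumptions for illustration.

# Illustrative periodic validation of automated expense-report decisions (Python).
def validate_against_sample(ai_decisions: list[bool], reviewer_decisions: list[bool],
                            target_agreement: float = 0.95) -> dict:
    """Compare AI decisions with reviewed decisions and report the agreement rate."""
    agreements = sum(a == r for a, r in zip(ai_decisions, reviewer_decisions))
    agreement_rate = agreements / len(ai_decisions)
    return {
        "sample_size": len(ai_decisions),
        "agreement_rate": agreement_rate,
        "within_target": agreement_rate >= target_agreement,
    }

# Example validation run on a small reviewed sample (True = approve).
result = validate_against_sample(
    ai_decisions=[True, True, False, True, False, True],
    reviewer_decisions=[True, True, False, False, False, True])
print(result)                            # kept as evidence for the next management review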

Incorrect Answers:

● A. Recommend that the organization trust the AI system's decisions without validation to avoid disruptions.

○ Trusting the AI system without validation is non-compliant with ISO
42001:2023 and can lead to undetected errors and issues.
● C. Suggest that the organization only validate the AI system's decisions
when discrepancies are reported by employees.
○ Reactive validation is insufficient. Regular validation is necessary to
proactively identify and address potential issues.
● D. Propose that the organization manually review all expense reports
in addition to using the AI system.
○ Manually reviewing all reports negates the efficiency benefits of using
an AI system. Regular validation protocols ensure the AI system can
be trusted without the need for redundant manual reviews.
