IEEE Std 7003™-2024
IEEE Standard for Algorithmic Bias Considerations

Developed by the
Software Systems and Engineering Standards Committee
Authorized licensed use limited to: Michael Borrelli. Downloaded on May 03,2025 at 16:55:07 UTC from IEEE Xplore. Restrictions apply.
IEEE Std 7003™-2024
IEEE Standard for Algorithmic Bias Considerations
Abstract: The processes and methodologies to help users address issues of bias in the creation
of algorithms are described in this standard. Elements include but are not limited to: criteria for
the selection of validation data sets for bias quality control; guidelines on establishing and
communicating the application boundaries for which the algorithm has been designed and
validated, to guard against unintended consequences arising from out-of-bound application of
algorithms; and suggestions for user expectation management to help mitigate bias due to
incorrect interpretation of system outputs by users (e.g., correlation vs. causation).
IEEE is a registered trademark in the U.S. Patent & Trademark Office, owned by The Institute of Electrical and Electronics
Engineers, Incorporated.
IEEE Standards documents are made available for use subject to important notices and legal disclaimers.
These notices and disclaimers, or a reference to this page (https://standards.ieee.org/ipr/disclaimers.html), appear in all IEEE standards and may be found under the heading "Important Notices and Disclaimers Concerning IEEE Standards Documents."
Notice and Disclaimer of Liability Concerning the Use of IEEE Standards Documents
IEEE Standards documents are developed within IEEE Societies and subcommittees of IEEE Standards Association (IEEE SA) Board of Governors. IEEE develops its standards through an accredited consensus development process, which brings together volunteers representing varied viewpoints and interests to achieve the final product. IEEE standards are documents developed by volunteers with scientific, academic, and industry-based expertise in technical working groups. Volunteers involved in technical working groups are not necessarily members of IEEE or IEEE SA and participate without compensation from IEEE. While IEEE administers the process and establishes rules to promote fairness in the consensus development process, IEEE does not independently evaluate, test, or verify the accuracy of any of the information or the soundness of any judgments contained in its standards.
IEEE makes no warranties or representations concerning its standards, and expressly disclaims all warranties, express or implied, concerning all standards, including but not limited to the warranties of merchantability, fitness for a particular purpose and non-infringement. IEEE Standards documents do not guarantee safety, security, health, or environmental protection, or compliance with law, or guarantee against interference with or from other devices or networks. In addition, IEEE does not warrant or represent that the use of the material contained in its standards is free from patent infringement. IEEE Standards documents are supplied "AS IS" and "WITH ALL FAULTS."
Use of an IEEE standard is wholly voluntary. The existence of an IEEE standard does not imply that there
are no other ways to produce, test, measure, purchase, market, or provide other goods and services related to
the scope of the IEEE standard. Furthermore, the viewpoint expressed at the time a standard is approved and
issued is subject to change brought about through developments in the state of the art and comments received
from users of the standard.
In publishing and making its standards available, IEEE is not suggesting or rendering professional or other
services for, or on behalf of, any person or entity, nor is IEEE undertaking to perform any duty owed by any
other person or entity to another. Any person utilizing any IEEE Standards document should rely upon their
own independent judgment in the exercise of reasonable care in any given circumstances or, as appropriate,
seek the advice of a competent professional in determining the appropriateness of a given IEEE standard.
IN NO EVENT SHALL IEEE BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO: THE NEED TO PROCURE SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE PUBLICATION, USE OF, OR RELIANCE UPON ANY STANDARD, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE AND REGARDLESS OF WHETHER SUCH DAMAGE WAS FORESEEABLE.
Translations
The IEEE consensus balloting process involves the review of documents in English only. In the event that
an IEEE standard is translated, only the English language version published by IEEE is the approved IEEE
standard.
In no event shall material in any IEEE Standards documents be used for the purpose of creating, training,
enhancing, developing, maintaining, or contributing to any artificial intelligence systems without the express,
written consent of IEEE SA in advance. “Artificial intelligence” refers to any software, application, or other
system that uses artificial intelligence, machine learning, or similar technologies, to analyze, train, process,
or generate content. Requests for consent can be submitted using the Contact Us form.
Official statements
A statement, written or oral, that is not processed in accordance with the IEEE SA Standards Board Operations Manual is not, and shall not be considered or inferred to be, the official position of IEEE or any of its committees and shall not be considered to be, or be relied upon as, a formal position of IEEE or IEEE SA. At lectures, symposia, seminars, or educational courses, an individual presenting information on IEEE standards shall make it clear that the presenter's views should be considered the personal views of that individual rather than the formal position of IEEE, IEEE SA, the Standards Committee, or the Working Group.
Statements made by volunteers may not represent the formal position of their employer(s) or affiliation(s).
News releases about IEEE standards issued by entities other than IEEE SA should be considered the view of
the entity issuing the release rather than the formal position of IEEE or IEEE SA.
Comments on standards
Comments for revision of IEEE Standards documents are welcome from any interested party, regardless of membership affiliation with IEEE or IEEE SA. However, IEEE does not provide interpretations, consulting information, or advice pertaining to IEEE Standards documents.
Suggestions for changes in documents should be in the form of a proposed change of text, together with
appropriate supporting comments. Since IEEE standards represent a consensus of concerned interests, it
is important that any responses to comments and questions also receive the concurrence of a balance of
interests. For this reason, IEEE and the members of its Societies and subcommittees of the IEEE SA Board
of Governors are not able to provide an instant response to comments or questions, except in those cases
where the matter has previously been addressed. For the same reason, IEEE does not respond to interpretation
requests. Any person who would like to participate in evaluating comments or revisions to an IEEE standard
is welcome to join the relevant IEEE SA working group. You can indicate interest in a working group using
the Interests tab in the Manage Profile & Interests area of the IEEE SA myProject system. 1 An IEEE Account
is needed to access the application.
Laws and regulations
Users of IEEE Standards documents should consult all applicable laws and regulations. Compliance with the provisions of any IEEE Standards document does not constitute compliance with any applicable regulatory requirements. Implementers of the standard are responsible for observing or referring to the applicable regulatory requirements. IEEE does not, by the publication of its standards, intend to urge action that is not in compliance with applicable laws, and these documents may not be construed as doing so.
Data privacy
Users of IEEE Standards documents should evaluate the standards for considerations of data privacy and
data ownership in the context of assessing and using the standards in compliance with applicable laws and
regulations.
Copyrights
IEEE draft and approved standards are copyrighted by IEEE under U.S. and international copyright laws.
They are made available by IEEE and are adopted for a wide variety of both public and private uses. These
include both use by reference, in laws and regulations, and use in private self-regulation, standardization,
and the promotion of engineering practices and methods. By making these documents available for use and
adoption by public authorities and private users, neither IEEE nor its licensors waive any rights in copyright
to the documents.
Photocopies
Subject to payment of the appropriate licensing fees, IEEE will grant users a limited, non-exclusive license to photocopy portions of any individual standard for company or organizational internal use or individual, non-commercial use only. To arrange for payment of licensing fees, please contact Copyright Clearance Center, Customer Service, 222 Rosewood Drive, Danvers, MA 01923 USA; +1 978 750 8400; https://www.copyright.com/. Permission to photocopy portions of any individual standard for educational classroom use can also be obtained through the Copyright Clearance Center.

Updating of IEEE Standards documents
Users of IEEE Standards documents should be aware that these documents may be superseded at any time
by the issuance of new editions or may be amended from time to time through the issuance of amendments,
corrigenda, or errata. An official IEEE document at any point in time consists of the current edition of the
document together with any amendments, corrigenda, or errata then in effect.
Every IEEE standard is subjected to review at least every 10 years. When a document is more than 10 years
old and has not undergone a revision process, it is reasonable to conclude that its contents, although still of
some value, do not wholly reflect the present state of the art. Users are cautioned to check to determine that
they have the latest edition of any IEEE standard.
In order to determine whether a given document is the current edition and whether it has been amended
through the issuance of amendments, corrigenda, or errata, visit IEEE Xplore or contact IEEE. 3 For more
information about the IEEE SA or IEEE’s standards development process, visit the IEEE SA Website.
3 Available at: https://ieeexplore.ieee.org/browse/standards/collection/ieee.
Errata
Errata, if any, for all IEEE standards can be accessed on the IEEE SA Website. 4 Search for standard number
and year of approval to access the web page of the published standard. Errata links are located under the
Additional Resources Details section. Errata are also available in IEEE Xplore. Users are encouraged to
periodically check for errata.
Patents
IEEE standards are developed in compliance with the IEEE SA Patent Policy. 5
Attention is called to the possibility that implementation of this standard may require use of subject matter
covered by patent rights. By publication of this standard, no position is taken by the IEEE with respect to the
existence or validity of any patent rights in connection therewith. If a patent holder or patent applicant has
filed a statement of assurance via an Accepted Letter of Assurance, then the statement is listed on the IEEE
SA Website at https://standards.ieee.org/about/sasb/patcom/patents.html. Letters
of Assurance may indicate whether the Submitter is willing or unwilling to grant licenses under patent rights
without compensation or under reasonable rates, with reasonable terms and conditions that are demonstrably
free of any unfair discrimination to applicants desiring to obtain such licenses.
Essential Patent Claims may exist for which a Letter of Assurance has not been received. The IEEE is not responsible for identifying Essential Patent Claims for which a license may be required, for conducting inquiries into the legal validity or scope of Patents Claims, or determining whether any licensing terms or conditions provided in connection with submission of a Letter of Assurance, if any, or in any licensing agreements are reasonable or non-discriminatory. Users of this standard are expressly advised that determination of the validity of any patent rights, and the risk of infringement of such rights, is entirely their own responsibility. Further information may be obtained from the IEEE Standards Association.
IMPORTANT NOTICE
Technologies, application of technologies, and recommended procedures in various industries evolve over
time. The IEEE standards development process allows participants to review developments in industries,
technologies, and practices, and to determine what, if any, updates should be made to the IEEE standard.
During this evolution, the technologies and recommendations in IEEE standards may be implemented in ways
not foreseen during the standard’s development. IEEE standards development activities consider research
and information presented to the standards development group in developing any safety recommendations.
Other information about safety practices, changes in technology or technology implementation, or impact
by peripheral systems also may be pertinent to safety considerations during implementation of the standard.
Implementers and users of IEEE Standards documents are responsible for determining and complying with
all appropriate safety, security, environmental, health, data privacy, and interference protection practices and
all applicable laws and regulations.
Participants
At the time this draft Standard was completed, the Algorithmic Bias Considerations Working Group had the
following membership:
The following members of the individual Standards Association balloting committee voted on this Standard.
Balloters may have voted for approval, disapproval, or abstention.
Copyright © 2025 IEEE. All rights reserved.
When the IEEE SA Standards Board approved this Standard on 11 December 2024, it had the following
membership:
Introduction
This introduction is not part of IEEE Std 7003™-2024, IEEE Standard for Algorithmic Bias Considerations.
This standard has evolved along with the maturity of the concept of ethical bias. The term 'ethical bias' is used to refer to bias that is wanted, at the time it is being assessed, because it contributes to the correct functioning of an Autonomous Intelligent System (AIS).
Bias is inherent and in many cases wanted. For example, a good result from a search engine should be
biased to match the interests of the user as expressed by the search-term, and possibly refined based on
personalization data.
When ‘no bias’ is wanted it generally means that unwanted bias should be minimized, as defined by the
context within which the AIS is being used.
In the absence of malicious intent, unwanted bias in an AIS is generally caused by:
– Insufficient understanding of the context of which the system is a part. This includes an incomplete understanding about who could be affected by the algorithmic decision outcomes, resulting in a failure to test how the system performs for specific groups. This can be mitigated when those responsible for the AIS have a diversity of perspectives. Diversity includes individuals with different cultural backgrounds, skin colour, appearance, gender/sex, age, experience, disabilities and education. The relevant clauses in the standard are those concerning requirements (Clause 4), stakeholder identification (Clause 6) and risk and impact analysis (Clause 8).
– Failure to rigorously map decision criteria. When algorithmic decisions are considered more objectively trustworthy than human decisions, more often than not this assumes that algorithmic systems follow a clearly defined set of criteria with no hidden agenda. However, the complexity of system development may embed hidden decision criteria derived from the data used by the system. The failure to correctly map decision criteria can be mitigated by ensuring a balance of influence from across stakeholder groups throughout the life cycle of the AIS. The relevant clauses in the standard are those concerning data representation (Clause 7) and again stakeholder identification (Clause 6).
– Inadequate monitoring in the operations stage. AIS are complex systems, which means they have emergent properties, some of which may only become apparent long after development. Amongst many issues, the most notable is drift, arising either from changing alignment between the original data from which the model was built and 'live' data, or from the model's outputs no longer aligning with changing expectations. Thus, bias that was ethical in the past may no longer be so, and bias that was previously unethical may become ethical. Consequently, an AIS needs to be monitored throughout its operation, and the associated mitigations can propagate back through to the initial steps in its creation. The relevant clauses in the standard are all of those identified so far, with the addition of evaluation (Clause 9).
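The drift issue described above can be made concrete with a small monitoring sketch. The example below is illustrative only and is not part of this standard: it computes the Population Stability Index (PSI), one commonly used drift measure, between a training-time sample of a model input and a sample of 'live' data. The feature, sample sizes and thresholds mentioned in the docstring are conventional rules of thumb chosen for the example, not requirements.

```python
import numpy as np

def population_stability_index(expected, observed, bins=10):
    """Population Stability Index (PSI) between a reference (training-time)
    sample and a live sample of the same feature.

    As a common rule of thumb, PSI < 0.1 is read as negligible drift,
    0.1-0.25 as moderate drift, and > 0.25 as a signal that the live data
    no longer align with the data from which the model was built.
    """
    # Bin edges come from the reference distribution so that both samples
    # are compared on the same scale.
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    obs_counts, _ = np.histogram(observed, bins=edges)

    # Convert counts to proportions, clipping to avoid log(0) in empty bins.
    eps = 1e-6
    exp_pct = np.clip(exp_counts / exp_counts.sum(), eps, None)
    obs_pct = np.clip(obs_counts / obs_counts.sum(), eps, None)

    return float(np.sum((obs_pct - exp_pct) * np.log(obs_pct / exp_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(0.0, 1.0, 10_000)  # training-time feature values
    stable = rng.normal(0.0, 1.0, 10_000)     # live data, same distribution
    shifted = rng.normal(0.5, 1.2, 10_000)    # live data after drift
    print(f"PSI (stable):  {population_stability_index(reference, stable):.3f}")
    print(f"PSI (shifted): {population_stability_index(reference, shifted):.3f}")
```

In an operational setting a check of this kind would be run periodically, with an elevated PSI triggering the re-assessment activities that feed back into the earlier clauses.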
This standard sets out an approach for the mindful consideration of bias throughout the AIS life cycle, from inception through to decommissioning. This document focuses on what to do and why, but not how to do it, because the precise actions are inevitably context specific and should be determined and justified by those responsible. Rather than providing a checklist for algorithmic bias consideration, the aim is to support those using the standard in working out the appropriate considerations for their context of use. The bias profile, central to the process, contains the various initial, intermediate and final versions of the documents associated with each activity; these documents capture information for considering what constitutes an acceptable level of algorithmic bias risk for the circumstances that hold at the time. Each activity feeds forwards and feeds backwards to facilitate the bias consideration process. Some biases may be caught before they do harm through foresight and experience; others may only manifest later, from which hindsight can inform the next iteration. The iterative process ensures that risk and the stakeholder impact are assessed at all
milestones of the AIS life cycle. Conducting iterative risk assessments at appropriate life cycle stages allows for thorough and targeted mitigation strategies. This standard provides organizations that develop, implement and use AIS with an approach to mindfully consider, and then optimize for, ethical bias.
Acknowledgments
– Material excerpted from ISO standards, as cited in this standard, is used with permission of the American
National Standards Institute (ANSI) on behalf of the International Organization for Standardization. All
rights reserved.
– The definition of stakeholder (Clause 3) is reproduced with modification from IEEE Std 7010-2020 with
permission from IEEE SA.
– Figure 1, "Bias-considerate Development," is reproduced with permission from Julian Padget, © 2024.
Contents
1 Overview
   1.1 Scope
   1.2 Purpose
   1.3 Word Usage
   1.4 Field of application
   1.5 Limitations
   1.6 Organization of the standard
   1.7 Audience
   1.8 Conformance
   1.9 Disclaimer
2 Normative references
6 Stakeholder Identification
   6.1 Purpose
   6.2 Inputs
   6.3 Outputs
   6.4 Actions
   6.5 Outcomes
7 Data Representation
   7.1 Purpose
   7.2 Inputs
   7.3 Outputs
   7.4 Actions
      7.4.1 Action purposes
      7.4.2 Metadata
   7.5 Outcomes
   8.4 Actions
   8.5 Outcomes
9 Evaluation
   9.1 Purpose
   9.2 Evaluation of the AIS Design and Outputs
      9.2.1 Purpose
      9.2.2 Inputs
      9.2.3 Outputs
      9.2.4 Actions
      9.2.5 Outcomes
   9.3 Ongoing evaluation
      9.3.1 Purpose
      9.3.2 Inputs
      9.3.3 Outputs
      9.3.4 Actions
      9.3.5 Outcomes
IEEE Standard for Algorithmic Bias Considerations
1 Overview
1.1 Scope
Computer algorithms and analytics are playing an increasingly influential role in government, business and
society. They underpin information services and autonomous intelligent systems (AIS) including but not
limited to artificial intelligence applications that involve symbolic and subsymbolic technologies and their
hybridization. These technologies are having a direct and significant impact on human lives across a broad
socioeconomic, political and cultural spectrum. Algorithms enable the exploitation of vast and varied data
sources from public and private spheres to support human decision-making and actions that serve the diverse
interests of the societies and economies in which they operate. However, alongside the benefits, their use is
not without maleficent risk.
This standard describes processes and methodologies to help users address issues of bias in the creation of algorithms and models. Elements include but are not limited to: criteria for the selection of data sets; guidelines on establishing and communicating the application boundaries for which the AIS has been designed and validated, to guard against unwanted consequences arising from out-of-bounds application of an AIS; and suggestions for user expectation management to help mitigate unwanted bias due to incorrect interpretation of system outputs by users (e.g., correlation vs. causation).
1.2 Purpose
This standard is designed to provide individuals or organizations creating an AIS with certification-oriented processes and methodologies that produce clearly articulated accountability and clarity around how an AIS targets, assesses and influences the stakeholders of said AIS. This standard enables AIS creators to define, measure and communicate to users and regulatory authorities how bias is used in the AIS, and to show that best practices were used in the design, development, testing and evaluation of the AIS to avoid unwanted differential impact on stakeholders.
1.3 Word Usage
The word shall indicates mandatory requirements strictly to be followed in order to conform to the standard
and from which no deviation is permitted (shall equals is required to). 6,7
The word should indicates that among several possibilities one is recommended as particularly suitable,
without mentioning or excluding others; or that a certain course of action is preferred but not necessarily
required (should equals is recommended that).
The word may is used to indicate a course of action permissible within the limits of the standard (may equals
is permitted to).
The word can is used for statements of possibility and capability, whether material, physical, or causal (can
equals is able to).

1.4 Field of application
This standard is agnostic to the types of computational approaches used in the algorithmic systems it applies
to, be they rule-based, statistical, machine learning or otherwise.
This standard applies to all algorithmic systems that are involved in selection, allocation, ranking, decision-making or any other processes in which some parties could receive different outcomes to other parties, making it possible for the system to exhibit unwanted bias. The standard is applicable when new algorithmic systems are designed, when existing systems are used with data different to their training data, when systems are deployed in new contexts or are updated, and when systems are decommissioned.
This standard does not deny the role of operationally-justified bias in algorithmic processing as a fundamental
element in information classification and decision making. This standard seeks to help with distinguishing
and communicating the difference between wanted and unwanted bias, and thereby clarify the limits for
appropriate use of that algorithmic system.
This standard acknowledges that the defining attributes of unwanted bias may depend on the social and societal context within which an algorithmic system is used. This standard therefore does not seek to enumerate specific algorithmic decision criteria or differential impacts to define unwanted bias, but rather to provide methods which can be tuned in accordance with individual contexts of use.
It is recognized that teams with diverse experience, age, education, gender and cultural backgrounds would contribute to a comprehensive consideration of bias and help minimize unwanted bias within the AIS throughout its life cycle.
1.5 Limitations
The scope of this standard includes methods and processes to help designers and developers in the creation of
AIS in which bias, when required, is wanted. It does not include systems or processes to detect and evaluate
problematic algorithms in actual use. Tracking and certification are out of scope for this standard.
6 The use of the word must is deprecated and cannot be used when stating mandatory requirements; must is used only to describe
unavoidable situations.
7 The use of will is deprecated and cannot be used when stating mandatory requirements; will is only used in statements of fact.
The algorithmic bias identification and mitigation processes described in this standard are constructed to be
consistent with the IEEE 70xx series of standards developed as part of the IEEE Global Initiative on Ethics
of Autonomous and Intelligent Systems.

1.6 Organization of the standard
This standard is organized into normative clauses addressing specific stages in the AIS life cycle [B34] where actions shall be taken to consider the bias intent and to mitigate unwanted bias. The normative clauses are accompanied by several annexes of a) informative clauses intended to support the contextualization and cultural context within which algorithmic bias is situated and b) examples of processes to support the normative clauses.
The clauses are ordered around the central concept of the bias profile (Clause 5). The bias profile is the
repository that holds the outputs and outcomes that support bias consideration. The ordering of the clauses
in the standard follows the sequence in which the clauses are most likely to be used for the first time during a
development process, but each shall be iteratively re-visited throughout the AIS life cycle, appropriate to the
development, operation and changing circumstances of the AIS. The clauses are:
a) Requirements for Bias Consideration (Clause 4): Establishing the essential documents and processes and
methods to consider bias for a specific AIS within its context of use.
b) Bias Profile (Clause 5): The record of the process of bias consideration throughout the AIS life cycle
within a specific context of use.
c) Stakeholder Identification (Clause 6): The attributes of all stakeholders that influence or are impacted by the AIS. Note that this is an identification process, not an evaluation process.
d) Data Representation (Clause 7): Review of the data to assess whether the data are representative of the
stakeholders identified. This includes data provenance and the representative mapping of data against the
stakeholders.
e) Risk and Impact Assessment (Clause 8): An assessment of all potential risks and impacts from intended and unintended uses or applications. Note that the team assessing the risks should be representative of the stakeholders identified.
f) Evaluation (Clause 9): Evaluating the AIS as a whole for bias through two processes: 1) assess for bias
in the design and outputs of the AIS; 2) set up a process to perform the ongoing evaluation of the AIS.
1.7 Audience
The intended audience for this standard includes system, software, and hardware suppliers, procurers, developers, maintainers, system auditors, operators, users, and managers in both the supplier and procuring organizations, or within their own organization in the case of in-house development. The standard is also aimed at data scientists, artificial intelligence researchers, and their research organizations.
1.8 Conformance
This standard can be used as a conformance document for projects and organizations claiming conformance to the IEEE 7003™ standard on Algorithmic Bias Considerations.
8 The numbers in brackets correspond to those of the bibliography in Annex F.
Use of the nomenclature of this standard for the parts of information for users (for example, chapters, topics, pages, screens, windows) is not required to claim conformance.
This standard may be included or referenced in contracts or similar agreements when the parties (called the
procurer and the supplier) agree that the supplier shall deliver services and systems in accordance with the
standard.
This standard may also be adopted as an in-house standard by a project or organization that decides to acquire
information for users from another part of the organization in accordance with the standard.
1.9 Disclaimer
This standard establishes minimum criteria for helping ensure that AIS do not exhibit unwanted bias. However, implementing these criteria does not automatically ensure conformance to system or mission objectives, or prevent adverse consequences (e.g., loss of life, mission failure, loss of system safety or security, or financial or social loss). Conformance to this standard does not absolve any party from any social, ethical, moral, financial, or legal obligations.
2 Normative references
For the purposes of this document, the following terms and definitions apply. The IEEE Standards Dictionary
Online should be consulted for terms not defined in this clause.9
NOTE 1—For additional terms and definitions in the field of systems and software engineering, see ISO/IEC/IEEE
24765:2017 [B39], which is published periodically as a “snapshot” of the SEVOCAB (Systems and Software Engineering
Vocabulary) database and is publicly accessible at computer.org/sevocab.10
NOTE 2—The following ISO/IEC and ISO/IEC/IEEE standards: ISO/IEC/IEEE 2382:2015 Information technology –
Vocabulary, ISO/IEC 22989:2022 Information technology — Artificial intelligence — Artificial intelligence concepts and
terminology, ISO/IEC/IEEE 24765:2017 Systems and software engineering — Vocabulary and ISO/IEC/IEEE 29119-
1:2022 Software and systems engineering – Software testing – Part 1: General concepts are publicly available from
https://standards.iso.org/ittf/PubliclyAvailableStandards/index.html.
NOTE 3—The following IEEE standards: ISO/IEC/IEEE 24748-7000-2022 Standard for Systems and software engineering–Life cycle management–Part 7000: Standard model process for addressing ethical concerns during system design, IEEE Std 7005-2021 IEEE Standard for Transparent Employer Data Governance, and IEEE Std 7010-2020 IEEE Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being are publicly available through the IEEE GET program from https://ieeexplore.ieee.org/browse/standards/get-program/page/series?id=93.
9 IEEE Standards Dictionary Online is available at: http://dictionary.ieee.org. An IEEE Account is required for access
to the dictionary, and one can be created at no charge on the dictionary sign-in page.
10 Notes in text, tables, and figures of a standard are given for information only and do not contain requirements needed to implement
this standard.
The principle followed in the choice and definition of terms is to maximize alignment with existing standards terminology related to artificial intelligence in general and to bias in the context of AIS in particular. In consequence, many definitions in this clause cite several recently produced ISO and IEEE standards (see Annex F). In some cases, a term has more than one definition. This is done to emphasize the alignment of definitions from different sources, or to provide technical and non-technical definitions of a term.
NOTE 1—ISO-sourced definitions have an associated footnote indicating their copyright status. Where several definitions
have been sourced from the same ISO standard, all the definitions reference the same footnote.
NOTE 2—IEEE-sourced terms are not reproduced here, following IEEE SA policy; the reader may refer to the IEEE
Standards Dictionary Online for their definitions or the cited standard.9
3.1 Definitions
algorithmic system: Any system that uses automated processing of data to produce an output decision
NOTE—The decision process usually depends on the combination of algorithm(s) that manipulate the information and the data used as inputs. In some cases this may involve machine learning or other AI techniques, but this need not be the case.
algorithm: Finite set of well-defined rules for the solution of a problem in a finite number of steps
(ISO/IEC/IEEE 24765:2017, 3.124.1 [B39])11
autonomous intelligent system (AIS): See IEEE Standards Dictionary Online9 (IEEE Std 7010-2020, 2.1
[B18]); See also autonomous system.
autonomous system: System capable of working without human intervention for sustained periods
(ISO/IEC TR 29119-11:2020, 3.1.14 [B31]);12 See also autonomous intelligent system.
attribute: (1) See IEEE Standards Dictionary Online9 (IEEE Std 1320.2-1998, 3.1.9 [B16]) (2) Inherent
property or characteristic of an entity that can be distinguished quantitatively or qualitatively by human or
automated means (ISO/IEC 25000:2014, 4.1 [B27])13 (3) Property or characteristic of an object that can be
distinguished quantitatively or qualitatively by human or automated means (ISO/IEC/IEEE 15939:2017, 3.2
[B36])14
bias: Systematic difference in treatment of certain objects, people or groups in comparison to others
NOTE—Treatment is any kind of action, including perception, observation, representation, prediction or decision.
(ISO/IEC 22989:2022, 3.5.4 [B21]).15
bias profile: A repository of information created and maintained through the activities of algorithmic bias
consideration defined in this document
concept drift: Movement of the decision boundary, which degrades the accuracy of predictions, even though
the data have not changed
11 ©ISO. This material is reproduced from ISO/IEC/IEEE 24765:2017 with permission of the American National Standards Institute
(ANSI) on behalf of the International Organization for Standardization. All rights reserved.
12 ©ISO. This material is reproduced from ISO/IEC TR 29119-11:2020 with permission of the American National Standards Institute
(ANSI) on behalf of the International Organization for Standardization. All rights reserved.
13 ©ISO. This material is reproduced from ISO/IEC 25000:2014 with permission of the American National Standards Institute (ANSI) on behalf of the International Organization for Standardization. All rights reserved.
15 ©ISO. This material is reproduced from ISO/IEC 22989:2022 with permission of the American National Standards Institute (ANSI) on behalf of the International Organization for Standardization. All rights reserved.
NOTE—Concept drift can result in poor AIS performance against some stakeholders according to the metrics identified
in the design and outputs evaluation process, which in turn, may indicate the presence of bias.
context of use: See IEEE Standards Dictionary Online9 (ISO/IEC/IEEE 24748-7000-2022, 3.1 [B38]).
data drift: Decay over time of model prediction accuracy, due to changes in the statistical characteristics of
the production data (e.g., image resolution has changed, or one class has become more frequent in data than
another)
NOTE—Data drift can cause omitted-variable bias. The correlation between features can change, so that some features
can become less relevant for the purpose of the AIS and others can become more relevant.
(Adapted from ISO/IEC 22989:2022, 5.11.9.1 [B21])16 ; See also concept drift.
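As an informative illustration (not part of any cited definition), data drift of the kind described above can be screened for with a simple distribution comparison such as the population stability index (PSI); the categories, sample data, and thresholds below are hypothetical, and production AIS monitoring typically uses purpose-built tooling.

```python
import math
from collections import Counter

def psi(expected, actual, categories):
    """Population stability index between two categorical samples.

    A common heuristic reads PSI below 0.1 as stable, 0.1 to 0.25 as
    moderate drift, and above 0.25 as significant drift.
    """
    e_counts, a_counts = Counter(expected), Counter(actual)
    n_e, n_a = len(expected), len(actual)
    score = 0.0
    for c in categories:
        # A small floor avoids log(0) for categories absent from one sample.
        e = max(e_counts[c] / n_e, 1e-6)
        a = max(a_counts[c] / n_a, 1e-6)
        score += (a - e) * math.log(a / e)
    return score

# One class has become more frequent in production than in training data,
# mirroring the example in the definition above.
training = ["A"] * 500 + ["B"] * 500
production = ["A"] * 800 + ["B"] * 200
drift = psi(training, production, ["A", "B"])  # well above the 0.25 heuristic
```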
dataset: Collection of data with a shared format (ISO/IEC 22989:2022, 3.2.5 [B21])15
NOTE 2—Features play a role in training and prediction. Features provide a machine-readable way to describe the
relevant objects. As the algorithm cannot go back to the objects or events themselves, feature representations are designed
to contain all useful information.
fairness: A treatment, a behavior or an outcome that respects established facts, beliefs and norms and is not
determined by favoritism or unjust discrimination (ISO/IEC TR 24027:2021, 5.1 [B24])18
group: Subset of objects in a domain that are linked because they have shared attributes
life cycle: Evolution of a system, product, service, project or other human-made entity, from conception
through retirement (ISO/IEC/IEEE 15288:2023, 3.21 [B35])20
NOTE—The above is a general-purpose definition of life cycle. ISO/IEC 5338:2023 [B34] provides an AIS-specific view
on life cycle.
machine learning algorithm: Algorithm to determine parameters of a machine learning model from data
16 ©ISO. This material is adapted from ISO/IEC 22989:2022 with permission of the American National Standards Institute (ANSI) on behalf of the International Organization for Standardization. All rights reserved.
19 ©ISO. This material is adapted from ISO/IEC TR 24027:2021 with permission of the American National Standards Institute
(ANSI) on behalf of the International Organization for Standardization. All rights reserved.
20 ©ISO. This material is reproduced from ISO/IEC/IEEE 15288:2023 with permission of the American National Standards Institute
(ANSI) on behalf of the International Organization for Standardization. All rights reserved.
machine learning model: Mathematical construct that generates an inference or prediction based on input
data or information (ISO/IEC 22989:2022, 3.3.7 [B21])16
metadata: Data that describe other data (ISO/IEC 25024:2015, 4.29 [B28])21
metric: (1) Quantitative measure of the degree to which a system, component, or process possesses a given
attribute (ISO/IEC/IEEE 24765:2017, 3.2440 (1) [B39])11 (2) See IEEE Standards Dictionary Online9 (IEEE
Std 7005-2021, 3.1 [B17])
model: (1) Representation of a real-world process, device, or concept (ISO/IEC/IEEE 24765:2017, 3.2485
(1) [B39])11 (2) Semantically closed abstraction of a system or a complete description of a system from
a particular perspective (ISO/IEC/IEEE 24765:2017, 3.2485 (5) [B39])11 (3) Representation of a system
of interest, from the perspective of a related set of concerns (ISO/IEC 19506:2012, 4 [B20])22 (4) Al-
gorithm or calculation combining one or more base or derived measures with associated decision criteria
(ISO/IEC/IEEE 15939:2017, 3.27 [B36])14 (5) Abstract representation of an entity or collection of entities
that provides the ability to portray, understand or predict the properties or attributes of the entity or collection
under conditions or situations of interest NOTE—attributes replaces characteristics (Adapted from ISO/IEC/IEEE
42020:2019, 3.13 [B41])23 (6) Physical, mathematical, or otherwise logical representation of a system, entity,
phenomenon, or process (ISO/IEC TR 24030:2021, 3.1.3 [B25])24
model decay: Deterioration of performance according to the defined performance metrics. Model decay can
be a symptom of one or more kinds of drift; See also concept drift, data drift.
output: (1) Data that an information processing system, or any of its parts, transfers outside of that system
or part (ISO/IEC/IEEE 2382:2015, 2 [B37])26 (2) Artefact resulting from an activity or process
protected attribute: Attribute designated as legally protected; See also sensitive attribute.
retirement: (1) Withdrawal of active support by the operation and maintenance organization, partial or
total replacement by a new system, or installation of an upgraded system (ISO/IEC 24748-1:2024, 3.1.46
[B26])27 (2) Permanent removal of a system or component from its operational environment (ISO/IEC/IEEE
24765:2017, 3.43 [B39])11
21 ©ISO. This material is reproduced from ISO/IEC 25024:2015 with permission of the American National Standards Institute (ANSI) on behalf of the International Organization for Standardization. All rights reserved.
24 ©ISO. This material is reproduced from ISO/IEC TR 24030:2021 with permission of the American National Standards Institute
(ANSI) on behalf of the International Organization for Standardization. All rights reserved.
25 ©ISO. This material is reproduced from ISO/IEC TS 12791:2022 with permission of the American National Standards Institute
(ANSI) on behalf of the International Organization for Standardization. All rights reserved.
26 ©ISO. This material is adapted from ISO/IEC/IEEE 2382:2015 with permission of the American National Standards Institute
(ANSI) on behalf of the International Organization for Standardization. All rights reserved.
27 ©ISO. This material is reproduced from ISO/IEC 24748-1:2024 with permission of the American National Standards Institute
(ANSI) on behalf of the International Organization for Standardization. All rights reserved.
sensitive attribute: Attribute that is either protected, or is not protected but about which there is societal or
cultural sensitivity; See also protected attribute.
stakeholder: Anyone or anything that is meaningfully or potentially meaningfully impacted by the AIS or
meaningfully or potentially meaningfully impacts the AIS
EXAMPLE—Acquirers, data owners, decision-makers, developers, disposers, end users, end user organizations, environment, maintainers, producers, project managers, regulatory bodies, supplier organizations, supporters, testers, trainers.
NOTE 2—Some stakeholders can have interests that oppose each other or oppose the system.
team: The person or group of people accountable and/or responsible for the AIS at a specific stage of the AIS life cycle
testing: Process of operating a system or component under specified conditions, observing or recording the
results, and making an evaluation of some aspect of the system or component (ISO/IEC 25051:2014, 4.1.23
[B29])28
trained model: Result of model training (ISO/IEC 22989:2022, 3.1.23 [B21])15 ; See also training, model
training.
training: Process to determine or to improve the parameters of a machine learning model, based on a machine
learning algorithm, by using training data (ISO/IEC 22989:2022, 3.3.15 [B21])15
training data: Data used to train a machine learning model (ISO/IEC 22989:2022, 3.3.16 [B21])15
transparency: <organization> Property of an organization that appropriate activities and decisions are communicated to relevant stakeholders in a comprehensive, accessible and understandable manner (ISO/IEC 22989:2022, 3.5.14 [B21])15
transparency: <system> Property of a system that appropriate information about the system is communicated to relevant stakeholders (ISO/IEC 22989:2022, 3.5.15 [B21])15
4 Requirements for Bias Consideration
4.1 Purpose
The purpose of requirements setting for bias consideration is to establish forethought on the role of bias in
the AIS. This is the initial stage of a set of iterative activities that complement the primary system life cycle.
This stage sets up the bias profile, which is the core artefact supporting the bias consideration process and
where all the information is recorded about bias consideration from each of the other stages.
Requirements setting establishes preliminary thinking on what bias is wanted in the AIS to enable it to
achieve its functional objectives and what bias is unwanted that can impede the functional objectives or lead
to maleficence against stakeholders.
4.2 Inputs
4.3 Outputs
a) The bias profile of the AIS including the bias requirements and boundaries of acceptability
b) Values statement for the AIS incorporating bias considerations
4.4 Actions
– Recommended: a feasibility study to understand the conceptualization of the AIS and forethought about, and investigation of, barriers to and enablers of the AIS; the IEEE-CertifAIEd set of ontologies [B11, B12, B13, B14] and ISO/IEC/IEEE 24748-7000-2022 [B38] to support the assessment of ethical considerations; and any other documentation that has been prepared in relation to the development and operation of the AIS.
b) Identify the sensitive attributes, if any, to be represented and advocated for. Ascertain whether these attributes are represented in the individuals assigned to the bias consideration process. Where sensitive attributes are not appropriately represented, source external advocates to assist in key stages of the process.
c) Choose an accountability structure and identify how the bias consideration process shall interface with
the organizational governance framework.
d) Identify any other requirements for the AIS life cycle up to and including decommissioning.
e) Set up the bias profile (see Clause 5).
f) Create a values statement for the AIS incorporating the organizational values and initial bias considerations.
NOTE—For example, ISO/IEC/IEEE 24748-7000-2022 [B38] provides guidelines on the development of ethical
value requirements.
g) Propose boundaries of acceptability in the context of use, such as with respect to diversity, inclusion, disability, culture, and accessibility.
h) Ensure clear processes and methods are established for considering bias within the AIS context of use.
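The outputs in 4.3 can be captured in whatever record-keeping form the organization already uses. As an illustrative sketch only, with field names that are hypothetical rather than prescribed by this standard, a minimal bias profile entry might look like:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class BiasProfileEntry:
    """One versioned record in the bias profile repository (names illustrative)."""
    stage: str                     # e.g., "requirements", "evaluation"
    recorded_on: date
    wanted_bias: list[str] = field(default_factory=list)
    unwanted_bias: list[str] = field(default_factory=list)
    boundaries_of_acceptability: dict[str, str] = field(default_factory=dict)

# The profile accumulates entries across iterations of the life cycle.
bias_profile: list[BiasProfileEntry] = [
    BiasProfileEntry(
        stage="requirements",
        recorded_on=date(2025, 1, 15),
        wanted_bias=["prioritize urgent cases in triage ordering"],
        unwanted_bias=["differential treatment by postcode"],
        boundaries_of_acceptability={"accessibility": "interfaces usable by screen readers"},
    )
]
```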
4.5 Outcomes
Through the activities and tasks in this clause, the developers of the AIS create and understand the bias profile
of the AIS. In doing so, they acquire a deeper understanding of the boundaries of acceptability for the AIS.
5 Bias Profile
5.1 Purpose
The bias profile is defined (see Clause 3) as “a repository of information created and maintained through the
activities of algorithmic bias consideration defined in this document.” Its purpose is to provide a through-life
record of how bias has been considered in relation to an AIS.
Algorithmic bias can be a bug, but it is also a necessary feature of an AIS. Some bias is wanted for
the AIS to be able to satisfy its functional requirements, and it may also be added to the AIS to fix unwanted
bias. A likely indicator of unwanted bias is that the AIS produces an output that it should not, hence the term
“bug”.
The problem with debugging for algorithmic bias is that AIS are typically data-driven, whether by human-curated knowledge engineering or by algorithmic model construction. Thus, there is no explicit code to inspect, because the AIS’s function is the product of data and the code that interprets that data. One way to localize a source of unwanted bias is to observe pairs of inputs and outputs for anomalies.
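As a minimal sketch of this input–output inspection (the grouping attribute, the log, and the decision values are all hypothetical), one can compare positive-output rates across stakeholder groups; a large gap is an indicator worth investigating, not by itself proof of unwanted bias:

```python
from collections import defaultdict

def positive_rate_by_group(pairs, group_of):
    """Rate of positive outputs per group, from logged (input, output) pairs."""
    pos, tot = defaultdict(int), defaultdict(int)
    for x, y in pairs:
        g = group_of(x)
        tot[g] += 1
        pos[g] += int(y == 1)
    return {g: pos[g] / tot[g] for g in tot}

# Hypothetical log: inputs carry a "group" field, outputs are 0/1 decisions.
log = ([({"group": "p"}, 1)] * 60 + [({"group": "p"}, 0)] * 40
       + [({"group": "q"}, 1)] * 30 + [({"group": "q"}, 0)] * 70)
rates = positive_rate_by_group(log, lambda x: x["group"])
gap = abs(rates["p"] - rates["q"])  # flag for review if above an agreed bound
```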
The code, however, is not the only potential source of bias in an algorithmic system. Both the data that are used to drive the building of the code and the data that are fed into the operational AIS are also possible sources, and these data can be inspected and analyzed to provide a third indicator of unwanted bias. Thus, the data, the model, or the code that builds the model can all independently be sources of unwanted bias.
The goal of algorithmic model construction is the controlled incorporation of bias, in that the construction
is embedded in a development and test cycle that should detect and address the presence of unwanted bias.
Similarly, the improvement of model performance is embedded in an optimize and test cycle that should again
detect and address unwanted bias. In this way, the biases in data and models in isolation can be partially
understood. Unfortunately, biases are not independent: the interaction of a data set and a model, in each of which unwanted bias has been addressed, does not preclude unwanted bias in the outputs. Thus, any interaction between data and model, or between model and model, needs to consider how to detect and address unwanted bias.
The purpose of the activities described in this standard is to allow anyone developing an AIS to build a collection of documents that record the consideration of algorithmic bias for an AIS, by setting out how to define their own processes, suited to their context of use and that of the AIS, to help in the identification, detection, and mitigation of unwanted algorithmic bias. This standard describes the activities in the order in which they are likely first encountered, from the requirements for undertaking bias consideration through to system operation, but in practice there is potential and need for iteration and feedback, which requires careful management to ensure the complementarity of AIS development and AIS bias consideration.
In practice, the stages in Figure 1 need to be laid alongside and aligned with the system life cycle [B34] and
the process model in use. The bias profile at the center of Figure 1 holds the initial, (multiple) intermediate
and most recent versions of the documents associated with each activity that capture information for input to
later stages and feedback for subsequent iterations through each, some or all of the stages. Iteration ensures
that risk and stakeholder impact are assessed at all milestones of the AIS life cycle. Conducting iterative risk assessments at appropriate life cycle stages allows for thorough and accurate mitigation strategies.
Each activity in the bias profile construction feeds forwards and feeds backwards to facilitate the bias consideration process. The three indicators of the presence of bias come from three different places: sets of outputs,
sets of inputs and sets of outputs and inputs, but where bias is detected may be some way from where it
was introduced. Some biases may be caught before they do harm through foresight and experience, others
may only manifest later, from which hindsight can inform the next iteration. Mitigation may be proactive or
reactive, in consequence. Specification of mitigations is out of the scope of this standard, but the standard does provide the user with the means to establish the processes to discover where and how to help mitigate.
Algorithmic bias consideration comprises five stages: requirements, stakeholder identification, data representation, risk and impact assessment, and system evaluation. Outlines of each follow to provide an overview of what each brings to and takes from the bias profile as part of an iterative, ongoing consideration of algorithmic bias, while detailed descriptions appear in the corresponding normative clauses of this standard.
5.2.1 Requirements for bias consideration
The purpose of this activity is to establish forethought on the use of bias in the AIS and how that feeds into
and receives feedback from the other stages of bias consideration. This stage sets up the bias profile to receive
the inputs from the activities that follow. See Clause 4.
[Figure 1—Bias profile construction, identification, detection, and mitigation: the repository of bias profile documents sits at the center, surrounded by the requirements for bias consideration, stakeholder identification, data representation, risk and impact assessment, and system evaluation stages, iterated from system conception to system decommissioning.]
5.2.2 Stakeholder identification
The two goals for stakeholder identification are to identify which groups can influence the AIS and which
groups can be impacted by the AIS. In order to support the achievement of those goals, it is necessary to
consider the resourcing of those responsible for stakeholder identification, their representation with respect
to those identified in the business requirements, and the positioning of the AIS with regard to systemic and
cultural biases. See Clause 6.
5.2.3 Data representation
The goal of data representation is to map the system data to the identified stakeholders. At this stage the
main requirement is to check that data are available, can be collected, or can be generated for use by the system. The sources of data shall be documented, including how the data were governed and collected, whether they are useful for the AIS, and what biases they could introduce to the AIS. See Clause 7.
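The mapping described above can be made concrete with a simple coverage check. In this sketch the dataset, the grouping field, and the stakeholder groups are hypothetical; a count of zero makes an unrepresented group immediately visible:

```python
from collections import Counter

def representation_report(records, stakeholder_groups, group_of):
    """Per-group record counts, including 0 for identified groups absent
    from the data, so under-representation is visible at a glance."""
    counts = Counter(group_of(r) for r in records)
    return {g: counts.get(g, 0) for g in stakeholder_groups}

# Hypothetical dataset and stakeholder reference set.
data = [{"region": "north"}] * 120 + [{"region": "south"}] * 5
groups = ["north", "south", "island communities"]
report = representation_report(data, groups, lambda r: r["region"])
# "island communities" maps to 0 records: the data cannot represent them.
```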
5.2.4 Risk and impact assessment
The goal of risk and impact assessment (RIA) is the identification and analysis of bias-related risks arising
from the AIS. The RIA is an ongoing process throughout the life cycle of the system, as the system and its context of use evolve, and therefore needs to be resourced as such, including for changes of context of use. The results of requirements setting for stakeholder identification and data representation feed into that for risk and impact assessment. See Clause 8.
5.2.5 Evaluation
The goal of evaluation is to evaluate the AIS as a whole for bias through two processes: a) assess for bias in the design and outputs of the AIS; and b) set up a process to perform an ongoing evaluation of the AIS. Once again, the evaluation process needs to be resourced throughout the life cycle of the AIS. Preliminary thinking at this stage in the life cycle shall consider the components of the AIS and their use of data, and how the AIS can represent a solution within the context of use. Use of experienced staff in AIS bias analysis can aid in the bias evaluation process. See Clause 9.
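The ongoing evaluation can be sketched as a recurring comparison of a chosen bias metric against an agreed baseline; the metric values, baseline, and tolerance below are hypothetical:

```python
def rounds_exceeding_tolerance(metric_history, baseline, tolerance):
    """Indices of evaluation rounds whose bias metric drifts beyond the
    agreed tolerance from the baseline, for escalation and review."""
    return [i for i, m in enumerate(metric_history)
            if abs(m - baseline) > tolerance]

# Hypothetical quarterly measurements of a bias metric chosen during evaluation.
history = [0.02, 0.03, 0.02, 0.09]
flagged = rounds_exceeding_tolerance(history, baseline=0.02, tolerance=0.05)
```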
This should lead to identification of features of the data and the system that can be measured, and of what constitutes wanted and unwanted system behavior. The questions to ask include:
6 Stakeholder Identification
6.1 Purpose
The purpose of stakeholder identification is to set out how to help ensure the identification of all the stakeholders who are impacted by bias and who influence bias in the AIS and its context of use. The attributes of all the stakeholders shall be identified and used to construct groups against which bias is evaluated and mitigated. The output of this stage is the initial set of stakeholders (see clause 6.3). At each of the subsequent stages of bias consideration – data representation, risk and impact assessment, and evaluation – the stakeholder attributes shall be reviewed and possibly revised.
6.2 Inputs
a) The business case for the AIS: The business case shall contain a preliminary list of stakeholders, which
are the input to the stakeholder identification process described here; the business case shall also explicitly
or implicitly provide the initial ranking of stakeholder importance or priority to the business.
b) The technical requirements for the AIS
c) The context of use of the AIS
d) Any other information those carrying out stakeholder identification need for the actions set out in clause 6
e) The bias profile
6.3 Outputs
a) Updated bias profile, such that the information described in clause 6.4 shall be added to the relevant
section of the bias profile, namely:
6.4 Actions
The activities of stakeholder identification shall at least comprise those enumerated below. Several of the activities have an associated, cross-referenced, informative clause describing how to perform the action comprehensively and meaningfully. The stakeholder identification process shall include the following actions:
a) Document and provide a rationale for: 1) the specific process defined by those carrying out stakeholder identification; and 2) the resulting decisions when addressing each of the actions described in clause 6.4.
b) Apply diverse perspectives, consulting with influencing and impacted stakeholders already identified to
identify additional stakeholders and their attributes (see Annex B.5).
c) Identify the stakeholders impacted by the AIS (see Annex B.2).
d) Identify the stakeholders who influence the AIS (see Annex B.4).
e) Identify the attributes – both inherent and specific to context of use – corresponding to the identified
stakeholders (see Annex B.4.4).
f) Identify attributes that are protected, document the need for this protection, and document any additional
steps taken to help ensure these attributes are properly protected (see Annex B.6).
g) Analyze whether the identified attributes themselves manifest bias.
h) Assess and revise the bias considerations initially identified in the requirements (see Clause 4).
i) Measure and record the effect of influencing stakeholders (see Annex B.4.5).
j) Use these measurements during risk and impact assessment (see Annex B.4.6).
6.5 Outcomes
Clause 6 describes the actions to create the reference set of all stakeholders and their corresponding attributes
that shall be used throughout the AIS life cycle; this reference set informs the data representation, risk and
impact assessment, and evaluation stages. The stakeholder reference set is a part of the bias profile that is
updated after the completion of the activities set out in Clause 6.4.
Stakeholders shall be distinguished as impacted (see Annex B.2), influencing (see Annex B.4), or both.
Attributes shall include both those which are inherent to a given stakeholder and those which are relevant
due to the context of use. Through this process of identifying stakeholders and their attributes, it shall be
established which (if any) of the attributes are protected. Such attributes can require special treatment in
subsequent stages of bias consideration. Furthermore, when identifying such attributes, consideration shall be
given to the competency, diversity, cultural context, and biases of those carrying out stakeholder identification
(see Annex B.5).
7 Data Representation
7.1 Purpose
This clause details a process to document the sources and types of the data, the context of use (as outlined in
the business case), and how the data represents the identified stakeholders of the AIS. The data to be used
within the AIS shall be examined and explored to understand the potential impact the use of these data may
have on the identified stakeholders. Data representation, for the purpose of this section, means how well the
data captures the attributes of impacted stakeholders. The question of the computational representation of
such data is out of scope.
7.2 Inputs
7.3 Outputs
7.4 Actions
Decisions regarding data shall be made such that the data is sufficiently representative of the identified
stakeholders, and the reasons for those decisions shall be documented. The reasons are also essential to the
process of creating the document(s) of data representation. As the data is collected, tested, used, and
decommissioned, reasons for inclusions, deletions, and/or omissions of data and attributes from the datasets
shall be documented and added to the bias profile. These reasons are beneficial for making decisions to
reduce unwanted bias and for assessing the potential risks of producing unwanted bias. Concerns and
limitations arising from any of the following activities shall be documented.
7.4.2 Metadata
Metadata is information about the dataset. The activities identified below, along with any available data
provenance record, verify and augment any metadata associated with the given dataset. The activities outlined
in this section document the structure, content, provenance, and potential relationships within the dataset.
These activities may identify potential anomalies and assess the quality of the dataset. The output of these
activities and tasks shall be captured in the form of metadata.
The type(s) of data used throughout the AIS for each feature in the dataset shall be documented. The reasons
for the selected type(s) of data shall be based on the data available, the context of use and the anticipated
outcome of the AIS.
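The per-feature type documentation described above can be sketched as follows. This is a minimal, non-normative illustration; the dataset, feature names, and values are hypothetical:

```python
from datetime import date

# Hypothetical dataset: each row is a record about a loan applicant.
dataset = [
    {"age": 34, "income": 52000.0, "region": "north", "applied": date(2024, 5, 1)},
    {"age": 51, "income": 78000.0, "region": "south", "applied": date(2024, 6, 12)},
]

def document_feature_types(rows):
    """Record the type(s) observed for each feature, to be kept as metadata."""
    observed = {}
    for row in rows:
        for feature, value in row.items():
            observed.setdefault(feature, set()).add(type(value).__name__)
    # A feature mapping to more than one type can signal a data-quality issue.
    return {feature: sorted(types) for feature, types in observed.items()}

print(document_feature_types(dataset))
# {'age': ['int'], 'income': ['float'], 'region': ['str'], 'applied': ['date']}
```

The recorded types would be added to the dataset's metadata together with the rationale for each choice of data type.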
Consider, document and verify the conditions under which each dataset is collected or generated, such as
those that follow:
d) What was the purpose for which the dataset was originally collected?
e) What is the means of collection? For example, crowd-sourced, synthesized, scraped, static or real time?
f) Was the dataset supplied voluntarily?
o) Determine fitness for purpose of data for the given business case.
p) Carry out a data quality assessment.
This activity shall explore the data being considered for use in the AIS to check for potential sources of bias
before the system is implemented, including at least the following:
a) Investigate sensitive and non-sensitive attributes to establish if these could be sources of bias.
b) Investigate correlation, causation, or both between non-sensitive and sensitive attributes.
The data bias profile includes a mapping of the data attributes to the impacted stakeholders. The process
of mapping the data shall enable comparative evaluation of identified stakeholders, for example when the
attributes describing stakeholders are not equally represented or when the sample size is small.
a) Based on the stakeholder identification (see Clause 6), map the data against the attributes identified for the stakeholders.
b) If the AIS is retrained, the mapping shall be performed again.
c) Any imbalance between the attributes of stakeholders within the data shall be documented.
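Actions a) and c) above can be sketched as follows. This is a minimal, non-normative illustration; the records, the `gender` attribute, and the 25% documentation threshold are hypothetical:

```python
from collections import Counter

# Hypothetical records and one stakeholder attribute identified in Clause 6.
records = [
    {"gender": "female"}, {"gender": "male"}, {"gender": "male"},
    {"gender": "male"}, {"gender": "non-binary"}, {"gender": "male"},
]

def attribute_representation(rows, attribute):
    """Map the data against one stakeholder attribute and return each
    value's share of the records, so imbalances can be documented."""
    counts = Counter(row.get(attribute, "<missing>") for row in rows)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

shares = attribute_representation(records, "gender")
# Document any value whose share falls below the hypothetical threshold.
underrepresented = sorted(v for v, s in shares.items() if s < 0.25)
print(underrepresented)  # ['female', 'non-binary']
```

Each imbalance found this way would be recorded in the bias profile, and the mapping repeated whenever the AIS is retrained.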
The data shall be continuously monitored, as described in Clause 9, and the bias profile updated accordingly.
7.5 Outcomes
Through the activities and tasks in this section, the users of this standard gain an understanding of how the
stakeholders are represented by the data. This is foundational to the assessment of risk and impact, the
evaluation of bias, and the mitigation of unwanted bias.
8 Risk and Impact Assessment
8.1 Purpose
The purpose of the risk and impact assessment (RIA) section is to set out a series of actions for the
identification and analysis of bias-related risks and impacts arising from the AIS under consideration.
The RIA is an ongoing process throughout the life cycle of the system, including change of context of use. At
the system inception stage, an initial bias RIA shall be carried out. At each subsequent stage, upon evaluation
of bias of the AIS, the RIA is reviewed, risks and impacts are updated, and issues arising from the analysis are
added to the bias profile; mitigations previously implemented are reviewed as part of the process and updated.
This is particularly important to counter unwanted systemic bias and unwanted amplification of bias.
8.2 Inputs
The inputs to each iteration of the RIA shall be the current version of each of the following documents which
are outputs of:
8.3 Outputs
8.4 Actions
a) The assessment process shall commence at the creation of the business case; the risk and impact metrics
shall be identified and the rationale for them provided.
b) At each stage of the AIS life cycle, when there is a change in the bias profile, or when there is a change in
the AIS, its context of use, or its environment, the RIA shall be reviewed and updated for new or emerging
bias risks and impacts and to determine the effectiveness of the mitigation strategies and risk tolerance
acceptance levels.
c) The assessment review process is to help ensure capability for ongoing assessment throughout the AIS
life cycle. The diversity of the RIA team members should reflect the diversity of the stakeholders identified
in the stakeholder identification section. External stakeholders should be consulted for input.
8.5 Outcomes
Based on the levels of risk of bias and impact identified by the users of this standard, risk treatment activities
are to be identified from this standard and used on an ongoing basis to determine if the identified risks and
impacts have been mitigated.
The output of the risk and impact identification, analysis, mitigation, and ongoing audit process is to follow
the guidelines set out in this standard. The assessment is to be formally accepted by the owner of the AIS.
When an algorithmic system’s ownership and/or environment changes, a full risk and impact reassessment
and formal acceptance of the reassessment are required.
9 Evaluation
9.1 Purpose
This clause sets out how to evaluate the AIS as a whole for bias through two processes:
a) Assess for bias in the design and outputs of the AIS
b) Set up a process to perform an ongoing evaluation of the AIS
9.2 Evaluation of the design and outputs
9.2.1 Purpose
The purpose of this clause is to assess the design and outputs of the AIS for bias. This process shall be
embedded in the design and development stages and may be carried out several times as the product matures.
The process proposes evaluation actions and tasks to assess bias in the AIS design and outputs.
9.2.2 Inputs
9.2.3 Outputs
5) An assessment of the relationship between the assessed bias and business requirements
6) Where applicable, comparison of the AIS decisions and an expert panel’s decisions
7) Where applicable, an assessment of the bias in the design of the UI and UX and their impact on data
bias
8) A description of the explored biases and, where feasible, the sources of these biases
9) The results of evaluating the mitigation process and the performance optimization process for bias
9.2.4 Actions
9.2.4.1 Review
9.2.4.2 Data
Evaluate data pre-processing for the introduction of bias in the data set and for bias in the pre-processing
itself.
Perform the evaluation with respect to the information in the data bias profile.
NOTE—Pre-processing the data can introduce new types of bias. For example, if training data was augmented to help
mitigate bias, the data bias profile may be altered.
It is important that this is documented so that similar procedures can be followed when retraining and
updating a model, and to determine how this affects the AIS outcomes. The pre-processing techniques shall
be evaluated for any bias in the technique itself and for the introduction of bias in the processed data set.
9.2.4.3 Stakeholders, risk and impact, and other stages of the AIS life cycle
a) Choose testing scenarios in which bias is viewed as a risk, and apply risk-based testing as set out in
ISO/IEC/IEEE 29119 [B40]; then take into consideration the granularity of testing, for example, group
or subgroup levels. (See Annex D.)
NOTE—One approach to choose the testing scenarios is to review the risk and impact assessment and the impacted
stakeholders list to identify testing scenarios for pairwise combinations of the identified risks and the impacted
stakeholders.
b) Choose a metric to evaluate bias in the AIS using the chosen testing scenarios and reference values.
Document each test and its outputs (see Annex D) as follows:
c) Evaluate the AIS outputs according to the defined method and interpret the results. Perform the evaluation
with respect to business requirements (see Clause 4). Describe and document the interpretation of the
method results and their relationship to the business requirements.
d) Recommend any mitigations based on the evaluation process.
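As one illustration of a metric that might be chosen in item b), the following sketches statistical parity difference. The metric, decisions, and group labels are hypothetical examples; this standard does not mandate any particular metric:

```python
def statistical_parity_difference(outcomes, groups, positive=1):
    """Difference in positive-outcome rates between two groups.
    A value near 0 suggests parity on this metric alone."""
    rates = {}
    for g in set(groups):
        selected = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(1 for o in selected if o == positive) / len(selected)
    first, second = sorted(rates)  # deterministic group order
    return rates[first] - rates[second]

# Hypothetical AIS decisions (1 = favourable) for two stakeholder groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
group = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(statistical_parity_difference(decisions, group))  # 0.5
```

Any value reported this way would be documented together with the testing scenario and reference values used, as item b) requires.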
Where appropriate, evaluate exclusively human decisions against the AIS decisions. This may involve an
expert panel and comparing the decisions of those experts against the AIS decisions, for example, for the
purpose of identifying ambiguous or anomalous behavior.
Evaluate for bias introduced through any procedures taken to help mitigate bias.
a) Assess and document any trade-off between mitigating one kind of bias and the introduction of another.
b) Document any new types of bias uncovered through the outputs of the model after applying the procedures
to help mitigate bias.
c) Iterate the above actions as appropriate.
Evaluate for bias introduced by any procedures applied to optimize computational performance by applying
the metrics before and after the optimization and comparing the results.
NOTE—For example, check for bias introduced by a technique such as model compression (quantization and pruning
techniques [B9, B10]).
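A minimal, non-normative sketch of applying a metric before and after an optimization step follows. The scores are hypothetical, and coarse rounding merely stands in for a real compression technique such as quantization:

```python
def positive_rate(scores, threshold=0.5):
    """Share of scores at or above the decision threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

# Hypothetical model scores for one stakeholder group before optimization...
scores_full = [0.48, 0.52, 0.61, 0.44, 0.56]
# ...and after a simulated compression step (coarse rounding stands in for
# quantization; a real evaluation would score the compressed model itself).
scores_quantized = [round(s, 1) for s in scores_full]

before = positive_rate(scores_full)
after = positive_rate(scores_quantized)
print(before, after)  # a shift between the two signals optimization-induced bias
```

In practice, the bias metrics chosen earlier in this clause would be applied to both versions of the AIS and the results compared and documented.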
Evaluate the UI and the UX designs against the stakeholder identification clause (see Clause 6).
The UI design shall reflect the attributes of all of the identified stakeholders. For example, a UI design may
reflect bias by offering a limited number of options in a drop-down list that does not include all of the
impacted stakeholders’ attributes. Another example is a UI design that is biased against people with
disabilities due to a lack of accessibility in the design.
9.2.5 Outcomes
Through the activities and tasks set out in this clause, the users of the standard shall gain an understanding of
how to evaluate bias in the AIS design.
9.3 Ongoing evaluation
9.3.1 Purpose
The purpose of the ongoing evaluation process is to assess the AIS for drifts and changes affecting it with
regard to bias. Continuous monitoring of the AIS shall help to detect the inevitable drift during AIS
operation up to, and including, decommissioning of the AIS. For example, drift may result from changes in
the data sources, changes in society, or significant events that affect the context of use of the AIS.
9.3.2 Inputs
9.3.3 Outputs
9.3.4 Actions
Define the list of items to monitor, for example: data, decisions of the AIS, effect of the AIS on the stake-
holders, risks and impacts, unconsidered human acceptance of AIS decisions (complacency bias), feedback
loops in recommender systems [B44], the user interface, the culture (see Annex E), and how external
real-world bias is impacted by the deployment of the AIS.
Design and document a process for ongoing evaluation for each of the identified items. This process shall take
account of the behavior of the AIS in establishing what is appropriate for ongoing evaluation. For example,
one AIS might produce an output for each input. On the other hand, another AIS might consume arbitrarily
many inputs, or produce arbitrarily many outputs with no apparent synchronization. Take into consideration
the actions taken in the preceding evaluation of the design and outputs.
Define and document a rationale for the recurrence of ongoing evaluation. The recurrence may be temporal
or be captured by other indicators such as transaction volume, throughput, automatic alarms, or others,
including exceptional circumstances, as appropriate to the context of use. If any mitigation steps are taken, in
any part of the AIS, another cycle of the evaluation process shall follow as appropriate.
NOTE—Special cases can require a re-run of the evaluation processes outside the normal cadence such as changes to the
data representation, changes to the risk and impact assessment, significant changes to the system’s life cycle, release of
new versions of any parts of the AIS, or based on a third-party audit.
If appropriate, establish mechanisms through which to collect impacted stakeholders’ feedback and
incorporate it into the ongoing evaluation process.
Each iteration of the ongoing evaluation process shall check and document different kinds of drift that can
cause bias and the plans to act upon each, if any, as in the following list.
a) Check for data drift: Data drift is also called feature, population, or covariate drift. It can be caused by
adding, removing, or replacing existing data sources that present different spatial, structural, lexical,
semantic, or syntactic characteristics, by adding data that has a time shift, or by adding data that has a
different population than the target population. This check can result in a recommendation to retrain the
model, to revise the business metric or the bias metrics, or both, and to change the frequency of retraining
and of ongoing evaluation.
b) Check for concept drift: Concept drift can happen when the ‘learned’ relationship between the inputs
and outputs changes so that the model inadequately reflects the relationship between inputs and outputs.
This check can result in a recommendation to retrain the model, and to revise the metrics identified in
the design evaluation process, or to use a different algorithm to construct a new model and identify new
metrics. This check can be initiated because of a major external event. For example, the COVID-19
lock-downs significantly changed customer behavior.
c) Identify and monitor other kinds of drift that are specific to the AIS. This drift shall be documented and
a plan made to address the bias.
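The data-drift check in item a) can be sketched, for example, with the population stability index (PSI). The bin shares are hypothetical, and the choice of PSI and the common 0.2 reading are illustrative conventions, not requirements of this standard:

```python
import math

def population_stability_index(expected, actual):
    """PSI between two discrete distributions (e.g. binned feature shares).
    Values above roughly 0.2 are often read as significant drift."""
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # guard against empty bins
        a = max(a, 1e-6)
        psi += (a - e) * math.log(a / e)
    return psi

# Hypothetical binned shares of one feature at training time vs in operation.
training_shares = [0.25, 0.25, 0.25, 0.25]
operational_shares = [0.10, 0.20, 0.30, 0.40]
print(round(population_stability_index(training_shares, operational_shares), 3))
# 0.228 — above the illustrative 0.2 reading, so drift would be flagged
```

A flagged result would feed the recommendation described in item a): retraining, metric revision, or a change in evaluation frequency.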
If there is evidence of reinforcement of external real-world bias, this item shall be monitored. The drift
mechanism in question here is how the existence of the AIS and its outputs affect the real world and how
this then feeds back into the AIS. For example, groups with increased oversight may experience higher
levels of monitoring, leading to a greater number of recorded incidents. A prediction model trained on
these instances may result in even more intensive monitoring, thereby reinforcing this cycle of influence.
After each iteration of the ongoing evaluation process, review and update, if necessary, the parts of the bias
profile as follows:
9.3.5 Outcomes
Through the activities and tasks set out in this clause, the users of the standard shall gain an understanding of
the ongoing evaluation of bias in the AIS.
Annex A
(informative)
Conceptualizing Bias
A.1 Introduction
The purpose of this Annex is to establish an understanding of algorithmic bias by providing and explaining
the key concepts underlying bias in automated systems and algorithmic processes. This Annex provides and
explains a perspective on the nature and complexities of algorithmic bias and its implications for various
stakeholders in a system of interest or system under consideration. This Annex also explains some of the key
causes of bias and how biases may be treated or dealt with.
The complex and multifaceted nature of algorithmic bias means there have been a number of definitions and
elaborations associated with this phenomenon.
Algorithmic bias occurs when processing resulting from the execution of an algorithm produces outputs or
outcomes that favour one group over another. When unwanted bias occurs, unwanted decisions can result,
and degraded predictive performance from an AIS may also occur. It should be noted that such outputs and
outcomes can emerge from both intentional and unintentional activities and actions at any of the stages in
the life cycle of the AIS.
It is essential to recognize that bias in systems is not a singular inherent property but rather an intricate
interplay of different elements. It is usually noticeable from the outputs or outcomes from the execution of an
AIS and those outcomes or outputs can be traced back to any of the stages of the AIS. Thus, bias can originate
from the decisions that are made during conceptualization, requirements specification, design, development
and testing, operation and monitoring or decommissioning of the AIS. Bias can also originate from using
an unrepresentative dataset and from a learning process as the algorithm learns from the data it operates on
and tries to adapt. This explains the significance of adopting a multidimensional perspective when analyzing
and addressing bias, as this allows for a more nuanced understanding of its nature, implications and potential
remedies.
It is also important to emphasize the need to differentiate between different dimensions of bias. This includes
considering the scope or level of bias, which can range from situations where a single individual is affected
to cases where an entire community or demographic category is affected. Understanding the scope of bias
is vital for tailoring appropriate interventions and remedies, as different forms of bias may require distinct
approaches for evaluation and mitigation.
Another dimension, the impact of bias on individuals and groups, can depend on the context or the
environment in which an AIS is being used. For example, an AIS can exhibit bias in one setting or community
and show no bias in another similar setting or community. The bias can also be exhibited in both settings but
in different ways: for example, the AIS may display no statistical bias yet display unwanted bias in one
community setting, while displaying statistical bias that is wanted in another. Recognizing and addressing,
where appropriate, the differential impact of bias on different demographic groups is essential for equitable
outcomes and promoting social justice.
Instances of bias are categorized in many different ways, which tend to be reflective of the context of use of
the AIS and its life cycle stage. It is important to determine the most appropriate bias categorizations for the
AIS in its context of use and life cycle, and to provide a rationale for their appropriateness.
One key element is a comprehensive taxonomy of bias mechanisms, offering valuable insights into how bias
can manifest within algorithms. Omission is identified as the most straightforward form of bias, where the
system excludes specific items from its processing. This could occur, for instance, when certain data points
are intentionally left out of the algorithmic analysis, leading to incomplete and potentially skewed results
(although stratified sampling, for example, can be justified as a mitigation mechanism). Additionally, omission
can occur in cases where data on certain protected attributes, such as race or gender, is not adequately
represented, potentially resulting in unwanted outputs.
Skew, on the other hand, represents a more complex category of bias, encompassing a range of different
mechanisms that affect algorithmic decision-making. Calibration is one of the key skew mechanisms [B1],
referring to the absolute value of assigned scores. When calibrating, it is important to distinguish individual
calibration from group calibration: an individual can experience unwanted bias within a group whose
outcomes are fairly distributed. This raises a critical concern: if calibration is not appropriately maintained,
unwanted bias can emerge, leading to unwanted biased treatment of individuals based on their group
affiliations.
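The distinction between group-level calibration and observed outcomes can be illustrated with a small sketch; the scores, outcomes, and group labels are hypothetical:

```python
def group_calibration_gap(scores, outcomes, groups):
    """Per group: mean predicted score minus observed positive rate.
    A large gap for one group suggests a calibration-related skew."""
    gaps = {}
    for g in sorted(set(groups)):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        mean_score = sum(scores[i] for i in idx) / len(idx)
        positive_rate = sum(outcomes[i] for i in idx) / len(idx)
        gaps[g] = mean_score - positive_rate
    return gaps

# Hypothetical scores, ground-truth outcomes, and group labels.
scores = [0.9, 0.8, 0.3, 0.7, 0.6, 0.2]
outcomes = [1, 1, 0, 0, 0, 0]
groups = ["x", "x", "x", "y", "y", "y"]
print(group_calibration_gap(scores, outcomes, groups))
# group "x" is well calibrated; group "y" is over-scored by 0.5
```

Note that a group can look well calibrated on average while individuals within it still receive unwantedly biased scores, which is why both levels matter.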
Balance is another skew mechanism [B42], focusing on the relative scores within groups. The idea is to help
ensure that the under-represented and over-represented individuals within each group are treated fairly and
equally. Failure to maintain balance can lead to unwanted disparities and inequalities, perpetuating biases in
algorithmic decision-making.
Categorizing bias also touches upon the concept of over- and under-generalization, which involves using
group membership variables as predictors but employing criteria that either apply only to smaller
sub-populations within the group or encompass larger populations, including those outside of the group.
This can lead to misconceptions and inaccuracies in AIS outputs, particularly when using proxies to make
inferences about certain attributes or behaviours.
Many different classifications regarding conceptualization support the consideration process of this standard.
One is the labelling of external items, which can be considered as part of the stakeholder identification
process. Such labels can bias certain roles toward specific genders, cultures, and ages, for example, in
employment. Data input bias can occur when the data representation for these roles is not balanced. The
evaluation process may be biased due to the actual algorithm and also through the human biases of those who
influence algorithmic inputs, outputs, and outcomes. These biases can become systemic and self-perpetuating
and can amplify existing biases. When conducting the risk and impact assessments for AIS, bias categorization
should be identified, a rationale given, and diversity assessment inputs substantiated.
The bias profile represents a crucial perspective that sheds light on the intricate nature of algorithmic bias and
its genesis throughout the life cycle. The profile emphasizes the significance of understanding bias not as an
isolated event but rather as a multi-dimensional consequence of different stages of the system’s life cycle. By
identifying these distinct stages, ranging from the system conception to the system decommissioning stage,
the model presents a comprehensive view of how bias can permeate algorithmic systems.
AIS developers play a critical role in the early stages as they determine the theoretical constructs, features,
and evaluation criteria that form the foundation of the AIS. Their choices can significantly influence the
potential for bias in the system’s outputs and outcomes. Developers and maintainers are responsible for the
technical aspects of translating these theoretical constructs into working algorithms and ensuring their proper
functioning. Their actions, including data selection, model building, and decision-making, can introduce
bias at the measurement and decision stages. Finally, end-users have agency in the behavioural space, where
they can influence, challenge, or accept the system’s results, leading to further implications for fairness or
unwanted bias.
By adopting the bias profile, influencing stakeholders can gain a clearer understanding of the specific points
in the life cycle where bias may arise. This awareness helps developers, policymakers, and end-users to
take proactive steps in designing, implementing, and utilizing algorithms to minimize unwanted bias and
promote fair and ethical automated systems. Additionally, the bias profile facilitates ongoing evaluation and
improvement of an AIS to help ensure its continued alignment with societal values and norms.
It is important to note that different types of bias can appear at different stages.
When conducting a risk and impact assessment, it is important to evaluate the risks and impacts produced or
potentially produced by different types of bias. As Mehrabi et al. [B45, Figure 1] indicate, for ML models,
the biases intertwine and create feedback loops where a human factor bias can influence an algorithmic bias,
which can in turn create other biases after some mitigations have been applied.
This is only one example; in practice, there can be multiple systems with multiple different types of bias.
When performing a bias analysis and a risk and impact assessment, it is important to analyze the different
biases that can occur at each stage in order to help mitigate them appropriately.
A.5 Considerations
To deepen the understanding of algorithmic bias, several open questions and future considerations are
identified, but not discussed further. These include exploring proxy correlations, distinguishing intentional
and unintentional bias, investigating the granularity of groups affected by bias, and understanding temporal
aspects related to bias. Additionally, the impact of data labelling, the business case, and the AIS concept of
operations are identified. These open questions highlight the ongoing complexities and challenges in
addressing bias in automated systems.
The analysis provided has significant implications for policymakers, developers, and users of AIS. Developing
robust and transparent algorithms that help mitigate unwanted bias and promote fairness is crucial to the
acceptance of the use of AIS. Policymakers can use the insights to develop legal frameworks that protect
individuals from discrimination based on protected attributes. Developers can adopt the bias profile process
to understand and address bias at different stages of algorithmic processing. Users of automated/autonomous
systems are encouraged to be aware of bias and to advocate for the mitigation of unwanted bias throughout
the life cycle of the AIS.
The primary purpose of this annex is to reinforce the importance of recognizing the complexities of
algorithmic bias. Bias in automated/autonomous systems is not a single issue but rather a multifaceted
problem with implications for individuals and society. The bias profile provides a structured approach to
understanding and measuring bias at different stages, empowering stakeholders to take appropriate actions to
help mitigate bias. As the development and deployment of automated/autonomous systems continue to
evolve, addressing algorithmic bias is likely to remain a critical area of research and action, promoting
fairness and equity in the digital age.
Annex B
(informative)
Stakeholder Identification
a) Intended use
b) Granularity
c) Breadth
Stakeholder identification is dependent on the intended use of the AIS. For example, determining the impact
of an AIS predicting traffic accidents requires stakeholders to be identified as drivers and pedestrians and by
their geographic location, not as an income group. Determining the impact of an AIS predicting credit risk for
loan approval requires stakeholders to be categorized within income groups but not as pedestrians and drivers,
and their geographic location may be a protected variable.
Granularity of stakeholder groups is dependent on the type of AIS. For example, where health care budgets
are determined based on medical procedures, stakeholders would need to be classified according to their
gender; however, a binary classification is not sufficiently granular because it misses identification of
non-binary stakeholders. In contrast, gender identification is a protected field in student enrollment and job
placement.
Breadth of stakeholder identification is influenced by the intended use. Using the example of predicting traffic
accidents, if the AIS is implemented in both Japan and Canada, the side of the road on which drivers drive
differs between the two countries, and hence the breadth of use is different from implementing the same AIS
only in Japan.
Impacted stakeholders comprise all stakeholders who are impacted – directly or indirectly – by the AIS. All
groups affected by the AIS shall be identified. When identifying these stakeholders, the following parties
should be included where appropriate:
a) Direct participants: individuals whose life choices and actions are impacted as a result of an algorithmic
output; this may include current and future consumers.
b) Indirect participants: those affected by effects of the system, yet who are not in direct contact with the
system.
c) Excluded participants: those to whom access to the AIS is denied.
d) Indirect contributors: individuals who enable the operation of the system indirectly, whether by contribut-
ing data, or by supporting the infrastructure of the system.
e) Communities: groups of individuals whose life choices and actions are impacted as a result of an algo-
rithmic output.
f) Competitors: organizations that target the same groups as the AIS under consideration.
g) Adversarial actors: individuals, organizations, corporations and governments whose intent is to harm
and/or limit the life choices and actions of groups of individuals.
h) Adversarial processes: interventions to test, exploit, co-opt or take malicious action against an algorithm
to divert it from its original intent.
i) Lifeforms: non-human entities such as animals and plants.
j) Environment: existing natural systems such as forests, rivers or the climate.
Relevant attributes differ depending on the stakeholders involved, the system being created, and the context in
which it is being applied. When selecting such attributes, one should focus on those which are immutable for
an individual stakeholder and which are also necessitated by the context of use30 of the AIS.
For example, an AIS used to predict lung cancer has different attributes for impacted stakeholders than an
AIS for credit scoring.
When identifying the attributes of impacted stakeholders, the following guide may be considered:
a) Identify existing documentation which may be used to guide the stakeholder identification process, e.g.:
– Business case
– Technical requirements
– Existing regulations
– Reference materials
b) Identify from the documents the intended use of the system, e.g.:
– What is the problem to be solved?
– Who/What has the problem?
30 Context of use can be captured using a Context of Use Description (See ISO/IEC 25063:2014 [B30])
When defining the attributes it is important, where possible, that an experientially diverse group of individuals
is included in the process, to help mitigate unwanted bias in the identification and scoping of the stakeholder
groups.
Influencing stakeholders are those who own, conceptualize, design, fund, develop, deploy, constrain, use,
monitor, maintain, or decommission the AIS. They have the potential for injecting positive or negative bias
via their influence, conflict of interest or personal bias.
It is important to note that this applies to systems used for processes internal and external to the organization.
Organizations span business, government, NGOs and not-for-profits. Some examples follow:
– Government: Laws, policy and regulations can define design and use boundaries.
– System Purchasers: The internal business group that controls the funding of the algorithmic system; those
who are responsible for the business case from which the technical specifications are determined.
– System Vendors: Those who sell the system to end-users.
– Designers: Those accountable for the design of the system, including but not limited to software engineers.
– Ethics panel members: Including organization-facing ethics panels as well as external-facing citizen panels.
– Developers: Those who are responsible for the creation and building of the AIS and the models underlying
it, including but not limited to project managers, data scientists, and software engineers.
– Operators and Users: Those who, once the system is deployed, use the system that then impacts the impacted
stakeholders.
A stakeholder’s level of influence may change depending on the stage of the creation or operation of the
system and on their decision-making authority. Certain stakeholders may be influential only within particular
phases of the life cycle, or may have greater influence over other stakeholders during particular phases,
depending upon organizational roles. Such phases include:
a) Conceptualization
b) Design
c) Build
d) Test
e) Deploy
f) Use and maintenance
g) Retirement or end-of-life or (data and model) decommissioning
When identifying the stakeholders, it is important to differentiate between internal and external
AIS deployments. For example, systems that are for internal use may include stakeholders that are both
influencing and impacted stakeholders.
a) Identify the influencing stakeholders who are accountable, responsible for, who are consulted on, who
have governance over and who can influence governance over all parts of the AIS life cycle.
b) Identify the sphere of influence for each group and/or person.
c) Identify the degree of influence. Explain the metric used and the rationale of the degree of influence
assigned.
Influencing stakeholders are characterized by their level or degree of influence, commitment, actions and
motivations within the stages of the creation and use of the AIS. Legislation and regulations, government
oversight and legislative and societal constraints are additional attributes that may be relevant for stakeholders
of particular types of AISs.
Government oversight and legislation influence bias both intentionally and unintentionally: intentionally, by
mandating specific policy to mitigate unwanted bias, for example gender discrimination; unintentionally,
through policy intended to mitigate discrimination that in practice increases unwanted bias, for example
mandating that sex cannot be used to assess loan rates, with the unintended consequence that model data of
stakeholders categorized according to sex cannot be assessed for unwanted bias.
The degree of influence is determined by the level of accountability and responsibility the individual has in
determining the inputs and outputs of the system and in determining how the system could be used. The
measurement defining the level of influence may be a scale (1-10) or a level (high/medium/low). The mea-
surement is specific to an influencer’s power.
The outcomes of identifying the influence of influencing stakeholders are an input into the risk and
impact assessment (see Clause 8). Within that clause, their influence shall be measured and their composition
assessed for its impact and risk on the AIS life cycle.
B.5 Diversity
Diversity includes visible and hidden diversity, heritage/cultural diversity and diversity of power of influence
and competency.
Visible diversity includes elements such as skin colour, sex, age and visible disability; hidden diversity
includes physical and invisible disabilities and cognitive attributes; heritage diversity includes the
private family environment within which the influencer is nurtured; cultural diversity includes the social
environment within which the influencer lives; and power diversity concerns where the influencer has influence
over the design, deployment and operation of the AIS.
Protected attributes are those which require special treatment, and on the basis of which unlawful
discrimination, unwanted bias or unfairness can ensue, be entrenched systematically, or be amplified or
proliferated. For example,
an AIS trained on historic data representative of historic societal attitudes to persons based on protected
attributes could entrench and further proliferate unwanted bias within the AIS. Whilst acknowledging that no
AIS could ever be completely free of unwanted bias, so far as is reasonably possible, all efforts shall be made
to ensure that no unwanted bias in an AIS results in discrimination (which in most jurisdictions is considered
unlawful) in respect of protected attributes or attributes that might reasonably be considered proxies for such
protected attributes, thereby resulting in unfair and unlawful outcomes.
In order to determine that the AIS has mitigated unwanted bias across protected attributes, it should be
possible to define a test for fairness that can demonstrate that the system is not presenting with unwanted bias
in its outcomes with respect to these attributes. The criteria used to define fairness shall be documented and
a rationale for the definition’s use in determining wanted and unwanted bias shall be provided.
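As a sketch of a fairness criterion that an organization might document with a rationale, the following uses a disparate impact ratio. The four-fifths threshold is an assumption borrowed from US employment practice, not a requirement of this standard, and the selection rates are invented for illustration:

```python
def disparate_impact(rate_protected, rate_reference):
    """Ratio of the selection rate of a protected group to that of a reference group."""
    return rate_protected / rate_reference

# Invented selection rates for illustration.
ratio = disparate_impact(0.36, 0.60)

# Under the (assumed) four-fifths criterion, a ratio below 0.8 would be
# flagged as potential unwanted bias and documented accordingly.
print(f"ratio = {ratio:.2f}, criterion met: {ratio >= 0.8}")
```

Whatever criterion is chosen, the documented rationale would explain why that test, and that threshold, fit the AIS and its context of use.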
Annex C
(informative)
Guidance for Risk and Impact Assessment
Tasks for assessment of Risk and Impact include those in the following list:
– Describe the nature, scope, context and purposes of the algorithmic system.
– Describe the environment wherein the algorithmic system is being used.
– Identify the organization’s risk measurement framework.
– Measure the influence of influencing stakeholders.
– Identify the potential impact of bias – both positive and negative – as a result of the composition of influ-
encing stakeholders, for example the extent of diversity of influence, gender, culture, and experience.
– Identify the potential risk of disproportionate harm to stakeholders resulting from changes to the environ-
ment of the AIS.
– Identify sources of data and review for the risk that the process of obtaining data is not compliant with
relevant legislation.
– Measure the risk and benefit associated with each impact.
– Identify mitigation strategies for each risk and impact.
– Identify the potential impacts on impacted stakeholders (Annex B), due to AIS unwanted bias.
– Identify the types of advantage and disadvantage which could result from the algorithm using all types of
attributes, and identify the risk and impact of missing attributes.
– Identify the biases of the influencing stakeholders that may lead to unconscious bias.
– Identify and measure life cycle risk: for example the retirement of an AIS can in itself create a bias.
– Identify and measure risk of unintended use.
– Identify a mitigation plan during the first assessment and review its execution and update as appropriate
throughout the AIS life cycle.
The following resources are useful guides for the consideration of risk and impact:
– Ethical impact assessment: a tool of the Recommendation on the Ethics of Artificial Intelligence [B49].
– Artificial Intelligence Risk Management Framework (AI RMF 1.0) [B47].
– ISO/IEC 23894:2023 Information technology – Artificial intelligence – Guidance on risk management
[B23].
– ISO/IEC 42001:2023 Information technology – Artificial intelligence – Management system [B32].
– ISO/IEC 42005:2024 Information technology – Artificial intelligence – AI system impact assessment [B33].
– IEEE Std 1012-2016 IEEE Standard for System, Software, and Hardware Verification and Validation
[B15].
Annex D
(informative)
Measuring Bias
The choice of a proper metric is usually not straightforward. An article in ProPublica [B2] discusses an
algorithm used for assessing recidivism risk. One of the findings of the journalists was that the false positive
and false negative rates were different for different ethnic groups. The creators of the algorithm pointed
out that equalizing those metrics between subgroups had not been included in their design goals. Instead,
the creators’ goal was to develop a well-calibrated algorithm: the actual outcomes conditional on risk scores
should be the same for relevant subgroups. In the discussion that followed, Kleinberg, Mullainathan, and
Raghavan [B43] showed that it is not possible to satisfy all of these conditions simultaneously if the base rates
of the groups are different. While all of these statistical properties sound intuitively appealing, choices
between them have to be made. This impossibility result may also explain why using adversarial learning [B50]
did not improve accuracy but did reduce the difference in false positive rates and false negative rates between
subgroups.
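The per-group quantities at issue in that analysis can be computed directly; the following minimal sketch does so for two toy groups with different base rates (all labels, predictions, and group names are fabricated for illustration):

```python
# Illustrative sketch only; data and groups are invented, not taken from [B2].

def error_rates(labels, preds):
    """Return (false_positive_rate, false_negative_rate) for 0/1 labels."""
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    negatives = sum(1 for y in labels if y == 0)
    positives = sum(1 for y in labels if y == 1)
    return fp / negatives, fn / positives

# Group A has a lower base rate of the true outcome than group B.
group_a = ([1, 1, 1, 0, 0, 0, 0, 0], [1, 1, 0, 1, 0, 0, 0, 0])
group_b = ([1, 1, 1, 1, 1, 0, 0, 0], [1, 1, 1, 1, 0, 1, 1, 0])

for name, (labels, preds) in (("A", group_a), ("B", group_b)):
    fpr, fnr = error_rates(labels, preds)
    print(f"group {name}: FPR={fpr:.2f} FNR={fnr:.2f}")
```

With differing base rates, a predictor that is calibrated across both groups will in general show unequal error rates of this kind, which is the trade-off the impossibility result describes.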
The following is a guide to help in choosing the right metric(s), reference values, and testing scenarios to
measure bias in an AIS.
D.1 Metrics
a) Choose a metric. This can be from a range of metrics that have already been validated by the organization.
The choice of a metric is dependent on numerous factors. Some examples follow:
– The purpose of the AIS
– The task of the AIS. For example, is it a classification task, a regression task, reasoning, or constraint
satisfaction?
– The specifics of the task. For example, if the task is classification, is it a binary or multi-category
classification?
– The social, ethnic, organizational and cultural contexts of the AIS
– The impacted stakeholders’ perception of bias. This might lead to having more than one metric to
accommodate different perceptions.
– The risks and impacts of the AIS
b) Explain the reasoning behind the chosen metrics.
c) Assess and compare the results of the chosen metrics. Any two metrics need not be consistent with each
other (both indicating bias or not), but it is necessary to ensure that the chosen bias metrics are the best fit
for the AIS.
d) Identify the limitations of the chosen metric(s).
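Item c) can be illustrated with a toy sketch: two common metrics, a demographic parity gap and a false positive rate gap, evaluated on the same predictions need not agree. All values below are invented for illustration:

```python
def selection_rate(preds):
    """Fraction of individuals receiving the positive prediction."""
    return sum(preds) / len(preds)

def false_positive_rate(labels, preds):
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    negatives = sum(1 for y in labels if y == 0)
    return fp / negatives

# Invented labels and predictions for two groups.
labels = {"g1": [1, 1, 0, 0], "g2": [1, 0, 0, 0]}
preds = {"g1": [1, 1, 0, 0], "g2": [1, 1, 0, 0]}

# Demographic parity compares selection rates between groups; equalized
# odds (only its FPR component is shown here) compares error rates.
dp_gap = abs(selection_rate(preds["g1"]) - selection_rate(preds["g2"]))
fpr_gap = abs(false_positive_rate(labels["g1"], preds["g1"])
              - false_positive_rate(labels["g2"], preds["g2"]))
print(f"demographic parity gap: {dp_gap:.2f}, FPR gap: {fpr_gap:.2f}")
```

Here the groups receive identical selection rates while their false positive rates differ, so a system can pass one metric and fail another on the same data.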
Reference values are set to assess if the observed bias is acceptable or not. Reference values are also important
to preserve transparency in the bias consideration process and to facilitate external audit and regulation. An
overly conservative reference value can result in the rejection of a system with an acceptable level of bias,
while an overly liberal reference value can result in the acceptance of a system with an unacceptable level of
bias.
Setting reference values takes multiple factors into consideration, for example the context of use of the
system and stakeholder preferences, which can be mutually conflicting. The use of several metrics is another
factor: observing the reference values of different metrics can become complicated, and looking at a reference
value alone is not sufficient. Instead, the behavior of the metrics should be observed for different situations,
including real-life situations, and for different inputs. Humans shall remain accountable for the chosen
reference value.
The AIS should be tested for bias with respect to the stakeholders identified during stakeholder identification
(Annex B) at three levels: group, subgroup, and individual. Choose a similarity metric for each of the three
testing granularities if needed.
Similarity metrics are likely to depend on factors such as the AIS purpose, the context of use, and the
stakeholders involved. Choose a similarity metric that takes due account of these factors and use it to measure the
similarity of groups, subgroups, and individual stakeholders. The purpose of the similarity metric is to guide
the process of testing the AIS for unwanted bias between any similar groups, subgroups, and individuals.
When choosing a similarity metric, setting the threshold value needs consideration, experimentation, and
justification. For example, should two individuals be considered similar if the metric score is >90%, or
>80%? Is there a need to set more than one threshold? And, if appropriate, is there an acceptable margin
around that threshold? This margin, and the identification of the threshold can be informed by the density of
individuals and the diameter of the hyper-sphere denoted by the selected similarity attributes.
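As a sketch of the thresholding described above, a cosine similarity over numeric attribute vectors can decide which individuals count as similar before their outputs are compared. The attribute vectors and the 0.9 threshold are assumptions made only for this illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length attribute vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

THRESHOLD = 0.9  # assumed value; to be justified and documented

# Invented attribute vectors for two individuals.
person_a = [0.8, 0.3, 0.5]
person_b = [0.7, 0.35, 0.45]

if cosine(person_a, person_b) > THRESHOLD:
    # Similar individuals: their AIS outputs should then be compared
    # for individual-level bias.
    print("treat as similar; compare outputs")
```

The chosen threshold, and any margin around it, would then be justified and documented as described above.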
– Group bias: The purpose of testing for group level bias is to assess that the groups of interest, which
can be groups of people or groups of items, have the desired metric outputs, whether it is having similar
or different outputs. Forming the groups depends on the nature of the dataset values, i.e., whether the values
are discrete or continuous. For example, if stakeholders were grouped by attribute (e.g., by language), this
might be done as binary groups (e.g., Estonian and non-Estonian) or as distinct groups (all of the possible
languages). Another example is grouping by height: if there is a concern that people are unjustifiably treated
differently based on their height, metric outputs shall be compared between different groups based on height,
e.g., the shortest quartile against the tallest.
– Subgroup bias: The purpose of testing for subgroup level bias is to assess that the subgroups, within a
group, have the desired metric outputs, whether it is having similar or different outputs. For example, the
subgroups Estonian-tall and Estonian-non-tall are considered subgroups of the original group Estonian.
The test may also be done between a subgroup and the whole dataset.
– Individual bias: The purpose of testing for individual level bias is to assess that the similar individuals in
a subgroup or a group have the desired metric outputs or AIS outputs or both. This testing level requires a
similarity metric and can measure the bias between two similar individuals or between an individual and a
group of individuals that are together similar, according to the chosen similarity metric.
For further guidance on testing, see Black et al., Artificial Intelligence and Software Testing: Building
Systems You Can Trust [B4].
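The height-quartile example in the group-bias item above might be sketched as follows; the heights and AIS outputs are fabricated, and `statistics.quantiles` supplies the quartile cut points:

```python
import statistics

# Fabricated data: stakeholder heights (cm) and a binary AIS output each.
heights = [150, 155, 158, 160, 165, 168, 172, 175, 180, 185, 190, 195]
approved = [0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1]

cuts = statistics.quantiles(heights, n=4)  # quartile boundaries
shortest = [a for h, a in zip(heights, approved) if h <= cuts[0]]
tallest = [a for h, a in zip(heights, approved) if h >= cuts[2]]

# Compare the metric output (here: approval rate) between the two groups.
print(f"shortest quartile: {sum(shortest) / len(shortest):.2f}")
print(f"tallest quartile: {sum(tallest) / len(tallest):.2f}")
```

A marked gap between the two rates would then be assessed against the documented reference value for the chosen metric.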
Annex E
(informative)
Cultural Aspects
E.1 General
This annex explores the role of cultural factors in the creation or propagation of bias and whether and how
biases are perceived and understood in the context of developing an artificial intelligence system (AIS).
In this annex, culture is understood as:
“[...] the set of distinctive spiritual, material, intellectual and emotional features of society or a
social group, and that it encompasses, in addition to art and literature, lifestyles, ways of living
together, value systems, traditions and beliefs.”
where a social group is based on national, religious, workplace, professional, family, tribal, or other group
attributes.
Perception of what is fair is to some extent determined by culture. Fairness, in turn, is an outcome of the bias
in the AIS and what constitutes wanted and unwanted bias in the context of use.
Understanding power differentials inherent between and within cultures is essential when considering the
influence of culture on bias. The content of the remainder of this annex:
– Proposes possible ways in which to account for cultural diversity when imagining, designing, developing,
implementing and maintaining AIS.
The cultural contexts in which data is collected and the cultural assumptions of people involved in the AIS
life cycle, including where and for what purpose it is deployed, could impact system outputs and behaviours.
The behaviour of technology is influenced by the cultural context in which it is constructed. This concept
reflects the idea that technologies are not neutral or purely objective but are shaped by the values, beliefs, and
socio-cultural factors of their creators [B3].
Cultural bias becomes embedded in systems when there are assumptions about:
– The meaning of attributes in different cultures, which may be inadequate for the reliable inclusion or
exclusion of individuals or groups
– The criteria that affect the choice of model or algorithm
– How decisions about the AIS reflect the cultural assumptions of the people making those decisions
– Attitudes to issues of bias and discrimination, which can differ
An example of societal biases being transferred into algorithmic systems is the case of word embedding,
a class of natural language processing techniques that enable machines to use human language plausibly,
and which absorb the accepted usage of words [B5]. Word embedding acquires the human biases, such
as gender stereotypes (e.g., associating male names with concepts related to career, and female names
with home/family) and racial stereotypes (e.g., associating European-/African-American names with pleas-
ant/unpleasant concepts), present in language use. Such biases are comparable to those discovered when
humans take the Implicit Association Test [B7], a widely used measure in social psychology that reveals the
subconscious associations between the mental representations of concepts in our memory, and may constitute
wanted or unwanted bias in an AIS, depending on its purpose and context of use.
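The association such tests measure can be sketched in miniature. The two-dimensional "embeddings" below are fabricated solely to show the mechanics and are not taken from any real model:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Fabricated 2-D vectors; real embeddings have hundreds of dimensions.
emb = {
    "he": [0.9, 0.1], "she": [0.1, 0.9],
    "career": [0.8, 0.2], "family": [0.2, 0.8],
}

def association(word, attr_a, attr_b):
    """Positive: word leans toward attr_a; negative: toward attr_b."""
    return cosine(emb[word], emb[attr_a]) - cosine(emb[word], emb[attr_b])

print(association("he", "career", "family"))   # positive in this toy data
print(association("she", "career", "family"))  # negative in this toy data
```

In this fabricated data the male term associates with the career term and the female term with the family term, mirroring the kind of stereotype absorption described above.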
The biases and assumptions of system architects might not align to the societal norms of the context of
use. Laws, standards, guidelines and rules also influence design, and the design culture can influence the use
culture in a manner that might circumvent local regulatory and legal frameworks. Structural factors embedded
in AIS may in turn affect systemic bias within society to impact individuals and communities.
Algorithmic culture is the use of computational processes to sort, classify, and hierarchize people, places,
objects, and ideas, and also the habits of thought, conduct and expression that arise in relationship to those
processes [B8]. Algorithmic culture helps to explain some biases that people experience in their interactions
with systems. In particular, pre-existing inequalities may be reinforced. For example, credit may be allocated
or risk profiles assessed based on historical data, where that data is collected in a context of discrimination
against particular groups.
Algorithmic culture can have a stultifying effect, as deploying systems developed at one time may not be
appropriate at a subsequent time or may entrench a cultural status quo, inhibiting cultural change. Cultural
assumptions at one point in time may impact systems in ways that perpetuate those assumptions after they
have lost cultural resonance. For example, in India there are attempts to overcome the history of caste-based
classification. If systems are collecting information about caste, and using that in predictions and decision-
making, then attempts at cultural change may be stymied. In this example, algorithmic culture – the tendency
to collect classification information in order to process it – could impact a cultural movement for change in
regard to which attributes of people ought to matter. This can occur even if information about caste is not
collected directly, but where it is correlated with other information, such as occupation or name.
AIS can distort culture if an individual chooses to suppress culturally revealing behaviour or information in
order to avoid being classified according to that culture. People can perceive themselves as being gamed
and hence game in return. For example, people may avoid looking up certain information for fear of being
labelled in a way that might negatively affect them in terms of pricing or government surveillance. The
use of cultural attributes in classification can lead to placing individuals incorrectly in categories, through
wrong interpretation, association or approximation. For example, determining the likelihood of re-offending
or abuse based on ethnicity.
An outcome of AIS can be accelerated polarisation between and within cultures. In consequence, people
become ‘othered’, from the family dinner table, to social media groups, to ethnic groups and nationalities.
For example, algorithmic manipulation used social media hashtags to foment interracial violence in South
Africa.31
The following sets out various ways in which a confined cultural context and imagination can negatively
impact particular groups. Among the influencing factors on people’s cultural perspective are their age,
education, wealth, religion and belief systems, family and geography, while also taking into account the
special cases that arise at the intersections of groups. Some examples are explored in more detail here.
It is appropriate to recognise that there are multiple perceptions of beauty, ways of life and ways of being,
such as style, music, clothing, hair and body shapes. Equally, how someone moves, walks or stands varies
with culture, so that an AIS trained in one cultural context needs retraining or re-calibrating to account for these differences.32
Bias consideration of disability can be affected by various factors, such as an aversion to disability, the
magnitude of the task of including the multitude of disabilities, lack of knowledge and experience, but also
deep rooted assumptions about disability. Disability is something that may affect everyone at some point
in their life. The UN Convention on the Rights of Persons with Disabilities recognizes that ‘disability is
an evolving concept’ [B6, p.1], and defines persons with disabilities as those who have ‘long-term physical,
mental, intellectual or sensory impairments which in interaction with various barriers may hinder their full
and effective participation in society on an equal basis with others’ [B6, p.4]. For example, cultural attitudes
towards facial disfigurement can lead to ignoring the issue in the design of facial access systems, or a lack
of awareness and empathy for people with dementia, despite the likelihood that dementia may impact many
people later in life, can limit the development of appropriate AIS.
31 See https://www.theguardian.com/media/2017/sep/05/bell-pottingersouth-africa-pr-firm and
https://www.theglobeandmail.com/arts/television/article-influence-the-story-of-the-most-wicked-man-in-the-wor
32 See https://learningenglish.voanews.com/a/system-recognizes-people-from-body-shape-walking-movements/
Price discrimination is a legitimate practice for revenue maximisation and accommodating limited supply,
in many cultures. For example, new customer offers (e.g., insurance providers) or surge pricing (e.g., ride-
hailing services). With more widespread data on customers, models of their possible behaviours and AI tools
to process these, potential customers are segmented and sorted into groups with variable prices, offers and
conditions. This could result in significant bias against certain groups based on their apparent ability to pay,
and indirectly on sensitive attributes such as racial and social groups, with clear implications for unwanted
bias linked to culture.
In some applications, there is a risk that systems may be optimised for particular genders or sexualities.
Examples include queer content being over-filtered in a system designed to filter pornography, and translation
software that assumes gender from gender-stereotypical context.
Parameters in AIS for natural language processing are often trained and tuned with data from a specific
language and culture, which can be a source of problems when these AIS are applied to another language and
culture. For example, technologies built around American English or Mandarin Chinese may present barriers
when applied in a different cultural context.
Workplaces develop their own cultures that can make it difficult to think of alternative perspectives. At the
same time, organisations and professions can also develop a group culture with expectations about how things
should be done, and make it easier to be alert to and successfully handle bias. Where outsourcing occurs, the
organisational cultures of suppliers could also have an impact. Ultimately, the design of systems should be
influenced by the intersecting cultures of those responsible.
The concentration of leading technology companies in particular countries and social groups, combined with
the broader deployment of those technologies, means that the racial and national context of deployment might
not be taken into account. It may be helpful to consider the following elements of cultural bias in race and
nationality: physical appearance, assumption of universal preferences, social and ethical norms, and biases
based on historical contexts.
The creation and use of AIS offer an opportunity to better understand our cultural biases and design systems
that actively reduce levels of bias to create a more equitable society. Although preferences differ between
cultures, we may share universal values [B46].
Cultural biases arise despite designers’ intentions, but diversity within accountability chains increases the
chances that cultural assumptions can be detected prior to deployment of a system. By following the process
set out in this document, this accountability and the continuous monitoring of AIS in operation can help
identify, establish and evolve the practices for the continuous minimisation of unwanted bias. AIS outcomes
and impact on stakeholders, through an on-going engagement between all stakeholders, can reflect their
diversity and their needs within an AIS’s context of use.
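The continuous monitoring described above can be made concrete with a small sketch. The following Python fragment is illustrative only and is not part of this standard; the statistical-parity metric, the function names, and the 0.2 threshold are assumptions chosen for the example. It computes the gap in selection rates between groups from logged AIS outcomes and flags when stakeholder review may be warranted:

```python
# Illustrative continuous-monitoring check: compare selection rates across
# groups in logged AIS outcomes and flag when the gap exceeds a threshold.
# The metric choice and the 0.2 threshold are assumptions for this sketch.

from collections import defaultdict


def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs, selected in {0, 1}."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in outcomes:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / tot for g, (sel, tot) in counts.items()}


def parity_gap(outcomes):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(outcomes)
    return max(rates.values()) - min(rates.values())


def flag_unwanted_bias(outcomes, threshold=0.2):
    """Return True when the observed gap warrants stakeholder review."""
    return parity_gap(outcomes) > threshold


log = [("a", 1), ("a", 1), ("a", 0), ("a", 1),   # group a: 3/4 selected
       ("b", 1), ("b", 0), ("b", 0), ("b", 0)]   # group b: 1/4 selected
print(parity_gap(log))          # 0.5
print(flag_unwanted_bias(log))  # True
```

In practice, the metric, the groups monitored, and the review threshold would be agreed with stakeholders for the specific context of use, and the check would run continuously over operational data rather than a fixed log.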
E.5 Recommendations
– Provide training for every person working on AIS about what bias is, the sources of bias, and how to detect and help mitigate bias (for example, IEEE CertifAIEd™ [B12]).
– Consider deploying diverse teams throughout the AIS life cycle to help manage cultural assumptions.
– Enable local content moderation and engagement with the communities in the context of use.
– Support stakeholders in assessing the outcomes of the AIS in its context of use.
Copyright © 2025 IEEE. All rights reserved.
Annex F
(informative)
Bibliography
Bibliographical references are resources that provide additional or helpful material but do not need to be
understood or used to implement this standard. Reference to these resources is made for informational use
only.
[B1] Abdollahpouri, H., M. Mansoury, R. Burke, and B. Mobasher. “The Connection Between Popularity
Bias, Calibration, and Fairness in Recommendation”. In: Proceedings of the 14th ACM Conference
on Recommender Systems. RecSys ’20. Virtual Event, Brazil: Association for Computing
Machinery, 2020, pp. 726–731. ISBN: 9781450375832. DOI: 10.1145/3383313.3418487.
[B2] Angwin, J., J. Larson, S. Mattu, and L. Kirchner. Machine Bias. ProPublica. May 2016. URL: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Accessed 2023-08-01.
[B3] Bijker, W. “Social Construction of Technology”. English. In: International Encyclopedia of the
Social & Behavioral Sciences, 2nd edition. Ed. by J. Wright. United Kingdom: Elsevier Science,
2015, pp. 135–140. ISBN: 978-0-08-097087-5.
[B4] Black, R., J. Davenport, J. Olszewska, J. Rößler, A. L. Smith, and J. Wright. Artificial Intelligence
and Software Testing: Building systems you can trust. Ed. by A. L. Smith. BCS, The Chartered
Institute for IT, 2022, p. 146. ISBN: 978-1780175768.
[B5] Caliskan, A., J. J. Bryson, and A. Narayanan. “Semantics derived automatically from language
corpora contain human-like biases”. In: Science 356.6334 (2017), pp. 183–186. DOI:
10.1126/science.aal4230.
[B6] Convention on the Rights of Persons with Disabilities (CRPD). United Nations. 2006. URL: https://social.desa.un.org/issues/disability/crpd/convention-on-the-rights-of-persons-with-disabilities-crpd. Accessed 2023-10-07.
[B7] Greenwald, A. G., D. E. McGhee, and J. L. K. Schwartz. “Measuring individual differences in
implicit cognition: The implicit association test”. In: Journal of Personality and Social Psychology
74.6 (1998), pp. 1464–1480. DOI: 10.1037/0022-3514.74.6.1464.
[B8] Hallinan, B. and T. Striphas. “Recommended for you: The Netflix Prize and the production of
algorithmic culture”. In: New Media & Society 18.1 (2016), pp. 117–137. DOI:
10.1177/1461444814538646.
[B9] Hooker, S., A. C. Courville, Y. N. Dauphin, and A. Frome. “Selective Brain Damage: Measuring the
Disparate Impact of Model Pruning”. In: CoRR abs/1911.05248 (2019). arXiv: 1911.05248.
[B10] Hooker, S., N. Moorosi, G. Clark, S. Bengio, and E. Denton. “Characterising Bias in Compressed
Models”. In: CoRR abs/2010.03058 (2020). arXiv: 2010.03058.
[B11] IEEE CertifAIEd™. Ontological Specification for Ethical Accountability. Institute of Electrical and Electronics Engineers. 2022. URL: https://engagestandards.ieee.org/rs/211-FYL-955/images/IEEE_CertifAIEd_Ontological_Spec-Accountability-2022.pdf. Accessed 2024-02-09.
[B12] IEEE CertifAIEd™. Ontological Specification for Ethical Algorithmic Bias. Institute of Electrical and Electronics Engineers. 2022. URL: https://engagestandards.ieee.org/rs/211-FYL-955/images/IEEE+CertifAIEd+Ontological+Spec-Algorithmic+Bias-2022+[I1.3].pdf. Accessed 2024-02-09.
[B13] IEEE CertifAIEd™. Ontological Specification for Ethical Privacy. Institute of Electrical and Electronics Engineers. 2022. URL: https://engagestandards.ieee.org/rs/211-FYL-955/images/IEEESTD-2022+CertifAIEd+Privacy.pdf. Accessed 2024-02-09.
[B14] IEEE CertifAIEd™. Ontological Specification for Ethical Transparency. Institute of Electrical and Electronics Engineers. 2022. URL: https://engagestandards.ieee.org/rs/211-FYL-955/images/IEEE+CertifAIED+Ontological+Spec-Transparency-2022.pdf. Accessed 2024-02-09.
[B15] IEEE Std 1012-2016. IEEE Standard for System, Software, and Hardware Verification and Validation. Institute of Electrical and Electronics Engineers, 2017. DOI: 10.1109/IEEESTD.2017.8055462.
[B16] IEEE Std 1320.2-1998. IEEE Standard for Conceptual Modeling Language - Syntax and Semantics
for IDEF1X97 (IDEFobject). Institute of Electrical and Electronics Engineers, 1997. DOI:
10.1109/IEEESTD.1997.8883275.
[B17] IEEE Std 7005-2021. IEEE Standard for Transparent Employer Data Governance. Institute of
Electrical and Electronics Engineers, 2021. DOI: 10.1109/IEEESTD.2021.9618905.
[B18] IEEE Std 7010-2020. IEEE Recommended Practice for Assessing the Impact of Autonomous and
Intelligent Systems on Human Well-Being. Institute of Electrical and Electronics Engineers, 2020.
DOI : 10.1109/IEEESTD.2020.9084219.
[B19] ISO/IEC TS 12791:2022. Information technology — Artificial intelligence — Treatment of unwanted bias in classification and regression machine learning tasks. International Organization for Standardization, 2022. URL: https://www.iso.org/standard/84110.html.
[B20] ISO/IEC 19506:2012. Information technology — Object Management Group Architecture-Driven Modernization (ADM) — Knowledge Discovery Meta-Model (KDM). International Organization for Standardization, 2012. URL: https://www.iso.org/standard/32625.html.
[B21] ISO/IEC 22989:2022. Information technology — Artificial intelligence — Artificial intelligence concepts and terminology. Publicly available from https://standards.iso.org/ittf/PubliclyAvailableStandards/index.html. International Organization for Standardization, 2022. URL: https://www.iso.org/standard/74296.html.
[B22] ISO/IEC 23053:2022. Information technology — Artificial intelligence — Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML). International Organization for Standardization, 2022. URL: https://www.iso.org/standard/74438.html.
[B23] ISO/IEC 23894:2023. Information technology — Artificial intelligence — Guidance on risk management. International Organization for Standardization, 2023. URL: https://www.iso.org/standard/77304.html.
[B24] ISO/IEC TR 24027:2021. Information technology — Artificial intelligence (AI) — Bias in AI systems and AI aided decision making. International Organization for Standardization, 2021. URL: https://www.iso.org/standard/77607.html.
[B25] ISO/IEC TR 24030:2021. Information technology — Artificial intelligence (AI) — Use cases. International Organization for Standardization, 2021. URL: https://www.iso.org/standard/77610.html.
[B26] ISO/IEC 24748-1:2024. Systems and software engineering — Life cycle management — Part 1: Guidelines for life cycle management. International Organization for Standardization, 2024. URL: https://www.iso.org/standard/84709.html.
[B27] ISO/IEC 25000:2014. Systems and software engineering — Systems and software product Quality Requirements and Evaluation (SQuaRE) — Guide to SQuaRE. International Organization for Standardization, 2014. URL: https://www.iso.org/standard/64764.html.
[B28] ISO/IEC 25024:2015. Systems and software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — Measurement of data quality. International Organization for Standardization, 2015. URL: https://www.iso.org/standard/35749.html.
[B29] ISO/IEC 25051:2014. Software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — Requirements for quality of Ready to Use Software Product (RUSP) and instructions for testing. International Organization for Standardization, 2014. URL: https://www.iso.org/standard/61579.html.
[B30] ISO/IEC 25063:2014. Systems and software engineering — Systems and software product Quality Requirements and Evaluation (SQuaRE) — Common Industry Format (CIF) for usability: Context of use description. International Organization for Standardization, 2014. URL: https://www.iso.org/standard/35749.html.
[B31] ISO/IEC TR 29119-11:2020. Software and systems engineering — Software testing — Part 11: Guidelines on the testing of AI-based systems. International Organization for Standardization, 2020. URL: https://www.iso.org/standard/79016.html.
[B32] ISO/IEC 42001:2023. Information technology — Artificial intelligence — Management system. International Organization for Standardization, 2023. URL: https://www.iso.org/standard/81230.html.
[B33] ISO/IEC 42005:2024. Information technology — Artificial intelligence — AI system impact assessment. International Organization for Standardization, 2024. URL: https://www.iso.org/standard/44545.html.
[B34] ISO/IEC 5338:2023. Information technology — Artificial intelligence — AI system life cycle processes. International Organization for Standardization, 2023. URL: https://www.iso.org/standard/81118.html.
[B35] ISO/IEC/IEEE 15288:2023. Systems and software engineering — System life cycle processes. International Organization for Standardization, 2023. URL: https://www.iso.org/standard/81702.html.
[B36] ISO/IEC/IEEE 15939:2017. Systems and software engineering — Measurement process. International Organization for Standardization, 2017. URL: https://www.iso.org/standard/71197.html.
[B37] ISO/IEC/IEEE 2382:2015. Information technology — Vocabulary. Publicly available from https://standards.iso.org/ittf/PubliclyAvailableStandards/index.html. International Organization for Standardization, 2015. URL: https://www.iso.org/standard/63598.html. Accessed 2024-03-22.
[B38] ISO/IEC/IEEE 24748-7000:2022. Standard for Systems and software engineering — Life cycle management — Part 7000: Standard model process for addressing ethical concerns during system design. Institute of Electrical and Electronics Engineers, 2022. DOI: 10.1109/IEEESTD.2022.9967807.
[B39] ISO/IEC/IEEE 24765:2017. Systems and software engineering — Vocabulary. Publicly available from https://standards.iso.org/ittf/PubliclyAvailableStandards/index.html. International Organization for Standardization, 2017. URL: https://www.iso.org/standard/71952.html.
[B40] ISO/IEC/IEEE 29119-1:2022. Software and systems engineering — Software testing — Part 1: General concepts. Publicly available from https://standards.iso.org/ittf/PubliclyAvailableStandards/index.html. International Organization for Standardization, 2022. URL: https://www.iso.org/standard/81291.html.
[B41] ISO/IEC/IEEE 42020:2019. Software, systems and enterprise — Architecture processes. International Organization for Standardization, 2019. URL: https://www.iso.org/standard/68982.html.
[B42] Kang, H. and C. D. Yoo. “Skew Class-Balanced Re-Weighting for Unbiased Scene Graph
Generation”. In: Machine Learning and Knowledge Extraction 5.1 (2023), pp. 287–303. ISSN:
2504-4990. DOI: 10.3390/make5010018.
[B43] Kleinberg, J. M., S. Mullainathan, and M. Raghavan. “Inherent Trade-Offs in the Fair
Determination of Risk Scores”. In: 8th Innovations in Theoretical Computer Science Conference,
ITCS 2017, January 9-11, 2017, Berkeley, CA, USA. Ed. by C. H. Papadimitriou. Vol. 67. LIPIcs.
Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2017, 43:1–43:23. DOI:
10.4230/LIPICS.ITCS.2017.43.
[B44] Mansoury, M., H. Abdollahpouri, M. Pechenizkiy, B. Mobasher, and R. Burke. “Feedback Loop
and Bias Amplification in Recommender Systems”. In: Proceedings of the 29th ACM International
Conference on Information & Knowledge Management. CIKM ’20. Virtual Event, Ireland:
Association for Computing Machinery, 2020, pp. 2145–2148. ISBN: 9781450368599. DOI:
10.1145/3340531.3412152. An earlier version is available from
https://arxiv.org/abs/2007.13019. Accessed 2024-03-25.
[B45] Mehrabi, N., F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan. “A Survey on Bias and Fairness
in Machine Learning”. In: ACM Comput. Surv. 54.6 (July 2021). ISSN: 0360-0300. DOI:
10.1145/3457607. An earlier version is available from
http://arxiv.org/abs/1908.09635. Accessed 2024-03-25.
[B46] Schwartz, S. H. “An overview of the Schwartz theory of basic values”. In: Online readings in
Psychology and Culture 2.1 (2012), p. 11. DOI: 10.9707/2307-0919.1116.
[B47] Tabassi, E. Artificial Intelligence Risk Management Framework (AI RMF 1.0). National Institute of
Standards and Technology. Jan. 2023. DOI: 10.6028/NIST.AI.100-1.
[B48] UNESCO. Universal Declaration on Cultural Diversity. 2001. URL: https://en.unesco.org/about-us/legal-affairs/unesco-universal-declaration-cultural-diversity. Accessed 2023-10-07.
[B49] UNESCO. Ethical impact assessment: a tool of the Recommendation on the Ethics of Artificial
Intelligence. 2023. DOI: 10.54678/YTSA7796.
[B50] Wadsworth, C., F. Vera, and C. Piech. “Achieving Fairness through Adversarial Learning: an
Application to Recidivism Prediction”. In: CoRR abs/1807.00199 (2018). arXiv: 1807.00199.
URL: http://arxiv.org/abs/1807.00199. Accessed 2024-03-25.