AI Sustainability in Practice
Part Two: Sustainability Throughout the AI Workflow

Participant Workbook: Intended to support participants to engage with the activities.
Facilitator Workbook: Annotated for facilitators to support the preparation for, and delivery of, the accompanying workshops.
Acknowledgements
This workbook was written by David Leslie, Cami Rincón, Morgan Briggs, Antonella Perini,
Smera Jayadeva, Ann Borda, SJ Bennett, Christopher Burr, Mhairi Aitken, Michael Katell,
Claudia Fischer, Janis Wong, and Ismael Kherroubi Garcia.
The creation of this workbook would not have been possible without the support and
efforts of various partners and collaborators. As ever, all members of our brilliant team
of researchers in the Ethics Theme of the Public Policy Programme at The Alan Turing
Institute have been crucial and inimitable supports of this project from its inception several
years ago, as have our Public Policy Programme Co-Directors, Helen Margetts and Cosmina
Dorobantu. We are deeply thankful to Conor Rigby, who led the design of this workbook
and provided extraordinary feedback across its iterations. We also want to acknowledge
Johnny Lighthands, who created various illustrations for this document, and Alex Krook and
John Gilbert, whose input and insights helped get the workbook over the finish line. Special
thanks must be given to the Ministry of Justice for helping us test the activities and review
the content included in this workbook. Lastly, we want to thank Youmna Hashem (The Alan
Turing Institute) and Sabeehah Mahomed (The Alan Turing Institute) for their meticulous
peer review and timely feedback, which greatly enriched this document.
This work was supported by Wave 1 of The UKRI Strategic Priorities Fund under the EPSRC
Grant EP/W006022/1, particularly the Public Policy Programme theme within that grant &
The Alan Turing Institute; Towards Turing 2.0 under the EPSRC Grant EP/W037211/1 & The
Alan Turing Institute; and the Ecosystem Leadership Award under the EPSRC Grant EP/X03870X/1 & The Alan Turing Institute.
Cite this work as: Leslie, D., Rincón, C., Briggs, M., Perini, A., Jayadeva, S., Borda, A.,
Bennett, SJ., Burr, C., Aitken, M., Katell, M., Fischer, C., Wong, J., and Kherroubi Garcia, I.
(2023). AI Sustainability in Practice Part Two: Sustainability Throughout the AI Workflow.
The Alan Turing Institute.
Contents

- About the Workbook Series and Methods
- Introduction to Sustainability: Stakeholder Impact Assessments
- A Closer Look at Stakeholder Impact Assessments
- Proportional Governance of Engagement Goals
- Stakeholder Profiles
- Stakeholder Impact Assessment (Design Phase)
- Project Proposal
About the AI Ethics and
Governance in Practice
Workbook Series
Who We Are
The Public Policy Programme at The Alan Turing Institute was set up in May 2018 with the
aim of developing research, tools, and techniques that help governments innovate with
data-intensive technologies and improve the quality of people’s lives. We work alongside
policymakers to explore how data science and artificial intelligence can inform public policy
and improve the provision of public services. We believe that governments can reap the
benefits of these technologies only if they make considerations of ethics and safety a first
priority.
In 2021, the UK’s National AI Strategy recommended as a ‘key action’ the update and
expansion of this original guidance. From 2021 to 2023, with the support of funding from
the Office for AI and the Engineering and Physical Sciences Research Council as well
as with the assistance of several public sector bodies, we undertook this updating and
expansion. The result is the AI Ethics and Governance in Practice Programme, a bespoke
series of eight workbooks and a forthcoming digital platform designed to equip the
public sector with tools, training, and support for adopting what we call a Process-Based
Governance (PBG) Framework to carry out projects in line with state-of-the-art practices in
responsible and trustworthy AI innovation.
About the Workbooks
The AI Ethics and Governance in Practice Programme curriculum is composed of a series
of eight workbooks. Each of the workbooks in the series covers how to implement a
key component of the PBG Framework. These include Sustainability, Technical Safety,
Accountability, Fairness, Explainability, and Data Stewardship. Each of the workbooks also
focuses on a specific domain, so that case studies can be used to promote ethical reflection
and animate the Key Concepts.
2. AI Sustainability in Practice, Part One: AI in Urban Planning
4. AI Fairness in Practice: AI in Healthcare
6. AI Safety in Practice: AI in Transport
8. AI Accountability in Practice: AI in Education
Explore the full curriculum and additional resources on the AI Ethics and Governance in Practice Platform at aiethics.turing.ac.uk.
Taken together, the workbooks are intended to provide public sector bodies with the skills
required for putting AI ethics and governance principles into practice through the full
implementation of the guidance. To this end, they contain activities with instructions for
either facilitating or participating in capacity-building workshops.
Please note, these workbooks are living documents that will evolve and improve with input
from users, affected stakeholders, and interested parties. We need your participation.
Please share feedback with us at [email protected].
Programme Roadmap
The graphic below visualises this workbook in context alongside key frameworks, values
and principles discussed within this programme. For more information on how these
elements build upon one another, refer to AI Ethics and Governance in Practice: An
Introduction.
[Programme roadmap graphic: workbooks 1–8 mapped to the principles of Sustainability, Fairness, Data Stewardship, Safety, Explainability, and Accountability, and to the CARE and ACT framework.]
Intended Audience
This workbook series is primarily aimed at civil servants engaging in the AI Ethics and
Governance in Practice Programme - either AI Ethics Champions delivering the curriculum
within their organisations by facilitating peer-learning workshops, or participants
completing the programme by attending workshops. Anyone interested in learning about
AI ethics, however, can make use of the programme curriculum, the workbooks, and
resources provided. These have been designed to serve as stand-alone, open access
resources. Find out more at aiethics.turing.ac.uk.
• Facilitator Workbooks (such as this document) are annotated with additional guidance
and resources for preparing and facilitating training workshops.
Introduction to This Workbook
This workbook is the second of a two-part set.
Both workbooks are intended to help facilitate the delivery of a two-part workshop on the
concepts of SUM Values and Sustainability.
This workbook explores how to put the SUM Values and the principle of Sustainability into
practice throughout the Design, Development, and Deployment Phases of the AI lifecycle.
It discusses Stakeholder Impact Assessments in depth, providing tools and training
resources to help AI project teams to conduct these. This workbook is divided into two
sections, Key Concepts and Activities:
The Key Concepts section discusses frameworks for establishing the foundations for sustainable AI projects.
Activities Section
Case studies within the AI Ethics and Governance in Practice workbook series are grounded
in public sector use cases, but do not reference specific AI projects.
Balancing Values
Practise weighing tensions between values when assessing the ethical permissibility of AI projects by considering consequences-based and principles-based approaches and engaging in deliberation.

Stakeholder Impact Assessment (Design Phase)
Practise using SIAs to formulate proportional monitoring activities for the development and deployment of AI models.
Additionally, you will find facilitator instructions (and where appropriate, considerations)
required for facilitating activities and delivering capacity-building workshops.
Key Concepts

- Weighing the Values and Considering Trade-Offs
- Consequences-Based and Principles-Based Approaches to Balancing Values
- Deployment Phase Re-Assessment and Other Necessary Monitoring, Updating, and Deprovisioning Activities
Introduction to Sustainability:
Stakeholder Impact Assessments
AI systems may have transformative and long-term effects on individuals and society.
Designers and users of AI systems should remain aware of this. To ensure that the
deployment of your AI system remains sustainable and supports the sustainability of the
communities it will affect, you and your team should proceed with a continuous sensitivity
to its real-world effects. You and your project team should come together to evaluate
the social impact and sustainability of your AI project through a Stakeholder Impact
Assessment (SIA).
The SUM Values introduced in the AI Sustainability in Practice Part One workbook form
the basis of the SIA. They are not intended to provide a comprehensive inventory of
moral concerns and solutions. Instead, they are a launching point for open and inclusive
conversations about the individual and societal impacts of data science research and AI
innovation projects. When starting a project, the SUM Values should provide the normative
point of departure for collaborative and anticipatory reflection. They should also allow for
the respectful and interculturally sensitive inclusion of other points of view.
Data Protection Impact Assessments (DPIAs) and Equality Impact Assessments (EIAs) provide relevant insights into the ethical stakes of AI innovation projects.
However, they go only part of the way in identifying and assessing the full range of
potential individual and societal impacts of the design, development, and deployment
of AI and data-intensive technologies. Reaching a comprehensive assessment of these
impacts is the purpose of SIAs. SIAs are tools that create a procedure for, and a means
of, documenting the collaborative evaluation and reflective anticipation of the possible
harms and benefits of AI innovation projects. SIAs are not intended to replace DPIAs or
EIAs, which are obligatory. Rather, SIAs are meant to be integrated into the wider impact
assessment regime. This demonstrates that sufficient attention has been paid to the ethical
permissibility, transparency, accountability, and equity of AI innovation projects.
The purpose of carrying out an SIA is multidimensional. SIAs can serve several purposes,
some of which include:
• To re-examine and re-evaluate the potential impacts you have already identified in your Project Summary (PS) Report.
You might find it helpful to refer back to the Project Summary Report found in AI Sustainability in Practice Part One while answering these questions.
Have you assessed whether building an AI model or tool is the right solution to help you deliver the desired services given:

a. the existing technologies and processes already in place to solve the problem;

b. current user needs;

d. the resources (material and human) available to your project;

e. the nature of the policy problem you are trying to solve; and

f. whether an AI-based solution is appropriate for the complexity of its potential use contexts?
Do these initial assessments support the justifiability and reasonableness of choosing to build
an AI system or tool to help you deliver the desired services?
For more details on “Assessing if artificial intelligence is the right solution” see guidance by
the Office for AI and Central Digital and Data Office. For further details about understanding
user needs, see Section 1 of the Data Ethics Framework and the user research section of the
Gov.UK Service Manual.
• Has a thorough assessment of the human rights compliant business practices of all
businesses, parties, and entities involved in the value chain of the AI product or service
been undertaken? This would include all businesses, parties, and entities directly
linked to your business lifecycle through supply chains, operations, contracting, sales,
consulting, and partnering. If not, do you have plans to do this?
a. How are you defining the outcome (the target variable) that the system is optimising for? Is this a fair, reasonable, and widely acceptable definition?

b. Does the target variable (or its measurable proxy) reflect a reasonable and justifiable translation of the project’s objective into the statistical frame?

c. Is this translation justifiable given the general purpose of the project and the potential impacts that the outcomes of its implementation will have on the communities involved?

d. Where appropriate, have you engaged relevant stakeholders to gather input on their views about the reasonableness and justifiability of the outcome definition and target variable determination?
a. How, if at all, might the use of your AI system impact the abilities of affected stakeholders to make free, independent, and well-informed decisions about their lives? How might it enhance or diminish their autonomy?

b. How, if at all, might the use of your system affect their capacities to flourish and to fully develop themselves?

c. How, if at all, might the use of your system do harm to their physical, mental, or moral integrity? Have risks to individual health and safety been adequately considered and addressed?

d. How, if at all, might the use of your system impact freedoms of thought, conscience, and religion or freedoms of expression and opinion?

e. How, if at all, might the use of your system infringe on the privacy rights of affected stakeholders, both on the data processing end of designing the system and on the implementation end of deploying it? When appropriate, this question should supplement the completion of a Data Protection Impact Assessment.
a. How, if at all, might the use of your system adversely affect each stakeholder’s fair
and equal treatment under the law? Are there any aspects of the project that expose
historically marginalised, vulnerable, or protected groups to possible discriminatory
harm? These questions should supplement the completion of an Equality Impact
Assessment.
f. How, if at all, might the use of your system affect the right of individuals and communities to participate in the conduct of public affairs?

g. How, if at all, might the use of your system affect the right to effective remedy for violation of rights and freedoms, the right to a fair trial and due process, the right to judicial independence and impartiality, and equality of arms?

l. How could the use of the AI system you are planning to build or acquire—or the policies, decisions, and processes behind its design, development, and deployment—lead to the discriminatory harassment of impacted individuals?
In this section, you should consider the sector-specific and use case-specific issues
surrounding the social and ethical impacts of your AI project on affected stakeholders.
Compile a list of the questions and concerns you anticipate. State how your team is
attempting to address these questions and concerns. Where appropriate, engage with
relevant stakeholders to gather input about their sector-specific and use case-specific
concerns.
a. Considering SIA results, does the PBG Framework for this project still accurately reflect
the human chain of responsibility and create the baseline conditions for the project
team to be actively accountable for system impacts? (For further details on the PBG
Framework, see Workbook 8, AI Accountability in Practice.)
After reviewing the results of your initial SIA, answer the following questions:
a. Are the trained model’s actual objective, design, and testing results still in line with the evaluations and conclusions contained in your original assessment? If not, how does your assessment now differ?

b. Have any other areas of concern arisen with regard to possibly harmful social or ethical impacts as you have moved from the Design to the Development Phase?
You must also set a reasonable timeframe for Public Consultation and Development Phase
re-assessment:
Once you have reviewed the most recent version of your SIA and the results of the public
consultation, answer the following questions:
a. What steps can be taken to rectify any problems or issues that have emerged?

b. Have any unintended harmful consequences ensued in the wake of the deployment of the system? If so, how might these negative impacts be mitigated and redressed?

c. Have the maintenance processes for your AI model adequately taken into account the possibility of distributional shifts in the underlying population? Has the model been properly retuned and retrained to accommodate changes in the environment?
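The distributional-shift question can be made concrete with a simple monitoring check. The sketch below is illustrative only: the monitored feature, the prevalence rates, and the 10% tolerance are hypothetical placeholders, not values prescribed by this workbook.

```python
# Illustrative distributional-shift check: compare the prevalence of a key
# input feature between the training data and the data seen since deployment.
# The feature, rates, and tolerance below are hypothetical placeholders.

def shift_detected(train_rate: float, deployed_rate: float, tolerance: float = 0.10) -> bool:
    """Flag a shift when a feature's prevalence moves by more than `tolerance`."""
    return abs(train_rate - deployed_rate) > tolerance

# Suppose 20% of training cases had the feature, but 38% of recent cases do:
if shift_detected(0.20, 0.38):
    print("Distributional shift detected: schedule re-assessment and retraining.")
```

In practice, teams often use statistical tests (for example, a population stability index or a Kolmogorov–Smirnov test) rather than a fixed threshold, but the governance point is the same: monitoring results should trigger the re-assessment, retuning, and retraining activities the questions describe.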
You must also set a reasonable timeframe for Public Consultation and Deployment Phase re-assessment:
The issue of adjudicating between conflicting values has long been a crucial and thorny
dimension of collective life. The problem of discovering reasonable ways to overcome the
disagreements that arise as a result of the plurality of human values has occupied thinkers
for just as long. Nonetheless, over the course of the development of modern democratic
and plural societies, several useful approaches to managing the tension between conflicting
values have emerged.
We can find a concrete and agent-centred approach to managing the tension between
conflicting values in two of the standard schools of modern ethics:
These positions offer tools for thinking through a given dilemma in weighing values.*
A Consequences-Based Approach
A consequences-based approach asks that, in judging the moral
correctness of an action, you prioritise considerations of the goodness
produced by an outcome. In other words, the consequences of your
actions and the achievement of your goals matter most. The goodness
of these consequences should be maximised. In this view, standards of
right and wrong (indicators of what one ought to do) are determined
by the goal served as a result of an action taken, rather than by the
principles or standards one applies when acting.
A Principles-Based Approach
A principles-based approach takes the opposite track. From this
standpoint, the rightness of an action is determined by the intentional
application of a universally applicable standard, maxim, or principle. This
approach does not base the morality of conduct on the ends served by
it. Instead, it anchors rightness in the duty or obligation of the individual
agent to follow a rationally determined (and therefore “universalisable”)
principle. Deontological or principles-based ethics holds that the integrity
of the principled action and intention matters most, and such constraints
must be put on the pursuit of the achievement of one’s goals when the
actions taken as means to achieve these ends come into conflict with
moral standards.
* Learn more about ethics and governance in Leslie, D., & Fischer, C. (2023). Introduction to
Normative Ethical Theories. In AI Ethics and Governance (Turing Commons Skills Track). The Alan
Turing Institute. https://alan-turing-institute.github.io/turing-commons/skills-tracks/aeg/chapter1/
normative/
To take a familiar example, lying to a murderer who appears at your front door would save an innocent victim whom you are concealing in your cellar. Here, prioritising consequences makes more sense than prioritising the principle of not lying. In other situations, however, the principle matters: for instance, you would be constrained, on principle, from deceiving others by taking credit for someone else's work in order to advance in your job.
Yet, it may, among other things, simultaneously do damage to the value of Connect.
This value safeguards interpersonal dialogue, meaningful human connection, and social
cohesion. The implementation of the AI system would eliminate time-intensive consultation
processes that contribute to interpersonal communication, trust building, and social
bonding between council staff and residents.[7]
The rational exchange and assessment of ideas and beliefs plays a central role in
meaningful dialogue about balancing values. The validity of the claims we make in
conversations about values is bounded by practices of giving, and asking for, reasons. A
claim about values that is justified is one that convinces by the unforced strength of the
better or more compelling argument. Rational justification and persuasive reason-giving
are, in fact, central elements of legitimate and consensus-oriented moral decision-making.
And, along the same lines, claims made about moral value or properties need to be
carefully evaluated in terms of their inferential strengths and weaknesses.
To answer this question, moral thinkers over the past century have endeavoured
to reconstruct the practical assumptions behind, and presuppositions of, rational
communication (a summary of the most essential of such assumptions and presuppositions
is provided below).[10] [11] [12] [13] [14] [15] [16] [17] [18] [19] [20] Creating a reflective and practicable
awareness of these assumptions and presuppositions among members of your team can
play a crucial role in creating an innovation environment that is optimally conducive to
meaningful and inclusive deliberation:
Non-Coercion
Meaningful deliberation must be free from any sort of implicit or explicit coercion, force, or restriction that would prevent the open and unconstrained exchange of reasons.

All interlocutors must be treated with respect and given equal opportunity to contribute to the conversation. All voices are worthy of equal consideration in processes of exchanging reasons.

Sincerity
Meaningful deliberation must be free from any sort of deception or duplicity that would prevent the authentic exchange of reasons. Interlocutors must mean what they say.

Anyone whose interests are affected by an issue and who could make a contribution to better understanding it must not be excluded from participating in deliberation. All relevant voices must be heard and all relevant information considered.
2. Environmental Factors
The effectiveness of your project team’s ability to bring AI sustainability into practice will
largely hinge on the governance actions and procedures you set up to ensure that the AI
innovation workflow is sufficiently responsive to changing production, implementation, and
environmental factors. These procedures and mechanisms should involve both the public-
facing, engagement dimension of your project and internal processes of reassessment,
updating, monitoring, and deprovisioning.
At all events, Deployment Phase re-assessment and other necessary monitoring, updating,
and deprovisioning activities should be determined by:
• changes in production and environmental factors that may influence the system’s
performance; and
Activities

- Balancing Values
Activities Overview
In the previous sections of this workbook, we have presented an introduction to the core
concepts of AI Sustainability. In this section we provide concrete tools for applying these
concepts in practice. Activities related to AI Sustainability in Practice Part Two will help
participants conduct and respond to SIAs throughout the design and development of AI
systems. Your team will continue engaging with the interactive case study presented
in the previous workshop, playing the role of a local authority developing an AI model
aimed to identify suitable building sites for housing development. Your team will plan
stakeholder engagement activities, schedule impact assessments, and determine how to
incorporate results from these engagements and assessments. These activities are to be
conducted following the completion of activities from AI Sustainability in Practice Part One. Although new participants may join this session, outputs from the previous workshop board are necessary materials for the delivery of this workshop.
We offer a collaborative workshop format for team learning and discussion about the
concepts and activities presented in the workbook. To run this workshop with your team,
you will need to access the resources provided in the link below. These include a digital board and printable PDFs with case studies and activities to work through.
Case studies within the Activities sections of the AI Ethics and Governance in Practice
workbook series offer only basic information to guide reflective and deliberative activities.
If activity participants find that they do not have sufficient information to address an issue
that arises during deliberation, they should try to come up with something reasonable that
fits the context of their case study.
In this section, you will find the participant and facilitator instructions required for
delivering activities corresponding to this workbook. Where appropriate, we have included
Considerations to help you navigate some of the more challenging activities.
Balancing Values
Practise weighing tensions between values when assessing the ethical permissibility of AI projects by considering consequences-based and principles-based approaches and engaging in deliberation.

Stakeholder Impact Assessment (Design Phase)
Practise using SIAs to formulate proportional monitoring activities for the development and deployment of AI models.
- The number of homeless applications has risen by 25% in the past three years.
- The number of households in temporary accommodation has risen by 53%, with an unprecedented number of applications submitted since 2020.
- 10,000 new homes
- 50% affordable homes
Your current method for allocating new development sites can take
up to ten months to complete and considers a limited number of
sites proposed by developers, landowners, and estate agents.
These sites are manually reviewed by your team to ensure they meet
policy standards (e.g. sites’ ability to provide basic amenities) and are
suitable for development in practice.
Sites that pass your review process are taken forward for a
public consultation. This gives residents the opportunity to object to
certain sites being open for planning applications. Your team considers
public input to help determine which site proposals are accepted.
Your team conducted a Stakeholder Analysis and advised your council on how to engage
stakeholders throughout the Design and Development process. The council has reviewed
your advice and has decided to open your assessment to the public.
Depending on your team’s engagement objective from Part One, one of the following applies:

They have approved your engagement objective, as they deemed that this model would have significant social impacts. Your team is now to (partner with or empower) stakeholders when conducting SIAs for this model.

They deemed that this model would have significant social impacts and have decided that your team should partner with stakeholders as an engagement objective. Your team is now to engage with stakeholders when conducting SIAs for this model.
George
60 He/him Impacted Stakeholder
Profile
George is a 60-year-old Black British man. He is a member of the Local Small Business
Association and owns a popular restaurant at one of the local high streets.
Alex
35 He/him Project Team Member
Profile
Alex is a 35-year-old Chinese man who lives in another borough. He is the Planning
Authority Lead and has been working for the council for the past six years. Alex traditionally
leads the site searching process for council developments, and will be involved in the
Housing Delivery Plan.
Hayley
40 She/her Impacted Stakeholder
Profile
Hayley is a 40-year-old white British woman on the housing register. She lives in an overcrowded flat with her family of five and is waiting for a bigger home, ideally in proximity to affordable childcare and a specialist school for her son, who has autism spectrum disorder (ASD).
Ali
17 He/him Impacted Stakeholder
Profile
Ali is a mixed-race 17-year-old local. He moved to the UK from Jamaica with his family when
he was two and has lived locally since. His family rents a house near a community garden he
helps run.
Terry
27 He/him Impacted Stakeholder
Profile
Terry is a 27-year-old Black British man. He was born and raised locally and works at a local corner store owned by a family friend.
Katherine
73 She/her Impacted Stakeholder
Profile
Katherine is a 73-year-old white British woman. She is a regular at the local library, leisure
centre, and church. She currently lives with her daughter and grandchildren but has recently
been placed on a priority waiting list within the housing register in order to move into council
housing that supports her mobility needs.
Mia
28 She/her Project Team Member
Profile
Mia is a 28-year-old British Indian woman and data scientist who rents an apartment in another borough. She has been a council employee for two years and has experience using a variety of ML techniques.
Nick
55 He/him Impacted Stakeholder
Profile
Nick is a 55-year-old white British man and electrician who has recently been placed in
emergency accommodation after losing his job and after his private tenancy agreement
wasn’t renewed.
Jamie
31 He/him Impacted Stakeholder
Profile
Jamie is a 31-year-old white British man and graphic designer for a creative agency.
Tom
59 He/him Impacted Stakeholder
Profile
Tom is a 59-year-old Black French real estate owner. He inherited a property portfolio that includes a variety of local commercial properties, which he has been managing for around 10 years.
Michael
31 He/him Project Team Member
Profile
Michael is a 31-year-old white British man and the product manager for the proposed
project.
Stakeholder Impact
Assessment (Design Phase)
Objective
Practise answering key questions within SIAs.
Role Play
In this activity, your team will conduct a Design Phase SIA. Your group will be assigned
stakeholder profiles in order to consider a variety of perspectives that may be present in
stakeholder engagements.
Team Instructions
1. This activity will start with your facilitator reading out the activity context. They will split the team into groups, each with assigned personas.

2. Once groups have been assigned, take a few minutes to individually read over the Project Proposal. Team members are to consider the note on case studies at the beginning of the activities section of this workbook, imagining how stakeholders might relate to the content.

3. Once team members have read over the Project Proposal, the team will have some minutes to answer the questions on your assigned section of the Stakeholder Impact Assessment (Design Phase). Consider how each persona might respond differently to questions.

4. A group member is to volunteer to write answers on the board and report back to the team.

5. You will then reconvene as a team, having volunteer note-takers share each group’s answers to the questions and discussing the answers.
[AI lifecycle graphic: numbered stages 1–12, including Project Planning, Data Extraction or Procurement, Preprocessing & Feature Engineering, Model Testing & Validation, System Implementation, and System Use & Monitoring, with the stages in focus highlighted.]
Problem Formulation
DESIGN
pre-approved for planning applications for developments that
include at least 50% affordable housing.
Data Extraction
DEPLOYMENT
Site Validation
Your team would review suitable sites and adjust your list as
deemed appropriate based on local policy, landowners’ interest
in development, and a public consultation. This process would
take no longer than three months since suitable sites will
reflect features of currently accepted sites, and key information
for validating sites will be found in a centralised web interface.
Outcome
Accepted sites will be made publicly available on the council
website. Planning applications for these sites will be deemed
pre-approved, waiving time-consuming elements of
planning application reviews, such as consultations
with neighbours. Your work reviewing applications would be
reduced to verifying compliance with building standards (i.e.
compliance with accessibility, health and safety) and requesting
any necessary adjustments. Application results are to be
delivered within two working weeks or deemed approved in the
event of no response. Your public map will be automatically
updated as permissions are granted, reflecting availability.
Each site in the dataset is represented by features that the Random Forest uses to
determine whether the site is suitable.

Majority Voting

Random Forest models determine classifications based on the majority vote of a large
number of individual decision trees (flow charts analysing features that lead to a
classification).
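The majority-voting mechanism described above can be sketched in a few lines of code. This is a toy illustration, not the council's actual model: the three hand-written "trees" and the feature names (`distance_to_services_km`, `has_utilities`, `pct_green_space`) are hypothetical stand-ins for trained decision trees.

```python
# Minimal sketch of majority voting, as used by Random Forest classifiers.
# The three "trees" below are hand-written stand-ins for trained decision
# trees; feature names and thresholds are hypothetical.
from collections import Counter

# Each tree is a tiny flow chart: it inspects a site's features and
# returns "suitable" or "unsuitable".
def tree_1(site):
    return "suitable" if site["distance_to_services_km"] < 2.0 else "unsuitable"

def tree_2(site):
    return "suitable" if site["has_utilities"] else "unsuitable"

def tree_3(site):
    return "suitable" if site["pct_green_space"] < 30 else "unsuitable"

def forest_classify(site, trees):
    """The forest's classification is the majority vote of its trees."""
    votes = Counter(tree(site) for tree in trees)
    return votes.most_common(1)[0][0]

site = {"distance_to_services_km": 1.2, "has_utilities": True, "pct_green_space": 45}

# Two of the three trees vote "suitable", so the majority classification
# is "suitable" despite tree_3's dissenting vote.
label = forest_classify(site, [tree_1, tree_2, tree_3])
print(label)
```

In a real Random Forest, hundreds of trees are trained on random subsets of the data, but the same vote-counting step is what turns many individual flow charts into a single classification.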
Stakeholder Impact
Assessment (Design Phase)
1. Read the following statement to the team:
Our team has conducted an iteration of the Stakeholder Engagement Process and advised
our council on an engagement objective for this project. The council has determined we
will conduct our Design Phase SIA by (partnering with or empowering, depending
on your team’s engagement objective, determined in Part One of this workshop)
stakeholders and engaging them in a citizens’ panel. They have provided a model proposal
containing further details about the model and its intended placement within our team.
2. Give the team some minutes to read the instructions for this activity. When they
finish reading the instructions, ask them if they have any questions.

Group 1 (Goal Setting and Objective Mapping):

• Terry (Impacted Stakeholder)
• Mia (Project Team Member)
When facilitating the discussion on potential harms, it may be useful to refer back to
the 'Origins of the SUM Values: Drawing principles from real-world harms' section of AI
Sustainability in Practice Part One.
In particular, the mapping of risks that emerge from
the use of AI/ML technologies to the ethical concerns underwriting responsible AI/ML
research and innovation provides a helpful starting point for examining potential negative
impacts:[34]
For instance: residents may not be sufficiently consulted about a development project
that results from the AI model’s use and may thus lose a sense of agency and autonomy;
meaningful community participation in decision-making about local affairs may be
circumvented by the use of the model, harming social solidarity and interpersonal
connection;[35] the significance of the professional judgment of city planners and local
officials may be diminished through this form of automation, thereby harming their
agency and decision-making authority; certain citizens may be displaced or severely
inconvenienced as the result of the automated classification of suitability without having a
say in the way suitability is being defined by the system, thereby harming their sense of
autonomy and agency.[36]
For instance, poor quality data (e.g. city records or property and land use information that
contain human errors), gaps in measurement (e.g. poor or inconsistent recording of
geographic proximity to essential services and amenities), or out-of-date information
(e.g. dated or obsolete information about current property use, essential services and
amenities, or access to utilities) can lead to outputs that inaccurately indicate a site's
suitability for development and that end up harming the wellbeing of future residents, who
then have to inhabit inhospitable or deprived living environments.[37][38][39][40]
Balancing Values
Objective
Practise balancing and navigating tensions between values when assessing the ethical
permissibility of AI projects. Learn to employ consequences-based and principles-based
approaches when engaging in deliberation.
Activity Context
When answering questions about the possible impacts of the proposed system, members
of your team may have noticed that, at times, different values come into tension with each
other. Decisions that are made around how to balance these tensions can both influence
the direction that AI projects take, and shape their outcomes.
In this activity, your team will consider three values that can come into conflict when
considerations are undertaken about the ethical permissibility of using a model to
automate the site selection process. Your team is to consider these tensions from both
consequences-based and principles-based approaches, establishing a plan for how you will
balance these values. This plan will be used both to inform your recommendation to the
council on whether to develop the system, and to specify any amendments you would need
to the Project Proposal in order to consider it ethically permissible.
Your group will be assigned a pair of conflicting values in this activity. The goal is for your
group to keep in mind the identities and circumstances of the stakeholder profiles in order
to consider how they might evaluate tensions discussed in this activity.
Team Instructions
Balancing Values
1. Give the team a moment to read over the activity instructions, answering any
questions.

2. Next, split the team into groups, each assigned a pair of conflicting values. Let
the team know that they will have some minutes to answer the questions and
come up with a plan.

5. Next, lead a team discussion about the extent to which these plans address any
anticipated questions or concerns in the Design Phase SIA, and how you might
adjust the plans to address these.

• Co-facilitator: Write these answers on sticky notes, placing them under
the Potential Project Benefits section on the board.
Revisiting Engagement
Method
Objective
Undertake practical considerations of resources, capacities, timeframes, and logistics,
as well as stakeholder needs, to establish an engagement method for the following SIA.
Activity Context
Your team has conducted the Design Phase SIA and shared it with your council along with
your recommendation regarding the development of the proposed project. The council has
chosen to move forward with the project but incorporated the following amendments to the
project proposal:
Amendments
• The target variable of suitability will now indicate that sites categorised as
suitable will, once passing your team’s review and public consultation, be made
publicly available for developers to submit planning applications.
• Your team will review individual applications in a process that will include
consulting with neighbours of specific sites. Your team will use the model’s web
interface to assess centralised information about each site, streamlining the
process while enabling human oversight.
With the help of Mia, your team's Data Scientist, your team has designed and
developed the model. You are now in the Model Reporting step within the
Development Phase of the lifecycle and are finishing up your Development
Phase SIA, which you are conducting through a Citizens' Jury. You now need
to schedule proportional Development Phase assessments and engagement
activities.
1. Take a moment to individually look over the activity instructions and the
Development Phase SIA, the Model Performance Metrics Report, and the
updates on the Project Lifecycle section on the board.

2. Your team will be split into two groups. Go to your relevant team's instructions
below to continue the activity.
Project Lifecycle
Group 1 Instructions

• Which methods meet your established engagement objective?

• What resources are available for conducting engagements?

3. Your group note-taker is to write out answers within the Notes section of this
activity.

4. Once instructed to by your facilitator, jump to the Full Team Instructions.

Group 2 Instructions

1. In your group, discuss the results of the Development Phase SIA. Consider the
questions:

• How might the model update harm stakeholders?

4. Your group note-taker is to write out answers within the Notes section of this
activity.

5. Once instructed to by your facilitator, jump to the Full Team Instructions.

Full Team Instructions

1. Reconvene as a group, having group note-takers present their chosen methods and
each group's reasoning.

2. Have a group discussion about what method might be best suited to balance
practical constraints with stakeholder needs.

5. Consider what feedback mechanisms will be in place.

6. Your co-facilitator will place the established Engagement Method Card on the
appropriate section of the Project Lifecycle on the board, and outline
engagement details within the card.
After reviewing the results of your initial SIA, answer the following questions:
• Are the trained model’s actual objective, design, and testing results still in line
with the evaluations and conclusions contained in your original assessment? If
not, how does your assessment now differ?
RESULTS
The model has been adjusted to address concerns with the original
assessment:
• The Deployment Phase of this project has been adjusted for the
model to be deployed in increments, expansion being subject to
SIAs.
RESULTS
The model has been updated to account for a new planning policy that
enables commercial buildings to be repurposed for housing development.
The model now considers commercial buildings as potentially suitable
sites. This update has enabled the model to identify suitable sites
accurately under current local policy.
During model development, our team determined that the model was not
generating enough suitable sites due to a feature indicating the
percentage of green or public space within sites. The model was only
classifying sites with a low percentage of green or public spaces
as suitable. Our team removed this feature in order for the model to
generate a greater number of sites, irrespective of the percentage of
public or green space within these.
As these changes were not accounted for in the Design Phase SIA, our
team will closely monitor potential issues and feedback.
Precision: 97%
Number of true positives divided by the number of all sites classified as suitable
(true positives and false positives).

Accuracy: 97%
Number of correct classifications divided by the total number of classifications made.

Recall: 95%
Number of true positives divided by the number of all actual suitable sites
(true positives and false negatives).
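These three definitions can be checked with a short calculation. The confusion-matrix counts below are hypothetical, chosen only to show how each formula combines true/false positives and negatives; they do not reproduce the report's exact figures.

```python
# Hypothetical confusion-matrix counts for a site classifier; the numbers
# are illustrative only, chosen so the formulas are easy to follow.
tp = 95   # suitable sites correctly classified as suitable (true positives)
fp = 3    # unsuitable sites classified as suitable (false positives)
fn = 5    # suitable sites classified as unsuitable (false negatives)
tn = 97   # unsuitable sites correctly classified as unsuitable (true negatives)

# Of all sites classified as suitable, how many really are suitable?
precision = tp / (tp + fp)

# What share of all classifications was correct?
accuracy = (tp + tn) / (tp + fp + fn + tn)

# Of all actually suitable sites, how many did the model find?
recall = tp / (tp + fn)

print(f"precision={precision:.2f} accuracy={accuracy:.2f} recall={recall:.2f}")
```

Note the different denominators: precision penalises false positives (unsuitable sites wrongly offered for development), while recall penalises false negatives (genuinely suitable sites the model misses).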
Revisiting Engagement
Method
1. Give the team some minutes to individually look over instructions for this
activity, as well as the Development Phase SIA and the Model Performance
Metrics Report.

2. When time is up, ask the team if they have any questions.

• Group 2 will discuss stakeholder needs.

4. Give each group some minutes to decide on an engagement method by discussing
the questions in their group instructions.

• How might the team ensure that method feeds useful information to
your SIA?

7. After the team discussion, ask the team to vote on an engagement method.

• Co-facilitator: place the established Engagement Method Card on the
'System Use and Monitoring' step within the Project Lifecycle on the
board.

9. Consider what feedback mechanisms will be in place.
Stakeholder Impact
Assessment (Deployment
Phase)
Objective
Practise using SIAs to formulate proportional monitoring activities for the development and
deployment of AI models.
Activity Context
Your team has deployed the system within an initial deployment area, and you are due to
conduct your first Deployment Phase SIA.
Team Instructions
1. In this activity, your team will be split into three groups. Each group will be
assigned samples of stakeholder feedback that represent wider stakeholder
reactions to the deployment of the model.

2. Each group will have an assigned note-taker who is to record team discussions
on the group's section on the board and report back to the team, considering:

• What was the feedback sample and how, if at all, was it connected
to production, implementation, or environmental factors?

• What harmful impacts were raised by this sample?

• Did your team decide updating or deprovisioning was a better option?
What informed this choice?

Part One: Production, Implementation, and Environmental Factors

3. In your groups, discuss how your assigned feedback samples may be connected to
changes in production and implementation factors, or to environmental factors.

• Revisit the section Case Study: Challenges to AI sustainability in AI
for Urban Planning of the workbook for support in this discussion.

Part Two: SIA Question

4. Next, turn to the Deployment Phase SIA question assigned to your group
on your section of the board, and have a group discussion to come up with an
answer to the question. Consider any harmful impacts that have arisen from the
deployment of this system.
Updating or Deprovisioning
Stakeholder Impact
Assessment (Deployment
Phase)
1. In this activity, your team will be split into groups. Each group will be assigned
samples of stakeholder feedback that represent wider stakeholder reactions
to the deployment of the model, and a relevant Deployment Phase SIA question.

2. Harmful impacts raised by each feedback sample are connected to a production,
implementation, or environmental factor. Each group will deliberate on
what production, implementation, or environmental change their assigned
feedback sample may be connected to:

4. Next, split the team into groups, asking for a volunteer note-taker per group who
will report back to the team.

5. Give the team enough minutes to conduct this activity. Inform the team of the
maximum allocated minutes for each part of this activity:

• Part One: Production, implementation, and environmental factors (10 minutes)

• Part Two: SIA question (10 minutes)

• Part Three: Updating or deprovisioning (15 minutes)

9. Once all decisions have been shared and discussed, consider the overall group view
on updating or deprovisioning the model. Based on the overall group view, choose
the corresponding scenario from the following section to read out to the group.
These scenarios represent the outcomes of updating or deprovisioning the model.
The model has been updated to include current datasets, and an updating protocol has been
set up to ensure data remains timely and relevant. The feature indicating the percentage
of green or public spaces within sites was re-integrated into the model to ensure it
doesn't select sites that have a significant amount of green space, and that fewer sites
are consequently selected. Model updates that would result in outputs that are at odds
with planning policy were not permitted, but the council has taken note of stakeholders'
feedback for further consultation on the policy itself.
Local development has continued to grow at a pace that meets the target in our 10-year
housing plan, and the model is utilising datasets that accurately reflect real-world access to
essential services, safeguarding the quality of outputs. Residents have responded positively
to the shift to decrease the scale of development and limit the percentage of green and
public spaces being used, although rising prices continue to be an area of concern.
Our team does, however, continue to receive negative feedback regarding the impact of
the model on local businesses. Residents critique the time-consuming nature of updating
local policy compared to the speed at which commercial sites are being repurposed for
housing. We also continue to receive negative feedback regarding what constitutes the
definition of affordable housing.
Model deployment has been stopped while there is public consultation regarding:
3. the constraints that are to be put on what types of commercial sites can be
repurposed for housing.
The outputs of this consultation will serve to define objectives of a new project, for
which components of the current project (i.e. re-validated datasets, model) will serve as
a foundation. Residents are happy to be involved in defining outputs once these points
are democratically addressed. The new project is likely to provide outputs that reflect
residents’ self-articulated interests.
The pace of local housing development has, however, temporarily returned to the growth
rate it had prior to the deployment of this project. This is challenging our team’s ability to
meet the targets in our 10-year housing plan.
Feedback Samples
The model has attracted much more development in a short time period. Our
team has been able to review planning applications faster while considering
residents’ input. Using the model has been useful, but we will need to review all
available feedback prior to assessing next steps.
• Quote from Terry, Local Resident, highlighting harmful impacts of the model, including
high rates of development pricing-out of local residents:
We are seeing development left right and centre, bringing people from outside
the area who can actually afford to rent or buy. It doesn’t seem like the council is
interested in those of us who have always been here. There are new shops
none of us can afford, public spaces turned private, rent prices going up. Your
plan is helping change our neighbourhood for the worse. I myself need affordable
housing, but this plan is kicking us out.
• Quote from Ali, Local Resident, highlighting harmful impacts of the model including
green spaces being built over:
A planning application has been submitted for a development to be built over our
community garden. We won’t let this happen. The council needs to protect the
green spaces that make this neighbourhood a community.
• Q: How does the content of the existing SIA compare with the real-world impacts of
the AI system as measured by available evidence of performance, monitoring data,
and input from implementers and the public?
• A: The deployment of the model seems both to confirm many of the concerns
expressed in the original SIA and to uncover new harms that were previously
unanticipated. Concerns about affordability and the inequitable impacts of gentrification
and displacement have been validated in light of the rapid pace of development and
the influx of new residents. Concerns about the diminishment of public and green
spaces have been confirmed by spreading privatisation and the filing of new planning
applications, though there is disagreement among residents about the costs and
benefits of streamlined planning.
• There are likely significant constraints to changing the meaning attributed to the
target variable, namely the need for extensive stakeholder consultation and council
approval.
Feedback Samples
• Quote from Katherine, Local Resident, highlighting model categorising sites without
access to essential services as suitable:
It was such a relief to hear I was one of the first people offered a disabled-
adapted home in these new houses; I can't even walk up the flight of stairs
in my current flat! It is a shame that the only leisure centre within two
kilometres of the building was closed last year. I took the house because I
simply cannot stay here, but I don't know what I will do without my exercise
routine. This is something that needs to be thought about.
• Q: Have the maintenance processes for your AI model adequately taken into account
the possibility of distributional shifts in the underlying population? Has the model been
properly re-tuned and re-trained to reflect changes in the environment?
• A: Katherine's feedback indicates that the model has not been adequately updated
to keep pace with the relevant distributional shift (i.e. that the closing of the leisure
centre has changed certain people's access to essential services). This suggests that
more frequent model updating may be necessary.
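The kind of distributional shift Katherine's feedback points to can also be checked for programmatically between SIA iterations. The sketch below is purely illustrative: the distance values and the tolerance threshold are invented, and a real monitoring process would use proper statistical tests over many features rather than a single mean comparison.

```python
# Hypothetical sketch of a distributional-shift check on one feature.
# All values and the tolerance threshold are invented for illustration.
from statistics import mean

# Distance to the nearest essential service (km) for candidate sites, as
# recorded at training time vs. re-measured after the leisure centre closed.
training_distances = [0.4, 0.8, 1.1, 0.6, 0.9, 1.3, 0.7]
current_distances = [1.9, 2.4, 2.1, 1.8, 2.6, 2.2, 2.0]

# How far has the average moved since the model was trained?
shift = mean(current_distances) - mean(training_distances)

# A simple trigger: if the mean has moved by more than a set tolerance,
# flag the model for re-tuning and re-training on refreshed data.
TOLERANCE_KM = 0.5
needs_retraining = abs(shift) > TOLERANCE_KM
print(needs_retraining)
```

Scheduling a check like this as part of routine monitoring makes "has the environment changed?" an explicit, answerable question rather than something only surfaced by stakeholder complaints.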
Feedback Samples
• Quote from Mia, Project Data Scientist, highlighting the model’s ability to identify sites
that meet requirements set out in current planning policy:
Having tested, validated, and verified the system, our team was happy to see the
model deliver strong performance and safety metrics. Having incorporated
new features, our model is also up to date with local policy.
• Quote from George, Local Business Owner, highlighting harmful impacts of promoting
commercial sites for residential repurposing, namely its correlation with local
businesses not receiving rental contract renewals:

Your model is closing down long-standing local businesses. More and more
property owners are refusing to renew our contracts. By publishing commercial
buildings, you have attracted purchase offers that small business owners simply
cannot match.
• Q: Have any unintended harmful consequences ensued in the wake of the deployment
of the system?
• A: Incentives for building owners to sell to property developers, who are converting
commercial buildings to residential properties, are driving local businesses out of
their spaces. Though the pace of local development is allowing for the local authority
to meet its targets, negative impacts on local businesses have been an unintended
harmful consequence of this success.
• Updating the model for it to not categorise commercial sites as suitable would entail
significant stakeholder consultation and is likely to raise tensions as the system’s
categorisations would be at odds with policy.
2 Harrington, C., Erete, S., & Piper, A. M. (2019). Deconstructing Community-Based
Collaborative Design: Towards More Equitable Participatory Design Engagements.
Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1-25.
https://doi.org/10.1145/3359318

3 OECD. (2005). Evaluating Public Participation in Policy Making.
https://doi.org/10.1787/9789264008960-en

4 OECD. (2013). Government at a Glance 2013.
https://doi.org/10.1787/gov_glance-2013-en

5 Dawkins, C. E. (2014). The principle of good faith: Toward substantive stakeholder
engagement. Journal of Business Ethics, 121, 283-295.
https://doi.org/10.1007/s10551-013-1697-z

6 Catapult. (2019, May 11). Building a 21st century digital planning system: A quick
start guide. UKRI Innovate UK. https://cp.catapult.org.uk/news/building-a-21st-century-digital-planning-system-a-quick-start-guide/

7 Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide
for the responsible design and implementation of AI systems in the public sector.
The Alan Turing Institute. https://doi.org/10.5281/zenodo.3240529

9 Grice, H. P. (1975). Logic and Conversation. In P. Cole, & J. L. Morgan (Eds.),
Syntax and Semantics, Volume 3, Speech Acts (pp. 41-58). Academic Press.

10 Arendt, H. (1958). The human condition. University of Chicago Press.

11 Bachrach, P., & Baratz, M. (1962). Two faces of power. American Political Science
Review, 57(4), 947–952. https://doi.org/10.2307/1952796

12 Bohman, J. (2000). Public deliberation: Pluralism, complexity, and democracy.
MIT Press.

13 Gutmann, A., & Thompson, D. (1996). Democracy and disagreement. Harvard
University Press.

14 Habermas, J. (1984). The theory of communicative action I: Reason and the
rationalization of society. Beacon Press.

15 Hindess, B. (1996). Discourses of power: From Hobbes to Foucault. Blackwell
Publishers.

16 Cohen, J. (1989). Deliberation and democratic legitimacy. In A. Hamlin, & P. Pettit
(Eds.), The good polity: Normative analysis of the state (pp. 17–34). Basil Blackwell.

17 Manin, B. (1987). On legitimacy and political deliberation. Political Theory, 15(3),
338–368.
18 McLeod, J. M., Scheufele, D. A., Moy, P., Horowitz, E. M., Holbert, R. L., Zhang, W.,
Zubric, S., & Zubric, J. (1999). Understanding Deliberation: The effects of discussion
networks on participation in a public forum. Communication Research, 26(6), 743–774.
https://doi.org/10.1177/009365099026006005

19 Przeworski, A. (1998). Deliberation and ideological domination. In J. Elster (Ed.),
Deliberative Democracy (pp. 140–160). Cambridge University Press.

20 Landwehr, C. (2014). Facilitating deliberation: The role of impartial intermediaries
in deliberative mini-publics. In K. Grönlund, A. Bächtiger, & M. Setälä (Eds.),
Deliberative mini-publics: Involving citizens in the democratic process (pp. 77-92).
ECPR Press.

21 See for instance Nagoda, S., & Nightingale, A. J. (2017). Participation and power in
climate change adaptation policies: Vulnerability in food security programs in Nepal.
World Development, 100, 85-93. https://doi.org/10.1016/j.worlddev.2017.07.022

22 Lupia, A., & Norton, A. (2017). Inequality is always in the room: Language & power
in deliberative democracy. Daedalus, 146(3), 64-76.
https://doi.org/10.1162/DAED_a_00447

23 Mendelberg, T., Karpowitz, C. F., & Oliphant, J. B. (2014). Gender Inequality in
Deliberation: Unpacking the Black Box of Interaction. Perspectives on Politics, 12(1),
18–44. https://doi.org/10.1017/S1537592713003691

24 Mendelberg, T., & Oleske, J. (2000). Race and public deliberation. Political
Communication, 17(2), 169-191. https://doi.org/10.1080/105846000198468

25 Parsons, M., Fisher, K., & Nalau, J. (2016). Alternative approaches to co-design:
insights from indigenous/academic research collaborations. Current Opinion in
Environmental Sustainability, 20, 99-105. https://doi.org/10.1016/j.cosust.2016.07.001

26 Lupia, A., & Norton, A. (2017). Inequality is always in the room: Language & power
in deliberative democracy. Daedalus, 146(3), 64-76.
https://doi.org/10.1162/DAED_a_00447

27 Tschakert, P., Das, P. J., Pradhan, N. S., Machado, M., Lamadrid, A., Buragohain, M.,
& Hazarika, M. A. (2016). Micropolitics in collective learning spaces for adaptive
decision-making. Global Environmental Change, 40, 182-194.
https://doi.org/10.1016/j.gloenvcha.2016.07.004

28 Garcia, A., Tschakert, P., Karikari, N. A., Mariwah, S., & Bosompem, M. (2021).
Emancipatory spaces: Opportunities for (re)negotiating gendered subjectivities and
enhancing adaptive capacities. Geoforum, 119, 190-205.
https://doi.org/10.1016/j.geoforum.2020.09.018

29 Leslie, D., Katell, M., Aitken, M., Singh, J., Briggs, M., Powell, R., Rincon, C.,
Perini, A. M., & Jayadeva, S. (2022). Data Justice in Practice: A Guide for
Policymakers. The Alan Turing Institute in collaboration with The Global Partnership
on AI. https://doi.org/10.5281/zenodo.6429475

30 Glaberson, S. K. (2019). Coding over the cracks: Predictive analytics and child
protection. Fordham Urban Law Journal, 46(2), 307-363.
https://ir.lawnet.fordham.edu/ulj/vol46/iss2/3
31 The Office for Standards in Education, Children's Services and Skills (Ofsted). (2023).
Inspecting local authority children's services.
https://www.gov.uk/government/publications/inspecting-local-authority-childrens-services-from-2018/inspecting-local-authority-childrens-services#inspection-methodology

32 Sideris, N., Bardis, G., Voulodimos, A., Miaoulis, G., & Ghazanfarpour, D. (2019).
Using Random Forests on Real-World City Data for Urban Planning in a Visual Semantic
Decision Support System. Sensors, 19(10), 2266. https://doi.org/10.3390/s19102266

33 Sideris, N., Bardis, G., Voulodimos, A., Miaoulis, G., & Ghazanfarpour, D. (2019).
Using Random Forests on Real-World City Data for Urban Planning in a Visual Semantic
Decision Support System. Sensors (Basel, Switzerland), 19(10), 2266.
https://doi.org/10.3390/s19102266

34 Learn more about risks in urban planning through Koseki, S., Jameson, S., Farnadi, G.,
Rolnick, D., Régis, C., Denis, J., Leal, A., de Bezenac, C., Occhini, G., Lefebvre, H.,
Gallego-Posada, J., Chehbouni, K., Molamohammadi, M., Sefala, R., Salganik, R.,
Yahaya, S., & Téhinian, S. (2022). AI and Cities: Risks, Applications and Governance.
UN-Habitat. https://unhabitat.org/sites/default/files/2022/10/artificial_intelligence_and_cities_risks_applications_and_governance.pdf

35 Ryan, R. M., & Deci, E. L. (2017). Self-determination theory: Basic psychological
needs in motivation, development, and wellness. Guilford Publications.

37 Kilkenny, M. F., & Robinson, K. M. (2018). Data quality: "Garbage in–garbage out".
Health Information Management Journal, 47(3), 103-105.
https://journals.sagepub.com/doi/pdf/10.1177/1833358318774357

38 Babbage, C. (1864). Passages from the life of a philosopher. Longman, Green,
Longman, Roberts, and Green.

39 Mellin, W. (1957). Work with new electronic 'brains' opens field for army math
experts. The Hammond Times, 10, 66.

40 Suresh, H., & Guttag, J. V. (2019). A framework for understanding unintended
consequences of machine learning. arXiv preprint arXiv:1901.10002.
https://doi.org/10.48550/arXiv.1901.10002

41 O'Neil, C. (2017). Weapons of math destruction: How big data increases inequality
and threatens democracy. Crown.

42 Prince, A. E., & Schwarcz, D. (2020). Proxy discrimination in the age of artificial
intelligence and big data. Iowa Law Review, 105(3), 1257-1318.
https://ssrn.com/abstract=3347959

43 d'Alessandro, B., O'Neil, C., & LaGatta, T. (2017). Conscientious classification:
A data scientist's guide to discrimination-aware classification. Big Data, 5(2),
120-134. https://doi.org/10.1089/big.2016.0048
Bibliography and Further
Readings
AI in Urban Planning
Cabinet Office, & Geospatial Commission. (2021). Planning and Housing Landscape Review —
Executive Summary. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/965740/Planning_and_Housing_Landscape_Review.pdf

Geospatial Commission. (2019). Future Technologies Review.
https://www.gov.uk/government/publications/future-technologies-review

Sideris, N., Bardis, G., Voulodimos, A., Miaoulis, G., & Ghazanfarpour, D. (2019).
Using Random Forests on Real-World City Data for Urban Planning in a Visual Semantic
Decision Support System. Sensors (Basel, Switzerland), 19(10), 2266.
https://doi.org/10.3390/s19102266

The Open Data Institute. (2020, August 6). Case study: Unlocking data on brownfield sites.
https://theodi.org/article/case-study-unlocking-data-on-brownfield-sites/
To find out more about the AI Ethics and
Governance in Practice Programme please visit:
aiethics.turing.ac.uk
This work is licensed under the terms of the Creative Commons Attribution
License 4.0 which permits unrestricted use, provided the original author
and source are credited. The license is available at:
https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode