Case Study
Responsible Use of Technology: The Microsoft Case Study
WHITE PAPER
FEBRUARY 2021
Contents
Foreword
3.1 Governance
4 Microsoft AI principles
Conclusion
Contributors
Endnotes
Foreword
The World Economic Forum Centre for the Fourth Industrial Revolution was launched in 2017 with the mandate to co-create policy and governance frameworks through a multistakeholder approach to accelerate the adoption of emerging technologies. The Centre's platforms include areas such as artificial intelligence and machine learning, blockchain, data policy and the internet of things. At the heart of this work is the drive towards action, transparency, ethics and global public good.

Today, the global coronavirus pandemic continues to cast a dark shadow over all facets of society. As social distancing measures became a necessity to preserve public health, digital transformation became a requirement for most businesses to simply survive. The urgency to maximize the benefits of technology, while mitigating its risks and harms, has never been greater. It is incumbent on all organizations that design, develop, procure, deploy and use technology to do so in a responsible manner.

In our numerous conversations with leaders across sectors, we have learned that a gap exists between organizations' desire to act ethically and their understanding of how to follow through on their good intentions. We refer to this as an intention–action gap. To this end, the Centre is focused on providing practical resources for organizations to operationalize ethics in technology. This initiative, which began in 2019 with active participation from civil society, governments and companies, made the case for both human-rights-based and ethics-based approaches to the responsible use of technology. To help bridge this intention–action gap, we aim to provide leaders with practical tools for how they might: 1) educate and train their employees to think more about responsible technology; 2) design their organization to promote more ethical behaviour and outcomes; and 3) design and develop more responsible technology products.

It is with this last goal in mind that the World Economic Forum and the Markkula Center for Applied Ethics at Santa Clara University publish this White Paper, the first in a series of case studies highlighting tools and processes that facilitate responsible technology product design and development. This initial document, "Responsible Use of Technology: The Microsoft Case Study", will be followed by other companies' examples of ethical practices and tools in future papers. We thank Microsoft for sharing its responsible innovation tools, practices and expertise for this effort. It is our hope that this document will inspire others to contribute to the Forum's Responsible Use of Technology project by sharing tools and methods that businesses have created for the same purpose.

Achieving these ambitious goals requires the collaboration of all global stakeholders. The World Economic Forum, committed to improving the state of the world, is the International Organization for Public-Private Cooperation. The Markkula Center for Applied Ethics at Santa Clara University in California, a key partner in this project, has over 30 years of history and experience in promoting ethical deliberation. Together, we are pleased to collaborate towards this ambitious vision.
3.1 Governance
As for the "spokes" in this governance approach, Microsoft has found consistent success in deploying a "Champs" model, in which respected domain experts across teams and regions are appointed by leadership.

Shifting the culture of a corporation is a monumental feat, but it is certainly not impossible, and the resources in industry, civil society and academia for thinking about these efforts are excellent.10
These six principles act as a mental framework within which to organize thinking about ethics at Microsoft.
While specifically phrased as responsible AI principles, they have relevance for much of the work in the
technology industry. A brief explanation of each principle follows:
Fairness
Focuses on developing systems that treat everyone in a fair and balanced way. This principle acknowledges that defining and mitigating fairness issues for a system depend on understanding the system's purpose and context of use, and that a system's fairness reflects decision-making during both development and deployment.

Reliability and safety
Means developing systems that are robust and capable of maintaining safe operations even in worst-case scenarios. This principle encompasses consideration of the harms that might come from a technology, and ways employees can strive to minimize those risks, so technologies can give the greatest benefits to their users.

Privacy and security
Seeks to protect data and use data in a way that is secure for all stakeholders. Privacy is a basic right and protecting it is crucial for ensuring that stakeholders can trust companies with their data. Data must be secure at all stages and, to further this end, actions must be taken to institutionalize privacy and security for the data companies are responsible for.

Inclusiveness
Makes sure that no one is left out of the design, development, deployment and use of technology. Communities across the full spectrum of humanity should be meaningfully engaged and empowered by technology, and technology should not be limited to only a few privileged communities. This inclusion should not only involve building for, but building with, the diverse stakeholders.

Transparency
Seeks to create technology that is intelligible and explainable, not only to those who are developing the technology but also to those who will be using it, or will be affected by it. Stakeholders should be able to interpret and understand what a technology is doing and why it is acting that way. This allows product teams to contextualize and improve results.

Accountability
Means that people take responsibility for the way technology operates and for the impact of that operation on society. This includes considering the structures that can be implemented to ensure accountability at multiple levels, including design, development, sales, marketing and use, as well as advocacy for the regulation of technologies when warranted.

Source: Microsoft

Simply having principles does not change a culture unless those principles are made concrete through tools and practices that help employees work through how to think ethically. To advance these principles and make sure they are implemented into the company's workflows, Microsoft developed several tools for incorporating applied ethics in technology. All of these tools serve an ethical end; some are more procedural, while others are more technical in nature.
To help cultivate empathy during its product creation process, Microsoft's Ethics & Society team created the Judgment Call game.15 The game is an interactive team-based activity that puts Microsoft's AI principles of fairness, privacy and security, reliability and safety, transparency, inclusiveness and accountability into action. During the game, each participant is given a card that assigns them a role as an impacted stakeholder of a digital product (e.g. product manager, engineer, consumer). Each is also given a card that represents one of Microsoft's AI principles, and a card with a number from 1 to 5, representing the stars in a ratings review. Participants are asked to write a review of the digital product from the perspective of their assigned role, principle and rating number. Each player is then asked to share and discuss their review.

The game has a number of benefits:

– Engineers, product managers, designers and technology executives consider the perspectives of the impacted stakeholders and imagine the potential outcomes of their product for these stakeholders.

– Although the game does not replace the valuable benefits of interacting directly with stakeholders, it builds empathy, especially early in the product design process.

– Roles are arbitrarily assigned to participants through the random distribution of the cards. The game's dynamics create a safe environment for product team members to discuss potentially sensitive ethical topics.
Source: Microsoft
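To make the card mechanics concrete, the following is a minimal sketch of a round of dealing in Python. The role list, seed handling and deal logic are illustrative assumptions for this sketch, not Microsoft's published implementation of the game.

# Illustrative sketch of a Judgment Call card deal (assumed mechanics):
# each player receives a stakeholder role, one AI principle, and a
# 1-5 star rating from which to write their review.
import random

PRINCIPLES = ["fairness", "privacy and security", "reliability and safety",
              "transparency", "inclusiveness", "accountability"]
# Example stakeholder roles; a real deck would be tailored to the product.
ROLES = ["product manager", "engineer", "consumer", "bystander", "regulator"]

def deal_cards(players, seed=None):
    """Randomly assign each player a role, a principle and a star rating."""
    rng = random.Random(seed)
    return {
        player: {
            "role": rng.choice(ROLES),
            "principle": rng.choice(PRINCIPLES),
            "stars": rng.randint(1, 5),  # star rating for the mock review
        }
        for player in players
    }

for player, hand in deal_cards(["Ada", "Grace", "Alan"], seed=42).items():
    print(f"{player}: write a {hand['stars']}-star review as a "
          f"{hand['role']}, focusing on {hand['principle']}")

Dealing the three cards independently keeps each review's combination of perspective, principle and severity unpredictable, which is part of what creates the safe distance for discussing sensitive topics.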
Developed by the Project Tokyo team (see the Project Tokyo text box below) from Microsoft Research, Engineering Learning & Insights and the Office of Responsible AI, the Envision AI workshop is an exercise that educates Microsoft teams on how to conduct an impact assessment, a process required in the Responsible AI Standard. In an interactive and engaging setting, Envision AI participants examine real scenarios that occurred while developing the assistive AI system in Project Tokyo. They learn the human-centric design approach to AI and the resources available to them to identify the potential effects of the technology on the stakeholders. Participants apply these lessons to completing an impact assessment. The Envision AI workshop helps Microsoft empower teams to conduct ethical deliberations on their own and take responsibility for the implications of the products they create.
Conducting an impact assessment is a required step in the product development process of all AI projects at Microsoft. Teams complete an extensive questionnaire, which takes into account the intended use cases of a product and its potential impacts on stakeholders, and a self-assessment of the potential risks. The completed impact assessments are reviewed by peers and executives at the company. This process is facilitated and required by the company's Office of Responsible AI. It serves as an important tool to help ensure the responsible development and deployment of AI across Microsoft.
Community Jury is a technique that allows project teams to interact directly with impacted stakeholders.16 A group of representative stakeholders from diverse backgrounds is recruited and selected to be jury members. During a Community Jury session, project teams provide the jury members with an overview of the product's purpose and its potential use cases, benefits and harms. Participants share information and discuss their perspectives on the product's impacts with the facilitation of a neutral moderator. As key themes emerge from the discussion, the participants jointly define the opportunities and challenges presented by the technology. This process can also lead to co-created solutions.

The planning process starts by aligning goals and outcomes with the project teams. It is important that product teams allocate time in the product development process to conduct Community Jury sessions. Based on the project objectives, jury recruitment and selection should be diverse and inclusive. Strong session facilitation by a neutral moderator is important to allow every voice to be heard. Moderators need to provide ample time for knowledge sharing, deliberation and co-creation. Finally, it is important to disseminate a report on the Community Jury outcome that summarizes the key insights for transparency.

The benefits of the Community Jury technique from an ethical perspective are multifaceted. Product teams and impacted stakeholders are brought together to learn from each other's perspectives. The proximity and connection help build community and empathy. Especially for product teams, stakeholder engagements raise awareness of issues that were not readily apparent when the new technologies were conceived. This process can build consensus among teams and their communities on the challenges and opportunities that technological innovations may pose. It also presents a vital opportunity for teams to improve their products and solutions to benefit a larger group of stakeholders.
Some of Microsoft's tools for considering ethics are technical devices to understand, assess and mitigate the ethical risks of machine learning models. They serve multiple ethical AI principles – namely, that AI must be fair, reliable, inclusive, transparent and accountable. These software tools are constantly being developed, refined and changed.

Fairlearn
Fairlearn is an open-source toolkit designed for data scientists, developers, business stakeholders and researchers to help them assess and improve fairness in machine learning.17 Fairlearn has two main components: 1) a set of fairness assessment metrics and an interactive data visualization dashboard, which provide an understanding of how particular groups may be adversely affected by models and allow a comparison of fairness and performance metrics between models; and 2) unfairness mitigation algorithms for a variety of AI tasks, along with definitions of fairness that allow deeper thought in this context.18
Source: Microsoft
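To make those two components concrete, here is a minimal sketch against Fairlearn's public API: MetricFrame for per-group assessment and ExponentiatedGradient for constraint-based mitigation. The synthetic dataset, group labels and the choice of a demographic-parity constraint are assumptions made for this example, not details from the case study.

# Minimal sketch of Fairlearn's two components described above:
# (1) per-group fairness assessment, (2) constraint-based mitigation.
# The synthetic data and chosen constraint are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate
from fairlearn.reductions import DemographicParity, ExponentiatedGradient

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                           # feature matrix
group = rng.choice(["group_a", "group_b"], size=1000)    # sensitive attribute
# Labels correlate with group membership, so a naive model inherits the skew.
y = (X[:, 0] + 0.5 * (group == "group_a") + rng.normal(0, 0.5, 1000) > 0).astype(int)

# Component 1: assessment -- slice metrics by group and compare.
baseline = LogisticRegression().fit(X, y)
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y,
    y_pred=baseline.predict(X),
    sensitive_features=group,
)
print(frame.by_group)      # per-group accuracy and selection rate
print(frame.difference())  # largest between-group gap for each metric

# Component 2: mitigation -- retrain under a demographic-parity constraint.
mitigator = ExponentiatedGradient(LogisticRegression(), constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=group)
y_mitigated = mitigator.predict(X)

Comparing frame.difference() before and after mitigation is the kind of model-to-model fairness and performance comparison that the dashboard described above supports interactively.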
FIGURE 3 Markkula Center for Applied Ethics best practices in technology applied by Microsoft
By turning best practices into principles, Microsoft ensures that they are highly visible and likely to contribute to and influence ethical conversations.

Other Markkula Center best practices that Microsoft has incorporated include:

Keep ethics in the spotlight: The formation of the Ethics & Society team in 2017 and the expansion of its activities (e.g. the goal to create Responsible AI core priorities for every Cloud + AI team member) illustrate that Microsoft has moved beyond compliance towards inspiring culture change.

Highlight the human lives and interests behind the technology: Microsoft's user-first approach to product development demonstrates a focus on people rather than on technology. Ethical tools such as impact assessments, Judgment Call and Community Jury institutionalize this focus as well.

Consider downstream (and upstream and lateral) risks for technologies: All of Microsoft's ethical tools are centred on considering and mitigating ethical risks. Even the more technical tools, Fairlearn, InterpretML and the error terrain analysis, serve this function by keeping an eye on the risks of bias and other problems in machine learning.

Do not discount non-technical actors, interests and expectations: Microsoft's Community Jury exercise is specially designed to bring diverse community voices into the product development effort.

Envision the technical ecosystem: The name "Ethics & Society" hints at Microsoft's recognition that it is part of a sociotechnical ecosystem. The company's willingness to share its responsible product innovation tools publicly is also a strong indication of its desire to contribute to the common good.

Treat technology as a conditional good: By choosing to develop and release some technologies but not others,25 Microsoft shows that it believes technology is not an unconditional good. Not all technologies ought to exist. Rather, the technologies that should exist are those that help people and have positive social impact, while others should be selected against, regulated or perhaps even banned.

Make ethical reflection and practice standard, pervasive, iterative and rewarding: As Microsoft rolls out ethical practices across its organization, it is institutionalizing its ethical tools and scaling these resources to increasingly large groups. Ethical reflection is becoming more common through the Responsible AI Champs programme, company-wide education and training, and RAISE activation and scaling, and more rewarding through the implementation of responsible AI OKRs.

Model and advocate for ethical tech practice: By being an industry leader in integrating ethical thinking into its product life cycle and through leadership in supporting organizations, Microsoft has acted to model and advocate for ethical practices in technology. Its willingness to share some of its tools and practices is also indicative of this best practice.
Even with the steps that Microsoft has taken to operationalize ethics in recent years, it is not immune to regulatory and public scrutiny, especially as legal frameworks continue to evolve to satisfy public sentiment. For example, the concerns raised by European regulators about the privacy policy and practices in Office 365 are well documented.27 In response, Microsoft updated its Online Services Terms for commercial cloud customers.28 But these issues are not unique to Microsoft. The company has plans for future developments in the area of ethics and is investigating expanding ethical tools and processes to the entire organization. As with any major project, the company will proceed in phases, the specifics of which are being developed.

Many other companies in the technology industry are also pursuing efforts to institutionalize ethical thinking in the product development process. Some of these companies will be part of the World Economic Forum's series of case studies on the Responsible Use of Technology.
Conclusion

Making technology ethical will require efforts not only from technology companies but from many types of organizations worldwide. The World Economic Forum and the Markkula Center for Applied Ethics at Santa Clara University share Microsoft's journey in this endeavour in an effort to inspire and enable organizations with similar intentions to benefit from its experience. This case study aims to promote discussion and critique, as well as efforts to build upon Microsoft's work. The World Economic Forum and its partners in this project hope more organizations will not only operationalize ethics in their use of technology but also share their own experience with the global community.
Project Tokyo

In Cambridge, United Kingdom, a 12-year-old boy named Theo is sitting in a friend's kitchen wearing a modified Microsoft HoloLens headset. Theo is blind. When he turns his head to face a person in the room, the name of the person is played in his headset along with a bump sound. This is a Microsoft research prototype that uses artificial intelligence and augmented reality to assist people who are blind or have low vision, code-named Project Tokyo.29

In 2016 a team from Microsoft Research, led by Cecily Morrison, set out to explore how AI that uses computer vision could extend people's capabilities rather than replace them. As early adopters of AI technologies, the team worked alongside people who are blind or have low vision to innovate. According to Morrison, "our team wanted to imagine a future with AI that enabled people to extend their own capabilities, using a human-centric approach to developing algorithms and interactive AI experiences. A human-centric approach must also be a process of responsible innovation, a key pillar of our thinking as we developed the research."

Morrison and her team of researchers began Project Tokyo by observing athletes and spectators on their trip to the Paralympic Games in Brazil. From observing those who push the boundaries of what is possible, the team learned that people who are blind or have low vision are extremely skilled at using their senses to determine what is happening around them. This is called sense-making. Yet, with less redundant information, there is a great deal of uncertainty about whether their judgements are accurate. That is where technology can come in. By providing additional information, it can help blind people and those with low vision feel more confident in their nuanced skills for making sense of the world in their own way.

The research team discovered that social information was the hardest for their participants to gather and the most important to them. A research project led the team to focus on enabling the social agency of blind children in schools, helping them understand the people in their immediate vicinity.

The AI system Microsoft Research developed uses a modified HoloLens worn on the user's head to scan a 160-degree field of view. A phone or cloud server processes images from the sensors to detect people's position, gaze, pose and identity. It then communicates this information in spatial audio to the user, with sound coming from the direction of the person being identified.30
On the left: image of the adapted HoloLens device; on the right: schematic description of the core functionality of the AI system.

Source: Interactions, "Interpretability as a Dynamic of Human-AI Interaction"31
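The description above amounts to a sense-process-render loop: headset sensors capture the scene, remote models estimate who is where, and spatial audio conveys the result. The sketch below is a hypothetical reconstruction of that flow for illustration only; the class, function names and the plus-or-minus 80-degree cut-off are assumptions, not Project Tokyo's actual implementation.

# Hypothetical reconstruction of the pipeline described above; every name,
# field and threshold here is an illustrative assumption, not Project
# Tokyo source code.
from dataclasses import dataclass

@dataclass
class PersonDetection:
    name: str | None      # identity, when known and opted in
    azimuth_deg: float    # direction relative to the wearer's head
    gaze_at_user: bool    # whether the person is looking at the wearer

def process_frame(frame) -> list[PersonDetection]:
    """Stand-in for the phone/cloud models that estimate each person's
    position, gaze, pose and identity from the headset's sensors."""
    # A real implementation would run person detection, pose and gaze
    # estimation, and (opt-in) face recognition here.
    return [PersonDetection(name="a friend", azimuth_deg=30.0, gaze_at_user=True)]

def render_spatial_audio(person: PersonDetection) -> None:
    """Stand-in for spatialized playback: the name (with a bump sound)
    is rendered from the direction in which the person stands."""
    label = person.name or "person"
    side = "left" if person.azimuth_deg < 0 else "right"
    print(f"[bump] '{label}' at {abs(person.azimuth_deg):.0f} degrees to the {side}")

def on_new_frame(frame) -> None:
    # The modified headset covers roughly a 160-degree field of view,
    # so only people within plus-or-minus 80 degrees are announced.
    for person in process_frame(frame):
        if -80.0 <= person.azimuth_deg <= 80.0:
            render_spatial_audio(person)

on_new_frame(frame=None)  # demo invocation with a dummy frame

Rendering the announcement from the detected person's actual direction, rather than as flat audio, is what lets the wearer fold the system's output into their own sense-making rather than replacing it.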
Contributors

Brian Green
Director, Technology Ethics, Markkula Center for Applied Ethics, Santa Clara University, USA
Daniel Lim
Project Fellow, Artificial Intelligence and Machine Learning, World Economic Forum LLC; Seconded from
Salesforce
Emily Ratté
Project Coordinator, Artificial Intelligence and Machine Learning, World Economic Forum LLC
Acknowledgements
Kathy Baxter
Principal Architect, Ethical AI Practice, Salesforce, USA
Kay Firth-Butterfield
Head, Artificial Intelligence and Machine Learning; Member of the Executive Committee, World Economic
Forum LLC
Steven Mills
Partner and Chief AI Ethics Officer, Boston Consulting Group, USA
Ben Olsen
Lead, Responsible Innovation, Education and Activation, Facebook, USA
Kay Pang
Senior Director and Associate General Counsel, Global Markets Compliance Officer, VMware, USA
Ann Skeet
Senior Director, Leadership Ethics, Markkula Center for Applied Ethics, Santa Clara University, USA
Leila Toplic
Head, Emerging Technologies Initiative, NetHope, USA
Thor Wasbotten
Managing Director, Markkula Center for Applied Ethics, Santa Clara University, USA