

COVER STORY

UX Matters: The Critical Role of UX in Responsible AI

Q. Vera Liao, Microsoft Research
Mihaela Vorvoreanu, Microsoft
Hari Subramonyam, Stanford University
Lauren Wilcox, eBay and Georgia Institute of Technology

Insights
→ UX practitioners can play an instrumental role in responsible AI (RAI) practices because UX and RAI share common goals, principles, and perspectives.
→ UX is uniquely positioned to contribute to RAI throughout the AI development cycle, from understanding the sociotechnical system, ideation, and creating design interventions to evaluation.
→ Organizations should prioritize and incentivize the involvement of the UX discipline; the UX discipline should expand its toolkit to meet RAI-specific product requirements.

Let's imagine a scenario—inspired by true events—in which a company has deployed an AI-powered system in a hospital. The system provides recommendations for treatment plans. Some clinicians find that the new system requires them to change their routine and significantly adds to their workload, so they start resisting the use of the system. Other clinicians are amazed by this new, powerful technology and overly rely on the AI by accepting its recommendations even when they are incorrect, resulting in medical errors with negative effects on patients—especially women, for whom the AI system tends to underperform. Unfortunately, this is a scenario that happens too often as AI technologies are being deployed in various domains. A responsible approach to AI development and deployment should have aimed to prevent these issues. UX practitioners could have been involved to play an instrumental role. For example:
• UX researchers could have identified stakeholder needs, concerns, and values, including those of clinicians and patients, to inform better choices of AI use cases, datasets, model parameters, evaluation criteria, and so on.
• UX designers could have taken a leading role in designing the system interactions to be compatible with current work processes, and created interface features that help mitigate overreliance on AI.



Responsible AI (RAI)—an umbrella term for approaches to understand and mitigate AI technologies' harms to people—has entered academic work, policy, industry, and public discourse. In this article, we argue that the UX discipline shares common goals, principles, and perspectives with RAI, and that UX practitioners can be instrumental to RAI practices throughout the AI development and deployment cycle. Drawing from our work studying AI UX practices and the RAI ecosystem [1,2,3,4,5,6], we discuss concrete contributions that the UX discipline is uniquely positioned to make to RAI and suggest paths forward to remove current hurdles that prevent realization of these contributions.

CONVERGING PATHS: THE INTERSECTION OF UX AND RAI IN SOCIOTECHNICAL PERSPECTIVES

To mitigate harms of AI to people—individuals, communities, and society—RAI uses a set of principles and best practices to guide the development and deployment of AI systems. For example, a 2020 report from the Berkman Klein Center reviewed 36 sets of prominent responsible and ethical AI principles and mapped them to eight themes: fairness and nondiscrimination, transparency and explainability, accountability, privacy, safety and security, human control, professional responsibility, and promotion of human values. Recently, the National Institute of Standards and Technology (NIST) released the AI Risk Management Framework (RMF), which is becoming a guiding document for the AI industry in the U.S. The AI RMF lists characteristics of trustworthy AI that are generally consistent with these themes (Figure 1).

More fundamentally, RAI foregrounds a sociotechnical perspective. As the NIST AI RMF document puts it:

AI systems are inherently socio-technical in nature, meaning they are influenced by societal dynamics and human behavior. AI risks—and benefits—can emerge from the interplay of technical aspects combined with societal factors related to how a system is used, its interactions with other AI systems, who operates it, and the social context in which it is deployed.

To understand and positively augment this interplay requires the combined expertise of people who specialize in technology and people who specialize in understanding people, such as those working in UX.

For decades, the UX discipline has been at the forefront of championing human-centered values that directly reflect and promote themes that are now central to RAI. For example, the widely appreciated "Guidelines for Human-AI Interaction" [1] make concrete design recommendations for providing transparency, enabling human control, and ensuring reliability, safety, and resilience in human-AI systems. These UX recommendations resonate deeply with the principle-guided methodology of RAI, which advocates for iterative development and evaluation focused on human-centric outcomes. RAI's emphasis on considering the context-specific consequences for people along each RAI principle aligns seamlessly with foundational UX methods such as human-centered design, which prioritizes the needs of end users at every stage of the design process, and value-centered design, which extends this focus to include the stakeholders, values, ethics, and sociocultural context surrounding the technology.

HOW CAN UX CONTRIBUTE TO RAI?

Given the synergies between the core values and practices across UX and RAI, UX researchers and designers can play an instrumental role in operationalizing RAI within product teams.
Below, we consider a nonexhaustive list based on our work interviewing and observing many UX practitioners who work on AI-powered products.

Figure 1. Characteristics of trustworthy AI systems specified in the NIST AI Risk Management Framework 1.0: valid and reliable; safe; secure and resilient; explainable and interpretable; privacy-enhanced; fair, with harmful bias managed; accountable and transparent (re-created from Figure 4 in https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf).
Explicating the "socio" components of the sociotechnical system through UX research. The "socio" components—who the stakeholders are; what their values, needs, and concerns are; how the deployment context(s) is set up—are often an integral part of UX research outcomes for any product. For RAI practice, they can be instrumental. Most important, knowledge about the "socio" components is necessary for contextualizing and operationalizing RAI principles for a specific system or product. For example, fairness is a socially constructed and negotiated concept that varies across contexts and cultures. Developers of an AI-powered system must choose fairness criteria and harm mitigation strategies based on who could be harmed by unfairness and how. The Fairness Checklist [7]—a useful RAI resource aimed at supporting the development of fairer AI products—starts with an "Envision" step, which includes scrutinizing the system's potential fairness-related harms to various stakeholder groups and soliciting their input and concerns. UX researchers are well poised to lead this step. Let's take our opening scenario as an example. UX researchers could identify and conduct research with various stakeholder groups, for example, communities of patients, clinicians, and other hospital staff of different roles and career stages. The research could help define fairness criteria for the AI system, such as ensuring comparable error rates across patient demographic groups and comparable performance improvement and workload changes across clinician groups. These criteria should then guide all the following steps, from defining system architecture, dataset choices, and UX designs to evaluating system readiness for deployment. Similarly, UX research can provide inputs to operationalize other RAI principles (e.g., identifying what people want to understand about the model as explainability criteria [5]) and inform better model or system architectures accordingly (e.g., working with data scientists to translate what people care about in the deployment context to features or signals for the model).
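For illustration, here is a minimal sketch, not from the article, of how a product team might check one such criterion, comparable error rates across patient groups, once evaluation data are labeled by group. The column names and the 5 percent tolerance are hypothetical assumptions.

```python
import pandas as pd

def error_rates_by_group(df, group_col,
                         label_col="true_outcome",
                         pred_col="predicted_outcome"):
    """Per-group error rate: fraction of cases where prediction != ground truth."""
    errors = df[label_col] != df[pred_col]       # boolean Series marking mistakes
    return errors.groupby(df[group_col]).mean()  # mean of booleans = error rate

def error_rate_gap_exceeds(df, group_col, max_gap=0.05):
    """True if the gap between best- and worst-served groups exceeds max_gap."""
    rates = error_rates_by_group(df, group_col)
    return (rates.max() - rates.min()) > max_gap

# Hypothetical usage on a held-out evaluation set:
# df = pd.read_csv("evaluation_set.csv")
# if error_rate_gap_exceeds(df, group_col="patient_sex"):
#     print("Fairness criterion violated; revisit data and model choices.")
```

A gap above the agreed threshold would send the team back to its dataset and model choices, in line with the criteria-first process described above.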
Facilitating purposeful and responsible use of AI through responsible ideation. RAI must start with the question of whether an AI capability should be built and whether AI is fit for purpose. Blindly adopting AI without solving a real user problem or mismatching stakeholders' needs and goals is not only a waste of resources but can also be one of the root causes of unintended harms. Identifying purposeful and responsible use of AI for a given product requires understanding problem spaces and identifying unmet needs, then considering whether AI is needed or whether there is a better alternative, such as using deterministic software. It also requires exploring possible AI-powered system features and assessing their risks and benefits. Such assessment requires understanding how the system features behave in interaction and affect people. All of these tasks—identifying needs, exploring the design space of system features and envisioning their effect on people (as well as guiding the team to do so), and testing possible solutions—are within the UX discipline's core toolbox. In our opening scenario, if UX practitioners are involved early on, they could start with identifying pain points in clinicians' treatment decision-making process and understanding what aspects of decision making might or might not fit a workflow that leverages AI capabilities. They could guide the team to thoroughly explore possible AI and non-AI features to address these pain points and compare these features through user testing. The product team may end up landing on a more effective solution than "treatment prediction," such as a system to help clinicians gather treatment evidence for similar patients or compare treatment options. Despite the critical role of responsible ideation, the unfortunate reality is that, in many organizations, UX professionals are currently not involved in the product-feature definition stage of AI-powered systems, resulting in a missed opportunity for RAI.
Creating interaction-level RAI interventions. The UX discipline can significantly expand RAI interventions to mitigate potential harms of AI. Taking the RAI principle of explainability as an example, technical solutions (e.g., explainable AI algorithms or techniques) often provide complex outputs about how the model works. They may be challenging for a stakeholder to consume, and recent research shows that they risk information overload and even exacerbating harmful overreliance on AI. UX designers can create AI explanation designs that are easier to understand, such as leveraging the modality (e.g., text or visualization) that the stakeholder group is more accustomed to, lowering the information workload, or giving users more control through progressive disclosures and interactions. To make an AI-powered system more reliable, safe, fair, and privacy-preserving, UX can provide a range of design features [2], such as system guardrails, alternative modalities to support accessibility, transparency about the risks and limitations, control mechanisms for people to take actions, and paths for auditing, feedback, and contestability. Taking the treatment-prediction AI system as an example again, to mitigate potential fairness issues, UX designers could create a warning feature when the current patient belongs to a group that the model had scarce similar training data for; a minimal sketch of such a check follows this paragraph. They could also create a feedback feature for clinicians to report erroneous model predictions to help improve the AI system's future performance.
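As an illustration of that warning feature, here is a sketch under assumed data structures; it is our own example, not the authors' design, and the grouping keys, record fields, and threshold are hypothetical.

```python
from collections import Counter

MIN_SUPPORT = 50  # assumed threshold below which we surface a caution

def build_support_index(training_records, keys):
    """Count how many training cases fall in each subgroup defined by `keys`."""
    return Counter(tuple(rec[k] for k in keys) for rec in training_records)

def low_data_warning(patient, support_index, keys):
    """Return caution text if this patient's subgroup was scarce in training data."""
    n_similar = support_index[tuple(patient[k] for k in keys)]  # 0 if unseen
    if n_similar < MIN_SUPPORT:
        return (f"Caution: the model saw only {n_similar} similar cases during "
                "training; its recommendation may be less reliable here.")
    return None

# Hypothetical usage in the clinician-facing UI:
# support = build_support_index(training_records, keys=("sex", "age_band"))
# msg = low_data_warning(current_patient, support, keys=("sex", "age_band"))
# if msg:
#     show_banner(msg)  # show_banner is a stand-in for the product's UI surface
```

The design point is that the check runs at interaction time, so the warning reaches the clinician alongside the recommendation rather than living only in a model report.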

Carrying out responsible evaluation. RAI requires iterative development and evaluation against RAI principles. RAI evaluation must involve much broader approaches than model-centric performance metrics to reflect the real effects and potential harms of AI-powered systems on stakeholders in their deployment contexts. Human-centered evaluation is a core part of the UX discipline, including a range of quantitative and qualitative methods that can cater to different evaluation criteria and the practical situations of stakeholder groups. RAI practices can benefit from involving UX practitioners to lead the evaluation steps. UX practitioners can work with data scientists to define and operationalize evaluation metrics for RAI principles, advocating evaluation methods that involve stakeholders to reflect their real needs and behaviors. For example, when evaluating whether the treatment-prediction AI system introduces fairness issues for clinicians, the kinds of benchmark datasets that data scientists typically use may not suffice. Instead, UX researchers can design and conduct user studies to compare the outcomes and experiences of different clinician groups, considering a multitude of experience-related metrics, such as efficiency, workload, and subjective satisfaction. In addition, UX can also expand the RAI evaluation toolbox with practical, relatively lower-cost "test-drive" approaches that could help surface RAI issues early on, using approaches such as lab experiments and surveys.
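As one illustration of such disaggregated, experience-centered evaluation, here is a minimal sketch, with hypothetical field names and data, that summarizes user-study metrics separately per clinician group.

```python
import statistics

def summarize_by_group(responses, group_key, metric):
    """Mean and standard deviation of one study metric, computed per group."""
    by_group = {}
    for resp in responses:
        by_group.setdefault(resp[group_key], []).append(resp[metric])
    return {group: (statistics.mean(vals),
                    statistics.stdev(vals) if len(vals) > 1 else 0.0)
            for group, vals in by_group.items()}

# Hypothetical study records, one per participant session:
# responses = [
#     {"role": "resident",  "task_minutes": 12.5, "nasa_tlx": 55, "satisfaction": 4},
#     {"role": "attending", "task_minutes":  8.0, "nasa_tlx": 38, "satisfaction": 5},
#     ...
# ]
# for metric in ("task_minutes", "nasa_tlx", "satisfaction"):
#     print(metric, summarize_by_group(responses, "role", metric))
```

Large between-group differences on workload or satisfaction would flag the kind of uneven burden the opening scenario describes, before deployment rather than after.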
All of these contributions reflect the well-established role of UX professionals as advocates for stakeholders' values, needs, and concerns. We recognize that an RAI lens is a reflexive and ongoing perspective that seeks to account for the social responsibilities of those who develop and shape a technology, which is inherent to the role of the UX discipline [4]. UX professionals can play a vital role in cultivating a shared RAI lens by actively collaborating and negotiating with non-UX professionals such as data scientists, engineers, and product managers. In short, UX professionals can act as human advocates to reinforce an RAI lens throughout the AI development and deployment cycle.

PATHS FORWARD FOR UX TO MEET RAI NEEDS

The instrumental role of UX and the above-mentioned UX contributions have not been fully realized in most organizations' current RAI practices. Drawing on our research bridging UX and RAI, we suggest paths forward for organizations and the UX discipline to enable meaningful impact on RAI.

For organizations that prioritize RAI, consider the following:
• Involving UX researchers and designers early on, in the product ideation and definition stage, and throughout the iterative development and evaluation of datasets, models, and systems.
• Defining and incentivizing UX professionals' role in RAI and actively promoting a shared understanding of possible UX contributions to RAI outcomes—beyond UI designs, advocating for UX's involvement in all the stages discussed in the "How Can UX Contribute to RAI?" section. It is also necessary to incentivize UX individuals' RAI-specific contributions. For example, it may be time for organizations to create and recruit for RAI-specific UX roles, as role uncertainty often results in RAI falling through the cracks.
• Facilitating collaboration between UX and non-UX disciplines. The traditional separation-of-concerns practice that isolates UX work from engineering work is not compatible with RAI [3], which requires tightly coupled understanding and augmentation of the "socio" and "technical" components. Cross-disciplinary collaboration requires ongoing organizational work and practical steps to break the expertise boundaries and cultural barriers by being intentional about developing a common language and establishing frequent touch points across functions.
For example, recent research [8] suggests that AI design guidelines can be used as a boundary object to facilitate cross-disciplinary collaboration by establishing shared goals and languages, and empowering UX practitioners to take a lead in fostering a human-centered AI culture.

For the UX discipline (as well as academic HCI research informing UX practices) to meet the demand of RAI, consider the following:
• Developing research, design, and evaluation methods; design frameworks and principles; and other UX toolboxes specific to RAI principles. For example, current UX evaluation metrics may fall short in capturing the potential harms of an AI system. These new methods and toolboxes should aim to inform not only design solutions but also choices of datasets, model architectures, and other algorithmic solutions. Further assessments should evaluate the potential societal impact, ethical considerations, and unintended consequences of AI systems, guiding design decisions toward more-responsible outcomes. In addition to the Fairness Checklist [7] we mentioned earlier, another example of such a UX tool is a "question-driven design process for explainability" that we proposed [5]. It is a design process that starts with identifying stakeholders' explainability needs (by what questions they ask), which then guide the selection of the explainability techniques by collaborating with data scientists, and the design and evaluation of explainability features that build on the chosen technique.
• Expanding UX research and design perspectives from more-transactional interactions with users to community groups and affected stakeholders. Stakeholder groups may include domain experts, affected communities (who may or may not be direct users), policymakers, and members of the public. This requires adaptation and innovation of existing user research methods: Community-collaborative approaches to AI involve community and stakeholder groups and build on traditions such as collaborative, speculative design and community-based participatory or action research.
• Strengthening the critical and ethical lenses in UX training and resources. Many resources have emerged in recent years aiming to educate UX professionals about the affordances and design opportunities of AI technologies. While the AI readiness of UX practitioners is important, they also need to be sensitized to the limitations, risks, and ethical considerations of different AI technologies. We note that it may be insufficient to provide a high-level description of limitations, especially for current complex and multicapability AI technologies such as generative AI. More-sophisticated resources are required to support UX professionals in exploring and anticipating an AI technology's risks specific to its stakeholders and deployment contexts. For example, an AI incident database could be a useful resource for anticipating risks and potential harms of AI. Recent HCI research has developed tools (e.g., [6]) that allow UX designers to "tinker" with AI as a design material—for example, observing different inputs to and outputs from a model, directly incorporating model inputs-outputs into prototyping, and testing and understanding possible failures of a model in a specific application context.

We hope this article provides a starting point to further align RAI and UX practices, as well as convincing arguments for the AI industry to prioritize UX. RAI work is becoming more important and challenging in the current "AI arms race." Looking inside to leverage and build on existing UX expertise and practices within an organization may pave the way for responsible and human-centered AI technologies.

Endnotes
1. Amershi, S. et al. Guidelines for human-AI interaction. Proc. of CHI 2019. ACM, New York, 2019.
2. Liao, Q.V., Subramonyam, H., Wang, J., and Wortman Vaughan, J. Designerly understanding: Information needs for model transparency to support design ideation for AI-powered user experience. Proc. of CHI 2023. ACM, New York, 2023.
3. Subramonyam, H., Im, J., Seifert, C., and Adar, E. Solving separation-of-concerns problems in collaborative design of human-AI systems through leaky abstractions. Proc. of CHI 2022. ACM, New York, 2022.
4. Wang, Q., Madaio, M., Kane, S., Kapania, S., Terry, M., and Wilcox, L. Designing responsible AI: Adaptations of UX practice to meet responsible AI challenges. Proc. of CHI 2023. ACM, New York, 2023.
5. Liao, Q.V., Pribić, M., Han, J., Miller, S., and Sow, D. Question-driven design process for explainable AI user experiences. arXiv preprint arXiv:2104.03483, 2021.
6. Moore, S., Liao, Q.V., and Subramonyam, H. fAIlureNotes: Supporting designers in understanding the limits of AI models for computer vision tasks. Proc. of CHI 2023. ACM, New York, 2023.
7. Madaio, M.A., Stark, L., Wortman Vaughan, J., and Wallach, H. Co-designing checklists to understand organizational challenges and opportunities around fairness in AI. Proc. of CHI 2020. ACM, New York, 2020.
8. Yildirim, N., Pushkarna, M., Goyal, N., Wattenberg, M., and Viégas, F. Investigating how practitioners use human-AI guidelines: A case study on the People + AI Guidebook. Proc. of CHI 2023. ACM, New York, 2023.

Q. Vera Liao is a principal researcher at Microsoft Research Montreal, where she is part of the Fairness, Accountability, Transparency, and Ethics in AI (FATE) group. Her current research interests are in human-AI interaction, explainable AI, and responsible AI, with an overarching goal of bridging emerging AI technologies and human-centered perspectives.
→ [email protected]

Mihaela Vorvoreanu leads UX Research and Responsible AI Education at Aether, Microsoft's research and advisory body for AI ethics and effects in engineering and research. She is an expert in human-centered AI and an AI and RAI educator, giving frequent talks to major companies' leadership and to Microsoft employees. Before joining Microsoft, she had a career in academia, most recently as a tenured professor at Purdue University.
→ [email protected]

Hari Subramonyam is a research assistant professor at Stanford University. His research sits at the intersection of HCI and learning sciences. He explores enhancing human learning through AI, emphasizing ethical design, cocreation with educators, and developing transformative AI-powered learning experiences.
→ [email protected]

Lauren Wilcox has held research and organizational leadership roles in both industry and academia. At Google Research, she was a senior staff research scientist and group manager of the Technology, AI, Society and Culture (TASC) team. She holds an adjunct associate faculty position at Georgia Tech's School of Interactive Computing. She is an ACM Distinguished Member and was an inaugural member of the ACM Future of Computing Academy.
→ [email protected]

DOI: 10.1145/3665504. COPYRIGHT HELD BY AUTHORS. PUBLICATION RIGHTS LICENSED TO ACM. $15.00

