UX Matters: The Critical Role of UX in Responsible AI
Q. Vera Liao, Microsoft Research
Mihaela Vorvoreanu, Microsoft
Hari Subramonyam, Stanford University
Lauren Wilcox, eBay and Georgia Institute of Technology
can play an instrumental role in operationalizing RAI within product teams. Below, we consider a non-exhaustive list based on our work interviewing and observing many UX practitioners who work on AI-powered products.

[Figure 1. Characteristics of trustworthy AI systems specified in the NIST AI Risk Management Framework 1.0 (re-created from Figure 4 in https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf): Valid and Reliable; Safe; Secure and Resilient; Accountable and Transparent; Explainable and Interpretable; Privacy-Enhanced; Fair, with Harmful Bias Managed.]
Explicating the “socio” components of the sociotechnical system through UX research. The “socio” components—who the stakeholders are; what their values, needs, and concerns are; how the deployment context(s) is set up—are often an integral part of UX research outcomes for any product. For RAI practice, they can be instrumental. Most important, knowledge about the “socio” components is necessary for contextualizing and operationalizing RAI principles for a specific system or product. For example, fairness is a socially constructed and negotiated concept that varies across contexts and cultures. Developers of an AI-powered system must choose fairness criteria and harm mitigation strategies based on who could be harmed by unfairness and how. The Fairness Checklist [7]—a useful RAI resource aimed at supporting the development of fairer AI products—starts with an “Envision” step, which includes scrutinizing the system’s potential fairness-related harms to various stakeholder groups and soliciting their input and concerns. UX researchers are well poised to lead this step. Let’s take our opening scenario as an example. UX researchers could identify and conduct research with various stakeholder groups, for example, communities of patients, clinicians, and other hospital staff of different roles and career stages. The research could help define fairness criteria for the AI system, such as ensuring comparable error rates across patient demographic groups and comparable performance improvement and workload changes across clinician groups. These criteria should then guide all the following steps, from defining system architecture, dataset choices, and UX designs to evaluating system readiness for deployment. Similarly, UX research can provide inputs to operationalize other RAI principles (e.g., identifying what people want to understand about the model as explainability criteria [5]) and inform better model or system architectures accordingly (e.g., working with data scientists to translate what people care about in the deployment context to features or signals for the model).
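To make the error-rate criterion concrete, the sketch below shows one way a team might check it during offline evaluation. It is a minimal illustration, assuming a pandas DataFrame of per-patient predictions; the column names, group labels, and disparity threshold are hypothetical and would in practice come out of the stakeholder research described above.

    # Minimal sketch: one possible fairness check, comparable error rates
    # across patient demographic groups. Column names and the threshold
    # are illustrative assumptions, not prescribed by the article.
    import pandas as pd

    def error_rates_by_group(df: pd.DataFrame, group_col: str = "demographic_group") -> pd.Series:
        """Return the model's error rate within each group."""
        errors = df["prediction"] != df["actual_outcome"]
        return errors.groupby(df[group_col]).mean()

    def flag_disparity(rates: pd.Series, max_gap: float = 0.05) -> bool:
        """Flag for review if the gap between the best- and worst-served
        groups exceeds the agreed-upon threshold."""
        return (rates.max() - rates.min()) > max_gap

    # Hypothetical evaluation results with per-patient predictions
    results = pd.DataFrame({
        "demographic_group": ["A", "A", "B", "B", "B"],
        "prediction":        [1, 0, 1, 1, 0],
        "actual_outcome":    [1, 0, 0, 1, 1],
    })
    rates = error_rates_by_group(results)
    print(rates)
    print("Needs review:", flag_disparity(rates))

A real deployment would likely track richer metrics chosen with affected stakeholders (e.g., per-group false-negative rates), but the shape of the check is the same: disaggregate, compare, and escalate.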
Facilitating purposeful and responsible use of AI through responsible ideation. RAI must start with the question of whether an AI capability should be built and whether AI is fit for purpose. Blindly adopting AI without solving a real user problem or mismatching stakeholders’ needs and goals is not only a waste of resources but can also be one of the root causes of unintended harms. Identifying purposeful and responsible use of AI for a given product requires understanding problem spaces and identifying unmet needs, then considering whether AI is needed or whether there is a better alternative, such as using deterministic software. It also requires exploring possible AI-powered system features and assessing their risks and benefits. Such assessment requires understanding how the system features behave in interaction and affect people. All of these tasks—identifying needs, exploring the design space of system features and envisioning their effect on people (as well as guiding the team to do so), and testing possible solutions—are within the UX discipline’s core toolbox. In our opening scenario, if UX practitioners are involved early on, they could start with identifying pain points in clinicians’ treatment decision-making process and understanding what aspects of decision making might or might not fit a workflow that leverages AI capabilities. They could guide the team to thoroughly explore possible AI and non-AI features to address these pain points and compare these features through user testing. The product team may end up landing on a more effective solution than “treatment prediction,” such as a system to help clinicians gather treatment evidence for similar patients or compare treatment options. Despite the critical role of responsible ideation, the unfortunate reality is that, in many organizations, UX professionals are currently not involved in the product-feature definition stage of AI-powered systems, resulting in a missed opportunity for RAI.
Creating interaction-level RAI interventions. The UX discipline can significantly expand RAI interventions to mitigate potential harms of AI. Taking the RAI principle of explainability as an example, technical solutions (e.g., explainable AI algorithms or techniques) often provide complex outputs about how the model works. They may be challenging for a stakeholder to consume, and recent research shows that they risk information overload and even exacerbating harmful overreliance on AI. UX designers can create AI explanation designs that are easier to understand, such as leveraging the modality (e.g., text or visualization) that the stakeholder group is more accustomed to, lowering the information workload, or giving users more control through progressive disclosures and interactions. To make an AI-powered system more reliable, safe, fair, and privacy-preserving, UX can provide a range of design features [2], such as system guardrails, alternative modalities to support accessibility, transparency about the risks and limitations, control mechanisms for people to take actions, and paths for auditing, feedback, and contestability. Taking the treatment-prediction AI system as an example again, to mitigate potential fairness issues, UX designers could create a warning feature for when the current patient belongs to a group that the model had scarce similar training data for. They could also create a feedback feature for clinicians to report erroneous model predictions to help improve the AI system’s future performance.
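As a rough illustration of such a warning feature, the sketch below checks how many similar patients the model was trained on before a prediction is shown. The grouping key, counts, and adequacy threshold are invented for illustration; deciding what counts as a “similar patient” and how to phrase the caution would itself be UX research and design work.

    # Minimal sketch of an interaction-level warning: surface a caution
    # when the current patient belongs to a group the model saw little
    # similar training data for. All names and numbers are assumptions.
    from collections import Counter

    MIN_TRAINING_EXAMPLES = 50  # assumed adequacy threshold, set per context

    def training_coverage(training_groups: list[str]) -> Counter:
        """Count training examples per patient group (computed once, offline)."""
        return Counter(training_groups)

    def coverage_warning(patient_group: str, coverage: Counter) -> str | None:
        """Return a warning message for the UI if coverage is scarce, else None."""
        n = coverage[patient_group]
        if n < MIN_TRAINING_EXAMPLES:
            return (f"Caution: the model was trained on only {n} patients "
                    f"similar to this one; predictions may be less reliable.")
        return None

    coverage = training_coverage(["A"] * 500 + ["B"] * 12)
    print(coverage_warning("B", coverage))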
Carrying out responsible evaluation. RAI requires iterative development and evaluation against RAI principles. RAI evaluation must involve much broader approaches than