
Conversational AI: Social and Ethical Considerations (A Summary)

Elayne Ruane
School of Computer Science, University College Dublin
Lero - The Irish Software Research Centre
Dublin, Ireland
[email protected]

Abeba Birhane
School of Computer Science, University College Dublin
Lero - The Irish Software Research Centre
Dublin, Ireland
[email protected]

Anthony Ventresque
School of Computer Science, University College Dublin
Lero - The Irish Software Research Centre
Dublin, Ireland

Abstract—This is a summary of a research paper on the ethical and moral dilemmas raised by conversational AI, written by the three researchers named above. The paper identifies a set of emerging ethical issues and suggests ways for agent designers, developers, and owners to approach them, with the goal of the responsible development of Conversational Agents.

Index Terms—Conversational Agent, Intelligent Systems, Social Impact, Ethics

I. INTRODUCTION

Conversational AI (CA) is a medium through which a human interacts with an automated system using natural language. Its purpose is to serve users by providing information and carrying out instructions. Conversational AIs are now a widely used technology: CAs appear in various social domains including customer service and product recommendation, education support, medical services, entertainment, social outreach, and personal organisation. Various terms, such as automatic agent, virtual agent, and conversational agent, are used to refer to CAs depending on their capabilities. The year 2016, dubbed the "Year of the Bot" after Microsoft CEO Satya Nadella described bots as the new apps, saw the launch of more than 30,000 chatbots on the Facebook Messenger platform alone [8] [11]. As with any other technological innovation, it is natural that ethical concerns are raised; most of the time they are ignored, and when they are considered, they do not come before technical development challenges [7] [36] [51]. As with any technology that permeates our daily lives, the development and application of conversational AI raises various ethical questions. Privacy issues are widely addressed in the research literature, but here the researchers address the ethical challenges posed by the integration of conversational systems into human interaction, as well as the necessary cautions and measured steps that need to be considered in developing and deploying CAs. The objective of the research is to serve as a wake-up call for agent designers, developers, and owners. The paper highlights the potential harms of conversational AI, reviews relevant work from the literature, identifies a number of concerns, and proposes a way forward for critical and ethical engagement throughout the design and development process.

II. METHODS

A. Plurality of approaches

The ethical concerns that emerge with conversational AI vary markedly depending on the application domain, the target user group, and the goal(s) of the agent. For responsible system design, a deep understanding of the user group's characteristics, contexts, and interests is imperative. Additionally, identifying the solutions most suitable to the specific scenario should always be prioritized over attempting to apply standard, one-size-fits-all principles.

B. Trust and Transparency

Providing users with choices, and consequently with control, over how they prefer to interact with an agent is an important first step towards centring users' needs and wellbeing. Transparency about an agent's status as automatic (non-human) and about the limits of its capabilities, for example, is essential in order to allow users to make informed choices, which further contributes to users' trust. Understanding user expectations of an agent is crucial in ensuring that user trust is not taken advantage of. Reasonable expectations should be identified and validated before the agent is published.
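
The paper states the disclosure principle but not an implementation. As a purely illustrative aid (not part of the paper), the transparency step could look like the following minimal Python sketch, where the class, field names, and example bot are invented assumptions:

```python
# Hypothetical sketch: disclose the agent's automated status and capability
# limits before any dialogue turn, so users can make informed choices.
# All names here are illustrative assumptions, not from the paper.

from dataclasses import dataclass, field


@dataclass
class AgentDisclosure:
    """Facts a user should know before interacting with the agent."""
    name: str
    is_human: bool = False
    capabilities: list[str] = field(default_factory=list)
    limitations: list[str] = field(default_factory=list)

    def greeting(self) -> str:
        status = "a human agent" if self.is_human else "an automated agent (not a human)"
        return (
            f"Hi, I'm {self.name}, {status}. "
            f"I can help with: {', '.join(self.capabilities)}. "
            f"I cannot: {', '.join(self.limitations)}."
        )


if __name__ == "__main__":
    bot = AgentDisclosure(
        name="HelpBot",
        capabilities=["order tracking", "returns"],
        limitations=["give legal or medical advice"],
    )
    print(bot.greeting())  # disclosure happens before the first user turn
```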

C. Privacy

The interaction of humans and CAs, and sometimes even the mere presence of virtual agents such as in-home, always-on devices, present various ethical and legal questions, including what data is collected, who has access to it, how long and where the data is stored, and what such data is used for. Collecting user data raises many privacy concerns, some of which have a legal basis and are covered by data protection laws that vary geographically, such as the GDPR in Europe. The nature of these ethical issues varies significantly depending on the domain in which the agent is deployed and the level of vulnerability of the user group.
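
Those four questions (what is collected, who can access it, how long and where it is stored, what it is used for) can be treated as a per-data-item checklist. The following is a hedged, hypothetical sketch of such an inventory record; the structure and field names are assumptions for illustration, not something the paper specifies:

```python
# Hypothetical data-inventory record answering the privacy questions named
# in the text above. Field names and values are illustrative assumptions.

from dataclasses import dataclass


@dataclass(frozen=True)
class DataInventoryEntry:
    what: str                        # what data is collected
    who_has_access: tuple[str, ...]  # who has access to it
    retention_days: int              # how long it is stored
    storage_region: str              # where it is stored
    purpose: str                     # what it is used for


CHAT_TRANSCRIPTS = DataInventoryEntry(
    what="chat transcripts",
    who_has_access=("support-team", "on-call-engineer"),
    retention_days=30,
    storage_region="eu-west-1",  # e.g. keeping EU user data under the GDPR
    purpose="resolving the user's support request",
)
```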

D. Agent Persona

A large part of agent design decisions relate to persona and personality, which can be used to inform specific dialogue choices. Agent persona expressions include gender, age, race, cultural affiliation, and class. These indications may become more explicit as the level of embodiment increases. It is important to consider the impact of an agent's persona on the types of relationships users may try to explore with the agent, and to determine whether the design of the agent persona and its accompanying dialogue encourages behaviour that may be harmful. Agent persona design can also inadvertently reinforce harmful stereotypes. There is no clear consensus within the industry on this issue. Some recommend letting users lead the agent persona by designing the agent to respond dynamically to how the user interacts. Others continue to gender the agents they build in an attempt to humanize the system and increase user satisfaction, at the risk of reinforcing harmful gender bias. We recommend designing agents to be androgynous to avoid gender stereotypes, allowing users to interpret the persona according to their own context.

E. Anthropomorphism and Sexualization

Humans tend to anthropomorphize machines [26]. This kind of anthropomorphism is exacerbated when users can interact conversationally with a system, especially if the system has been imbued with a personality and embodied with an avatar or in some other way. This can be seen throughout history, and it occurs even when the developers themselves oppose such anthropomorphism and over-hyping of machines. Joseph Weizenbaum, the creator of ELIZA (1964-66), for example, explicitly insisted that ELIZA could not converse with true understanding; despite this, many users were convinced of ELIZA's intelligence and empathy [49]. A possibly surprising element of human-computer interaction is unsolicited romantic attention towards the agent. A good example of this is the popular entertainment chatbot Mitsuku, which has won the Loebner Prize four times. Steve Worswick, the creator and maintainer of Mitsuku, has described the type of romantic attention "she" gets, and even the correspondence he receives from users demanding her freedom [53]. Due to the prevalence of abusive messages directed at conversational agents, unsupervised learning techniques on an unconstrained user group should be avoided. Even with a trusted user group, oversight is required to ensure the agent has not acquired harmful concepts or language. There are numerous examples of chatbots released for use by the general public that use unsupervised learning but quickly learn racist, homophobic, and sexist language and have to be shut down to avoid abuse of human users; in the case of Microsoft's Tay bot, this took less than 24 hours [48]. Dialogue design should involve response strategies for romantic attention, sexualized messages, and abuse, with the aim of protecting the user. If an agent can detect abusive language, which is a difficult task for both social and technical reasons, it can invoke an appropriate response strategy. This may be a non-response, a neutral response, an in-kind response, or escalation to a human agent. In this scenario the domain and goals of the agent are important, but the user demographic is the most influential factor when designing the agent's response strategy [10].
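
As a concrete illustration of the strategy selection described above, here is a minimal, hypothetical sketch of a dispatcher mapping a detected message category to one of the four response types. The detection step, category names, and canned replies are our assumptions for illustration; the paper names the strategy space, not this code:

```python
# Hypothetical sketch of the response-strategy dispatch described in the
# text. Categories, policy mapping, and replies are illustrative only.

from enum import Enum, auto


class Strategy(Enum):
    NON_RESPONSE = auto()  # say nothing
    NEUTRAL = auto()       # deflect without engaging
    IN_KIND = auto()       # answer in the conversation's normal register
    ESCALATE = auto()      # hand off to a human agent


# As the text notes, the mapping should depend on the agent's domain and
# goals, and above all on the user demographic.
POLICY: dict[str, Strategy] = {
    "abusive": Strategy.NEUTRAL,
    "sexualized": Strategy.NON_RESPONSE,
    "threat_of_harm": Strategy.ESCALATE,
}


def respond(detected_category: str) -> str:
    strategy = POLICY.get(detected_category, Strategy.IN_KIND)
    if strategy is Strategy.NON_RESPONSE:
        return ""
    if strategy is Strategy.NEUTRAL:
        return "Let's keep this conversation respectful."
    if strategy is Strategy.ESCALATE:
        return "I'm connecting you with a human agent."
    return "How can I help you today?"  # in-kind / default


if __name__ == "__main__":
    print(respond("abusive"))  # -> neutral deflection
```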

III. RESULTS

A. Plurality of approaches

A recent survey on the use of CAs in education and the associated user concerns revealed that people were open to this technology if privacy issues were addressed, but found significant differences in how adults and children viewed privacy in this context [28]. There is no one-size-fits-all ethical standard or principle that can be applied to all CAs.

B. Trust and Transparency

Recent work has shown that users behave and interact differently when conversing with an automatic agent compared to interacting with another human [34]. If users are aware of whether they are speaking to an automated agent or a human agent, they might be able to make informed decisions with regard to their own behaviour, in particular regarding information disclosure [14]. From H&M's chatbot, which helps users plan and purchase outfits, a user may expect relatively unbiased information: the chatbot will not show clothes from other retailers, but neither will it show only the most expensive H&M clothes. The user's assumption of agent neutrality is part of a widely held but often misguided belief that AI systems are unbiased.
an automatic interviewer [23] [45] [55]. Gendering CAs in [4] Bessi, A., Ferrara, E.: Social bots distort the 2016 us presidential election
this manner may reflect market research but in the interests online discussion. First Monday 21(11-7) (2016)
of gender equity, practices that embed and perpetuate socially [5] Boiteux, M.: Messenger a F8 2018 (2018)
[6] Bourdieu, P.: Language and symbolic power. Harvard University Press
held harmful gender stereotypes should be avoided. In some (1991)
domains, there is an increased move towards androgyny such [7] Cheney-Lippold, J.: We are data: Algorithms and the making of our
as banking agent Kai. Research has been conducted on how digital selves. NYU Press (2018)
[8] Constine, J., Perez, S.: Facebook messenger now allows payments in its
users respond to androgynous agents and the effects this has 30,000 chatbots. techcrunch. URL: https://tcrn.ch/2cDEVbk (2016)
on user experience. A study that analysed college students’ [9] Curry, A.C., Rieser, V.: MeToo Alexa: How conversational systems
perceptions of gendered vs androgynous agents [15], found a respond to sexual harassment. In: Proceedings of the Second ACL
Workshop on Ethics in Natural Language Processing. pp. 7–14 (2018)
gender-neutral agent led to more positive views on females [10] Curry, A.C., Rieser, V.: A crowd-based evaluation of abuse response
than a female-presenting agent did. Another similar study strategies in conversational agents. arXiv preprint arXiv:1909.04387
by [42] found that female agents received more abuse than (2019)
[11] Dale, R.: The return of the chatbots. Natural Language Engineering
androgynous agents.Research has been conducted on how 22(5), 811–817 (2016)
users respond to androgynous agents and the effects this has [12] Evans, R.E., Kortum, P.: The impact of voice characteristics on user
on user experience. A study that analysed college students’ response in an interactive voice response system. Interacting with
Computers 22(6), 606–614 (2010)
perceptions of gendered vs androgynous agents [15], found a [13] Ferryman, K., Pitcan, M.: Fairness in precision medicine. Data Society
gender-neutral agent led to more positive views on females (2018)
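
The "baseline, not a target" stance translates naturally into a default-deny collection rule: nothing is stored unless a declared purpose of the stated service requires it. A minimal, hypothetical sketch (the data items and purpose names are invented for illustration):

```python
# Hypothetical default-deny collection guard implementing the data
# minimization stance above. Items and purposes are illustrative.

REQUIRED: dict[str, frozenset[str]] = {
    # data item -> purposes for which the stated service needs it
    "shipping_address": frozenset({"order_fulfilment"}),
}


def may_store(data_item: str, purpose: str) -> bool:
    """Default deny: store only if this item is needed for this purpose."""
    return purpose in REQUIRED.get(data_item, frozenset())


assert may_store("shipping_address", "order_fulfilment")
assert not may_store("chat_transcript", "marketing_analytics")
```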

D. Agent Persona

Many publicly available agents present as female, including popular assistants such as Siri, Alexa, Cortana, and the default female voice of Google Assistant [50]. While female personas are often used in subservient contexts, male personas are often found in situations perceived as authoritative, such as an automatic interviewer [23] [45] [55]. Gendering CAs in this manner may reflect market research, but in the interests of gender equity, practices that embed and perpetuate socially held harmful gender stereotypes should be avoided. In some domains there is an increasing move towards androgyny, such as the banking agent Kai. Research has been conducted on how users respond to androgynous agents and the effects this has on user experience. A study that analysed college students' perceptions of gendered versus androgynous agents [15] found that a gender-neutral agent led to more positive views of females than a female-presenting agent did. Another, similar study [42] found that female agents received more abuse than androgynous agents.

E. Anthropomorphism and Sexualization

Research has shown that users use greater profanity with a chatbot than with a human, and are similarly more likely to harass a chatbot than a human agent [16], even more so if the agent has a female persona [42]. Recent work [9] that explored the capabilities of conversational agents to respond to sexual harassment on the part of the user collected 360,000 conversations and found that 4 percent were sexually explicit, a percentage somewhat below previous research into sexually explicit chatbot interactions. The authors argue that handling these types of conversations should be a core part of a system's design and evaluation, due to their prevalence and to the consequences of reinforcing gender bias and encouraging aggressive behaviour.

IV. DISCUSSION

Assuming agents continue to improve in their functionality and conversational ability, how will their ubiquity and integration into our daily lives change how we live? Who will be most affected by the decisions of agent owners? These questions are difficult to answer, but they provide perspective on the ethical issues raised in this paper. Ultimately, there are no one-approach-fits-all answers to the concerns discussed here. However, designing, building, and deploying an agent into the social sphere engenders a level of social responsibility that must be confronted and contemplated on an agent-by-agent basis, to produce agent-specific strategies that address the ethical considerations described in this paper.

REFERENCES

[1] Ajunwa, I., Friedler, S., Scheidegger, C.E., Venkatasubramanian, S.: Hiring by algorithm: predicting and preventing disparate impact. Available at SSRN (2016)
[2] Baum, S.D.: Social choice ethics in artificial intelligence. AI & SOCIETY pp. 1–12 (2017)
[3] Angwin, J., Larson, J., Mattu, S., Kirchner, L.: Machine bias. ProPublica (2016)
[4] Bessi, A., Ferrara, E.: Social bots distort the 2016 US presidential election online discussion. First Monday 21(11-7) (2016)
[5] Boiteux, M.: Messenger at F8 2018 (2018)
[6] Bourdieu, P.: Language and symbolic power. Harvard University Press (1991)
[7] Cheney-Lippold, J.: We are data: Algorithms and the making of our digital selves. NYU Press (2018)
[8] Constine, J., Perez, S.: Facebook Messenger now allows payments in its 30,000 chatbots. TechCrunch. URL: https://tcrn.ch/2cDEVbk (2016)
[9] Curry, A.C., Rieser, V.: #MeToo Alexa: How conversational systems respond to sexual harassment. In: Proceedings of the Second ACL Workshop on Ethics in Natural Language Processing. pp. 7–14 (2018)
[10] Curry, A.C., Rieser, V.: A crowd-based evaluation of abuse response strategies in conversational agents. arXiv preprint arXiv:1909.04387 (2019)
[11] Dale, R.: The return of the chatbots. Natural Language Engineering 22(5), 811–817 (2016)
[12] Evans, R.E., Kortum, P.: The impact of voice characteristics on user response in an interactive voice response system. Interacting with Computers 22(6), 606–614 (2010)
[13] Ferryman, K., Pitcan, M.: Fairness in precision medicine. Data & Society (2018)
[14] Gentsch, P.: Conversational AI: How (chat)bots will reshape the digital experience. In: AI in Marketing, Sales and Service, pp. 81–125. Springer (2019)
[15] Gulz, A., Haake, M.: Challenging gender stereotypes using virtual pedagogical characters. In: Gender Issues in Learning and Working with Information Technology: Social Constructs and Cultural Contexts, pp. 113–132. IGI Global (2010)
[16] Hill, J., Ford, W.R., Farreras, I.G.: Real conversations with artificial intelligence: A comparison between human–human online conversations and human–chatbot conversations. Computers in Human Behavior 49, 245–250 (2015)
[17] Howard, R., Kehoe, J.: Mobile consumer survey 2018: The Irish cut (2018)
[18] Hutchby, I., Wooffitt, R.: Conversation analysis. Polity (2008)
[19] Inkster, B., Sarda, S., Subramanian, V.: An empathy-driven, conversational artificial intelligence agent (Wysa) for digital mental well-being: real-world data evaluation mixed-methods study. JMIR mHealth and uHealth 6(11), e12106 (2018)
[20] Insights, G.M.: Intelligent virtual assistant (IVA) market trends share forecast 2024 (2018)
[21] Introna, L., Nissenbaum, H.: The politics of search engines. IEEE Spectrum 37(6), 26–27 (2000)
[22] Jobin, A., Ienca, M., Vayena, E.: The global landscape of AI ethics guidelines. Nature Machine Intelligence pp. 1–11 (2019)
[23] Kim, Y., Baylor, A.L., Shen, E.: Pedagogical agents as learning companions: the impact of agent emotion and gender. Journal of Computer Assisted Learning 23(3), 220–234 (2007)
[24] Kim, Y., Wang, Y., Oh, J.: Digital media use and social engagement: How social media and smartphone use influence social activities of college students. Cyberpsychology, Behavior, and Social Networking 19(4), 264–269 (2016)
[25] Kretzschmar, K., Tyroll, H., Pavarini, G., Manzini, A., Singh, I., Group, N.Y.P.A.: Can your phone be your therapist? Young people's ethical perspectives on the use of fully automated conversational agents (chatbots) in mental health support. Biomedical Informatics Insights 11 (2019)
[26] Kuipers, B., McCarthy, J., Weizenbaum, J.: Computer power and human reason. ACM SIGART Bulletin (58), 4–13 (1976)
[27] Lambrecht, A., Tucker, C.: Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads. Management Science (2019)
[28] Latham, A., Goltz, S.: A survey of the general public's views on the ethics of using AI in education. In: International Conference on Artificial Intelligence in Education. pp. 194–206. Springer (2019)
[29] Lau, J., Zimmerman, B., Schaub, F.: Alexa, are you listening?: Privacy perceptions, concerns and privacy-seeking behaviors with smart speakers. HCI 2, 102 (2018)
[30] Linell, P.: Rethinking language, mind, and world dialogically. IAP (2009)
[31] Luger, E., Sellen, A.: Like having a really bad PA: the gulf between user expectation and experience of conversational agents. In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. pp. 5286–5297 (2016)
[32] Meek, T., Barham, H., Beltaif, N., Kaadoor, A., Akhter, T.: Managing the ethical and risk implications of rapid advances in artificial intelligence: a literature review. In: 2016 Portland International Conference on Management of Engineering and Technology (PICMET). pp. 682–693. IEEE (2016)
[33] Morley, J., Floridi, L., Kinsey, L., Elhalal, A.: From what to how. An overview of AI ethics tools, methods and research to translate principles into practices (2019)
[34] Mou, Y., Xu, K.: The media inequality: Comparing the initial human-human and human-AI social interactions. Computers in Human Behavior 72, 432–440 (2017)
[35] Oh, K.J., Lee, D., Ko, B., Choi, H.J.: A chatbot for psychiatric counseling in mental healthcare service based on emotional dialogue analysis and sentence generation. In: 2017 18th IEEE International Conference on Mobile Data Management (MDM). pp. 371–375. IEEE (2017)
[36] Oracle: Can virtual experiences replace reality? (2016)
[37] O'Neil, C.: Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books (2016)
