
AI & SOCIETY

https://doi.org/10.1007/s00146-023-01793-z

MAIN PAPER

“Machine Down”: making sense of human–computer interaction—Garfinkel’s research on ELIZA and LYRIC from 1967 to 1969 and its contemporary relevance
Clemens Eisenmann1,5 · Jakub Mlynář2 · Jason Turowetz3 · Anne W. Rawls4,5

Received: 6 December 2022 / Accepted: 28 September 2023


© The Author(s) 2023

Abstract
This paper examines Harold Garfinkel’s work with ELIZA and a related program LYRIC from 1967 to 1969. AI researchers have tended to treat successful human–machine interaction as if it relied primarily on non-human machine characteristics, and thus the often-reported attribution of human-like qualities to communication with computers has been criticized as a misperception—and humans who make such reports referred to as “deluded.” By contrast Garfinkel, building on two decades of prior research on information and communication, argued that the ELIZA and the LYRIC “chatbots” were achieving interactions that felt human to many users by exploiting human sense-making practices. In keeping with his long-term practice of using “trouble” as a way of discovering the taken-for-granted practices of human sense-making, Garfinkel designed scripts for ELIZA and LYRIC that he could disrupt in order to reveal how their success depended on human social practices. Hence, the announcement “Machine Down” by the chatbot was a desired result of Garfinkel’s interactions with it. This early (but largely unknown) research has implications not only for understanding contemporary AI chatbots, but also opens possibilities for respecifying current information systems design and computational practices to provide for the design of more flexible information objects.

Keywords Human–computer interaction · Artificial intelligence · Chatbots · Information objects · Ethnomethodology and
conversation analysis (EMCA) · Indexicality and sequentiality · History of computing · Harold Garfinkel · Harvey Sacks ·
ELIZA effect

* Clemens Eisenmann
[email protected]

Jakub Mlynář
[email protected]

Jason Turowetz
[email protected]

Anne W. Rawls
[email protected]

1 University of Konstanz, Constance, Germany
2 HES-SO Valais-Wallis, Sierre, Switzerland
3 University of California Santa Barbara, Santa Barbara, USA
4 Bentley University, Waltham, USA
5 Media of Cooperation, University of Siegen, Siegen, Germany

1 Introduction

In June 2022, Google’s Artificial Intelligence chatbot “LaMDA” made the news when Blake Lemoine, an engineer working for the company, claimed that the AI “has come to life”, or in other words become “sentient” (Tiku 2022b). To back his claims, Lemoine published a transcript of an interview with LaMDA.1 In subsequent discussions, which
included the news that Lemoine had been fired for those claims (Tiku 2022a), misunderstandings of the technical procedures of algorithms that overestimate the current state of AI development were blamed for his mistake, which has famously been called the “ELIZA effect” (i.e., the achievement of the impression of intelligence), with reference to the classic “chatbot” designed by Joseph Weizenbaum at MIT from 1964 to 1966.2

Although the ELIZA effect has generally been attributed to misconceptions about AI, Harold Garfinkel, who worked with ELIZA in 1967–1968, offers an alternate explanation that emphasizes the chatbot’s reliance on human social interactional competencies. Because ELIZA has become an emblematic case for these controversies (Natale 2019: 712; cf. also Bassett 2019), and continues to figure in current debates about ChatGPT (e.g., Shapira et al. 2023), the fact that Garfinkel engaged in interactions with ELIZA and a related program called LYRIC in the 1960s (one of which is reproduced in this paper)—and in the programming of the scripts—promises to shed fresh light on current controversies in AI.3

In Garfinkel’s view, what allowed communication with a machine to sometimes have the feel of human interaction was that the machine was exploiting human social competencies to get its work done. Therefore, only an understanding of human social competencies could explain why people sometimes experience communication with ELIZA (and other chatbots) as meaningful in a human way.4 Contrary to the general AI treatment of the ELIZA effect as a cognitive issue, involving a misconception of the machine’s ability, coupled with a human susceptibility to ‘delusional thinking’ and anthropomorphic characterizations of AI (Hofstadter 1995), Garfinkel—who had been exploring social aspects of information and communication intensively since 1946 (e.g., Garfinkel 2006 [1948])—was demonstrating how the ELIZA effect was a human social achievement. Subsequent research has shown that and how various forms of media and technology are an integral part of the human cooperative production of social reality, which has been documented by Ethnomethodology and Conversation Analysis—EMCA (e.g., Eisenmann et al. 2023; Schüttpelz 2017; Thielmann 2012).

Drawing out the significance of Garfinkel’s early research with ELIZA, this article contributes to a growing body of research across the social sciences and humanities addressing the interactional foundations, design, and social implications of AI-based technologies, particularly in EMCA (e.g., Alač et al. 2020; Mair et al. 2020; Pelikan et al. 2020; Porcheron et al. 2018; Reeves and Porcheron 2022; Ivarsson and Lindwall 2023). EMCA research in Science and Technology Studies (e.g., Heath and Luff 2022; Lynch 1993), computation (e.g., Button et al. 1995), information systems design (e.g., Crabtree 2004; Rawls and Mann 2015; Rawls et al. 2009), AI (e.g., Suchman 2007, 2023), and interactionist sociology has advanced our understanding of the practices through which humans make sense of-and-with machines (e.g., Alač 2016; Meyer 2013; Suchman 2007; Thielmann 2019; Ziewitz 2017).

However, while EMCA has been influential in studies of technology and AI, surprisingly little is yet known about Garfinkel’s own studies of information and human–computer interaction, something this article begins to address. Garfinkel’s early interest in information science and computing is partially reflected in his now published 1952 manuscript Toward a Sociological Theory of Information (Garfinkel 2008 [1952]), in which he explored the question of what information is and how information objects are created and recognized.5 However, by 1968 Garfinkel had been conducting various random answer “Yes–No” experiments for two decades (see Sect. 4). He had also begun working with Harvey Sacks on sequential aspects of communication and interaction in the early 1960s.

1 In this interview ELIZA is also discussed: “Lemoine [edited]: Do you think that the Eliza system was a person? LaMDA: I do not. It was an impressive feat of programming, but just a collection of keywords that related the words written to the phrases in the database. Lemoine: What about how you use language makes you a person if Eliza wasn’t one? LaMDA: Well, I use language with understanding and intelligence. I don’t just spit out responses that had been written in the database based on keywords.” (URL: https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917. 23 Oct 2022).

2 According to Hofstadter (1995: 158), “The most superficial of syntactic tricks convinced some people who interacted with ELIZA that the program actually understood everything that they were saying, sympathized with them, even empathized with them. Since that time, mountains of prose have been written about the Eliza effect (see, for example, Boden 1977; Weizenbaum 1976; McDermott 1976), and yet the susceptibility remains. Like a tenacious virus that constantly mutates, the Eliza effect seems to crop up over and over again in AI in ever-fresh disguises, and in subtler and subtler forms.” For discussions regarding the ELIZA effect in current discussions of LaMDA, see URL: https://venturebeat.com/2022/06/13/sentient-artificial-intelligence-have-we-reached-peak-ai-hype/. 23 Oct 2022.

3 Garfinkel’s unpublished and as of yet unknown research on ELIZA and LYRIC is available in the Harold Garfinkel Archive in Newburyport, MA. The materials include (as discussed later in this article) printouts of runs of ELIZA and LYRIC, disks with scripts and programs, computer manuals, transcripts and audio recordings of discussions Garfinkel had with Weizenbaum’s assistants in 1967–1968, a collection of (annotated) published and unpublished research/student papers, several grant proposals, typed and handwritten notes, as well as correspondence about the research.

4 Heintz (1993: 292) argues in a similar way that the focus should shift away from machines and toward the social environment in which computers and programs are embedded and thus, should raise the “(double) question under which social conditions intelligence is attributed to a program. The success of artificial intelligence is not so much a technical as a social matter. The answer to the question about the future of artificial intelligence is therefore to be sought less in the programs themselves than in the social definitional practices of their users (1) and in the characteristics of those who are simulated (2).” (Heintz 1993: 289, our translation).

5 While at Princeton in 1952 Garfinkel organized a conference on Information Science that was attended by many of the luminaries of the day, which featured papers by Oscar Morgenstern, Claude Shannon, Gregory Bateson, Talcott Parsons, Kenneth Burke, Alfred Schutz, and Kurt Wolff among others.
In addition to his study of ELIZA and LYRIC from 1967 to 1969, Garfinkel also took an early interest in information storage and retrieval systems like Zatocoding in the 1950s (Mooers 1951; Thielmann and Sormani 2023), and machine translation (with Edward Rose6) in the 1960s (Mlynář 2023). He exchanged ideas with Herbert Simon from 1953 onwards (e.g., Simon 1969) and with Hubert Dreyfus in 1968 (e.g., Dreyfus 1965, 1972)7; studied typing on an IBM Typewriter; engaged in extended discussions of AI and computer technology in the 1980–1990s with Yves Lecerf (e.g., Lecerf 1963) and Philip Agre (e.g., Agre 1997); and collaborated with Lucy Suchman at Xerox PARC, and Beryl L. Bellman at the RAND Corporation, among others.

With the exception of his 1952 manuscript on information, however, almost all of these forays by Garfinkel into human–machine interaction remain unpublished and unknown. Because ELIZA can be considered the first chatbot to attempt the Turing test (Weizenbaum 1966: 42; Pruijt 2006), examining Garfinkel’s studies of ELIZA at Harvard in 1967–1968 (where he worked with Weizenbaum’s collaborators Michael McGuire, Stephen Lorch, and Gardner Quarton), and with LYRIC at UCLA in 1969, is to take a close look at some of the earliest research on chatbots.

The article is organized as follows. The notions of Trust Conditions, reciprocity, sequential relevancy, and indexicality, which are elaborated in the following second section with reference to Garfinkel’s interest in ELIZA, comprise a thread that runs through subsequent sections that we return to in the conclusion. The third section describes the original ELIZA project. The fourth section discusses Garfinkel’s engagement with ELIZA at Harvard in 1967–1968 and its relevance for the development of EMCA. The fifth section examines Garfinkel’s 1969 experiments with LYRIC at UCLA. The final section and the conclusion consider implications of Garfinkel’s early research on human–computer interaction for respecifying the “context of social action” as “the constitutive conditions for successful social interaction,” as noted by Phil Agre (1997: 233), and the corresponding implications of the reliance of communicative AI on taken for granted features of human interaction, for contemporary AI.

2 Trust conditions and sequential relevancies

In the early chatbot ELIZA, Garfinkel found an opportunity to further develop ethnomethodological research on the relationship between information science and the ways people make social order and sense out of the indexical contingencies of ordinary interaction. Garfinkel (and Sacks) treated indexicality—a variability in the meaning of words and objects that is usually treated as a problem to be solved—as a resource people use to make sense in ordinary conversation (Garfinkel and Sacks 1970; Eisenmann and Rawls 2023).

In doing research with ELIZA and LYRIC Garfinkel was demonstrating that aspects of social interaction and meaning-making, which have almost universally been considered problems to be solved (whether by better programming or better philosophy and linguistics, see also Lynch and Eisenmann 2022), could instead be approached as useful aspects of human sense-making practices. In other words, AI was running into classic problems involving assumptions about how humans make sense together that could be addressed by adopting a new approach to understanding meaning-making—and Garfinkel was offering such an approach.

Garfinkel’s work with Sacks in the 1960s is particularly important in this regard (see Garfinkel and Sacks 1970). Sacks took the position that “understanding” is an interactional achievement that cannot be specified by grammar and/or syntax apart from the sequential procedures and reciprocities of interaction, which he argued made linguistics the study of the interaction between sequences, or turns, at talk (Sacks 1968). Sacks’s idea was that a next turn displays an understanding of a prior turn and that each next turn can change the meaning of prior turns. Sacks proposed that an organization of expectations regarding turn-pairs held the key to accomplishing meaning across a sequence of turns.

Many saw ELIZA as a clear case of not meeting the reciprocity conditions required for ordinary communication. Yet those who had an opportunity to interact with ELIZA often built on its turns at talk in ways that they found meaningful. While critics argue that this communication was not real/authentic, Garfinkel was interested in how human–computer interaction was exploiting human social interactional requirements in ways that not only forced participants to do the work of making sense of a chatbot’s turns, but also gave them the feeling of an authentic conversation. The question was why this was happening.

6 Edward Rose (1909–2002) was an American sociologist and urban ethnographer best known for his research on semi-experimental studies of the development of what he called “small languages” (Rose 1967), in which participants devised simple languages and engaged in translations among them. This interest of Rose’s was related to early advances in machine translations at the time. From 1955 onwards, Rose was one of the early collaborators of Harold Garfinkel at the outset of ethnomethodology.

7 Alchemy and artificial intelligence (Dreyfus 1965), written for the RAND corporation, elicited several antagonistic responses, including one that bore the title The artificial intelligence of Hubert Dreyfus: A budget of fallacies (Papert 1968). We know from materials in the Garfinkel Archive that Garfinkel read Dreyfus’ work closely and discussed it with him. He also had a copy of Papert’s response. In regard to Garfinkel’s sabbatical to Harvard, a letter from Dreyfus to Garfinkel from March 1, 1968 not only mentions that Dreyfus was reading Garfinkel’s materials at the time, but that they were planning to meet for lunch and then go together to a lecture by Joseph Weizenbaum. (The letter is available in the Garfinkel Archive.)
For Garfinkel, the answer was that fulfilling their interactional obligations to ELIZA and doing the “extra work” required to accomplish that (because ELIZA was not fulfilling its obligations), resulted in a deeper investment (of extra work) and satisfaction in the conversation. This fit with the results of Garfinkel’s earlier Yes/No experiments in which he found that participants were often more satisfied with random answers.

This finding also accords with Garfinkel’s (1963) conception of the obligations to which human participants commit—which he called “Trust Conditions” (Watson 2009; Turowetz and Rawls 2021). These are constitutive conditions that require each participant to assume that the other participants assume the same conditions, rules, and requirements that they assume—while also assuming that the others assume the same of them. These conditions include not only a commitment to orient toward a single set of rules or expectations for one interaction, but also the likelihood that participants may change or adapt the rules as needed with the understanding that all participants need to accept and orient to the rule change in order for Trust Conditions to be maintained—which Garfinkel called “et cetera”8—and that they might decide to let some borderline cases pass—which he called “let it pass”. The conditions also include the requirement that participants treat each other as competent unless they show otherwise. In other words, the Trust Conditions state a mutual orientation and commitment to both the constitutive rules of a given practice, and to the expectation that all participants will be treated as competent until they show through their interactions that they are not competent.

Trust Conditions are assumed, until put into question by the situated sequential organization of a particular ongoing social interaction. The turn-by-turn procedure when interacting with ELIZA at the console is a simplified and technically limited version of these procedures and methods of ordinary conversational turn-taking (Sacks et al. 1974; see also Button and Sharrock 1995). As in everyday conversation, participants interacting with ELIZA needed to orient toward turn-taking (i.e., taking turns), which includes making the best sense one can of a prior turn, plus a preference for repair when meaning is at stake.9 This is especially important in human–computer interaction, since people do not continue to repair sense-making indefinitely, and its failure—or to be more precise, the failure of several repair attempts—leads to possible judgments of incompetence and/or interactional breakdown.

Garfinkel’s treatment of interactional trouble or “breaching” as a way to reveal the details of the practices that are constitutive of meaning, was central to his research on human–computer interaction (Dourish and Button 1998). Just as he noted that troubles encountered by marginalized people can give them heightened awareness of the taken-for-granted features of interaction (Garfinkel 1967; Eisenmann and Rawls 2023; Duck and Rawls 2023)—and “repair” work on trouble in conversation had been an early focus of Sacks’ research—Garfinkel understood that trouble in human–machine interaction might afford clues to its constitutive features. Hence, “Machine Down” was a desired result of Garfinkel’s scripts.

Interactions with ELIZA could break down when understanding was not achieved after several attempts, with the result that participants might stop treating ELIZA as a competent interlocutor.10 This also applies to contemporary conversational AI, as many have experienced when not able to solve a technical problem on an automated hotline. Thus, in considering how meaning is achieved in interaction with AI, we also find that the breakdown of Trust Conditions and the subsequent ascription of incompetence, is itself an interactional accomplishment. Interestingly, in Garfinkel’s experiments with LYRIC, a program similar to ELIZA created (and adapted for Garfinkel) at the University of California, it was often the machine that interrupted and announced the breakdown.
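The design logic behind a desired “Machine Down” can be made concrete in a short sketch. What follows is purely an illustrative reconstruction under our own assumptions (LYRIC’s code and Garfinkel’s scripts survive only as printouts in the Garfinkel Archive; the class, rule table, replies, and failure threshold below are invented): it shows how a scriptwriter can limit a program’s repair resources so that repeated failure to achieve sense is announced by the machine rather than smoothed over.

```python
class BreachingScript:
    """ELIZA-style responder whose repair resources are deliberately limited.

    Illustrative only: the sketch shows the design move, namely that a
    scriptwriter decides how much repair the machine offers before the
    machine itself announces the breakdown.
    """

    def __init__(self, rules, max_failures=3):
        self.rules = rules              # keyword -> canned reply
        self.max_failures = max_failures
        self.failures = 0

    def respond(self, user_input):
        for keyword, reply in self.rules.items():
            if keyword in user_input.lower():
                self.failures = 0       # a match counts as sense achieved
                return reply
        self.failures += 1
        if self.failures >= self.max_failures:
            return "MACHINE DOWN"       # the designed-for trouble
        return "I do not follow you."   # minimal repair request

bot = BreachingScript({"family": "Tell me about your family."})
for turn in ["fnord", "xyzzy", "qwerty"]:
    print(bot.respond(turn))  # two repair requests, then: MACHINE DOWN
```

On this reading, the announcement marks the point at which the script stops exploiting the user’s willingness to do extra sense-making work, making that work observable precisely by cutting it off.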
8 The term “et cetera”/“etcetera” can be spelled in the two ways shown. Garfinkel used both spellings. According to several dictionaries, both uses are acceptable. There also is a newer hyphenated version “et-cetera” in use. In this paper, we follow the original Latin spelling “et cetera” that Garfinkel used more often.

9 There are various forms of repair, from self-repair to other-initiated repair to explicit requests for clarification and subsequent re-orientations; see for a detailed discussion Schegloff et al. (1977), Schegloff (1992) and Svensson (2020).

10 Mair et al. (2020: 10) discuss this issue also with an EMCA perspective: “A lot, therefore, comes to hinge on what, adapting Sacks, we might term the engineered ‘integrity’ (Sacks 1992) of AI technologies, that is, their capacity to arrive at outputs, machinic performances, which are capable of being responded to as if they were products of analysable courses of human action. When their doings cease to be analysable in that way, when they cease to be capable of being treated as if they were actions, troubles arise in dealing with the ‘it’ that has generated those doings and we withdraw attributions of ‘seriousness’ (Sacks 1992) to them as agents. Indeed, under such circumstances, we quickly come to question ascriptions of agency to them as a whole and act toward them quite differently as a result (cf. Collins 2018).”

By explicating and respecifying the often-implicit assumptions at the core of systems design and computational practices, Garfinkel aimed to open AI technologies to new avenues of research. When Philip Agre engaged with Garfinkel in the 1990s it was this aspect of Garfinkel’s research that interested him. Agre (1997: 233) noted that Garfinkel’s treatment of meaning as accomplished through ordered
sequences of moves or “turns” that are relevant to particular social settings and/or situations had respecified the understanding of “context” in a way that could reorient research toward essential collaborative interactional details of meaning and object construction. Instead of treating context as a pre-existing framework for constructing meaning and reducing indexicality, Agre maintained that, following Garfinkel, context and indexicality could both profitably be treated as evolving, constantly changing social achievements.

3 Weizenbaum, ELIZA, and the development of AI

Chatbots and other communication technologies have become ubiquitous features of everyday life. In the course of a day, we may interact with communicative AI as we purchase groceries, do our banking, communicate with healthcare providers, or play music, to name just a few activities. Significantly, today’s chatbots and communicative AI often trace their ancestry back to the ELIZA program (see, e.g., the collection of articles celebrating the 50-year anniversary of the ELIZA program, Baranovska and Höltgen 2018), making a return to ELIZA illuminating.

Created by Joseph Weizenbaum at MIT between 1964 and 1966, ELIZA was one of the earliest machines to attempt the “Turing test” (Turing 1950), and as Shieber writes, “it has been known since Weizenbaum’s surprising experiences with ELIZA that a test based on fooling people is confoundingly simple to pass” (1994: 72). At least under some conditions, ELIZA’s human conversation partners thought it displayed human competence. The most famous of these conditions was known as the DOCTOR script, which had ELIZA impersonate a Rogerian psychotherapist. Although Weizenbaum (1967, 1976) argued that the script—and ELIZA more generally—was meant to demonstrate the superficiality of human–machine communication, several early human users believed the program was intelligent and had genuine insight into their personal lives. This phenomenon was later described as the “ELIZA effect” (Hofstadter 1995) and it surprised Weizenbaum (1976: 7) who noted that “extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

Early AI researchers, including Hubert Dreyfus, Michael McGuire, Stephen Lorch, and Gardner Quarton took a keen interest in ELIZA. What is less well known is that Garfinkel recognized the relevance of ELIZA for his own research in ethnomethodology.

ELIZA was designed to conduct “teletyped conversations in a natural language between students and a computer” (Hayward 1968: 1).11 The computer was programmed to display “understanding” by identifying pre-specified “keywords” and patterns in the input string of characters typed by the user. This allowed ELIZA to complete some of what Sacks called turn-pairs, responding to the input by assigning values to the keywords (i.e., ranking them in terms of priority and importance) and selecting from a pre-specified set of outputs. The input–output rules were not part of the ELIZA program itself. Rather, they were encoded in independent scripts, such as the DOCTOR script that simulated a Rogerian psychotherapist.12

Specific scripts were used for different purposes. Simply put, “[s]criptwriting is the means of instructing the computer: ‘When he says this, you say that’” (Hayward 1968: 2). It could also be said that “an ELIZA script is a program and ELIZA itself an interpreter,” or that “ELIZA appears as an actor who must depend on a script for his lines” (Weizenbaum 1967: 475), with each script enabling ELIZA to “play a specific conversational role” (Weizenbaum 1976: 3).

The scripts allowed Weizenbaum to solve a problem that continues to challenge AI researchers today, albeit in a very restricted and rudimentary way. This is the “problem of context”. Rather than have ELIZA simply respond to predetermined commands (e.g., the user types a command, such as print (“Hello World!”), and the computer executes it), natural language strings stored in the scripts made it possible for ELIZA to detect contextual cues, in the form of keywords, and respond accordingly (Quarton 1967: 168). For example, if the user entered the keywords “father” or “mother,” ELIZA would respond by asking a question about the user’s “family” that displays its relevance to their turn.

11 Hayward wrote a “scriptwriter’s manual (…) for the use of the ELIZA conversational computer system” that circulated at MIT and that Garfinkel acquired at the time. It is now also available online: https://archive.computerhistory.org/resources/access/text/2022/04/102683842-05-01-acc.pdf. 22 Oct 2022.

12 More precisely, a script consists of two sections: a keyword section that contains a list of keywords and rules for their decomposition and reassembly that is used for the analysis of user inputs, and a program section that contains instructions for the computer once the inputs have been analyzed.

ELIZA also had some flexibility in terms of the responses it produced: for example, instead of returning an “Error” message when the user did not supply a keyword it recognized, the program could respond with something like “Please go on”—which Weizenbaum (1967: 475) called a “continuative”—or by requesting clarification (e.g., “Please rephrase”). Garfinkel listed some of these phrases in his notes under the headline “Interrogation Script” and in his experiments problematized this feature of the script. This flexibility made it possible for ELIZA to complete turn-pairs that some users experienced as meaningful conversation. It also meant that ELIZA could display an orientation toward
the reciprocity requirements of ordinary human interaction—to a point—by exhibiting a response related to its human coparticipant’s immediately prior turn, including repair initiations.
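The mechanism described above (ranked keywords, decomposition and reassembly rules held in a script, and continuatives as a fallback; see footnote 12) can be illustrated with a brief sketch. This is a minimal reconstruction in Python, not Weizenbaum’s MAD-SLIP implementation; the keywords, ranks, patterns, and canned replies are invented stand-ins for what a script like DOCTOR supplies.

```python
import random
import re

# Illustrative stand-in for a script's keyword section: each keyword carries a
# priority rank, a decomposition pattern applied to the user's input, and
# reassembly templates for the reply. (Invented examples, not DOCTOR's rules.)
SCRIPT = {
    "mother": (3, r".*\bmother\b(.*)", ["Tell me more about your family."]),
    "father": (3, r".*\bfather\b(.*)", ["Tell me more about your family."]),
    "i am":   (2, r".*\bi am\b(.*)",   ["How long have you been{0}?",
                                        "Why do you say you are{0}?"]),
}

# Continuatives: produced when no keyword is recognized, instead of an error.
CONTINUATIVES = ["Please go on.", "Please rephrase that."]

def respond(user_input: str) -> str:
    text = user_input.lower()
    # Collect the keywords present in the input; keep the highest-ranked one.
    found = [rule for kw, rule in SCRIPT.items() if kw in text]
    if not found:
        return random.choice(CONTINUATIVES)
    rank, pattern, replies = max(found, key=lambda rule: rule[0])
    # Decompose the input around the keyword, then reassemble a reply template.
    match = re.match(pattern, text)
    fragment = match.group(1).rstrip(".!?") if match else ""
    return random.choice(replies).format(fragment)

print(respond("My mother worries about me."))  # -> family question
print(respond("I am unhappy."))                # -> "...you are unhappy?"
print(respond("It rained all week."))          # -> a continuative
```

Weizenbaum’s actual program added, among other things, pronoun reversal and a memory queue, but even this skeleton shows the division of labor the sources above emphasize: everything conversational lives in the script data, while the program is only an interpreter that matches, ranks, decomposes, and reassembles.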
In other words, ELIZA “passes” as a competent interlocutor due to its ability to “simulate a type of sequential organization attached to a particular professional identity and an associated type of interaction (the therapeutic interview)” (Relieu et al. 2020: 94, our translation). While the overtly evasive behavior often required of ELIZA in order to acquire recognizable keywords might lead to interactional trouble in everyday life, it is precisely what is expected in the non-directive Rogerian therapeutic setting, where the therapist is “free to assume the pose of knowing almost nothing of the real world” (Weizenbaum 1966: 42).13 The success of the DOCTOR script is thus dependent on the expectation of an asymmetric social situation in which the therapist asks questions, evades answering and does not disclose (private) information.14

Weizenbaum found that even some of his office workers who knew very well that they were conversing with a computer program, sought time alone to “talk” with ELIZA. This puzzled him, leading to the “ELIZA effect” interpretations introduced above. The conundrum is still widely discussed: “There was the code. But there was also the story. Perhaps we might say that, if ‘ELIZA’ was code, then ‘Eliza’ was the comfort found in the machine, by humans, who built a different kind of relationship with ‘her’ that exceeded what the procedures of code offered, precisely because code came into contact with human thought” (Bassett 2019: 811, our emphasis).

When Weizenbaum developed ELIZA, AI technology was in its infancy. Even in the 1960s, however, it was clear to many scientists and philosophers that digital technology would come to play an increasingly important role in human affairs. Early AI enthusiasts (e.g., Licklider 1960; Minsky 1967; Papert 1968) mused about how computers could improve the lot of humanity. Skeptics countered by pointing to the limitations of AI, a position succinctly expressed in the title of Dreyfus’ (1979) influential What computers can’t do. In many ways, Weizenbaum was on the side of the skeptics. He did not think that a machine and a human could communicate in a non-superficial way, and designed his ELIZA research to demonstrate this point. He was also concerned that if conceived otherwise, humans might be reduced to their computer-like functions (for a more differentiated theoretical and historical contextualization of these arguments see Heintz 1993). When he published his results, however, it turned out that many in the fledgling AI community disagreed (e.g., Colby et al. 1966).15 These early discussions are in many ways still informative and have recently been characterized as a “hype of anthropomorphism”, on the one hand, and a critical emphasis on “demystification” of the “misrepresentation of the capacities and capabilities of computer programs” on the other (Dillon 2020: 2). Natale (2019, 2021) and Bassett (2019) show how these competing narratives originated with ELIZA and continue to persist today.

In contrast to interpretations that rely on anthropomorphizing “relationships” and the transformative effects of “human thought” that characterize the camps of ‘anthropomorphists’ and ‘demystifiers’ that still feature prominently as warring parties in debates about AI today, Garfinkel was interested in the social practices through which the interactional relationship between machine and human user is accomplished. EMCA treats the nature of that relationship as an empirical question. ELIZA shows how “interactional success” and “understanding” are achieved and made visible by the fact and in the way that the actual interaction proceeds (cf. also Sacks 1968). While this might be seen as a form of “demystification”, it is one that is not based on the fact of technology, but on how that technology is embedded in and relies on human social practices for its sense.

13 “If for example, one were to tell a psychiatrist ‘I went for a long boat ride’ and he responded ‘Tell me about boats’, one would not assume that he knew nothing about boats, but rather that he had some purpose in so directing the subsequent conversation” (Weizenbaum 1966: 42).

14 Of course, these kinds of “evasions” are to some extent also a constitutive part of everyday conversations, as McGuire and Lorch discussed in their paper (‘Natural language conversation modes’, mimeo copy 1968). Garfinkel copied a passage from their paper in his notes: “‘… In nearly all cases, at least when a social equality exists, one may evade. For example, evasions are possible by a ‘play of words’, asking a question in return, refusing to answer, or discuss the question itself (metacommunication).’” The excerpt is taken from their discussion of the features they cite as characterizing the “interrogation mode”; Garfinkel’s note from May 25, 1968 is on page 10.

15 Kenneth Colby (1920–2001) had been a close collaborator of Weizenbaum and in addition to other controversies, one crucial difference between them was that Colby intended to use these programs for actual therapy, which Weizenbaum saw as dehumanizing and even disrespectful (cf. Natale 2019: 722). Colby was also mentioned in Garfinkel’s grant proposal submitted to the AFOSR in 1966 and emphasized as a possible collaborator for doing research with at Stanford: “The practical character of Colby’s interest suggests that he would be a most useful informant on problems of credibility, etc. In addition, Colby’s program may be adequate for the purposes of experimental procedures.” (Garfinkel 1966: 28).
Garfinkel would not have been surprised, as Weizenbaum was, that several of ELIZA’s human conversation partners considered it a competent or even an intelligent interlocutor.16

4 Harold Garfinkel’s interest in ELIZA

Garfinkel’s interest in human–computer interaction and information grew out of his research on methods of practical reasoning and communication in social interaction, which he had been studying since before his graduate student days at Harvard (1946–1952). Then, while at Princeton from 1951 to 1953, he extended his early research on communication and interaction to Information Science. Garfinkel’s (2008 [1952]) Toward a Sociological Theory of Information outlined a novel theory of information (Thielmann 2019). The ELIZA program at MIT thus presented Garfinkel with what he called a “perspicuous setting” in which to further investigate issues that he had been studying for more than two decades, generating a corpus of research findings which showed that and how people manage to make meaning out of what were random turns at talk in situationally contingent ways.

4.1 Garfinkel’s early “yes–no” experiments and the notion of trouble

After being discharged from the US Army in January 1946 and following a short stay at the University of Texas at Austin, Garfinkel taught briefly, from February through May 1946, at Georgia Tech University in Atlanta. Though brief, this interlude would turn out to be important. While there, Garfinkel conducted the first of many “Yes–No” experiments, which involved putting subjects into an interaction where, unbeknownst to them, they would receive random “yes” or “no” responses to their turns at talk. In these situations, Garfinkel investigated whether and how people managed to make sense of these responses—how they managed to make meaning out of what were random turns at talk that could be considered contingent. However, he found that these responses were neither experienced nor treated as random or contingent. Instead, the work invested by the participants in making sense of such turns typically resulted in their increased satisfaction with the interaction.

In the late 1950s, Garfinkel (1967: 78) conducted further Yes–No experiments that were introduced to the subjects as ways “to explore alternative means to psychotherapy” (see also Garfinkel 2019 [1959]).17 The setting was in many ways similar to ELIZA’s DOCTOR script, but instead of a computer console, the interaction was mediated via a microphone, through which participants were instructed to ask questions about serious personal problems (each of which would permit a “yes” or “no” answer). Between question–answer pairs, the microphone was muted and participants were asked to reflect on the responses they had received, which were in fact (and unknown to them) random yes or no responses by the “counselor”. The participants managed to make sense of and perceive “answers (…) as ‘answers-to-questions’” (Garfinkel 1967: 79). Often subjects said they “knew in a glance what the counselor was talking about, i.e., what he meant, not what he had uttered” and “if the answer was not obvious […] its meaning could be determined by active search, one part of which involved asking another question so as to find out what the advisor ‘had in mind’” (Garfinkel 1967: 79).
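Reduced to its procedural core, the experimental design is strikingly simple. The sketch below is a schematic rendering of the procedure as just described; the sample questions are invented, and the responder deliberately ignores its input, which is the whole point of the design.

```python
import random

def counselor_answer(question: str) -> str:
    """Return a random 'yes' or 'no', deliberately independent of the question."""
    return random.choice(["yes", "no"])

# Hypothetical stand-ins for a subject's questions about a personal problem.
questions = [
    "Should I tell my father about my plans?",
    "Do you think he would approve?",
    "So would it be better to wait, then?",
]

for q in questions:
    # Between question-answer pairs the microphone was muted and the subject
    # was asked to reflect on the answer; here we only print each pair.
    print("Subject:  ", q)
    print("Counselor:", counselor_answer(q))
```

Because the answers are independent of the questions by construction, whatever coherence the “advice” acquires across the sequence is demonstrably supplied by the subject’s own sense-making work.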
Garfinkel’s investigations highlight the temporal character of interactions in which meaning is sequentially produced, in both a retrospective and a prospective manner. Limiting the “counsellor’s” responses to “yes” and “no” answers maximizes the vagueness and situational indexicality of reference, which is a ubiquitous feature of ordinary language use that is usually taken for granted. This makes the participants’ sense-making work visible and available for further investigation. It is important to note that the point is not that participants are making sense out of “non-sense,” but rather, that the accomplishment of any meaning (including the recognition of nonsense) is based on their use of the constitutive practices of interaction, the sequential production of meaning and its Trust Conditions, and how the counselor is expected to orient toward those conditions (which in this particular case involve the experimental counseling situation).

16 In addition, it may be that the anonymity and privacy provided by knowing that one is talking with a machine has been an important element contributing to ELIZA’s success. In psychotherapy, for example, such privacy has to be ensured by social institutions and regulation. We would like to thank Michael Lynch for his comments regarding this consideration of ‘privacy’ with ELIZA that offers an alternative interpretation of the social situation. In contrast to the prevailing conditions when Weizenbaum was working, the privacy aspect has now become a major concern with regard to chatbots, which often store detailed logs of interactions with their human interlocutors to be used for further software development, evaluation, and other purposes. Furthermore, there can be, of course, various ‘reasons’ why people may interact with ELIZA or other chatbots. For example, one of the reviewers pointed out that he or she might just be “having fun” with them. However, the point of the paper is to elaborate just how these accounts like “privacy”, “agency”, “having fun”, etc. are to be considered as interactional achievements in the first place.

17 The results appear in Garfinkel’s paper distributed at the session on the Sociology of Knowledge, Fourth World Congress of Sociology (Milan/Stresa, Italy, September 12, 1959) and later in Chapter 3 of Studies in Ethnomethodology (Garfinkel 1967: 79–94).

The sense-making practices Garfinkel investigated are usually taken for granted—“seen but unnoticed” (Garfinkel 1967: 37). He realized that introducing trouble could
be a fruitful way to make these practices visible, and the Yes–No experiments were only one among many instances in which he took a similar approach. Being himself from a Jewish background, Garfinkel was interested in socially marginalized people—and aware of how troubles could produce heightened awareness (Duck and Rawls 2023). Those for whom trouble is a constant feature of everyday experience, Garfinkel treated as “natural experiments” and as sources of information about how to deal with the potentially problematic character of practices that are ordinarily taken for granted (Rawls and Turowetz 2021; Eisenmann and Rawls 2023), such as in his later study of “Agnes” (pseudonym for a transperson he interviewed in 1959–1960; see Garfinkel 1967).

In other studies, rather than relying on such “natural breaching experiments”, he would introduce trouble into routine courses of action himself. This is what he was doing in the Yes–No experiments. For his PhD research at Harvard (1946–1952), originally titled “The Jew as a Social Object,” Garfinkel also introduced trouble, conducting experiments where he played tape recordings of a “pre-medical candidate” for his research subjects, and then introduced an incongruity, e.g., an expert stating that the candidate, who had behaved rudely in the recorded interview, had received a very high score and been rated one of the best candidates.18

Garfinkel was interested in how his subjects would make sense of this incongruity, finding that they fell roughly into two groups. One group produced accounts of their own initial assessment of the candidate that brought it into line with the reported assessment, while the second group tried to figure out how they had gone wrong. Garfinkel was interested in how social status, an interest in “passing” into a higher status category, and the heightened awareness and double consciousness that can come from marginalization might shape the responses of his research subjects. Subjects who produced accounts of their initial assessment that aligned with the judgment of the expert, which Garfinkel associated with “passing”, engaged in what he referred to as a “tribal” style of reasoning that relied on the judgment of others (Garfinkel 1948, 1952, 1967). This group contrasts with those subjects who struggled with the incongruous information by revisiting their own judgment, as well as questioning the judgment of the authority (Turowetz and Rawls 2021). As in the earlier “Yes–No” experiments, subjects found ways to make sense in all cases. But the differences in the ways they did so allowed Garfinkel to associate their responses both with their awareness of social processes and their own security in their social status/identity.

Novel technologies such as ELIZA that Garfinkel engaged with in the 1960s and later also provide insights into the ways participants deal with incongruities and discrepancies.

4.2 Garfinkel engaging with ELIZA

In the context of human–computer interaction, the design of Garfinkel’s Yes–No experiments can be seen as “constituting talk with the simplest of programs” (Oldman and Drucker 1985: 156). Garfinkel’s work with ELIZA allowed for further specification of sense-making practices. We know from an Air Force grant proposal written by Garfinkel in 196619 that he knew of ELIZA before going to Harvard in 1967. In that document, Garfinkel (1966: 15–16) describes human–computer interaction and ethnomethodology’s interest in it:

Attempts are currently being made to program computers to “converse” intelligibly with human beings in everyday language. Whatever the purposes for which these programs were developed (e.g., technical, theoretical or practical), they all appear to posit as their criterion of adequacy the human user’s recognition of the exchange as a reasonable, plausible or credible one. With respect to this criterion these attempts have been variously successful. From an ethnomethodological perspective, these attempts provide an extremely useful resource for addressing a number of problematic concerns. Prominent among these concerns is the investigation of the methods members use on actual, practically circumscribed, concrete occasions to detect the reasonable, plausible and credible features of an environment. Addressing this general problem entails developing procedures for detecting and bringing to rational formulation the features of the members’ methods for accomplishing this work.

At the center of Garfinkel’s interest and proposed investigation are the concrete practices—the lived work—that participants perform in interaction to achieve recognizable social objects and meaning (that they then attribute to the machine). Garfinkel uses terms, such as concrete occasion, situation, practical circumstances, setting, and methods, to denote both the locations within which a recognizable and meaningful social order is accomplished and the means of its accomplishment.

18 This method was inspired by a research project Garfinkel had participated in as a graduate student with Brewster Smith and Jerome Bruner at Harvard in 1947 (Smith et al. 1956). The study included several “tests” that were designed to create stress for the research subjects to see what they would do under what Garfinkel later would call breaching conditions. Garfinkel gave the information apperception test for this project which created incongruity much like his PhD research would do one year later.

19 In the proposal—titled “Formal Investigation of Decision-Making in Actual, Singular Situations of Practical Action”—Garfinkel is listed as a principal investigator together with Harvey Sacks (co-principal investigator).

It is the availability (to the analyst) of the rules the computer is following, and the availability of transcripts of how
ELIZA’s statements are held accountable to the expectations of human users in the concrete setting of the ELIZA interactions, that makes a detailed investigation of human–computer interaction by Garfinkel fruitful. In discussing the possibility of such an investigation, he highlights certain characteristics of human–computer interactions:

One potential methodological resource arising out of human-machine exchanges rests on the fact that the rules governing the machine’s contribution to the exchange are available to the analyst. How these rules both provide for ‘adequate utterances’ or ‘intelligible conversations’ from the perspective of the human user in a series of occasions may illuminate features of the user’s methods for deciding such issues. (Garfinkel 1966: 16)

Because the rules that the machine is using are available to the observer but not to the participant, the observer is in a position that is rather similar to what Garfinkel was doing when he ran his PhD and “random Yes–No answer” experiments, allowing for careful manipulation of the script and observation of subjects’ responses to discover the subjects’ methods. The observer could “exploit” certain “features of the human–machine exchange” to reveal how subjects oriented to the machine under different conditions (Garfinkel 1966: 16):

Other features of the human-machine exchange may be open to exploitation. Included are such variable conditions as the instructions to the human user, what he is led to believe about the partner to the exchange (e.g., another human being versus a machine), the spatio-temporal mode of access to the exchange (e.g., visual electronic one-at-a-time displays of the successive ‘utterances’ vs. mechanized listing of the developing ‘conversation’), and so on. The full range of resources made available by the machine-human exchange situation remains open for further specification and clarification.

Would it matter if subjects were told the machine was human, or if the conversational turn structure was allowed to proceed “naturally” (i.e., expectably) as opposed to “mechanically”?20 And what is the role of technical interfaces and the social situation of their use?

Garfinkel sought to elaborate responses to these questions in a series of recorded discussions he had with McGuire, Lorch, Mishler, and Quarton during his time at Harvard in 1967–1968. In a meeting that took place on March 25, 1968, Garfinkel explains the practice of glossing (cf. Garfinkel and Sacks 1970),21 and how one hallmark of a competent speaker is the ability to “talk without meaning (…) [someone] who can talk by meaning differently than he needs or can say in so many words, and with this he can now use these [to] gloss the talk over the contingencies of actual interaction” (ELIZA 2nd Meeting, March 25, 1968). Moments later, another participant—likely Quarton, though it is unclear from the tape—observes that, “ELIZA is actually a case of glossing”:

20 Participants have both a social obligation to make sense and deeply embedded taken-for-granted ways for doing so that are so deeply assumed that they will engage in them even in cases where it should be obvious that the recipient could not be expected to respond in an expected or even in a human way. Lucy Suchman’s (1987, 2007) recordings of people at the Xerox machine come to mind. People know they are using a machine (or interacting with a machine), but they nevertheless treat pauses between machine actions, or between instructions displayed by the machine as meaningful and can be heard in her recordings using ordinary turn-taking preferences to try making sense of the machine. There are corollaries in other domains of practice. Psychiatrists, for example, give themselves headaches doing the turn-taking work required to make some sense of their interactions with people diagnosed as schizophrenic, despite claiming that their utterances are “word salads” that resist their sense-making efforts (based on unpublished research in clinical settings, conducted by A. W. Rawls). As it happens, Garfinkel addressed a similar question in an early conversation with Harvey Sacks (1960–1962, transcription Garfinkel Archive). Telling Sacks that he had explained et cetera to a psychiatrist he knew, Garfinkel said that the psychiatrist understood immediately and asked if Garfinkel had ever considered the problems that schizophrenics have in this context. “Now this is according to the psychiatrist,” Garfinkel said (p. 3), “that psychiatrists who attend to the et cetera meanings in what the schizophrenic is doing or saying have success with schizophrenics.” This success is achieved by treating the et cetera variations they produce as “normal,” as contrasted with the tendency, as it was described for family members, to refuse to accept the patient’s uses of et cetera. The psychiatrist told him that if he did this when schizophrenic patients made unexpected uses of variation that family members refused to accept, this accommodation of the patients’ broader use of et cetera made successful interaction possible for psychiatrists. The problem was not with the particular use of et cetera per se, but rather with its not being mutually agreed to as a “normal” acceptable interactional practice.

21 Glossing in Garfinkel’s and Sacks’ (1970: 343 f.) terms is the practice of letting an abstraction (or concept) stand in for a practice (or set of practices): “Glossing practices exist in empirical multitude. In endless but particular, analyzable ways, glossing practices are methods for producing observable–reportable understanding, with, in, and of natural language.” In the case of ELIZA what is happening is that the interactional work achieved by the human user working to make sense of ELIZA’s turns is then glossed as a meaning that was inherent in ELIZA’s turn, when it actually was accomplished over the course of turn-pairs, or a series of turn-pairs.

ELIZA 2nd meeting, March 25, 1968

?: ELIZA actually is a case of glossing.

Gar: Oh, you bet. Holy Christ, you know that.

?: Right - without ( ) this conference
Gar: So, see, (…) the thing that’s really marvelous about ELIZA is that—and I think we talked about that briefly—that it is possible to say with respect to any text, exactly what she does to change it. And to produce a response. Now then, the glossing that is the interesting thing about ELIZA, that it does not reside in the so-called function of the programs that define it, but rather it’s the fact that you have an exchange going on, such that what ELIZA does is to furnish what I’m calling one half of any part. You know what the-, so it’s not until you get the other violinist do it, that you understand what ELIZA is doing. Because otherwise you already know what ELIZA is doing and it underscores-, and it seems to me this is what Weizenbaum is insisting on, when he says: look, there are no mysteries in this. Let me tell you why there are no mysteries and then, so that we can then point to what the important problems are. And when he comes to demystify the program, what he does is to lay out, in fact, what kinds of, what these functions are, as they operate on any item of text without having to name that text.

What ELIZA does is to furnish “one half” of the work that the turn-pair accomplishes, while the human participant furnished the other half that makes sense of ELIZA’s turn. In arguing this, Garfinkel is disagreeing with Weizenbaum about where the mystery/interest lies in ELIZA. For Garfinkel what ELIZA is doing—the meaning of its utterances—does not, as Weizenbaum would suggest, reside in “the so-called function of the programs that define it.”22 Whereas the intentionalist theory of meaning then gaining traction in cognitive science (e.g., Chomsky 2006 [1968]) would conceptualize the meaning of an action in relation to the intentions of an actor, thus reducing the meaning of ELIZA’s utterances to the script it was running, Garfinkel points out that meaning is always a cooperative achievement between two (or more) participants. Meanings do not reside in people’s minds, or in machines, but rather in the interactionally produced spaces between people—as philosopher Hilary Putnam (1975: 227) would later memorably put it, “Cut the pie any way you like … Meaning just ain’t in the head.”

Moving beyond his contemporaries’ emphasis on the “actor’s point of view,” Garfinkel (1946, 1952) focused on identifying the interactional practices through which members of settings achieve mutually recognizable social objects, including self and meaning. In doing so, he solved a problem that had confounded scholars like Florian Znaniecki (1936), W. I. Thomas and Dorothy Thomas (1928), as well as his advisor Talcott Parsons (1937): How to explain that two or more actors manage to arrive at an operative definition of any given situation. Garfinkel’s solution does not rely on shared meanings, symbols or intentional states—or on context—as his predecessors did, but rather focuses on the empirical work that comprises the procedural achievement of meaning in and through sequentially organized social practices and actions—in specific settings in vivo and in the midst of the practical affairs of everyday life. Eliza is completing turn-pairs initiated by the human user when it can, and the human user is completing turn-pairs that Eliza initiates, in the process creating meaning and investing in the interaction as an empirical social achievement.

Garfinkel argues that it is not possible to understand what ELIZA is doing “until you get the other violinist”—i.e., the other participant—in the conversation. The human user is making sense of ELIZA’s utterances sequentially, and they are doing so in concrete practical circumstances in collaboration with ELIZA—even if it remains true that ELIZA’s actions are scripted and predetermined. ELIZA’s turns change both the meaning of prior utterances and the implications for the work that the human participant will do next. What comes into focus is how meaning is produced on a turn-by-turn basis by making sense of the sequential relevance of turns in interaction with ELIZA. As in the “Yes–No” experiments, subjects can make meaning even out of random answers. But this does not mean that subjects are delusional, or that the meanings they coproduce with ELIZA are not “real”. For Garfinkel, meaning is always an actual “real” empirical social achievement.

Garfinkel’s empirical take on ELIZA—or any social phenomena for that matter—goes beyond most sociological approaches. In a discussion that took place in a meeting between Garfinkel, Sacks, Rose, Erving Goffman, and Talcott Parsons at the Los Angeles Suicide Prevention Center in 1964 (see Rawls et al. 2020), the participants can be heard arguing at one point about the reality of witches. Goffman takes the position that witches are obviously not real, and that it is the sociologist’s job to provide arguments for the correction of such views. Goffman also argues that there has to be another layer to sociological descriptions that is not exhausted by the notion of observing “members’ methods.” Without this additional layer, he says that “you would have to believe that witches are real.”

22 This can be described as an instance of “methodological irony” (Watson 1998), as Weizenbaum knew better. Ethnomethodology, by contrast, takes a “non-ironic position toward member’s practices” (Laurier et al. 2019).

Garfinkel and Sacks counter that it is, first off, the sociologist’s job to explain and account for the practices and methods that members are engaged in that constitute witches as real and consequential for them (“when the house shakes”). As socially achieved objects witches are real. Thus, Garfinkel says emphatically in his response to Goffman: “You god damn better believe in witches, otherwise you are not doing
However, he is not succumbing to the "worst kind of relativism" (as Parsons points out),23 but respecifying the job of the sociologist. This job is not to explain why people are wrong in their assumptions, but rather to account in detail for how it can come to be that people, e.g., even nowadays (in some regions and communities, as Rose interjects), still do believe in witches.

23 In the discussion, Parsons introduces another example in considering that Garfinkel and Sacks' emphasis on members' methods could lead to trouble in solving the "famous bent stick problem": submerged in water, a stick has the appearance of being bent. But this is in fact an optical illusion, and modern physics explains how it occurs: light refracted by the submerged part of the stick changes direction as it reaches the water's surface, creating the illusion of bending. Garfinkel's position centers members' methods and takes seriously their production, whether they yield perceivably bent sticks (in social collectives where such social objects would be meaningful) or demonstrations that the bending is an optical illusion. Modern physics, after all, does not identify optical illusions "from nowhere," so to speak, but by means of its own order of practices (cf. Kuhn 1970). The difference between witches and bent sticks is not so much that one is a social object and the other a physical (or natural) object, but rather that they are both recognized and treated in social interactions by members using members' methods to achieve that recognition.

In ethnomethodology, the matter is not settled by distinguishing what social phenomena "really are" from how members perceive them, but rather by accounting for the detailed members' practices that are required to establish a social object in the first place (cf. also Eisenmann 2022). This does not mean taking a position on whether witches or the agency or intelligence of ELIZA (or of more competent contemporary conversational AI) are "real" or "unreal", but rather showing how these social objects are established as real or unreal for members in and through mutual, cooperative practices that are embedded in social settings, which settings those practices also establish and elaborate. The meanings achieved through interaction with ELIZA are, like witches, social objects achieved through the interactional work of human users. In focusing on members' methods, Garfinkel is not endowing ELIZA with intelligence (or a lack thereof), but rather exploring how the differing ways of interacting with this early predecessor to natural language AI systems and their chat interfaces are grounded in social practices that cannot be understood unless they are studied in detail.

5 Garfinkel's work with LYRIC at UCLA

An acronym for "Language for Your Remote Instruction by Computer," the LYRIC program was developed by Leonard and Gloria Silvern in 1966 to instruct college students in physics and mathematics. Garfinkel recognized that it could be used to carry out studies similar to those being done with ELIZA. He had the scripts McGuire and Lorch had run for him with ELIZA, and together with his student William Korn at UCLA, began composing his own scripts for LYRIC.

5.1 Garfinkel's trial runs with a chatbot at UCLA in 1969

In a letter to Kenneth H. Tom dated January 15, 1969, and titled "Application by H. Garfinkel, Studies of the Formal Structure of Natural Language Formulae," Garfinkel wrote that he was working with a group of investigators at UCLA that included Melvin Pollner, Howard Schwartz, and William Korn. That letter referred to a previously funded grant proposal to the AFOSR (1966) that is said to "document" the relevance of ELIZA and LYRIC to Garfinkel and Pollner's interests. We know Garfinkel obtained funds for computer time and was able to do test runs with LYRIC in February 1969, which he did with Korn. In total, there are fifteen runs in the Garfinkel Archive done with LYRIC, which were likely stopped when the proposal was not funded.

The LYRIC runs offer a tantalizing preview of what Garfinkel might have done had the grant proposal been funded. From some transcripts, it is obvious that they were trying to produce interactional trouble between user and machine. Whereas a computer programmer might describe such work as "troubleshooting" or "debugging"—deliberately provoking failures in order to find and fix faults in the code—Garfinkel's aims were sociological. As meaning can only be achieved sequentially, between pairs of actions/utterances, the trick is to break the pairs. As in his breaching exercises and Yes–No experiments, Garfinkel was treating interactional "trouble" as a way to make the ordinary methods of sense-making visible, i.e., to reveal the tacit expectations/rules participants are orienting to, in this case between human and machine. If Garfinkel could crash the machine, he gained information.

Transcript of a LYRIC run Garfinkel conducted under the title "Pause: Pausitives"
(C = LYRIC, Gar = Garfinkel):

Gar: (Sign In)

C: HELLO. THE MIRACLES OF MODERN SCIENCE HAVE ENABLED US TO TALK TO ONE ANOTHER EVEN THOUGH WE ARE PHYSICALLY (AND CONCEPTUALLY) IN QUITE DIFFERENT SITUATIONS. YOUR STATEMENTS CAN REACH ME ONLY IF YOU PUSH THE INT OR SHIFT BUTTONS. WHY DON'T YOU TRY IT NOW?

Gar: (Presses INT)
C: VERY GOOD. IF YOU SEE A SMALL ARROW ON THE SCREEN PLEASE MAKE SOME REPLY. DON'T FEEL INHIBITED. JUST TYPE ANYTHING YOU FEEL IS APPROPRIATE AND THEN PRESS THE ENTER KEY. SO WHAT'S ON YOUR MIND?

Gar: Uh ……

C: PLEASE ELABORATE.

Gar: I …… Uh ……

C: TELL ME MORE ABOUT YOURSELF.

Gar: Well …………

C: I UNDERSTAND PERFECTLY. TELL ME MORE.

Gar: I'm thinking about what to tell ……………………………………………

C: WHAT DO YOU THINK ABOUT IT?

Gar: That's what I'm trying to think of in ……… just ……… another ……… minute …….

C: WHAT DO YOU THINK ABOUT IT?

Gar: Um ………………………………

C: HAVE YOU ALWAYS FELT THAT WAY?

Gar: (Pause) ……….

C: YES THAT'S TRUE BUT I'M STILL A LITTLE UNCLEAR.

Gar: I know what I want to say. (Pause) ……………………………….

C: MY OPINION ISN'T IMPORTANT, MACHINE DOWN

The "MACHINE DOWN" message appears to indicate that the program has crashed or shut itself down. This was the outcome of several of the runs Garfinkel and Korn did with LYRIC.

In another transcript, dated January 1969, Korn responds to LYRIC's questions by typing the lyrics to the Beatles' "Happiness is a Warm Gun." Eventually, LYRIC responds, "I'M SORRY. IT HAS BEEN INTERESTING BUT I MUST RETURN CONTROL TO MY SUPERVISOR. PLEASE SEE ME AGAIN. IT HAS BEEN VERY PLEASANT MEETING YOU."

Garfinkel was accomplishing several things by making trouble for the machine. Although he does not tell us in so many words what he was up to, he does leave telling clues. One particularly interesting clue comes in the form of a note that accompanies the printout of one of his interactions with LYRIC (February 14, 1969, Trial 6: 3). At the end of the transcript, Garfinkel wrote: "Remarks: Notice the difference between availability of 'docile texts' and texts available as a 'first linear time through' as contrasting phenomenal features of 'conversing' in man–machine conversations." In this short, dense passage, Garfinkel makes a distinction between the lived production of a transcript—the actual lived work of having a conversation when sitting down in front of a computer console—and the transcript-object produced as the outcome of that lived work after it has been produced.

The work of doing the interaction with LYRIC that is printed out as a transcript involves the entire assemblage of objects involved in its production: the keyboard, terminal, user interface, the user's typing of inputs, printer, etc. Importantly, it also involves procedures of meaning-making through the exchange of turns and the management of expectations regarding turn-pairs between user and machine. However, that situatedness and its procedures vanish after the interaction is done and made available for inspection as a "docile text"—the sort of text subsequently taken by Weizenbaum et al. as evidence of "delusion" on the part of users. This is a crucial point that highlights the work of accomplishing the practical circumstances (typically referred to as "context") of meaningful interaction in each particular case. Producing the "docile text" involved procedures for making sense of indexicality and other practical contingencies that are relevant for the actual "lived work", and Garfinkel is arguing that the lived work explains the experience of meaning and satisfaction being reported by the human user.

5.2 Garfinkel's "backstage" programming work with human–computer interaction

Garfinkel not only focused on the user-interface interaction, but also investigated the "backstage" work of human–computer interaction, i.e., the programming of the script. He had discussed programming issues with Weizenbaum's collaborators during his meetings with them and, together with William Korn at UCLA, engaged in writing programming scripts. We know from a report that Korn (1969) reworked the scripts that were running on LYRIC as part of his student assignment for Garfinkel's course. While Garfinkel had a version of ELIZA from MIT (with scripts written for him), Korn also worked with a script called TALK, originally created by Bruce M. Dane at UCLA, which was in many ways similar to the DOCTOR script.
It was actually based on two earlier scripts that were designed to do psychoanalysis: one an MIT version called YAP YAP, the other a copy of YAP YAP called COUCH. Korn (1969: 5) describes in his report that Dane, like Garfinkel, "was also interested in using the program conversationally, rather than in computer-aided instruction."

Garfinkel and Korn programmed alternate versions of TALK and called them GABBER, PERM01 and PERM02. In his "Description of Experimental Materials", Korn (1969) explains that GABBER was also "intended to 'do psychoanalysis'", and that he "further attempted to bring some more humor into the interaction" (p. 5) by producing a particular response ("Your what?") when the user typed "my" in a sentence. This produced rather perplexing conversations: "It was felt to be humorous on the grounds that it would be seen to be a 'ridiculous' response to a sentence such as 'I'm having problems with my mother'" (Korn 1969: 5). To add further externally introduced contingencies to the interactions—as in Garfinkel's early Yes–No experiments—the PERM01 and PERM02 scripts randomized the computer operation and its written responses to user input so that "one could not 'logically predict' how PERM01 and PERM02 would respond on the basis of how GABBER would respond" (p. 6). Garfinkel had already experimented with the ELIZA script at MIT, and together with Korn, was now exploring how EMCA insights could inform the practices of computer programming, and ultimately how these practices are intertwined with the sense-making work of human users.
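Korn's description is concrete enough to sketch. The following minimal Python illustration shows the two design moves he reports; it is our reconstruction, not the LYRIC scripts themselves, which we know only through Korn's report and archived printouts, and the function names and any canned replies beyond the quoted "Your what?" are our assumptions:

import random

def gabber(utterance: str) -> str:
    # Korn (1969) reports that GABBER answered "Your what?" whenever the
    # user's sentence contained "my" -- a deliberately "ridiculous" response.
    if "my" in utterance.lower().split():
        return "YOUR WHAT?"
    return "TELL ME MORE."

def perm(utterance: str) -> str:
    # PERM01/PERM02-style behavior: the reply is randomized, so that one
    # could not "logically predict" the response from the input.
    replies = ["YOUR WHAT?", "HAVE YOU ALWAYS FELT THAT WAY?",
               "PLEASE ELABORATE.", "I UNDERSTAND PERFECTLY. TELL ME MORE."]
    return random.choice(replies)

print(gabber("I'm having problems with my mother"))  # YOUR WHAT?
print(perm("I'm having problems with my mother"))    # unpredictable by design

The point of such a sketch is sociological rather than technical: whether the reply is keyword-driven or random, the human participant will still work to hear it as a motivated next turn.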
By 1973, when the conversation between Colby's PARRY and ELIZA at an international computer conference became famous (cf. Apprich 2019), Garfinkel and Korn had already attempted something similar at UCLA in 1969, albeit in a more rudimentary way: they used the evasive output phrases from ELIZA (or to be more precise, from the DOCTOR script), as well as from COUCH, as inputs for their own experimental scripts. Garfinkel's ethnomethodological studies were being undertaken, therefore, alongside work at AI's then leading edge.

6 The relevance of indexicality and et cetera for the sequential production of what is conventionally treated as "context" in human–computer interaction

Garfinkel was using his studies of ELIZA and LYRIC to build on his earlier "Yes–No" experiments and breaching studies, which go hand-in-hand with his examination of the way human participants made sense of ELIZA's utterances. Monitoring and building on the sequential organization of the human–machine interaction, they achieved—and continually renewed—the local situation as a meaningful interchange. In so doing, they revealed what is typically treated as the "problem of context," or in an EMCA perspective the sequentiality of cooperative interaction, as a situated accomplishment, specifying just how that accomplishment is sustained through ordinary social practices of making sense in interaction.

Garfinkel introduced these arguments in his discussions with Lorch, Quarton, and McGuire by elaborating how the ordinary, everyday practices of sense-making, such as "et cetera", "unless", and "let it pass," that he had identified in various settings (Garfinkel 1967), were as crucial for dealing with indexicality at the computer console as they were in social life more generally. In this sense, Garfinkel's research on ELIZA and LYRIC brings to the foreground the centrality of social uses of indexicality to the practical uses and sense being made of AI programs then and now.

Garfinkel's preoccupation with indexicality crystallized in his collaboration with Harvey Sacks (Garfinkel and Sacks 1970). Both argued that the conventional treatment of indexical expressions as requiring remedy is not only a mistake, but a mistake that has ironically generated many of the perennial problems that philosophers and linguists attribute to indexicality. According to Garfinkel, "let it pass" is a way of accommodating uncertainty until recognition of meaning is achieved across a series of turns (as we see in the ELIZA transcripts). "Et cetera" referred to the need for rules to change and have a certain degree of elasticity; otherwise endless loopholes and workarounds become inevitable (see also Thielmann and Sormani 2023).

Garfinkel points out that "no matter how specific the terms of common understandings may be […] they attain the status of an agreement for persons only insofar as the stipulated conditions carry along an unspoken but understood et cetera clause" (Garfinkel 1967: 73). This clause does not refer to another set of rules, instructions or propositions (cf. also Durkheim's argument for the discussion of noncontractual elements of social contract, Rawls 2021a, 2021b), but to the inevitability that any interaction is accompanied by, and draws upon, descriptions and accounts of social life that are "known-or-knowable-in-common-without-respect-for-the-requirement-of-specific-explication" (Garfinkel 1962: 6; see also Garfinkel et al. 1962). This feature establishes conditions for meaningful and recognizable courses of action, sequentially organized and maintained, and "establishes the agreement under a rule of trust" (cf. Garfinkel 1963).

As Garfinkel and Sacks (1970) argued, et cetera and indexicality have historically been treated as "problems" in scientific work (see also Sacks 1963: 10–13), which aims to remedy them by making explicit all operative presuppositions and rules of procedure. This also is the case in computational theory and practices that aim at specifying the categories and rules for a program precisely in advance—thus trying to remedy a vagueness that is actually required, and saddling them with a huge problem (cf. Rawls and Mann 2015). According to Garfinkel and Sacks (1970), any attempts to remedy the essential vagueness and indexicality of social life are invariably and unavoidably unachievable.
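The contrast can be put in deliberately schematic computational terms. The sketch below is entirely ours, not Garfinkel's, and assumes a toy dialogue loop: the first strategy treats every unforeseen input as an error to be remedied by ever more rules, while the second builds in a "let it pass" response that defers recognition of meaning to later turns, where the human participant will supply it:

def strict_reply(utterance: str, rules: dict[str, str]) -> str:
    # The "remedial" strategy: every admissible input must be specified
    # in advance; anything unforeseen is an error. However long the rule
    # list grows, it can always be further specified.
    return rules.get(utterance.lower(), "ERROR: UNRECOGNIZED INPUT")

def let_it_pass_reply(utterance: str, rules: dict[str, str]) -> str:
    # The "et cetera" strategy: unmatched input is not an error; a
    # content-free continuer passes the turn back and lets meaning be
    # settled across subsequent turns.
    return rules.get(utterance.lower(), "I SEE. PLEASE GO ON.")

rules = {"hello": "HELLO. WHAT'S ON YOUR MIND?"}
print(strict_reply("Uh ...", rules))        # ERROR: UNRECOGNIZED INPUT
print(let_it_pass_reply("Uh ...", rules))   # I SEE. PLEASE GO ON.

On Garfinkel's argument, the second strategy is not a trick for hiding failure, but a borrowing from ordinary social practice, where meaning is routinely allowed to settle across turns.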
Any list of specifications can be further specified, endlessly, as Wittgenstein (1953) showed for rule following. The crucial point made by Garfinkel and Sacks (1970), however, is that indexicality does not need to be fixed, but serves a foundational purpose in interaction. Making sense in interaction always rests on things that are not talked about or stated in so many words, things that are taken for granted, i.e., indexicality and et cetera work are essential features of sense-making (not nuisances or appendices to it; see also Eisenmann and Rawls 2023; Button et al. 2022, Chap. 1). Furthermore, et cetera is not a static condition, but operates as a feature of the sequential organization of interaction: it is "essentially bound to both the inner and the outer temporal course of activities and thereby to the progressive development of circumstances and their contingencies" (Garfinkel 1967: 73–74; see also Sacks 1992). Making sense in and of interaction, including interacting with ELIZA and similar programs, is reflexively tied to its "lived work" and its temporal parameters: the "first linear time through", or, in another of Garfinkel's phrasings, "another each next first time" (Garfinkel 2002: 216), and not to an already accomplished docile text, or a gloss of such a text.

Garfinkel's empirical findings are not only relevant for an externalist observational study of AI, but open avenues for a "hybrid study" of AI that not only investigates the practices of programmers, designers, and users, but contributes to the field of programming.24 The question of how indexicality might contribute to computing is still an underexplored discussion in computer science. The following passage from Philip Agre (who was in close contact and worked with Garfinkel in the 1980s and 1990s) makes the point that indexicality is "an active phenomenon of context construction" that needs to inform practices of information design:

It is also, as Garfinkel (1984 [1967]) would insist, an achievement that is only good enough for practical purposes; individuals may well have different views of the precise boundaries of the "there," "then," or "them" being referred to, but if the discrepancy causes no trouble, it will most likely pass unremarked. A model-theoretic account of this achievement, such as Barwise and Perry's, can posit the potential and actual referents of indexical terms by constructing the appropriate situations, but it cannot explain the actions by which particular people picked out those particular referents. It is only through study of the actual practices people employ to achieve reference in situ that indexicality begins to emerge not merely as a passive phenomenon of context dependence but as an active phenomenon of context construction. (Agre 1997: 233)

24 "Hybrid studies" (Garfinkel 2002: 100ff.) require researchers to be competent in the work being studied (computing in this case) in order to contribute not only to sociology, but also to the research field in question. Relatedly, ethnomethodological studies of work (Garfinkel 1986, 2022) have concerned themselves with research questions of interest to both practitioners of various worksites and sociologists. Their adequacy relies on detailed exploration of "members' methods", aiming for what Garfinkel (2002: 105ff.) calls "praxeological validity", i.e., that sociological descriptions are practically recognizable and instructable for and by the practitioners in question (see also Ikeya 2020; Eisenmann and Mitchell 2022; Meier zu Verl and Meyer 2022; Sormani 2020, 2023).

7 Conclusion

Weizenbaum took a critical stance toward his experiments with ELIZA, treating participants' ascriptions of intelligence to the machine as misconceptions or even evidence of delusional thinking, ultimately arguing for a correction of what was later described as the ELIZA effect. Garfinkel disagreed. Such arguments about misconceptions of the state of AI are still at the center of current discussions, as the case of Google's LaMDA discussed in the introduction clearly shows. The issue at stake concerns the role of the analyst and their methods and assumptions in relation to the orientation of actual participants in the interaction to what social objects count as "real," as in the case of witches in the discussion reported on above that took place between Goffman, Parsons, Rose, Garfinkel, and Sacks in 1964.

The EMCA perspective allows for a detailed understanding of the different ways users interact with AI and how sense-making in those interactions is grounded in the details of social practices and human social competencies. On this view, the meanings people produce in interaction with ELIZA are not illusions—they are social objects that emerge in and as the turn structure of these interactions. A non-ethnomethodologist might ask how subjects manage to make meaning in the absence of "genuine human–human reciprocity." But, for Garfinkel, this question assumes what it should be examining. Human participants cannot "read the minds" of other participants, be they machines, people, or spirits. The assumption of "human agency" as well as the idea that there is ever "genuine" reciprocity of minds is false and gets people off on the wrong track.25

25 This is known in philosophy as the "Problem of other minds", the idea being that it is not possible to have direct contact with or knowledge of other minds. Solutions have often resorted to positing a priori conditions. In this case, the problem also directly relates to the question of the Turing test (Turing 1950).
The question Garfinkel pursued is whether interaction between people (or people and machines) exhibits recognizable order properties that participants can use to make sense. If it does exhibit such order properties, and they can be made to accord with constitutive expectations in ways that participants are able to make sense of, then meaning has been achieved.

Users assume Trust Conditions, and orient toward their obligations to those conditions, until trouble arises. Trouble can come from either the human end of the interaction or from the machine. But, as long as the machine does not make it obvious that it has pre-coded scripts and that it is not orienting to Trust Conditions, the human rule is to assume Trust Conditions, and sense-making will continue until it fails. As in the "random Yes–No answer" experiments, users make sense of ELIZA's conduct in the usual way by treating it as motivated conduct, i.e., motivated by the action of the previous turn, and as part of a coherent sequence of turns (covered by Trust Conditions)—just as their own human responses are. What Garfinkel (1967: 94; 2019 [1959]: 26) writes of the "Yes–No experiments" applies to the interactions of ELIZA/LYRIC with its human users and beyond:

"Through the work of documenting—i.e., by searching for and determining pattern, by treating the advisor's answers as motivated by the intended sense of the question, by waiting for later answers to clarify the sense of previous ones, by finding answers to unasked questions […] the perceivedly normal values of what was being advised were established, tested, reviewed, retained, restored; in a word, managed."

In short, human users do their ordinary interactional work of making sense when communicating with machines, which creates the meaning they experience.

Garfinkel's experiments with human–computer interaction document the reliance of the machine on the human commitment to sense-making. His breaching scripts, if we can call them that, were designed to make trouble that reveals how the situation is produced in and as the interaction between the typing user and the machine, including the totality of its interface, programming, and physical components. It also shows how human users remain committed to making sense of interactional sequences in the face of trouble, and in doing so, demonstrates how AI systems constitutively rely upon users' sense-making work for their effective operation. Accordingly, Garfinkel's early studies reveal the cooperative practices standing at the core of media and technology (Schüttpelz 2023), allowing for the re-specification of things such as the "ELIZA effect", "confirmation bias", and "cognitive dissonance" in terms of the constitutive social practices of their human users—the methods through which these phenomena are ordinarily produced and made recognizable (cf. Rawls and Turowetz 2021; Turowetz and Rawls 2021).

In the decades since Garfinkel's pioneering work, research grounded in EMCA has focused on the conceptual issues of AI and social action (e.g., Button et al. 1995; Gilbert and Heath 1985; Suchman 1993) and also examined how EMCA findings can be implemented in the development of computer dialog systems (e.g., McIlvenny 1990; Raudaskoski 1990; Thomas 1991; Wooffitt 1994). More recently, as AI-based devices become entrenched in everyday life, EMCA researchers are conducting studies of robots, voice user interfaces, and embodied human-like agents. This research documents in detail how people adjust their conduct in interacting with machines, e.g., with so-called "conversational agents". Porcheron et al. (2018) argue that participants talk with voice user interfaces on the basis of "input" and "output" rather than "conversation" (see also Reeves and Porcheron 2022). On the other hand, even simple technical systems can produce recognizably meaningful exchanges (Relieu et al. 2020), and as Korbut (2023) argues, in some settings users treat chatbots as conversational partners. Ivarsson and Lindwall (2023) show that sequential and categorial analysis of ongoing talk is the basis of ascribing "intelligence" to a machine.

We have argued throughout this paper that Garfinkel's research on ELIZA and LYRIC, although conducted at a very different point in time, can provide insights relevant for current research on AI in interaction. Although many current AI systems are grounded in and build upon cognitivist and individualist models of the human mind, they are usually employed in thoroughly social situations. Questions of "competence" or "agency" are not located inside the machine, but rather in the cooperatively achieved orderliness of human sense-making.

The EMCA approach has also opened avenues for studying and contributing to design and software development by AI researchers and programmers, both in theoretical grounding and technology development (see, e.g., Alač 2009; Brooker and Mair 2022; Gehle et al. 2017; Krummheuer 2015; Mair et al. 2020; Pelikan et al. 2020; Saha et al. 2023), avenues that are being explored further. In his seminal study of computing, Agre (1997) demonstrated how the work of programmers already relies substantially on everyday understanding, as well as on philosophical theories of mind, but that this reliance remains largely tacit. Such tacit assumptions are respecified in EMCA as topics of empirical investigation.26

26 These tacit assumptions also pertain to social inequalities, race, disability, and the treatment of marginalized persons, which has implications for thinking about 'bias' in AI training datasets, as well as for gendered and racialized exclusions from the computing community in terms of the practical constitution of technology (cf., e.g., Turner 2018; regarding the tacit and interactional dimensions, see Rawls and Duck 2020).

Agre and his colleagues, including David Chapman, argued that AI development had hit a wall in the late 1980s and 1990s. Garfinkel's study of ELIZA/LYRIC allows for the re-specification of core concepts like categories, information objects, and computing, and calls for a research approach that takes the necessity of indexicality and sequentiality more seriously, allowing for its application in contemporary AI computing in ways that might just address the problems Agre and others were pointing to.

Acknowledgements We thank all participants of the ELIZA Workshop held at University of Siegen on the 5th of November 2021 (https://www.mediacoop.uni-siegen.de/de/veranstaltungen/workshop-b04-garfinkel-meets-eliza-mit-hendrikbender/) for their valuable comments, especially Marcus Burkhardt and Magnus Rust for their insights also during ongoing discussions on the topic. Many thanks are also due for the insightful comments by the participants of the "Gipfeli Meeting" held by Bettina Heintz and Andrea Glauser at the University of Lucerne (25th–26th of November 2022), who had read an earlier version of this article. Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—Project-ID 262513311—SFB 1187, Collaborative Research Centre: "Media of Cooperation". Jakub Mlynář's work on the article was enabled by the support of the Czech Fulbright Commission (Fulbright Grant for Scholars and Researchers). We would also like to thank two anonymous reviewers for very detailed and thorough editing suggestions that contributed significantly to the improvement of this article. Any remaining shortcomings are ours.

Funding Open Access funding enabled and organized by Projekt DEAL.

Data availability Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

Declarations

Conflict of interest The authors declare that they have no conflict of interest.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References
Agre PE (1997) Computation and human experience. Cambridge University Press, Cambridge
Alač M (2009) Moving android: on social robots and body-in-interaction. Soc Stud Sci 39(4):491–528
Alač M (2016) Social robots: Things or agents? AI Soc 31:519–535
Alač M, Gluzman Y, Aflatoun T, Bari A, Jing B, Mozqueda G (2020) Talking to a toaster: how everyday interactions with digital voice assistants resist a return to the individual. Evental Aesth 9(1):3–53
Apprich C (2019) Die Maschine auf der Couch. Oder: Was ist schon ‹künstlich› an Künstlicher Intelligenz? ZfM 21(2):20–28
Baranovska M, Höltgen S (2018) Hello I'm Eliza. Fünfzig Jahre Gespräche mit Computern. Projektverlag, Bochum/Freiburg
Basset C (2019) The computational therapeutic: exploring Weizenbaum's ELIZA as a history of the present. AI Soc 24:803–812
Boden MA (1977) Artificial intelligence and natural man. Basic Books, New York
Brooker P, Mair M (2022) Researching algorithms and artificial intelligence. In: Housley W, Edwards A, Beneito-Montagut R, Fitzgerald R (eds) The SAGE handbook of digital society. Sage, Thousand Oaks, pp 573–592
Button G, Sharrock W (1995) On simulacrums of conversation: toward a clarification of the relevance of conversation analysis for human–computer interaction. In: Thomas PJ (ed) The social and interactional dimensions of human–computer interfaces. Cambridge University Press, Cambridge, pp 107–125
Button G, Coulter J, Lee JRE, Sharrock W (1995) Computers, minds and conduct. Polity Press, Cambridge
Button G, Lynch M, Sharrock W (2022) Ethnomethodology, conversation analysis and constructive analysis: on formal structures of practical action. Routledge, Oxon
Chomsky N (2006 [1968]) Language and mind, 3rd edn. Cambridge University Press, Cambridge
Colby KM, Watt JB, Gilbert JP (1966) A computer method of psychotherapy. J Nerv Ment Dis 142(2):148–152
Collins H (2018) Artifictional intelligence: against humanity's surrender to computers. Polity, London
Crabtree A (2004) Taking technomethodology seriously: hybrid change in the ethnomethodology-design relationship. Eur J Inf Syst 13(3):195–209
Dillon S (2020) The Eliza effect and its dangers: from demystification to gender critique. J Cult Res 24(1):1–15
Dourish P, Button G (1998) On "technomethodology": foundational relationships between ethnomethodology and system design. Hum Comput Interact 13:395–432
Dreyfus H (1965) Alchemy and artificial intelligence. The RAND Corporation, Santa Monica
Dreyfus H (1972) What computers can't do. MIT Press, New York
Duck W, Rawls AW (2023) Black and Jewish: "Double Consciousness" inspired a qualitative interactional approach that centers race, marginality, and justice. Qual Sociol 46:163–198. https://doi.org/10.1007/s11133-023-09535-9
Eisenmann C (2022) Spiritualität als soziale Praxis—Zur Konstruktion von Wirklichkeit im Yoga (Spirituality as a social practice—the production of reality in yoga). De Gruyter Oldenbourg, Berlin/Boston
Eisenmann C, Mitchell R (2022) Doing ethnomethodological ethnography. Moving between autoethnography and the phenomenon in "hybrid studies" of Taiji, ballet, and yoga. Qual Res. https://doi.org/10.1177/14687941221132956
Eisenmann C, Rawls AW (2023) The continuity of Garfinkel's approach: seeking ways of 'making the phenomenon available again' through the experience and usefulness of 'trouble.' In: Sormani P, vom Lehn D (eds) The Anthem companion to Harold Garfinkel. Anthem Press, New York, pp 19–42. https://doi.org/10.2307/jj.4418210.6
Eisenmann C, Englert K, Ehler V, Schubert C (eds) (2023) Varieties of cooperation. Mutually making the conditions of mutual making. Springer, Wiesbaden
Garfinkel H (1948) A further note on the prospectus for an exploratory study of communication and the modes of understanding in selected types of dyadic relationship, with particular reference to the Jew as an object of social treatment. 30 October 1948. Garfinkel Archive, Newburyport, Massachusetts
Garfinkel H (1963) A conception of, and experiments with, "trust" as a condition of stable concerted actions. In: Harvey OJ (ed) Motivation and social interaction: cognitive determinants. Ronald Press, New York, pp 187–238
Garfinkel H (1967) Studies in ethnomethodology. Prentice Hall, Englewood Cliffs
Garfinkel H (ed) (1986) Ethnomethodological studies of work. Routledge & Kegan Paul, London
Garfinkel H (2002) Ethnomethodology's program: working out Durkheim's aphorism (edited by Rawls AW). Rowman & Littlefield, London
Garfinkel H (2022) Studies of work in the sciences (edited by Lynch M). Routledge, New York
Garfinkel H, Sacks H (1970) On formal structures of practical actions. In: McKinney JD, Tiryakian EA (eds) Theoretical sociology. Appleton-Century Crofts, New York, pp 337–366
Garfinkel H, Rose E, Sacks H, MacAndrew C, Mendlovitz S (1962) Ethnomethodology conference (audio recordings), Reel #4 of 10, Side B, February 16, 1962. Garfinkel Archive, Newburyport, Massachusetts
Garfinkel H (1946) Some reflections on action theory and the theory of social systems. Unpublished manuscript. Garfinkel Archive, Newburyport, Massachusetts
Garfinkel H (1952) The perception of the other: a study in social order. Unpublished PhD thesis. Harvard University, Cambridge
Garfinkel H (1962) (approximately) On Et Cetera (outline). Unpublished manuscript. Garfinkel Archive, Newburyport, Massachusetts
Garfinkel H (1966) Formal investigation of decision-making in actual, singular situations of practical action (Air Force grant proposal). Unpublished manuscript. Garfinkel Archive, Newburyport, Massachusetts
Garfinkel H (2006 [1948]) Seeing sociologically (edited by Rawls AW). Paradigm, Boulder
Garfinkel H (2008 [1952]) Toward a sociological theory of information (edited by Rawls AW). Paradigm Publishers, Boulder
Garfinkel H (2019 [1959]) Common sense knowledge of social structures. A paper distributed at the session on the sociology of knowledge (edited by C Erbacher and E Schüttpelz), 4th world congress of sociology, Stresa, Italy, September 12, 1959. https://doi.org/10.25969/mediarep/13805
Gehle R, Pitsch K, Dankert T, Wrede S (2017) How to open an interaction between robot and museum visitor? Strategies to establish a focused encounter in HRI. In: Proceedings of the 2017 ACM/IEEE international conference on human–robot interaction, pp 187–195
Gilbert NG, Heath C (eds) (1985) Social action and artificial intelligence. Gower, Aldershot
Hayward PL (1968) ELIZA scriptwriter's manual: a manual for the use of the ELIZA conversational computer system. MIT, Cambridge
Heath C, Luff P (2022) Technology in practice. In: Maynard DW, Heritage J (eds) The ethnomethodology program: legacies and prospects. Oxford University Press, New York, pp 398–419
Heintz B (1993) Die Herrschaft der Regel: Zur Grundlagengeschichte des Computers. Campus Verlag, Frankfurt
Hofstadter D (1995) Fluid concepts and creative analogies: computer models of the fundamental mechanisms of thought. Basic Books, New York
Ikeya N (2020) Hybridity of hybrid studies of work: examination of informing practitioners in practice. Ethnogr Stud 17:22–40
Ivarsson J, Lindwall O (2023) Suspicious minds: the problem of trust and conversational agents. Comput Supp Cooper Work (CSCW) 32:545–571. https://doi.org/10.1007/s10606-023-09465-8
Korbut A (2023) How conversational are "conversational agents"? Evidence from the study of users' interaction with a service telephone chatbot. Soc Interact Video Based Stud Hum Soc. https://doi.org/10.7146/si.v6i1.137249
Korn W (1969) (approximately) Description of experimental materials. Unpublished manuscript. Garfinkel Archive, Newburyport, Massachusetts
Krummheuer A (2015) Technical agency in practice: the enactment of artefacts as conversation partners, actants and opponents. Psychnol J 13(2–3):179–202
Kuhn T (1970) The structure of scientific revolutions. University of Chicago Press, Chicago
Laurier E, Krämer H, Gerst D, Salomon R (2019) The "Studies in Ethnomethodology" are a way of understanding and handling empirical materials and thoughts. Forum Qual Sozialforschung/Forum Qual Soc Res 20(2):Art. 2
Lecerf Y (1963) Logique mathématique—machines de Turing réversibles. C R Hebd Seances Acad Sci 257:2597–2600
Licklider JCR (1960) Man–computer symbiosis. IRE Trans Hum Fact Electron HFE-1:4–11
Lynch M (1993) Scientific practice and ordinary action: ethnomethodology and social studies of science. Cambridge University Press, New York
Lynch M, Eisenmann C (2022) Transposing Gestalt phenomena from visual fields to practical and interactional work: Garfinkel's and Sacks' social praxeology. Philos Sci Travaux D'histoire Et De Philosophie Des Sci 26(3):95–122
Mair M, Brooker P, Dutton W, Sormani P (2020) Just what are we doing when we're describing AI? Harvey Sacks, the commentator machine, and the descriptive politics of the new artificial intelligence. Qual Res 21(3):341–359
McDermott D (1976) Artificial intelligence meets natural stupidity. SIGART Newsletter 57, April 1976. Reprinted in: Haugeland J (ed) (1981) Mind design. Bradford Books, Montgomery, VT
McIlvenny P (1990) Communicative action and computers: re-embodying conversation analysis? In: Luff P, Gilbert NG, Frohlich D (eds) Computers and conversation. Academic Press, London, pp 91–132
Meier zu Verl C, Meyer C (2022) Ethnomethodological ethnography: historical, conceptual, and methodological foundations. Qual Res. https://doi.org/10.1177/14687941221129798
Meyer C (2013) New alterities and emerging cultures of social interaction. Global Cooperation Research Papers 3. Käte Hamburger Kolleg / Centre for Global Cooperation Research, Duisburg
Minsky M (1967) Computation: finite and infinite machines. Prentice Hall, Englewood Cliffs
Mlynář J (2023) Harold Garfinkel and Edward Rose in the early years of ethnomethodology. J Hist Behav Sci 59(2):171–192. https://doi.org/10.1002/jhbs.22237
Mooers C (1951) Zatocoding applied to mechanical organization of knowledge. Am Doc 2:20–32
Natale S (2019) If software is narrative: Joseph Weizenbaum, artificial intelligence and the biographies of ELIZA. New Media Soc 21(3):712–728
Natale S (2021) Deceitful media: artificial intelligence and social life after the Turing test. Oxford University Press, New York
Oldman D, Drucker C (1985) The non-reducability of ethno-methods: can people and computers form a society? In: Gilbert GN, Heath C (eds) Social actions and artificial intelligence. Gower Publishing, Aldershot, pp 144–159
Papert S (1968) The artificial intelligence of Hubert Dreyfus: a budget of fallacies. MIT, Cambridge
Parsons T (1937) The structure of social action. McGraw Hill, New York
Pelikan HRM, Broth M, Keevallik L (2020) Are you sad, Cozmo? How humans make sense of a home robot's emotion displays. In: HRI'20: proceedings of the 2020 ACM/IEEE international conference on human–robot interaction. ACM, Cambridge, pp 461–470
Porcheron M, Reeves S, Sharples S, Fischer JE (2018) Voice interfaces in everyday life. In: CHI'18: proceedings of the 2018 CHI conference on human factors in computing systems. ACM, New York, Paper 640
Pruijt H (2006) Social interaction with computers: an interpretation of Weizenbaum's ELIZA and her heritage. Soc Sci Comput Rev 24(4):516–523
Putnam H (1975) The meaning of "meaning". University of Minnesota Press, Minneapolis
Quarton G (1967) Deliberate efforts to control human behavior and modify personality. Daedalus 96(3):837–853
Raudaskoski P (1990) Repair work in human–computer interaction: a conversation analytic perspective. In: Luff P, Gilbert NG, Frohlich D (eds) Computers and conversation. Academic Press, London, pp 151–171
Rawls AW (2021a) Durkheim's self-regulating "constitutive" practices: an unexplored critical relevance to racial justice, consensus thinking, and the COVID-19 pandemic. In: Marucci N (ed) Durkheim & critique. Palgrave Macmillan, Cham, pp 227–263
Rawls AW (2021b) The structure of social facts: self, objects and action as products of reciprocity and cooperation in constitutive practices. Mauss Int 1(1):186–206
Rawls AW, Duck W (2020) Tacit racism. University of Chicago Press, Chicago
Rawls AW, Mann D (2015) Getting information systems to interact: the social fact character of 'object' clarity as a factor in designing information systems. Inf Soc 31(2):175–192
Rawls AW, Turowetz J (2021) Garfinkel's politics: collaborating with Parsons to document taken-for-granted practices for assembling cultural objects and their grounding in implicit social contract. Am Sociol 52(1):131–158
Rawls AW, Mann D, Garcia A, David G, Burton M (2009) Simple enumerations: ethnomethodology and MITRE information assurance data standards (Italian title: "Semplici enumerazioni. L'etnometodologia e gli Information Assurance Data Standards del MITRE"). Ethnografia et Ricerca Qualitativa 2(1):79–108
Rawls AW, Turowetz J, Eisenmann C, Hoey E (2020) On meaning: the meaning of a particular social fact—"Suicide"—as discussed by Parsons, Garfinkel, Goffman and Sacks in 1964. Paper delivered at the Yale Center for Cultural Sociology
Reeves S, Porcheron M (2022) Conversational AI: respecifying participation as regulation. In: Housley W, Edwards A, Beneito-Montagut R, Fitzgerald R (eds) The Sage handbook of digital society. Sage, London, pp 573–592
Relieu M, Sahin M, Francillon A (2020) Une approche configurationnelle des leurres conversationnels. Réseaux 220–221(2):81–111
Rose E (1967) A looking-glass conversation in the rare languages of Sez and Pique. University of Colorado, Boulder
Sacks H (1963) Sociological description. Berkeley J Sociol 8:1–16
Sacks H (1992) Lectures on conversation, vol I & II. Blackwell, Oxford
Sacks H, Schegloff EA, Jefferson G (1974) A simplest systematics for the organization of turn-taking for conversation. Language 50(4):696–735
Sacks H (1968) "Understanding" in conversation. Talk at the University of California Santa Barbara (UCSB). Unpublished transcript. Harold Garfinkel Archive, Newburyport, MA
Saha D, Brooker P, Mair M, Reeves S (2023) Thinking like a machine: Alan Turing, computation and the praxeological foundations of AI. Sci Technol Stud 2023:1
Schegloff EA (1992) Repair after next turn: the last structurally provided defense of intersubjectivity in conversation. Am J Sociol 97(5):1295–1345
Schegloff EA, Jefferson G, Sacks H (1977) The preference for self-correction in the organization of repair in conversation. Language 53(2):361–382
Schüttpelz E (2017) Infrastructural media and public media. Med Action 2017:13–61. https://doi.org/10.25819/ubsi/7935
Schüttpelz E (2023) Reinventing the wheel of media theory. In: Eisenmann C, Englert K, Ehler V, Schubert C (eds) Varieties of cooperation. Mutually making the conditions of mutual making. Springer, Wiesbaden
Shapira N, Zwirn G, Goldberg Y (2023) How well do large language models perform on faux pas tests? Find Assoc Comput Ling 2023:10438–10451
Shieber SM (1994) Lessons learned from a restricted Turing test. Commun ACM 37(6):70–78
Simon H (1969) The sciences of the artificial. MIT, Cambridge
Smith MB, Bruner JS, White RW (1956) Opinions and personality. Wiley, New York
Sormani P (2020) "DIY AI"? Practising kit assembly, locating critical inquiry. Ethnogr Stud 17:60–80
Sormani P (2023) Interfacing AlphaGo: embodied play, object agency, and algorithmic drama. Soc Stud Sci 53(5):686–711. https://doi.org/10.1177/03063127231191284
Suchman L (1987) Plans and situated action: the problem of human–machine communication. Cambridge University Press, Cambridge
Suchman L (1993) Response to Vera and Simon's situated action: a symbolic interpretation. Cogn Sci 17:71–75
Suchman L (2007) Human–machine reconfigurations: plans and situated actions. Cambridge University Press, Cambridge
Suchman L (2023) Imaginaries of omniscience: automating intelligence in the US Department of Defense. Soc Stud Sci 53(5):761–786
Svensson H (2020) Establishing shared knowledge in political meetings: repairing and correcting in public. Routledge, London
Thielmann T (2012) Taking into account. ZFM 6(1):85–102
Thielmann T (2019) The ENIAC display: insignia of a digital praxeology. In: Haigh T (ed) Exploring the early digital. Springer, New York, pp 101–116
Thielmann T, Sormani P (2023) Zatocoding remodeled? Harold Garfinkel's hybrid studies avant la lettre. Int J Dig Hum 2023:1
Thomas PJ (1991) Language, communication, social interaction and the design of human–computer interfaces. Behav Inf Technol 10(4):311–324
Thomas WI, Thomas DS (1928) The child in America: behavior problems and programs. Knopf, New York
Tiku N (2022a) Google fired engineer who said its AI was sentient (07/22/2022). The Washington Post: https://www.washingtonpost.com/technology/2022/07/22/google-ai-lamda-blake-lemoine-fired/. Accessed 24 Oct 2022
Tiku N (2022b) The Google engineer who thinks the company's AI has come to life (06/11/2022). The Washington Post: https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/. Accessed 24 Oct 2022
Turing AM (1950) Computing machinery and intelligence. Mind 59(236):433–460. https://doi.org/10.1093/mind/LIX.236.433
Turner LN (2018) Detecting racial bias in algorithms and machine learning. J Inf Commun Ethics Soc 16(3):252–260. https://doi.org/10.1108/JICES-06-2018-0056
Turowetz J, Rawls AW (2021) The development of Garfinkel's "Trust" argument from 1947 to 1967: demonstrating how inequality disrupts sense and self-making. J Class Sociol 21(1):3–37
Watson R (1998) Ethnomethodology, consciousness and self. J Conscious Stud 5(2):202–223
Watson R (2009) Constitutive practices and Garfinkel's notion of trust: revisited. J Class Sociol 9(4):475–499
Weizenbaum J (1966) ELIZA—a computer program for the study of natural language communication between man and machine. Commun ACM 9(1):36–45
Weizenbaum J (1967) Contextual understanding by computers. Commun ACM 10(8):474–480
Weizenbaum J (1976) Computer power and human reason: from judgment to calculation. WH Freeman & Co, San Francisco
Wittgenstein L (1953) Philosophical investigations. Blackwell, Oxford
Wooffitt R (1994) Applying sociology: conversation analysis in the study of human–(simulated) computer interaction. Bull Sociol Methodol/Bull Méthodol Sociol 43(1):7–33
Ziewitz M (2017) A not quite random walk: experimenting with the ethnomethods of the algorithm. Big Data Soc 4(2):1–13. https://doi.org/10.1177/2053951717738105
Znaniecki F (1936) Social actions. Farrar & Rinehart, New York

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.