
Revised Transcription

The document presents an interview with Abeba Birhane, a Ph.D. student studying AI ethics. Birhane discusses her background in embodied cognitive science and how her interests evolved toward the ethics of AI and its unequal impacts. She explains her work on relational ethics, which centers the people disproportionately impacted by AI and treats ethics as a matter of relationships between groups rather than individual fairness formulations.

Intelligent Verbatim by Brigette L. Domingo
Duration: 17 minutes
[00:00:00.282] - Sam Charrington
All right, everyone. I am on the line with Abeba Birhane. Abeba is a Ph.D. student at University
College Dublin. Abeba, welcome to the TWIML A.I. podcast.

[00:00:12.552] - Abeba Birhane


Thank you so much for having me, Sam.

[00:00:15.522] - Sam Charrington


I'm really excited about this conversation. We had an opportunity to meet in person after a long
while of interacting on Twitter, at the most recent NeurIPS conference, in particular the Black in A.I. Workshop, where you not only presented your paper, Algorithmic Injustices: Towards a Relational Ethics, but also won best paper there. I'm looking forward to digging into that and
some other topics.

Before we do that, I would love to hear you share a little bit about your background. I will mention, for folks hearing the sirens in the background: while I said that you are from University College Dublin, you happen to be in New York right now at the AIES conference, held in association with AAAI. As folks might know, it's hard to avoid sirens and construction in New York City. Just consider that our ambience, our background sounds. So, your background?

[00:01:24.982] - Abeba Birhane


Yes, yes.

[00:01:25.942] - Sam Charrington


How did you get started working in A.I. ethics?

[00:01:28.482] - Abeba Birhane


My background is in cognitive science, particularly a part of cognitive science called Embodied Cognitive Science, which has its roots in cybernetics and systems thinking. The idea is to focus on the social, the cultural, and the historical, and to view cognition in continuity with the world, with historical backgrounds, and all that, as opposed to the traditional approach to cognition, which treats cognition as something located in the brain, something formalizable, something that can be computed.

Yes, that is my background. Even during my master's, I leaned towards the A.I. side of cognitive science. The more I delved into it, the more I was attracted to the ethics side, to injustices, to the social issues. The more the Ph.D. goes on, the more I find myself on the ethics side.

[00:02:49.942] - Sam Charrington


Was there a particular point when you realized you were really excited about the ethics part in particular, or did it just evolve for you?

[00:02:59.212] - Abeba Birhane
I think it just evolved. When I started out, at the end of my master's and at the start of the Ph.D., my idea was that we have this relatively new school of thinking, which is Embodied CogSci, which I like very much because it emphasizes ambiguities and messiness and contingencies as opposed to drawing clean boundaries. I like the idea of redefining cognition as something relational, something inherently social, and something that is continually impacted and influenced by other people and the technologies we use. The technology aspect, the technology end, was my interest.

Initially, the idea was that technology constitutes an aspect of our cognition. You have the famous 1998 thesis by Andy Clark and Dave Chalmers, The Extended Mind, where they claimed the iPhone is an extension of your mind; you can think of it that way. I was kind of advancing the same line of thought. The more I delved into it, the more I saw digital technology, whether it's ubiquitous computing such as face recognition systems on the street, or your phone, whatever.

Yes, it does impact, and it does continually shape and reshape, our cognition and what it means to exist in the world. What became more and more clear to me is that not everybody is impacted equally. The more privileged you are, the more in control you are of what can influence you and what you can avoid. That's where I became more and more involved with the ethics of computation and its impact on cognition.

[00:05:23.422] - Sam Charrington


The notion of privilege is something that flows throughout the work that you presented at Black
in A.I., the Algorithmic Injustices paper, and this idea, this construct of relational ethics. What is
relational ethics and what are you getting at with it?

[00:05:45.202] - Abeba Birhane


Relational ethics is actually not a new thing. A lot of people have theorized about it and have written about it. The way I'm approaching it, the way I'm using it, I guess, springs from this frustration that for many folks who talk about ethics or fairness or justice, most of it comes down to constructing a neat formulation of fairness or a mathematical calculation of who should be included and who should be excluded, what kind of data we need, that sort of stuff. For me, relational ethics is, "Let's leave that for a little bit, and let's zoom out and see the bigger picture." Instead of using technology to solve the problems that emerged from technology itself, which means centering technology, let's instead center the people, especially people who are disproportionately impacted by the limitations or the problems that arise with the development and implementation of technology.

There is robust research where you can quantify fairness or algorithmic injustice. The pattern is that the more you are at the bottom of the intersectional level, meaning the farther you are from the stereotypical white, cis-gendered domain, the bigger the negative impacts are on you, whether it's classification or categorization, or whether it's being scored for and by hiring algorithms, or looking for housing, or anything like that. The more you move away from that stereotypical category, the status quo, the heavier the impact is on you. The idea
of relationality is to think from that perspective, to take that as a starting point. These are the
groups or these are the…
