
Why Algorithms Remain Unjust: Power Structures Surrounding Algorithmic Activity

A Preprint

Andrew Balch
Department of Computer Science
University of Virginia
Charlottesville, Virginia 22903
[email protected]

arXiv:2405.18461v1 [cs.CY] 28 May 2024

May 28, 2024

Abstract
Algorithms play an increasingly significant role in our social lives. Unfortunately, they often perpetuate
social injustices while doing so. The popular means of addressing these algorithmic injustices has
been through algorithmic reformism: fine-tuning the algorithm itself to be more fair, accountable,
and transparent. While commendable, the emerging discipline of critical algorithm studies shows
that reformist approaches have failed to curtail algorithmic injustice because they ignore the power
structure surrounding algorithms. Heeding calls from critical algorithm studies to analyze this power
structure, I employ a framework developed by Erik Olin Wright to examine the configuration of power
surrounding Algorithmic Activity: the ways in which algorithms are researched, developed, trained,
and deployed within society. I argue that the reason Algorithmic Activity is unequal, undemocratic,
and unsustainable is that the power structure shaping it is one of economic empowerment rather than
social empowerment. For Algorithmic Activity to be socially just, we need to transform this power
configuration to empower the people at the other end of an algorithm. To this end, I explore Wright’s
symbiotic, interstitial, and ruptural transformations in the context of Algorithmic Activity, as well as
how they may be applied in a hypothetical research project that uses algorithms to address a social
issue. I conclude with my vision for socially just Algorithmic Activity, asking that future work strive
to integrate the proposed transformations and develop new mechanisms for social empowerment.

1 Introduction

With generative artificial intelligence (AI) squarely at the forefront of public consciousness, it seems more pertinent
now than ever before to discuss the role of algorithms in society, how they perpetuate injustice, and most importantly,
why all ameliorative efforts have arguably failed. It is no longer deniable that algorithms shape the ways we see
and act upon the social world, often encoding existing systems of inequality [7, 10, 13–15, 28]. The overwhelming
response from the industry (researchers, developers, and organizations) to these glaring issues has been characterized by
algorithmic reformism. Polack [43] summarizes algorithmic reformism as the methods “whereby calculated evaluations
and critiques of algorithm logic motivate its redesign without changing the underlying problems and design requirements
it is supposed to satisfy". Algorithmic reformism discussions center the algorithm itself, asking how its development
and use can be more ethical, fair, and human-centered in the hopes of relieving algorithmic bias [5, 27]. In practice,
algorithmic reformism usually involves setting ethical development and use guidelines, attempting to explain decisions,
establishing human oversight, assessing algorithm bias with mathematical metrics, and adjusting outcomes to more
evenly distribute an algorithm’s impact across social groups.
The reformist approach has been widely criticized: guidelines are soft and frequently undermined, transparency is
limited by technological advancement and proprietary designs, human-centered design often ignores implicit biases and
norms, and distributive justice-based fairness is local in scope [12, 20, 25, 35, 43, 44]. At the root of all these criticisms
is the recognition that traditional algorithmic reformism ignores larger power structures and social dynamics surrounding
algorithms themselves. It is evident that algorithmic reformism, while important, has been largely unsuccessful. Critical
algorithm studies have arisen in response to the persistence of algorithmic injustice, pointing out the failings of
reform-only approaches and arguing for perspectives that look around the algorithm, taking structural injustices and
systems of power into account [12, 14, 20, 22, 24–26, 31, 35, 43, 44, 46]. My aim in this paper is to diagnose the root
cause of these algorithmic injustices by providing an analysis of the power configuration surrounding algorithms in
society.
My analysis herein relies on Erik Wright’s discussion of real utopias [48]. In “Transforming Capitalism through Real
Utopias", Wright argues that “many forms of human suffering and many deficits in human flourishing are the result
of existing institutions and social structures” and that, by transforming these structures, we can reduce suffering and
approach flourishing [48]. This transformation starts by identifying “moral principles for judging social institutions",
using them to criticize existing institutions, exploring viable alternatives, and finally, proposing transformations “for
realizing those alternatives" [48]. Wright conducts this process in the context of Economic Activity: the exchange of
goods and services. I focus on employing the tools Wright provides to judge what I call Algorithmic Activity, diagnose
the power structure that perpetuates unjust Algorithmic Activity, and explore transformative solutions. In doing so, I
aim to deepen our understanding of how algorithms reproduce injustice.
The structure of my paper is as follows: First, I define Algorithmic Activity and demonstrate how it does not meet
Wright’s standards of equality, democracy, and sustainability for social justice. Second, I provide a diagram of the
power configuration surrounding Algorithmic Activity, illustrating how it is dominated by economic power (as opposed
to state or social power). Third, I summarize Wright’s transformative strategies for social empowerment and place them
in the context of Algorithmic Activity. Fourth, I walk through the different roles these strategies may play within a
hypothetical research project.

2 Algorithmic Activity, Society, and Injustice

I define Algorithmic Activity as the ways in which algorithms are researched, developed, trained, and deployed within
society. In this way, Algorithmic Activity looks beyond the impact of individual algorithms, the sole concern of
algorithmic reformism. Building on calls from critical algorithm theorists, this definition considers the actors that
shape Algorithmic Activity itself, and thereby encompasses the powers surrounding the existence and behavior of
algorithms. Furthermore, I define the term “algorithm" as any automatic process by which an input is systematically
and intentionally mapped to an output. Normative ideals about proximity, relation, and the realm of possible end results
are built into an algorithm and act as pre-defined constraints upon the algorithm itself, as well as any interaction with it
[6, 40, 44]. In other words, the developers of an algorithm hold implicit social norms and biases that are imbued into
the algorithm. These norms and biases ultimately shape the algorithm’s output and, consequently, the individual lives
in which its decisions play a role.
The system surrounding Algorithmic Activity, as we know it now, does not fulfill Wright’s moral principles of equality,
democracy, or sustainability [48]. Algorithmic Activity is unequal in the sense that it perpetuates biases against
vulnerable social groups [2, 3, 15, 38, 39, 47]. It is undemocratic in the sense that those impacted by algorithmic
decisions often know nothing about the algorithm itself, or even that it exists [35, 41]. Lastly, it is unsustainable in
the sense that algorithms, as they are, do not ensure equal or greater access to social, economic, and environmental
resources for future generations [11, 18, 20, 32, 42, 49].
When an algorithm is deployed in society, it has the capacity to both automatically shape the social world (through its
results and decisions), and to be shaped by it (through data and normative design). Therefore, Algorithmic Activity
may be better understood as a uniquely modern social institution. In sociology, an institution provides a normative
framework for moving through the world. Institutions are self-reproducing, creating a constant feedback loop where
institutions shape individuals and vice versa.
So, what makes Algorithmic Activity distinct from traditional institutions such as language, the family, the state,
a school, or a corporation? The answer is threefold: opacity, efficiency, and minimal human intervention. The
opaque “black box" is a popular characterization of an algorithm, communicating the inability of the general public to
understand its inner workings. The opacity of Algorithmic Activity does not stop “under the hood" of the algorithm
itself. Algorithmic Activity plays a role in key life decisions each and every day, often without our knowledge [33,
41]. The efficiency of Algorithmic Activity is much more obvious, so much so that it is an algorithm’s main selling
point. Encoded, systematic processes are quicker to formulate and execute than the more ambiguous nature of other
institutions, which are characterized by slow transformations in the social consciousness or the rational crawl of the
bureaucratic process. However, this efficiency comes at a great social, economic, and environmental cost. The social
cost is perhaps the most concerning: algorithms are enabled to act back on society with minimal human intervention.
No matter how opaque a bureaucratic institution may be, there is always an individual moving the cogs of the machine


Figure 1: A utopic power configuration where Social Power dominates Algorithmic Activity and all other powers are
subordinated to it.

and facilitating its impact on society. When algorithms are deployed to inform or make decisions, there is often no
human actively influencing the ways in which the algorithm produces its results. While an algorithm’s designers may
claim a responsible, human-centered development approach, this does not guarantee a human-centered implementation
in the real world [20].
It is evident that Algorithmic Activity does not just violate the moral principles of equality, democracy, and sustainability.
It perpetuates these injustices in ways that are more invisible, efficient, and fundamentally disconnected from humanity
than ever before. To have any hope of transforming this pattern, we must analyze the social power configuration
that surrounds and maintains Algorithmic Activity.

3 Configuration of Power in Algorithmic Activity

In his paper, Wright [48] provides a visual vocabulary to conceptualize how the three types of power (State, Economic,
and Social) interact and exert control over economic activity. State power is rooted in the creation and enforcement
of rules, economic power is rooted in “the use of economic resources", and social power is “rooted in the capacity to
mobilize people for cooperative, voluntary collective actions" [48]. Wright argues that it is not as simple as any one
power having complete control over economic activity. Instead, the three powers coexist in a hybrid configuration that
can be more, or less, capitalist, statist, or socialist.
For reference, Figure 1 represents Wright’s configuration of social empowerment [48]. The interaction between different
forms of power, and ultimate control over Algorithmic Activity, is shown by the arrows. An arrow’s boldness indicates
the strength and autonomy of the power exerted along its path, where bolder is primary and thinner is secondary or
subordinate to a dominant power. This power configuration promotes Wright’s principles of a just society because
Algorithmic Activity would be “controlled through the exercise of social power", ensuring that neither economic nor
state power impede the best interests of society [48].
Figure 2 is the result of my analysis of the configuration of power surrounding Algorithmic Activity. This diagram
demonstrates just how far we are from socially just Algorithmic Activity. Economic Empowerment is the dominant
force in this power configuration, exerting control over not only Algorithmic Activity, but social and state power as well.
As a means of validating this analysis, I will describe how each form of power controls Algorithmic Activity,
illustrating how the influence of other powers shapes the nature of this control.


Figure 2: The modern configuration of power around Algorithmic Activity is one of Economic Empowerment.

Economic Power (fig. 2a)

It is evident that economic actors with the most resources, such as Big Tech (Alphabet/Google, Amazon, Apple,
Meta/Facebook, and Microsoft), can fund bigger research projects, hire the best people, own the most powerful
hardware, and acquire any smaller company or project. In other words, the means of algorithmic production are
effectively owned by Big Tech, which affords these companies ultimate control over Algorithmic Activity, especially in
academia [1, 21]. Even the means of responsible production are owned by Big Tech. It has been shown that the standards
for responsible machine learning (ML) development, largely influenced by Big Tech, are difficult for less cash-rich
organizations to meet, leaving any progress in this area subject to the whim of Big Tech [23].

State Power upon Economic Power (fig. 2b)

Only state power has a clear influence on the exercise of economic power over Algorithmic Activity. State actors can
regulate the economy and pass bills that provide funding for projects.

State Power (fig. 2c)

State actors have recently expressed an increased interest in Algorithmic Activity. There is the potential for the state to
step in and set policies surrounding the development and use of more powerful algorithms, mandate the creation of
registries for models and training datasets, and conduct audits of sensitive systems. The European Union (EU) has been
at the forefront with its Digital Services Act (DSA), which aims to provide protections via “algorithmic transparency
and accountability", and its AI Act, which establishes “obligations for providers and users depending on the level of risk
from artificial intelligence" [16, 17]. The full impact of these legislative initiatives has yet to be seen, as the DSA only
recently came into effect in February 2024, and the AI Act, adopted in March 2024, is not yet fully applicable.
Data protection regulations like the EU’s General Data Protection Regulation (GDPR) are also an important exercise of
state power.


Economic Power upon State Power (fig. 2d)


The primary reason for the strength of this interaction is the underlying structure of the capitalist state in countries
such as the United States (U.S.) [48]. Because the state depends on the capitalist economic substructure, the dynamic
in Figure 2c is, in some ways, subordinate to economic power. This manifests through actions such as lobbying for
government grants, looser regulations, or more relaxed policies surrounding Algorithmic Activity.

Social Power upon State Power (fig. 2e)


Politics is the central means by which social power mediates state power in a democratic society [48]. People participate
in politics and form parties to shape the behavior of the state.

Social Power (fig. 2f)

Social power’s control over Algorithmic Activity originates from the data generated by society as well as the development
of free and open-source software. The data we create by moving through the world is the main avenue by which social
power influences the design of algorithms. Algorithms are explicitly (as in ML) or implicitly (as in sorting methods)
designed around data, so they consequently encode various social norms, values, concepts, and relations that are latent
in the data itself [6, 40, 44]. The other social activity that directly furthers Algorithmic Activity is the creation of
free and open-source software. In theory, open-source software is a form of nonhierarchical cooperative Algorithmic
Activity where individual social actors organize to create algorithms that are not directly profit-motivated.

Economic Power upon Social Power (fig. 2g)


Social power’s control over Algorithmic Activity is subordinate to economic power. As in Figure 2d, this is because the
underlying substructure of U.S. society is capitalism and economic power owns the means of algorithmic production (fig.
2a). More precisely, economic power controls the exercise of social power through conventional means such as the labor
market and political action (e.g. uncapped donations to super PACs). This occurs alongside more algorithm-specific
controls, such as extracting social data and shaping the social concept of the algorithm. The sale of the algorithm as a
product is the mechanism that allows these avenues of subordination. Economic actors sell us products that we use to
navigate the world around us, and that then extract our data as we do so (e.g. smartphones, social media, ChatGPT).
This directly undermines the social control over Algorithmic Activity through data (fig. 2f) for two main reasons. First,
we are not always in control or aware of the data we generate [41]. Second, there is an implicit bias surrounding whose
data is collected, who collects it, and who determines its “truth". Access to technology is stratified across socioeconomic
status [34, 50], data collection is conducted primarily by Western universities and corporations, and ground-truth labels
are determined by a relatively small portion of society. This same portion thus has a disproportionate
influence on algorithms that impact society at large. Control exerted by free and open-source software (fig. 2f) is also
facilitated by economic power. In practice, the most prominent software libraries for developing ML algorithms (e.g.
PyTorch, TensorFlow, and CUDA) are maintained or otherwise funded by Big Tech (Meta/Facebook, Google, and
Nvidia, respectively). Even when this software is freely given, the means of producing the most powerful, state-of-the-art
algorithms (e.g. GPT-4) remain far out of reach for individual social actors [50]. Proprietary hardware (Nvidia’s
CUDA-capable GPUs) is necessary to train algorithms like neural networks at scale, or sometimes even as one-offs.
This leads the individual interested in contributing to Algorithmic Activity back to Big Tech. There, they can purchase
computing capability à la carte via platforms like Amazon Web Services or Google Colab. The last and most potently
invisible force exerted by economic power is the ability to influence the social concept of the algorithm. Since economic
power is the dominant force in Algorithmic Activity, economic actors like Big Tech can shape the algorithm as a
concept through its design, use, and surrounding marketing. Through these mechanisms, AI has become a buzzword
associated with productive efficiency and rational, objective results, despite the fact that the reality of Algorithmic
Activity is anything but fact-driven [6].

State Power upon Social Power (fig. 2h)


State power has the potential to shape the social understanding of Algorithmic Activity through public education
initiatives, such as those proposed by a recent Virginia executive order [19].

It should now be evident how the power configuration surrounding Algorithmic Activity in Western, capitalist nations
such as the U.S. serves to empower the interests of economic actors (fig. 2) at the expense of the interests of society
(fig. 1). Algorithmic Activity remains unjust because it is a fundamentally profit-driven activity. Economic power is
interested in a just society only insofar as such a society rewards economic actors with more capital than the unjust
status quo. To make Algorithmic Activity more just, we must consider how the power configuration can be transformed
to be one of social empowerment.

4 Transforming Power for a More Just Society

Wright describes three main strategies for transformation through which an alternative power structure can be realized
[48]. Symbiotic transformations are those that work to extend and deepen “institutional forms of social empowerment",
often through the state [48]. Interstitial transformations seek out the edges of the existing system in which to “build
new forms of social empowerment" [48]. Lastly, ruptural transformations deconstruct the existing power structure and
reform it around “new, emancipatory institutions" [48]. These three transformative strategies are weak on their own,
and more effective in combination. Here, I provide an overview of proposed alternatives for conducting Algorithmic
Activity that would be socially-empowering, and how they may fit into Wright’s three transformations.

Symbiotic Transformations

Algorithmic reformism constitutes symbiotic transformations of Algorithmic Activity. I have already demonstrated
that the overall weakness of reformist initiatives is how they place the algorithm itself front and center, ignoring the
surrounding power configuration and associated systemic injustices. Simply put, reformism attempts to deepen social
empowerment without threatening the status-quo, and does so under the control of the dominant power. Take the most
potent tool in the reformist’s belt, algorithmic transparency, as an example. On the surface, making an algorithm’s
decisions ’transparent’ involves making its strengths and weaknesses evident to all stakeholders [27]. When achieved, it
can improve stakeholder trust, empowering them with a more comprehensive understanding of the model and how
it acts back upon them. Unfortunately, economic actors benefit from the opacity of the “black box". Transparency is
in direct opposition to the interests of the economic actor, for whom the protection of intellectual property (IP) is of the
utmost importance. Therefore, the inner workings of the most powerful, influential algorithms (e.g. GPT-4) are often
kept secret to protect IP [41]. But this secrecy serves a dual function. When an algorithm inevitably reproduces
social biases, companies, governments, and other organizations can hide behind the notion of the “rogue algorithm" and
claim a lack of understanding in its processes. By doing so, the owners of the algorithm can shirk responsibility for
its actions. The notion of the algorithmic “black box" is thus a great asset to economic power, and they can use their
dominance over Algorithmic Activity to cyclically create bigger, more opaque algorithms that have to be “explained".
Besides its local scope and reliance on Big Tech, reformism is a failure because there is a lack of institutional forms
of social empowerment to entrench. The state must first fully implement enforceable policies and regulations upon
Algorithmic Activity before symbiotic transformations can have any real impact. Beyond the pending EU regulations,
audit requirements for algorithmic decision support systems have been discussed, but these would need to be carefully
designed to promote social justice and would likely require state force to be effective [45]. Similarly, avenues for
recourse against algorithmic decisions can be provided [4]. This recourse may use other symbiotic ideals of fairness,
accountability, and transparency to identify a wrongdoing, then pursue action against it in the form of an appeal or
compensation. Again, effective recourse that empowers oppressed social groups relies on the goodwill of economic
power as well as state oversight. The state may also reach for the carrot instead of the stick, opting to pass funding bills
that incentivize economic actors to participate in Algorithmic Activity that is socially just. Symbiotic transformations
are important endeavors, but they are not enough to make for more just Algorithmic Activity on their own.

Interstitial Transformations

As the weaknesses surrounding algorithmic reformism are made more apparent and critical algorithm studies gain
traction, proposals for interstitial transformations have begun to surface. Some strategies use algorithms themselves to
analyze the structural biases in Algorithmic Activity [29], while others target data (fig. 2g), attempting to de-Westernize it and
“recognize nonmainstream ways of knowing the world through data" [36]. Alternatives to algorithmic reformism have
been proposed that situate reformist approaches in the context of the broader social problems they are trying to address,
evaluating them against non-technical social justice reforms [28, 37, 43]. Participatory or democratic design seeks to
actively involve communities and social groups in the development of algorithms that would impact them [8, 9, 35].
Confronting “ethics washing" in AI (the setting of ethics guidelines that fails to account for power structures), Resseguier
[44] calls for a “power to" ethics that emphasizes the “opening of possibilities" for social groups that have historically
been dominated and discriminated against. While it is not specific to Algorithmic Activity, collective bargaining may
also be an effective interstitial transformation, creating the missing interaction between social power and economic
power (fig. 2). Each of these methods would create new avenues for social empowerment by working within the power
structure itself.


Ruptural Transformations

A ruptural transformation in Algorithmic Activity could be as dramatic as breaking up the data and compute monopoly
that is Big Tech or redistributing technological resources. More nuanced reforms have also been suggested. Joyce et
al. [24] see the discipline of sociology taking a leading role in shaping algorithms. This envisioned future is rooted in
the ability of sociologists to “identify how inequalities are embedded in all aspects of society and to point toward
avenues for structural social change" [24]. In a similar vein, Green [20] argues that computer science often takes a
technology-centric, “greedy approach" to social reform that results in long-term harm. They call upon computer
scientists to interrogate their normative understanding of “social good", look to social thinkers and activists for more
viable ways to bring about positive social change, and to actively work against the assumption that an accurate algorithm
is the best solution to a problem. Lazer et al. [30] explore, among other things, how the university as an institution must
be reorganized to emphasize interdisciplinary collaboration and effective ethical guidelines to meet the social challenges
posed by the modern algorithm. Intersectional thought and practice have also inspired “algorithmic reparation", where
the implicit bias of an algorithm is leveraged to empower marginalized experiences [12].

5 Transformations in a Research Context

To solidify these ideas, let us explore a hypothetical scenario in Algorithmic Activity. My goal as we walk through
this scenario is to demonstrate how the aforementioned transformations may be applied in practice to yield a more
socially-empowered Algorithmic Activity. In choosing the scenario, I wished to deviate from more popular discussions
about recidivism prediction or credit scoring to show how the ideas presented in this paper can be applied to projects
that are less controversial, or perhaps even altruistically-motivated. With this said, consider a team of researchers that
wish to create an algorithm that determines how food should be distributed from a food bank. This algorithm has a clear
impact on society: it determines how much food a person gets from the food bank. The goal for our researchers is an
optimal algorithm, one whose decisions result in the most food given to the most people. Thus, they have a
fairly normative research question: Is the proposed algorithm more or less “optimal" than other algorithms? Let us
assume that the starting point for our researchers is an algorithm that manages to distribute all the day’s food to the
most people, but does so by giving out the least amount of food possible to everyone except for the last patron, who is
given the remainder of the food. Such a solution is clearly ineffective at helping people, and must be transformed to be
more egalitarian.
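To make the degenerate case concrete, the allocation described above can be sketched as follows. This is a minimal illustration only: the function name, the `min_ration` parameter, and the numbers are hypothetical, not drawn from any real food-bank system.

```python
# Sketch of the degenerate "optimal" allocation described above.
# Names, parameters, and quantities are illustrative assumptions.

def degenerate_allocation(total_food: int, patrons: int, min_ration: int = 1) -> list[int]:
    """Serve every patron (maximizing the number of people fed), but do so
    by giving the bare minimum to everyone except the last patron in line,
    who receives all of the remaining food."""
    shares = [min_ration] * (patrons - 1)
    shares.append(total_food - min_ration * (patrons - 1))
    return shares

shares = degenerate_allocation(100, 5)
print(shares)  # [1, 1, 1, 1, 96]
```

Every patron receives something, so the algorithm scores perfectly on the "most food to the most people" objective while producing a grossly unequal outcome, which is precisely why the objective itself must be questioned.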

Symbiotic Transformation

So, our researchers turn to symbiotic transformations to fix this problem. Their research question must be adjusted to
match this new strategy: Is the proposed algorithm more or less fair, accountable, and transparent than other algorithms?
Now, the system is adjusted such that it distributes food equitably to everyone, explains why a patron received the
amount they got, and offers them a fair avenue to dispute this amount. As a result, individuals are treated more fairly
and even have more say in how this Algorithmic Activity impacts them. However, there is still room for more social
empowerment.
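The symbiotically reformed system might look like the following sketch, again with purely illustrative names: the day's supply is split as evenly as possible, and each share is accompanied by an explanation that a patron could cite when disputing their amount.

```python
# Sketch of the symbiotically reformed allocation: equitable shares plus a
# transparency hook explaining each patron's amount. Function names and
# message wording are illustrative assumptions.

def equitable_allocation(total_food: int, patrons: int) -> list[int]:
    """Split the day's supply as evenly as integer shares allow."""
    base, remainder = divmod(total_food, patrons)
    # The first `remainder` patrons receive one extra unit so no food is wasted.
    return [base + (1 if i < remainder else 0) for i in range(patrons)]

def explain_share(share: int, total_food: int, patrons: int) -> str:
    """Tell the patron why they received this amount (a basis for disputes)."""
    return (f"You received {share} of {total_food} units because the supply "
            f"was divided as evenly as possible among {patrons} patrons.")

shares = equitable_allocation(100, 5)
print(shares)  # [20, 20, 20, 20, 20]
print(explain_share(shares[0], 100, 5))
```

Unlike the degenerate version, no patron's share depends on their position in line, and the explanation makes the allocation rule legible to those it affects.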

Interstitial Transformation

Applying an interstitial transformation requires looking beyond the algorithm itself to the social problem it is trying to
address. In this scenario, the social issue is undernutrition. Identifying the root social problem allows our researchers
to improve how it is addressed by their algorithm in a few ways: First, they can engage with social theories about
undernutrition and use them to inform potential solutions. Second, they can directly involve patrons of the food bank in
the development and use of their algorithm. Third, they can explore how food banks (and their algorithm, by association)
under- or over-serve different social groups. Fourth, and most importantly, our researchers can ask whether their
algorithm is better at combatting undernutrition than alternative avenues of food distribution. This research question is
distinct because of how it focuses on the social problem. It is therefore much more valuable than normative or symbiotic
questions alone because its answer guides future work in the direction that best addresses the social issue. This entire
process is an exercise in social empowerment because it directly confronts a social issue, engages those affected,
and diligently works to alleviate its impact through the most effective mechanisms available. Note that interstitial
transformations do not completely exclude the algorithm as a possible tool for fixing undernutrition; they simply ask
what alternative approaches could be more effective. Any of these alternatives may also be able to use algorithms as a
tool to increase their potency.


Ruptural Transformation

Sometimes, a social injustice is so persistent that the only solution is to depart from the structure that upholds it and
forge new mechanisms of confrontation. In our scenario, undernutrition in disenfranchised social groups may be
perpetuated by how funding and food donations are distributed to food banks themselves, by the material inequalities
that determine access to healthy food, or by larger institutions like industrial farming, which prioritizes sheer magnitude
over nutritional value. Power structures can even impede symbiotic and interstitial transformations. For example, our
team of researchers may be rushed to complete this particular project by some deadline set by a funding agency, a
publisher, or the organization that employs them. Such deadlines place an upper limit on their capacity to consider
symbiotic and interstitial transformations that would deepen social empowerment. In such a situation, it is of the utmost
importance for our team of researchers to follow the lead of a social theorist, who is best equipped to
deconstruct the power structure surrounding injustice and offer a viable alternative that empowers social actors. While
it may be possible to use the algorithm as a tool for this deconstruction, such as to analyze and highlight disparities in
food access, this should take a back seat to initiatives for genuine social change. And so, our research question becomes
the same as the social theorist’s: How are injustices embedded within society and what structural changes are necessary
to make the world more just?

6 Conclusion
The persistence of algorithmic injustice within society points to a deeper problem with how it has been addressed: algorithmic reformism alone. Inspired by Wright and proponents of critical algorithm studies, I offered a
diagnosis of the modern power configuration surrounding Algorithmic Activity and demonstrated how economic power
dominates the realm. To create socially just Algorithmic Activity, this configuration of economic empowerment must
be transformed to empower social actors instead. I described how this could be achieved through different symbiotic, interstitial, and ruptural transformations to Algorithmic Activity and provided an example of these transformations in practice.
My vision for Algorithmic Activity that is equitable, democratic, and sustainable has two criteria. First, Algorithmic
Activity must be fundamentally socially-driven. This means that algorithms are applied as tools when they can create a
more just society, and set aside when they cannot. Second, algorithmic problems must be seen as social problems. For
example, we must recognize that bias perpetuated by an algorithm’s decisions may not be fully remediable by adjusting
the algorithm. Sometimes the underlying social bias must be confronted in a deep and meaningful way first.
Such an alternative requires Wright’s symbiotic, interstitial, and ruptural transformations together. As a society, we
need meaningful legislative protections against unjust Algorithmic Activity, wielding reformist approaches as recourse
against economic power. We need the literacy to ask whether Algorithmic Activity actually addresses social issues, and the
space within the current structure to ask these questions. We need to break away from being sold greedy, technocratic
solutions and produce our own reparative alternatives, grounded in social theory. This all involves learning from the
failure of lone algorithmic reformism and working to build new tools and mechanisms for social empowerment. Future
work that strives for socially just Algorithmic Activity should implement the proposed transformations where they are
appropriate, innovate new strategies for deepening social empowerment, and continuously re-evaluate what it means to
be socially just in light of evolving social theories.

Acknowledgements
I would like to thank Professor Afsaneh Doryab, Professor Andreja Siliunas, and Natalie Bretton for their help and
support developing this paper.

References
[1] Mohamed Abdalla and Moustafa Abdalla. “The Grey Hoodie Project: Big Tobacco, Big Tech, and the Threat on
Academic Integrity”. In: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. AIES ’21.
New York, NY, USA: Association for Computing Machinery, July 2021, pp. 287–297. ISBN: 978-1-4503-8473-5.
DOI: 10.1145/3461702.3462563. URL: https://doi.org/10.1145/3461702.3462563 (visited on
04/21/2024).
[2] Abubakar Abid, Maheen Farooqi, and James Zou. “Persistent Anti-Muslim Bias in Large Language Models”. In:
Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. AIES ’21. New York, NY, USA:
Association for Computing Machinery, July 2021, pp. 298–306. ISBN: 978-1-4503-8473-5. DOI: 10.1145/
3461702.3462624. URL: https://doi.org/10.1145/3461702.3462624 (visited on 04/21/2024).

[3] Hammaad Adam et al. “Write It Like You See It: Detectable Differences in Clinical Notes by Race Lead to
Differential Model Recommendations”. In: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and
Society. AIES ’22. New York, NY, USA: Association for Computing Machinery, July 2022, pp. 7–21. ISBN:
978-1-4503-9247-1. DOI: 10.1145/3514094.3534203. URL: https://dl.acm.org/doi/10.1145/3514094.3534203 (visited on 04/21/2024).
[4] Ali Alkhatib and Michael Bernstein. “Street-Level Algorithms: A Theory at the Gaps Between Policy and
Decisions”. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. CHI ’19.
New York, NY, USA: Association for Computing Machinery, May 2019, pp. 1–13. ISBN: 978-1-4503-5970-2.
DOI: 10.1145/3290605.3300760. URL: https://dl.acm.org/doi/10.1145/3290605.3300760 (visited
on 04/09/2024).
[5] Solon Barocas, Moritz Hardt, and Arvind Narayanan. Fairness and Machine Learning: Limitations and Opportu-
nities. MIT Press, 2023.
[6] David Beer. “The social power of algorithms”. In: Information, Communication & Society 20.1 (Jan. 2017).
Publisher: Routledge, pp. 1–13. ISSN: 1369-118X. DOI: 10.1080/1369118X.2016.1216147. URL: https://doi.org/10.1080/1369118X.2016.1216147
(visited on 04/23/2024).
[7] Abeba Birhane. “Algorithmic injustice: a relational ethics approach”. en. In: Patterns 2.2 (Feb. 2021), p. 100205.
ISSN: 2666-3899. DOI: 10.1016/j.patter.2021.100205. URL: https://linkinghub.elsevier.com/retrieve/pii/S2666389921000155 (visited on 04/07/2024).
[8] Abeba Birhane et al. “Power to the People? Opportunities and Challenges for Participatory AI”. In: Proceedings
of the 2nd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization. EAAMO ’22.
New York, NY, USA: Association for Computing Machinery, Oct. 2022, pp. 1–8. ISBN: 978-1-4503-9477-2.
DOI: 10.1145/3551624.3555290. URL: https://dl.acm.org/doi/10.1145/3551624.3555290 (visited
on 04/23/2024).
[9] Elizabeth Bondi et al. “Envisioning Communities: A Participatory Approach Towards AI for Social Good”. In:
Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. AIES ’21. New York, NY, USA:
Association for Computing Machinery, July 2021, pp. 425–436. ISBN: 978-1-4503-8473-5. DOI: 10.1145/
3461702.3462612. URL: https://doi.org/10.1145/3461702.3462612 (visited on 04/21/2024).
[10] Jenna Burrell and Marion Fourcade. “The Society of Algorithms”. en. In: Annual Review of Sociology 47 (July 2021). Publisher: Annual Reviews, pp. 213–237. ISSN: 0360-0572, 1545-2115. DOI: 10.1146/annurev-soc-090820-020800. URL: https://www.annualreviews.org/content/journals/10.1146/annurev-soc-090820-020800 (visited on 04/23/2024).
[11] Jude Coleman. AI’s Climate Impact Goes beyond Its Emissions. en. Dec. 2023. URL: https://www.scientificamerican.com/article/ais-climate-impact-goes-beyond-its-emissions/ (visited on 05/12/2024).
[12] Jenny L. Davis, Apryl Williams, and Michael W. Yang. “Algorithmic reparation”. en. In: Big Data & Society
8.2 (July 2021). Publisher: SAGE Publications Ltd, p. 20539517211044808. ISSN: 2053-9517. DOI: 10.1177/
20539517211044808. URL: https://doi.org/10.1177/20539517211044808 (visited on 04/07/2024).
[13] Kate Devlin. “Power in AI: Inequality Within and Without the Algorithm”. In: The Handbook of Gender,
Communication, and Women’s Human Rights (2023). Publisher: Wiley Online Library, pp. 123–139. DOI:
10.1002/9781119800729.ch8.
[14] Benjamin Eidelson. “Patterned Inequality, Compounding Injustice, and Algorithmic Prediction”. In: American Journal of Law and Equality 1 (Sept. 2021), pp. 252–276. ISSN: 2694-5711. DOI:
10.1162/ajle_a_00017. URL: https://doi.org/10.1162/ajle_a_00017 (visited on 04/07/2024).
[15] Virginia Eubanks. Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press, 2018. URL: https://books.google.com/books?hl=en&lr=&id=pn4pDwAAQBAJ&oi=fnd&pg=PP10&dq=info:itLHzWffA-0J:scholar.google.com&ots=gFZNFgftui&sig=aLwhu3mTzydCfjg40xLNIvFm0lA (visited on 04/26/2024).
[16] European Centre for Algorithmic Transparency. European Centre for Algorithmic Transparency - European
Commission. en. Apr. 2024. URL: https://algorithmic-transparency.ec.europa.eu/index_en
(visited on 05/05/2024).
[17] European Parliament. EU AI Act: first regulation on artificial intelligence. en. Aug. 2023. URL: https://www.
europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-
artificial-intelligence (visited on 05/05/2024).
[18] Victor Galaz et al. “Artificial intelligence, systemic risks, and sustainability”. In: Technology in Society 67
(Nov. 2021), p. 101741. ISSN: 0160-791X. DOI: 10.1016/j.techsoc.2021.101741. URL: https://www.
sciencedirect.com/science/article/pii/S0160791X21002165 (visited on 04/11/2024).

[19] Governor of Virginia. Governor Glenn Youngkin Signs Executive Order on Artificial Intelligence. en. Government.
Jan. 2024. URL: https://www.governor.virginia.gov/newsroom/news-releases/2024/january/
name-1019979-en.html (visited on 05/06/2024).
[20] Ben Green. ““Good” isn’t good enough”. In: Proceedings of the AI for Social Good workshop at NeurIPS. Vol. 17. 2019. URL: https://aiforsocialgood.github.io/neurips2019/accepted/track3/pdfs/67_aisg_neurips2019.pdf (visited on 04/27/2024).
[21] Steve G. Hoffman. “Managing Ambiguities at the Edge of Knowledge: Research Strategy and Artificial Intelli-
gence Labs in an Era of Academic Capitalism”. en. In: Science, Technology, & Human Values 42.4 (July 2017).
Publisher: SAGE Publications Inc, pp. 703–740. ISSN: 0162-2439. DOI: 10.1177/0162243916687038. URL:
https://doi.org/10.1177/0162243916687038 (visited on 04/26/2024).
[22] Steve G. Hoffman et al. “Five Big Ideas About AI”. en. In: Contexts 21.3 (Aug. 2022). Publisher: SAGE
Publications, pp. 8–15. ISSN: 1536-5042. DOI: 10.1177/15365042221114975. URL: https://doi.org/10.
1177/15365042221114975 (visited on 04/11/2024).
[23] Aspen Hopkins and Serena Booth. “Machine Learning Practices Outside Big Tech: How Resource Constraints
Challenge Responsible Development”. In: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and
Society. AIES ’21. New York, NY, USA: Association for Computing Machinery, July 2021, pp. 134–145. ISBN:
978-1-4503-8473-5. DOI: 10.1145/3461702.3462527. URL: https://dl.acm.org/doi/10.1145/3461702.3462527 (visited on 04/21/2024).
[24] Kelly Joyce et al. “Toward a Sociology of Artificial Intelligence: A Call for Research on Inequalities and
Structural Change”. en. In: Socius 7 (Jan. 2021). Publisher: SAGE Publications, p. 2378023121999581. ISSN:
2378-0231. DOI: 10.1177/2378023121999581. URL: https://doi.org/10.1177/2378023121999581
(visited on 04/11/2024).
[25] Atoosa Kasirzadeh. “Algorithmic Fairness and Structural Injustice: Insights from Feminist Political Philosophy”.
In: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society. AIES ’22. New York, NY, USA:
Association for Computing Machinery, July 2022, pp. 349–356. ISBN: 978-1-4503-9247-1. DOI: 10.1145/
3514094.3534188. URL: https://dl.acm.org/doi/10.1145/3514094.3534188 (visited on 04/21/2024).
[26] Maximilian Kasy and Rediet Abebe. “Fairness, Equality, and Power in Algorithmic Decision-Making”. In:
Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. FAccT ’21. New York,
NY, USA: Association for Computing Machinery, Mar. 2021, pp. 576–586. ISBN: 978-1-4503-8309-7. DOI:
10.1145/3442188.3445919. URL: https://dl.acm.org/doi/10.1145/3442188.3445919 (visited on
04/07/2024).
[27] Davinder Kaur et al. “Trustworthy Artificial Intelligence: A Review”. In: ACM Computing Surveys 55.2 (Jan.
2022), 39:1–39:38. ISSN: 0360-0300. DOI: 10.1145/3491209. URL: https://dl.acm.org/doi/10.1145/
3491209 (visited on 04/23/2024).
[28] Nima Kordzadeh and Maryam Ghasemaghaei. “Algorithmic bias: review, synthesis, and future research directions”. In: European Journal of Information Systems 31.3 (May 2022). Publisher: Taylor & Francis, pp. 388–409. ISSN: 0960-085X. DOI: 10.1080/0960085X.
2021.1927212. URL: https://doi.org/10.1080/0960085X.2021.1927212 (visited on 04/07/2024).
[29] Jill A Kuhlberg, Irene Headen, and Ellis A Ballard. “Advancing Community Engaged Approaches to Identifying
Structural Drivers of Racial Bias in Health Diagnostic Algorithms”. en. In: Data for Black Lives (May 2020). URL:
https://d4bl.org/reports/89-advancing-community-engaged-approaches-to-identifying-structural-drivers-of-racial-bias-in-health-diagnostic-algorithms (visited on 05/05/2024).
[30] David M. J. Lazer et al. “Computational social science: Obstacles and opportunities”. In: Science 369.6507
(Aug. 2020). Publisher: American Association for the Advancement of Science, pp. 1060–1062. DOI: 10.1126/
science.aaz8170. URL: https://www.science.org/doi/10.1126/science.aaz8170 (visited on
04/26/2024).
[31] Francis Lee and Lotta Björklund Larsen. “How should we theorize algorithms? Five ideal types in analyzing
algorithmic normativities”. en. In: Big Data & Society 6.2 (July 2019). Publisher: SAGE Publications Ltd,
p. 2053951719867349. ISSN: 2053-9517. DOI: 10.1177/2053951719867349. URL: https://doi.org/10.
1177/2053951719867349 (visited on 04/23/2024).
[32] Pengfei Li et al. Making AI Less "Thirsty": Uncovering and Addressing the Secret Water Footprint of AI Models.
en. arXiv:2304.03271 [cs]. Oct. 2023. URL: http://arxiv.org/abs/2304.03271 (visited on 05/12/2024).
[33] Caitlin Lustig et al. “Algorithmic Authority: the Ethics, Politics, and Economics of Algorithms that Interpret,
Decide, and Manage”. In: Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in
Computing Systems. CHI EA ’16. New York, NY, USA: Association for Computing Machinery, May 2016,
pp. 1057–1062. ISBN: 978-1-4503-4082-3. DOI: 10.1145/2851581.2886426. URL: https://dl.acm.org/
doi/10.1145/2851581.2886426 (visited on 04/09/2024).

[34] Christoph Lutz. “Digital inequalities in the age of artificial intelligence and big data”. en. In: Human Behavior and
Emerging Technologies 1.2 (2019), pp. 141–
148. ISSN: 2578-1863. DOI: 10.1002/hbe2.140. URL: https://onlinelibrary.wiley.com/doi/abs/
10.1002/hbe2.140 (visited on 04/07/2024).
[35] Jonne Maas. “Machine learning and power relations”. en. In: AI & SOCIETY 38.4 (Aug. 2023), pp. 1493–1500.
ISSN: 1435-5655. DOI: 10.1007/s00146-022-01400-7. URL: https://doi.org/10.1007/s00146-022-
01400-7 (visited on 04/23/2024).
[36] Stefania Milan and Emiliano Treré. “Big Data from the South(s): Beyond Data Universalism”. en. In: Television
& New Media 20.4 (May 2019). Publisher: SAGE Publications, pp. 319–335. ISSN: 1527-4764. DOI: 10.1177/
1527476419837739. URL: https://doi.org/10.1177/1527476419837739 (visited on 04/26/2024).
[37] Shakir Mohamed, Marie-Therese Png, and William Isaac. “Decolonial AI: Decolonial Theory as Sociotechnical
Foresight in Artificial Intelligence”. en. In: Philosophy & Technology 33.4 (Dec. 2020), pp. 659–684. ISSN: 2210-
5441. DOI: 10.1007/s13347-020-00405-8. URL: https://doi.org/10.1007/s13347-020-00405-8
(visited on 04/26/2024).
[38] Safiya Umoja Noble. “Algorithms of Oppression: How Search Engines Reinforce Racism”. en. In: Algorithms
of Oppression. New York University Press, Feb. 2018. ISBN: 978-1-4798-3364-1. DOI: 10.18574/nyu/9781479833641.001.0001. URL: https://www.degruyter.com/document/doi/10.18574/nyu/9781479833641.001.0001/html (visited on 04/26/2024).
[39] Ziad Obermeyer et al. “Dissecting racial bias in an algorithm used to manage the health of populations”. eng.
In: Science (New York, N.Y.) 366.6464 (Oct. 2019), pp. 447–453. ISSN: 1095-9203. DOI: 10.1126/science.
aax2342.
[40] Davide Panagia. “On the Possibilities of a Political Theory of Algorithms”. en. In: Political Theory 49.1 (Feb.
2021). Publisher: SAGE Publications Inc, pp. 109–133. ISSN: 0090-5917. DOI: 10.1177/0090591720959853.
URL: https://doi.org/10.1177/0090591720959853 (visited on 04/09/2024).
[41] Frank Pasquale. “The Black Box Society: The Secret Algorithms that Control Money and Information”. In: Book
Gallery (Jan. 2015). URL: https://digitalcommons.law.umaryland.edu/books/96.
[42] David Patterson et al. Carbon Emissions and Large Neural Network Training. arXiv:2104.10350 [cs]. Apr. 2021.
DOI: 10.48550/arXiv.2104.10350. URL: http://arxiv.org/abs/2104.10350 (visited on 05/12/2024).
[43] Peter Polack. “Beyond algorithmic reformism: Forward engineering the designs of algorithmic systems”. en. In:
Big Data & Society 7.1 (Jan. 2020). Publisher: SAGE Publications Ltd, p. 2053951720913064. ISSN: 2053-9517.
DOI: 10.1177/2053951720913064. URL: https://doi.org/10.1177/2053951720913064 (visited on
04/27/2024).
[44] Anais Resseguier. “Power and inequalities: lifting the veil of ignorance in AI ethics”. eng. In: Handbook of
Critical Studies of Artificial Intelligence. Section: Handbook of Critical Studies of Artificial Intelligence. Edward
Elgar Publishing, Nov. 2023, pp. 402–412. ISBN: 978-1-80392-856-2. URL: https://www.elgaronline.com/
edcollchap-oa/book/9781803928562/book-part-9781803928562-43.xml (visited on 04/23/2024).
[45] Briana Vecchione, Karen Levy, and Solon Barocas. “Algorithmic Auditing and Social Justice: Lessons from
the History of Audit Studies”. In: Proceedings of the 1st ACM Conference on Equity and Access in Algorithms,
Mechanisms, and Optimization. EAAMO ’21. New York, NY, USA: Association for Computing Machinery, Nov.
2021, pp. 1–9. ISBN: 978-1-4503-8553-4. DOI: 10.1145/3465416.3483294. URL: https://dl.acm.org/
doi/10.1145/3465416.3483294 (visited on 04/11/2024).
[46] Rae Walker, Jess Dillard-Wright, and Favorite Iradukunda. “Algorithmic bias in artificial intelligence is a
problem—And the root issue is power”. In: Nursing Outlook 71.5 (Sept. 2023), p. 102023. ISSN: 0029-6554.
DOI: 10.1016/j.outlook.2023.102023. URL: https://www.sciencedirect.com/science/article/
pii/S0029655423001288 (visited on 04/23/2024).
[47] Robert Wolfe and Aylin Caliskan. “American == White in Multimodal Language-and-Image AI”. In: Proceedings
of the 2022 AAAI/ACM Conference on AI, Ethics, and Society. AIES ’22. New York, NY, USA: Association for
Computing Machinery, July 2022, pp. 800–812. ISBN: 978-1-4503-9247-1. DOI: 10.1145/3514094.3534136.
URL: https://dl.acm.org/doi/10.1145/3514094.3534136 (visited on 04/21/2024).
[48] Erik Olin Wright. “Transforming Capitalism through Real Utopias”. en. In: American Sociological Review 78.1
(Feb. 2013). Publisher: SAGE Publications Inc, pp. 1–25. ISSN: 0003-1224. DOI: 10.1177/0003122412468882.
URL: https://doi.org/10.1177/0003122412468882 (visited on 05/01/2024).
[49] Aimee van Wynsberghe. “Sustainable AI: AI for sustainability and the sustainability of AI”. en. In: AI and
Ethics 1.3 (Aug. 2021), pp. 213–218. ISSN: 2730-5961. DOI: 10.1007/s43681-021-00043-6. URL: https:
//doi.org/10.1007/s43681-021-00043-6 (visited on 05/01/2024).

[50] Peter K. Yu. “The Algorithmic Divide and Equality in the Age of Artificial Intelligence”. eng. In: Florida Law
Review 72.2 (2020), pp. 331–390. URL: https://heinonline.org/HOL/P?h=hein.journals/uflr72&i=330 (visited on 04/07/2024).

