Bias and Discrimination
There is widespread ignorance among the general public regarding what AI actually is, its uses, the potential consequences of its development, why it is morally right or wrong to devote so much budget to its creation, and so on. Reducing a person to their rational dimension is a reductionism that overlooks their volitional and emotional dimensions. In the modern era, Kant appealed to the notion of human dignity as the value inherent in the person, which rules out any possibility of their being bought, replaced, or instrumentalized. Unlike objects, which have a price, a person has incalculable value simply by existing; it is persons who give value to things and to the universe itself.
Intelligence has been defined in many ways. Here we adopt the definition of the philosopher Burgos, because it emphasizes precisely what separates artificial from human intelligence: "(It is) the capacity of a person to go beyond themselves, transcending, accessing the world around them, understanding and possessing it immaterially." In other words, this conception holds that intelligence allows human beings to understand, know, and access reality and, in this sense, to possess it, with an emphasis on abstraction and the immateriality of knowledge.
As pointed out in the introduction, these biases refer to statistical, structural, cognitive, and social errors that produce ethically objectionable disadvantages: they lead to discriminatory outcomes against particular people or groups, or systematically benefit some over others. At issue is a probabilistic and statistical disparity generated by a computer algorithm that follows specific rules in order to make decisions. Statistics always contain errors, so rather than dwelling on that point, two questions arise. The first asks whether these errors are balanced across the different populations that make up the community; the second asks where the inequity in the statistical rules originated. The key is that automated systems do not learn statistical rules out of nowhere; the rules may carry the biases of their designers.
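To make the first question concrete, the following is a minimal sketch, in Python, of how one might check whether a system's errors are balanced across populations. The data, group labels, and decision outcomes are entirely hypothetical; the point is only the comparison of per-group error rates.

```python
# Minimal sketch: checking whether a decision system's errors are
# balanced across populations. All data and group labels are hypothetical.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical outcomes of an automated decision for two groups.
records = [
    ("group_a", 1, 1), ("group_a", 0, 1), ("group_a", 1, 1), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

rates = error_rates_by_group(records)
print(rates)  # {'group_a': 0.25, 'group_b': 0.5}
gap = max(rates.values()) - min(rates.values())
print(f"error-rate gap between groups: {gap:.2f}")  # the disparity the first question targets
```

A large gap between groups is precisely the kind of probabilistic disparity described above, and finding one leads directly to the second question: where did it come from?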
AI results have often been assumed to be more objective and neutral than those a person would reach, since they exclude, for example, feelings and emotions, and thus supposedly serve the needs of the target group better. Yet algorithmic systems are sometimes nothing more than "opinions written in code," in the words of the mathematician and data expert Cathy O'Neil. It is therefore important to recognize that these are not just algorithms or mathematical models; they have an impact on people's lives. We forget that it is human beings who develop and design this technology, which means that their biases may be transferred to AI, consciously or unconsciously. On this point, Coeckelbergh states: "Bias is often unintentional: it is common for developers, users, and others involved, such as company management, not to foresee discriminatory effects against certain groups or individuals." This leads us to conclude that if the initial variables and data used to train an AI carry prejudice, its results will be flawed no matter how good the algorithm is. When these algorithms are used in a social program, to assess whether a mega-project is viable in a territory where a certain group resides, or to decide whether a person deserves credit or a job, the decision does not depend on data alone; it involves an entire contextual framework that must be identified and analyzed, and that becomes an irreplaceable part of AI algorithm development.
The biases learned by AI are not isolated cases; they have been identified in a variety of settings. For example, the company Clearview AI promised to predict where a crime would be committed and to identify the perpetrator. It ceased operations in several countries, including Canada, when it became clear that it tended to flag non-Caucasian people as criminals. Simply having Latino or African American features, minorities in many of the territories where the system was used, implied a greater likelihood of being suspected of criminal behavior. Another case involves Amazon, which attempted to use an AI-based recruitment system. The system turned out to be biased against women: resumes for technical roles that included the words "woman" or "women" automatically received lower ratings. Amazon's approach was to train its recruitment tool on the keywords most common in the resumes of its top employees, but the tool had no capacity to understand the social context.
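The following sketch illustrates the mechanism described in the Amazon case; it is not Amazon's actual system. A hypothetical scorer learns keyword weights from the resumes of past "top employees". If that history is dominated by one group, words associated with other groups never accumulate weight, so resumes containing them are rated lower by construction.

```python
# Minimal sketch of the mechanism described above, NOT Amazon's actual
# system: a scorer learns keyword weights from the resumes of past
# "top employees". Training data and words are hypothetical.
from collections import Counter

top_employee_resumes = [  # historically skewed: one group dominates
    "java leadership chess club captain",
    "python leadership hackathon winner",
    "java python systems design",
]

def learn_keyword_weights(resumes):
    counts = Counter(word for r in resumes for word in r.split())
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

def score(resume, weights):
    # Words absent from the historical data contribute nothing, so any
    # resume that uses them is rated lower by construction.
    return sum(weights.get(word, 0.0) for word in resume.split())

weights = learn_keyword_weights(top_employee_resumes)
print(score("java leadership hackathon", weights))  # ~0.38
print(score("java women's hackathon", weights))     # ~0.23: "women's" never appeared
```

Nothing in the code mentions gender, yet the historical skew of the training set is enough to reproduce the discrimination; this is what it means for the tool to lack the capacity to understand social context.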
These examples show that algorithmic biases have increasingly significant repercussions. They do not affect only the ten or five hundred people who were not hired; they produce a broader exclusion of groups that are denied opportunities on the basis of something as irrelevant as ethnicity or gender rather than ability. We should not blindly trust an algorithm to make decisions without first ensuring that it has been analyzed and adheres to acceptable criteria, especially when it evaluates people whose lives could be significantly affected by partial or unreflective views. Some guidelines must therefore be established to lead us at least to a provisional solution.
The core problem of algorithmic bias is thought to reside in the data fed into the systems, the underrepresentation of some groups, and so on. A first solution, then, would be to introduce parity. Paradoxically, this raises the possibility that errors will appear just as consistently across all groups: not only will a few groups be affected, but anyone could be. This leads to the question: is that desirable? The answer does not lie in perfecting facial recognition or in setting a parity quota so that the amount of data entered for each group is acceptable; in the end, this does not guarantee the elimination of discrimination. The real solution is to return the focus to the person as a whole: to recognize that before us stands a valuable, dignified being who deserves to reach human maturity, to become the best version of themselves, and to respond to their calling.
Cognitive biases will not cease to shape how we observe the world and make decisions. Still, it is possible to reduce their impact, or even eradicate them, if before deciding who is worth more or who is better we reflect on what we have learned and on the beliefs we have acquired, and instead see the person. Algorithmic biases are nothing more than a reflection of a society divided by unjustified prejudices, one that has historically been more concerned with drawing distinctions than with building bridges, placing other interests above the person.
A further step is to recognize that algorithms cannot be treated as universal or permanent. They must be reviewed constantly against solid foundations, such as dignity and respect for difference, that allow us to navigate the new horizon of AI. When a flaw is identified, the first step should be to rely on parity measures and algorithmic audits; but in the face of repeated inconsistencies, the algorithms must be discarded and new ones created. It is the difference of the other that will break down the biases and prejudices we have so far accepted as immovable.
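As a closing illustration, here is a minimal sketch of the review-or-discard loop suggested above. The tolerance, the metric, and the return messages are hypothetical choices; the point is only that an audit must end either in acceptance or in discarding the algorithm and building a new one.

```python
# Minimal sketch of the review-or-discard loop suggested above. The
# tolerance, metric, and return messages are hypothetical choices.
AUDIT_TOLERANCE = 0.05  # maximum acceptable gap in error rates between groups

def audit(error_rates_by_group, tolerance=AUDIT_TOLERANCE):
    """error_rates_by_group: dict mapping group -> observed error rate."""
    gap = max(error_rates_by_group.values()) - min(error_rates_by_group.values())
    if gap <= tolerance:
        return "keep: disparities within tolerance, schedule the next review"
    return "discard: build a new algorithm and audit it before deployment"

# Two hypothetical review cycles.
print(audit({"group_a": 0.10, "group_b": 0.12}))  # keep
print(audit({"group_a": 0.08, "group_b": 0.31}))  # discard
```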