As artificial intelligence (AI) becomes increasingly integrated into critical domains—such as hiring,
law enforcement, and healthcare—concerns about algorithmic bias have intensified. Algorithmic
bias refers to the systematic and repeatable errors in AI outputs that unfairly privilege or
disadvantage particular groups, often reflecting and amplifying existing social inequalities (Barocas
& Selbst, 2016).
One of the most frequently cited examples is the COMPAS system, used in the U.S. criminal justice system to assess recidivism risk. A 2016 ProPublica investigation found that the algorithm disproportionately labeled Black defendants as high-risk compared to white defendants with similar records (Angwin et al., 2016). Such outcomes challenge assumptions about AI neutrality and expose the human biases embedded in training data and design choices.
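The disparity ProPublica reported is, at its core, a gap in error rates: defendants who did not go on to reoffend were flagged as high-risk at different rates across groups. The following is a minimal sketch of that comparison in Python, using hypothetical records rather than COMPAS data; the field names are illustrative, not drawn from any real dataset.

```python
def false_positive_rate(records: list[dict]) -> float:
    """Share of people who did NOT reoffend but were labeled high-risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    if not non_reoffenders:
        return 0.0
    flagged = sum(1 for r in non_reoffenders if r["labeled_high_risk"])
    return flagged / len(non_reoffenders)

# Hypothetical scored records: group membership, model label, observed outcome.
records = [
    {"group": "A", "labeled_high_risk": True,  "reoffended": False},
    {"group": "A", "labeled_high_risk": True,  "reoffended": True},
    {"group": "A", "labeled_high_risk": False, "reoffended": False},
    {"group": "B", "labeled_high_risk": False, "reoffended": False},
    {"group": "B", "labeled_high_risk": True,  "reoffended": True},
    {"group": "B", "labeled_high_risk": False, "reoffended": False},
]

for group in ("A", "B"):
    subset = [r for r in records if r["group"] == group]
    print(f"Group {group} false positive rate: {false_positive_rate(subset):.2f}")
```

A gap between the two printed rates is precisely the kind of outcome-level disparity that can persist even when a model never sees a protected attribute directly.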
Ethically, algorithmic bias raises questions about fairness, accountability, and transparency. Scholars such as Mittelstadt et al. (2016) argue that current legal frameworks are ill-equipped to address AI-driven discrimination, especially when decision-making processes are opaque ("black box" models). Calls for algorithmic audits, diverse datasets, and interdisciplinary oversight are growing, but implementation remains limited.
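One concrete form such an audit can take is a disparate impact check, echoing the legal doctrine Barocas and Selbst (2016) discuss: compare the rate of favorable decisions across groups and flag ratios below the four-fifths (0.8) threshold used by U.S. employment regulators. The sketch below uses hypothetical decision data; the function names and the two groups are illustrative assumptions, not part of any standard library.

```python
def selection_rate(decisions: list[bool]) -> float:
    """Fraction of favorable decisions (e.g., 'hire', 'low-risk')."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def disparate_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high if high > 0 else 1.0

# Hypothetical model decisions for two demographic groups.
group_a = [True, True, False, True]    # selection rate 0.75
group_b = [True, False, False, False]  # selection rate 0.25

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact under the four-fifths rule.")
```

A check like this is deliberately simple; real audits combine several such metrics, since no single fairness criterion captures every form of disparity.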
References
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica.
Barocas, S., & Selbst, A. D. (2016). Big data's disparate impact. California Law Review, 104(3), 671–732.
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2).