Private Accountability in the Age of Artificial Intelligence

I recently delivered a keynote address on Bias in Artificial Intelligence (AI) at the Chicago Chapter Meeting of the International Association of Outsourcing Professionals. For most of the senior executives who attended my talk, it wasn't a huge surprise to learn that we observe bias in AI. It was a surprise, though, to learn how extensive that bias can be, and the degree to which it can negatively influence outcomes in the lives of those who are affected.

During the keynote's Q&A session, senior leaders posed several questions about bias mitigation, algorithmic accountability, and transparency. On bias mitigation, I encouraged leaders to explore opportunities to develop bias literacy programs, strengthen data and AI governance, and harness the power of AI itself to recognize bias. I did not, however, articulate a clear framework for algorithmic accountability and transparency, so I set out to learn more about the issue.
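
To make "harnessing AI to recognize bias" a little more concrete, here is a minimal sketch (my own illustration, not from the keynote or from Katyal's paper) that compares a model's positive-outcome rates across groups. A large gap in selection rates is one simple signal that a decision process deserves a closer bias audit; the data and threshold logic below are assumptions for demonstration only.

    # Illustrative sketch: flag potential bias by comparing positive-outcome
    # rates across groups (a simple demographic-parity check).
    from collections import defaultdict

    def selection_rates(decisions, groups):
        """decisions: parallel list of 0/1 outcomes; groups: group labels."""
        totals, positives = defaultdict(int), defaultdict(int)
        for d, g in zip(decisions, groups):
            totals[g] += 1
            positives[g] += d
        return {g: positives[g] / totals[g] for g in totals}

    # Made-up outcomes for two groups, "a" and "b".
    rates = selection_rates([1, 0, 1, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
    parity_gap = max(rates.values()) - min(rates.values())
    print(rates, parity_gap)  # a large gap is a signal to audit, not proof of bias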

In a recent paper titled Private Accountability in the Age of Artificial Intelligence, Sonia K. Katyal, Haas Distinguished Chair at the University of California, Berkeley, and Co-director of the Berkeley Center for Law and Technology, explored the issue. She argued that AI Codes of Conduct, Human Impact Statements in Algorithmic Decision-Making, and Whistleblower Protections can strengthen algorithmic accountability and transparency while respecting the intellectual property rights of private entities (Katyal, 2019).

AI Codes of Conduct

A code of conduct is a written framework that serves as a guide for human behavior and decision-making. Many organizations have a workplace code of conduct, but few set forth a standard governing the creation and use of AI. Since the presence of a workplace code of conduct can help reduce occurrences of wrongdoing, it's reasonable to expect that an AI code of conduct could have a similar effect.

Katyal (2019) pointed to the Association for Computing Machinery's (ACM) seven principles of Algorithmic Transparency and Accountability, which can be used in developing such a standard (a short sketch after the list shows how a few of these principles might look in practice):

  1. "Awareness of biases in design, implementation, and use";
  2. "Access and redress mechanisms to allow individuals to question and address adverse effects of algorithmically informed decisions";
  3. "Accountability, ensuring that individuals are held responsible for decisions made by algorithms that they use";
  4. "An explanation regarding both the procedures that the algorithm follows as well as the specific decisions that are made";
  5. "Data provenance, meaning a description of the way that the training data was collected, along with 'an exploration of the potential biases induced by the human or algorithmic data-gathering process'";
  6. "Auditability, enabling models, algorithms, data and decisions to be recorded for audit purposes";
  7. "Validation and testing, ensuring the use of rigorous models to avoid discriminatory harm" (Katyal, 2019, p. 109).

The presence of an AI code of conduct, much like a workplace code of conduct, will not completely eliminate unethical behavior or negative outcomes. However, it can provide guardrails that reduce the risk of introducing bias into AI.

Human Impact Statements in Algorithmic Decision-Making

An impact statement is a written declaration of the extent to which a set of actions affects something or someone. Borrowing from environmental impact statements, Katyal (2019) introduced human impact statements in algorithmic decision-making. In contrast to environmental impact statements, which focus on the extent to which a set of actions impacts the environment, human impact statements describe the extent to which algorithmic decisions impact people. Key elements of the proposal include:

  • "The adoption of a substantive, rather than procedural, commitment to both algorithmic accountability and anti-discrimination";
  • "The employment of a structure, similar to the GDPR, which relies upon a clear division between the controller (who is responsible for compliance) and the programmer (who is responsible for the algorithm and data processing)";
  • "Thorough examination (and structural division), both ex ante and ex post, of both the algorithm and the training data that it is employed to refine the algorithm" (Katyal, 2019, p. 115-116).

Even if you don't intend to introduce bias into AI, it's entirely possible that your work promotes it. Human impact statements force a deeper awareness of how algorithms make decisions, and of the extent to which those decisions may adversely affect people.
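
To illustrate what such a statement might capture, here is a hypothetical template loosely modeled on the elements quoted above, including the controller/programmer division and ex ante / ex post examination. The structure and every field name are my own assumptions, not a format Katyal prescribes.

    # A hypothetical human impact statement template, loosely modeled on
    # Katyal's (2019) elements; the structure itself is an assumption.
    from dataclasses import dataclass

    @dataclass
    class HumanImpactStatement:
        system_name: str
        controller: str        # responsible for compliance (GDPR-style division)
        programmer: str        # responsible for the algorithm and data processing
        affected_groups: list  # whom the algorithm's decisions touch
        ex_ante_review: str    # examination before deployment
        ex_post_review: str    # examination after deployment
        anti_discrimination_findings: str = "pending"

    # Example instance with made-up values.
    statement = HumanImpactStatement(
        system_name="resume-screening-model",
        controller="Chief Compliance Officer",
        programmer="ML Engineering",
        affected_groups=["job applicants"],
        ex_ante_review="training data audited for representativeness",
        ex_post_review="quarterly disparity review of screening outcomes",
    )
    print(statement)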

Whistleblower Protections

Almost everyone has heard of a whistleblower: a person who exposes illegal or unethical behavior. There are three main reasons that whistleblower protections are relevant in the context of algorithmic accountability and transparency:

  • Intellectual property rights serve as an obstacle to disclosure;
  • Governments turn to private entities for governing activities;
  • Private entities largely self-regulate (Katyal, 2019, p. 128).

Where whistleblower protections are weak, whistleblowers may be discouraged from reporting unethical activity. Through enhanced protections, particularly those supporting intra-organizational disclosures, private entities can resolve legitimate ethical and legal matters with minimal financial or reputational risk, while also quickly clarifying misunderstandings as they arise (Katyal, 2019).

Conclusion

In this article, I summarized my key takeaways from Katyal's Private Accountability in the Age of Artificial Intelligence. In the paper, Katyal argued that AI Codes of Conduct, Human Impact Statements in Algorithmic Decision-Making, and Whistleblower Protections can help strengthen algorithmic accountability and transparency, while respecting the intellectual property rights of private entities.

If you are interested in the issues presented here, I highly encourage you to read Katyal's paper. It's one of the best papers I've read on algorithmic accountability and transparency. This article summarizes my key takeaways, but it simply cannot give the paper the detailed attention it deserves.

References

Katyal, S. K. (2019). Private accountability in the age of artificial intelligence. UCLA Law Review, 66(1), 54–144.
