Fairness Frameworks & Packages: An Overview of Some Useful Tools
Murat Durmus (CEO AISOMA)
LinkedIn: https://www.linkedin.com/in/ceosaisoma/
The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness in large-scale machine learning workflows. The library can be deployed in training and scoring workflows to measure biases in training data, evaluate fairness metrics for ML models, and detect statistically significant differences in model performance across different subgroups.
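LiFT itself exposes Scala/Spark APIs, but the kind of subgroup measurement it automates can be sketched in a few lines of PySpark. The snippet below is an illustrative stand-in, not LiFT's actual API; the column names and the demographic-parity gap are assumptions made for the example.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fairness-sketch").getOrCreate()

# Toy scored dataset: one row per example, with a protected attribute.
df = spark.createDataFrame(
    [("A", 1), ("A", 0), ("A", 1), ("B", 0), ("B", 0), ("B", 1)],
    ["group", "prediction"],
)

# Positive-prediction rate per subgroup.
rates = df.groupBy("group").agg(F.avg("prediction").alias("positive_rate"))
rates.show()

# Demographic-parity gap: difference between the max and min subgroup rates.
row = rates.agg((F.max("positive_rate") - F.min("positive_rate")).alias("dp_gap")).first()
print("demographic parity gap:", row["dp_gap"])
```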
Fairness Indicators is actively used internally in many of our products. We would love to partner with you to understand where it is most useful and where added functionality would be valuable. Please reach out at [email protected].
AI Fairness 360 (AIF360) offers, among other things, a comprehensive set of metrics for datasets and models to test for biases, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
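As a quick illustration of the dataset-level metrics, a minimal AIF360 check might look like the sketch below. The toy data and column names are invented for the example; see the AIF360 documentation for the full API.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Tiny all-numeric table: AIF360 structured datasets expect numeric columns.
df = pd.DataFrame({
    "feat":  [0.1, 0.8, 0.5, 0.9, 0.3, 0.7],
    "sex":   [0,   0,   0,   1,   1,   1],    # protected attribute
    "label": [0,   1,   0,   1,   1,   1],    # favorable outcome = 1
})
ds = BinaryLabelDataset(df=df, label_names=["label"],
                        protected_attribute_names=["sex"])
metric = BinaryLabelDatasetMetric(ds,
                                  unprivileged_groups=[{"sex": 0}],
                                  privileged_groups=[{"sex": 1}])
print("statistical parity difference:", metric.statistical_parity_difference())
print("disparate impact:", metric.disparate_impact())
```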
5. Algofairness
The algofairness organization on GitHub collects several bias-auditing repositories (a rough sketch of the black-box auditing idea follows the list):
· BlackBoxAuditing
· fairness-comparison
· fatconference-2019-toolkit-tutorial
· fatconference-2018-auditing-tutorial
· runaway-feedback-loops-src
· Knight
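BlackBoxAuditing measures how much a trained black-box model relies on each feature. The sketch below is a crude stand-in for that idea, not the repository's gradient feature auditing implementation: it simply "obscures" one feature at a time by replacing it with its training mean and records the accuracy drop.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = ((X[:, 0] + 0.5 * X[:, 1]) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_tr, y_tr)
base = model.score(X_te, y_te)

# Obscure each feature in turn and measure the accuracy drop --
# a rough proxy for the feature's influence on the black box.
for j in range(X.shape[1]):
    X_ob = X_te.copy()
    X_ob[:, j] = X_tr[:, j].mean()
    print(f"feature {j}: accuracy {base:.3f} -> {model.score(X_ob, y_te):.3f}")
```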
Specification:
· django: Python-based backend framework for serving the data API and running machine-learning jobs
Aequitas is an open-source bias audit toolkit for data scientists, machine learning
researchers, and policymakers to audit machine learning models for
discrimination and bias, and to make informed and equitable decisions around
developing and deploying predictive tools.
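Based on the usage pattern in the Aequitas documentation, an audit starts from a DataFrame with binary score and label_value columns plus one column per protected attribute. The sketch below follows that pattern with toy data; exact column names and return values may differ by version.

```python
import pandas as pd
from aequitas.group import Group

# Aequitas expects 'score' and 'label_value' plus attribute columns.
df = pd.DataFrame({
    "score":       [1, 0, 1, 1, 0, 1],
    "label_value": [1, 0, 0, 1, 0, 1],
    "race":        ["a", "a", "b", "b", "a", "b"],
})

g = Group()
xtab, _ = g.get_crosstabs(df)  # per-group confusion-matrix metrics
print(xtab[["attribute_name", "attribute_value", "fpr", "fnr"]])
```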
Concerns within the machine learning community and external pressures from
regulators over the vulnerabilities of machine learning algorithms have spurred
on the fields of explainability, robustness, and fairness. Often, issues in
explainability, robustness, and fairness are confined to their specific sub-fields
and few tools exist for model developers to use to simultaneously build their
modeling pipelines in a transparent, accountable, and fair way. This can lead to
a bottleneck on the model developer’s side as they must juggle multiple methods
to evaluate their algorithms. In this paper, we present a single framework for
analyzing the robustness, fairness, and explainability of a classifier. The
framework, which is based on the generation of counterfactual explanations
through a custom genetic algorithm, is flexible, model-agnostic, and does not
require access to model internals. The framework allows the user to calculate
robustness and fairness scores for individual models and generate explanations
for individual predictions which provide a means for actionable recourse
(changes to an input to help get a desired outcome). This is the first time that a
unified tool has been developed to address three key issues pertaining to
building a responsible artificial intelligence system.
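A toy version of the core idea, generating a counterfactual for a black-box classifier with a tiny genetic search, might look like the sketch below. It is an illustrative simplification, not the paper's implementation: population size, mutation scale, and the fitness rule are all assumptions, and only model.predict is ever called.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Train a toy black box; from here on we only call model.predict.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def counterfactual(x, model, pop=50, gens=40, sigma=0.5):
    """Evolve candidate points that flip the model's prediction while
    staying close to x (tiny genetic search: select, then mutate)."""
    target = 1 - model.predict(x.reshape(1, -1))[0]
    P = x + rng.normal(scale=sigma, size=(pop, x.size))    # initial population
    for _ in range(gens):
        dist = np.linalg.norm(P - x, axis=1)
        flipped = model.predict(P) == target
        fitness = np.where(flipped, -dist, -dist - 100.0)  # flips first, then proximity
        parents = P[np.argsort(fitness)[-pop // 2:]]       # keep the fittest half
        children = parents + rng.normal(scale=sigma, size=parents.shape)
        P = np.vstack([parents, children])
    dist = np.linalg.norm(P - x, axis=1)
    flipped = model.predict(P) == target
    return P[flipped][np.argmin(dist[flipped])] if flipped.any() else None

x = np.array([-1.0, -0.5])                # currently classified 0
print("counterfactual:", counterfactual(x, model))  # a nearby point classified 1
```

As the abstract notes, such counterfactuals do double duty: the distance from an input to its nearest counterfactual can be aggregated into robustness scores, and comparing those distances across groups gives a fairness signal.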
10. scikit-fairness
Fairness, in data science, is a complex, unsolved problem for which many tactics have been proposed, each with its own advantages and disadvantages. This package aims to make these tactics readily available, enabling users to try out and evaluate different fairness techniques.
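One such tactic is scoring a model against a group-conditional metric. The helper below is an illustrative sketch of an equal-opportunity check, not scikit-fairness's own API; the function name and inputs are invented for the example.

```python
import pandas as pd

def equal_opportunity_difference(y_true, y_pred, groups):
    """Gap in true-positive rates across groups (illustrative sketch;
    see the scikit-fairness docs for its actual interface)."""
    df = pd.DataFrame({"y": y_true, "p": y_pred, "g": groups})
    tpr = df[df["y"] == 1].groupby("g")["p"].mean()   # per-group TPR
    return tpr.max() - tpr.min()

y_true = [1, 1, 0, 1, 1, 0, 1, 1]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]
groups = ["m", "m", "m", "m", "f", "f", "f", "f"]
print(equal_opportunity_difference(y_true, y_pred, groups))  # 2/3 - 1/3 ≈ 0.33
```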
This is the PyTorch implementation of the paper "Mitigating Gender Bias in Captioning Systems". Recent studies have shown that captioning datasets, such as the COCO dataset, may contain severe social bias which could potentially lead to unintentional discrimination in learning models. In this work, we specifically focus on the gender bias problem.
More info:
https://github.com/CaptionGenderBias2020/Mitigating_Gender_Bias_In_Captioning_System
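The dataset-level skew the paper starts from is easy to probe. The sketch below counts gendered words in a handful of made-up captions standing in for COCO annotations; it is a rough dataset check, not the paper's mitigation method, and the word lists are assumptions.

```python
from collections import Counter

# Toy captions standing in for COCO annotations (illustrative only).
captions = [
    "a man riding a skateboard",
    "a man holding a laptop",
    "a woman cooking in a kitchen",
    "a man playing baseball",
]

MALE = {"man", "men", "boy", "male"}
FEMALE = {"woman", "women", "girl", "female"}

counts = Counter()
for cap in captions:
    words = set(cap.lower().split())
    counts["male"] += bool(words & MALE)
    counts["female"] += bool(words & FEMALE)

# A skewed ratio hints at the dataset-level gender bias the paper targets.
print(counts, "ratio:", counts["male"] / max(counts["female"], 1))
```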
Also of interest: a collection of useful slides & quotes on AI Ethics and XAI.
Contact
https://www.linkedin.com/in/ceosaisoma/
https://www.aisoma.de