Literature
Table of Contents
- Review and General Papers
- Measurements of Fairness
- Demonstration of Bias Phenomena in Various Applications
- Mitigation of Unfairness
  - Mitigation of Machine Learning Models
    - Adversarial Learning
    - Calibration
    - Incorporating Priors into Feature Attribution
    - Data Collection
    - Other Mitigation Methods
  - Mitigation of Representations
- Fairness Packages and Frameworks
- Conferences
- Other Fairness Relevant Interpretability Resources
Review and General Papers
- Fairness in Deep Learning: A Computational Perspective
- The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning
- Fairness and machine learning
- A Survey on Bias and Fairness in Machine Learning
- The Frontiers of Fairness in Machine Learning
- Ensuring fairness in machine learning to advance health equity
- Mitigating Gender Bias in Natural Language Processing: Literature Review
- Fairness in Recommender Systems
- Implementations in Machine Ethics: A Survey
Measurements of Fairness
- On Formalizing Fairness in Prediction with Machine Learning
- Equality of Opportunity in Supervised Learning
- Certifying and removing disparate impact
- Does mitigating ML’s impact disparity require treatment disparity?
- Putting Fairness Principles into Practice: Challenges, Metrics, and Improvements
- Beyond Parity: Fairness Objectives for Collaborative Filtering
- 50 Years of Test (Un)fairness: Lessons for Machine Learning
- Fairness Definitions Explained
- Algorithmic Fairness
- Bias in data‐driven artificial intelligence systems—An introductory survey
- Fairness is not Static: Deeper Understanding of Long Term Fairness via Simulation Studies
- Delayed Impact of Fair Machine Learning
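The papers in this section formalize group-fairness criteria such as demographic parity and equalized odds. As a rough illustration of what two of these criteria measure for a binary classifier, here is a minimal NumPy sketch (toy data and function names are illustrative, not taken from any single paper):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in true-positive / false-positive rates across two groups."""
    gaps = []
    for label in (1, 0):  # label 1 -> TPR gap, label 0 -> FPR gap
        mask = y_true == label
        gaps.append(abs(y_pred[mask & (group == 0)].mean()
                        - y_pred[mask & (group == 1)].mean()))
    return max(gaps)

# Toy usage with synthetic predictions and a binary group indicator.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))      # 0.0
print(equalized_odds_gap(y_true, y_pred, group))  # ~0.33
```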
Demonstration of Bias Phenomena in Various Applications
Bias in Machine Learning Models
- Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification
- Deep Learning for Face Recognition: Pride or Prejudiced?
- Examining Gender and Race Bias in Two Hundred Sentiment Analysis Systems
- Demographic Dialectal Variation in Social Media: A Case Study of African-American English
- Feature-Wise Bias Amplification
- ConvNets and ImageNet Beyond Accuracy: Understanding Mistakes and Uncovering Biases
Bias in Representations
- Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings
- Gender Bias in Contextualized Word Embeddings
- Assessing Social and Intersectional Biases in Contextualized Word Representations
Mitigation of Unfairness
Mitigation of Machine Learning Models
Adversarial Learning
- Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations
- Achieving Fairness through Adversarial Learning: an Application to Recidivism Prediction
- Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations
- Mitigating Unwanted Biases with Adversarial Learning
- Adversarial Removal of Demographic Attributes from Text Data
- Compositional Fairness Constraints for Graph Embeddings
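Several of the adversarial-learning papers above share one core idea: train the main predictor jointly with an adversary that tries to recover the protected attribute, and penalize the predictor when the adversary succeeds. A minimal PyTorch sketch of that idea follows; the architectures, synthetic data, and trade-off weight are illustrative assumptions, not any single paper's exact method.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
bce = nn.BCEWithLogitsLoss()

# Synthetic data: features x, protected attribute z, label y correlated with z.
x = torch.randn(256, 10)
z = (torch.rand(256, 1) > 0.5).float()
y = ((x[:, :1] + 0.5 * z + 0.1 * torch.randn(256, 1)) > 0).float()

predictor = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)

for step in range(200):
    # 1) Train the adversary to predict z from the (detached) predictor output.
    logits = predictor(x).detach()
    adv_loss = bce(adversary(logits), z)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # 2) Train the predictor to fit y while making the adversary fail.
    logits = predictor(x)
    pred_loss = bce(logits, y) - 1.0 * bce(adversary(logits), z)
    opt_pred.zero_grad(); pred_loss.backward(); opt_pred.step()
```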
Calibration
Incorporating Priors into Feature Attribution
Data Collection
Other Mitigation Methods
- Mitigating Bias in Gender, Age and Ethnicity Classification: a Multi-Task Convolution Neural Network Approach
- InclusiveFaceNet: Improving Face Attribute Detection with Race and Gender Diversity
- Reducing Gender Bias in Abusive Language Detection
- Fairness-aware Learning through Regularization Approach
- Fairness Constraints: Mechanisms for Fair Classification
- Penalizing Unfairness in Binary Classification
- Fairness Constraints: A Flexible Approach for Fair Classification
- A General Framework for Fair Regression
- Fair Regression: Quantitative Definitions and Reduction-based Algorithms
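Some of the methods above (notably the regularization and constraint-based papers) instead add a fairness term directly to the training objective. As a hedged sketch of that general pattern, the snippet below trains a logistic model with a demographic-parity style penalty on the gap in mean predicted scores between groups; the data, penalty form, and weight `lam` are illustrative assumptions, not a reproduction of any listed method.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Illustrative data: features x, protected attribute z, label y correlated with z.
x = torch.randn(500, 5)
z = (torch.rand(500) > 0.5).float()
y = ((x[:, 0] + z + 0.1 * torch.randn(500)) > 0.5).float()

w = torch.zeros(5, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([w, b], lr=0.05)
lam = 2.0  # strength of the fairness penalty (a tuning knob)

for step in range(300):
    p = torch.sigmoid(x @ w + b)
    task_loss = F.binary_cross_entropy(p, y)
    # Penalize the gap in mean predicted score between the two groups.
    gap = p[z == 1].mean() - p[z == 0].mean()
    loss = task_loss + lam * gap.pow(2)
    opt.zero_grad(); loss.backward(); opt.step()
```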
Mitigation of Representations
- Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings
- Learning Gender-Neutral Word Embeddings
- Flexibly Fair Representation Learning by Disentanglement
- Turning a Blind Eye: Explicit Removal of Biases and Variation from Deep Neural Network Embeddings
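A recurring technique in these papers is to estimate a bias direction in embedding space and project it out of words that should be neutral (the "neutralize" step of hard debiasing in the Bolukbasi et al. paper above). A simplified NumPy sketch, using made-up 3-d vectors and omitting the equalization step:

```python
import numpy as np

def neutralize(word_vecs, neutral_words, bias_direction):
    """Remove the component along the bias direction from selected word vectors."""
    g = bias_direction / np.linalg.norm(bias_direction)
    out = dict(word_vecs)
    for w in neutral_words:
        v = out[w]
        out[w] = v - np.dot(v, g) * g  # project out the bias direction
    return out

# Illustrative 3-d vectors; in practice the direction is estimated from
# definitional pairs such as ("he", "she") in a real embedding space.
vecs = {"he": np.array([1.0, 0.0, 0.0]),
        "she": np.array([-1.0, 0.1, 0.0]),
        "programmer": np.array([0.4, 0.6, 0.2])}
direction = vecs["he"] - vecs["she"]
debiased = neutralize(vecs, ["programmer"], direction)
print(np.dot(debiased["programmer"], direction / np.linalg.norm(direction)))  # ~0.0
```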
Fairness Packages and Frameworks
- FairMLHealth
- AI Fairness 360
- Fairlearn: Fairness in machine learning mitigation algorithms
- Fairness-comparison
- Algofairness
- MEASURES
- Themis-ml
- FairML
- Black Box Auditing
- What-If Tool
- FairSight: Visual Analytics for Fairness in Decision Making
- GD-IQ: Spellcheck for Bias (code not available)
- Aequitas: Bias and Fairness Audit Toolkit
- DECAF
- REPAIR
- CERTIFAI: A Common Framework to Provide Explanations and Analyse the Fairness and Robustness of Black-box Models
- ML-fairness-gym: Google’s implementation based on OpenAI’s Gym
- Fairness-indicators: Tensorflow’s Fairness Evaluation and Visualization Toolkit
- Adv-Demog-Text
- GN-GloVe
- Tensorflow Constrained Optimization
- scikit-fairness
- Mitigating Gender Bias In Captioning System
- Dataset Nutrition Label
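As a small example of what these toolkits expose, here is a hedged sketch using Fairlearn's metrics module (listed above); treat the exact API as version-dependent rather than authoritative.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

# Toy predictions and a sensitive feature (illustrative data only).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
sex = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])

# Disaggregate a standard metric by group and report the largest gap.
mf = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                 sensitive_features=sex)
print(mf.by_group)      # accuracy per group
print(mf.difference())  # max absolute between-group gap

# Built-in group-fairness metric: difference in selection rates.
print(demographic_parity_difference(y_true, y_pred, sensitive_features=sex))
```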
Conferences
- FAT* (now FAccT): ACM Conference on Fairness, Accountability, and Transparency
- FATML: Fairness, Accountability, and Transparency in Machine Learning Workshop
- AIES: AAAI/ACM Conference on AI, Ethics, and Society