Design of Fair AI Systems via Human-Centric Detection

Overview 

AI systems may not only reproduce data bias but amplify it. Unfortunately, data bias is difficult to define, let alone to detect and mitigate. Consider, for example, bias by omission: transgender people, refugees and stateless people, or formerly incarcerated individuals may simply be missing from the data. Biased data can produce harmful systems that commit “data violence,” negatively affecting people’s everyday lives as they interact with institutions, from social service systems and security checkpoints to employment background-check systems. Data and algorithmic bias can hurt people downstream in ways that are difficult to anticipate.

Methods and Findings 

The project team engaged people to help identify bias in datasets and to assess the fairness of alternative algorithms and evaluation metrics.

Three key conclusions have emerged from recent research. First, it is very difficult to remove bias from data alone, because bias can creep in through many insidious channels. Second, choosing the best algorithmic criterion (loss function or evaluation score) is very challenging. Finally, improving one criterion may increase bias under another, and neither may properly capture human evaluations of fairness. The sketch below illustrates this tension.
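To make the tension between criteria concrete, here is a minimal sketch comparing two common group-fairness metrics: demographic parity (equal selection rates across groups) and equalized odds (equal error rates across groups). The labels, predictions, and group split are invented for illustration and are not data or code from this project; the point is that the same predictions can satisfy one criterion while violating the other.

```python
import numpy as np

def selection_rate(y_pred):
    """Fraction of individuals receiving the positive decision."""
    return np.mean(y_pred)

def true_positive_rate(y_true, y_pred):
    """P(prediction = 1 | true label = 1)."""
    return np.mean(y_pred[y_true == 1])

def false_positive_rate(y_true, y_pred):
    """P(prediction = 1 | true label = 0)."""
    return np.mean(y_pred[y_true == 0])

# Hypothetical toy labels and predictions for two demographic groups.
y_true_a = np.array([1, 1, 0, 0]); y_pred_a = np.array([1, 0, 1, 0])
y_true_b = np.array([1, 0, 0, 0]); y_pred_b = np.array([1, 1, 0, 0])

# Demographic parity: compare selection rates across groups.
dp_gap = abs(selection_rate(y_pred_a) - selection_rate(y_pred_b))

# Equalized odds: compare error rates (TPR and FPR) across groups.
tpr_gap = abs(true_positive_rate(y_true_a, y_pred_a)
              - true_positive_rate(y_true_b, y_pred_b))
fpr_gap = abs(false_positive_rate(y_true_a, y_pred_a)
              - false_positive_rate(y_true_b, y_pred_b))

print(f"Demographic parity gap: {dp_gap:.2f}")        # 0.00 -- looks "fair"
print(f"TPR gap: {tpr_gap:.2f}, FPR gap: {fpr_gap:.2f}")  # nonzero -- unequal errors
```

Here both groups receive positive decisions at the same rate (demographic parity gap of zero), yet the classifier makes errors at very different rates for the two groups. Which gap matters more depends on context, which is exactly where human evaluations of fairness come in.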

Human-centered approaches to assessing bias and fairness can fill this critical gap and inform research on algorithmic fairness.


Team Members

Anubrata Das, Graduate Student
Soumyajit Gupta, Computer Science
Aditya Jain, Computational Engineering

Select Publications
Joydeep Ghosh and Risto Miikkulainen. “Trustworthiness in AI.” Pulse of AI Podcast, July 2020.

Joydeep Ghosh. “Detecting Racial Bias in Healthcare using Trusted AI.” Webinar, August 2020.

Umang Bhatt, Alice Xiang, Shubham Sharma, Adrian Weller, Ankur Taly, Yunhan Jia, Joydeep Ghosh, Ruchir Puri, José M. F. Moura, and Peter Eckersley. “Explainable Machine Learning in Deployment.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 648-657. 2020.

Shubham Sharma, Jette Henderson, and Joydeep Ghosh. “CERTIFAI: A Common Framework to Provide Explanations and Analyse the Fairness and Robustness of Black-box Models.” In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 166-172. 2020. 

Min Kyung Lee, Anuraag Jain, Hea Jin Cha, Shashank Ojha, and Daniel Kusbit. “Procedural justice in algorithmic fairness: Leveraging transparency and outcome control for fair algorithmic mediation.” Proceedings of the ACM on Human-Computer Interaction 3, no. CSCW. 2019.