“Understanding Fairness in Fair Machine Learning”
The issues surrounding bias, discrimination, and fairness in prediction results generated by machine learning (ML) have attracted increasing interest. Numerous fair ML algorithms have been proposed to address these issues. However, even though these algorithms can produce prediction results that are fair according to formal notions of discrimination, they may influence the behavior of prediction subjects in ways that allow bias and discrimination to persist outside the prediction results. The purpose of this research project is to examine the economic and societal implications of fairness when a fair ML algorithm is deployed in realistic settings. The project leverages a unique synergy of cross-disciplinary expertise in ML, economics, operations research, and information systems to understand how fair ML affects the behavior of prediction subjects in realistic settings, and to use this knowledge to design a fair ML algorithm that accounts for the behavior of prediction subjects and the welfare of relevant stakeholders.
Prof. Cohen is the principal investigator on this grant, with Prof. Khern-am-nuai as co-investigator.
Supported in part by funding from the Social Sciences and Humanities Research Council.