News

Strategic best-response fairness framework for fair machine learning

Published: 20 May 2025

Authors: Hajime Shimao, Warut Khern-Am-Nuai, Karthik Kannan, and Maxime C. Cohen

Publication: Information Systems Research (forthcoming)
Articles in Advance: published online 7 Apr 2025

Abstract:

Discrimination in machine learning (ML) has become a prominent concern as ML is increasingly used for decision making. Although many “fair-ML” algorithms have been designed to address such discrimination, virtually all of them focus on alleviating disparity in prediction results by imposing additional constraints. Naturally, prediction subjects alter their behaviors in response, yet these algorithms never account for such behavioral responses. Consequently, even if the disparity in prediction results is removed, disparity in behaviors may persist across different subpopulations of prediction subjects. When these biased behavioral outcomes are used to train ML algorithms, they can perpetuate discrimination in the long run. To study this issue, we define a new notion called “strategic best-response fairness” (SBR-fairness). It is defined in a context involving subpopulations that are ex ante identical and have identical conditional payoffs: even if an algorithm is trained on biased data, does it lead to identical equilibrium behaviors across subpopulations? If so, we call the algorithm SBR-fair. We then use the SBR-fairness framework to analyze the properties of existing fair-ML algorithms. We also discuss how the framework can inform the design of fair-ML algorithms, along with the practical and policy implications of SBR-fairness.
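The abstract's definition can be sketched in formulas. The paper's formal notation is not reproduced here; the LaTeX below is a minimal reading of the abstract, with the group label g, algorithm f, payoff u, and behavior space B all assumed notation rather than the authors'.

\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% Hypothetical formalization; notation ($g$, $f$, $u$, $\mathcal{B}$)
% is ours, not the paper's.
Two subpopulations $g \in \{A, B\}$ are ex ante identical and share the
same conditional payoff $u$. A prediction algorithm $f$, possibly
trained on biased data, induces for each group an equilibrium
best-response behavior:
\[
  b^{*}_{g}(f) = \arg\max_{b \in \mathcal{B}}
  \mathbb{E}\!\left[ u\bigl(b, f(b)\bigr) \,\middle|\, g \right],
  \qquad g \in \{A, B\}.
\]
% SBR-fairness asks whether these induced behaviors coincide.
The algorithm $f$ is SBR-fair if and only if the equilibrium behaviors
it induces are identical across subpopulations:
\[
  f \text{ is SBR-fair} \iff b^{*}_{A}(f) = b^{*}_{B}(f).
\]
\end{document}

Under this reading, SBR-fairness is a condition on downstream equilibrium behavior rather than on the prediction outputs themselves, which is what distinguishes it from the output-disparity constraints the abstract attributes to existing fair-ML algorithms.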
