Join us for an AI and the Law talk with Nicole Rigillo, PhD, Research Fellow at the Berggruen Institute and Element AI.
Artificially intelligent systems are increasingly being used to both augment and replace human decision-makers of all kinds: a human resources representative screening a job applicant, a judge assessing a prisoner’s bail request, or an immigration agent issuing a visa, for example. The automation of high-stakes decisions has led to concerns about bias, fairness, and transparency. But a question that remains largely unexplored is how the application of machine learning to decisions made about human beings asks us to revise our understanding of what a decision is.
This is not a trivial question considering that, with some exceptions, our laws, expectations about explanations, and notions of accountability have been developed around humans as the primary agents responsible for decisions made about other humans. This presentation first situates how human decisions have been framed in administrative law, cognitive science, and behavioural economics. It then draws on interviews with AI engineers to illustrate the major differences between human decision-making processes and the modes of reasoning used in second-wave AI.
A key issue here is the problem of interpretability, which has necessitated the development of a set of methods known as explainable AI, along with attendant concerns about the quality and utility of the explanations they produce. The presentation closes by arguing that the differences in modes of reasoning between humans and machines necessitate a rethinking of laws and notions of accountability to better account for the specificity of decisions made by artificially intelligent agents.
About the speaker
Nicole Rigillo is an anthropologist and Research Fellow at the Berggruen Institute's Transformations of the Human Program. She is based at Element AI in Montreal, where she engages AI scientists in dialogue on how artificial intelligence is changing what it means to be human.
Her current research centers around explainable AI and spaces of epistemic negotiation between humans and intelligent machines, ethical AI processes, and data collection in insurance and retail contexts. Her postdoctoral research at the University of Edinburgh examined how civic and environmental activists use WhatsApp to improve municipal governance in Bangalore, India, raising questions concerning the effects of encrypted dark social networks on democracy and the public sphere. Her PhD research at McGill University focused on how mandatory corporate social responsibility in India is altering an earlier model of welfare universalism by redistributing social responsibilities among groups of non-state actors.
AI and the Law Series
The AI and the Law Series is brought to you by the Montreal Cyberjustice Laboratory; the McGill Student Collective on Technology and Law; the Private Justice and the Rule of Law Research Group; the McGill Centre for Intellectual Property Policy; and the Autonomy Through Cyberjustice Technologies Project.
This event is eligible for inclusion as 1.5 hours of continuing legal education as reported by members of the Barreau du Québec.