Are You Even a Person? LLMs and Agentic Distrust
Work-in-Progress hosted by Alexis Morin-Martel, a PhD candidate at McGill University.
Leacock 927, 1:30–3:30, December 18th
Open to all, registration required
To register, please email: alessandra.destison [at] mail.mcgill.ca
Paper abstract:
Meaningful online conversation is increasingly difficult, not only because of bad-faith arguments or disinformation, but because we can no longer be sure our interlocutors are even agents. What looks like ordinary human communication can now just as easily be generated by a large language model (LLM), and empirical studies show that humans are poor at distinguishing LLM-generated from human-generated content. I call this increasingly reasonable doubt about whether others are agents "agentic distrust" and argue that it constitutes the primary moral wrong of LLM proliferation. I first consider and reject three familiar explanations of what is troubling about LLMs: harmful content, deception, and their inability to satisfy familiar norms of assertion. None captures the distinctive interpersonal harm introduced by agentic distrust. I then argue that agentic distrust generates a potentially intractable moral dilemma. If we treat suspicious interlocutors as bots, we risk improperly withholding the recognition respect owed to persons. If we treat them as human, we risk misallocating our moral attention and becoming complicit in destabilizing the norms of interpersonal accountability that make discourse meaningful.