Event: Work-in-Progress Session - Hugo Cossette-Lefebvre: "Can AI Systems Hold Wrongful Beliefs? Doxastic Wrongs in Machine Reasoning"
The Jarislowsky Chair will host Hugo Cossette-Lefebvre, a postdoctoral researcher with the Chair, for a work-in-progress session on March 17, 2025, at noon.
This event is open to all. Please write to julia.houwen [at] mcgill.ca to confirm attendance and to receive a copy of the draft that will be discussed. A vegan lunch will be provided to those who confirm their attendance before noon on March 13.
Hugo will discuss a draft of an upcoming paper titled "Can AI Systems Hold Wrongful Beliefs? Doxastic Wrongs in Machine Reasoning". See the abstract below:
"In this paper, I argue that AI systems can hold beliefs and doxastically wrong individuals, even in the absence of harm. To illustrate, I argue that if we accept that a human employer's racist belief wrongs an applicant, then we should also accept that an AI system's racially biased classification does the same. I first examine different theories of belief 鈥 functionalism, dispositionalism, and representationalism 鈥 and show that AI systems can be said to hold beliefs under these frameworks. Second, I build on the literature on doxastic wrongs to argue that the beliefs of AI systems should be subject to moral evaluation, just as human beliefs are, since AI systems are increasingly integrated into socio-political relations. Third, I note two key differences between humans and AIs: AI beliefs are more controllable by designers and AI lacks moral agency. I argue that these differences do not undermine the claim that AI beliefs can wrong persons. Finally, I explore design implications. I argue that AI systems should be designed to respect 鈥渙ught-to-be norms鈥 that constrain the beliefs they should be allowed to form. This analysis highlights the ethical significance of AI reasoning and the necessity of evaluating AI systems beyond harm-based assessments."