Seminar Series in Analytic Philosophy 2023-24: Session 24

Can Conversational AIs Testify?

Domingos Faria (Universidade do Porto)


24 May 2024, 16:00 (Lisbon Time – WET)

Faculdade de Letras de Lisboa

Sala Mattos Romão [C201.J] (Departamento de Filosofia)


Abstract: We learn new things, and acquire knowledge, on the basis of the “say-so” of conversational AIs (such as ChatGPT). How should we understand these attributions of knowledge? Can they be understood as testimonial knowledge? The orthodox view, defended by Coady (1992), Lackey (2008), Tollefsen (2009), Goldberg (2012), and Pagin (2016), is that conversational AIs cannot be considered testimonial sources, but at most instrumental sources of knowledge (similar to the knowledge we obtain by consulting a thermometer). The main argument for this orthodox view can be summarized as follows: an entity S can testify that p only if S believes that p, S intends to deliver testimony that p, S is an epistemic agent responsible for transmitting that p, S is an object of trust, and S is able to assert that p. But conversational AIs cannot believe that p, cannot intend to testify that p, are not responsible epistemic agents transmitting that p, are not objects of trust, and cannot assert that p. Therefore, conversational AIs cannot testify that p. In this paper, I intend to show that this argument is not sound, since there are plausible reasons to reject both premises. Furthermore, by developing the framework conceived by Tyler Burge (1998), it is possible to argue that some instruments can testify, as is the case with conversational AIs.