- 11.00-11.40 Nikolaj Pedersen (Yonsei University) - "On the Significance of Language Model Agents: Themes from Lazar"
Much recent work on the significance of AI has been largely reactive, responding to the negative consequences of AI only once they have occurred, or largely speculative, focusing almost entirely on the existential risk that AI might pose in a distant future. Against these trends, Seth Lazar has argued in a series of recent articles that there is an urgent need for work that falls between these two categories: investigations that seek to anticipate the potential significance of AI in the medium term and aim to provide philosophically informed guidance and policy recommendations, with a view to making it more likely that AI has an overall positive impact on society. Lazar’s main focus is on the significance of language model agents (LMAs), artificial agents built on language models and capable of autonomous planning and task execution. In particular, he discusses the potential ethical and political benefits and threats associated with LMAs. This talk offers an introduction to key themes from Lazar’s work on the significance of LMAs as well as a critical appraisal.
- 11.40-12.20 Duncan Pritchard (UC, Irvine) - "AI and the Epistemology of Education"
Many commentators are concerned that artificial intelligence (AI) may pose an existential threat to education. While I agree that it may have a transformative effect on educational practices, I think that these concerns are overstated, at least with regard to the large language model (LLM) variety of AI that is currently in vogue. Indeed, I will be suggesting that the use of this form of AI in educational contexts may actually serve a positive role in reminding us what the actual purpose of education is. To this end, I will be revisiting some of the debates about the overarching epistemic goal of educational practices and defending the thesis that this goal should be understood as the cultivation of virtuous intellectual character. Crucially, however, while many intellectual skills may in the future be off-loaded to AI, such that there is little need to educate for these skills anymore, this is not a feasible option for virtuous intellectual character. At most, AI can be a tool that assists the development of virtuous intellectual character. Properly understood, then, while educational practices may be transformed by AI, they cannot be undercut by it, at least so long as we keep in mind what the true overarching epistemic goal of education is.
- 12.20-12.50 Lunch break
- 12.50-13.30 Peter Graham (UC, Riverside) - "Did Claude Tell You That? Testimony & AI: Some Distinctions"
With every new release, conversational AIs and LLMs sound more and more like people telling us things in conversation. How should philosophers—philosophers of language and epistemologists, in particular—think about the apparent "testimony" from artificially intelligent machines? Is it just more of the same, or is it drastically different? The current literature on the topic tends towards the latter. But once we draw a few distinctions, the question becomes both clearer and more complex. What distinctions? Distinctions between different types of speech acts (including different types of quasi-speech acts), different ways of responding to (both kinds of) speech acts, and different ways to epistemically evaluate ways of responding to such acts. And so, to answer our question, we need to distinguish a variety of distal causes and their representation and uptake (that's the philosophy of language bit) and the epistemic evaluation of those distal causes and their uptake (that's the epistemology bit).
Sponsored by the Center for Knowledge, Technology & Society
Please RSVP using the button above.