Ethics in AI Lunchtime Research Seminars - Lost in Conversation? Uncertainty, 'Value-Alignment', and Large Language Models
Abstract: Since LLMs can be used as conversational partners, the types of uncertainty they need to convey are not limited to quantifiable, semantic, or factual kinds. How LLMs relay morally loaded unknowns will have a qualitative impact on the nature of future conversations. This talk draws attention to this hitherto under-appreciated systemic risk, and it also explores potential avenues towards a symbiotic learning environment in which both the LLM and the user community continuously evolve in their understanding and communication of unquantifiable uncertainties.
Find out more about Ethics in AI Lunchtime Research Seminars | Ethics in AI (ox.ac.uk)