6 May 2026
from 10:30
Abstract
Large Language Models offer new capabilities for supporting the detection, evaluation and debunking of disinformation. This process is currently largely manual, and debunkers and fact-checkers are frequently overwhelmed by the amount of information they need to evaluate. Based on a methodology developed by debunkers, the research team at PJAIT has created large datasets of Web content in Polish and English that include examples of disinformation. The disinformation has been evaluated in detail for the presence of persuasive techniques, harmful intentions, and disinformation narratives (repeating patterns of disinformation). Large Language Models can be used to identify these features of disinformation for two purposes: improving the accuracy of disinformation detection (in a chain-of-reasoning approach), or supporting learning about disinformation, improving critical reasoning, and teaching skills useful for debunking and fact-checking. The talk introduces a workflow for detecting and debunking disinformation and demonstrates how LLMs can be applied to support this process.
Bio
Prof. Adam Wierzbicki is a Full Professor of Informatics and a researcher with interdisciplinary knowledge and experience in informatics, psychology and sociology. He is employed at the Polish-Japanese Academy of Information Technology (PJAIT), where he is Vice-President, Head of the Ph.D. programme (since 2010, this interdisciplinary programme has involved several hundred students), and leader of a research group in Social Informatics. His research interests lie in Social Informatics, an area of informatics that aims to design information systems and algorithms while taking into account their social and psychological impact, as well as the reciprocal impact of human behavior on information systems. Prof. Wierzbicki has been a pioneer of research on Web content credibility. In 2013-2016, he led the Reconcile project, which researched methods of Web content credibility evaluation, pioneering work in this area before the term “fake news” was coined. He is the author of the monograph “Web Content Credibility” (Springer, 2018). His current research interests include Human-AI Interaction, AI in Education and Responsible AI. He currently teaches a course on Fairness, Accountability, Transparency and Ethics in ICT and AI at PJAIT.
Host
Alessandro Facchini, Associate Professor in Epistemology, Logic and Ethics of Artificial Intelligence; Co-Head of the Bachelor’s degree programme in Data Science and Artificial Intelligence at the Department of Innovative Technologies of SUPSI; and Co-Head of the scientific area AI and Society at the Dalle Molle Institute for Artificial Intelligence (IDSIA USI-SUPSI).