Lumi sull'Intelligenza Artificiale
Lumi su IA (en. tr. Lights on AI) is a series of (currently online) lectures promoted by IDSIA USI-SUPSI and co-organised by Alessandro Facchini (IDSIA) and Alberto Termine (U. Milano). By regularly inviting speakers not only from the AI community but also from the humanities and social sciences, it aims to promote a multi-disciplinary perspective and dialogue on some of the most pressing foundational, epistemological and ethical issues surrounding contemporary Artificial Intelligence.

[Lumi#2]

Speaker: Prof. Kevin O’Regan, CNRS, Université Paris Descartes (personal website: http://nivea.psycho.univ-paris5.fr)
Title: Why machines will soon have phenomenal consciousness
Abstract: Most philosophers and scientists are convinced that there is a "hard problem" of consciousness and an "explanatory gap" preventing us from understanding why experiences feel the way they do and why there is "something it's like" to have them. My claim is that this attitude may derive from the (understandable) desire to preserve the last remnants of human uniqueness before the onslaught of conscious robots. My "sensorimotor" approach to understanding phenomenal consciousness suggests that this is a mistake. If we really think deeply about what we mean by having a phenomenal experience, then there is no reason why machines should not be conscious very soon, i.e. within the next decades.

[Lumi#1]

Speaker: Carlos Zednik, Assistant Professor for Philosophy of Artificial Intelligence at Eindhoven University of Technology (personal website: http://explanations.ai)
Title: Explainable AI as a Tool for Scientific Exploration
Abstract: Although models developed using machine learning are increasingly prevalent in science, their opacity can limit their scientific utility. Explainable AI (XAI) aims to diminish this impact by rendering opaque models transparent. But XAI is more than just the solution to a problem: it also plays an invaluable exploratory role. In this talk, I will introduce a series of XAI techniques and in each case demonstrate their potential usefulness for scientific exploration. In particular, I argue that these tools can be used to (1) better understand what an ML model is a model of, (2) engage in causal inference over high-dimensional nonlinear systems, and (3) generate "algorithmic-level" hypotheses in cognitive science.

Forthcoming events

28 May, 5 pm (CET)
Speaker: Hajo Greif, Research Assistant Professor, Philosophy of Computing Group, ICFO, WAiNS, Warsaw University of Technology (personal website: http://hajo-greif.net)
Title: Analogue Models and Universal Machines. Paradigms of Epistemic Transparency in Artificial Intelligence

10 June, 5 pm (CET)
Speaker: Emily Sullivan, Assistant Professor of Philosophy at Eindhoven University of Technology (personal website: http://www.eesullivan.com/about.html)
Title: TBA
