30 September 2020

Due to the current situation, this event will be postponed to September/October 2020.

Recent striking successes in Artificial Intelligence have made the public believe that, in a not-so-distant future, machines could be even more intelligent than human beings. The actual and possible developments of Artificial Intelligence open up a series of striking, deep and pressing questions, such as:
– Can a computer ever think in the way a human being does?
– Can a computer have a mind and conscious experiences, such as thoughts, desires, and emotions?
– What is artificial intelligence? Is it the same as human intelligence? Are they even comparable, or are they something essentially different?
– Can a machine be morally responsible for its actions? Can a machine be good or evil? What other moral considerations are related to AI?
With the goal of enhancing their scientific and educational collaborations around these important topics, the Swiss AI Lab IDSIA USI-SUPSI and the USI Master in Philosophy Program are organising an international meeting on current trends and perspectives in the Philosophy of AI, in Lugano on May 29-30 2020.
Auditorium USI

3 June 2020

NNAISENSE is a startup originating from IDSIA. The Swiss AI lab has a long-standing track record of ground-breaking results in artificial intelligence (AI). From perception to reinforcement learning, the company follows in IDSIA’s footsteps in the search for super-human performance, taking AI technology into manufacturing and control systems. While AI approaches control problems from an information-theoretic and statistical perspective, control theory studies the closed-loop behaviour of the physical world with a strong focus on safety, hard constraints and theoretical guarantees. Control approaches can be very robust, but their assumptions often make them conservative. This is believed to be less of a problem for Neural Networks (NN) and AI, where non-conservative results can be achieved but formal guarantees are generally harder to obtain. NN performance depends mainly on the quality and the amount of data, and no unified framework exists for the analysis of stability and robustness of NN-driven control and Reinforcement Learning (RL). For this reason, while deep learning is becoming the industry standard for perception, its use in control is mostly limited to simulated or non-critical tasks. Combining the fields of control and AI has the potential for retaining the best of both worlds. This talk will introduce NNAISENSE’s most significant publications in this emerging field, with a special focus on the latest one: “Neural Lyapunov Model Predictive Control”.
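The Lyapunov stability idea behind that last title can be illustrated with a toy check (a hypothetical sketch under made-up numbers, not the paper's actual method): given a stable discrete-time closed loop, construct a quadratic function V(x) = xᵀPx and verify empirically that V strictly decreases along trajectories.

```python
import numpy as np

# Toy linear closed loop x_{k+1} = M x_k, where M would be A + B K for
# some stabilizing gain K; the matrix below is illustrative only.
M = np.array([[1.0, 0.1],
              [-0.5, 0.5]])  # spectral radius < 1 (eigenvalues ~0.86, ~0.64)

# Build a quadratic Lyapunov function V(x) = x^T P x by approximately
# solving the discrete Lyapunov equation M^T P M - P = -Q via the series
# P = sum_k (M^T)^k Q M^k, which converges since M is stable.
Q = np.eye(2)
P = Q.copy()
for _ in range(500):
    P = Q + M.T @ P @ M

def V(x):
    return float(x.T @ P @ x)

def decreases_along_trajectory(x0, steps=50):
    """Empirically check that V strictly decreases along the closed loop."""
    x = x0
    for _ in range(steps):
        x_next = M @ x
        if V(x) > 1e-9 and V(x_next) >= V(x):
            return False
        x = x_next
    return True

print(decreases_along_trajectory(np.array([[1.0], [1.0]])))  # True
```

By construction V(Mx) − V(x) = −xᵀQx < 0 for x ≠ 0, so the empirical check passes; a learned (neural) Lyapunov function replaces this hand-built P with a network trained to satisfy the same decrease condition.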

13 February 2020

Probabilistic sentential decision diagrams are a class of probabilistic graphical models natively embedding logical constraints within a “deep” layered structure with statistical parameters. They thereby induce a joint probability distribution over the involved Boolean variables that sharply assigns probability zero to states inconsistent with the logical constraints. In this presentation, I will first introduce and motivate such probabilistic circuits. I will then present a set-valued generalisation of the probabilistic quantification in these models, which allows the sharp specification of the local probabilities to be replaced with linear constraints over them. In doing so, a (convex) set of joint probability mass functions, all consistent with the assigned logical constraints, is induced.
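The key property above (logically inconsistent states receive probability exactly zero) can be sketched in a few lines. The constraint and the uniform weights below are made-up toy values standing in for the products of local PSDD parameters, not an actual circuit:

```python
from itertools import product

# Toy logical constraint over Boolean variables (a, b, c): "a OR b" must hold.
def consistent(a, b, c):
    return bool(a or b)

# Unnormalized weights: positive mass only on states satisfying the constraint
# (uniform here; a real PSDD would multiply learned local parameters).
weights = {state: (1.0 if consistent(*state) else 0.0)
           for state in product([0, 1], repeat=3)}

# Normalize into a joint probability mass function.
Z = sum(weights.values())
pmf = {s: w / Z for s, w in weights.items()}

print(pmf[(0, 0, 0)])  # 0.0  -- violates "a OR b"
print(pmf[(1, 0, 1)])  # one of the 6 consistent states
```

The set-valued generalisation mentioned in the abstract would replace each sharp weight with an interval or linear constraint, yielding a convex set of such pmfs rather than a single one.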
Manno, Galleria 1, 2nd floor, room G1-201 @12:00

16 January 2020

Most literature in the Philosophy of Computing stresses the dual, abstract and physical nature of computational systems. In many respects, this debate reduces to the problem of explaining the relation which has traditionally been expressed in terms of the duality between specification and implementation. When this problem is analysed from the point of view of the notion of information, though, computational systems need to be described at several levels of abstraction, and at each level an appropriate notion of information is required. With such a conceptual tool in place, correctness of computational artifacts is adequately defined at the functional, procedural and executional levels. A correct physical computational system is one which satisfies all such layers. This tripartite notion of correctness based on information is in turn essential to provide the basic elements of an appropriate logical analysis of efficiency, correctness, explanation and resilience for computational systems.
Manno, Galleria 1, 2nd floor, room G1-201 @12:00

The recent advances in Deep Learning have made many tasks in Computer Vision much easier to tackle. However, working with a small amount of data and highly imbalanced real-world datasets can still be very challenging. In this talk, I will present two of my recent projects, where modelling and training occur under those circumstances. Firstly, I will introduce a novel 3D UNet-like model for fast volumetric segmentation of lung cancer nodules in Computed Tomography (CT) imagery. This model relies heavily on kernel factorisation and other architectural improvements to reduce the number of parameters and the computational load, allowing its successful use in production. Secondly, I will discuss the use of representation learning, or similarity metric learning, for few-shot classification tasks, and more specifically its use in a competition at NeurIPS 2019 and Kaggle. This competition aimed to detect the effects of over 1000 different genetic treatments on 4 types of human cells, and published a dataset composed of 6-channel fluorescent microscopy images with only a handful of samples per target class.
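The parameter savings from kernel factorisation can be shown with simple arithmetic (generic channel counts chosen for illustration, not the actual model's): replacing a full 3×3×3 convolution with a 3×3 in-plane convolution followed by a 3-tap convolution along the depth axis cuts the weight count from 27·C_in·C_out to (9 + 3)·C_in·C_out.

```python
def conv3d_params(kd, kh, kw, c_in, c_out):
    """Weight count of a 3D convolution layer (bias terms ignored)."""
    return kd * kh * kw * c_in * c_out

# Illustrative channel counts, not taken from the talk.
c_in, c_out = 64, 64

full = conv3d_params(3, 3, 3, c_in, c_out)
# Factorized: a 1x3x3 spatial conv followed by a 3x1x1 depth-wise-axis conv.
factorized = (conv3d_params(1, 3, 3, c_in, c_out)
              + conv3d_params(3, 1, 1, c_out, c_out))

print(full, factorized)              # 110592 49152
print(round(factorized / full, 2))   # 0.44
```

With equal input and output channels, the factorized pair keeps 12/27 (about 44%) of the original parameters, which is the kind of reduction that makes volumetric models practical in production.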
Manno, Galleria 1, 2nd floor @12h00