30 March 2021

In recent years, considerable research has been pursued at the interface between dynamical system theory and deep learning, leading to advances in both fields. In this talk, I will discuss two approaches for dynamical system modelling with deep learning tools and concepts that we are developing at IDSIA. In the first approach, we adopt tailor-made state-space model structures where neural networks describe the most uncertain system components, while we retain structural/physical knowledge, if available. Specialised training algorithms for these model structures are also discussed. The second approach is based on a neural network architecture, called dynoNet, where linear dynamical operators parametrised as rational transfer functions are used as elementary building blocks. Owing to the rational transfer function parametrisation, these blocks can describe infinite impulse response (IIR) filtering operations. Thus, dynoNet may be seen as an extension of the 1D-CNN architecture, as the 1D-Convolutional units of 1D-CNNs correspond to the finite impulse response (FIR) filtering case.
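The distinction between the FIR filtering of a 1D-Convolutional unit and the IIR filtering of a dynoNet block can be illustrated with a minimal sketch. The helper `lti_filter` below is hypothetical (not from the dynoNet library) and implements the recursion of a rational transfer function G(q) = B(q)/A(q) in plain Python:

```python
def lti_filter(b, a, u):
    """Apply a rational transfer function G(q) = B(q)/A(q) to the input sequence u.

    b: numerator coefficients [b0, b1, ..., bm]
    a: denominator coefficients [a1, ..., an] (a0 = 1 is implied)
    Implements the recursion y[t] = sum_i b[i]*u[t-i] - sum_j a[j+1]*y[t-1-j].
    """
    y = []
    for t in range(len(u)):
        acc = sum(b[i] * u[t - i] for i in range(len(b)) if t - i >= 0)
        acc -= sum(a[j] * y[t - 1 - j] for j in range(len(a)) if t - 1 - j >= 0)
        y.append(acc)
    return y

impulse = [1.0, 0.0, 0.0, 0.0]

# FIR case (a 1D-CNN unit): trivial denominator, response dies out after len(b) steps
y_fir = lti_filter([0.5, 0.5], [], impulse)   # [0.5, 0.5, 0.0, 0.0]

# IIR case (a dynoNet block): a nonzero denominator gives an infinite impulse response
y_iir = lti_filter([1.0], [-0.5], impulse)    # [1.0, 0.5, 0.25, 0.125]
```

In dynoNet, the coefficients in `b` and `a` are the trainable parameters, learned by backpropagating through this recursion.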
Online - 11:30

18 December 2020

Econophysics and economics: two scientific disciplines that have carved a long way into the subject of financial markets over the last 30 years, providing new theoretical models, methods and results. Nevertheless, despite sharing the same object of scientific investigation, they proceed on strictly separate paths, with an almost complete lack of dialogue. Starting from a crucial problem in financial markets (the early detection of “abnormal behaviours” such as financial crashes or bubbles), the aim of my research is to bring in the best of both worlds: the trends and explanations via rational behaviour from economics, and the apparent extreme behaviours from econophysics. The conceptual bridge is provided by the notion of time asymmetry (i.e. irreversibility) as a fundamental component of economic behaviour. The asymmetry is readily seen by direct inspection of most financial time series, which make clear that the signal is not generated by an equilibrium process. We can model this departure from equilibrium using concepts from Prigogine’s thermodynamics (dissipative structures) and thereby explain the general dynamics of financial observables, rather than only the trend-like behaviour or the formation of bubbles and crashes. Within the dissipative-structures paradigm, I identify news as the crucial parameter explaining the phase changes of the financial market, viewed as a complex system. The role of AI in this conceptual model is to find ways to demonstrate this role of the news, by measuring the level of entropy implied by the conveyed signal. The fundamental hypothesis is that a high entropy of the message contained in the news corresponds to a stable market, whereas low entropy characterises financial crashes.
The aim of my research is therefore to develop algorithms that enable a computer to classify financial news as low- or high-entropy information, thus helping financial operators to identify the trend in the market.
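The entropy measure at the heart of this hypothesis can be sketched in a few lines. The sample headlines and the word-level empirical distribution below are illustrative assumptions, not part of the research itself:

```python
import math
from collections import Counter

def shannon_entropy(words):
    """Shannon entropy (in bits) of the empirical word distribution of a message."""
    counts = Counter(words)
    n = len(words)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy examples: a varied (high-entropy) news flow vs a repetitive (low-entropy) one
calm = "markets mixed as investors weigh earnings inflation oil currencies".split()
panic = "crash crash crash sell sell sell crash sell".split()

# Under the hypothesis, the first message pattern accompanies a stable market,
# while the second, low-entropy pattern signals a crash regime.
```

A classifier would threshold (or learn a decision boundary over) such entropy estimates to flag news streams associated with abnormal market behaviour.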

10 December 2020

Multi-period planning problems have attracted increasing industry interest, reflected in IDSIA's project pipeline. On the research side, this demand is complemented by the emergence of strong Monte Carlo search techniques in recent years. The resulting decision-making agents have demonstrated super-human performance in complex game-play scenarios such as Hex, Go, Chess and StarCraft. This talk introduces Monte Carlo planning theory and applies it to two planning problems driven by industry demand: the hedging of financial derivative contracts (with UBS) and military decision making (with ArmaSuisse).
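At the core of most Monte Carlo planning agents is the tree-policy selection step, typically the UCB1 rule balancing exploitation against exploration. The sketch below is a generic illustration of that rule, not code from the projects mentioned above:

```python
import math

def ucb1(total_value, visits, parent_visits, c=math.sqrt(2)):
    """UCB1 score: empirical mean value plus an exploration bonus."""
    if visits == 0:
        return float("inf")  # always try an unvisited child first
    return total_value / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select(children):
    """Pick the index of the child maximising UCB1.

    children: list of (total_value, visit_count) statistics per child node.
    """
    parent_visits = sum(v for _, v in children)
    scores = [ucb1(w, v, parent_visits) for w, v in children]
    return scores.index(max(scores))
```

A full Monte Carlo Tree Search repeats selection, expansion, simulated rollout and value backup; the statistics in `children` are updated after every rollout.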

29 October 2020

Metabolism is central to all processes of life, and the metabolome -- the large-scale measurement of the quantities of small molecular entities in cells and tissues -- gives a readout of cellular functioning at a point in time. Harnessing metabolomic information together with transcriptomic information about gene expression allows for multi-level insights into genetic dysregulation and its cellular effects. I will describe a multi-omics approach based on genome-scale modelling that integrates the two levels and provides insights into the systems-level deregulation of cellular function due to ageing, by transforming the cellular reaction space into a constraint-based linear optimisation problem. Metabolic models such as these and their interpretation depend on publicly available data about small molecular metabolites. Chemical ontologies provide structured classifications of chemical entities that can be used for navigation and filtering of chemical space. ChEBI is a prominent example of a chemical ontology, widely used in life science contexts, including to annotate metabolites in genome-scale models. However, ChEBI is manually maintained and as such does not scale to the full range of metabolites in all organisms. There is a need for tools that automatically classify chemical data into chemical ontologies -- a task that can be framed as a hierarchical multi-class classification problem over chemical structures, which are represented as connected graphs of atoms and bonds. I will discuss recent efforts to evaluate machine learning approaches for this task, comparing different learning frameworks including logistic regression, decision trees and LSTMs, and different encoding approaches for the chemical structures, including cheminformatics 'fingerprints' (feature vectors) and character-based encodings of chemical line-notation structural representations.
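To make the character-based encoding concrete, here is a minimal sketch of turning a SMILES line-notation string into a one-hot matrix, the typical input form for an LSTM. The toy corpus and the helper `one_hot_encode` are illustrative assumptions, not the talk's actual pipeline:

```python
def one_hot_encode(smiles, alphabet):
    """Character-level one-hot encoding of a SMILES string.

    Returns a len(smiles) x len(alphabet) 0/1 matrix (list of rows).
    """
    index = {ch: i for i, ch in enumerate(alphabet)}
    return [[1 if index[ch] == j else 0 for j in range(len(alphabet))]
            for ch in smiles]

# Build the character alphabet from a tiny toy corpus of SMILES strings
corpus = ["CCO", "c1ccccc1", "CC(=O)O"]  # ethanol, benzene, acetic acid
alphabet = sorted(set("".join(corpus)))

matrix = one_hot_encode("CCO", alphabet)  # 3 rows, one per character
```

Fingerprint encodings, by contrast, map the whole molecular graph to a single fixed-length bit vector, which suits frameworks such as logistic regression or decision trees.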

Autonomous systems have become an interconnected part of everyday life thanks to recent increases in the computational power available both for onboard computers and for offline data processing. Two main research communities, Optimal Control and Reinforcement Learning, stand out in the field of autonomous systems, each with a vastly different perspective on the control problem. While model-based controllers offer stability guarantees and are used in nearly all real-world systems, they require a model of the system and of the operating environment. The training of learning-based controllers is currently mostly limited to simulators, which likewise require such a model. It is not possible to model at design time every operating scenario an autonomous system may encounter in the real world, and no control methods currently exist for such scenarios. In this seminar, we present a hybrid control framework, comprising a learning-based supervisory controller and a set of model-based low-level controllers, that improves a system’s robustness to unknown operating conditions.
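The structure of such a hybrid framework can be sketched as a supervisor that selects which model-based control law to apply at each step. The class, the scalar control laws and the rule standing in for the learned supervisor below are all illustrative assumptions:

```python
class HybridController:
    """A supervisory policy selects among model-based low-level controllers."""

    def __init__(self, low_level, supervisor):
        self.low_level = low_level    # dict: mode name -> control law u = f(x)
        self.supervisor = supervisor  # callable: observation -> mode name

    def control(self, x):
        mode = self.supervisor(x)     # the learned component picks the mode
        return self.low_level[mode](x)  # the model-based law computes the input

# Two model-based laws (e.g. gains tuned on the nominal model) and a simple
# threshold rule standing in for the learned supervisory controller
controllers = {
    "nominal":  lambda x: -1.0 * x,   # aggressive gain for nominal conditions
    "cautious": lambda x: -0.2 * x,   # conservative gain for unmodelled conditions
}
supervisor = lambda x: "cautious" if abs(x) > 5.0 else "nominal"

ctrl = HybridController(controllers, supervisor)
```

In the framework presented, the supervisor would be trained (e.g. by reinforcement learning) while each low-level controller keeps its model-based stability guarantees within its own operating regime.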
Online - 14h00

Recent advances in Deep Learning have made many Computer Vision tasks much easier to tackle. However, working with small amounts of data and highly imbalanced real-world datasets can still be very challenging. In this talk, I will present two of my recent projects in which modelling and training occur under those circumstances. First, I will introduce a novel 3D UNet-like model for fast volumetric segmentation of lung cancer nodules in Computed Tomography (CT) imagery. This model relies heavily on kernel factorisation and other architectural improvements to reduce the number of parameters and the computational load, allowing its successful use in production. Second, I will discuss the use of representation learning, or similarity metric learning, for few-shot classification tasks, and more specifically its use in a competition at NeurIPS 2019 and on Kaggle. The competition aimed to detect the effects of over 1000 different genetic treatments on 4 types of human cells, and published a dataset of 6-channel fluorescent microscopy images with only a handful of samples per target class.
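The parameter savings from kernel factorisation are easy to quantify. The arithmetic below is a generic illustration (the channel counts and factorisation pattern are assumptions, not the talk's actual architecture): replacing one full 3x3x3 convolution with an in-plane 3x3x1 convolution followed by a through-plane 1x1x3 one.

```python
def conv3d_params(in_ch, out_ch, kernel):
    """Parameter count of a 3D convolution with kernel (k0, k1, k2), no bias."""
    k0, k1, k2 = kernel
    return in_ch * out_ch * k0 * k1 * k2

# One full 3x3x3 convolution between 32-channel feature maps
full = conv3d_params(32, 32, (3, 3, 3))      # 32*32*27 = 27648 parameters

# Factorised alternative: a 3x3x1 spatial conv followed by a 1x1x3 depth conv
factored = conv3d_params(32, 32, (3, 3, 1)) + conv3d_params(32, 32, (1, 1, 3))
# 32*32*9 + 32*32*3 = 12288 parameters, roughly 2.25x fewer
```

Savings of this order, repeated across every block of a 3D UNet, are what make the difference between a model that fits production latency budgets and one that does not.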
Manno, Galleria 1, 2nd floor @12h00