Events

Autonomous systems have become an interconnected part of everyday life with the recent increases in computational power available for both onboard computers and offline data processing. Two main research communities, Optimal Control and Reinforcement Learning, stand out in the field of autonomous systems, each with a vastly different perspective on the control problem. While model-based controllers offer stability guarantees and are used in nearly all real-world systems, they require a model of the system and operating environment. The training of learning-based controllers is currently mostly limited to simulators, which also require a model of the system and operating environment. It is not possible to model, at design time, every operating scenario an autonomous system can encounter in the real world, and currently no control methods exist for such scenarios. In this seminar, we present a hybrid control framework, composed of a learning-based supervisory controller and a set of model-based low-level controllers, that can improve a system's robustness to unknown operating conditions.
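For illustration, a minimal sketch of the kind of hybrid loop described above: a learned supervisor selects, at each step, which model-based low-level controller to run. The PID controllers, the linear supervisor, and the toy plant are all illustrative assumptions, not the framework presented in the talk.

```python
# Hybrid control sketch: a learned supervisory policy picks among model-based
# low-level controllers. All names and parameters here are illustrative only.
import numpy as np


class PID:
    """Simple model-based low-level controller."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_error = 0.0, 0.0

    def control(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def supervisor(observation, weights):
    """Learned supervisory policy: scores each low-level controller from the
    current observation (here just a linear layer followed by argmax)."""
    logits = weights @ observation
    return int(np.argmax(logits))


controllers = [PID(2.0, 0.1, 0.05), PID(0.5, 0.0, 0.2)]   # tuned for different regimes
weights = np.random.randn(len(controllers), 3) * 0.1       # stands in for trained parameters

state, setpoint, dt = np.zeros(3), 1.0, 0.01
for _ in range(100):
    idx = supervisor(state, weights)                       # supervisor selects a controller
    u = controllers[idx].control(setpoint - state[0], dt)  # selected controller computes the action
    state[0] += dt * (u - 0.5 * state[0])                  # toy first-order plant
```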
Online - 14h00

The recent advances in Deep Learning have made many tasks in Computer Vision much easier to tackle. However, working with small amounts of data and highly imbalanced real-world datasets can still be very challenging. In this talk, I will present two of my recent projects in which modelling and training occur under those circumstances. Firstly, I will introduce a novel 3D UNet-like model for fast volumetric segmentation of lung cancer nodules in Computed Tomography (CT) imagery. This model relies heavily on kernel factorisation and other architectural improvements to reduce the number of parameters and the computational load, allowing its successful use in production. Secondly, I will discuss the use of representation learning, or similarity metric learning, for few-shot classification tasks, and more specifically its use in a competition at NeurIPS 2019 and Kaggle. This competition aimed to detect the effects of over 1000 different genetic treatments on 4 types of human cells, and published a dataset composed of 6-channel fluorescent microscopy images with only a handful of samples per target class.
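As a rough illustration of the kernel factorisation idea mentioned above, a dense 3x3x3 convolution can be split into an in-plane 3x3x1 convolution followed by a through-plane 1x1x3 one, cutting the weight count from about 27·C·C to 12·C·C. The sketch below is a generic PyTorch example under that assumption; the exact factorisation used in the talk's model is not specified here.

```python
# Factorised 3D convolution block: an in-plane conv followed by a through-plane
# conv replaces a full 3x3x3 kernel, reducing parameters and compute.
import torch
import torch.nn as nn


class FactorisedConv3d(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.in_plane = nn.Conv3d(in_ch, out_ch, kernel_size=(3, 3, 1), padding=(1, 1, 0))
        self.through_plane = nn.Conv3d(out_ch, out_ch, kernel_size=(1, 1, 3), padding=(0, 0, 1))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.through_plane(self.act(self.in_plane(x))))


x = torch.randn(1, 8, 64, 64, 32)   # CT-like volume: (batch, channels, spatial dims)
block = FactorisedConv3d(8, 16)
print(block(x).shape)                # torch.Size([1, 16, 64, 64, 32])
```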
Manno, Galleria 1, 2nd floor @12h00

30 March 2021

In recent years, considerable research has been pursued at the interface between dynamical system theory and deep learning, leading to advances in both fields. In this talk, I will discuss two approaches for dynamical system modelling with deep learning tools and concepts that we are developing at IDSIA. In the first approach, we adopt tailor-made state-space model structures where neural networks describe the most uncertain system components, while we retain structural/physical knowledge, if available. Specialised training algorithms for these model structures are also discussed. The second approach is based on a neural network architecture, called dynoNet, where linear dynamical operators parametrised as rational transfer functions are used as elementary building blocks. Owing to the rational transfer function parametrisation, these blocks can describe infinite impulse response (IIR) filtering operations. Thus, dynoNet may be seen as an extension of the 1D-CNN architecture, as the 1D-Convolutional units of 1D-CNNs correspond to the finite impulse response (FIR) filtering case.
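To make the FIR/IIR distinction above concrete, here is a small sketch using scipy for illustration only: the same numerator coefficients that a 1D-CNN unit would apply as an FIR filter become an IIR filter once a denominator is added, which is the behaviour of a rational transfer-function block. The coefficients are arbitrary; dynoNet itself implements such blocks as differentiable operators trained end-to-end.

```python
# FIR vs IIR filtering of a signal: adding denominator coefficients turns a
# convolutional (FIR) unit into a rational transfer-function (IIR) block.
import numpy as np
from scipy.signal import lfilter

b = [0.1, 0.05]    # numerator coefficients (the FIR part)
a = [1.0, -0.9]    # denominator coefficients; a = [1.0] reduces to a plain 1D convolution

u = np.random.randn(200)         # input signal
y_fir = lfilter(b, [1.0], u)     # finite impulse response: what a 1D-CNN unit computes
y_iir = lfilter(b, a, u)         # infinite impulse response: the rational transfer-function case
```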
Online - 11:30

12 May 2021

Most philosophers and scientists are convinced that there is a "hard problem" of consciousness and an "explanatory gap" preventing us from understanding why experiences feel the way they do and why there is "something it is like" to have them. My claim is that this attitude may derive from the (understandable) desire to preserve the last remnants of human uniqueness before the onslaught of conscious robots. My "sensorimotor" approach to understanding phenomenal consciousness suggests that this is a mistake. If we really think deeply about what we mean by having a phenomenal experience, then there is no reason why machines should not be conscious very soon, i.e. within the next decades.
Online

28 May 2021

The problem of epistemic opacity in Artificial Intelligence (AI) is often characterised as a problem of intransparent algorithms that give rise to intransparent models. However, the dichotomy in the pertinent literature between a perceived necessity of making AI algorithms transparent and the countervailing claim that they give rise to an 'essential' epistemic opacity of AI models might be false. Instead, epistemic transparency is primarily a function of the degree of an epistemic agent's perceptual or conceptual grasp of the relevant elements and relations in a model, which might vary in kind and in accordance with the pertinent level of modelling. In order to elucidate this hypothesis, I first contrast computer models and their claims to algorithm-based universality with cybernetics-style analogue models and their claims to structural isomorphism between model and target system (Black 1962). I then undertake a comparison between contemporary AI approaches that variously align with these modelling paradigms: Behaviour-based AI, Deep Learning and the Predictive Processing Paradigm. I conclude that epistemic transparency of the algorithms involved in these models is not sufficient, and might not always be necessary, to meet the condition of recognising their epistemically relevant elements.
https://supsi.zoom.us/j/63801449377?pwd=WEZiRzV0UWJ0a3kzSXpQSjZNNTl6Zz09

10 June 2021

How much is model opacity a problem for explanation and understanding from machine learning models? I argue that there are several ways in which non-epistemic values influence the extent to which model opacity undermines explanation and understanding. I consider three stages of the scientific process surrounding ML models where the influence of non-epistemic values emerges: 1) establishing the empirical link between the model and phenomenon, 2) explanation, and 3) attributions of understanding.
Online 17.00

22 September 2021

This year is the 100th anniversary of Taylor (1921) [1], which may be considered the first relevant publication on air pollution modeling. Since then, substantial progress has been made. This seminar presents a view on this topic from different angles. Air pollution modeling has been 1) investigated by scientists as a subset of the general topic of computational fluid dynamics; 2) studied for the purpose of developing computer simulation packages to be used or recommended by governmental regulatory agencies for environmental protection [2]; 3) used by industry to assess the environmental impacts of its chemical emissions; and 4) used, especially in the US, in the context of environmental litigation. This seminar will address developments and uses of air pollution models and, in particular, Monte Carlo simulation methods, which appear more suitable than other numerical techniques for reproducing turbulent diffusion in the atmosphere. Finally, the possible use of machine learning methods for air pollution problems will be briefly addressed. [1] Taylor, G.I. (1921) Diffusion by Continuous Movements. Proceedings of the London Mathematical Society, 20, 196-212. [2] E.g., https://www.epa.gov/scram
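As a minimal illustration of the Monte Carlo approach mentioned above, the sketch below follows a cloud of Lagrangian particles advected by a mean wind with exponentially correlated random velocity fluctuations, in the spirit of Taylor's "diffusion by continuous movements". All parameter values are assumed for illustration and do not come from the seminar.

```python
# Lagrangian (Monte Carlo) particle sketch of turbulent diffusion: each particle
# is carried by the mean wind plus a correlated random vertical fluctuation.
import numpy as np

rng = np.random.default_rng(0)

n_particles, n_steps, dt = 10_000, 500, 1.0   # dt in seconds
U = 5.0          # mean wind speed, m/s
sigma_w = 0.5    # std of vertical velocity fluctuations, m/s
T_L = 100.0      # Lagrangian integral time scale, s

x = np.zeros(n_particles)        # downwind position, m
z = np.full(n_particles, 50.0)   # release height, m
w = np.zeros(n_particles)        # vertical velocity fluctuation, m/s

r = np.exp(-dt / T_L)            # velocity autocorrelation over one step
for _ in range(n_steps):
    # Langevin-type update keeping the fluctuation variance stationary,
    # then advection by mean wind plus fluctuation.
    w = r * w + np.sqrt(1.0 - r**2) * sigma_w * rng.standard_normal(n_particles)
    x += U * dt
    z += w * dt

print("vertical plume spread after %.0f s: %.1f m" % (n_steps * dt, z.std()))
```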
Room A1.02 - East Campus USI-SUPSI
