Events
10 December 2021

Statistical modelling and analysis for complex biomedical data: challenges and obstacles. One of the major challenges in modern biostatistics derives from the analysis of Real World Data (RWD). Whenever the main features of controlled study designs cannot be applied, common statistical tools may fail to deal with unbalanced groups, non-homogeneous populations and the confounding that arises in large observational data. New statistical perspectives are needed to handle the wide heterogeneity of big biomedical data, which might magnify noise rather than improve inference. A deep understanding of the data-generating process is the first step towards investigating the complex dependence structure among covariates in RWD and supporting new findings in biomedical research. This seminar proposes novel statistical settings (classical and Bayesian) for the analysis of observational studies based on longitudinal, survival and cross-sectional data. Latent class models, Bayesian networks and frailty modelling are illustrated as possible tools for facing the obstacles and challenges of complex biomedical data. Examples from different biomedical frameworks, such as COVID-19 data, gene therapy and oncology, are presented.
Room D1.10 - East Campus
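As a concrete illustration of the latent class idea mentioned in the abstract above, here is a minimal sketch, assuming a synthetic two-subpopulation cohort and using scikit-learn's GaussianMixture; the feature choices, group sizes and number of classes are illustrative assumptions, not material from the seminar.

```python
# Minimal latent-class-style sketch: a two-component Gaussian mixture used to
# recover hidden subpopulations in a synthetic "observational" cohort.
# Group sizes, means and features are illustrative assumptions only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic heterogeneous cohort: two latent subpopulations with unbalanced sizes.
group_a = rng.normal(loc=[50.0, 1.0], scale=[5.0, 0.3], size=(800, 2))   # e.g. age, biomarker
group_b = rng.normal(loc=[70.0, 2.5], scale=[6.0, 0.4], size=(200, 2))
X = np.vstack([group_a, group_b])

# Fit a two-class mixture; in practice the number of classes would be chosen
# with BIC/ICL or domain knowledge rather than fixed in advance.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)

labels = gmm.predict(X)              # most likely latent class per subject
posteriors = gmm.predict_proba(X)    # class membership probabilities
print("estimated class proportions:", gmm.weights_.round(2))
print("estimated class means:\n", gmm.means_.round(2))
```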

2 December 2021

Artificial intelligence (AI) and the human brain are profoundly different cognitive systems. Nevertheless, they can interact with each other dynamically. One such way of interaction concerns the possibility of using AI to enhance human cognition. This approach, known as 'augmented intelligence', raises important ethical issues. These include implications for agency, autonomy and moral responsibility, and the moral permissibility of cognitive enhancement. This talk will offer an overview and critical analysis of the ethical implications of augmented intelligence and human-computer interaction.
Room D1.01, Sector D, first floor, Campus Est, USI-SUPSI, Lugano-Viganello

11 November 2021

Digital well-being technologies (DWTs) are systems aiming to increase an aspect of well-being (e.g., mental health, physical wellness) of users by means of personalised outputs. In this talk, I will deal with an ethical issue of two types of DWTs – recommendation systems significantly influencing one’s life and digital well-being apps for healthy adults – that so far has been overlooked: their impact on the user’s good life, namely, a life that is good for the user. I will show that since these DWTs are mainly based on the user’s narrowly defined utility function and past or similar users’ preferences, they impoverish the diversity and novelty of the user’s digital environment in terms of stimuli and opportunities. Then, I will contend that a homogeneous and immutable environment limits the individual’s good life because in such an environment, some factors that are essential to any good life are reduced: life experience, self-knowledge, and authenticity. Finally, I will outline the user’s right to an open present, which is a moral requirement for DWTs that protects the user’s good life from the negative effects of DWTs.
Campus Est USI-SUPSI, Lugano-Viganello, room D1.01 (Sector D, first floor)
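To make the homogenisation mechanism described in the abstract above concrete, here is a toy sketch, not any actual DWT: a recommender that always serves the catalogue item closest to a user profile that drifts towards what it serves, with the mean pairwise distance of consumed items as a crude diversity proxy. The catalogue, update rule and measure are all assumptions made for illustration.

```python
# Toy illustration of a similarity-driven recommender narrowing a user's
# digital environment. Everything here is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(1)
catalogue = rng.uniform(size=(500, 8))    # 500 items, 8 abstract "content" features
profile = rng.uniform(size=8)             # user's estimated preference/utility vector
consumed = []

def diversity(items):
    """Mean pairwise Euclidean distance of consumed items (crude novelty proxy)."""
    if len(items) < 2:
        return 0.0
    arr = np.array(items)
    dists = [np.linalg.norm(a - b) for i, a in enumerate(arr) for b in arr[i + 1:]]
    return float(np.mean(dists))

for step in range(30):
    # Recommend the not-yet-consumed item most similar to the current profile.
    scores = -np.linalg.norm(catalogue - profile, axis=1)
    idx = int(np.argmax(scores))
    item = catalogue[idx]
    consumed.append(item)
    catalogue = np.delete(catalogue, idx, axis=0)
    # The profile drifts towards what was just consumed, narrowing future picks.
    profile = 0.9 * profile + 0.1 * item
    if step % 10 == 9:
        print(f"after {step + 1} items, diversity = {diversity(consumed):.3f}")

# Random picks from the same catalogue retain higher diversity.
random_items = rng.uniform(size=(30, 8))
print(f"random baseline diversity = {diversity(list(random_items)):.3f}")
```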

22 September 2021

This year marks the 100th anniversary of Taylor (1921) [1], which may be considered the first relevant publication on air pollution modeling. Since then, substantial progress has been made. This seminar presents a view on this topic from different angles. Air pollution modeling has been 1) investigated by scientists, as a subset of the general topic of computational fluid dynamics; 2) studied for the purpose of developing computer simulation packages to be used or recommended by governmental regulatory agencies for environmental protection [2]; 3) used by the industry to assess the environmental impacts of its chemical emissions; and 4) used, especially in the US, in the context of environmental litigation. This seminar will address developments and uses of air pollution models and, in particular, Monte Carlo simulation methods, which appear more suitable than other numerical techniques for reproducing turbulent diffusion in the atmosphere. Finally, the possible use of machine learning methods for air pollution problems will be briefly addressed. [1] Taylor, G.I. (1921) Diffusion by Continuous Movements. Proceedings of the London Mathematical Society, 20, 196-212. [2] E.g., https://www.epa.gov/scram
Room A1.02 - East Campus USI-SUPSI
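As a minimal illustration of the Monte Carlo approach mentioned in the abstract above, the sketch below implements a zero-memory Lagrangian random-walk dispersion model; the wind speed, turbulence intensity and time step are illustrative assumptions, and a realistic model would use temporally correlated (Langevin-type) velocities.

```python
# Minimal Lagrangian Monte Carlo sketch of turbulent diffusion: particles are
# advected by a mean wind and perturbed by random velocity fluctuations.
# All parameter values below are illustrative assumptions, not from the seminar.
import numpy as np

rng = np.random.default_rng(42)

n_particles = 10_000
n_steps = 600
dt = 1.0          # time step [s]
u_mean = 5.0      # mean wind along x [m/s]
sigma_v = 0.5     # std of turbulent velocity fluctuations [m/s]

# All particles released from a point source at the origin.
x = np.zeros(n_particles)
y = np.zeros(n_particles)

for _ in range(n_steps):
    # Zero-memory random-walk closure: independent Gaussian velocity
    # fluctuations each step; a Langevin model would add temporal correlation.
    x += (u_mean + rng.normal(0.0, sigma_v, n_particles)) * dt
    y += rng.normal(0.0, sigma_v, n_particles) * dt

# In this zero-memory limit the crosswind spread grows like sqrt(t), i.e. the
# diffusive long-time regime of Taylor (1921); the short-time ballistic regime
# would require correlated velocities.
print(f"mean downwind distance: {x.mean():.1f} m")
print(f"crosswind spread sigma_y: {y.std():.1f} m")
```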

10 June 2021

How values encroach on understanding from opaque machine learning models. How much is model opacity a problem for explanation and understanding from machine learning models? I argue that there are several ways in which non-epistemic values influence the extent to which model opacity undermines explanation and understanding. I consider three stages of the scientific process surrounding ML models where the influence of non-epistemic values emerges: 1) establishing the empirical link between the model and phenomenon, 2) explanation, and 3) attributions of understanding.
Online, 17:00

28 May 2021

The problem of epistemic opacity in Artificial Intelligence (AI) is often characterised as a problem of intransparent algorithms that give rise to intransparent models. However, the dichotomy in the pertinent literature between a perceived necessity of making AI algorithms transparent and the countervailing claim that they give rise to an 'essential' epistemic opacity of AI models might be false. Instead, epistemic transparency is primarily a function of the degrees of an epistemic agent's perceptual or conceptual grasp of the relevant elements and relations in a model, which might vary in kind and in accordance with the pertinent level of modelling. In order to elucidate this hypothesis, I first contrast computer models and their claims to algorithm-based universality with cybernetics-style analogue models and their claims to structural isomorphism between model and target system (Black 1962). I then undertake a comparison between contemporary AI approaches that variously align with these modelling paradigms: Behaviour-based AI, Deep Learning and the Predictive Processing Paradigm. I conclude that epistemic transparency of the algorithms involved in these models is not sufficient and might not always be necessary to meet the condition of recognising their epistemically relevant elements.
https://supsi.zoom.us/j/63801449377?pwd=WEZiRzV0UWJ0a3kzSXpQSjZNNTl6Zz09

12 May 2021

Why machines will soon have phenomenal consciousness. Most philosophers and scientists are convinced that there is a "hard problem" of consciousness and an "explanatory gap" preventing us from understanding why experiences feel the way they do and why there is "something it is like" to have them. My claim is that this attitude may derive from the (understandable) desire to preserve the last remnants of human uniqueness before the onslaught of conscious robots. My "sensorimotor" approach to understanding phenomenal consciousness suggests that this is a mistake. If we really think deeply about what we mean by having a phenomenal experience, then there is no reason why machines should not be conscious very soon, i.e. within the next few decades.
Online

21 April 2021

Although models developed using machine learning are increasingly prevalent in science, their opacity can limit their scientific utility. Explainable AI (XAI) aims to diminish this impact by rendering opaque models transparent. But XAI is more than just the solution to a problem: it also plays an invaluable exploratory role. In this talk, I will introduce a series of XAI techniques and in each case demonstrate their potential usefulness for scientific exploration. In particular, I argue that these tools can be used to (1) better understand what an ML model is a model of, (2) engage in causal inference over high-dimensional nonlinear systems, and (3) generate "algorithmic-level" hypotheses in cognitive science.
Online
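For a concrete flavour of one technique of the kind surveyed in the talk above, the sketch below applies permutation feature importance to an opaque random-forest model; the synthetic dataset and model choice are assumptions made purely for illustration.

```python
# Minimal sketch of one common XAI technique: permutation feature importance
# applied to an otherwise opaque model. Data and model choice are illustrative
# assumptions; the talk itself covers a broader range of XAI methods.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record how much the test score drops:
# large drops suggest the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```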

7 April 2021

Autonomous systems have become an interconnected part of everyday life with the recent increases in computational power available for both onboard computers and offline data processing. Two main research communities, Optimal Control and Reinforcement Learning, stand out in the field of autonomous systems, each with a vastly different perspective on the control problem. While model-based controllers offer stability guarantees and are used in nearly all real-world systems, they require a model of the system and operating environment. The training of learning-based controllers is currently mostly limited to simulators, which also require a model of the system and operating environment. It is not possible to model, at design time, every operating scenario an autonomous system can encounter in the real world, and currently no control methods exist for such scenarios. In this seminar, we present a hybrid control framework, comprised of a learning-based supervisory controller and a set of model-based low-level controllers, that can improve a system's robustness to unknown operating conditions.
Online - 14h00
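As a toy illustration of the hybrid architecture described in the abstract above, the sketch below pairs three fixed low-level proportional controllers with a simple bandit-style learning supervisor; the plant model, controller gains and supervisor rule are illustrative assumptions, not the framework presented in the seminar.

```python
# Toy sketch of a hybrid control scheme: several fixed low-level (model-based)
# controllers plus a learning supervisor that picks which one to engage based
# on observed performance. All numbers below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate_episode(gain, disturbance):
    """First-order plant x' = -x + u + d under proportional control u = -gain * x."""
    x, cost = 2.0, 0.0
    for _ in range(200):
        u = -gain * x
        x += 0.05 * (-x + u + disturbance)
        cost += x * x + 0.01 * u * u
    return cost

low_level_gains = [0.5, 2.0, 8.0]        # three pre-designed low-level controllers
q = np.zeros(len(low_level_gains))       # supervisor's cost estimate per controller
counts = np.zeros(len(low_level_gains))

for episode in range(300):
    disturbance = rng.normal(0.0, 1.0)   # unknown operating condition
    # Epsilon-greedy supervisory choice among the low-level controllers.
    k = rng.integers(len(low_level_gains)) if rng.random() < 0.1 else int(np.argmin(q))
    cost = simulate_episode(low_level_gains[k], disturbance)
    counts[k] += 1
    q[k] += (cost - q[k]) / counts[k]    # incremental mean of observed cost

print("average cost per controller:", q.round(2))
print("controller preferred by the supervisor:", int(np.argmin(q)))
```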

30 March 2021

In recent years, considerable research has been pursued at the interface between dynamical system theory and deep learning, leading to advances in both fields. In this talk, I will discuss two approaches for dynamical system modelling with deep learning tools and concepts that we are developing at IDSIA. In the first approach, we adopt tailor-made state-space model structures where neural networks describe the most uncertain system components, while we retain structural/physical knowledge, if available. Specialised training algorithms for these model structures are also discussed. The second approach is based on a neural network architecture, called dynoNet, where linear dynamical operators parametrised as rational transfer functions are used as elementary building blocks. Owing to the rational transfer function parametrisation, these blocks can describe infinite impulse response (IIR) filtering operations. Thus, dynoNet may be seen as an extension of the 1D-CNN architecture, as the 1D-Convolutional units of 1D-CNNs correspond to the finite impulse response (FIR) filtering case.
Online - 11:30
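To make the building block described above concrete, the sketch below applies a rational-transfer-function (IIR) operator followed by a static nonlinearity using scipy.signal.lfilter; the coefficients and one-block composition are illustrative assumptions, and this is not the actual dynoNet implementation, which makes such blocks differentiable for end-to-end training.

```python
# Minimal sketch of a dynoNet-style building block: a linear dynamical operator
# parametrised as a rational transfer function G(q) = B(q)/A(q), applied as an
# IIR filter and composed with a static nonlinearity. Coefficients are
# illustrative assumptions; this is not the actual dynoNet code.
import numpy as np
from scipy.signal import lfilter

# Numerator B and denominator A of the transfer function. With poles inside the
# unit circle this is an infinite impulse response (IIR) operator, unlike the
# finite impulse response units of a 1D-CNN.
b = [0.1, 0.05]          # numerator coefficients
a = [1.0, -1.4, 0.45]    # denominator coefficients (stable: roots at 0.9 and 0.5)

def dyno_block(u):
    """One linear dynamical operator followed by a static nonlinearity."""
    y_lin = lfilter(b, a, u)    # dynamic (IIR) part
    return np.tanh(y_lin)       # static nonlinear part

# Feed a step input through the block and inspect the response.
u = np.ones(100)
y = dyno_block(u)
print("output after 100 steps:", y[-1].round(3))
```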
