### Stefano Fiorini: Unlocking Reassembly Tasks - Reconstructing 2D and 3D Worlds with Deep Learning

Reassembly tasks are fundamental human skills acquired during early development; we therefore believe that mastering them is a necessary step toward general artificial intelligence (AI). Solving reassembly tasks in both 2D and 3D is relevant across fields such as biology, computer vision, and cultural heritage. Existing approaches, however, tend to focus on a single task and modality, lacking a unified framework. In this presentation, we will introduce a novel unified framework based on a Graph Neural Network architecture. We will also explore the benefits and issues of adopting a diffusion process, in which we inject noise into the elements' positions and orientations and then iteratively denoise them to reconstruct coherent poses. Our study reveals the shared fundamentals of the 2D and 3D tasks, emphasizing rotation-equivariant representations as a common inductive bias that improves performance in both modalities.
*East Campus USI-SUPSI, Room B1.09*
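As a rough illustration of the noising half of such a diffusion process (a hypothetical NumPy sketch, not the framework presented in the talk: the schedule, step count, and pose encoding are all invented for illustration), the closed-form forward step perturbs fragment poses toward pure noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear variance schedule and the closed-form forward (noising) step
# of a DDPM-style diffusion process, applied to toy 2D fragment poses.
T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)   # cumulative fraction of signal kept

poses = rng.normal(size=(8, 3))        # 8 fragments: (x, y, orientation)

def noise_poses(poses, t):
    """Sample the noised poses at diffusion step t in one shot."""
    eps = rng.normal(size=poses.shape)
    return np.sqrt(alphas_bar[t]) * poses + np.sqrt(1.0 - alphas_bar[t]) * eps

noisy = noise_poses(poses, T - 1)      # near-pure noise at the final step
```

Reconstruction would then run a learned denoiser in the reverse direction, step by step, until the poses are coherent again.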

### Enrico Giudice: Bayesian Causal Inference with Gaussian Process Networks

Causal inference from observational data is a compelling problem in statistics, which has attracted much attention due to its potential application in various scientific fields. Estimating the effects of a manipulation on a system of random variables, however, poses both modeling and computational challenges, which are typically addressed by imposing strict assumptions, such as linearity, on the joint distribution. One appealing approach is to model the system as a Gaussian process network (GPN), which allows describing the causal relationships among a set of random variables with minimal parametric assumptions. In the absence of prior knowledge of the underlying causal graph, a fully Bayesian approach requires integrating the causal quantity of interest over the posterior over graphs, which is computationally infeasible even in low dimensions. By harnessing Monte Carlo and Markov chain Monte Carlo methods, we can sample from the posterior distribution of network structures, thus providing an accurate approximation of the posterior. Causal inference across the whole GPN can then be performed while also accounting for uncertainty in the causal graph. Simulation studies show that our approach is able to identify the effects of hypothetical interventions with non-Gaussian, non-linear observational data and accurately reflect the posterior uncertainty of the causal estimates. Finally, we compare the results of our GPN-based causal inference approach to existing methods on a real dataset of *A. thaliana* gene expressions.
*East Campus USI-SUPSI, Room C2.09*
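The model-averaging step can be pictured with a toy example (every number and graph label below is invented; the talk's actual estimates come from MCMC over GPN structures): the posterior causal effect is simply the average of per-graph effects across the sampled graphs.

```python
import numpy as np

# Toy Bayesian model averaging over graph structures: each MCMC draw is a
# graph, and the posterior effect estimate averages the per-graph effects,
# so graphs visited more often contribute proportionally more weight.
sampled_graphs = ["X->Y", "X->Y", "X<-Z->Y"]          # hypothetical MCMC draws
effect_given_graph = {"X->Y": 2.0, "X<-Z->Y": 0.0}    # hypothetical effects

effects = np.array([effect_given_graph[g] for g in sampled_graphs])
posterior_mean = effects.mean()   # point estimate of the causal effect
posterior_sd = effects.std()      # spread reflects graph uncertainty
```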

### Charl Ras: The multi-source multi-sink directed Steiner tree problem in the plane

The Euclidean Steiner tree problem requires a shortest network interconnecting a given set of points in the plane. Additional vertices may be introduced and are called Steiner points. This is a well-studied, but NP-hard problem. Nevertheless, the current flagship algorithm can exactly solve instances on many thousands of input points. There are many variations to this classical problem. In this presentation we will look at a multi-source multi-sink directed version. For two given sets of points A (the sources) and B (the sinks), the task is to find a minimum length network such that there exists a directed path between every source and every sink. We will share some known structural results on optimal solutions and discuss the current state of algorithmic approaches for finding exact solutions.
*East Campus USI-SUPSI, Room B1.14*

### Emily Chang: Bridging Geometric and Information-theoretic Compression in Language Models

For a language model (LM) to faithfully model human language, it must compress vast, potentially infinite information into a relatively low-dimensional space. On this topic, I will present a recent work with Corentin Kervadec and Marco Baroni to appear at EMNLP. We propose analyzing compression in (pre-trained) LMs from two points of view: geometric and information-theoretic. We demonstrate that the two views are highly correlated, such that the intrinsic geometric dimension of linguistic data predicts their coding length under the LM. We then show that, in turn, high compression of a linguistic dataset predicts rapid adaptation to that dataset, confirming that being able to compress linguistic information is an important part of successful LM performance. As a practical byproduct of our analysis, we evaluate a battery of intrinsic dimension estimators for the first time on linguistic data, showing that only some encapsulate the relationship between information-theoretic compression, geometric compression, and ease-of-adaptation.
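One estimator from such a battery, TwoNN (Facco et al., 2017), can be sketched in a few lines of NumPy; this is a minimal illustration of the idea, not the evaluation code used in the paper:

```python
import numpy as np

def twonn_dimension(X):
    """Maximum-likelihood TwoNN estimate of intrinsic dimension, based on
    the ratio of each point's second to first nearest-neighbour distance."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)            # exclude self-distances
    nearest = np.sort(d2, axis=1)[:, :2]    # squared r1, r2 for each point
    mu = np.sqrt(nearest[:, 1] / nearest[:, 0])
    return len(mu) / np.sum(np.log(mu))

# Points drawn from a 2-dimensional manifold should give an estimate near 2.
rng = np.random.default_rng(0)
dim = twonn_dimension(rng.random((500, 2)))
```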
*Room D1.14 - East Campus USI-SUPSI*

### Workshop: Explainable AI in medicine - A critical appraisal of limitations and insights for future developments

The aim of the meeting, organised under the auspices of IDSIA USI-SUPSI, the College of Humanities at EPFL and the Digital Society Initiative at the University of Zurich, is to bring together experts from different fields such as philosophy, bioethics, AI ethics, XAI, and human-computer interaction to discuss whether, how, and to what extent proposed solutions to the black-box problem are effective in supporting the successful integration and appropriation of AI systems in medical practice.
*East Campus USI-SUPSI - Room C1.02*

### Claudio Mancinelli: Computing the Riemannian center of mass on meshes

The Riemannian center of mass provides the equivalent of the Euclidean affine average on manifolds. In spite of its many potential applications in computer graphics and geometric modeling, there exist surprisingly few algorithms to compute it. In this talk, a direct method for computing the Riemannian center of mass on a triangle mesh is described. The method works in the polyhedral metric and uses a piecewise-linear interpolation of the gradients of the distance fields from a set of control points. Applications to tracing splines on a surface and comparisons with state-of-the-art methods will be presented as well.
*East Campus USI-SUPSI - Room D0.02*
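The underlying object is the minimizer of a weighted sum of squared geodesic distances. In the Euclidean plane, where the exponential and logarithm maps are trivial, the standard fixed-point iteration reduces to a few lines (a hypothetical sketch: on a mesh, `p - x` would be replaced by gradients of geodesic distance fields):

```python
import numpy as np

def karcher_mean(points, weights, iters=50, step=0.5):
    """Fixed-point iteration for the (Euclidean) Riemannian center of mass:
    repeatedly move along the weighted mean of the 'log map' directions."""
    x = np.asarray(points[0], dtype=float).copy()
    total = sum(weights)
    for _ in range(iters):
        direction = sum(w * (np.asarray(p) - x) for w, p in zip(weights, points))
        x = x + step * direction / total
    return x

corners = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
center = karcher_mean(corners, [1.0, 1.0, 1.0, 1.0])   # converges near (0.5, 0.5)
```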

### Chiara Manganini: What Should a Machine Learning Ontology Look Like?

The philosophy of computer science has recently addressed the ontological question of what constitutes a physical computational artifact. Despite their different nuances, all the analyses proposed so far largely rest on the central notions of specification, implementation, and correctness. In this talk, I extend this debate to machine learning (ML) systems, showing that all three of these concepts need to undergo a substantial revision when it comes to ML artifacts. An ML ontology should accommodate a new epistemological role played by specification, defined as the set of functional requirements the artifact must satisfy. In predictive contexts, in fact, specifications are discovered through the actual training process rather than fixed from the beginning, which in turn has deep consequences for the relevant notions of correctness and implementation involved. A revised framework should allow us to formulate new and systematic insights into the notions of correctness and miscomputation, but also fairness and bias, which are particularly relevant in many decision-making contexts.
*East Campus USI-SUPSI*

### Kevin B Lee and Andrea Rizzoli in conversation on the future of intelligence

On Wednesday, August 9th, "A Long Night of Dreaming about The Future of Intelligence" takes place at the Locarno Film Festival. From sunset to sunrise, Festival guests and visitors are invited to learn and dream together about possible futures of intelligence. Guided by researchers, artists, and cinephiles, participants will address these questions: how do different forms of artificial and ecological intelligence manifest today? How might intelligence change in the future? And what is the role of cinema in shaping intelligence and rendering it visible? For the duration of an entire night, emerging forms of intelligence and their impact on society can be discussed and experienced in talks, workshops and performances.
*BaseCamp PopUp, Istituto Sant’Eugenio, Locarno*

### Demetri Psaltis - Optics and Machine Learning

There is a long history linking optics and machine learning, going back to the 1980s, when optics was first used for the implementation of neural networks. Interest in the optical implementation of neural networks has recently been revived by the explosion in the size of the networks being realized and the associated high energy consumption required to train and operate these networks digitally. In this presentation, I will focus primarily on multimode fibers and their use as nonlinear optical computing elements. I will show that in a variety of classification tasks, the combination of nonlinear optical elements and digital co-processors [1] can reach classification accuracy competitive with very large digital multi-layer networks but with lower energy consumption. A possible application area of this technology is autonomous robots, vehicles and drones, where low energy consumption is a critical need.
[1] Scalable optical learning operator (Programming Nonlinear Propagation for Efficient Optical Learning Machines), Uğur Teğin, Mustafa Yıldırım, İlker Oğuz, Christophe Moser, Demetri Psaltis, Nature Computational Science, volume 1, pages 542–549 (2021)
*Room C1.03 - East Campus USI-SUPSI*

### Lucio Russo - An introduction to Hellenistic exact science

After recalling the main methodological aspects of Hellenistic exact science and some of its well-documented results, a reconstruction is proposed of the mathematical aspects of a lost dynamical theory, applied to astronomy, which Plutarch alludes to in the dialogue "De facie quae in orbe lunae apparet". The reconstruction is based on passages from the pseudo-Aristotelian Mechanics, Euclid's Elements, and Archimedes' treatise On Spirals.
*Room C1.02 - East Campus USI-SUPSI*

### Dario Piga: Deep learning for system identification, and vice versa

The distinction between deep learning and system identification can be quite intricate, as these fields have evolved through decades of research and contributions from diverse communities. In this talk, we aim to showcase how concepts from deep learning and system identification can be synergistically combined to create innovative algorithms and tools for data-driven modeling and analysis of nonlinear dynamical systems. Three main results will be presented:
- A novel neural network architecture, called dynoNet, which integrates transfer functions into a deep learning framework, providing a bridge between traditional system identification and modern deep learning techniques.
- A new algorithm for rapid model adaptation of neural network models, enabling fast and efficient fine-tuning to accommodate changes in system dynamics or operating conditions.
- Quantification of predictive uncertainty in deep-learning models describing nonlinear dynamical systems, providing insights into model confidence and facilitating robust decision-making.
*Room C1.02 - East Campus*

### Fabio D'Asaro - Inductive Logic Programming and Explainable AI

In the talk, I will present some recent research in Explainable Artificial Intelligence (XAI) and some novel applications of Inductive Learning of Answer Set Programs (ILASP) that I have carried out with my collaborators in three areas. The presentation will encompass both published results and ongoing work, reflecting the latest advancements in the field. ILASP is a powerful method for learning logic programs under the answer set semantics that has shown great potential in addressing explainability challenges. By exploring the use of ILASP in different contexts, we aim to demonstrate its versatility and the value it brings to the field of XAI. The talk will be organized into three main sections, each focusing on a distinct line of research:
- ILASP for explaining reinforcement learning agents: how ILASP can be utilised to generate human-understandable explanations for the decision-making processes of reinforcement learning agents.
- ILASP for explaining preference learning systems: preference learning deals with the prediction of users' preferences based on observed data. We will discuss the application of ILASP to create explainable models of preference learning systems in terms of weak constraints, making the process more transparent and interpretable.
- ILASP for learning Abstract Argumentation Frameworks: Abstract Argumentation Frameworks (AFs) are a powerful approach to reasoning about conflicting information. We will glimpse at the use of ILASP to learn AF semantics.

Remarks on challenges and future research directions in this rapidly evolving field will conclude the talk.
*Room C2.09 - East Campus USI-SUPSI*

### Giorgio Corani: An overview of forecasts reconciliation

Time series are often organized into a hierarchy. For example, the total visitors of a country can be divided by region, and the visitors of each region can be further divided by sub-region. This is a hierarchical time series. Hierarchical forecasts should be coherent; for instance, the sum of the forecasts of the different regions should equal the forecast for the total. Forecasts are incoherent if they do not satisfy such constraints. Temporal hierarchies are another application of hierarchical time series, in which the same variable is predicted at different scales (e.g., monthly, quarterly and yearly) and coherence across the different temporal scales is needed. Reconciliation is the process of adjusting forecasts which are created independently for each time series, so that they become coherent. I will discuss the state of the art of reconciliation algorithms.
*Room B1.17 - East Campus USI-SUPSI*
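A minimal two-region example (numbers invented) shows OLS reconciliation, a standard baseline that projects incoherent base forecasts onto the coherent subspace via the summing matrix:

```python
import numpy as np

# Hierarchy: total = region A + region B. The summing matrix S maps the
# bottom-level series (A, B) to all series (total, A, B).
S = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

y_hat = np.array([102.0, 60.0, 45.0])   # base forecasts: 60 + 45 != 102

# OLS reconciliation: least-squares projection onto the coherent subspace.
P = np.linalg.inv(S.T @ S) @ S.T
y_tilde = S @ P @ y_hat                 # coherent reconciled forecasts
```

More refined variants (e.g. MinT) replace the implicit identity weighting in this projection with an estimate of the base-forecast error covariance.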

### Gail Weiss: Thinking Like Transformers

Transformers - the purely attention-based NN architecture - have emerged as a powerful tool in sequence processing. But how does a transformer think? When we discuss the computational power of RNNs, or consider a problem that they have solved, it is easy for us to think in terms of automata and their variants (such as counter machines and pushdown automata). But when it comes to transformers, no such intuitive model is available.
In this talk I will present a programming language, RASP (Restricted Access Sequence Processing), which we hope will serve the same purpose for transformers as finite state machines do for RNNs. In particular, we will identify the base computations of a transformer and abstract them into a small number of primitives, which are composed into a small programming language. We will go through some example programs in the language, and discuss how a given RASP program relates to the transformer architecture.
*Room A1.02 - East Campus USI-SUPSI*
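As a taste of the idea (a simplified Python analogy, not actual RASP syntax), two core primitives, a selector that builds an attention pattern and an aggregator that reads values through it, suffice to express operations like reversing a sequence:

```python
# select builds a boolean "attention" matrix; aggregate reads values
# through it (hard, single-match selection in this toy version).
def select(keys, queries, predicate):
    return [[predicate(k, q) for k in keys] for q in queries]

def aggregate(sel, values):
    out = []
    for row in sel:
        chosen = [v for v, s in zip(values, row) if s]
        out.append(chosen[0] if chosen else None)
    return out

def reverse(tokens):
    # attend from each position q to position n-1-q, then copy the token
    n = len(tokens)
    idx = list(range(n))
    sel = select(idx, idx, lambda k, q: k == n - 1 - q)
    return aggregate(sel, tokens)
```

In RASP proper, selection and aggregation operate on sequence-to-sequence "s-ops" and aggregation averages the selected values, which is what ties programs back to softmax attention.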

### Kai Hormann: Novel Range Functions via Taylor Expansions and Recursive Lagrange Interpolation with Application to Real Root Isolation

Range functions are an important tool for interval computations, and they can be employed for the problem of root isolation. In this talk, I will first introduce two new classes of range functions for real functions. They are based on the remainder form by Cornelius and Lohner (1984) and provide different improvements for the remainder part of this form. On the one hand, I will show how centered Taylor expansions can be used to derive a generalization of the classical Taylor form with higher than quadratic convergence. On the other hand, I will discuss a recursive interpolation procedure, in particular based on quadratic Lagrange interpolation, leading to recursive Lagrange forms with cubic and quartic convergence. These forms can be used for isolating the real roots of square-free polynomials with the algorithm EVAL, a relatively recent algorithm that has been shown to be effective and practical. Finally, a comparison of the performance of these new range functions against the standard Taylor form will be given. Specifically, EVAL can exploit features of the recursive Lagrange forms which are not found in range functions based on Taylor expansion.
Experimentally, this yields at least a twofold speedup in EVAL.
*Room B1.17*
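For contrast with these higher-order forms, the simplest remainder form of this kind, the classical mean value form combined with an interval Horner evaluation of the derivative, can be sketched as follows (an illustrative baseline only, not the new recursive Lagrange forms from the talk):

```python
def horner_interval(coeffs, lo, hi):
    """Naive interval Horner evaluation of a polynomial on [lo, hi].
    coeffs are listed from the highest-degree term down to the constant."""
    a, b = 0.0, 0.0
    for c in coeffs:
        prods = (a * lo, a * hi, b * lo, b * hi)
        a, b = min(prods) + c, max(prods) + c
    return a, b

def mean_value_form(coeffs, lo, hi):
    """Enclosure f(c) + f'([lo, hi]) * ([lo, hi] - c), with c the midpoint."""
    n = len(coeffs) - 1
    dcoeffs = [coeffs[i] * (n - i) for i in range(n)]   # derivative coeffs
    c = 0.5 * (lo + hi)
    fc = 0.0
    for co in coeffs:                                    # Horner at the midpoint
        fc = fc * c + co
    dlo, dhi = horner_interval(dcoeffs, lo, hi)
    r = 0.5 * (hi - lo)                                  # radius of [lo,hi] - c
    spread = max(abs(dlo), abs(dhi)) * r
    return fc - spread, fc + spread
```

Both enclosures are guaranteed to contain the true range of the polynomial; the Taylor and recursive Lagrange forms tighten the remainder term and hence converge faster as the interval shrinks.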