Highlights
Events

We consider two types of searching models, where the goal is to design an adaptive algorithm that locates an unknown vertex in a graph by repeatedly performing queries. In the vertex-query model, each query points to a vertex v and the response either admits that v is the target or provides a neighbor of v on a shortest path from v to the target. This model was introduced for trees by Onak and Parys [FOCS 2006] and for arbitrary graphs by Emamjomeh-Zadeh et al. [STOC 2016]. In the edge-query model, each query chooses an edge and the response reveals which endpoint of the edge is closer to the target, breaking ties arbitrarily. Our goal is to analyze solutions to these problems assuming that some responses may be erroneous. We develop a scheme for tackling such noisy models with the following line of arguments: for each of the two models, we analyze a generic strategy that assumes a fixed number of lies and give a precise bound on its length via an amortized analysis. From this, we derive bounds for both a linearly bounded error rate, where the number of errors in T queries is bounded by r*T for some r<1/2, and a probabilistic model in which each response is incorrect with some probability p<1/2. The bounds for the adversarial case turn out to be strong enough for the non-adversarial scenarios as well. We thus obtain a much simpler strategy performing fewer vertex-queries than the one by Emamjomeh-Zadeh et al. For edge-queries, not studied before for general graphs, we obtain bounds that are tight up to log Δ factors in all error models. Applying our graph-theoretic results to the setting of edge-queries for paths, we obtain a number of improvements over existing bounds for searching in a sorted array in the presence of errors, including an exponential improvement for the prefix-bounded model in unbounded domains.
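As a toy illustration of searching with erroneous comparison responses (edge-queries on a path, i.e., searching a sorted array), the following Python sketch tolerates at most k lies by charging each contradicting response to the candidates it rules out. This is only an illustrative Rényi–Ulam-style strategy with no claim to the query bounds above, and answer_query is a hypothetical oracle supplied by the caller.

def search_with_lies(n, answer_query, k):
    # Locate the unknown index in range(n), assuming at most k of the
    # oracle's answers are lies. answer_query(m) is expected to return
    # True if the (possibly lying) oracle claims the target is <= m.
    contradictions = [0] * n      # lies that would have to be charged to each candidate
    alive = set(range(n))
    while len(alive) > 1:
        m = (min(alive) + max(alive)) // 2
        claim_le = answer_query(m)
        for c in list(alive):
            if (c <= m) != claim_le:          # response contradicts candidate c
                contradictions[c] += 1
                if contradictions[c] > k:     # would need more than k lies: impossible
                    alive.discard(c)
    return alive.pop()

Since the true target is charged only when the oracle actually lies, it accumulates at most k contradictions and is never discarded, so the surviving candidate is the target.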
Manno, Galleria 1, 2nd floor, room G1-204 @12:00

27 June 2018

Information is essential in our everyday life, and languages provide the basis to assert or deny facts. Just as people speak about things, the surrounding nature also tells us something in its own language, the “chemical language”. As Wittgenstein wrote in the Tractatus Logico-Philosophicus (4.002), “Everyday language is a part of the human organism and is no less complicated than it.” We will take a walk through the chemical language and discover how biological knowledge can be inferred from molecular structures.
Manno, Galleria 1, 2nd floor, room G1-204 @12:00

28 June 2018

Most challenging problems in the life sciences, natural sciences and engineering involve disentangling complex networks of probabilistic and causal relationships among heterogeneous, large data sources. This endeavour poses three challenges: devising algorithms that can scale to high dimensions; formulating models that are auditable and interpretable, but also flexible; and balancing interpretability against predictive accuracy. Bayesian networks strive to address these concerns, and can be learned efficiently from both big data and high-dimensional data. In this talk I will discuss some of my recent research on computational and information-theoretic aspects of structure learning, touching on the challenges of incomplete and correlated observations and, more generally, on the nature of prediction in machine learning models.
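To make the structure-learning setting concrete, here is a minimal sketch of the BIC network score used by score-based structure learning of discrete Bayesian networks. It is only an illustration (it counts only the parent configurations observed in the data), not the speaker's software.

import numpy as np
import pandas as pd

def bic_score(data: pd.DataFrame, parents: dict) -> float:
    # BIC of a candidate structure, decomposed over nodes:
    # log-likelihood minus (log N / 2) times the number of free parameters.
    # `parents` maps each column name to a list of parent column names,
    # e.g. bic_score(df, {"A": [], "B": ["A"], "C": ["A", "B"]}).
    n = len(data)
    score = 0.0
    for node, pa in parents.items():
        r = data[node].nunique()                       # number of states of the node
        groups = data.groupby(pa)[node] if pa else [(None, data[node])]
        q = 0                                          # parent configurations observed
        for _, col in groups:
            q += 1
            counts = col.value_counts().to_numpy(dtype=float)
            score += float(np.sum(counts * np.log(counts / counts.sum())))
        score -= 0.5 * np.log(n) * (r - 1) * q
    return score

A greedy hill-climbing search would repeatedly apply the single edge addition, removal or reversal that most improves this score.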
Meeting Room @ IDSIA, Galleria 1, 11h00

31 August 2018

Resources such as labeled corpora are necessary to train automatic models within the natural language processing (NLP) field. Historically, a large number of resources covering a broad range of problems have been available, mostly in English. One such problem is Personality Identification, where, based on a psychological model (e.g., the Big Five model), the goal is to determine the traits of a subject's personality given, for instance, a text written by that subject. In this presentation I will talk about a new corpus in Spanish called Texts for Personality Identification (TxPI), and I'll show some basic baselines for text classification.
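For concreteness, a typical basic baseline of the kind mentioned above can be set up in a few lines of Python with scikit-learn; the texts and trait labels below are toy placeholders, not data from TxPI.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for (text, personality-trait label) pairs.
texts = ["me gusta salir con amigos", "prefiero quedarme en casa leyendo"]
labels = ["extraversion_high", "extraversion_low"]

baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # word unigrams and bigrams
    LogisticRegression(max_iter=1000),
)
baseline.fit(texts, labels)
# With a real corpus one would report cross-validated accuracy or macro-F1, e.g.
# cross_val_score(baseline, texts, labels, cv=5, scoring="f1_macro").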
IDSIA, Galleria 1, Meeting Room @11:00

4 September 2018

Stockouts are a menace. Despite decades of investments and research, shelves are still empty and online orders go unfilled, simply because the goods aren’t there. That is because inventory management is notoriously difficult for retailers in their multi-tier supply chains. Stocking too much causes inefficiency, markdowns, and waste. Stocking too little causes lost sales, dissatisfies shoppers, and diminishes loyalty. Finding out where, when, and why items went missing, and at what cost, is a tedious, manual, and time-consuming task, so it rarely gets done. This presentation gives an overview of the causes and costs of stockouts, and invites the audience to gather ideas for the automated identification and classification of stockouts and their causes from data, in order to create and submit a collaborative funding / research proposal.
IDSIA, Galleria 1, Room G1-204 @12:00

21 September 2018 - 22 September 2018

The topic of the 2018 Meeting will be Logic and Quantum Physics. Since its birth, quantum physics has been a rich source of challenges and inspiration for philosophy, logic and computer science, as it requires a radical revision of the common view of the nature of physical reality, but also of how information processes are handled. In recent times there has been a growing, exciting body of research in the foundations of quantum theory, stemming particularly from quantum logic and quantum information, that deserves to be shared and discussed. The aim of this event is therefore to bring together some of the experts in quantum logic, quantum information and philosophy of physics, to provide a wide audience with a general overview of the main open problems, but also to stimulate interaction and discussion of some of the latest developments in these fields, involving both established researchers and graduate and postgraduate students.
Lugano - USI Campus

26 September 2018

Coresets are one of the central methods to facilitate the analysis of large data sets. We continue a recent line of research applying the theory of coresets to logistic regression. First, we show a negative result, namely, that no strongly sublinear-sized coresets exist for logistic regression. To deal with intractable worst-case instances, we introduce a complexity measure $\mu(X)$, which quantifies the hardness of compressing a data set for logistic regression. $\mu(X)$ has an intuitive statistical interpretation that may be of independent interest. For data sets with bounded $\mu(X)$-complexity, we show that a novel sensitivity sampling scheme produces the first provably sublinear $(1\pm\epsilon)$-coreset. Our algorithms are viable in practice, comparing favorably to uniform sampling as well as to state-of-the-art methods in the area. Joint work with Alexander Munteanu, Christian Sohler, and David Woodruff. To appear at NIPS 2018.
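The general sample-and-reweight scheme behind sensitivity sampling can be sketched as follows in Python; the per-point scores used here (data norms plus a uniform share) are a placeholder, not the sensitivities analyzed in the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression

def sampling_coreset(X, y, m, seed=0):
    # Sample m points with probability proportional to a per-point score and
    # attach inverse-probability weights, so the weighted loss on the sample
    # is an unbiased estimate of the loss on the full data set.
    rng = np.random.default_rng(seed)
    scores = np.linalg.norm(X, axis=1) + 1.0      # placeholder "sensitivities"
    p = scores / scores.sum()
    idx = rng.choice(X.shape[0], size=m, replace=True, p=p)
    w = 1.0 / (m * p[idx])                        # inverse-probability weights
    return X[idx], y[idx], w

# The model is then fitted on the weighted coreset instead of all of (X, y):
# Xc, yc, w = sampling_coreset(X, y, m=1000)
# clf = LogisticRegression().fit(Xc, yc, sample_weight=w)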
Manno, Galleria 1, 2nd floor, room G1-204 @12:00

3 October 2018

Retail trade greatly benefits from price promotions (promos), i.e., temporary price reductions. The marketing literature on the topic is vast, mainly under the heading “Trade Promotion Optimization”, but not much has been produced on optimizing the schedule of promotions of brands or items over long time horizons. There is a rich offer of commercial packages to support these decisions, despite the rather poor coverage in the optimization literature of the many intertwining operational constraints that characterize actual applications. This work proposes a model for retailer chains, focused beyond transactional trade promotion management. It considers both manufacturers, who provide the products to sell, and retailers, who are responsible for sales to consumers. Input data to the model are derived from statistical analyses of historical data, which yield the expected baseline and the uplift (the ratio between the average sales volume with and without a promotion) for each promo in each time period, together with the different contributions to the uplift: cannibalization, halo, promotional dip, forward buying, etc. Building on this, we propose a mathematical model of the effectiveness of a promotion plan over the horizon of interest. (Joint work with Marco Antonio Boschetti)
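To give a flavour of the kind of scheduling model involved, here is a toy promo-scheduling MILP in Python with PuLP; the items, uplifts and the two constraints used (a per-week promo budget and a minimum gap between promos of the same item) are illustrative assumptions, not the model presented in the talk.

from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary

items = ["coffee", "pasta"]
weeks = range(8)
baseline = {"coffee": 100.0, "pasta": 80.0}   # expected sales volume without promo
uplift = {"coffee": 1.6, "pasta": 1.4}        # multiplicative uplift when promoted
max_promos_per_week = 1
min_gap = 2                                   # weeks between promos of the same item

x = LpVariable.dicts("promo", (items, weeks), cat=LpBinary)  # x[i][t] = promote item i in week t
model = LpProblem("promo_plan", LpMaximize)
# Objective: incremental volume generated by the plan.
model += lpSum(baseline[i] * (uplift[i] - 1.0) * x[i][t] for i in items for t in weeks)
for t in weeks:
    model += lpSum(x[i][t] for i in items) <= max_promos_per_week
for i in items:
    for t in weeks:
        # At most one promo of the same item in any window of min_gap weeks.
        model += lpSum(x[i][s] for s in weeks if t <= s < t + min_gap) <= 1

model.solve()
plan = {i: [t for t in weeks if x[i][t].value() == 1] for i in items}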
Galleria 1, 2nd floor, IDSIA meeting room @16h30

15 November 2018

We discuss PSAT, a probabilistic extension of the classical satisfiability (SAT) problem, obtained by assigning weights to the clauses of a SAT instance. The PSAT instance is satisfiable if and only if there exists a probability mass function over the truth assignments that is consistent with the weights. We present two algorithms for PSAT based, respectively, on column generation and integer linear programming, both showing evidence of a phase transition. PSAT solves inferences in a recently proposed probabilistic logic (CCL, in [Antonucci & Facchini, 2018]). This allows us to perform machine learning with logical constraints under relaxed independence assumptions over probabilistic facts. As an application, we consider label ranking and show that existing solvers can be used to solve practical ranking tasks.
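The satisfiability condition can be made concrete with a brute-force linear-programming check for tiny instances, in the spirit of Nilsson's formulation: enumerate all truth assignments (possible worlds) and ask whether some probability mass function over them reproduces the clause weights. This sketch is exponential in the number of variables; the solvers discussed in the talk (column generation, ILP) are designed precisely to avoid this enumeration.

from itertools import product
import numpy as np
from scipy.optimize import linprog

def psat_feasible(n_vars, weighted_clauses):
    # weighted_clauses: list of (clause, weight); a clause is a list of signed
    # ints, e.g. [1, -2] means (x1 or not x2).
    worlds = list(product([False, True], repeat=n_vars))
    def satisfies(world, clause):
        return any(world[abs(l) - 1] == (l > 0) for l in clause)
    # Equality constraints: P(clause) = weight for every clause, and total mass 1.
    A_eq = [[1.0 if satisfies(w, c) else 0.0 for w in worlds]
            for c, _ in weighted_clauses]
    b_eq = [weight for _, weight in weighted_clauses]
    A_eq.append([1.0] * len(worlds))
    b_eq.append(1.0)
    res = linprog(c=np.zeros(len(worlds)), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, 1.0)] * len(worlds))
    return res.status == 0   # 0 = a feasible (hence satisfying) pmf exists

# Example: P(x1) = 0.7 and P(not x1 or x2) = 0.9 are jointly satisfiable.
print(psat_feasible(2, [([1], 0.7), ([-1, 2], 0.9)]))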
Manno, Galleria 1, 2nd floor, room G1-204 @12:00

27 November 2018

Policy optimization is an effective Reinforcement Learning approach to solve continuous control tasks. Recent achievements have shown that alternating online and offline optimization is a successful choice for efficient trajectory reuse. However, deciding when to stop optimizing and collect new trajectories is non-trivial, as it requires accounting for the variance of the objective function estimate. In this talk, we propose a novel, model-free, policy search algorithm, POIS, applicable in both action-based and parameter-based settings. We first derive a high-confidence bound for importance sampling estimation; then we define a surrogate objective function, which is optimized offline whenever a new batch of trajectories is collected. Finally, the algorithm is tested on a selection of continuous control tasks, with both linear and deep policies, and compared with state-of-the-art policy optimization methods.
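To illustrate the shape of such a surrogate, the following Python sketch computes an importance-sampled return estimate penalized by a variance term; the penalty used here is a simplification for illustration, not the high-confidence bound derived in POIS.

import numpy as np

def is_surrogate(returns, logp_target, logp_behav, delta=0.2):
    # returns: per-trajectory returns collected under the behavioural policy;
    # logp_target / logp_behav: per-trajectory log-probabilities under the
    # target and behavioural policies.
    w = np.exp(logp_target - logp_behav)        # per-trajectory importance weights
    n = len(returns)
    estimate = np.mean(w * returns)             # off-policy estimate of expected return
    # Penalize high-variance weights (i.e., low effective sample size),
    # so that offline optimization does not drift too far from the data.
    var_term = np.var(w * returns, ddof=1) / n
    return estimate - np.sqrt(2.0 * np.log(1.0 / delta) * var_term)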
Galleria 1, 2nd floor, room G1-204 @12:00
