Uncertain reasoning under incompleteness
We consider the problem of learning models and making predictions in the presence of incomplete information. Examples of incomplete information are incomplete data sets and, more generally, incomplete information on the basis of which a decision has to be taken. The most frequent treatment of incompleteness in the scientific literature regards it as uninformative: incompleteness is assumed to occur following "random" patterns, and the goal of the analysis is simply to filter it out in order to recover the underlying "signal". Nowadays, such approaches are increasingly criticized, as they often lead to fragile models and (sometimes highly) misleading conclusions. The point is that regarding incompleteness as uninformative is often too narrow a view.

Our past work has shown that it is actually possible to work with more general views of incompleteness by introducing a so-called conservative updating rule. This rule prescribes, in the framework of expert systems, how to update beliefs under incomplete information when there is nearly no knowledge about the process that gives rise to the incompleteness. In this project we will generalize that work in two directions: to more complex states of knowledge about the incompleteness process, and to the statistical case, thus deriving new types of conservative rules. At the algorithmic level, the goal is to make the new conservative rules practical for real-world problems. For this part, we will mostly focus on Bayesian and credal networks.
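The contrast between treating incompleteness as uninformative and updating conservatively can be illustrated with a toy example. The sketch below uses a tiny naive-Bayes model with made-up numbers (all probabilities are hypothetical, and the functions are illustrative names, not part of any library): when an attribute is missing, the "uninformative" (missing-at-random) treatment simply marginalizes it out, while a conservative update in the spirit of the rule described above makes no assumption about the missingness process and instead reports the interval of posteriors over all possible completions of the missing value.

```python
# Toy naive-Bayes model P(C) * P(A1|C) * P(A2|C), all variables binary.
# All numbers are invented for illustration.
p_c = {0: 0.5, 1: 0.5}
p_a1 = {(0, 0): 0.8, (1, 0): 0.2, (0, 1): 0.3, (1, 1): 0.7}  # P(A1=a | C=c), keyed (a, c)
p_a2 = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.4, (1, 1): 0.6}  # P(A2=a | C=c), keyed (a, c)

def posterior(c, a1, a2):
    """P(C=c | A1=a1, A2=a2) by Bayes' rule."""
    joint = {k: p_c[k] * p_a1[(a1, k)] * p_a2[(a2, k)] for k in p_c}
    return joint[c] / sum(joint.values())

def mar_posterior(c, a1):
    """Missing-at-random treatment of a missing A2: marginalize it out,
    which for this model amounts to ignoring A2 altogether."""
    joint = {k: p_c[k] * p_a1[(a1, k)] for k in p_c}
    return joint[c] / sum(joint.values())

def conservative_posterior(c, a1):
    """Conservative update: no assumption on why A2 is missing, so return
    the interval of posteriors over every possible completion of A2."""
    vals = [posterior(c, a1, a2) for a2 in (0, 1)]
    return min(vals), max(vals)

# Suppose A1 = 1 is observed and A2 is missing.
print("MAR point estimate:", mar_posterior(1, 1))
lo, hi = conservative_posterior(1, 1)
print("Conservative interval: [%.4f, %.4f]" % (lo, hi))
```

The MAR answer is a single number that always falls inside the conservative interval; the width of the interval measures how much the conclusion depends on the unknown incompleteness process.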
People

Alessandro Antonucci
Marco Zaffalon



Copyright © 2003 - 2012 IDSIA. All Rights Reserved.