Modelling prior ignorance
The problem is modelling prior ignorance about statistical parameters through a set of prior distributions M or, equivalently, through the upper and lower expectations (also called previsions) that M generates. The upper and lower expectations of a bounded real-valued function g on a possibility space (we call such a function a gamble), denoted by LE(g) and UE(g), are respectively the infimum and supremum of the expectations Ep(g) over the probability measures p in M (if M is assumed to be closed and convex, it is fully determined by its upper and lower expectations). In choosing a set M to model prior near-ignorance, the main aim is to generate upper and lower expectations with the property that LE(g) = inf g and UE(g) = sup g on a specific class of gambles of interest g. This means that the only available information about E(g) is that it belongs to [inf g, sup g], which is equivalent to stating a condition of complete prior ignorance about the value of g.

Modelling a state of prior ignorance about the value w of a random variable W is not the only requirement for M: the set should also lead to non-vacuous posterior inferences. Posterior inferences are vacuous if the lower and upper expectations of all gambles of interest g coincide with the infimum and, respectively, the supremum of g. This means that our prior beliefs do not change with experience (i.e., there is no learning from data). The issue is thus to define a set M of distributions that is a model of prior near-ignorance but that does not lead to vacuous inferences.
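To make these definitions concrete, here is a minimal numerical sketch (not from the original text) for a finite possibility space. A closed convex set M is represented by its extreme points, since the infimum and supremum of the expectation Ep(g) over a convex set are attained at vertices; the vacuous model (all distributions) has the degenerate point masses as vertices, and an epsilon-contamination set is used as a hypothetical example of a less ignorant model. The value epsilon = 0.2 and the payoffs in g are illustrative assumptions.

```python
import numpy as np

# Possibility space with 3 outcomes; a gamble g assigns a bounded payoff to each.
g = np.array([-1.0, 0.5, 2.0])

def lower_upper_expectation(vertices, g):
    """LE(g) and UE(g) over the convex hull of the given pmfs (rows)."""
    # Expectations are linear in p, so inf/sup over a convex set of pmfs
    # are attained at its extreme points.
    expectations = vertices @ g
    return expectations.min(), expectations.max()

# Vacuous model: M = all distributions; its vertices are the degenerate pmfs.
vacuous_vertices = np.eye(3)
le, ue = lower_upper_expectation(vacuous_vertices, g)
assert le == g.min() and ue == g.max()  # LE(g) = inf g, UE(g) = sup g

# Hypothetical less-ignorant model: epsilon-contamination of the uniform pmf,
# whose vertices are (1 - eps) * uniform + eps * (point mass).
eps, base = 0.2, np.full(3, 1.0 / 3.0)
contaminated_vertices = (1 - eps) * base + eps * np.eye(3)
print(lower_upper_expectation(contaminated_vertices, g))
```

The vacuous set reproduces the complete-ignorance bounds [inf g, sup g], while the smaller contamination set yields strictly tighter bounds, illustrating why near-ignorance models must be designed carefully to remain ignorant a priori yet learn a posteriori.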
Set distributions filtering
A common trait of state estimation techniques is that they assume that the distributions associated with the prior, the state transition, and the likelihood functions are perfectly known. However, in many practical cases, our information about the system to be modelled may not allow us to characterize these functions with single (precise) distributions. For example, in the Gaussian case, we may only be able to determine an interval that contains the mean of the Gaussian distribution or, more generally, we may only be able to state that the distribution of the noise belongs to some set of distributions. This leads to alternative models for representing uncertainty based on sets of probability distributions and, thus, to robust filtering. The most explored techniques for robust filtering are H-infinity, H2 and set-membership estimation. These techniques deal mainly with two kinds of uncertainties: norm-bounded parametric uncertainty and/or bounded uncertainty in the noise statistics or in the noise intensity. In a recent paper we have proposed a new, more general approach to robust filtering that instead focuses on the use of closed convex sets of distributions to model the imprecision in the knowledge about the system parameters and probabilistic relationships involved. Norm-bounded parametric uncertainty and/or bounded uncertainty can in fact be seen as special cases of closed convex sets of distributions.
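The interval-mean Gaussian example above can be sketched numerically. The following is a hypothetical illustration (not the paper's algorithm): a scalar Kalman measurement update applied to a convex set of Gaussian priors with fixed variance P0 whose mean is only known to lie in [m_lo, m_hi]. Because the posterior mean is affine and increasing in the prior mean, the set of posterior means is again an interval, so the imprecision can be propagated exactly by updating the two endpoints. All numerical values are illustrative assumptions.

```python
def robust_update(m_lo, m_hi, P0, y, R):
    """Scalar Kalman measurement update for a set of Gaussian priors.

    The priors are N(m, P0) with mean m anywhere in [m_lo, m_hi];
    the measurement model is y = x + v with v ~ N(0, R).
    """
    K = P0 / (P0 + R)          # Kalman gain
    post_var = (1 - K) * P0    # posterior variance, identical for every prior in the set
    # The posterior mean m + K*(y - m) = (1 - K)*m + K*y is increasing in m
    # (slope 1 - K > 0), so the interval of prior means maps to an interval.
    post_lo = m_lo + K * (y - m_lo)
    post_hi = m_hi + K * (y - m_hi)
    return (post_lo, post_hi), post_var

# Illustrative numbers: a wide prior-mean interval and one measurement.
(lo, hi), var = robust_update(m_lo=-1.0, m_hi=1.0, P0=4.0, y=0.5, R=1.0)
print(lo, hi, var)
```

Note how the posterior interval is narrower than the prior one: the data reduces, but does not eliminate, the imprecision, which is exactly the non-vacuous behaviour sought in the previous section.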