Existing methods for learning the parameters and structure of probabilistic inference networks assume that the database is complete. If values are missing, they are assumed to be missing at random. This paper incorporates concepts from the Dempster-Shafer theory of belief functions to learn both the parameters and the structure of inference networks. Instead of filling in missing values with estimates, we model them as representing our ignorance, or lack of belief, about the actual state of the corresponding variables. This representation allows us to add new findings in the form of support functions, as used in belief-function theory, thus providing a richer way to enter evidence into an inference network.
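As a hedged illustration of this idea (the notation below is assumed for exposition and is not taken from the paper), a missing observation of a variable X with frame of discernment \Theta_X = \{x_1, \dots, x_k\} can be represented by the vacuous basic probability assignment, while partial findings enter as simple support functions:

    m(\{x_i\}) = 1                        % fully observed value x_i: all mass on the singleton
    m(\Theta_X) = 1                       % missing value: complete ignorance, no estimate substituted
    m(A) = s, \quad m(\Theta_X) = 1 - s   % simple support function: evidence of strength s for A \subseteq \Theta_X

Under this sketch, an ordinary observation and a missing value are just the two extreme cases of a support function, which is what permits richer, partially specified evidence to be entered into the network.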