Probability density function (PDF) estimation is a perennially important topic in artificial intelligence and machine learning. This paper considers the problem of estimating a density function solely from its marginal distributions. We first investigate the feasibility of this learning problem and propose a uniqueness proposition covering a large family of distribution functions. The learning problem is then reformulated as an optimization task, which we study and apply to Gaussian mixture models (GMMs) via the generalized expectation-maximization (GEM) procedure and the Monte Carlo method. Experimental results show that our approach for GMMs, using only partial coordinate information from each sample, achieves satisfactory performance, which in turn supports the proposed reformulation and proposition.
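To illustrate the setting described above, the following sketch fits a 2-D GMM when each sample reveals only one of its coordinates (a marginal observation). This is a simplified stand-in, not the paper's GEM/Monte Carlo procedure: it assumes diagonal covariances, under which the marginal of each coordinate is itself a 1-D GMM with the same mixture weights, so a plain EM loop on the marginal likelihood suffices. All variable names and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ground truth: 2-component, 2-D GMM with diagonal covariances
# (an assumption of this sketch, not of the paper).
true_means = np.array([[-2.0, -2.0], [2.0, 2.0]])
true_stds = np.array([[0.7, 0.7], [0.7, 0.7]])
true_w = np.array([0.5, 0.5])

n = 4000
comp = rng.choice(2, size=n, p=true_w)
full = rng.normal(true_means[comp], true_stds[comp])
dim = rng.integers(0, 2, size=n)       # which coordinate each sample reveals
obs = full[np.arange(n), dim]          # only partial coordinate information

def normal_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# EM on the marginal likelihood. With diagonal covariances, the density of an
# observed coordinate under component k is the 1-D normal in that coordinate.
K = 2
w = np.full(K, 1.0 / K)
mu = np.array([[-1.0, -1.0], [1.0, 1.0]])  # deterministic spread init
sd = np.ones((K, 2))

for _ in range(200):
    # E-step: responsibilities use only each sample's observed coordinate.
    dens = w[None, :] * normal_pdf(obs[:, None], mu[:, dim].T, sd[:, dim].T)
    r = dens / dens.sum(axis=1, keepdims=True)         # (n, K)
    # M-step: weights from all samples; per-dimension means and stds from the
    # samples that actually observed that dimension.
    w = r.mean(axis=0)
    for d in range(2):
        mask = dim == d
        rk, xd = r[mask], obs[mask]
        mu[:, d] = (rk * xd[:, None]).sum(0) / rk.sum(0)
        var = (rk * (xd[:, None] - mu[:, d]) ** 2).sum(0) / rk.sum(0)
        sd[:, d] = np.maximum(np.sqrt(var), 1e-3)      # guard against collapse

print(np.sort(mu, axis=0))  # per-dimension means, recovered near +/-2
```

Note that the pairing of x-marginal components with y-marginal components is exactly where the uniqueness question discussed in the paper arises: distinct joint densities can share the same marginals, so the sketch above can only be expected to recover the per-dimension component parameters, with the cross-dimension pairing fixed here by the initialization.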