Like all models, network feature selection models require assumptions about the size and structure of the desired features. The most common assumption is sparsity, in which only a small subset of the entire network is thought to produce a given phenomenon. The sparsity assumption is enforced through regularized models, such as the lasso. However, assuming sparsity may be inappropriate for many real-world networks, which possess highly correlated modules. In this paper, we introduce two novel optimization strategies, namely, boosted expectation propagation (BEP) and boosted message passing (BMP), which directly use the network structure to estimate the parameters of a network classifier. BEP and BMP are ensemble methods that seek to optimize classification performance by combining individual models built upon local network features. Neither BEP nor BMP assumes a sparse solution; instead, they seek a weighted average of all network features, with the weights used to emphasize the features that are useful for classification. We compare BEP and BMP with network-regularized logistic regression models on simulated and real biological networks. The results show that, where highly correlated network structure exists, assuming sparsity adversely affects the accuracy and feature selection power of the network classifier.
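The ensemble idea described above, one local model per network neighborhood, combined by weights that reflect each model's usefulness for classification, can be sketched as follows. This is a minimal illustration under assumed data, not the paper's actual BEP or BMP algorithms; the toy adjacency, the softmax weighting, and the plain gradient-descent logistic fit are all choices made here for clarity.

```python
# Illustrative sketch (not the paper's BEP/BMP): a weighted ensemble of
# local classifiers, one per node neighborhood, combined by a softmax of
# their training accuracies so that no feature subset is forced to zero.
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": 6 nodes; adjacency defines each node's local feature set.
adj = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2, 3],
       3: [2, 3, 4], 4: [3, 4, 5], 5: [4, 5]}

# Simulated data: 200 samples, 6 node-level features; the label depends
# on a correlated module (nodes 1-3), not on one sparse feature.
X = rng.normal(size=(200, 6))
y = (X[:, 1] + X[:, 2] + X[:, 3] + 0.3 * rng.normal(size=200) > 0).astype(int)

def fit_logistic(X, y, iters=500, lr=0.1):
    """Plain gradient-descent logistic regression (no regularization)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# One local model per node, trained on that node's neighborhood features.
models, scores = [], []
for node, nbrs in adj.items():
    w, b = fit_logistic(X[:, nbrs], y)
    p = 1.0 / (1.0 + np.exp(-(X[:, nbrs] @ w + b)))
    models.append((nbrs, w, b))
    scores.append(np.mean((p > 0.5) == y))

# Weight each local model by how useful it is for classification,
# rather than discarding any of them (no sparsity assumption).
weights = np.exp(5 * np.array(scores))
weights /= weights.sum()

def predict(X):
    p = np.zeros(len(X))
    for (nbrs, w, b), wt in zip(models, weights):
        p += wt / (1.0 + np.exp(-(X[:, nbrs] @ w + b)))
    return (p > 0.5).astype(int)

print("ensemble accuracy:", np.mean(predict(X) == y))
```

Because every local model keeps a weight, the ensemble can spread influence across an entire correlated module, whereas a lasso-style penalty would tend to pick one representative feature and zero out its correlated neighbors.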
IEEE Transactions on Neural Networks and Learning Systems (Volume: 23, Issue: 11)
Date of Publication: Nov. 2012