
Multi-atlas segmentation using manifold learning with deep belief networks


Abstract:


This paper proposes a novel combination of manifold learning with deep belief networks for the detection and segmentation of the left ventricle (LV) in 2D ultrasound (US) images. The main goal is to reduce both training and inference complexity while maintaining the segmentation accuracy of machine learning based non-rigid segmentation methodologies. The manifold learning approach used can be viewed as atlas-based segmentation: it partitions the data into several patches, and each patch proposes a segmentation of the LV that must then be fused with the others. This fusion is accomplished by a deep belief network (DBN) multi-classifier that assigns a weight to each patch's LV segmentation. The approach thus has three advantages: (i) it does not rely on a single segmentation; (ii) it greatly reduces the cost of the rigid detection phase, which is performed in a lower-dimensional space than the initial contour space; and (iii) DBNs allow a training process that can produce robust appearance models without the need for large annotated training sets.
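The fusion step described above can be sketched as a weighted combination of the per-patch contour proposals, with weights derived from per-patch classifier confidences. The following is a minimal illustration only: the contour representation, the number of patches, and the softmax weighting are assumptions for the sketch, not the paper's exact formulation.

```python
import numpy as np

def fuse_segmentations(proposals, scores):
    """Fuse K per-patch LV contour proposals into one contour.

    proposals : (K, N, 2) array, K candidate contours of N 2-D points
    scores    : (K,) array, raw classifier confidences (one per patch)

    Returns the (N, 2) weighted-average contour.
    """
    # Turn raw confidences into weights that sum to one (softmax).
    e = np.exp(scores - scores.max())
    w = e / e.sum()
    # Weighted combination of the candidate contours.
    return np.einsum('k,knd->nd', w, proposals)

# Three toy contour proposals of 4 points each.
proposals = np.array([
    [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]],
    [[0.2, 0.0], [1.2, 0.0], [1.2, 1.0], [0.2, 1.0]],
    [[-0.2, 0.0], [0.8, 0.0], [0.8, 1.0], [-0.2, 1.0]],
])
scores = np.array([2.0, 1.0, 1.0])   # first proposal trusted most
fused = fuse_segmentations(proposals, scores)
```

With equal scores the fusion degenerates to a plain average of the proposals; unequal scores pull the fused contour toward the most trusted patch.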
Date of Conference: 13-16 April 2016
Date Added to IEEE Xplore: 16 June 2016
Electronic ISBN: 978-1-4799-2349-6
Electronic ISSN: 1945-8452
Conference Location: Prague, Czech Republic

1. Introduction

Two sequential stages are typical in current machine learning methodologies for object segmentation [1], [2]: (i) rigid detection (coarse step) and (ii) non-rigid segmentation (fine step). The first step (rigid detection) is of crucial importance, since it reduces the search running time and training complexity. This paper reduces the complexity of the rigid detection by using a manifold learning algorithm: whereas state-of-the-art rigid detection searches in practice over the translation, rotation and scaling of the visual object (e.g. [3]), i.e. a space of dimension R = 5, here the rigid detection operates in a space of dimension M < R, where M is the intrinsic dimension of the manifold (see Section 6.1).

This is an atlas-based segmentation, in the sense that the manifold partitions the data (by soft clustering) into several patches under two distinct assumptions: (i) the angles within a patch are preserved using a smaller number of points, and (ii) the distances (i.e. neighborhoods) between these points are preserved [4]. Each patch in the learned manifold provides a segmentation proposal. Since multiple patches are obtained, the multiple segmentations must be combined. This paper develops a novel strategy to accomplish this: a DBN multi-classifier for the final segmentation. A multi-atlas segmentation strategy is thus followed, i.e. the different segmentations are fused across patches, with the weights given by the deep belief network classifiers.
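As a rough illustration of the savings, an exhaustive search with G candidate values per dimension costs on the order of G^R evaluations in the original rigid space but only G^M in the manifold coordinates. The soft clustering into patches can likewise be sketched with kernel-based membership weights; the Gaussian kernel, the patch centres and the grid-search cost model below are illustrative assumptions, not the paper's method.

```python
import numpy as np

R, M, G = 5, 2, 10        # original rigid dims, intrinsic manifold dims, grid size
cost_rigid = G ** R       # exhaustive search in the rigid space: 100000
cost_manifold = G ** M    # exhaustive search in the manifold:       100

def soft_patch_assignment(x, centres, sigma=1.0):
    """Soft-cluster a manifold point over patches.

    x       : (M,) coordinates of a sample in the learned manifold
    centres : (K, M) patch centres
    Returns (K,) membership weights summing to one.
    """
    d2 = ((centres - x) ** 2).sum(axis=1)       # squared distances to centres
    w = np.exp(-d2 / (2.0 * sigma ** 2))        # Gaussian kernel responses
    return w / w.sum()                          # normalise to a soft assignment

centres = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
weights = soft_patch_assignment(np.array([0.1, 0.1]), centres)
```

A sample near one patch centre still receives small nonzero weights from the other patches, which is what lets every patch contribute a proposal to the later fusion step.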

References
1. Y. Zhan, X. S. Zhou, Z. Peng, and A. Krishnan, "Active scheduling of organ detection and segmentation in whole-body medical images," in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2008, pp. 313–321, Springer, 2008.
2. S. Zhang, Y. Zhan, M. Dewan, J. Huang, D. N. Metaxas, and X. S. Zhou, "Towards robust and effective shape modeling: Sparse shape composition," Medical Image Analysis, vol. 16, no. 1, pp. 265–277, 2012.
3. S. K. Zhou, "Shape regression machine and efficient segmentation of the left ventricle endocardium from 2-D B-mode echocardiogram," Medical Image Analysis, vol. 14, pp. 563–581, 2010.
4. J. C. Nascimento and J. G. Silva, "Manifold learning for object tracking with multiple motion dynamics," in ECCV, 2010.
5. T. Rohlfing, R. Brandt, R. Menzel, and C. R. Maurer, "Evaluation of atlas selection strategies for atlas-based image segmentation with application to confocal microscopy images of bee brains," NeuroImage, vol. 21, no. 4, pp. 1428–1442, 2004.
6. S. K. Warfield, K. H. Zou, and W. M. Wells, "Simultaneous truth and performance level estimation (STAPLE): an algorithm for the validation of image segmentation," IEEE Trans. Med. Imag., vol. 23, no. 7, pp. 903–921, 2004.
7. R. A. Heckemann, J. V. Hajnal, P. Aljabar, D. Rueckert, and A. Hammers, "Automatic anatomical brain MRI segmentation combining label propagation and decision fusion," NeuroImage, vol. 33, pp. 115–126, 2006.
8. M. Wu, C. Rosano, P. Lopez-Garcia, C. S. Carter, and H. J. Aizenstein, "Optimum template selection for atlas-based segmentation," NeuroImage, vol. 34, no. 4, pp. 1612–1618, 2007.
9. R. Wolz, P. Aljabar, J. V. Hajnal, A. Hammers, and D. Rueckert, "LEAP: Learning embeddings for atlas propagation," NeuroImage, vol. 49, no. 2, pp. 1316–1325, 2010.
10. A. K. Hoang Duc, M. Modat, K. K. Leung, M. J. Cardoso, J. Barnes, T. Kadir, and S. Ourselin, "Using manifold learning for atlas selection in multi-atlas segmentation," PLOS ONE, vol. 8, no. 8, pp. 1–15, 2013.
11. B. Georgescu, X. S. Zhou, D. Comaniciu, and A. Gupta, "Database-guided segmentation of anatomical structures with complex appearance," in CVPR, 2005.
12. Y. Zheng, A. Barbu, B. Georgescu, M. Scheuering, and D. Comaniciu, "Four-chamber heart modeling and automatic segmentation for 3-D cardiac CT volumes using marginal space learning and steerable features," IEEE Trans. Med. Imaging, vol. 27, no. 11, pp. 1668–1681, 2008.
13. X. S. Zhou, D. Comaniciu, and A. Gupta, "An information fusion framework for robust shape tracking," IEEE Trans. Pattern Anal. Machine Intell., vol. 27, no. 1, pp. 115–129, 2005.
14. G. Hinton and R. Salakhutdinov, "Reducing the dimensionality of data with neural networks," Science, vol. 313, no. 5786, pp. 504–507, 2006.
15. G. Carneiro and J. C. Nascimento, "Combining multiple dynamic models and deep learning architectures for tracking the left ventricle endocardium in ultrasound data," IEEE Trans. Pattern Anal. Machine Intell., vol. 35, no. 11, pp. 2592–2607, 2013.
16. J. C. Nascimento and J. S. Marques, "Robust shape tracking with multiple models in ultrasound images," IEEE Trans. Imag. Proc., vol. 17, no. 3, pp. 392–406, 2008.
17. G. Carneiro and J. C. Nascimento, "Multiple dynamic models for tracking the left ventricle of the heart from ultrasound data using particle filters and deep learning architectures," in CVPR, 2010, pp. 2815–2822.
