An understanding of neural connectivity and structures in the brain is essential to the study of the nervous system, and gaining such an understanding requires detailed 3D anatomical models. However, reconstructing 3D models from large sets of dense nanoscale medical images is challenging due to imperfections in staining and noise introduced during imaging. Manual segmentation in 2D, followed by tracking the 2D contours through cross-sections to build 3D structures, is one possible solution, but it is impractical at this scale. In this paper, we propose an automated tracking and segmentation framework that extracts 2D contours and traces them along the z direction. The segmentation is posed as an energy minimization problem and solved via graph cuts. The energy function to be minimized contains a regional term and a boundary term. The regional term is defined over the flux of the gradient vector fields and the distance function. Our main idea is that the distance function should carry information about the segmentation of the previous image, based on the assumption that successive images have similar segmentations. The boundary term is defined over the gray-scale intensity of the image. Experiments were conducted on nanoscale image sequences from the Serial Block Face Scanning Electron Microscope (SBF-SEM). The results show that our method can successfully track and segment densely packed cells in EM image stacks.
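As a minimal sketch of the graph-cut formulation described above, the following Python example poses binary segmentation of a 1D intensity signal as a min-cut problem. The regional (unary) weights and the contrast-sensitive boundary weights used here are illustrative assumptions; the paper's actual regional term, built from the flux of gradient vector fields and a distance function carrying the previous slice's segmentation, is not reproduced. The `fg_mean`/`bg_mean` parameters and the Edmonds-Karp solver are our own simplifications for self-containment.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp max-flow; returns (flow value, source side of the min cut)."""
    n = len(capacity)
    flow = [[0.0] * n for _ in range(n)]
    total = 0.0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q and parent[sink] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:
            break
        # bottleneck capacity along the path
        bottleneck, v = float('inf'), sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        # augment
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
    # min cut: nodes still reachable from the source in the residual graph
    seen = [False] * n
    seen[source] = True
    q = deque([source])
    while q:
        u = q.popleft()
        for v in range(n):
            if not seen[v] and capacity[u][v] - flow[u][v] > 1e-12:
                seen[v] = True
                q.append(v)
    return total, seen

def segment(intensities, fg_mean, bg_mean, lam=2.0):
    """Label each pixel 1 (foreground) or 0 (background) by a min cut
    over a chain of pixels plus two terminal nodes."""
    n = len(intensities)
    source, sink = n, n + 1
    cap = [[0.0] * (n + 2) for _ in range(n + 2)]
    for i, g in enumerate(intensities):
        # regional term: cutting source->i is the cost of labeling i background,
        # cutting i->sink is the cost of labeling i foreground
        cap[source][i] = abs(g - bg_mean)
        cap[i][sink] = abs(g - fg_mean)
    for i in range(n - 1):
        # boundary term: cheap to cut where neighboring intensities differ
        w = lam / (1.0 + abs(intensities[i] - intensities[i + 1]))
        cap[i][i + 1] = cap[i + 1][i] = w
    _, src_side = max_flow(cap, source, sink)
    return [1 if src_side[i] else 0 for i in range(n)]

labels = segment([10, 12, 11, 90, 95, 92], fg_mean=90, bg_mean=10)
print(labels)  # the bright pixels (90s) come out labeled 1
```

In the full method, the unary weights would instead encode the flux/distance-based regional term, with the distance function computed from the previous slice's contour so that the cut is biased toward a similar segmentation; the pairwise structure of the graph is unchanged.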