Previous performance evaluation of range image segmentation algorithms has depended on manual tuning of algorithm parameters, and has lacked a basis for testing the significance of differences between algorithms. We present an automated framework for evaluating the performance of range image segmentation algorithms. Automated tuning of algorithm parameters in this framework yields performance as good as that previously obtained through careful manual tuning by the algorithm developers. The use of multiple training and test sets of images provides the basis for a test of the significance of performance differences between algorithms. The framework implementation includes range images, ground truth overlays, program source code, and shell scripts. This framework should make it possible to objectively and reliably compare the performance of range image segmentation algorithms, and to provide informed experimental feedback for the design of improved segmentation algorithms. The framework is demonstrated using range images, but in principle it could be used to evaluate region segmentation algorithms for any type of image.
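The significance test that multiple training/test sets make possible can be sketched as a paired comparison: each algorithm is scored on the same held-out test sets, and the per-set score differences are tested against zero. The sketch below assumes a paired t-test and uses made-up per-set scores; the actual metric and test used by the framework may differ.

```python
import math
from statistics import mean, stdev

def paired_t_statistic(scores_a, scores_b):
    """Paired t statistic for per-test-set scores of two algorithms.

    Each element of scores_a/scores_b is one algorithm's performance on
    one shared test set. A large |t| (compared against a t distribution
    with n - 1 degrees of freedom) suggests the mean difference between
    the algorithms is unlikely to be chance.
    """
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Hypothetical scores: fraction of ground-truth regions correctly
# segmented by each algorithm on five disjoint test sets.
algo_a = [0.81, 0.78, 0.84, 0.80, 0.79]
algo_b = [0.74, 0.75, 0.77, 0.73, 0.76]

t = paired_t_statistic(algo_a, algo_b)
print(f"paired t = {t:.2f}, df = {len(algo_a) - 1}")
```

Pairing by test set, rather than pooling all images, removes between-set difficulty variation from the comparison, which is why holding out several distinct test sets (rather than one) is what gives the test its statistical footing.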