This paper presents the design and implementation of a flexible and easy-to-use multi-camera acquisition setup for markerless human gesture monitoring in unconstrained environments. A robust two-stage framework is proposed to achieve full calibration of a variable number of synchronized cameras separated by long baselines. In the first stage, the intrinsic parameters are computed for each camera independently. In the second stage, the cameras are registered relative to one another by waving a red light-emitting device through the scene to produce a set of feature points. Matches are regrouped by camera pair so that pair-wise stereo relations can be recovered for as many pairs as possible; these relations are then scaled to form a consistent weighted camera graph that links all cameras. Experimental results demonstrate the accuracy of the achieved calibration and the suitability of the proposed approach for almost any multi-camera configuration. To validate the implementation, an application to the volumetric reconstruction of human subjects is presented.
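The final linking step described above (chaining pair-wise stereo relations through a weighted camera graph) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: it assumes each pair-wise relation is already expressed as a 4x4 rigid transform with an associated weight (for instance, a reprojection error), and it registers every camera to a reference camera by composing transforms along minimum-weight paths with Dijkstra's algorithm. The function names `compose` and `link_cameras` are hypothetical.

```python
import heapq
import numpy as np

def compose(T_ab, T_bc):
    """Chain two 4x4 rigid transforms: frame a -> b, then b -> c, gives a -> c."""
    return T_bc @ T_ab

def link_cameras(pairwise, n_cams, ref=0):
    """Register all cameras to a reference camera via a weighted camera graph.

    pairwise: dict {(i, j): (T_ij, weight)}, where T_ij maps points from
              camera i's frame to camera j's frame, and weight scores the
              reliability of that pair-wise calibration (lower is better).
    Returns a dict {camera_index: T_ref_to_camera} for every camera that is
    reachable from the reference in the graph.
    """
    # Build an undirected adjacency list; the reverse edge uses the
    # inverse transform.
    adj = {c: [] for c in range(n_cams)}
    for (i, j), (T, w) in pairwise.items():
        adj[i].append((j, T, w))
        adj[j].append((i, np.linalg.inv(T), w))

    dist = {ref: 0.0}          # best accumulated weight per camera
    pose = {ref: np.eye(4)}    # transform from reference frame to camera frame
    heap = [(0.0, ref)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, np.inf):
            continue  # stale heap entry
        for v, T_uv, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, np.inf):
                dist[v] = nd
                # Extend the chain: ref -> u, then u -> v.
                pose[v] = compose(pose[u], T_uv)
                heapq.heappush(heap, (nd, v))
    return pose
```

With this scheme, a camera pair whose stereo relation is poorly constrained (high weight) is bypassed whenever a more reliable multi-hop path exists, which is one plausible reason for weighting the graph rather than simply chaining adjacent cameras.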