We present a novel calibration method for multi-camera platforms based on multilinear constraints. The method recovers the relative orientation between the different cameras on the platform even when there are no corresponding feature points between the cameras, i.e. the cameras' fields of view do not overlap. It is shown that two translational motions in different directions are sufficient to linearly recover the rotational part of the relative orientation. Two general motions, involving both translation and rotation, are then sufficient to linearly recover the translational part. However, as a consequence of the speed-scale ambiguity, the absolute scale of the translational part cannot be determined unless prior information about the motions is available, e.g. from dead reckoning. It is further shown that in the case of planar motion, the vertical component of the translational part cannot be determined. However, if at least one feature point is visible in two different cameras, this vertical component can also be estimated. Finally, the performance of the proposed method is demonstrated in simulated experiments.
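To make the first claim concrete, the following is a minimal, noise-free sketch of how two non-parallel translation directions, observed in the frames of two rigidly mounted cameras, determine the relative rotation linearly. The function name and the specific direction vectors are illustrative assumptions, not part of the paper's method: a rotation R satisfying b_i = R a_i for two independent directions is fixed once the cross product supplies a third direction.

```python
import numpy as np

def rotation_from_translations(a1, a2, b1, b2):
    """Recover R with b_i = R @ a_i from two non-parallel
    translation directions seen in each camera frame.
    (Hypothetical helper; assumes noise-free directions.)"""
    a1, a2 = a1 / np.linalg.norm(a1), a2 / np.linalg.norm(a2)
    b1, b2 = b1 / np.linalg.norm(b1), b2 / np.linalg.norm(b2)
    # The cross product supplies a third independent direction,
    # making the linear system for R fully determined.
    A = np.column_stack([a1, a2, np.cross(a1, a2)])
    B = np.column_stack([b1, b2, np.cross(b1, b2)])
    R = B @ np.linalg.inv(A)
    # Project back onto SO(3) to remove numerical drift.
    U, _, Vt = np.linalg.svd(R)
    return U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt

# Synthetic check: a 30-degree rotation about the z-axis.
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
a1, a2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.3])
R_est = rotation_from_translations(a1, a2, R_true @ a1, R_true @ a2)
print(np.allclose(R_est, R_true))  # True
```

Note that only translation directions enter this step, which is why rotation can be recovered before (and independently of) the translational part of the relative orientation.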