This paper presents a framework for the control of a multisensor robot under several constraints. In this approach, the features coming from the different sensors are stacked into a single feature vector. The core of the approach is a weighting matrix that balances the contribution of each feature, allowing constraints to be taken into account. Constraints are treated as additional features that are smoothly injected into the control law. A multisensor model is introduced for the design of the control law, drawing similarities with linear quadratic control. The main properties of the scheme are presented, and we propose several strategies to cope with its main drawbacks. The framework is validated in a complex experiment illustrating various aspects of the approach: the positioning of a six-DOF robot arm by 3-D visual servoing. The considered constraints are both eye-in-hand and eye-to-hand visibility, together with joint-limit avoidance. The system is thus highly overdetermined, yet the task can be performed while ensuring several combinations of constraints.
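The weighting idea can be sketched as a weighted least-squares servoing law: constraint features are stacked with the task features, and their weights rise smoothly from zero as a limit is approached. This is a minimal illustrative sketch, not the paper's exact formulation; the function names, the smoothstep activation profile, and the gain `lam` are assumptions introduced here.

```python
import numpy as np

def smooth_weight(x, x_safe, x_limit):
    """Smoothstep weight: 0 while the constraint is inactive (x <= x_safe),
    rising smoothly to 1 as x reaches its limit. Illustrative choice; the
    paper's actual activation profile may differ."""
    t = np.clip((x - x_safe) / (x_limit - x_safe), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def weighted_control(J, e, H, lam=1.0):
    """Velocity command from the stacked multisensor Jacobian J, the feature
    error e, and a diagonal weighting matrix H (weighted least squares)."""
    return -lam * np.linalg.pinv(H @ J) @ (H @ e)

# Two task features plus one constraint feature for a 2-DOF example:
J = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # stacked Jacobian (3 features, 2 DOF)
e = np.array([1.0, 2.0, 0.0])      # feature error
w = smooth_weight(0.2, 0.5, 1.0)   # constraint far from its limit -> weight 0
H = np.diag([1.0, 1.0, w])
v = weighted_control(J, e, H)      # the inactive constraint row has no effect
```

With the constraint weight at zero the command reduces to the unconstrained task solution; as the weight grows toward 1, the constraint row progressively reshapes the least-squares solution, which is the smooth injection described above.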