In the context of vision-based hand gesture recognition, we study how a robot swarm can incrementally and cooperatively learn to classify an unseen gesture vocabulary using a simple information-sharing mechanism. Training examples and correction feedback are provided interactively by a human instructor. Each robot in the swarm is equipped with a statistical classifier, which is built and progressively updated using the input from the instructor. To learn collectively and speed up the process, the robots share with each other a selection of the locally acquired gesture data. Extensive experiments on a real-world dataset show that the proposed cooperative learning approach is effective and robust, in spite of its simplicity. To account for bandwidth limitations in network communication, we study the impact of different strategies for selecting the shared data, and we investigate how swarm size and the amount of shared information affect the learning speed.
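The cooperative scheme can be sketched in a few lines of code. The sketch below is illustrative only: the nearest-centroid classifier, the `Robot` and `share` names, and the bandwidth `budget` parameter are assumptions standing in for the paper's actual statistical classifier and sharing protocol. Each robot updates its own model incrementally from instructor-labeled examples and broadcasts a bounded selection of them to its peers.

```python
import random


class Robot:
    """A swarm member with a simple incremental nearest-centroid classifier.

    This is a placeholder for the paper's statistical classifier: it keeps
    running per-class feature sums so each new example updates the model
    in O(d) time without retraining.
    """

    def __init__(self, name):
        self.name = name
        self.sums = {}    # label -> per-dimension feature sums
        self.counts = {}  # label -> number of examples seen

    def learn(self, label, features):
        """Incorporate one labeled gesture example (instructor or peer)."""
        if label not in self.sums:
            self.sums[label] = [0.0] * len(features)
            self.counts[label] = 0
        self.counts[label] += 1
        self.sums[label] = [s + f for s, f in zip(self.sums[label], features)]

    def classify(self, features):
        """Return the label whose class centroid is nearest to `features`."""
        best, best_dist = None, float("inf")
        for label, sums in self.sums.items():
            centroid = [s / self.counts[label] for s in sums]
            dist = sum((c - f) ** 2 for c, f in zip(centroid, features))
            if dist < best_dist:
                best, best_dist = label, dist
        return best


def share(sender, swarm, buffer, budget, rng):
    """Broadcast up to `budget` locally acquired examples to all peers.

    `budget` models the bandwidth limit; the random subset is one of
    several possible selection strategies (a hypothetical choice here).
    """
    selection = rng.sample(buffer, min(budget, len(buffer)))
    for peer in swarm:
        if peer is not sender:
            for label, features in selection:
                peer.learn(label, features)
    return selection


# Usage: the instructor trains one robot, which then shares with the swarm.
swarm = [Robot(f"r{i}") for i in range(3)]
local = [("wave", [1.0, 0.1]), ("wave", [0.9, 0.0]),
         ("stop", [0.0, 1.0]), ("stop", [0.1, 0.9])]
for label, feats in local:
    swarm[0].learn(label, feats)
share(swarm[0], swarm, local, budget=4, rng=random.Random(0))
```

After the broadcast, every robot in the swarm can classify gestures it never saw directly from the instructor (e.g. `swarm[1].classify([0.9, 0.1])` returns `"wave"`), which is the intuition behind the learning speed-up studied in the paper.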