This article presented a mobile agent-based distributed vision fusion architecture that increases power efficiency by reducing excessive communication and enhances sensor fusion capabilities through migratory, in situ, on-demand algorithms for vision data processing and analysis. The IEEE FIPA standard-compliant mobile agent system Mobile-C, implemented as a C library, serves as the foundation of the architecture. Mobile agents dynamically migrate from one sensor node to another to combine all necessary sensor data in the manner specific to the system requesting the data. Agents are dispatched to target vision systems on the network only on demand, reducing network congestion and the required communication bandwidth, and their use in a distributed vision system allows specific fusion techniques to be encapsulated. The differences between monolithic and mobile agent-based approaches, along with future considerations, were also discussed.

The validity of the architecture was demonstrated through two case studies. The first involves localizing a part in a real experimental setup with a retrofitted robotic workcell composed of a Puma 560, an IBM 7575, a conveyor system, and a vision system. The second vertically and horizontally integrates multiple systems into a tier-scalable planetary reconnaissance experimental system involving two vision systems, a Puma 560 manipulator, and a K-Team Khepera III mobile robot. All source code, including Mobile-C and the mobile agent code presented in the article, is available at the project Web site.