Vision-based sensors are a key component of robot systems, where many tasks depend on image data. The real-time constraints of the control tasks can bind a large share of the available processing power to a single sensor modality. Dedicated and distributed processing resources are the natural way to overcome this limitation. This paper presents first experiments that use embedded processors as well as dedicated hardware to perform various image (pre)processing tasks. From these experiments, architectural concepts and requirements for smart vision systems are derived.