Perceptual components, such as audiovisual processors and multimodal fusion modules, are integral to smart space applications. These components provide information about human actors' identity, location, activities, and sometimes goals through person trackers, person-identification components, and other situation-identification elements. Perceptual components are usually computationally demanding because they often process vast amounts of data in real time. Legacy middleware frameworks facilitate integration of pervasive smart space applications, yet they make no attempt to standardize perceptual component data and interfaces. To develop context-aware applications for smart spaces, designers must therefore integrate perceptual components from various vendors.
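To make the standardization problem concrete, the following is a minimal sketch of what a common perceptual-component interface might look like. All names here (`Observation`, `PerceptualComponent`, `PersonTracker`, `latest_location`) are hypothetical and illustrative; they are not drawn from any existing middleware framework.

```python
from dataclasses import dataclass
from typing import Protocol

# Hypothetical sketch: a shared observation format and component
# interface that vendors could implement, so fusion layers need not
# know each component's proprietary API.

@dataclass
class Observation:
    source: str        # component that produced the observation
    kind: str          # e.g. "location", "identity", "activity"
    subject: str       # identifier of the tracked human actor
    value: tuple       # payload, e.g. (x, y) coordinates for a location
    timestamp: float   # seconds since some shared clock epoch

class PerceptualComponent(Protocol):
    """Interface every perceptual component would expose."""
    def observations(self) -> list[Observation]: ...

class PersonTracker:
    """Toy vendor component emitting location observations."""
    def __init__(self, name: str):
        self.name = name
        self._buffer: list[Observation] = []

    def detect(self, subject: str, x: float, y: float, t: float) -> None:
        self._buffer.append(
            Observation(self.name, "location", subject, (x, y), t))

    def observations(self) -> list[Observation]:
        out, self._buffer = self._buffer, []  # drain the buffer
        return out

def latest_location(components: list[PerceptualComponent], subject: str):
    """Fusion-layer helper: newest location for a subject across
    any mix of vendors' components, all consumed uniformly."""
    obs = [o for c in components for o in c.observations()
           if o.kind == "location" and o.subject == subject]
    return max(obs, key=lambda o: o.timestamp).value if obs else None
```

With such a shared format, a context-aware application could query trackers from different vendors through one code path instead of writing per-vendor adapters, which is precisely the gap the legacy frameworks leave open.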