We present an activity-recognition system for assisted living applications and smart homes. While existing systems tend to rely on expensive computation over comparatively high-dimensional data sets, ours leverages a small number of fundamentally different sensor measurements that provide context information pertaining to the person's location, and action information obtained by observing the motion of the body and arms. Camera nodes placed on the ceiling track people in the environment and place them in the context of a building map on which areas and objects of interest are premarked. Additionally, a single inertial sensor node on the subject's arm infers arm pose, heading, and motion frequency using an accelerometer, gyroscope, and magnetometer. These four measurements are parsed by a lightweight hierarchy of finite state machines, yielding recognition with high precision and recall (0.92 and 0.93, respectively).
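To illustrate the idea of a lightweight hierarchy of finite state machines operating on the four measurements (location, arm pose, heading, motion frequency), here is a minimal sketch. The state names, thresholds, and activities (`stirring`, `cooking`, etc.) are illustrative assumptions, not the paper's actual state machines.

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    """Hypothetical container for the four inputs named in the abstract."""
    location: str        # context from ceiling cameras + premarked building map
    arm_pose: str        # e.g. "raised" or "lowered", from the inertial node
    heading: float       # degrees, from the magnetometer
    motion_freq: float   # Hz, from accelerometer/gyroscope

class ArmFSM:
    """Lower-level FSM: classifies arm activity from pose and motion frequency.
    Thresholds here are invented for illustration."""
    def __init__(self):
        self.state = "idle"

    def step(self, m: Measurement) -> str:
        if m.motion_freq > 1.5 and m.arm_pose == "raised":
            self.state = "stirring"
        elif m.motion_freq > 0.5:
            self.state = "moving"
        else:
            self.state = "idle"
        return self.state

class ActivityFSM:
    """Top-level FSM: combines location context with the arm FSM's output,
    mirroring the hierarchical parsing described in the abstract."""
    def __init__(self):
        self.arm = ArmFSM()
        self.state = "unknown"

    def step(self, m: Measurement) -> str:
        arm_state = self.arm.step(m)
        if m.location == "stove" and arm_state == "stirring":
            self.state = "cooking"
        elif m.location == "sink" and arm_state == "moving":
            self.state = "washing"
        else:
            self.state = "other"
        return self.state

fsm = ActivityFSM()
print(fsm.step(Measurement("stove", "raised", 90.0, 2.0)))  # → cooking
```

The hierarchy keeps each machine small: the lower level abstracts raw inertial features into a few discrete arm states, and the upper level only has to combine those states with coarse location context, which is what makes this style of recognition computationally lightweight.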