Radio frequency (RF) sensing networks are a class of wireless sensor networks (WSNs) that use RF signals to accomplish tasks such as passive, device-free localization and tracking. The algorithms used for these tasks usually require access to measurements of the baseline received signal strength (RSS) on each link. However, it is often impossible to collect this calibration data, i.e., measurements taken during an offline calibration period when the region of interest is empty of targets. We propose adapting background subtraction methods from the field of computer vision to estimate baseline RSS values from measurements taken while the system is online and obstructions may be present. We do so by drawing an analogy between the intensity of a background pixel in an image and the baseline RSS value of a WSN link, and then translating the concepts of temporal similarity, spatial similarity, and spatial ergodicity, which underlie specific background subtraction algorithms, to WSNs. Using experimental data, we show that these techniques estimate baseline RSS values accurately enough that RF tomographic tracking can be carried out in a variety of environments without the need for a calibration period.
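To make the pixel/link analogy concrete, the following is a minimal sketch (not the paper's actual algorithm) of a temporal-similarity background model applied to one link's RSS stream. It mirrors the running-average background subtraction common in computer vision: the baseline is updated only when the new sample lies close to the current estimate, so transient obstructions are flagged as "foreground" rather than absorbed into the baseline. All parameter names and values (`alpha`, `threshold`, `noise_std`) are illustrative assumptions.

```python
def update_baseline(baseline, rss, alpha=0.02, threshold=3.0, noise_std=1.0):
    """One step of a running-average background model for a single WSN link.

    Analogy with computer vision:
      baseline <-> background pixel intensity
      rss      <-> current pixel intensity

    A sample is treated as foreground (an obstruction on the link) when it
    deviates from the baseline by more than `threshold` noise standard
    deviations; only background samples update the baseline estimate.
    All constants here are illustrative, not taken from the paper.
    """
    is_foreground = abs(rss - baseline) > threshold * noise_std
    if not is_foreground:
        # Exponential moving average: slowly track the empty-room RSS.
        baseline = (1.0 - alpha) * baseline + alpha * rss
    return baseline, is_foreground


# Illustrative run: a link with baseline near -50 dBm, briefly obstructed.
baseline = -50.0
for rss in [-50.2, -49.8, -60.0, -59.5, -50.1]:  # -60s mimic an obstruction
    baseline, fg = update_baseline(baseline, rss)
    print(f"rss={rss:6.1f}  baseline={baseline:7.3f}  foreground={fg}")
```

Because obstructed samples are excluded from the update, the baseline estimate drifts only slightly during the obstruction, which is the property that lets such a model run without an offline calibration period.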