Abstract:
A brain–computer interface (BCI) acquires brain signals, analyzes them, and translates them into commands that are relayed to actuation devices to carry out desired actions. With the widespread connectivity of everyday devices realized by the advent of the Internet of Things (IoT), a BCI can empower individuals to control objects such as smart home appliances or assistive robots directly via their thoughts. However, realizing this vision faces a number of challenges, the most important being accurately interpreting an individual's intent from raw brain signals, which are often of low fidelity and noisy. Moreover, preprocessing brain signals and the subsequent feature engineering are both time-consuming and highly reliant on human domain expertise. To address these issues, in this paper we propose a unified deep learning-based framework that enables effective human-thing cognitive interactivity in order to bridge individuals and IoT objects. We design a reinforcement learning-based selective attention mechanism (SAM) to discover the distinctive features in the input brain signals. In addition, we propose a modified long short-term memory (LSTM) network to distinguish the interdimensional information forwarded from the SAM. To evaluate the proposed framework, we conduct extensive real-world experiments and demonstrate that our model outperforms a number of competitive state-of-the-art baselines. Two practical real-time human-thing cognitive interaction applications are presented to validate the feasibility of our approach.
Published in: IEEE Internet of Things Journal (Volume: 6, Issue: 2, April 2019)
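To make the pipeline described in the abstract concrete, the following is a minimal sketch, assuming a PyTorch implementation. The module names, layer sizes, and the simple sigmoid gating used for attention are illustrative placeholders rather than the authors' actual architecture; in particular, the paper learns the attention weights with reinforcement learning, which is omitted here.

```python
# Hypothetical sketch: attention-filtered brain-signal features fed to an LSTM
# classifier whose outputs could be mapped to IoT commands. Not the authors' code.
import torch
import torch.nn as nn


class SelectiveAttention(nn.Module):
    """Scores each input channel and applies a soft, per-channel selection."""

    def __init__(self, n_channels: int):
        super().__init__()
        self.scorer = nn.Linear(n_channels, n_channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels); weights in [0, 1] emphasize informative channels
        weights = torch.sigmoid(self.scorer(x))
        return x * weights


class BCIClassifier(nn.Module):
    """Attention module followed by an LSTM and a linear classification head."""

    def __init__(self, n_channels: int, hidden: int, n_classes: int):
        super().__init__()
        self.attention = SelectiveAttention(n_channels)
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.attention(x)
        _, (h, _) = self.lstm(x)   # final hidden state summarizes the sequence
        return self.head(h[-1])    # class logits, e.g. one per device command


# Example: a batch of 8 EEG segments, 128 time steps, 64 channels, 5 commands.
model = BCIClassifier(n_channels=64, hidden=128, n_classes=5)
logits = model(torch.randn(8, 128, 64))
print(logits.shape)  # torch.Size([8, 5])
```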