Abstract:
Environment recognition systems can facilitate the predictive control of lower-limb exoskeletons and prostheses by recognizing the oncoming walking environment prior to physical interactions. While many environment recognition systems have been developed using different wearable technology and classification algorithms, their relative operational performances have not been evaluated. Motivated to determine the state-of-the-science and propose future directions for research innovation, we conducted an extensive comparative analysis of the wearable technology, training datasets, and classification algorithms used for vision-based environment recognition. The advantages and drawbacks of different wearable cameras and training datasets were reviewed. Environment recognition systems using pattern recognition, machine learning, and convolutional neural networks for image classification were compared. We evaluated the performances of different deep learning networks using a balanced metric called "NetScore", which considers image classification accuracy alongside computational and memory storage requirements. Based on our analysis, future research in environment recognition systems for lower-limb exoskeletons and prostheses should consider developing 1) efficient deep convolutional neural networks for onboard classification, and 2) large-scale open-source datasets for training and comparing image classification algorithms across research groups.
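The abstract does not state the NetScore formula itself. A minimal sketch of how such a balanced metric can be computed, assuming the commonly cited formulation (accuracy rewarded, parameter count and multiply-accumulate operations penalized, with exponents alpha, beta, gamma as tunable weights; the function name and default exponents here are illustrative assumptions, not taken from this paper):

```python
import math

def netscore(accuracy_pct, params_millions, macs_billions,
             alpha=2.0, beta=0.5, gamma=0.5):
    """Balanced network score: higher is better.

    accuracy_pct    -- top-1 classification accuracy in percent
    params_millions -- number of trainable parameters, in millions
    macs_billions   -- multiply-accumulate operations per inference, in billions
    alpha/beta/gamma are assumed weighting exponents controlling how strongly
    accuracy is rewarded and compute/memory cost is penalized.
    """
    return 20.0 * math.log10(
        accuracy_pct ** alpha / (params_millions ** beta * macs_billions ** gamma)
    )

# A smaller network with equal accuracy scores higher:
compact = netscore(70.0, params_millions=5.0, macs_billions=1.0)
large = netscore(70.0, params_millions=50.0, macs_billions=10.0)
```

Under this formulation, halving parameters or compute at fixed accuracy raises the score, which is why such metrics favor networks suited to onboard (embedded) classification.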
Published in: 2020 8th IEEE RAS/EMBS International Conference for Biomedical Robotics and Biomechatronics (BioRob)
Date of Conference: 29 November 2020 - 01 December 2020
Date Added to IEEE Xplore: 15 October 2020