Abstract:
In recent years, a vast amount of video data has been generated by surveillance cameras in cities and industrial sites, by social media, and by internet platforms, and this trend is likely to continue as video is produced from ever more sources. Consequently, there is a growing demand for automatic processing and analysis of large-scale video data. Deep learning-powered video analytics can make these unstructured videos understandable and make the video analysis process faster and more efficient. At the same time, the reproduction of human movement has long been an inspiration for robotics. This project introduces deep learning-powered human motion imitation via motion primitives. This work overviews the data processing pipeline, starting from human observation in videos, progressing through motion analysis via deep learning-powered video analytics and motion modeling with motion primitives, and ending with reproduction in the V-REP robotic simulator. The proposed framework is an early version of deep learning video analytics for human motion imitation with motion primitive approaches. It controls the robot in the simulator environment to replicate the desired movement based on human activity recognition in the video.
Date of Conference: 15-17 October 2020
Date Added to IEEE Xplore: 23 November 2020