Motion Prediction for Autonomous Vehicles Using Deep Learning Techniques


Abstract:

Autonomous vehicles require motion prediction of nearby traffic agents to ensure safe navigation. By anticipating the movements of surrounding objects such as other vehicles, pedestrians, and bicycles, the autonomous vehicle can make informed decisions to prevent collisions, adjust speed and direction, and operate effectively in changing conditions. Motion prediction is thus an essential aspect of autonomous vehicle systems, contributing to increased safety, dependability, and efficiency. This work develops an effective mechanism to predict the movement direction of traffic entities around the ego vehicle (the vehicle from whose perspective the scene is referenced) using deep learning techniques and a LIDAR-based dataset.
Date of Conference: 06-08 July 2023
Date Added to IEEE Xplore: 23 November 2023
Conference Location: Delhi, India

I. Introduction

Computer systems can easily solve mathematical functions and equations that humans find difficult. However, some problems that humans solve "intuitively" remain incredibly challenging for computers. One such task is predicting the motion of nearby objects that are constantly moving. Predicting the behaviour of traffic agents around autonomous vehicles is an unsolved problem that must be tackled to attain full self-driving autonomy. From its inception, the self-driving vehicle industry has relied on a suite of sensors such as RADAR (Radio Detection And Ranging), LIDAR (Light Detection and Ranging), and cameras [10]. The purpose of these sensors is to ensure the safety of the vehicle during driving by locating the nearby traffic agents at each point in time. Knowing the physical state of the traffic agents at a given moment is crucial for a safe and comfortable ride. The prediction problem involves estimating the future state of the traffic agents interacting with the autonomous vehicle. The data for this predictive modelling problem is primarily captured by the sensors installed on the vehicle and then fed into various predictive modelling techniques.
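To make the prediction problem concrete, it can be framed as: given an agent's recently observed states, forecast its future states. A minimal illustrative sketch (not the method proposed in this paper) is the well-known constant-velocity baseline, which extrapolates the agent's last observed velocity forward in time:

```python
import numpy as np

def constant_velocity_forecast(history, horizon, dt=0.1):
    """Illustrative baseline: extrapolate the agent's last observed
    velocity over `horizon` future time steps.

    history: sequence of past (x, y) positions sampled every `dt` seconds.
    Returns an (horizon, 2) array of predicted (x, y) positions.
    """
    history = np.asarray(history, dtype=float)
    velocity = (history[-1] - history[-2]) / dt       # last-step velocity (m/s)
    offsets = np.arange(1, horizon + 1)[:, None] * dt # future time offsets (s)
    return history[-1] + offsets * velocity           # linear extrapolation

# Hypothetical agent moving 1 m per step along x (dt = 0.1 s, i.e. 10 m/s)
past = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
pred = constant_velocity_forecast(past, horizon=3)
# pred is [[3, 0], [4, 0], [5, 0]]
```

Learned predictors such as the deep networks discussed in this work replace this linear extrapolation with models that can capture interactions, road context, and non-linear manoeuvres.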

References
[1] A. Bochkovskiy, "YOLOv4: Optimal Speed and Accuracy of Object Detection," 2020. doi: 10.48550/arXiv.2004.10934.
[2] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. M. Kirillov, and S. Zagoruyko, "End-to-End Object Detection with Transformers," 2020. doi: 10.48550/arXiv.2005.12872.
[3] S. Casas, W. Luo, and R. Urtasun, "IntentNet: Learning to predict intention from raw sensor data," 2021.
[4] N. Djuric et al., "MultiXNet: Multiclass multistage multimodal motion prediction," 2021 IEEE Intelligent Vehicles Symposium (IV), 2021. doi: 10.1109/iv48863.2021.9575718.
[5] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. doi: 10.1109/cvpr.2016.90.
[6] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735–1780, Dec. 1997. doi: 10.1162/neco.1997.9.8.1735.
[7] J. Houston, "One Thousand and One Hours: Self-driving Motion Prediction Dataset," 2020. doi: 10.48550/arXiv.2006.14480.
[8] W. Luo, B. Yang, and R. Urtasun, "Fast and Furious: Real time end-to-end 3D detection, tracking and motion forecasting with a single convolutional net," 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018. doi: 10.1109/cvpr.2018.00376.
[9] S. Mandal, S. Biswas, V. E. Balas, R. N. Shaw, and A. Ghosh, "Motion prediction for autonomous vehicles from Lyft dataset using deep learning," 2020 IEEE 5th International Conference on Computing Communication and Automation (ICCCA), 2020. doi: 10.1109/iccca49541.2020.9250790.
[10] Y. Li and J. Ibanez-Guzman, "Lidar for autonomous driving: The principles, challenges, and trends for automotive lidar and perception systems," IEEE Signal Processing Magazine, vol. 37, no. 4, pp. 50–61, 2020. doi: 10.1109/msp.2020.2973615.
[11] M. C. Chirodea et al., "Comparison of TensorFlow and PyTorch in convolutional neural network-based applications," 2021 13th International Conference on Electronics, Computers and Artificial Intelligence (ECAI), 2021. doi: 10.1109/ecai52376.2021.9515098.
[12] J. Redmon and A. Farhadi, "YOLOv3: An Incremental Improvement," 2018. doi: 10.48550/arXiv.1804.02767.
[13] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137–1149, 2017. doi: 10.1109/tpami.2016.2577031.
[14] Z. Tian, C. Shen, H. Chen, and T. He, "FCOS: Fully convolutional one-stage object detection," 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019. doi: 10.1109/iccv.2019.00972.
[15] X. Zhang, F. Wan, C. Liu, X. Ji, and Q. Ye, "Learning to match anchors for visual object detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 6, pp. 3096–3109, 2022. doi: 10.1109/tpami.2021.3050494.
[16] J. Zhou, G. Cui, S. Hu, Z. Zhang, C. Yang, Z. Liu, L. Wang, C. Li, and M. Sun, "Graph Neural Networks: A Review of Methods and Applications," 2021.
[17] A. Alahi et al., "Social LSTM: Human trajectory prediction in crowded spaces," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. doi: 10.1109/cvpr.2016.110.
[18] N. Deo and M. M. Trivedi, "Convolutional social pooling for vehicle trajectory prediction," 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2018. doi: 10.1109/cvprw.2018.00196.
[19] T. Zhao et al., "Multi-agent tensor fusion for contextual trajectory prediction," 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019. doi: 10.1109/cvpr.2019.01240.
[20] S. Srikanth et al., "INFER: Intermediate representations for future prediction," 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019. doi: 10.1109/iros40897.2019.8968553.
[21] M. Schreiber, V. Belagiannis, C. Glaser, and K. Dietmayer, "Dynamic occupancy grid mapping with recurrent neural networks," 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021. doi: 10.1109/icra48506.2021.9561375.
[22] B. Kim et al., "Probabilistic vehicle trajectory prediction over occupancy grid map via recurrent neural network," 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), 2017. doi: 10.1109/itsc.2017.8317943.
[23] H. Akolkar, S. H. Ieng, and R. Benosman, "Real-time high speed motion prediction using fast aperture-robust event-driven visual flow," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021. doi: 10.1109/tpami.2020.3010468.
[24] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets robotics: The KITTI dataset," International Journal of Robotics Research (IJRR), 2013.
[25] P. Sun et al., "Scalability in perception for autonomous driving: Waymo Open Dataset," 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. doi: 10.1109/cvpr42600.2020.00252.
[26] H. Caesar, V. Bankiti, A. H. Lang, S. Vora, V. E. Liong, Q. Xu, et al., "nuScenes: A multimodal dataset for autonomous driving," arXiv preprint arXiv:1903.11027, 2019.
[27] W. Maddern, G. Pascoe, C. Linegar, and P. Newman, "1 year, 1000 km: The Oxford RobotCar dataset," The International Journal of Robotics Research, vol. 36, no. 1, pp. 3–15, 2016. doi: 10.1177/0278364916679498.
[28] C. R. Qi, W. Liu, C. Wu, H. Su, and L. J. Guibas, "Frustum PointNets for 3D object detection from RGB-D data," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
[29] A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom, "PointPillars: Fast encoders for object detection from point clouds," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
[30] G. James, D. Witten, T. Hastie, and R. Tibshirani, An Introduction to Statistical Learning, vol. 112. New York, NY, USA: Springer, 2013.
