Abstract:
In automated driving, pedestrian detection by sensors poses a particular challenge due to pedestrians' smaller size compared to vehicles, occlusion by other objects, and their unconstrained movement across the street geometry. Because pedestrians are vulnerable road users, analyzing and predicting their behavior and intention is vital when developing automated vehicles. However, while interpreting a pedestrian's intention is critical for path-planning algorithms, perception functions are more challenged by a pedestrian's pose and animation. Training machine learning models on imbalanced data sets that consist mainly of regularly walking pedestrians may leave the models unable to detect pedestrians with anomalous body poses, such as during stumbling or falling. Recent work proposes enhancing data sets collected in real traffic with synthetic data generated by simulation software. The freedom inside a simulation framework allows a developer to test automated driving software in hazardous scenarios that occur less frequently in real traffic. With this work, we introduce a framework for generating manually defined pedestrian animations rarely recorded in real traffic. Using the CARLA simulation framework, we extend the existing functionality with a module that simulates pedestrian animations beyond the already implemented standing, walking, and running. Additionally, we test the pedestrian animations for their impact on perception performance using a YOLOv5 object detector custom-trained on synthetic data.
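To illustrate the kind of pose manipulation the abstract describes, the following is a minimal sketch using CARLA's Python API, assuming a CARLA version (0.9.13 or later) that exposes walker.set_bones() and blend_pose(); the chosen bone names and rotation values are illustrative placeholders, not the paper's actual animation module.

```python
# Sketch: spawn a pedestrian in CARLA and override parts of its skeleton to
# approximate an anomalous pose (e.g. stumbling). Bone names and angles are
# illustrative only; the paper's animation module is not reproduced here.
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Pick any pedestrian blueprint and a spawn location from the navigation mesh.
walker_bp = world.get_blueprint_library().filter("walker.pedestrian.*")[0]
spawn_location = world.get_random_location_from_navigation()
walker = world.spawn_actor(walker_bp, carla.Transform(spawn_location))

# Override selected bones relative to the engine's current animation.
custom_pose = carla.WalkerBoneControlIn([
    ("crl_spine__C", carla.Transform(carla.Location(), carla.Rotation(pitch=35.0))),
    ("crl_thigh__L", carla.Transform(carla.Location(), carla.Rotation(pitch=-20.0))),
])
walker.set_bones(custom_pose)
walker.blend_pose(1.0)  # fully blend the custom pose over the default animation
```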
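For the evaluation side, a custom-trained YOLOv5 model can be loaded and run on rendered simulation frames via the standard torch.hub interface; the weights file and image path below are hypothetical placeholders, not artifacts from the paper.

```python
# Sketch: run a custom-trained YOLOv5 detector on a single simulated frame.
import torch

# Load YOLOv5 with custom weights (placeholder file name).
model = torch.hub.load("ultralytics/yolov5", "custom", path="synthetic_pedestrians.pt")

results = model("carla_frame_0001.png")  # inference on one rendered frame
results.print()                          # summary of detected classes
detections = results.pandas().xyxy[0]    # bounding boxes as a pandas DataFrame
```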
Date of Conference: 24-28 September 2023
Date Added to IEEE Xplore: 13 February 2024