Abstract:
Large Language Models (LLMs) represent the best hope for augmenting Advanced Driver Assistance Systems (ADAS) with powerful reasoning that is nonetheless fully grounded in real-world driving contexts. In this study, we propose an assistive driving paradigm in which an LLM makes decisions by performing temporal analysis of data obtained from object detection models (YOLOv9 and YOLOv10). We incorporate both detected objects and sensor data into a Retrieval-Augmented Generation (RAG) pipeline that allows the LLM to reason arithmetically and apply common sense, producing driving decisions that are safe and appropriate to the current context. The proposed model directly handles challenges such as low visibility and out-of-distribution scenarios, where standard models typically fail. In a simulated environment, we assess the LLM’s efficiency in decision-making and show that its reasoning capabilities substantially improve both the accuracy of its predictions and the precision of its responses. These results indicate that, with the help of LLMs, ADAS can be developed into life-changing technologies that help maintain safety even under increasing uncertainty in the environment.
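The abstract describes a pipeline in which detections and sensor readings are assembled, together with retrieved context, into a prompt for the LLM. Below is a minimal, hypothetical Python sketch of that idea; it is not the authors' code, and names such as Detection, retrieve_guidelines, and query_llm are placeholders standing in for the detector output, the RAG retriever, and the LLM call.

# Hypothetical sketch (not the authors' implementation): building a
# retrieval-augmented prompt from temporal detections and sensor data.

from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "car" (from YOLOv9/YOLOv10)
    distance_m: float  # estimated distance from the ego vehicle
    timestamp: float   # seconds since the start of the observation window

def build_prompt(history: List[Detection], speed_kmh: float, visibility: str,
                 retrieved_context: List[str]) -> str:
    """Combine temporal detections, sensor data, and retrieved guidelines."""
    lines = [f"{d.timestamp:.1f}s: {d.label} at {d.distance_m:.1f} m" for d in history]
    context = "\n".join(retrieved_context)
    return (
        "You are a driving assistant. Decide the safest action.\n"
        f"Ego speed: {speed_kmh} km/h, visibility: {visibility}\n"
        "Detections over the observation window:\n" + "\n".join(lines) + "\n"
        "Relevant guidelines:\n" + context + "\n"
        "Answer with one of: MAINTAIN, SLOW_DOWN, BRAKE, CHANGE_LANE."
    )

# Toy usage: a pedestrian closing in over two seconds in fog.
history = [Detection("pedestrian", 25.0, 0.0),
           Detection("pedestrian", 18.0, 1.0),
           Detection("pedestrian", 12.0, 2.0)]
prompt = build_prompt(history, speed_kmh=45.0, visibility="fog",
                      retrieved_context=["Yield to pedestrians near crossings."])
# decision = query_llm(prompt)  # placeholder for the LLM call, e.g. "BRAKE"

The decreasing distances across timestamps are what give the LLM the temporal signal for simple arithmetic reasoning (e.g., estimating closing speed) before choosing an action.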
Date of Conference: 06-10 January 2025
Date Added to IEEE Xplore: 20 February 2025