LimSim++: A Closed-Loop Platform for Deploying Multimodal LLMs in Autonomous Driving

1 Shanghai Artificial Intelligence Laboratory, Shanghai, China
2 College of Control Science and Engineering, Zhejiang University


Abstract

This paper introduces LimSim++, an extended version of LimSim designed for the application of Multimodal Large Language Models ((M)LLMs) in autonomous driving. Acknowledging the limitations of existing simulation platforms, LimSim++ addresses the need for a long-term, closed-loop infrastructure that supports continuous learning and improved generalization in autonomous driving. The platform offers extended-duration, multi-scenario simulations and provides the essential information required by (M)LLM-driven vehicles. Users can engage in prompt engineering, model evaluation, and framework enhancement, making LimSim++ a versatile tool for both research and practice. The contributions include an open-source testing platform, a baseline (M)LLM-driven framework, and validation through quantitative experiments across diverse scenarios. LimSim++ serves as a crucial resource for advancing the integration of (M)LLMs into trustworthy autonomous driving systems. The open-source resources are available on GitHub: LimSim/LimSim_plus.

Platform Composition

LimSim++ is the first closed-loop evaluation platform specifically developed for (M)LLM-driven autonomous driving.

Framework

LimSim++ facilitates closed-loop simulation of (M)LLM-driven autonomous driving, creating an environment for continuous learning.
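As a concrete illustration, here is a minimal Python sketch of what one iteration of such a closed loop might look like. All names here (`DrivingLoop`, `observe`, `act`, `llm`, the action vocabulary) are hypothetical stand-ins, not the actual LimSim++ API; the real platform supplies much richer scene information and evaluation signals.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

# Hypothetical sketch of one (M)LLM-in-the-loop driving step;
# none of these names come from the actual LimSim++ code base.
@dataclass
class DrivingLoop:
    observe: Callable[[], str]    # returns a text description of the current scene
    act: Callable[[str], None]    # applies a high-level decision to the simulator
    llm: Callable[[str], str]     # queries the (M)LLM with a prompt
    history: List[Tuple[str, str]] = field(default_factory=list)

    def step(self) -> str:
        scene = self.observe()
        prompt = (
            f"Scene:\n{scene}\n"
            "Choose one action: KEEP_LANE, CHANGE_LEFT, CHANGE_RIGHT, "
            "ACCELERATE, DECELERATE."
        )
        decision = self.llm(prompt).strip()
        self.act(decision)
        self.history.append((scene, decision))  # material for evaluation and memory
        return decision
```

Each step turns the simulated scene into a prompt, queries the (M)LLM for a high-level decision, applies it to the simulator, and records the exchange for later evaluation or memory-based improvement.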

Support for Various Scenarios

Support for Multimodal LLMs
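For multimodal models, a rendered camera frame and the textual scene context must be packed into a single request. The sketch below uses the OpenAI-style chat message format purely as an example; the function name and prompt wording are illustrative, not LimSim++'s own interface.

```python
import base64

# Sketch: combine an image and scene text into one multimodal chat message
# (OpenAI-style format shown as an example; names and wording are illustrative).
def build_multimodal_message(image_path: str, scene_text: str) -> list:
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    return [{
        "role": "user",
        "content": [
            {"type": "text",
             "text": f"Scene context:\n{scene_text}\nDecide the next driving action."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }]
```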

Enhancement with Memory

GPT-3.5 driving in zero-shot conditions: collision with a vehicle when changing lanes to the left.

GPT-3.5 driving in zero-shot conditions: excessive waiting at an on-ramp.

GPT-3.5 driving with memory: safe lane changes and smooth on-ramp merging.
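A simple way to realize such a memory is to store past (scenario, decision, reflection) triples and retrieve the most similar cases when composing a new prompt. The sketch below is a minimal illustration under that assumption; `embed` is a placeholder for any text-embedding model, and this is not LimSim++'s actual memory implementation.

```python
import math

# Minimal memory sketch: store past driving experiences and retrieve the
# most similar ones by cosine similarity over text embeddings.
class DrivingMemory:
    def __init__(self, embed):
        self.embed = embed   # Callable[[str], list[float]], hypothetical embedder
        self.items = []      # list of (vector, record) pairs

    def add(self, scenario: str, decision: str, reflection: str) -> None:
        record = {"scenario": scenario, "decision": decision, "reflection": reflection}
        self.items.append((self.embed(scenario), record))

    def retrieve(self, scenario: str, k: int = 3):
        query = self.embed(scenario)

        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
            return dot / norm if norm else 0.0

        ranked = sorted(self.items, key=lambda it: cosine(query, it[0]), reverse=True)
        return [record for _, record in ranked[:k]]
```

Retrieved records can then be appended to the prompt so the (M)LLM can avoid repeating past mistakes, which is the effect illustrated by the memory-enabled example above.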

BibTeX

@article{fu2024limsim,
  title={LimSim++: A Closed-Loop Platform for Deploying Multimodal LLMs in Autonomous Driving}, 
  author={Daocheng Fu and Wenjie Lei and Licheng Wen and Pinlong Cai and Song Mao and Min Dou and Botian Shi and Yu Qiao},
  year={2024},
  eprint={2402.01246},
  archivePrefix={arXiv},
  primaryClass={cs.RO}
}