
[ECCV 2024 Oral] DriveLM: Driving with Graph Visual Question Answering


OpenDriveLab/DriveLM


DriveLM: Driving with Graph Visual Question Answering

Autonomous Driving Challenge 2024 Driving-with-Language Leaderboard.

License: Apache 2.0 · arXiv · Hugging Face

drivelm_nus_demo_v2_1.mp4

Highlights

🔥 We instantiate datasets (DriveLM-Data) built upon nuScenes and CARLA, and propose a VLM-based baseline approach (DriveLM-Agent) for jointly performing Graph VQA and end-to-end driving.

🏁 DriveLM serves as a main track in the CVPR 2024 Autonomous Driving Challenge. Everything you need for the challenge is HERE, including the baseline, test data, submission format, and evaluation pipeline!

News

  • [2025/01/08] Drive-Bench release! An in-depth analysis of what DriveLM is really benchmarking. Take a look at arXiv.
  • [2024/07/16] DriveLM official leaderboard reopened!
  • [2024/07/01] DriveLM was accepted to ECCV 2024! Congrats to the team!
  • [2024/06/01] The challenge has ended! See the final leaderboard.
  • [2024/03/25] Challenge test server is online and the test questions are released. Check it out!
  • [2024/02/29] Challenge repo released: baseline, data and submission format, evaluation pipeline. Have a look!
  • [2023/12/22] DriveLM-nuScenes full v1.0 and paper released.
  • [2023/08/25] DriveLM-nuScenes demo released.

Table of Contents

  1. Highlights
  2. Getting Started
  3. Current Endeavors and Future Directions
  4. TODO List
  5. DriveLM-Data
  6. License and Citation
  7. Other Resources

Getting Started

To get started with DriveLM:

(back to top)

Current Endeavors and Future Directions

  • The advent of GPT-style multimodal models in real-world applications motivates the study of the role of language in driving.
  • Dates below reflect the arXiv submission dates.
  • If there is any missing work, please reach out to us!

DriveLM attempts to address some of the challenges faced by the community.

  • Lack of data: DriveLM-Data serves as a comprehensive benchmark for driving with language.
  • Embodiment: GVQA provides a potential direction for embodied applications of LLMs / VLMs.
  • Closed-loop: DriveLM-CARLA attempts to explore closed-loop planning with language.

(back to top)

TODO List

  • DriveLM-Data
    • DriveLM-nuScenes
    • DriveLM-CARLA
  • DriveLM-Metrics
    • GPT-score
  • DriveLM-Agent
    • Inference code on DriveLM-nuScenes
    • Inference code on DriveLM-CARLA

(back to top)

DriveLM-Data

We facilitate the Perception, Prediction, Planning, Behavior, and Motion tasks with human-written reasoning logic as a connection between them. We propose the task of GVQA on DriveLM-Data.

📊 Comparison and Stats

DriveLM-Data is the first language-driving dataset facilitating the full stack of driving tasks with graph-structured logical dependencies.

Links to details about the GVQA task, Dataset Features, and Annotation.
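As a rough illustration of how graph-structured logical dependencies can be handled in practice, the sketch below models QA pairs across the Perception → Prediction → Planning → Behavior → Motion stages as a small DAG and visits them so that every parent question is answered before its children. This is a hedged sketch only: the node ids and the `question`/`parents` fields are hypothetical, not the actual DriveLM-Data schema.

```python
from collections import deque

def topological_order(nodes):
    """Return QA node ids so that every parent precedes its children (Kahn's algorithm)."""
    indegree = {nid: len(n["parents"]) for nid, n in nodes.items()}
    children = {nid: [] for nid in nodes}
    for nid, n in nodes.items():
        for p in n["parents"]:
            children[p].append(nid)
    queue = deque(nid for nid, d in indegree.items() if d == 0)
    order = []
    while queue:
        nid = queue.popleft()
        order.append(nid)
        for c in children[nid]:
            indegree[c] -= 1
            if indegree[c] == 0:
                queue.append(c)
    return order

# Hypothetical example: one QA node per driving stage, chained by reasoning dependency.
qa_graph = {
    "perception": {"question": "What objects are ahead of the ego vehicle?", "parents": []},
    "prediction": {"question": "Where will those objects move?", "parents": ["perception"]},
    "planning":   {"question": "What route should the ego vehicle take?", "parents": ["prediction"]},
    "behavior":   {"question": "What maneuver follows from that route?", "parents": ["planning"]},
    "motion":     {"question": "What trajectory realizes the maneuver?", "parents": ["behavior"]},
}
print(topological_order(qa_graph))
```

A model answering the questions in this order can condition each stage on the answers to its parent nodes, which is the core idea behind GVQA's stage-wise reasoning.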

(back to top)

License and Citation

All assets and code in this repository are under the Apache 2.0 license unless specified otherwise. The language data is under CC BY-NC-SA 4.0. Other datasets (including nuScenes) inherit their own distribution licenses. Please consider citing our paper and project if they help your research.

@article{sima2023drivelm,
  title={DriveLM: Driving with Graph Visual Question Answering},
  author={Sima, Chonghao and Renz, Katrin and Chitta, Kashyap and Chen, Li and Zhang, Hanxue and Xie, Chengen and Luo, Ping and Geiger, Andreas and Li, Hongyang},
  journal={arXiv preprint arXiv:2312.14150},
  year={2023}
}

@misc{contributors2023drivelmrepo,
  title={DriveLM: Driving with Graph Visual Question Answering},
  author={DriveLM contributors},
  howpublished={\url{https://github.com/OpenDriveLab/DriveLM}},
  year={2023}
}

(back to top)

Other Resources

  • Twitter Follow: OpenDriveLab
  • Twitter Follow: Autonomous Vision Group

(back to top)

