Official implementation of the paper: "NavDP: Learning Sim-to-Real Navigation Diffusion Policy with Privileged Information Guidance"


Wenzhe Cai, Jiaqi Peng, Yuqiang Yang, Yujian Zhang, Meng Wei,
Hanqing Wang, Yilun Chen, Tai Wang, Jiangmiao Pang

Shanghai AI Laboratory · Tsinghua University · Zhejiang University · The University of Hong Kong

Project | arXiv | Video | Benchmark | Dataset | GitHub star chart | GitHub Issues

🔥 News

  • We have open-sourced the entire deployment process based on LeKiwi, from hardware setup to algorithm deployment. 😺 Welcome to use it!
  • We release LoGoPlanner - a localization-grounded, end-to-end navigation framework.
  • We release InternVLA-N1 - the first end-to-end navigation dual-system.
  • We release InternNav - an all-in-one open-source toolbox for embodied navigation.

🏡 Introduction

Navigation Diffusion Policy (NavDP) is an end-to-end mapless navigation model that achieves cross-embodiment generalization without any real-world robot data. By combining a highly efficient simulation data generation pipeline with a superior model design, NavDP achieves real-time path planning and obstacle avoidance across various navigation tasks, including no-goal exploration, point-goal navigation, and image-goal navigation.


💻 InternVLA-N1 System-1 Model

Please fill in this form to access the link to download the latest model checkpoint.

🛠️ Installation

Please follow the instructions below to configure the environment for NavDP.

Step 0: Clone this repository

```
git clone https://github.com/InternRobotics/NavDP
cd NavDP/baselines/navdp/
```

Step 1: Create conda environment and install the dependency

```
conda create -n navdp python=3.10
conda activate navdp
pip install -r requirements.txt
```

🤖 Run NavDP Model

Run the following command to start the NavDP server:

```
python navdp_server.py --port ${YOUR_PORT} --checkpoint ${SAVE_PTH_PATH}
```

Then, follow the subsequent tutorial to set up the IsaacSim environment and start the evaluation in simulation. By running our benchmark, you should be able to reproduce the navigation examples below:

NoGoal Exploration


PointGoal Navigation


ImageGoal Navigation


🎢 InternVLA-N1 System-1 Benchmark

🏠 Overview

This repository is a high-fidelity platform for benchmarking visual navigation methods based on IsaacSim and IsaacLab. With realistic physics simulation and realistic scene assets, this repository aims to build a benchmark that minimizes the sim-to-real gap in navigation system-1 evaluation.


Highlights

  • ⭐ Decoupled Framework between Navigation Approaches and Evaluation Process

The evaluation is accomplished by calling the navigation method's API through HTTP requests. By decoupling the implementation of the navigation model from the evaluation process, it is much easier for users to evaluate the performance of novel navigation methods.

  • ⭐ Fully Asynchronous Framework between Trajectory Planning and Following

We implement an MPC-based controller to continuously track the planned trajectory. With the asynchronous framework, the evaluation metrics become sensitive to each navigation approach's decision frequency, which helps align the results with real-world navigation performance (see the sketch after this list).

  • ⭐ High-Quality Scene Asset for Evaluation

Our benchmark supports evaluation in diverse scene assets, including random cluttered environments, realistic home scenarios and commercial scenarios.

  • ⭐ Support Image-Goal, Point-Goal and No-Goal Navigation Tasks

Our benchmark supports multiple navigation tasks, including no-goal exploration, point-goal navigation as well as image-goal navigation.
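To make the asynchronous planner/controller split concrete, below is a minimal illustrative sketch, not the benchmark's actual implementation: a low-frequency planner thread requests fresh trajectories while a high-frequency controller thread keeps tracking the newest one. The `query_policy_server` callable and `robot` object are hypothetical placeholders, and the simple proportional tracker stands in for the MPC controller.

```python
# Illustrative sketch of the asynchronous planner/controller split.
# `query_policy_server` and `robot` are hypothetical placeholders, not benchmark APIs.
import math
import threading
import time

latest_traj = None              # most recent planned trajectory (list of (x, y) waypoints)
traj_lock = threading.Lock()

def track(traj, pose):
    """Simple proportional step toward the next waypoint (stand-in for the MPC tracker)."""
    x, y, yaw = pose
    gx, gy = traj[0]
    heading_error = math.atan2(gy - y, gx - x) - yaw
    return 0.5 * math.cos(heading_error), 1.0 * heading_error   # (linear, angular) command

def planner_loop(query_policy_server, rate_hz=3.0):
    """Low-frequency loop: ask the policy server for a fresh trajectory."""
    global latest_traj
    while True:
        traj = query_policy_server()        # e.g. an HTTP request with the latest RGB-D frame
        with traj_lock:
            latest_traj = traj
        time.sleep(1.0 / rate_hz)

def controller_loop(robot, rate_hz=50.0):
    """High-frequency loop: keep tracking whichever trajectory is newest."""
    while True:
        with traj_lock:
            traj = latest_traj
        if traj:
            robot.apply_velocity(*track(traj, robot.pose()))
        time.sleep(1.0 / rate_hz)
```

Because the controller never blocks on planning, a slower policy simply gets tracked longer between updates, which is exactly why the metrics end up reflecting decision frequency.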


🌆 Prepare Scene Asset

Please download the scene assets from InternScene-N1 at HuggingFace. The episode information can be directly accessed in this repo. After downloading, please organize the files as follows:

```
assets/scenes/
├── SkyTexture/
│   ├── belfast_sunset_puresky_4k.hdr
│   ├── citrus_orchard_road_puresky_4k.hdr
│   ├── ...
├── Materials/
│   ├── Carpet/
│   │   ├── textures/
│   │   ├── Carpet_Woven.mdl
│   │   └── ...
│   ├── ...
├── cluttered_easy/
│   ├── easy_0/
│   │   ├── cluttered-0.usd
│   │   ├── imagegoal_start_goal_pairs.npy
│   │   └── pointgoal_start_goal_pairs.npy
│   ├── ...
├── cluttered_hard/
│   ├── hard_0/
│   │   ├── cluttered-0.usd
│   │   ├── imagegoal_start_goal_pairs.npy
│   │   └── pointgoal_start_goal_pairs.npy
│   ├── ...
├── internscenes_commercial/
│   ├── models/
│   ├── Materials/
│   └── scenes_commercial/
│       ├── MV4AFHQKTKJZ2AABAAAAADQ8_usd/
│       │   ├── models/
│       │   ├── Materials/
│       │   ├── metadata.json
│       │   ├── start_result_navigation.usd
│       │   ├── imagegoal_start_goal_pairs.npy
│       │   └── pointgoal_start_goal_pairs.npy
│       ├── ...
└── internscene_home/
    ├── models/
    ├── Materials/
    └── scenes_home/
        ├── MV4AFHQKTKJZ2AABAAAAADQ8_usd/
        │   ├── models/
        │   ├── Materials/
        │   ├── metadata.json
        │   ├── start_result_navigation.usd
        │   ├── imagegoal_start_goal_pairs.npy
        │   └── pointgoal_start_goal_pairs.npy
        ├── ...
```
| Category | Download Asset | Episodes |
| --- | --- | --- |
| SkyTexture | Link | - |
| Materials | Link | - |
| Cluttered-Easy | Link | Episodes |
| Cluttered-Hard | Link | Episodes |
| InternScenes-Home | Link | Episodes |
| InternScenes-Commercial | Link | Episodes |
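If you prefer scripting the download, a minimal sketch using the huggingface_hub Python client is shown below. The `repo_id` and `allow_patterns` values are assumptions for illustration; check the actual InternScene-N1 page on HuggingFace for the real names.

```python
# Sketch: download a subset of the scene assets with huggingface_hub.
# The repo_id and allow_patterns below are assumptions -- check the
# InternScene-N1 page on HuggingFace for the actual identifiers.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="InternRobotics/InternScene-N1",   # hypothetical dataset id
    repo_type="dataset",
    local_dir="assets/scenes",
    allow_patterns=["SkyTexture/*", "cluttered_easy/*"],  # pick the splits you need
)
```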

🔧 Installation of Benchmark

Our framework is based on IsaacSim 4.2.0 and IsaacLab 1.2.0. You can follow the instructions below to configure the conda environment.

```
# create the environment
conda create -n isaaclab python=3.10
conda activate isaaclab
# install IsaacSim 4.2
pip install --upgrade pip
pip install isaacsim==4.2.0.2 isaacsim-extscache-physics==4.2.0.2 isaacsim-extscache-kit==4.2.0.2 isaacsim-extscache-kit-sdk==4.2.0.2 --extra-index-url https://pypi.nvidia.com
# check the isaacsim installation
isaacsim omni.isaac.sim.python.kit
# install IsaacLab 1.2.0
git clone https://github.com/isaac-sim/IsaacLab.git
cd IsaacLab/
git checkout tags/v1.2.0
# ignore the rsl-rl unavailable error
./isaaclab.sh -i
# check the isaaclab installation
./isaaclab.sh -p source/standalone/tutorials/00_sim/create_empty.py
```

After preparing the dependencies, please clone our project to get started.

```
# with the created environment following the previous steps
git clone https://github.com/InternRobotics/NavDP.git
cd NavDP
pip install -r requirements.txt
```

⚙️ Installation of Baseline Library

We collect the checkpoints of other navigation system-1 methods from their corresponding repositories and organize their code to support HTTP API calls for our benchmark. The links to the papers, GitHub code, and pre-trained checkpoints are listed in the table below. Some of the baselines require additional dependencies, and we provide the installation details below.

| Baseline | Paper | Repo | Checkpoint | Supported Tasks |
| --- | --- | --- | --- | --- |
| DD-PPO | arXiv | GitHub | Checkpoint | PointNav |
| iPlanner | arXiv | GitHub | Checkpoint | PointNav |
| ViPlanner | arXiv | GitHub | Checkpoint, Mask2Former | PointNav |
| GNM | arXiv | GitHub | Checkpoint | ImageNav, NoGoal |
| ViNT | arXiv | GitHub | Checkpoint | ImageNav, NoGoal |
| NoMad | arXiv | GitHub | Checkpoint | ImageNav, NoGoal |
| NavDP | arXiv | GitHub | Checkpoint | PointNav, ImageNav, NoGoal |
| LoGoPlanner | arXiv | GitHub | Checkpoint | PointNav |

DD-PPO

To verify the performance of DD-PPO with a continuous action space, we interpolate the predicted discrete actions {Stop, Forward, TurnLeft, TurnRight} into a trajectory. To play with DD-PPO in our benchmark, you need to install habitat-lab and habitat-baselines. As Habitat only supports Python <= 3.9, we recommend creating a new environment.

```
conda create -n habitat python=3.9 cmake=3.14.0
conda activate habitat
conda install habitat-sim withbullet -c conda-forge -c aihabitat
git clone --branch stable https://github.com/facebookresearch/habitat-lab.git
cd habitat-lab
pip install -e habitat-lab
pip install -e habitat-baselines
```
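As a rough illustration of the action-to-trajectory interpolation described above, here is a hypothetical sketch (not the benchmark's actual code) that unrolls a discrete action sequence into 2D waypoints. The step sizes are assumptions modeled on common Habitat defaults (0.25 m forward, 10-degree turns).

```python
# Sketch: unroll DD-PPO's discrete actions into a 2D waypoint trajectory.
# The step sizes are assumptions (Habitat-style 0.25 m forward, 10-degree turns);
# the benchmark's actual interpolation may differ.
import math

FORWARD_STEP = 0.25                 # meters per Forward action (assumed)
TURN_STEP = math.radians(10.0)      # radians per TurnLeft/TurnRight action (assumed)

def actions_to_trajectory(actions, x=0.0, y=0.0, yaw=0.0):
    """Convert {Stop, Forward, TurnLeft, TurnRight} into (x, y) waypoints."""
    waypoints = [(x, y)]
    for action in actions:
        if action == "Stop":
            break
        elif action == "Forward":
            x += FORWARD_STEP * math.cos(yaw)
            y += FORWARD_STEP * math.sin(yaw)
            waypoints.append((x, y))
        elif action == "TurnLeft":
            yaw += TURN_STEP
        elif action == "TurnRight":
            yaw -= TURN_STEP
    return waypoints

print(actions_to_trajectory(["TurnLeft", "Forward", "Forward", "Stop"]))
```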

iPlanner

No additional dependencies are required if you have already configured the environment for running the benchmark.

ViPlanner

For ViPlanner, you need to install mmcv and mmdet for Mask2Former. We recommend creating a new environment with torch 2.0.1 as the backend.

```
pip install torch==2.0.1+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
pip install torchvision==0.15.2+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
pip install mmcv==2.0.0 -f https://download.openmmlab.com/mmcv/dist/cu118/torch2.0/index.html
pip install mmengine mmdet
pip install git+https://github.com/cocodataset/panopticapi.git
```

GNM, ViNT and NoMad

To play with GNM, ViNT and NoMad, you need to install the following dependencies:

```
pip install efficientnet_pytorch==0.7.1
pip install diffusers==0.33.1
pip install git+https://github.com/real-stanford/diffusion_policy.git
```

💻 Running Baseline as Server

Each pre-built baseline method contains a server.py file; simply run the server script, passing the server port and the checkpoint path. Taking NavDP as an example:

```
# please first download the checkpoint from the above link
cd baselines/navdp/
python navdp_server.py --port 8888 --checkpoint ./checkpoints/navdp_checkpoint.ckpt
```

Then, the server will run in the background, waiting for RGB-D observations and generating the preferred navigation trajectories.
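As a rough illustration of the request/response cycle, here is a minimal hypothetical client sketch. The endpoint path, payload encoding, and response schema are assumptions for illustration only; consult the baseline's server script (e.g. navdp_server.py) for the actual interface.

```python
# Sketch of a client querying a running baseline server.
# The endpoint path and payload/response format are assumptions --
# see the baseline's server script for the real interface.
import pickle

import numpy as np
import requests

rgb = np.zeros((480, 640, 3), dtype=np.uint8)     # placeholder RGB frame
depth = np.zeros((480, 640), dtype=np.float32)    # placeholder depth frame (meters)

response = requests.post(
    "http://localhost:8888/pointgoal_step",       # hypothetical endpoint
    data=pickle.dumps({"rgb": rgb, "depth": depth, "goal": [2.0, 0.0]}),
    timeout=5.0,
)
trajectory = pickle.loads(response.content)       # e.g. an (N, 2) array of waypoints
print(trajectory)
```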

🕹️ Running Teleoperation

For a quick start, or to debug your own navigation approach, we provide teleoperation scripts in which the robot moves according to your teleoperation commands while the predicted trajectory is output for visualization. With a running server, the teleoperation code can be started with a one-line command:

```
# if the running server supports the no-goal task
python teleop_nogoal_wheeled.py
# if the running server supports the point-goal task
python teleop_pointgoal_wheeled.py
# if the running server supports the image-goal task
python teleop_imagegoal_wheeled.py
```

Then, you can use the 'w', 'a', 's', 'd' keys on the keyboard to control the linear and angular speed.

📊 Running Evaluation

With a running server, it is simple to start the evaluation as:

```
# if the running server supports the no-goal task
python eval_nogoal_wheeled.py --port {PORT} --scene_dir {ASSET_SCENE} --scene_index {INDEX} --scene_scale {SCALE}
# if the running server supports the point-goal task
python eval_pointgoal_wheeled.py --port {PORT} --scene_dir {ASSET_SCENE} --scene_index {INDEX} --scene_scale {SCALE}
# if the running server supports the image-goal task
python eval_imagegoal_wheeled.py --port {PORT} --scene_dir {ASSET_SCENE} --scene_index {INDEX} --scene_scale {SCALE}
# if the running server supports the start-goal task (no odometry)
python eval_startgoal_wheeled.py --port {PORT} --scene_dir {ASSET_SCENE} --scene_index {INDEX} --scene_scale {SCALE}
```

Notes: Please pass a --port that matches the server port, and always pass an absolute path for --scene_dir. Please set --scene_scale to 0.01 for InternScenes and to 1.0 for cluttered scenes.
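To sweep a whole asset split, you can wrap the evaluation script in a small driver loop. The sketch below is just a convenience wrapper over the documented flags; the scene count of 10 and the asset path are arbitrary examples.

```python
# Sketch: run the point-goal evaluation over several scene indices.
# The scene count and path below are arbitrary examples; the flags
# match the usage shown above.
import subprocess

SCENE_DIR = "/absolute/path/to/assets/scenes/cluttered_easy"   # must be absolute
for index in range(10):
    subprocess.run(
        ["python", "eval_pointgoal_wheeled.py",
         "--port", "8888",
         "--scene_dir", SCENE_DIR,
         "--scene_index", str(index),
         "--scene_scale", "1.0"],    # 1.0 for cluttered scenes, 0.01 for InternScenes
        check=True,
    )
```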

📄 License

The open-sourced code is under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

🔗 Citation

If you find our work helpful, please cite:

```
@misc{navdp,
  title = {NavDP: Learning Sim-to-Real Navigation Diffusion Policy with Privileged Information Guidance},
  author = {Wenzhe Cai, Jiaqi Peng, Yuqiang Yang, Yujian Zhang, Meng Wei, Hanqing Wang, Yilun Chen, Tai Wang and Jiangmiao Pang},
  year = {2025},
  booktitle = {arXiv},
}
```

👏 Acknowledgement

  • InternUtopia (previously GRUtopia): The closed-loop evaluation and GRScenes-100 data in this framework rely on the InternUtopia framework.
  • InternNav: All-in-one open-source toolbox for embodied navigation based on PyTorch, Habitat and Isaac Sim.
  • Diffusion Policy: Diffusion policy implementation.
  • DepthAnything: The foundation representation for RGB image observations.
  • ViPlanner: ViPlanner implementation.
  • iPlanner: iPlanner implementation.
  • visualnav-transformer: NoMad, ViNT, GNM implementation.
