
[demo]

arXiv | PDF | GitHub | Doc | Video (EN) | Video (CN) | PyPI Downloads | GitHub Issues | Discord

InternUtopia

🔥 News

  • [2025-07] InternUtopia 2.2.0 is released!
  • [2025-07] Our project has been renamed to InternUtopia.
  • [2025-02] GRUtopia 2.0 released!
  • [2024-07] We release the paper and demos of GRUtopia.

🚀 New Features in 2.0 release


🏠 About

Recent works have been exploring scaling laws in the field of Embodied AI. Given the prohibitive cost of collecting real-world data, we believe the Simulation-to-Real (Sim2Real) paradigm is a more feasible path for scaling the learning of embodied models.

We introduce project InternUtopia (a.k.a. 桃源 in Chinese), a general-purpose research platform for embodied AGI. It features several advancements:

  • 🏙️ GRScenes, the scene dataset, includes 100k finely annotated, interactive scenes. GRScenes covers 89 diverse scene categories, facilitating the deployment of general-purpose robots across different scenarios.
  • 🧑‍🤝‍🧑 GRResidents, a Large Language Model (LLM) driven Non-Player Character (NPC) system that enables social interaction, task generation, and task assignment, thus simulating social scenarios for embodied AI applications.
  • 🤖 GRBench, a collection of embodied AI benchmarks for assessing various capabilities in solving embodied tasks.

We hope that this work can alleviate the scarcity of high-quality data in this field and provide a more comprehensive assessment of embodied AI research.

📚 Getting Started

Prerequisites

  • Ubuntu 20.04, 22.04
  • NVIDIA Omniverse Isaac Sim 4.5.0
    • Ubuntu 20.04/22.04 Operating System
    • NVIDIA GPU (RTX 2070 or higher)
    • NVIDIA GPU Driver (recommended version 535.216.01+; see the version check after this list)
    • Docker (Optional)
    • NVIDIA Container Toolkit (Optional)
  • Conda
    • Python 3.10.16 (any 3.10.* release should work)
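You can check the installed driver version against the recommendation above using nvidia-smi, which ships with the NVIDIA driver:

$ nvidia-smi --query-gpu=driver_version --format=csv,noheader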

Installation

We provide the installation guide here. You can install locally or use Docker, and easily verify the installation.
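For a local (non-Docker) setup, the flow is roughly as follows. This is a sketch, assuming the package is published on PyPI as internutopia (per the PyPI badge); Isaac Sim 4.5.0 itself still has to be installed separately as described in the prerequisites:

$ conda create -n internutopia python=3.10
$ conda activate internutopia
$ pip install internutopia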

Documentation & Tutorial

We provide detailed docs for the basic usage of the different modules supported in InternUtopia. You are welcome to try them and post your suggestions!
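Before diving into the docs, you can run a quick import smoke test; this is a minimal sketch, assuming only that the installed package is importable as internutopia:

import importlib

# Smoke test: confirm the internutopia package resolves in the current environment.
try:
    importlib.import_module("internutopia")
    print("internutopia imported successfully")
except ImportError as error:
    print(f"internutopia is not importable: {error}")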

🏙️ Assets

Note

📝 First of all, you MUST complete the User Agreement for GRScenes-100 Dataset Access.

Then you can choose to download all assets (~80GB) or a minimum set (~500MB) to verify the installation, by running the following script with InternUtopia installed:

$ python -m internutopia.download_assets

The default path for storing the downloaded assets is ${PATH/TO/INTERNUTOPIA/ROOT}/internutopia/assets. There are two ways to configure the asset path:

  1. Specify a custom path during download using python -m internutopia.download_assets.
  2. Set it later by running python -m internutopia.set_assets_path and entering the preferred directory (see the example after this list).
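For example, a typical flow might look like this (both commands are documented above; the interactive prompts for the target directory are an assumption, as the exact CLI output may differ):

$ python -m internutopia.download_assets    # choose the full (~80GB) or minimum (~500MB) set and a download path
$ python -m internutopia.set_assets_path    # re-point InternUtopia at an existing asset directory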

GRScenes-100

If you want to download the GRScenes-100 scene assets separately, you can manually download them from OpenDataLab, ModelScope, and HuggingFace. Please refer to the instructions for scene usage.

Robots & Weights

If you want to download robots and policy weights separately, you can manually download the robot directory from OpenDataLab, ModelScope, or HuggingFace and move it to the root of the asset path.

📦 Benchmark & Method

We preliminarily establish three benchmarks for evaluating the capabilities of embodied agents from different aspects: Object Loco-Navigation, Social Loco-Navigation, and Loco-Manipulation. Please refer to InternNav and InternManip for running the benchmarks.

👥 Support

Join our WeChat support group or Discord for any help.

📝 TODO List

  • Release the paper with demos.
  • Release the platform with basic functions and demo scenes.
  • Release 100 curated scenes.
  • Polish APIs and related code.
  • Full release and further updates.
  • Release the baseline methods and benchmark data.
  • Support multiple episodes.
  • Vectorized env and batch execution.
  • Training framework.

🔗 Citation

If you find our work helpful, please cite:

@inproceedings{grutopia,
  title={GRUtopia: Dream General Robots in a City at Scale},
  author={Wang, Hanqing and Chen, Jiahe and Huang, Wensi and Ben, Qingwei and Wang, Tai and Mi, Boyu and Huang, Tao and Zhao, Siheng and Chen, Yilun and Yang, Sizhe and Cao, Peizhou and Yu, Wenye and Ye, Zichao and Li, Jialun and Long, Junfeng and Wang, ZiRui and Wang, Huiling and Zhao, Ying and Tu, Zhongying and Qiao, Yu and Lin, Dahua and Pang, Jiangmiao},
  year={2024},
  booktitle={arXiv},
}

📄 License

InternUtopia's simulation platform is MIT licensed. The open-sourced GRScenes are under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

👏 Acknowledgements

  • OmniGibson: We refer to OmniGibson for the design of oracle actions.
  • RSL_RL: We use the rsl_rl library to train control policies for legged robots.
  • ReferIt3D: We follow Sr3D's approach to extracting spatial relationships.
  • Isaac Lab: We use some utilities from Orbit (Isaac Lab) for driving articulated joints in Isaac Sim.
  • Open-TeleVision: We use Open-TeleVision for teleoperation with the Apple Vision Pro.
  • HaMeR: We use HaMeR to recognize hand gestures in camera-based teleoperation.
  • Infinigen: We use Infinigen to procedurally generate indoor scenes based on the GRScenes-100 dataset.
  • VLFM: We refer to VLFM to implement our benchmark baselines.
  • Grounding DINO: We use Grounding DINO in our benchmark baselines.
  • YOLOv7: We use YOLOv7 in our benchmark baselines.
  • MobileSAM: We use MobileSAM in our benchmark baselines.
