[IROS 2025 Award Finalist] The Large-scale Manipulation Platform for Scalable and Intelligent Embodied Systems

OpenDriveLab/AgiBot-World

AgiBot World Colosseo is a full-stack, large-scale robot learning platform curated for advancing bimanual manipulation in scalable and intelligent embodied systems. It is accompanied by foundation models, benchmarks, and an ecosystem that democratizes access to high-quality robot data for academia and industry, paving the way toward an "ImageNet Moment" for Embodied AI.

We have released:

  • GO-1: our robotic foundation model, pretrained on the AgiBot World dataset
  • GO-1 Air: a lightweight, high-performance variant of GO-1 without the Latent Planner
  • Task Catalog: a reference sheet outlining the tasks in our dataset, including robot end-effector types, sample action-text descriptions, and more
  • AgiBot World Beta: our complete dataset, featuring 1,003,672 trajectories (~43.8 TB)
  • AgiBot World Alpha: a curated subset of AgiBot World Beta, containing 92,214 trajectories (~8.5 TB)

News 📰

Important

🌟 Stay up to date at opendrivelab.com!

  • [2025/09/19] 🚀 Our robotic foundation model GO-1 is open-sourced.
  • [2025/03/10] 📄 Research Blog and Technical Report released.
  • [2025/03/01] AgiBot World Beta released.
  • [2025/01/03] AgiBot World Alpha Sample Dataset released.
  • [2024/12/30] 🤖 AgiBot World Alpha released.

TODO List 📅

  • AgiBot World Alpha
  • AgiBot World Beta
    • ~1,000,000 trajectories of high-quality robot data
  • AgiBot World Foundation Model: GO-1
    • GO-1 fine-tuning script
    • GO-1 Air pre-trained checkpoint
    • GO-1 pre-trained checkpoint
    • Examples of using GO-1 model
  • 2025 AgiBot World Challenge

Key Features 🔑

  • 1 million+ trajectories from 100 robots
  • 100+ 1:1 replicated real-life scenarios across 5 target domains
  • Cutting-edge hardware: visual tactile sensors / 6-DoF dexterous hands / mobile dual-arm robots
  • A wide spectrum of versatile, challenging tasks
  • A general robotic policy pretrained on AgiBot World
Demo videos: Contact-rich Manipulation · Long-horizon Planning · Multi-robot Collaboration · Fold Shirt (AgileX, AgiBot G1, Dual Franka).


Getting started 🔥

Installation

  1. Download our source code:

git clone https://github.com/OpenDriveLab/AgiBot-World.git
cd AgiBot-World

  2. Create a new conda environment:

conda create -n go1 python=3.10 -y
conda activate go1

  3. Install dependencies:

This project is built on LeRobot (dataset v2.1, commit 2b71789).
⚡️ Our environment has been tested with CUDA 12.4.

pip install -e .
pip install --no-build-isolation flash-attn==2.4.2

If you run out of RAM while installing flash-attn, set the environment variable MAX_JOBS to limit the number of parallel compilation jobs:

MAX_JOBS=4 pip install --no-build-isolation flash-attn==2.4.2

How to Get Started with Our AgiBot World Data

Download Datasets

pip install openxlab  # install the CLI
openxlab dataset get --dataset-repo OpenDriveLab/AgiBot-World  # download the dataset
huggingface-cli download --resume-download --repo-type dataset agibot-world/AgiBotWorld-Alpha --local-dir ./AgiBotWorld-Alpha

Convert the data to the LeRobot dataset format following any4lerobot.

Visualize Datasets

We adapt and extend the dataset visualization script from the LeRobot project:

python scripts/visualize_dataset.py --task-id 390 --dataset-path /path/to/lerobot/format/dataset

It will open rerun.io and display the camera streams, robot states, and actions.

How to Get Started with Our GO-1 Model

Requirements

We strongly recommend full fine-tuning for the best performance. However, if GPU memory is limited, you can alternatively fine-tune only the Action Expert.

Usage                  | GPU Memory Required      | Example GPU
Inference              | ~7 GB                    | RTX 4090
Fine-tuning (Full)     | ~70 GB (batch size = 16) | A100 80GB, H100
Fine-tuning (AE only)  | ~24 GB (batch size = 16) | RTX 4090, A100 40GB
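In practice, Action-Expert-only fine-tuning is selected through the config (see the parameter-freezing options under GOModelArguments below); purely for illustration, the PyTorch-style sketch here shows what such freezing amounts to. The action_expert attribute name is a hypothetical assumption, not the actual GO-1 submodule name:

import torch.nn as nn

def freeze_all_but_action_expert(model: nn.Module) -> None:
    # Freeze every parameter first...
    for param in model.parameters():
        param.requires_grad = False
    # ...then unfreeze only the Action Expert submodule.
    # NOTE: `action_expert` is a hypothetical attribute name; check the
    # GO-1 model definition for the actual submodule to keep trainable.
    for param in model.action_expert.parameters():
        param.requires_grad = True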

Model Zoo

Model    | HF Link                                      | Description
GO-1 Air | https://huggingface.co/agibot-world/GO-1-Air | GO-1 without the Latent Planner, pre-trained on the AgiBot World dataset
GO-1     | https://huggingface.co/agibot-world/GO-1     | GO-1 pre-trained on the AgiBot World dataset

Fine-tuning on Your Own Dataset

Here we provide an example of fine-tuning the GO-1 model on the LIBERO dataset. You can easily adapt it for your own data.

1. Prepare Data

We use the LeRobot dataset format for our default dataset and dataloader. We provide a script for converting LIBERO to the LeRobot format in evaluate/libero/convert_libero_data_to_lerobot.py.

Since TensorFlow is required to read the RLDS format, we recommend creating a separate conda environment to avoid package conflicts:

conda create -n libero_data python=3.10 -y
conda activate libero_data
pip install -e ".[libero_data]"

Download the raw LIBERO dataset from OpenVLA, then run the script to convert it into the LeRobot format:

# Optional: change the LeRobot home directory
export HF_LEROBOT_HOME=/path/to/your/lerobot

python evaluate/libero/convert_libero_data_to_lerobot.py --data_dir /path/to/your/libero/data

2. Prepare Configs

We provide an example config for fine-tuning GO-1 on LIBERO in go1/configs/go1_sft_libero.py.

Key sections in the config:

  • DatasetArguments - path or repo for the LeRobot dataset.
  • GOModelArguments - model settings: architecture (GO-1 Air or GO-1), action chunk size, diffusion scheduler, parameter freezing, etc.
  • GOTrainingArguments - training hyper-parameters; see the transformers docs for more details.
  • SpaceArguments - state/action dimensions, data keys in the LeRobot dataset, default language prompt, control frequency.

See go1/configs/go1_base_cfg.py for all available config options.
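For orientation, a config built from these four argument groups might look roughly like the sketch below. All field names and values are illustrative guesses, not the actual options; consult go1/configs/go1_base_cfg.py for the real ones:

# Hypothetical sketch only: every field name below is a guess for
# illustration. The actual argument classes and options live in
# go1/configs/go1_base_cfg.py.
from go1.configs.go1_base_cfg import (  # assumed import path
    DatasetArguments, GOModelArguments, GOTrainingArguments, SpaceArguments,
)

dataset_args = DatasetArguments(
    repo_id="your-name/libero-lerobot",  # path or repo of the LeRobot dataset
)
model_args = GOModelArguments(
    arch="go1-air",              # GO-1 Air or GO-1
    action_chunk_size=10,        # LIBERO is recorded at 10 Hz (see Notes below)
    freeze_latent_planner=True,  # example of parameter freezing
)
training_args = GOTrainingArguments(
    learning_rate=1e-4,                # standard transformers hyper-parameters
    per_device_train_batch_size=16,
)
space_args = SpaceArguments(
    state_dim=8,                       # state/action dimensions of your robot
    action_dim=7,
    instruction="example instruction", # default language prompt
    control_frequency=10,              # matches the dataset's recording rate
)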

3. Start Fine-tuning

Start fine-tuning with the following command; you can set environment variables according to the shell script.

RUNNAME=<YOUR_RUNNAME> bash go1/shell/train.sh /path/to/your/config

Checkpoints will be saved in experiment/<YOUR_RUNNAME> and logs in experiment/<YOUR_RUNNAME>/logs.

Notes:

  • We also provide a debugging shell script that can run on a single RTX 4090. It also sets DEBUG_MODE to true for faster initialization.
  • You do not need to precompute normalization statistics for the training data; LeRobot computes them when loading the dataset. The statistics are saved to experiment/<YOUR_RUNNAME>/dataset_stats.json.
  • We set the action chunk size and control-frequency input to 30 for GO-1 pre-training, as the AgiBot World dataset is collected at 30 Hz. We change both to 10 for LIBERO fine-tuning, as the LIBERO dataset is collected at 10 Hz. You can adjust them accordingly in the config file (see the sketch below).
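Under the hypothetical field names from the config sketch above, the LIBERO override would reduce to:

# Hypothetical field names; see go1/configs/go1_sft_libero.py for the real config.
model_args.action_chunk_size = 10   # GO-1 pre-training used 30 (30 Hz data)
space_args.control_frequency = 10   # LIBERO is collected at 10 Hz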

Testing Your Model

Local Inference

After fine-tuning, you can test your model locally using the example script in evaluate/deploy.py. You can build a GO1Infer object to load the model and dataset statistics, then call the inference method to run inference:

import numpy as np

from evaluate.deploy import GO1Infer

model = GO1Infer(
    model_path="/path/to/your/checkpoint",
    data_stats_path="/path/to/your/dataset_stats.json",
)

payload = {
    "top": ...,
    "right": ...,
    "left": ...,
    "instruction": "example instruction",
    "state": ...,
    "ctrl_freqs": np.array([30]),
}

actions = model.inference(payload)

We also provide a script for open-loop evaluation with training data in evaluate/openloop_eval.py.

Remote Inference

Considering that (1) a real robot may not have a powerful GPU, and (2) different robots and simulation benchmarks often require different package dependencies, we also provide a policy server for GO-1. A client in another environment, or on another machine, sends observations to the server for remote inference.

Start the server; it will listen on port PORT and wait for observations:

python evaluate/deploy.py --model_path /path/to/your/checkpoint --data_stats_path /path/to/your/dataset_stats.json --port<PORT>

For the client, we provide a GO1Client class to send requests to the server and receive actions:

from typing import Any, Dict

import json_numpy
import numpy as np
import requests

json_numpy.patch()


class GO1Client:
    def __init__(self, host: str, port: int):
        self.host = host
        self.port = port

    def predict_action(self, payload: Dict[str, Any]) -> np.ndarray:
        response = requests.post(
            f"http://{self.host}:{self.port}/act",
            json=payload,
            headers={"Content-Type": "application/json"},
        )
        if response.status_code == 200:
            result = response.json()
            return np.array(result)
        print(f"Request failed, status code: {response.status_code}")
        print(f"Error message: {response.text}")
        return None
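A minimal usage sketch follows; the payload mirrors the local-inference example above, with placeholder observations, and the port number is an arbitrary assumption:

import numpy as np

client = GO1Client(host="localhost", port=9000)  # port 9000 is an arbitrary example
payload = {
    "top": ...,      # camera observations, as in the local-inference payload
    "right": ...,
    "left": ...,
    "instruction": "example instruction",
    "state": ...,
    "ctrl_freqs": np.array([30]),
}
actions = client.predict_action(payload)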

We can then run the LIBERO evaluation script to query the server; see the LIBERO README for details.

More Examples

We will provide more examples of fine-tuning and running inference with GO-1 models on real robots and simulation platforms.

Currently we have:

  • Genie Studio: AgiBot G1 with an out-of-the-box GO-1 model, plus an integrated data collection, fine-tuning, and deployment pipeline.
  • AgileX: AgileX Cobot Magic (Aloha)
  • LIBERO: LIBERO Simulation (Franka)
  • RoboTwin: RoboTwin Simulation (Aloha)

📄 License and Citation

All the data and code within this repo are under CC BY-NC-SA 4.0.

  • Please consider citing our work if it helps your research.
  • For the full authorship and detailed contributions, please refer to contributions.
  • In alphabetical order by surname:
@article{bu2025agibot_arxiv,
  title={Agibot world colosseo: A large-scale manipulation platform for scalable and intelligent embodied systems},
  author={Bu, Qingwen and Cai, Jisong and Chen, Li and Cui, Xiuqi and Ding, Yan and Feng, Siyuan and Gao, Shenyuan and He, Xindong and Huang, Xu and Jiang, Shu and others},
  journal={arXiv preprint arXiv:2503.06669},
  year={2025}
}

@inproceedings{bu2025agibot_iros,
  title={Agibot world colosseo: A large-scale manipulation platform for scalable and intelligent embodied systems},
  author={Bu, Qingwen and Cai, Jisong and Chen, Li and Cui, Xiuqi and Ding, Yan and Feng, Siyuan and He, Xindong and Huang, Xu and others},
  booktitle={2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year={2025},
  organization={IEEE}
}

@article{shi2025diversity,
  title={Is Diversity All You Need for Scalable Robotic Manipulation?},
  author={Shi, Modi and Chen, Li and Chen, Jin and Lu, Yuxiang and Liu, Chiming and Ren, Guanghui and Luo, Ping and Huang, Di and Yao, Maoqing and Li, Hongyang},
  journal={arXiv preprint arXiv:2507.06219},
  year={2025}
}

📝 Blogs

@misc{AgiBotWorldTeam2025agibot-world-colosseo,
  title        = {Introducing AgiBot World Colosseo: A Large-scale Manipulation Platform for Scalable and Intelligent Embodied Systems},
  author       = {Shi, Modi and Lu, Yuxiang and Wang, Huijie and Xie, Chengen and Bu, Qingwen},
  year         = {2025},
  month        = {March},
  howpublished = {\url{https://opendrivelab.com/AgiBot-World/}},
  note         = {Blog post},
}

@misc{AgiBotWorldTeam2025open-sourcing-go1,
  title        = {Open-sourcing GO-1: The Bitter Lessons of Building VLA Systems at Scale},
  author       = {Shi, Modi and Lu, Yuxiang and Wang, Huijie and Yang, Shaoze},
  year         = {2025},
  month        = {September},
  howpublished = {\url{https://opendrivelab.com/OpenGO1/}},
  note         = {Blog post},
}
