OpenSceneFlow

A codebase for point cloud scene flow estimation research. Latest works: Flow4D (RA-L'25), SSF (ICRA'25), SeFlow (ECCV'24), DeFlow (ICRA'24).

💞 If you find OpenSceneFlow useful to your research, please cite our works 📖 and give a star 🌟 as encouragement. (੭ˊ꒳​ˋ)੭✧

OpenSceneFlow is a codebase for point cloud scene flow estimation. It is also the official implementation of the following papers (sorted by time of publication):

  • Flow4D: Leveraging 4D Voxel Network for LiDAR Scene Flow Estimation
    Jaeyeul Kim, Jungwan Woo, Ukcheol Shin, Jean Oh, Sunghoon Im
    IEEE Robotics and Automation Letters (RA-L) 2025
    [ Backbone ] [ Supervised ] - [ arXiv ] [ Project ] → here

  • SSF: Sparse Long-Range Scene Flow for Autonomous Driving
    Ajinkya Khoche, Qingwen Zhang, Laura Pereira Sánchez, Aron Asefaw, Sina Sharif Mansouri and Patric Jensfelt
    International Conference on Robotics and Automation (ICRA) 2025
    [ Backbone ] [ Supervised ] - [ arXiv ] [ Project ] → here

  • SeFlow: A Self-Supervised Scene Flow Method in Autonomous Driving
    Qingwen Zhang, Yi Yang, Peizheng Li, Olov Andersson, Patric Jensfelt
    European Conference on Computer Vision (ECCV) 2024
    [ Strategy ] [ Self-Supervised ] - [ arXiv ] [ Project ] → here

  • DeFlow: Decoder of Scene Flow Network in Autonomous Driving
    Qingwen Zhang, Yi Yang, Heng Fang, Ruoyu Geng, Patric Jensfelt
    International Conference on Robotics and Automation (ICRA) 2024
    [ Backbone ] [ Supervised ] - [ arXiv ] [ Project ] → here

🎁 One repository, all methods! Additionally, OpenSceneFlow integrates the following excellent works: ICLR'24 ZeroFlow, ICCV'23 FastNSF, RA-L'21 FastFlow3d, NeurIPS'21 NSFP. (More on the way...)

Summary of them:
  • FastFlow3d: RA-L 2021, a basic backbone model.
  • ZeroFlow: ICLR 2024, their pre-trained weights can easily be converted into our format through the script.
  • NSFP: NeurIPS 2021, 3x faster than the original version thanks to our CUDA speed-up, with the same (slightly better) performance. Done coding, public after review.
  • FastNSF: ICCV 2023. Done coding, public after review.
  • ICP-Flow: CVPR 2024. Done coding, public after review.

💡: Want to learn how to add your own network in this structure? Check the Contribute section to learn more about the code. Feel free to open a pull request and add your bibtex here.
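To give a flavor of what plugging in your own network involves, here is a hypothetical sketch. Everything in it (the class name MyFlowNet, the constructor argument, the forward signature) is an illustrative assumption, not OpenSceneFlow's actual model interface; the Contribute section documents the real one.

```python
import torch
import torch.nn as nn

class MyFlowNet(nn.Module):
    """Hypothetical per-point flow regressor (illustration only, not the repo API)."""

    def __init__(self, hidden_dim: int = 64):
        super().__init__()
        # Maps a point's xyz plus a 3-dim per-point feature to a 3D flow vector.
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 3),
        )

    def forward(self, pc0_xyz: torch.Tensor, pc0_feat: torch.Tensor) -> torch.Tensor:
        # pc0_xyz: (N, 3) points of frame t; pc0_feat: (N, 3) per-point features.
        # Returns (N, 3) estimated flow from frame t to frame t+1.
        return self.mlp(torch.cat([pc0_xyz, pc0_feat], dim=-1))
```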


0. Installation

There are two ways to install the codebase: directly on your local machine or in a Docker container.

Environment Setup

We use conda to manage the environment; you can install it by following here. Then create the base environment with the following command [5~15 minutes]:

```bash
git clone --recursive https://github.com/KTH-RPL/OpenSceneFlow.git
cd OpenSceneFlow && mamba env create -f environment.yaml
# You may need to export your LD_LIBRARY_PATH with the env lib:
# export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/kin/mambaforge/lib
```
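Assuming environment.yaml installs PyTorch with CUDA support (the training scripts depend on it), a quick sanity check after activation confirms the GPU is visible:

```python
# Run inside the freshly created environment (e.g. `mamba activate opensf`).
import torch

print(torch.__version__)
print(torch.cuda.is_available())  # should print True on a correctly configured machine
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```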

Docker (Recommended for Isolation)

You can always choose Docker, which provides an isolated environment and frees you from installation. Pull the pre-built Docker image or build it manually.

```bash
# option 1: pull from docker hub
docker pull zhangkin/opensf

# run container
docker run -it --gpus all -v /dev/shm:/dev/shm -v /home/kin/data:/home/kin/data --name opensceneflow zhangkin/opensf /bin/zsh

# it is better to recompile the CUDA extensions against your own GPU:
cd /home/kin/workspace/OpenSceneFlow/assets/cuda/mmcv && /opt/conda/envs/opensf/bin/python ./setup.py install
cd /home/kin/workspace/OpenSceneFlow/assets/cuda/chamfer3D && /opt/conda/envs/opensf/bin/python ./setup.py install

mamba activate opensf
```

If you prefer to build the Docker image yourself, check the build-docker-image section for more details.

1. Data Preparation

Refer to dataprocess/README.md for dataset download instructions. Currently, we support Argoverse 2, Waymo, and custom datasets (more datasets will be added in the future).

After downloading, convert the raw data to .h5 format for easy training, evaluation, and visualization. Follow the steps in dataprocess/README.md#process.

For a quick start, use our mini processed dataset, which includes one scene in train and val. It is pre-converted to .h5 format with label data (HuggingFace/Zenodo).

```bash
wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/demo_data.zip
unzip demo_data.zip -d /home/kin/data/av2/h5py
```

Once extracted, you can directly use this dataset to run the training script without further processing.
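If you are curious what a converted scene looks like, each scene is stored as one .h5 file and h5py can walk its contents. The file name below is a placeholder for one of the extracted demo files, and the printed group/dataset names depend on the converter, so treat this as a peek rather than a schema reference:

```python
# Print every group/dataset (with shapes) inside one converted scene file.
import h5py

# Placeholder path: point it at an actual .h5 file from demo_data.zip.
with h5py.File("/home/kin/data/av2/h5py/demo/train/scene.h5", "r") as f:
    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))
```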

2. Quick Start

Some tips before running the code:

  • Don't forget to activate the Python environment before running the code.
  • If you want to use wandb, replace all entity="kth-rpl" with your own entity; otherwise tensorboard will be used locally.
  • Set the correct data path by passing it in the config, e.g. train_data=/home/kin/data/av2/h5py/demo/train val_data=/home/kin/data/av2/h5py/demo/val.

To free yourself from training, you can download the pretrained weights from HuggingFace; we provide the detailed wget command in each model section.

```bash
mamba activate opensf
```

Flow4D

Train Flow4D with the leaderboard submit config. [Runtime: around 18 hours on 4x RTX 3090 GPUs.]

```bash
python train.py model=flow4d lr=1e-3 epochs=15 batch_size=8 num_frames=5 loss_fn=deflowLoss "voxel_size=[0.2, 0.2, 0.2]" "point_cloud_range=[-51.2, -51.2, -3.2, 51.2, 51.2, 3.2]"
```
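As a quick sanity check on the two quoted overrides (plain arithmetic, not repo code): dividing the point cloud range by the voxel size gives the voxel grid shape, so this config turns a 102.4 m x 102.4 m x 6.4 m volume into a 512 x 512 x 32 grid.

```python
# Voxel grid shape implied by the overrides: (range_max - range_min) / voxel_size.
range_min = [-51.2, -51.2, -3.2]
range_max = [51.2, 51.2, 3.2]
voxel_size = [0.2, 0.2, 0.2]

grid = [round((hi - lo) / v) for lo, hi, v in zip(range_min, range_max, voxel_size)]
print(grid)  # [512, 512, 32]
```

The same arithmetic applies to the SSF config below, where a z voxel size of 6 over a [-3, 3] range collapses the grid to a single pillar layer.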

The pretrained weight can be downloaded with:

```bash
wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/flow4d_best.ckpt
```

SSF

Extra packages are needed for the SSF model:

```bash
pip install mmengine-lite torch-scatter
```

Train SSF with the leaderboard submit config. [Runtime: around 6 hours on 8x A100 GPUs.]

```bash
python train.py model=ssf lr=8e-3 epochs=25 batch_size=64 loss_fn=deflowLoss "voxel_size=[0.2, 0.2, 6]" "point_cloud_range=[-51.2, -51.2, -3, 51.2, 51.2, 3]"
```

The pretrained weights can be downloaded with:

```bash
# the leaderboard weight
wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/ssf_best.ckpt
# the long-range weight
wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/ssf_long.ckpt
```

SeFlow

Training SeFlow requires specifying the loss function; below we set the config of our best model on the leaderboard. [Runtime: around 11 hours on 4x A100 GPUs.]

```bash
python train.py model=deflow lr=2e-4 epochs=9 batch_size=16 loss_fn=seflowLoss "add_seloss={chamfer_dis: 1.0, static_flow_loss: 1.0, dynamic_chamfer_dis: 1.0, cluster_based_pc0pc1: 1.0}" "model.target.num_iters=2"
```
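Conceptually, add_seloss assigns a weight to each self-supervised loss term and the total objective is their weighted sum. The sketch below only illustrates that weighting; the term names match the config above, but the helper function and the example values are placeholders rather than seflowLoss's actual internals.

```python
# Sketch of the add_seloss semantics: total = sum(weight[name] * term[name]).
def combine_seflow_terms(terms: dict, weights: dict) -> float:
    return sum(weights[name] * value for name, value in terms.items())

weights = {"chamfer_dis": 1.0, "static_flow_loss": 1.0,
           "dynamic_chamfer_dis": 1.0, "cluster_based_pc0pc1": 1.0}
terms = {"chamfer_dis": 0.82, "static_flow_loss": 0.11,
         "dynamic_chamfer_dis": 0.47, "cluster_based_pc0pc1": 0.29}  # made-up values
print(combine_seflow_terms(terms, weights))  # 1.69
```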

The pretrained weight can be downloaded with:

```bash
wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/seflow_best.ckpt
```

DeFlow

Train DeFlow with the leaderboard submit config. [Runtime: around 6-8 hours on 4x A100 GPUs.] Please change batch_size & lr accordingly if you don't have enough GPU memory (e.g. batch_size=6 for a 24GB GPU).

```bash
python train.py model=deflow lr=2e-4 epochs=15 batch_size=16 loss_fn=deflowLoss
```

The pretrained weight can be downloaded with:

```bash
wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/deflow_best.ckpt
```

3. Evaluation

You can view the Wandb dashboard for training and evaluation results, or upload results to the online leaderboard.

Since training saves all hyper-parameters along with the model checkpoint, the only thing you need to do is specify the checkpoint path. Remember to also set the data path correctly.
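Because the hyper-parameters travel inside the checkpoint, you can inspect them before running anything. The snippet below assumes a Lightning-style torch-pickled .ckpt (which is how these checkpoints appear to be saved); the key names are the usual Lightning ones, stated here as an assumption:

```python
# Peek at what a downloaded checkpoint contains.
import torch

# On newer PyTorch you may need weights_only=False to unpickle non-tensor fields.
ckpt = torch.load("/home/kin/seflow_best.ckpt", map_location="cpu", weights_only=False)
print(list(ckpt.keys()))  # typically includes "state_dict" and "hyper_parameters"
```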

```bash
# it will directly print all metrics
python eval.py checkpoint=/home/kin/seflow_best.ckpt av2_mode=val

# it will output av2_submit.zip or av2_submit_v2.zip for you to submit to the leaderboard
python eval.py checkpoint=/home/kin/seflow_best.ckpt av2_mode=test leaderboard_version=1
python eval.py checkpoint=/home/kin/seflow_best.ckpt av2_mode=test leaderboard_version=2
```

To submit to the online leaderboard: if you select av2_mode=test, the output is a zip file for you to submit. Note: The leaderboard results in the DeFlow & SeFlow main papers use version 1, as version 2 was released after DeFlow & SeFlow.

```bash
# since the env may conflict with the one we set for deflow, we directly create a new one:
mamba create -n py37 python=3.7
mamba activate py37
pip install "evalai"

# Step 2: log in to evalai and register your team
evalai set-token <your token>

# Step 3: copy the command printed above and submit to the leaderboard
evalai challenge 2010 phase 4018 submit --file av2_submit.zip --large --private
evalai challenge 2210 phase 4396 submit --file av2_submit_v2.zip --large --private
```

4. Visualization

We also provide a script to visualize the model's results. Specify the checkpoint path and the data path; the steps are quite similar to evaluation.

```bash
python save.py checkpoint=/home/kin/seflow_best.ckpt dataset_path=/home/kin/data/av2/preprocess_v2/sensor/vis

# The output of the above command will be like:
#   Model: DeFlow, Checkpoint from: /home/kin/model_zoo/v2/seflow_best.ckpt
#   We already write the flow_est into the dataset, please run the following command to visualize the flow. Copy and paste it to your terminal:
#   python tools/visualization.py --res_name 'seflow_best' --data_dir /home/kin/data/av2/preprocess_v2/sensor/vis
#   Enjoy! ^v^ ------

# Then run the command in the terminal:
python tools/visualization.py --res_name 'seflow_best' --data_dir /home/kin/data/av2/preprocess_v2/sensor/vis
```
[video: seflow.mp4]

Another way is to interact with the results via rerun, but please only visualize scene by scene, not all at once.

```bash
python tools/visualization_rerun.py --data_dir /home/kin/data/av2/h5py/demo/train --res_name "['flow', 'deflow']"
```
[video: rerun-demo.mp4]

Cite Us

OpenSceneFlow is originally designed by Qingwen Zhang from DeFlow and SeFlow. If you find it useful, please cite our works:

```bibtex
@inproceedings{zhang2024seflow,
  author={Zhang, Qingwen and Yang, Yi and Li, Peizheng and Andersson, Olov and Jensfelt, Patric},
  title={{SeFlow}: A Self-Supervised Scene Flow Method in Autonomous Driving},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2024},
  pages={353-369},
  organization={Springer},
  doi={10.1007/978-3-031-73232-4_20},
}
@inproceedings{zhang2024deflow,
  author={Zhang, Qingwen and Yang, Yi and Fang, Heng and Geng, Ruoyu and Jensfelt, Patric},
  booktitle={2024 IEEE International Conference on Robotics and Automation (ICRA)},
  title={{DeFlow}: Decoder of Scene Flow Network in Autonomous Driving},
  year={2024},
  pages={2105-2111},
  doi={10.1109/ICRA57147.2024.10610278}
}
@article{zhang2025himu,
  title={HiMo: High-Speed Objects Motion Compensation in Point Cloud},
  author={Zhang, Qingwen and Khoche, Ajinkya and Yang, Yi and Ling, Li and Sina, Sharif Mansouri and Andersson, Olov and Jensfelt, Patric},
  year={2025},
  journal={arXiv preprint arXiv:2503.00803},
}
```

Works from our excellent collaborators have also contributed to this codebase:

```bibtex
@article{kim2025flow4d,
  author={Kim, Jaeyeul and Woo, Jungwan and Shin, Ukcheol and Oh, Jean and Im, Sunghoon},
  journal={IEEE Robotics and Automation Letters},
  title={Flow4D: Leveraging 4D Voxel Network for LiDAR Scene Flow Estimation},
  year={2025},
  volume={10},
  number={4},
  pages={3462-3469},
  doi={10.1109/LRA.2025.3542327}
}
@article{khoche2025ssf,
  title={SSF: Sparse Long-Range Scene Flow for Autonomous Driving},
  author={Khoche, Ajinkya and Zhang, Qingwen and Sanchez, Laura Pereira and Asefaw, Aron and Mansouri, Sina Sharif and Jensfelt, Patric},
  journal={arXiv preprint arXiv:2501.17821},
  year={2025}
}
```

Thank you for your support! ❤️ Feel free to contribute your method and add your bibtex here by pull request!

❤️: BucketedSceneFlowEval; Pointcept; OpenPCSeg; ZeroFlow ...
