# [ICCV 2023] ProPainter: Improving Propagation and Transformer for Video Inpainting
⭐ If ProPainter is helpful to your projects, please help star this repo. Thanks! 🤗
📖 For more visual results, check out our project page.
## Updates
- 2023.11.09: Integrated into 👨‍🎨 OpenXLab. Try out the online demo!
- 2023.11.09: Integrated into 🤗 Hugging Face. Try out the online demo!
- 2023.09.24: We officially removed the watermark-removal demos to prevent misuse of our work for unethical purposes.
- 2023.09.21: Added features for memory-efficient inference. Check our GPU memory requirements. 🚀
- 2023.09.07: Our code and model are publicly available. 🐳
- 2023.09.01: This repo is created.
## TODO
- [ ] Make a Colab demo.
- [x] Make an interactive Gradio demo.
- [x] Update features for memory-efficient inference.
## Dependencies and Installation

1. Clone the repo:

   ```shell
   git clone https://github.com/sczhou/ProPainter.git
   ```

2. Create a Conda environment and install dependencies:

   ```shell
   # create new anaconda env
   conda create -n propainter python=3.8 -y
   conda activate propainter

   # install python dependencies
   pip3 install -r requirements.txt
   ```
- CUDA >= 9.2
- PyTorch >= 1.7.1
- Torchvision >= 0.8.2
- Other required packages in `requirements.txt`
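To confirm your environment meets these version requirements, a quick sanity check (this one-liner is just a convenience, not part of the repo):

```shell
# print installed PyTorch/Torchvision versions and the CUDA version PyTorch was built against
python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.version.cuda)"
```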
## Get Started

### Prepare pretrained models

Download our pretrained models from Releases V0.1.0 to the `weights` folder. (All pretrained models can also be automatically downloaded during the first inference.)

The directory structure will be arranged as:

```
weights
   |- ProPainter.pth
   |- recurrent_flow_completion.pth
   |- raft-things.pth
   |- i3d_rgb_imagenet.pt (for evaluating VFID metric)
   |- README.md
```
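If you prefer fetching the weights from the command line, something like the following should work; the URLs assume GitHub's standard release-asset pattern and the tag name v0.1.0, so verify them against the Releases page:

```shell
# hypothetical direct download; check the exact asset URLs on the Releases page
mkdir -p weights
wget -P weights https://github.com/sczhou/ProPainter/releases/download/v0.1.0/ProPainter.pth
wget -P weights https://github.com/sczhou/ProPainter/releases/download/v0.1.0/recurrent_flow_completion.pth
wget -P weights https://github.com/sczhou/ProPainter/releases/download/v0.1.0/raft-things.pth
```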
### Test

We provide some examples in the `inputs` folder. Run the following commands to try them out:

```shell
# The first example (object removal)
python inference_propainter.py --video inputs/object_removal/bmx-trees --mask inputs/object_removal/bmx-trees_mask

# The second example (video completion)
python inference_propainter.py --video inputs/video_completion/running_car.mp4 --mask inputs/video_completion/mask_square.png --height 240 --width 432
```
The results will be saved in the `results` folder. To test your own videos, please prepare the input mp4 video (or split frames) and frame-wise mask(s).
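If you only have an mp4 but want to work with split frames, a tool such as ffmpeg can extract them. A minimal sketch (ffmpeg is not a dependency of this repo, and the paths are illustrative):

```shell
# extract every frame of my_video.mp4 as zero-padded JPEGs
mkdir -p inputs/my_video
ffmpeg -i my_video.mp4 inputs/my_video/%05d.jpg
```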
If you want to specify the video resolution for processing, or to avoid running out of memory, you can set the video size via `--width` and `--height`:
```shell
# process a 576x320 video; set --fp16 to use fp16 (half precision) during inference
python inference_propainter.py --video inputs/video_completion/running_car.mp4 --mask inputs/video_completion/mask_square.png --height 320 --width 576 --fp16
```
### Interactive demo for object removal

We also provide an interactive demo for object removal, allowing users to select any object they wish to remove from a video. You can try the demo on Hugging Face or run it locally. Please note that the demo's interface and usage may differ from the GIF animation above; for detailed instructions, refer to the user guide.
### Memory-efficient inference

Video inpainting typically requires a significant amount of GPU memory. We offer several features that enable memory-efficient inference and help avoid Out-Of-Memory (OOM) errors. You can use the following options to reduce memory usage further (see the example command after this list):
- Reduce the number of local neighbors by decreasing `--neighbor_length` (default 10).
- Reduce the number of global references by increasing `--ref_stride` (default 10).
- Set `--resize_ratio` (default 1.0) to downscale the video before processing.
- Set a smaller video size via `--width` and `--height`.
- Set `--fp16` to use fp16 (half precision) during inference.
- Reduce the length of sub-videos via `--subvideo_length` (default 80), which effectively decouples GPU memory cost from video length.
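For example, the following command combines several of these options. The flag values here are illustrative starting points, not tuned recommendations:

```shell
# illustrative low-memory run: fewer neighbors, sparser references,
# half-resolution processing, fp16, and shorter sub-videos
python inference_propainter.py \
    --video inputs/video_completion/running_car.mp4 \
    --mask inputs/video_completion/mask_square.png \
    --neighbor_length 6 --ref_stride 20 \
    --resize_ratio 0.5 --fp16 --subvideo_length 50
```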
Below are the estimated GPU memory requirements for different sub-video lengths with fp32/fp16 precision:
| Resolution | 50 frames (fp32 / fp16) | 80 frames (fp32 / fp16) |
| :--- | :---: | :---: |
| 1280 x 720 | 28 GB / 19 GB | OOM / 25 GB |
| 720 x 480 | 11 GB / 7 GB | 13 GB / 8 GB |
| 640 x 480 | 10 GB / 6 GB | 12 GB / 7 GB |
| 320 x 240 | 3 GB / 2 GB | 4 GB / 3 GB |
## Dataset Preparation

| Dataset | YouTube-VOS | DAVIS |
| :--- | :--- | :--- |
| Description | For training (3,471) and evaluation (508) | For evaluation (50 in 90) |
| Images | [Official Link] (download train and test all frames) | [Official Link] (2017, 480p, TrainVal) |
| Masks | [Google Drive] [Baidu Disk] (for reproducing paper results; provided in the ProPainter paper) | |
The training and test split files are provided in `datasets/<dataset_name>`. For each dataset, place the `JPEGImages` folder under `datasets/<dataset_name>` and resize all video frames to 432x240 for training. Unzip the downloaded mask files into `datasets`.
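As a rough sketch of the resizing step (this helper is not part of the repo; it assumes ffmpeg is installed, and the video name `bear` is just an example):

```shell
# resize every frame of one DAVIS video folder to 432x240
SRC=datasets/davis/JPEGImages/bear          # original frames for one video (example name)
DST=datasets/davis/JPEGImages_432_240/bear
mkdir -p "$DST"
for f in "$SRC"/*.jpg; do
    ffmpeg -y -loglevel error -i "$f" -vf scale=432:240 "$DST/$(basename "$f")"
done
```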
The `datasets` directory structure will be arranged as follows (please check it carefully):

```
datasets
   |- davis
      |- JPEGImages_432_240
         |- <video_name>
            |- 00000.jpg
            |- 00001.jpg
      |- test_masks
         |- <video_name>
            |- 00000.png
            |- 00001.png
      |- train.json
      |- test.json
   |- youtube-vos
      |- JPEGImages_432_240
         |- <video_name>
            |- 00000.jpg
            |- 00001.jpg
      |- test_masks
         |- <video_name>
            |- 00000.png
            |- 00001.png
      |- train.json
      |- test.json
```
## Training

Our training configurations are provided in `configs/train_flowcomp.json` (for the Recurrent Flow Completion Network) and `configs/train_propainter.json` (for ProPainter).
Run one of the following commands for training:
```shell
# For training the Recurrent Flow Completion Network
python train.py -c configs/train_flowcomp.json

# For training ProPainter
python train.py -c configs/train_propainter.json
```
You can run the same command to resume your training.
To speed up the training process, you can precompute optical flow for the training dataset using the following command:
```shell
# compute optical flow for the training dataset
python scripts/compute_flow.py --root_path <dataset_root> --save_path <save_flow_root> --height 240 --width 432
```
## Evaluation

Run one of the following commands for evaluation:
```shell
# For evaluating the flow completion model
python scripts/evaluate_flow_completion.py --dataset <dataset_name> --video_root <video_root> --mask_root <mask_root> --save_results

# For evaluating the ProPainter model
python scripts/evaluate_propainter.py --dataset <dataset_name> --video_root <video_root> --mask_root <mask_root> --save_results
```
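For instance, with the directory layout above, a plausible DAVIS invocation looks like this (the dataset name string `davis` is an assumption; check the script's argument parsing):

```shell
# hypothetical concrete invocation for DAVIS, following the datasets layout above
python scripts/evaluate_propainter.py --dataset davis \
    --video_root datasets/davis/JPEGImages_432_240 \
    --mask_root datasets/davis/test_masks \
    --save_results
```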
The scores and results will also be saved in the `results_eval` folder. Please pass `--save_results` if you want to further evaluate the temporal warping error.
## Citation

If you find our repo useful for your research, please consider citing our paper:
```bibtex
@inproceedings{zhou2023propainter,
   title={{ProPainter}: Improving Propagation and Transformer for Video Inpainting},
   author={Zhou, Shangchen and Li, Chongyi and Chan, Kelvin C.K. and Loy, Chen Change},
   booktitle={Proceedings of IEEE International Conference on Computer Vision (ICCV)},
   year={2023}
}
```
## License

ProPainter is made available for use, reproduction, and distribution strictly for non-commercial purposes. The code and models are licensed under the NTU S-Lab License 1.0; redistribution and use should follow this license. For inquiries or to obtain permission for commercial use, please contact Dr. Shangchen Zhou (shangchenzhou@gmail.com).

If you develop or use ProPainter in your projects, feel free to let me know. Also, please include a link to this ProPainter repo, authorship information, and our S-Lab license (with link).
## Projects that use ProPainter

- Streaming ProPainter: https://github.com/osmr/propainter
- Faster ProPainter: https://github.com/halfzm/faster-propainter
- ProPainter WebUI: https://github.com/halfzm/ProPainter-Webui
- ProPainter ComfyUI: https://github.com/daniabib/ComfyUI_ProPainter_Nodes
- Cutie (video segmentation): https://github.com/hkchengrex/Cutie
- Cinetransfer (character transfer): https://virtualfilmstudio.github.io/projects/cinetransfer
- Motionshop (character transfer): https://aigc3d.github.io/motionshop
- propainter (PyPI): https://pypi.org/project/propainter
- pytorchcv (PyPI): https://pypi.org/project/pytorchcv
## Contact

If you have any questions, please feel free to reach out to me at shangchenzhou@gmail.com.
## Acknowledgement

This code is based on E2FGVI and STTN. Some code is borrowed from BasicVSR++. Thanks for their awesome work.

Special thanks to Yihang Luo for his valuable contributions to building and maintaining the Gradio demos for ProPainter.