# sail-sg/EditAnything

Edit anything in images powered by segment-anything, ControlNet, StableDiffusion, etc. (ACM MM)
This is an ongoing project that aims to Edit and Generate Anything in an image, powered by Segment Anything, ControlNet, BLIP2, Stable Diffusion, etc.
Any form of contribution or suggestion is very welcome!
2023/08/09 - Revised UI and code, fixed multiple known issues.
2023/07/25 - EditAnything is accepted by the ACM MM demo track.
2023/06/09 - Support cross-image region drag and merge, unleash creative fusion!
2023/05/24 - Support multiple high-quality character editing: clothes, haircut, colored contact lenses.
2023/05/22 - Support sketch-to-image by adjusting the mask align strength in `sketch2image.py`!
2023/05/13 - Support interactive segmentation with click operation!
2023/05/11 - Support tile model for detail refinement!
2023/05/04 - New demos of Beauty/Handsome Edit/Generation are released!
2023/05/04 - ControlNet-based inpainting on any LoRA model is now supported. EditAnything can operate on any base/LoRA model without requiring an inpainting model.
More update logs.
2023/05/01 - Models V0.4 based on Stable Diffusion 1.5/2.1 are released. New models are trained with more data and iterations. See the Model Zoo.
2023/04/20 - We support customized editing with DreamBooth.
2023/04/17 - We support converting the SAM mask to a semantic segmentation mask.
2023/04/17 - We support different alignment degrees between the edited parts and the SAM mask; check it out in the DEMO!
2023/04/15 - Gradio demo on Hugging Face is released!
2023/04/14 - New model trained with LAION dataset is released.
2023/04/13 - Support pretrained model auto downloading and Gradio in `sam2image.py`.
2023/04/12 - An initial version of text-guided edit-anything is in `sam2groundingdino_edit.py` (object-level) and `sam2vlpart_edit.py` (part-level).
2023/04/10 - An initial version of edit-anything is in `sam2edit.py`.
2023/04/10 - We converted the pretrained model into diffusers style; it is auto loaded when using `sam2image_diffuser.py`. Now you can easily combine our pretrained model with different base models!
2023/04/09 - We released a pretrained Stable Diffusion-based ControlNet model that generates images conditioned on SAM segmentation masks.
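The adjustable mask-alignment idea mentioned above can be illustrated with a minimal numpy sketch. This is not the repo's actual implementation; `blend_with_mask` and its parameters are hypothetical names used only to show how an alignment strength can trade off between a strict mask-bounded edit and a softer blend:

```python
import numpy as np

def blend_with_mask(original, edited, mask, align_strength=1.0):
    """Composite an edited image into the original through a SAM mask.

    align_strength=1.0 applies the edit fully inside the mask; smaller
    values attenuate it, loosening how strictly the result follows the
    mask. Images are float arrays in [0, 1] of shape (H, W, C); the
    mask is (H, W, 1) with 1 inside the selected region.
    """
    soft = align_strength * mask                 # scaled mask influence
    return soft * edited + (1.0 - soft) * original

# Toy example: edit only the left column of a 2x2 black image.
original = np.zeros((2, 2, 3))
edited = np.ones((2, 2, 3))
mask = np.array([[[1.0], [0.0]],
                 [[1.0], [0.0]]])                # left column selected
half = blend_with_mask(original, edited, mask, align_strength=0.5)
```

With `align_strength=0.5` the masked pixels end up halfway between the original and the edit, while unmasked pixels are untouched.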






prompt: "a paint of a tree in the ground with a river."
More demos.
prompt: "a paint, river, mountain, sun, cloud, beautiful field."
prompt: "a man, midsplit center parting hair, HD."
prompt: "a woman, long hair, detailed facial details, photorealistic, HD, beautiful face, solo, candle, brown hair, blue eye."
Also, you can use the generated image and the SAM model to further refine your sketch!
Edit Your beauty and Generate Your beauty
EditAnything + DreamBooth: train a customized DreamBooth model with `tools/train_dreambooth_inpaint.py` and replace the base model in `sam2edit.py` with the trained model.
Human Prompt: "A paint of spring/summer/autumn/winter field."
Text Grounding: "dog head"
Text Grounding: "bench"
Human Prompt: "esplendent sunset sky, red brick wall"
BLIP2 Prompt: "a large white and red ferry" (1: input image; 2: segmentation mask; 3-8: generated images.)
- The human prompt and the BLIP2-generated prompt build the text instruction.
- The SAM model segments the input image to generate a segmentation mask without category labels.
- The segmentation mask and the text instruction guide the image generation.
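The first step above can be sketched in plain Python. The function name is illustrative, not the repo's API, and the BLIP2 caption would normally come from the model rather than a string literal:

```python
def build_text_instruction(human_prompt: str, blip2_caption: str) -> str:
    """Combine the optional human prompt with the BLIP2-generated caption.

    When the user supplies no prompt, the BLIP2 caption alone drives
    generation (text guidance-free control); otherwise both parts are
    joined into one instruction.
    """
    parts = [p.strip() for p in (human_prompt, blip2_caption) if p and p.strip()]
    return ", ".join(parts)

# e.g. a human prompt plus an automatic caption of the input image
instruction = build_text_instruction(
    "esplendent sunset sky, red brick wall",
    "a large white and red ferry",
)
```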
```
python sam2semantic.py
```
Highlight features:
- Pretrained ControlNet with the SAM mask as condition enables image generation with fine-grained control.
- The category-unrelated SAM mask enables more forms of editing and generation.
- BLIP2 text generation enables text guidance-free control.
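The "SAM mask as condition" feature can be sketched as follows: each category-unrelated segment is painted a distinct color to form the conditioning image fed to ControlNet. This is a hedged numpy sketch with hypothetical names, not the repo's actual preprocessing code:

```python
import numpy as np

def masks_to_control_image(masks, height, width, seed=0):
    """Paint each boolean SAM mask a distinct random color.

    `masks` is a list of (H, W) boolean arrays, as produced by SAM's
    automatic mask generator. The result is an (H, W, 3) uint8 image
    usable as a ControlNet condition; no category labels are needed.
    """
    rng = np.random.default_rng(seed)
    control = np.zeros((height, width, 3), dtype=np.uint8)
    for mask in masks:
        color = rng.integers(0, 256, size=3, dtype=np.uint8)
        control[mask] = color                 # fill the segment with its color
    return control

# Two toy segments on a 4x4 canvas: top half and bottom half.
m1 = np.zeros((4, 4), dtype=bool); m1[:2] = True
m2 = np.zeros((4, 4), dtype=bool); m2[2:] = True
cond = masks_to_control_image([m1, m2], 4, 4)
```

Overlapping segments are resolved by painting order here; the real pipeline may handle overlaps differently.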
Create an environment
```
conda env create -f environment.yaml
conda activate control
```
Install BLIP2 and SAM
Put these models in the `models` folder.
```
# BLIP2 and SAM will be auto installed by running app.py
pip install git+https://github.com/huggingface/transformers.git
pip install git+https://github.com/facebookresearch/segment-anything.git
# For text-guided editing
pip install git+https://github.com/openai/CLIP.git
pip install git+https://github.com/facebookresearch/detectron2.git
pip install git+https://github.com/IDEA-Research/GroundingDINO.git
```
Download pretrained model
```
# Segment-anything ViT-H SAM model will be auto downloaded.
# BLIP2 model will be auto downloaded.
# Part Grounding Swin-Base Model.
wget https://github.com/Cheems-Seminar/segment-anything-and-name-it/releases/download/v1.0/swinbase_part_0a0000.pth
# Grounding DINO Model.
wget https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha2/groundingdino_swinb_cogcoor.pth
# Get pretrained model from huggingface.
# No need to download this! But please install safetensors for reading the ckpt.
```
Run Demo
```
python app.py
# or
python editany.py
# or
python sam2image.py
# or
python sam2vlpart_edit.py
# or
python sam2groundingdino_edit.py
```
Model | Features | Download Path
---|---|---
SAM Pretrained (v0-1) | Good Nature Sense | shgao/edit-anything-v0-1-1
LAION Pretrained (v0-3) | Good Face | shgao/edit-anything-v0-3
LAION Pretrained (v0-4) | Support StableDiffusion 1.5/2.1, more training data and iterations, Good Face | shgao/edit-anything-v0-4-sd15, shgao/edit-anything-v0-4-sd21
- Generate the training dataset with `dataset_build.py`.
- Transfer the stable-diffusion model with `tool_add_control_sd21.py`.
- Train the model with `sam_train_sd21.py`.
- We consider using the Adan optimizer for model training.
```
@InProceedings{gao2023editanything,
  author    = {Gao, Shanghua and Lin, Zhijie and Xie, Xingyu and Zhou, Pan and Cheng, Ming-Ming and Yan, Shuicheng},
  title     = {EditAnything: Empowering Unparalleled Flexibility in Image Editing and Generation},
  booktitle = {Proceedings of the 31st ACM International Conference on Multimedia, Demo track},
  year      = {2023},
}
```
This project is based on:
Segment Anything, ControlNet, BLIP2, MDT, Stable Diffusion, Large-scale Unsupervised Semantic Segmentation, Grounded Segment Anything: From Objects to Parts, Grounded-Segment-Anything
Thanks for these amazing projects!