EditAnything

Edit anything in images powered by segment-anything, ControlNet, StableDiffusion, etc. (ACM MM)


HuggingFace space

This is an ongoing project that aims to Edit and Generate Anything in an image, powered by Segment Anything, ControlNet, BLIP2, Stable Diffusion, etc.

All forms of contribution and suggestions are very welcome!

News🔥

2023/08/09 - Revised UI and code, fixed multiple known issues.

2023/07/25 - EditAnything is accepted by the ACM MM demo track.

2023/06/09 - Support cross-image region drag and merge, unleash creative fusion!

2023/05/24 - Support multiple kinds of high-quality character editing: clothes, haircuts, colored contact lenses.

2023/05/22 - Support sketch-to-image by adjusting the mask align strength in sketch2image.py!

2023/05/13 - Support interactive segmentation with click operation!

2023/05/11 - Support tile model for detail refinement!

2023/05/04 - New demos of Beauty/Handsome Edit/Generation are released!

2023/05/04 - ControlNet-based inpainting on any LoRA model is now supported. EditAnything can operate on any base/LoRA model without requiring an inpainting model.

More update logs.

2023/05/01 - Models V0.4 based on Stable Diffusion 1.5/2.1 are released. The new models are trained with more data and iterations. See the Model Zoo.

2023/04/20 - We support customized editing with DreamBooth.

2023/04/17 - We support converting SAM masks to semantic segmentation masks.

2023/04/17 - We support different alignment degrees between the edited parts and the SAM mask; check it out in the DEMO!

2023/04/15 - Gradio demo on HuggingFace is released!

2023/04/14 - A new model trained on the LAION dataset is released.

2023/04/13 - Support pretrained-model auto-downloading and Gradio in sam2image.py.

2023/04/12 - An initial version of text-guided edit-anything is in sam2groundingdino_edit.py (object-level) and sam2vlpart_edit.py (part-level).

2023/04/10 - An initial version of edit-anything is in sam2edit.py.

2023/04/10 - We transferred the pretrained model into diffusers format; it is auto-loaded when using sam2image_diffuser.py. Now you can easily combine our pretrained model with different base models!

2023/04/09 - We released a pretrained Stable Diffusion-based ControlNet model that generates images conditioned on SAM segmentation masks.

Features

Try our HuggingFace DEMO🔥🔥🔥

Unleash creative fusion: Cross-image region drag and merge!🔥


Clothes editing!🔥


Haircut editing!🔥


Colored contact lenses!🔥


Human replacement with tile refinement!🔥


Draw your Sketch and Generate your Image!🔥

prompt: "a paint of a tree in the ground with a river."

More demos.

prompt: "a paint, river, mountain, sun, cloud, beautiful field."


prompt: "a man, midsplit center parting hair, HD."


prompt: "a woman, long hair, detailed facial details, photorealistic, HD, beautiful face, solo, candle, brown hair, blue eye."


Also, you can use the generated image and the SAM model to further refine your sketch!

Generate/Edit your beauty!!!🔥🔥🔥

Edit Your beauty and Generate Your beauty


Customized editing with layout alignment control.


EditAnything+DreamBooth: Train a customized DreamBooth Model with `tools/train_dreambooth_inpaint.py` and replace the base model in `sam2edit.py` with the trained model.
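
A minimal sketch of that swap, assuming a diffusers-style DreamBooth output directory; the placeholder path and the inpainting pipeline class are illustrative, not necessarily how `sam2edit.py` wires it up:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline

# EditAnything ControlNet checkpoint (see the Model Zoo below).
controlnet = ControlNetModel.from_pretrained(
    "shgao/edit-anything-v0-4-sd15", torch_dtype=torch.float16
)

# Replace the default base model with your DreamBooth output directory
# (hypothetical path) produced by tools/train_dreambooth_inpaint.py.
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "path/to/dreambooth-output", controlnet=controlnet, torch_dtype=torch.float16
)
```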

Image Editing with layout alignment control.


Keep the layout and Generate your season!

(left: original paint; right: SAM mask)

Human Prompt: "A paint of spring/summer/autumn/winter field."

(spring / summer / autumn / winter)

Edit Specific Thing by Text-Grounding and Segment-Anything

Editing by Text-guided Part Mask

Text Grounding: "dog head"

Human Prompt: "cute dog"p

More demos.

Text Grounding: "cat eye"

Human Prompt: "A cute small humanoid cat"p

Editing by Text-guided Object Mask

Text Grounding: "bench"

Human Prompt: "bench"p

Edit Anything by Segment-Anything

Human Prompt: "esplendent sunset sky, red brick wall"p

More demos.

Human Prompt: "chairs by the lake, sunny day, spring"p

Generate Anything by Segment-Anything

BLIP2 Prompt: "a large white and red ferry"p(1:input image; 2: segmentation mask; 3-8: generated images.)

More demos.

BLIP2 Prompt: "a cloudy sky"p

BLIP2 Prompt: "a black drone flying in the blue sky"p

  1. The human prompt and the BLIP2-generated prompt together form the text instruction (see the sketch below).
  2. The SAM model segments the input image into category-free segmentation masks.
  3. The segmentation mask and the text instruction guide the image generation.
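
A hedged sketch of step 1 using the transformers BLIP-2 API; the checkpoint name, the input path, and the way the two prompts are joined are assumptions for illustration:

```python
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
).to("cuda")

# Caption the input image with BLIP-2.
image = Image.open("input.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)
caption = processor.decode(model.generate(**inputs)[0], skip_special_tokens=True)

# Join the auto-caption with the human prompt (joining scheme is an assumption).
human_prompt = "esplendent sunset sky, red brick wall"
text_instruction = f"{caption}, {human_prompt}"
```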

Generate semantic labels for each SAM mask.


```
python sam2semantic.py
```
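
For intuition, here is a hedged sketch of one way to attach a label to a SAM mask with CLIP (installed in Setup below); the candidate label list and the crop-and-score strategy are assumptions, and sam2semantic.py may do this differently:

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical label vocabulary; the real script may derive labels elsewhere.
labels = ["dog", "cat", "bench", "sky", "brick wall", "person"]
text = clip.tokenize([f"a photo of a {l}" for l in labels]).to(device)

def label_mask(image: Image.Image, bbox: tuple) -> str:
    """Crop the mask's bounding box and score it against each candidate label."""
    crop = preprocess(image.crop(bbox)).unsqueeze(0).to(device)
    with torch.no_grad():
        logits_per_image, _ = model(crop, text)
    return labels[int(logits_per_image.softmax(dim=-1).argmax())]
```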

Highlight features:

  • A pretrained ControlNet with the SAM mask as condition enables image generation with fine-grained control (see the conditioning sketch below).
  • Category-unrelated SAM masks enable more forms of editing and generation.
  • BLIP2 text generation enables control without manually written text guidance.
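
As a rough illustration of the first point, SAM's automatic masks can be rendered into a color-coded conditioning image. This sketch assumes the mask-dict format returned by segment-anything's `SamAutomaticMaskGenerator` and is not necessarily the project's exact encoding:

```python
import numpy as np

def masks_to_condition(masks, height, width, seed=0):
    """Paint each SAM mask a random color to form a ControlNet conditioning image."""
    rng = np.random.default_rng(seed)
    canvas = np.zeros((height, width, 3), dtype=np.uint8)
    # Paint large masks first so smaller masks end up on top.
    for m in sorted(masks, key=lambda m: m["area"], reverse=True):
        color = rng.integers(0, 256, size=3, dtype=np.uint8)
        canvas[m["segmentation"]] = color  # boolean HxW mask from SAM
    return canvas
```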

Setup

Create an environment

```
conda env create -f environment.yaml
conda activate control
```

Install BLIP2 and SAM

Put these models in the `models` folder.

```
# BLIP2 and SAM will be auto-installed by running app.py
pip install git+https://github.com/huggingface/transformers.git
pip install git+https://github.com/facebookresearch/segment-anything.git

# For text-guided editing
pip install git+https://github.com/openai/CLIP.git
pip install git+https://github.com/facebookresearch/detectron2.git
pip install git+https://github.com/IDEA-Research/GroundingDINO.git
```

Download pretrained model

```
# Segment-anything ViT-H SAM model will be auto-downloaded.
# BLIP2 model will be auto-downloaded.

# Part Grounding Swin-Base Model.
wget https://github.com/Cheems-Seminar/segment-anything-and-name-it/releases/download/v1.0/swinbase_part_0a0000.pth

# Grounding DINO Model.
wget https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha2/groundingdino_swinb_cogcoor.pth

# Get the pretrained model from huggingface.
# No need to download this! But please install safetensors for reading the ckpt.
```

Run Demo

```
python app.py
# or
python editany.py
# or
python sam2image.py
# or
python sam2vlpart_edit.py
# or
python sam2groundingdino_edit.py
```

Model Zoo

| Model | Features | Download Path |
| --- | --- | --- |
| SAM Pretrained (v0-1) | Good nature sense | shgao/edit-anything-v0-1-1 |
| LAION Pretrained (v0-3) | Good faces | shgao/edit-anything-v0-3 |
| LAION Pretrained (v0-4) | Supports Stable Diffusion 1.5/2.1; more training data and iterations; good faces | shgao/edit-anything-v0-4-sd15, shgao/edit-anything-v0-4-sd21 |
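
A minimal sketch of loading one of these checkpoints with diffusers and pairing it with a base model of your choice; the base model name and the scheduler are illustrative assumptions:

```python
import torch
from diffusers import (
    ControlNetModel,
    StableDiffusionControlNetPipeline,
    UniPCMultistepScheduler,
)

# Load the EditAnything ControlNet from the Model Zoo.
controlnet = ControlNetModel.from_pretrained(
    "shgao/edit-anything-v0-4-sd15", torch_dtype=torch.float16
)

# Any SD 1.5-compatible base model works here.
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# control_image: the SAM-mask conditioning image (see the sketch above).
# result = pipe("esplendent sunset sky, red brick wall", image=control_image).images[0]
```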

Training

  1. Generate the training dataset with dataset_build.py.
  2. Transfer the stable-diffusion model with tool_add_control_sd21.py.
  3. Train the model with sam_train_sd21.py.
  4. We consider using the Adan optimizer for model training (see the sketch after this list).
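
A hedged usage sketch of Adan, assuming the adan-pytorch package (`pip install adan-pytorch`); the hyperparameters are that package's defaults, not values from this repository's training scripts:

```python
import torch
from adan_pytorch import Adan

model = torch.nn.Linear(8, 8)  # stand-in for the ControlNet under training

# Swap Adan in wherever the training script builds its optimizer.
optimizer = Adan(
    model.parameters(),
    lr=1e-4,
    betas=(0.02, 0.08, 0.01),
    weight_decay=0.02,
)
```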

Acknowledgement

```
@InProceedings{gao2023editanything,
  author    = {Gao, Shanghua and Lin, Zhijie and Xie, Xingyu and Zhou, Pan and Cheng, Ming-Ming and Yan, Shuicheng},
  title     = {EditAnything: Empowering Unparalleled Flexibility in Image Editing and Generation},
  booktitle = {Proceedings of the 31st ACM International Conference on Multimedia, Demo track},
  year      = {2023},
}
```

This project is based on:

Segment Anything, ControlNet, BLIP2, MDT, Stable Diffusion, Large-scale Unsupervised Semantic Segmentation, Grounded Segment Anything: From Objects to Parts, Grounded-Segment-Anything

Thanks for these amazing projects!
