WillBrennan/SemanticSegmentation

A framework for training segmentation models in PyTorch on labelme annotations, with pretrained examples of skin, cat, and pizza topping segmentation.

Overview

This project started as a replacement for the Skin Detection project, which used traditional computer vision techniques. This project implements two models:

  • FCNResNet101 from torchvision for accurate segmentation
  • BiSeNetV2 for real-time segmentation

These models are trained with masks generated from labelme annotations. As labelme annotations allow multiple categories per pixel, we use multi-label semantic segmentation. Both the accurate and real-time models are in the pretrained directory.
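The practical difference between multi-label and multi-class segmentation is in the output activation: each category gets its own channel with an independent sigmoid, rather than all channels competing through a softmax. Here is a minimal sketch of that idea using torchvision's FCNResNet101; the channel count, input size, and 0.5 threshold are illustrative assumptions, not the project's exact code.

import torch
from torchvision.models.segmentation import fcn_resnet101

# one output channel per labelme category (2 here, purely for illustration)
model = fcn_resnet101(num_classes=2)
model.eval()

with torch.no_grad():
    logits = model(torch.randn(1, 3, 256, 256))["out"]  # (1, 2, 256, 256)

# multi-label: a pixel may belong to several categories at once,
# so each channel is squashed independently with a sigmoid instead
# of a softmax across channels
probs = torch.sigmoid(logits)
masks = probs > 0.5  # one boolean mask per category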

Getting Started

The pretrained models are stored in the repo with git-lfs. When you clone, make sure you've pulled the files by calling

git lfs pull

or by downloading them from GitHub directly. This project uses conda to manage its environment; once conda is installed, we create the environment and activate it:

conda env create -f enviroment.yml
conda activate semantic_segmentation

On Windows, PowerShell needs to be initialised and the execution policy needs to be modified:

conda init powershell
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser

Pre-Trained Segmentation Projects

This project comes bundled with several pretrained models, which can be found in the pretrained directory. To infer segmentation masks on your images, run evaluate_images.py:

# to display the output
python evaluate_images.py --images ~/Pictures/ --model pretrained/model_segmentation_skin_30.pth --model-type FCNResNet101 --display
# to save the output
python evaluate_images.py --images ~/Pictures/ --model pretrained/model_segmentation_skin_30.pth --model-type FCNResNet101 --save

To run the real-time models, change the --model-type:

# to display the output
python evaluate_images.py --images ~/Pictures/ --model pretrained/model_segmentation_realtime_skin_30.pth --model-type BiSeNetV2 --display
# to save the output
python evaluate_images.py --images ~/Pictures/ --model pretrained/model_segmentation_realtime_skin_30.pth --model-type BiSeNetV2 --save
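If you want to call a checkpoint from your own code rather than through evaluate_images.py, the core steps are loading the weights, running a forward pass, and thresholding the per-category sigmoid outputs. A minimal sketch follows, assuming the checkpoint holds a plain state_dict for a two-category model and standard ImageNet normalisation; the project's actual checkpoint format and pre-processing may differ.

import torch
from torchvision import transforms
from torchvision.models.segmentation import fcn_resnet101
from PIL import Image

# assumption: the .pth file is a raw state_dict for a 2-category model
model = fcn_resnet101(num_classes=2)
model.load_state_dict(torch.load("pretrained/model_segmentation_skin_30.pth", map_location="cpu"))
model.eval()

# assumption: standard ImageNet normalisation constants
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    masks = torch.sigmoid(model(image)["out"]) > 0.5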

Skin Segmentation

This model was trained with a custom dataset of 150 images taken from COCO, to which skin segmentation annotations were added. This includes a wide variety of skin colours and lighting conditions, making it more robust than the Skin Detection project. This model detects:

  • skin

[Image: Skin Segmentation]

Pizza Topping Segmentation

This was trained with a custom dataset of 89 images taken from COCO, to which pizza topping annotations were added. There are very few images for each type of topping, so this model performs very badly and needs quite a few more images to behave well!

  • chilli, ham, jalapenos, mozzarella, mushrooms, olive, pepperoni, pineapple, salad, tomato

[Image: Pizza Toppings]

Cat and Bird Segmentation

Annotated images of birds and cats were taken from COCO using the extract_from_coco script and then trained on.

  • cat, birds

[Image: Demo on Cat & Birds]

Training New Projects

To train a new project, you can create new labelme annotations on your images. To launch labelme, run

labelme

and start annotating your images! You'll need a couple of hundred. Alternatively, if your category is already in COCO, you can run the conversion tool to create labelme annotations from the COCO ones:

python extract_from_coco.py --images ~/datasets/coco/val2017 --annotations ~/datasets/coco/annotations/instances_val2017.json --output ~/datasets/my_cat_images_val --categories cat
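Either way, the result is one labelme JSON file per image. For reference, a labelme annotation looks roughly like the sketch below; the exact fields vary with the labelme version, and the file and image names here are made up.

import json

# a typical labelme annotation: one polygon per labelled region
annotation = {
    "version": "5.0.1",
    "imagePath": "cat_001.jpg",
    "imageHeight": 480,
    "imageWidth": 640,
    "shapes": [
        {
            "label": "cat",
            "shape_type": "polygon",
            # polygon vertices as [x, y] pixel coordinates
            "points": [[120.0, 80.0], [300.0, 85.0], [310.0, 400.0], [115.0, 390.0]],
        }
    ],
}

with open("cat_001.json", "w") as f:
    json.dump(annotation, f, indent=2)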

Once you've got a directory of labelme annotations, you can check how the images will be shown to the model during training by running

python check_dataset.py --dataset ~/datasets/my_cat_images_val
# to show our dataset with training augmentation
python check_dataset.py --dataset ~/datasets/my_cat_images_val --use-augmentation
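Checking the augmented view matters because the model never sees your raw images, only transformed ones. The following is a minimal sketch of the kind of augmentation such a pipeline typically applies, written with torchvision transforms; these specific transforms and parameters are illustrative assumptions, not the project's exact pipeline.

import torchvision.transforms as T

# illustrative training-time augmentation; note that the geometric
# transforms (crop, flip) must be applied identically to the mask,
# which is why check_dataset.py visualises the image/mask pair together
augment = T.Compose([
    T.RandomResizedCrop(256, scale=(0.5, 1.0)),
    T.RandomHorizontalFlip(p=0.5),
    T.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3),
])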

If you're happy with the images and how they'll appear in training, then train the model using

python train.py --train ~/datasets/my_cat_images_train --val ~/datasets/my_cat_images_val --model-tag segmentation_cat --model-type FCNResNet101

This may take some time depending on how many images you have. Tensorboard logs are available in the logs directory. To run your trained model on a directory of images, run:

# to display the output
python evaluate_images.py --images ~/Pictures/my_cat_imgs --model models/model_segmentation_cat_30.pth --model-type FCNResNet101 --display
# to save the output
python evaluate_images.py --images ~/Pictures/my_cat_imgs --model models/model_segmentation_cat_30.pth --model-type FCNResNet101 --save
