🛣️🔍 | Road crack segmentation using UNet in PyTorch > Implementation of different loss functions (i.e., Focal, Dice, Dice + CE)


yakhyo/crack-segmentation

*(Comparison figure: input image, ground truth, and the predicted mask for each loss function.)*

|            | CE Loss | Dice Loss | DiceCE Loss | Focal Loss |
|------------|---------|-----------|-------------|------------|
| Dice Score | 0.9719  | 0.9804    | 0.9754      | 0.9679     |

Road crack segmentation is the task of identifying and segmenting road cracks in images or videos of roads. In this project we use UNet to detect cracks on the road.
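To make the architecture concrete, here is a deliberately tiny UNet-style encoder-decoder sketch. `TinyUNet` is a hypothetical name and is much smaller than the project's actual model, but it shows the same downsample/upsample path with skip connections:

```python
# Minimal sketch (an assumption, far smaller than this repo's model) of a
# UNet-style encoder-decoder for binary crack segmentation.
import torch
import torch.nn as nn

def block(c_in, c_out):
    # Two 3x3 conv layers with BatchNorm + ReLU, the basic UNet building block
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, num_classes: int = 1):
        super().__init__()
        self.enc1, self.enc2 = block(3, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.mid = block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                  # skip connection 1
        e2 = self.enc2(self.pool(e1))      # skip connection 2
        m = self.mid(self.pool(e2))        # bottleneck
        d2 = self.dec2(torch.cat([self.up2(m), e2], dim=1))  # upsample + concat skip
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)               # raw logits, shape (N, 1, H, W)
```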


Project Description

In this Road Crack Segmentation project, we implemented a UNet model to segment cracks on the road using the Crack Segmentation dataset. We evaluated the model's performance using different loss functions and compared their results. We implemented the following loss functions:

- Cross-Entropy (CE) Loss
- Dice Loss
- Dice + Cross-Entropy (DiceCE) Loss
- Focal Loss

We trained the model with each of the loss functions listed above and evaluated their performance using the Dice coefficient (dice score = 1 - dice loss, so models trained with a Dice-based loss directly optimize the evaluation metric, and it may not be the right metric for comparing these models against each other); a minimal sketch of the Dice loss is shown below.

*See here for the full implementation of more loss functions (updating...)
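For reference, below is a minimal soft Dice loss in PyTorch illustrating the dice score = 1 - dice loss relationship. This is a sketch of the standard formulation, not necessarily this repo's exact implementation:

```python
# Minimal sketch (assumption: standard soft Dice, not necessarily the repo's
# exact code) of the Dice loss for binary crack segmentation.
import torch

def dice_loss(logits: torch.Tensor, targets: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss for binary masks; dice score = 1 - dice loss."""
    probs = torch.sigmoid(logits).flatten(1)      # (N, 1, H, W) -> (N, H*W) probabilities
    targets = targets.flatten(1).float()
    intersection = (probs * targets).sum(dim=1)
    union = probs.sum(dim=1) + targets.sum(dim=1)
    dice = (2 * intersection + eps) / (union + eps)  # per-sample dice score in [0, 1]
    return 1.0 - dice.mean()                         # loss = 1 - mean dice score
```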

Installation

Download the project:

git clone https://github.com/yakhyo/crack-segmentation.git
cd crack-segmentation

Install requirements:

pip install -r requirements.txt

Download the model weights from here into the weights folder. Weights trained with Dice Loss are provided inside the weights folder as model.pt.

Usage

Dataset

To train the model, download the dataset and put the train and test folders inside the data folder as follows:

data
├── train
│   ├── images
│   └── masks
└── test
    ├── images
    └── masks
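A loader for this layout might look like the sketch below. `CrackDataset` is a hypothetical name that assumes image and mask files share the same file name; the repo's actual dataset class may differ:

```python
# Minimal sketch (an assumption, not the repo's actual loader) of a Dataset
# pairing each file in data/<split>/images with its mask in data/<split>/masks.
import os
from PIL import Image
from torch.utils.data import Dataset
import torchvision.transforms.functional as TF

class CrackDataset(Dataset):
    def __init__(self, root: str, split: str = "train", image_size: int = 512):
        self.image_dir = os.path.join(root, split, "images")
        self.mask_dir = os.path.join(root, split, "masks")
        self.names = sorted(os.listdir(self.image_dir))  # assumes matching file names
        self.image_size = image_size

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        image = Image.open(os.path.join(self.image_dir, name)).convert("RGB")
        mask = Image.open(os.path.join(self.mask_dir, name)).convert("L")
        image = TF.resize(image, [self.image_size, self.image_size])
        mask = TF.resize(mask, [self.image_size, self.image_size])
        # Binarize the mask after resizing so interpolation artifacts are removed
        return TF.to_tensor(image), (TF.to_tensor(mask) > 0.5).float()
```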

Train

python train.py

Training arguments:

python train.py -h
usage: train.py [-h] [--data DATA] [--image_size IMAGE_SIZE] [--save-dir SAVE_DIR] [--epochs EPOCHS]
                [--batch-size BATCH_SIZE] [--lr LR] [--weights WEIGHTS] [--amp] [--num-classes NUM_CLASSES]

Crack Segmentation training arguments

optional arguments:
  -h, --help            show this help message and exit
  --data DATA           Path to root folder of data
  --image_size IMAGE_SIZE
                        Input image size, default: 512
  --save-dir SAVE_DIR   Directory to save weights
  --epochs EPOCHS       Number of epochs, default: 5
  --batch-size BATCH_SIZE
                        Batch size, default: 12
  --lr LR               Learning rate, default: 1e-5
  --weights WEIGHTS     Pretrained model, default: None
  --amp                 Use mixed precision
  --num-classes NUM_CLASSES
                        Number of classes
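The --amp flag enables mixed-precision training. Here is a minimal sketch of how such a flag is typically wired into a PyTorch training step; `train_one_epoch` is a hypothetical helper, not the repo's exact loop:

```python
# Minimal sketch (assumption: not the repo's exact train loop) of how an
# --amp flag typically drives mixed-precision training in PyTorch.
import torch

def train_one_epoch(model, loader, criterion, optimizer, device, use_amp=False):
    scaler = torch.cuda.amp.GradScaler(enabled=use_amp)  # no-op when AMP is off
    model.train()
    for images, masks in loader:
        images, masks = images.to(device), masks.to(device)
        optimizer.zero_grad()
        with torch.cuda.amp.autocast(enabled=use_amp):   # forward in float16 where safe
            logits = model(images)
            loss = criterion(logits, masks)
        scaler.scale(loss).backward()  # scale loss to avoid float16 underflow
        scaler.step(optimizer)         # unscales gradients, then steps
        scaler.update()
```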

Inference

python inference.py --weights weights/model.pt --input assets/CFD_001_image.jpg

Inference arguments:

python inference.py -h
usage: inference.py [-h] [--weights WEIGHTS] [--input INPUT] [--output OUTPUT] [--image-size IMAGE_SIZE]
                    [--view] [--no-save] [--conf-thresh CONF_THRESH]

Crack Segmentation inference arguments

optional arguments:
  -h, --help            show this help message and exit
  --weights WEIGHTS     Path to weight file (default: last.pt)
  --input INPUT         Path to input image
  --output OUTPUT       Path to save mask image
  --image-size IMAGE_SIZE
                        Input image size
  --view                Visualize image and mask
  --no-save             Do not save the output masks
  --conf-thresh CONF_THRESH
                        Confidence threshold for mask
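As a rough illustration of how a confidence threshold like --conf-thresh is applied, here is a hypothetical sketch (`predict_mask` is not a function from this repo) that binarizes the model's sigmoid output:

```python
# Minimal sketch (an assumption, not inference.py itself) of applying a
# confidence threshold to turn per-pixel probabilities into a binary mask.
import torch

@torch.no_grad()
def predict_mask(model, image: torch.Tensor, conf_thresh: float = 0.5) -> torch.Tensor:
    """image: a (3, H, W) float tensor; returns a (H, W) uint8 mask in {0, 255}."""
    model.eval()
    logits = model(image.unsqueeze(0))           # add batch dim -> (1, 1, H, W)
    probs = torch.sigmoid(logits)[0, 0]          # per-pixel crack probability
    return (probs > conf_thresh).to(torch.uint8) * 255  # threshold to a 0/255 mask
```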

Contributing

Contributions to improve the crack segmentation project are welcome. Feel free to fork the repository and submit pull requests, or open issues to suggest features or report bugs.

License

The project is licensed under the MIT license.

