The Lovász-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks
Maxim Berman, Amal Rannen Triki, Matthew B. Blaschko
ESAT-PSI, KU Leuven, Belgium.
Published in CVPR 2018. See the project page, the arXiv paper, and the paper on CVF open access.
PyTorch implementation. Files included:
- lovasz_losses.py: Standalone PyTorch implementation of the Lovász hinge and Lovász-Softmax for the Jaccard index
- demo_binary.ipynb: Jupyter notebook showcasing binary training of a linear model, with the Lovász Hinge and with the Lovász-Sigmoid.
- demo_multiclass.ipynb: Jupyter notebook showcasing multiclass training of a linear model with the Lovász-Softmax
The binary `lovasz_hinge` expects real-valued scores (positive scores correspond to foreground pixels).
The multiclass `lovasz_softmax` expects class probabilities (the maximum scoring category is predicted); first apply a `Softmax` layer to the unnormalized scores.
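As a point of reference, the core computation behind the binary loss can be sketched in NumPy, following Algorithm 1 of the paper: sort the hinge errors in decreasing order, compute the gradient of the Lovász extension of the Jaccard loss over the sorted ground truth, and take the dot product. This is an illustrative sketch under those assumptions, not the repository's PyTorch code; use `lovasz_losses.py` for training.

```python
import numpy as np

def lovasz_grad(gt_sorted):
    """Gradient of the Lovász extension of the Jaccard loss, for binary
    ground-truth labels sorted by decreasing error (paper, Alg. 1)."""
    gts = gt_sorted.sum()
    intersection = gts - np.cumsum(gt_sorted)
    union = gts + np.cumsum(1.0 - gt_sorted)
    jaccard = 1.0 - intersection / union
    if len(gt_sorted) > 1:
        # discrete differences give the per-pixel gradient
        jaccard[1:] = jaccard[1:] - jaccard[:-1]
    return jaccard

def lovasz_hinge_flat(logits, labels):
    """Binary Lovász hinge on flattened real-valued scores
    (positive = foreground) and {0, 1} labels."""
    signs = 2.0 * labels - 1.0
    errors = 1.0 - logits * signs            # hinge errors
    order = np.argsort(-errors, kind="stable")
    errors_sorted = errors[order]
    gt_sorted = labels[order]
    # dot product of thresholded errors with the Jaccard-loss gradient
    return np.dot(np.maximum(errors_sorted, 0.0), lovasz_grad(gt_sorted))
```

A confident, correct prediction yields zero loss, since all hinge errors are negative and are clipped before the dot product.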
TensorFlow implementation. Files included:
- lovasz_losses_tf.py: Standalone TensorFlow implementation of the Lovász hinge and Lovász-Softmax for the Jaccard index
- demo_binary_tf.ipynb: Jupyter notebook showcasing binary training of a linear model, with the Lovász Hinge and with the Lovász-Sigmoid.
- demo_multiclass_tf.ipynb: Jupyter notebook showcasing the application of the multiclass loss with the Lovász-Softmax
Warning: the loss values and gradients have been tested to match the PyTorch implementation (see notebooks); however, we have not used the TF implementation in a training setting.
See the demos for simple proofs of principle.
- How should I use the Lovász-Softmax loss?
The loss can be optimized on its own, but the optimal optimization hyperparameters (learning rates, momentum) might be different from the best ones for cross-entropy. As discussed in the paper, optimizing the dataset-mIoU (Pascal VOC measure) is dependent on the batch size and number of classes. Therefore you might have best results by optimizing with cross-entropy first and finetuning with our loss, or by combining the two losses.
See for example how the work Land Cover Classification From Satellite Imagery With U-Net and Lovasz-Softmax Loss by Alexander Rakhlin et al. used our loss in the CVPR 18 DeepGlobe challenge.
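One simple way to combine the two losses, as suggested above, is a weighted sum of cross-entropy and the multiclass Lovász-Softmax. The sketch below is illustrative NumPy, not the repository code: `combined_loss` and the mixing weight `alpha` are hypothetical names introduced here, and the probabilities are assumed to already come from a Softmax layer.

```python
import numpy as np

def lovasz_grad(gt_sorted):
    """Gradient of the Lovász extension of the Jaccard loss (paper, Alg. 1)."""
    gts = gt_sorted.sum()
    intersection = gts - np.cumsum(gt_sorted)
    union = gts + np.cumsum(1.0 - gt_sorted)
    jaccard = 1.0 - intersection / union
    if len(gt_sorted) > 1:
        jaccard[1:] = jaccard[1:] - jaccard[:-1]
    return jaccard

def lovasz_softmax_flat(probas, labels):
    """Multiclass Lovász-Softmax on flattened class probabilities
    (probas: [P, C], rows summing to 1) and integer labels (labels: [P])."""
    losses = []
    for c in range(probas.shape[1]):
        fg = (labels == c).astype(np.float64)   # foreground mask for class c
        errors = np.abs(fg - probas[:, c])
        order = np.argsort(-errors, kind="stable")
        losses.append(np.dot(errors[order], lovasz_grad(fg[order])))
    return np.mean(losses)

def combined_loss(probas, labels, alpha=0.5):
    """Hypothetical mix of cross-entropy and Lovász-Softmax;
    alpha is an illustrative weight, to be tuned per task."""
    ce = -np.mean(np.log(probas[np.arange(len(labels)), labels]))
    return alpha * ce + (1.0 - alpha) * lovasz_softmax_flat(probas, labels)
```

In practice, as noted above, one can also train with cross-entropy alone first (`alpha = 1`) and then finetune with the Lovász-Softmax term.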
- Inference in Tensorflow is very slow...
Compiling from Tensorflow master (or using a future distribution that includes commit tensorflow/tensorflow@73e3215) should solve this problem; see issue #6.
Please cite
```
@inproceedings{berman2018lovasz,
  title={The Lov{\'a}sz-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks},
  author={Berman, Maxim and Rannen Triki, Amal and Blaschko, Matthew B},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={4413--4421},
  year={2018}
}
```