An unofficial Gluon FR Toolkit for face recognition. https://gluon-face.readthedocs.io
GluonFR is a toolkit based on MXNet Gluon that provides SOTA deep learning algorithms and models for face recognition.
GluonFR supports Python 3.5 or later. To install this package, you need to install GluonCV and MXNet first:
```
pip install gluoncv --pre
pip install mxnet-mkl --pre --upgrade

# if cuda XX is installed
pip install mxnet-cuXXmkl --pre --upgrade
```
Then install gluonfr:
- From source (recommended)
pip install git+https://github.com/THUFutureLab/gluon-face.git@master
- Pip
pip install gluonfr
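After installing, a quick sanity check can confirm that the packages import and MXNet runs (a minimal sketch; switch `mx.cpu()` to `mx.gpu(0)` if you installed a CUDA build):

```python
# Quick sanity check that the installation works.
import mxnet as mx
import gluoncv
import gluonfr  # just verifying the package imports

print(mx.__version__, gluoncv.__version__)

# Create a small NDArray to confirm MXNet can execute operations.
x = mx.nd.ones((2, 3), ctx=mx.cpu())
print(x.sum().asscalar())
```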
GluonFR is based on MXNet Gluon; if you are new to it, please check out the DMLC 60-minute crash course.
This part provides the input pipeline for training and validation. All datasets are aligned by MTCNN and cropped to (112, 112) by DeepInsight; the images are converted to `train.rec`, `train.idx` and `val_data.bin` files. Please check out [insightface/Dataset-Zoo] for more information. In `data/dali_utils.py` there is a simple example of NVIDIA DALI; it is worth trying when data augmentation on the CPU cannot keep up with GPU training speed.
The files should be prepared like:
```
face/
    emore/
        train.rec
        train.idx
        property
    ms1m/
        train.rec
        train.idx
        property
    lfw.bin
    agedb_30.bin
    ...
    vgg2_fp.bin
```
We use `~/.mxnet/datasets` as the default dataset root to match the MXNet setting.
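For a concrete picture of what these files contain, below is a minimal sketch that reads a few samples from `train.rec`/`train.idx` with plain MXNet record I/O. This is only an illustration, not the loader gluonfr uses internally; the `face/emore` path is assumed from the layout above, and image decoding needs `opencv-python`:

```python
import os
import mxnet as mx

# Assumed default dataset root, matching the MXNet convention mentioned above.
root = os.path.expanduser("~/.mxnet/datasets/face/emore")

# Indexed record I/O lets us read individual samples from train.rec via train.idx.
record = mx.recordio.MXIndexedRecordIO(
    os.path.join(root, "train.idx"),
    os.path.join(root, "train.rec"),
    "r",
)

# Read a few records; each one packs a header (label) and a JPEG-encoded image.
# In InsightFace-format .rec files the first record is typically a metadata
# header rather than an image, so start from the second key.
for i in record.keys[1:4]:
    header, img = mx.recordio.unpack_img(record.read_idx(i))
    print("label:", header.label, "image shape:", img.shape)  # expect (112, 112, 3)
```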
Networks implemented include mobile_facenet, res_attention_net, se_resnet, ...
GluonFR provides implementations of recent losses, including SoftmaxCrossEntropyLoss, ArcLoss, TripletLoss, RingLoss, CosLoss, L2Softmax, ASoftmax, CenterLoss, ContrastiveLoss, ..., and we will keep updating them in the future.
If there is any method we overlooked, please open an issue.
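For intuition about how these margin-based losses work, here is a minimal Gluon sketch of an ArcFace-style additive angular margin. It is illustrative only; the class name and the default `m`/`s` values are assumptions, not gluonfr's exact API. The `cos_theta` input is expected to come from a linear layer whose weights and input features are both L2-normalized:

```python
import math
import mxnet as mx
from mxnet.gluon.loss import SoftmaxCrossEntropyLoss


class SimpleArcLoss(mx.gluon.HybridBlock):
    """Additive angular margin: replace the target logit cos(theta) with
    cos(theta + m), then scale all logits by s before cross entropy."""

    def __init__(self, classes, m=0.5, s=64.0, **kwargs):
        super().__init__(**kwargs)
        self._classes = classes
        self._m = m
        self._s = s
        self._ce = SoftmaxCrossEntropyLoss()

    def hybrid_forward(self, F, cos_theta, label):
        # Cosine of the target class for each sample, clipped for numerical safety.
        cos_t = F.clip(F.pick(cos_theta, label, axis=1), -1.0, 1.0)
        sin_t = F.sqrt(1.0 - F.square(cos_t))
        # cos(theta + m) = cos(theta)cos(m) - sin(theta)sin(m)
        cos_t_m = cos_t * math.cos(self._m) - sin_t * math.sin(self._m)
        # Replace only the target-class logit with the margin-adjusted value.
        one_hot = F.one_hot(label, depth=self._classes)
        delta = F.expand_dims(cos_t_m - cos_t, axis=1)
        logits = cos_theta + F.broadcast_mul(one_hot, delta)
        return self._ce(self._s * logits, label)
```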
`examples/` shows how to use GluonFR to train a face recognition model, and how to get a 2-D feature embedding visualization on MNIST.
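The MNIST visualization works by giving the network a 2-dimensional embedding layer and scattering those embeddings colored by digit class. The plotting step might look like the sketch below (function and variable names are illustrative, not gluonfr's):

```python
import matplotlib.pyplot as plt
import numpy as np


def plot_mnist_embeddings(embeddings, labels, path="mnist_embeddings.png"):
    """Scatter 2-D feature embeddings, one color per digit class.

    `embeddings` is an (N, 2) array taken from the network's 2-D feature layer,
    `labels` is the matching (N,) array of digit labels.
    """
    embeddings = np.asarray(embeddings)
    labels = np.asarray(labels)
    plt.figure(figsize=(6, 6))
    for digit in range(10):
        mask = labels == digit
        plt.scatter(embeddings[mask, 0], embeddings[mask, 1], s=2, label=str(digit))
    plt.legend(markerscale=4, loc="upper right")
    plt.title("MNIST 2-D feature embeddings")
    plt.savefig(path, dpi=150)
    plt.close()
```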
The last column of this chart is the best LFW accuracy reported in the corresponding paper; those results were obtained with different training data and networks. Later we will report our results for these methods with the same training data and network.
Method | Paper | Visualization of MNIST | LFW |
---|---|---|---|
Contrastive Loss | ContrastiveLoss | - | - |
Triplet | 1503.03832 | - | 99.63±0.09 |
Center Loss | CenterLoss | (figure in repo) | 99.28 |
L2-Softmax | 1703.09507 | - | 99.33 |
A-Softmax | 1704.08063 | - | 99.42 |
CosLoss/AMSoftmax | 1801.09414/1801.05599 | (figure in repo) | 99.17 |
ArcLoss | 1801.07698 | (figure in repo) | 99.82 |
Ring loss | 1803.00130 | (figure in repo) | 99.52 |
LGM Loss | 1803.02988 | (figure in repo) | 99.20±0.03 |
See the Model Zoo in the docs.
- More pretrained models
- IJB and Megaface Results
- Other losses
- Dataloaders for losses that depend on how batches are constructed, like Triplet, ContrastiveLoss, RangeLoss, ...
- Try GluonCV resnetV1b/c/d to improve performance
- Create hosted docs
- Test module
- PyPI package
Please check out the link.
For the Chinese version: link
{haoxintong, Yangxv, Haoyadong, Sunhao}
Chinese community: Gluon-Forum. Feel free to use English here :D
MXNet Documentation and Tutorials: https://zh.diveintodeeplearning.org/
NVIDIA DALI documentation
DeepInsight insightface