# Global-Local Attention for Emotion Recognition
## Requirements
- Python 3
- Install tensorflow (or tensorflow-gpu) >= 2.0.0
- Install some other packages:

```
pip install cython
pip install opencv-python==4.3.0.36 matplotlib numpy==1.18.5 dlib
```
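Before running anything, a quick import check can confirm the environment is set up correctly (a minimal sketch for convenience, not part of the repo):

```python
# Sanity-check the pinned dependencies installed above.
import tensorflow as tf
import cv2
import dlib
import numpy as np

print("TensorFlow:", tf.__version__)  # expect >= 2.0.0
print("OpenCV:", cv2.__version__)     # expect 4.3.0.36
print("NumPy:", np.__version__)       # expect 1.18.5
print("dlib:", dlib.__version__)
```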
## Dataset
We provide the NCAER-S dataset with the original images and the extracted faces (one .txt file per image containing 4 bounding box coordinates).
The dataset can be downloaded at Google Drive.
Note that the dataset and labels should have a structure like the following:
```
NCAER-S
│
└───images
│   │
│   └───class_1
│   │   │   img1.jpg
│   │   │   img2.jpg
│   │   │   ...
│   └───class_2
│   │   │   img1.jpg
│   │   │   img2.jpg
│   │   │   ...
│
└───crop
│   │
│   └───class_1
│   │   │   img1.txt
│   │   │   img2.txt
│   │   │   ...
│   └───class_2
│   │   │   img1.txt
│   │   │   img2.txt
│   │   │   ...
```
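Each crop file pairs with the image of the same name. As a rough illustration of how a face could be cut out of an image using one of these .txt files, here is a sketch; the coordinate order inside the .txt files is an assumption (left, top, right, bottom), so adjust to the actual format:

```python
# Illustrative sketch only: assumes each line of the crop .txt holds one face
# as four integers "left top right bottom" (the order is an assumption).
import cv2

def load_face_crops(image_path, crop_path):
    image = cv2.imread(image_path)
    faces = []
    with open(crop_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 4:
                continue
            left, top, right, bottom = map(int, parts[:4])
            faces.append(image[top:bottom, left:right])
    return faces

# Example (paths follow the directory layout above):
# faces = load_face_crops("data/NCAER-S/images/class_1/img1.jpg",
#                         "data/NCAER-S/crop/class_1/img1.txt")
```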
## Usage
Our code supports these types of execution with the argument -m or --mode:
```
# extract faces from the <train, val or test> dataset (specified in config.py)
python run.py -m extract --dataset_type=train

# train the model with the config specified in config.py
python run.py -m train

# evaluate the trained model on the dataset <dataset_type>
python run.py -m eval --dataset_type=test --trained_weights=path/to/weights
```
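For reference, the flag surface above corresponds to a parser roughly like the following (a hypothetical reconstruction, not run.py's actual code):

```python
# Hypothetical sketch of the CLI flags used throughout this README;
# run.py's real parser may differ in defaults and wording.
import argparse

parser = argparse.ArgumentParser(description="GLAMOR-Net runner")
parser.add_argument("-m", "--mode", required=True,
                    choices=["extract", "train", "eval"])
parser.add_argument("--dataset_type", default="train",
                    choices=["train", "val", "test"])
parser.add_argument("--trained_weights", default=None)
parser.add_argument("--resume", default=None)  # a weights path, or "last"
args = parser.parse_args()
print(args)
```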
Our trained model is available at `weights/glamor-net/Model`.
### Evaluation
- First, download the dataset and extract it into the `data/` directory.
- Then specify the paths to the test data (images and crop):

```python
config = config.copy({
    'test_images': 'path_to_test_images',
    'test_crop': 'path_to_test_cropped_faces'  # (.txt files)
})
```
- Run this command to evaluate the model. We use classification accuracy as the evaluation metric.
```
# Evaluate our model on the test set
python run.py -m eval --dataset_type=test --trained_weights=weights/glamor-net/Model
```
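The reported number is plain classification accuracy: the fraction of test images whose predicted class matches the label. A minimal reference implementation (the names here are illustrative, not from the repo):

```python
# Classification accuracy: fraction of predictions equal to the labels.
import numpy as np

def classification_accuracy(y_true, y_pred):
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float((y_true == y_pred).mean())

print(classification_accuracy([0, 1, 2, 2], [0, 1, 1, 2]))  # 0.75
```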
### Training
First, extract the faces from the train set (the val set is optional):
- Specify the paths to the datasets in config.py (train_images, val_images, test_images)
- Specify the desired face-extraction output paths in config.py (train_crop, val_crop, test_crop)

```python
config = config.copy({
    'train_images': 'path_to_training_images',
    'train_crop': 'path_to_training_cropped_faces',  # (.txt files)
    'val_images': 'path_to_validation_images',
    'val_crop': 'path_to_validation_cropped_faces'  # (.txt files)
})
```
- Perform face extraction on each dataset_type by running the command:

```
python run.py -m extract --dataset_type=<train, val or test>
```
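The config.copy(...) override pattern used in the snippets above could be implemented roughly like this (a hypothetical illustration; the real Config class in config.py may differ):

```python
# Hypothetical Config with a copy(...) that merges override entries;
# shown only to clarify the override pattern, not taken from config.py.
class Config:
    def __init__(self, **entries):
        self.__dict__.update(entries)

    def copy(self, overrides=None):
        merged = dict(self.__dict__)
        merged.update(overrides or {})
        return Config(**merged)

config = Config(train_images="data/NCAER-S/images",
                train_crop="data/NCAER-S/crop")
config = config.copy({'val_images': 'path_to_validation_images'})
print(config.train_images, config.val_images)
```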
Start training:
```
# Train a new model from scratch
python run.py -m train

# Continue training a model that you had trained earlier
python run.py -m train --resume=path/to/trained_weights

# Resume the last checkpoint model
python run.py -m train --resume=last
```
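Under the hood, `--resume=last` presumably resolves to the newest saved checkpoint. A sketch of that logic with standard TensorFlow checkpointing (the directory name, placeholder model, and variable names are assumptions, not from the repo):

```python
# Sketch: resolve "--resume=last" to the newest checkpoint, if any.
import tensorflow as tf

checkpoint_dir = "checkpoints/glamor-net"  # assumed location
model = tf.keras.Sequential([tf.keras.layers.Dense(7)])  # placeholder model
optimizer = tf.keras.optimizers.Adam()
ckpt = tf.train.Checkpoint(model=model, optimizer=optimizer)

resume = "last"
path = tf.train.latest_checkpoint(checkpoint_dir) if resume == "last" else resume
if path:
    ckpt.restore(path)
    print("Resumed from", path)
else:
    print("No checkpoint found; training from scratch")
```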
### Prediction
We support prediction on a single image or on all images in a directory by running this command:
```
# Predict on a single image
python predict.py --trained_weights=weights/glamor-net/Model --input=test_images/1.jpg --output=path/to/out/directory

# Predict on images in a directory
python predict.py --trained_weights=weights/glamor-net/Model --input=test_images/ --output=out/
```
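The single-image and directory modes differ only in how inputs are gathered; a sketch of that dispatch (the helper name and extension list are illustrative, not predict.py's actual code):

```python
# Collect input image paths for either a single file or a directory.
import os

def gather_inputs(input_path):
    exts = (".jpg", ".jpeg", ".png")
    if os.path.isdir(input_path):
        return [os.path.join(input_path, name)
                for name in sorted(os.listdir(input_path))
                if name.lower().endswith(exts)]
    return [input_path]

print(gather_inputs("test_images/"))       # directory -> list of image paths
print(gather_inputs("test_images/1.jpg"))  # single file -> one-element list
```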