Apache MXNet Gluon implementation of the state-of-the-art FER+ paper for facial emotion recognition - https://arxiv.org/abs/1608.01041

This repository demonstrates the implementation and deployment of a facial expression recognition deep learning model using Apache MXNet, based on the FER+ paper by Barsoum et al.

The repository consists of the following resources:

  1. Scripts for data pre-processing as suggested in the paper.
  2. Notebook for model building and training, using Apache MXNet.
  3. Model deployment for online inference, using MXNet Model Server.

Check out a working FER+ web application demo, powered by AWS.

Before you start

Pre-requisites

```bash
# Install MXNet
pip install mxnet-mkl   # for CPU machines
pip install mxnet-cu92  # for GPU machines with CUDA 9.2

# Other dependencies
pip install Pillow      # for image processing
pip install graphviz    # for MXNet network visualization
pip install matplotlib  # for plotting training graphs

# Install MXNet Model Server and required dependencies for inference model serving
pip install mxnet-model-server
pip install scikit-image
pip install opencv-python
```

Note: please refer to the MXNet installation guide for more detailed installation instructions.

Data preparation

Clone this repository

```bash
git clone https://github.com/TalkAI/facial-emotion-recognition-gluon
cd facial-emotion-recognition-gluon
```

Download the FER dataset fer2013.tar.gz from the FER Kaggle competition.

Note: You cannot download the dataset with wget. You will have to register on Kaggle and log in to download the dataset.

Once downloaded:

  • Extract the tar file - fer2013.tar.gz
  • Copy the fer2013.csv dataset to the facial-emotion-recognition-gluon/data directory.
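For orientation, fer2013.csv stores each 48×48 grayscale image as a space-separated string of pixel values in a `pixels` column. A minimal sketch of decoding one such row into an image array (the helper name `decode_pixels` is ours, not from this repository):

```python
import numpy as np

def decode_pixels(pixel_string, size=48):
    """Convert the space-separated pixel string used by fer2013.csv
    into a (size, size) uint8 grayscale image array."""
    values = [int(p) for p in pixel_string.split()]
    return np.array(values, dtype=np.uint8).reshape(size, size)

# Tiny 2x2 "image" for illustration; real rows have 48*48 = 2304 values.
sample = "0 64 128 255"
img = decode_pixels(sample, size=2)
print(img.shape)   # (2, 2)
print(img[1, 1])   # 255
```

The same decoding works row by row over the whole CSV when building the image files.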

We will now generate the FER+ train/test/validation dataset from the downloaded FER data by executing the command below:

```bash
# In this step, we read the raw FER data, correct the labels using FER+ labels, and save them as png images.
# -d       : path to the "data" folder in this repository; it has folders for train/test/validation data with corrected labels.
# -fer     : path to the fer2013.csv dataset that you extracted.
# -ferplus : path to the fer2013new.csv file that comes with this repository in the data folder.
python utils/prepare_data.py -d ./data -fer ./data/fer2013.csv -ferplus ./data/fer2013new.csv
```
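The label correction works because fer2013new.csv records, for each image, the vote counts of ten crowd annotators over the eight FER+ emotion classes plus "unknown" and "not a face". A hedged sketch of the majority-vote idea from the FER+ paper (the actual utils/prepare_data.py may differ in thresholds and details):

```python
# Emotion order follows the FER+ dataset release.
EMOTIONS = ["neutral", "happiness", "surprise", "sadness",
            "anger", "disgust", "fear", "contempt"]

def majority_label(votes, unknown, not_face, threshold=0.5):
    """Return the winning emotion name, or None if the image should be
    dropped: no emotion reaches `threshold` of all votes, or at least as
    many annotators said unknown / not-a-face as picked the top emotion."""
    total = sum(votes) + unknown + not_face
    best = max(range(len(votes)), key=lambda i: votes[i])
    if votes[best] < threshold * total:
        return None
    if unknown >= votes[best] or not_face >= votes[best]:
        return None
    return EMOTIONS[best]

print(majority_label([1, 7, 0, 0, 1, 0, 0, 1], unknown=0, not_face=0))  # happiness
print(majority_label([2, 2, 2, 2, 1, 0, 0, 1], unknown=0, not_face=0))  # None (no clear majority)
```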

Lastly, we will process the FER+ train/test/validation dataset:

```bash
# This script reads the FER+ dataset (png images) we prepared in the previous step,
# applies the transformations suggested in the FER+ paper, and saves the processed
# images as NumPy binaries (npy files).
# -d : path to the data folder, where we created data in the previous step.
python utils/process_data.py -d ./data
```
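The resulting .npy files load directly with NumPy for training. A minimal round-trip sketch, using a stand-in array and a hypothetical file name (the real names produced by utils/process_data.py may differ); pixel values are scaled to [0, 1] floats, a common input range for training:

```python
import os
import tempfile
import numpy as np

# Stand-in for one of the processed .npy files: 4 images, 1 channel, 48x48.
path = os.path.join(tempfile.mkdtemp(), "train_images.npy")
np.save(path, np.random.randint(0, 256, size=(4, 1, 48, 48), dtype=np.uint8))

# Load and scale uint8 pixels to [0, 1] float32 before feeding the network.
images = np.load(path).astype(np.float32) / 255.0
print(images.shape)  # (4, 1, 48, 48)
```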

Deep Learning Basics You Will Need

Go over this notebook, which provides a basic overview and intuition for the various deep learning concepts and techniques used in building and training the model.

Model Building, Training and Deployment

Head over to the FER+ tutorial to go over the process of building, training, and deploying the FER+ model. It is best run as a live Jupyter Notebook, and you will need a GPU machine to complete training in a reasonable time.
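One idea worth understanding before the tutorial: the FER+ paper trains against the full distribution of annotator votes rather than a single hard label. A framework-free NumPy sketch of that soft-label cross-entropy (this illustrates the concept only; the notebook itself uses MXNet Gluon):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def soft_cross_entropy(logits, target_dist):
    """Cross-entropy against a full label distribution, as in the
    FER+ paper's probabilistic-label training scheme."""
    log_p = np.log(softmax(logits))
    return -(target_dist * log_p).sum(axis=-1).mean()

# Ten annotators: 7 voted "happiness", 3 voted "neutral" -> soft target.
target = np.array([[0.3, 0.7, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
confident = np.array([[1.0, 3.0, -2.0, -2.0, -2.0, -2.0, -2.0, -2.0]])
uniform = np.zeros((1, 8))

# Predictions that match the vote distribution incur a lower loss.
print(soft_cross_entropy(confident, target) < soft_cross_entropy(uniform, target))  # True
```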

Advanced Stuff - Left To The Reader

Below are a few areas of improvement and next steps for the advanced reader. Contributions back to the repository are welcome!

  • Hyper-parameter optimization - In this implementation, hyper-parameters (e.g., the learning rate schedule for SGD) have not been optimized for the best possible result.
  • Implement a multi-GPU version of model training. The script provides a single-GPU implementation only. Time per epoch on a single GPU is around 1 minute, i.e., approximately 50 minutes for full model training (the model converges at around the 50th epoch).

Contributors

Citation / Credits

Resources
