usef-kh/fer

Code for the paper "Facial Emotion Recognition: State of the Art Performance on FER2013"


This work is published on arXiv.

Our final model checkpoint can be found here.

Overview

In this work, we achieve the highest single-network classification accuracy on FER2013. We adopt the VGGNet architecture, rigorously fine-tune its hyperparameters, and experiment with various optimization methods. To the best of our knowledge, our model achieves a state-of-the-art single-network accuracy of 73.28% on FER2013 without using extra training data.

Architecture

(Figure: network architecture)
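For reference, below is a minimal PyTorch sketch of a VGG-style classifier for FER2013's 48x48 grayscale inputs and 7 emotion classes. The layer counts and widths here are illustrative assumptions, not the exact configuration from the paper; see the architecture figure and the repo code for the real model.

```python
# VGG-style sketch for FER2013 (48x48 grayscale, 7 emotion classes).
# Layer counts/widths are illustrative assumptions, not the paper's exact model.
import torch
import torch.nn as nn

class VggSketch(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        def block(cin, cout, n):
            # n conv3x3 layers followed by 2x2 max-pooling, as in VGG
            layers = []
            for i in range(n):
                layers += [nn.Conv2d(cin if i == 0 else cout, cout, 3, padding=1),
                           nn.BatchNorm2d(cout),
                           nn.ReLU(inplace=True)]
            layers.append(nn.MaxPool2d(2))
            return layers
        self.features = nn.Sequential(
            *block(1, 64, 2), *block(64, 128, 2),
            *block(128, 256, 3), *block(256, 512, 3))  # 48 -> 24 -> 12 -> 6 -> 3
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Dropout(0.5),
            nn.Linear(512 * 3 * 3, 1024), nn.ReLU(inplace=True),
            nn.Dropout(0.5), nn.Linear(1024, num_classes))

    def forward(self, x):
        return self.classifier(self.features(x))
```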

Tuning

During tuning, we experiment with several different optimizers and learning rate schedulers, and run a grid search over all parameters. Some of our results are shown below.

(Figures: tuning results for different optimizers and schedulers)
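By way of illustration, here is a minimal sketch of such a grid search in PyTorch. The candidate optimizers, schedulers, learning rates, and the train_and_evaluate helper are all assumptions for illustration, not the repo's actual tuning setup.

```python
# Illustrative grid search over optimizers and LR schedulers (PyTorch).
# `train_and_evaluate` is a hypothetical helper, not part of this repo.
import itertools
import torch

optimizers = {
    "sgd":  lambda p, lr: torch.optim.SGD(p, lr=lr, momentum=0.9, nesterov=True),
    "adam": lambda p, lr: torch.optim.Adam(p, lr=lr),
}
schedulers = {
    "step":   lambda o: torch.optim.lr_scheduler.StepLR(o, step_size=20, gamma=0.5),
    "cosine": lambda o: torch.optim.lr_scheduler.CosineAnnealingLR(o, T_max=100),
}
learning_rates = [0.1, 0.01, 0.001]

best = (None, 0.0)
for (opt_name, make_opt), (sch_name, make_sch), lr in itertools.product(
        optimizers.items(), schedulers.items(), learning_rates):
    model = VggSketch()                       # sketch model from above (any nn.Module works)
    optimizer = make_opt(model.parameters(), lr)
    scheduler = make_sch(optimizer)
    acc = train_and_evaluate(model, optimizer, scheduler)  # hypothetical helper
    if acc > best[1]:
        best = ((opt_name, sch_name, lr), acc)
print("best config:", best)
```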

Confusion Matrix

Saliency Maps

Visualizing the information captured inside deep neural networks helps explain how they differentiate between facial emotions. A saliency map is a common technique for visualizing deep neural networks. By propagating the loss back to the pixel values, a saliency map highlights the pixels that have the most impact on the loss. It shows the visual features the CNN can capture from the input, allowing us to better understand how much each region of the original image contributes to the final classification decision.

(Figure: saliency maps)
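For illustration, here is a minimal sketch of this procedure in PyTorch (vanilla gradients); the model and input preprocessing are assumptions, not the repo's exact visualization code.

```python
# Minimal saliency-map sketch using vanilla gradients (PyTorch).
# `model` is an assumed trained classifier; `image` is a 1x1x48x48 float tensor.
import torch
import torch.nn.functional as F

def saliency_map(model, image, target_class):
    model.eval()
    image = image.clone().requires_grad_(True)    # track gradients w.r.t. pixels
    logits = model(image)
    loss = F.cross_entropy(logits, torch.tensor([target_class]))
    loss.backward()                               # propagate the loss back to the pixels
    # Per-pixel importance: absolute gradient, reduced over the channel dim
    return image.grad.abs().max(dim=1).values.squeeze(0)
```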

Installation

To use this repo, create a conda environment using environment.yml or requirements.txt:

```
# from environment.yml (recommended)
conda env create -f environment.yml

# from requirements.txt
conda create --name <env> --file requirements.txt
```

Download the official FER2013 dataset and place it in the outermost folder with the following structure: datasets/fer2013/fer2013.csv
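For reference, a minimal sketch for loading the CSV; this assumes the standard FER2013 layout with 'emotion', 'pixels', and 'Usage' columns, so verify against your copy of the file.

```python
# Sketch: load fer2013.csv into numpy arrays. The standard CSV has columns
# 'emotion' (0-6), 'pixels' (2304 space-separated grayscale values),
# and 'Usage' (Training / PublicTest / PrivateTest).
import numpy as np
import pandas as pd

df = pd.read_csv("datasets/fer2013/fer2013.csv")
train = df[df["Usage"] == "Training"]
images = np.stack([np.asarray(p.split(), dtype=np.uint8).reshape(48, 48)
                   for p in train["pixels"]])
labels = train["emotion"].to_numpy()
```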

Usage

To train your own version of our network, run the following:

```
python train.py network=vgg name=my_vgg
```

To change the default parameters, you may also add arguments such as bs=128 or lr=0.1. For more details, please refer to utils/hparams.py.
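As a rough illustration of how key=value overrides like these can be handled, here is a minimal, hypothetical parsing sketch; the real defaults and parsing logic live in utils/hparams.py, and the default values shown are assumptions.

```python
# Hypothetical sketch of key=value CLI overrides (e.g. `bs=128 lr=0.1`).
# The actual defaults and parsing live in utils/hparams.py.
import sys

defaults = {"network": "vgg", "name": "run", "bs": 64, "lr": 0.01}  # assumed values
hps = dict(defaults)
for arg in sys.argv[1:]:
    key, value = arg.split("=", 1)
    if key not in hps:
        raise ValueError(f"unknown hyperparameter: {key}")
    hps[key] = type(defaults[key])(value)   # cast to the default's type
print(hps)
```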
