Recognize facial emotions in 7 categories: angry, disgust, fear, happy, sad, surprise, neutral.
The dataset comes from a competition and is quite similar to the FER2013 dataset.
Face detection uses Google's MediaPipe API. Emotion recognition uses a 15-layer VGG-style network (8 convolutional, 4 pooling, and 3 fully connected layers).
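As a rough reference, here is a minimal PyTorch sketch of such a 15-layer VGG-style network. The 48×48 grayscale input (as in FER2013) and the channel widths are assumptions, not the exact configuration used in the notebooks.

```python
import torch.nn as nn

# Sketch of a 15-layer VGG-style network: 8 conv + 4 pooling + 3 FC layers.
# Input shape (1, 48, 48) and layer widths are assumptions.
class EmotionVGG(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            # block 1: 2 convs + pool, 48 -> 24
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            # block 2: 24 -> 12
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            # block 3: 12 -> 6
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            # block 4: 6 -> 3
            nn.Conv2d(256, 512, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(512, 512, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 3 * 3, 1024), nn.ReLU(inplace=True), nn.Dropout(0.5),
            nn.Linear(1024, 256), nn.ReLU(inplace=True), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```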
Pipeline
├── face detection: mediapipe
└── emotion recognition: vggnet
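A minimal sketch of the face detection step, assuming the classic `mp.solutions.face_detection` API and a 48×48 grayscale crop as the emotion model's input; the actual preprocessing in the notebooks may differ.

```python
import cv2
import mediapipe as mp

mp_fd = mp.solutions.face_detection

def crop_face(image_bgr):
    """Detect the first face with MediaPipe and return a 48x48 grayscale crop."""
    with mp_fd.FaceDetection(model_selection=0, min_detection_confidence=0.5) as fd:
        results = fd.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not results.detections:
        return None
    h, w = image_bgr.shape[:2]
    # Relative bounding box -> pixel coordinates, clipped to the image.
    box = results.detections[0].location_data.relative_bounding_box
    x1, y1 = max(int(box.xmin * w), 0), max(int(box.ymin * h), 0)
    x2 = min(int((box.xmin + box.width) * w), w)
    y2 = min(int((box.ymin + box.height) * h), h)
    face = cv2.cvtColor(image_bgr[y1:y2, x1:x2], cv2.COLOR_BGR2GRAY)
    return cv2.resize(face, (48, 48))
```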
Class imbalance in the training data is handled by upsampling with SMOTE.
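A sketch of how the SMOTE upsampling could look with `imblearn`, assuming flattened 48×48 face crops; the array names and preprocessing are assumptions.

```python
import numpy as np
from imblearn.over_sampling import SMOTE

def upsample(images: np.ndarray, labels: np.ndarray):
    """Balance the 7 emotion classes with SMOTE.

    SMOTE expects 2-D feature vectors, so the face crops are flattened
    before resampling and reshaped back afterwards.
    """
    n, h, w = images.shape
    flat = images.reshape(n, h * w)
    flat_up, labels_up = SMOTE(random_state=42).fit_resample(flat, labels)
    return flat_up.reshape(-1, h, w), labels_up
```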
If you have a dataset, you can train with training.ipynb.
If you want to run inference directly, use inference.ipynb.
The weights I trained are located in saved_models. The default setting is a voting classifier that combines a model trained on the original data and a model trained on the upsampled data.
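A sketch of the soft-voting step, reusing the `EmotionVGG` class from the sketch above; the checkpoint file names are placeholders, not the actual files in saved_models, and the checkpoints are assumed to be state dicts.

```python
import torch

LABELS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def predict(face_48x48, model_paths=("saved_models/original.pt", "saved_models/smote.pt")):
    """Average the softmax outputs of both models and return the voted label."""
    x = torch.from_numpy(face_48x48).float().div(255.0).view(1, 1, 48, 48)
    probs = torch.zeros(1, len(LABELS))
    for path in model_paths:
        model = EmotionVGG()
        model.load_state_dict(torch.load(path, map_location="cpu"))
        model.eval()
        with torch.no_grad():
            probs += torch.softmax(model(x), dim=1)
    return LABELS[int(probs.argmax(dim=1))]
```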