MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation
Lightweight and Interpretable ML Model for Speech Emotion Recognition and Ambiguity Resolution (trained on IEMOCAP dataset)
A collection of datasets for the purpose of emotion recognition/detection in speech.
The code for our IEEE ACCESS (2020) paper Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion.
The code for our INTERSPEECH 2020 paper - Jointly Fine-Tuning "BERT-like" Self Supervised Models to Improve Multimodal Speech Emotion Recognition
Human emotion understanding using a multimodal dataset.
Official implementation of the paper "MSAF: Multimodal Split Attention Fusion"
😎 Awesome lists about Speech Emotion Recognition
The repo contains an audio emotion detection model, a facial emotion detection model, and a combined model that uses both to predict emotions from video.
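For context, a common way to combine two such models is late fusion of their class probabilities. The sketch below is an illustration under assumptions (a hypothetical four-emotion label set and placeholder model outputs), not the repository's actual code.

```python
# Minimal late-fusion sketch: weighted average of per-modality emotion
# probabilities (label set and weights are illustrative assumptions).
import numpy as np

EMOTIONS = ["angry", "happy", "neutral", "sad"]  # hypothetical label set

def late_fusion(audio_probs: np.ndarray, face_probs: np.ndarray,
                audio_weight: float = 0.5) -> str:
    """Blend audio- and face-model probability vectors and pick the top label."""
    fused = audio_weight * audio_probs + (1.0 - audio_weight) * face_probs
    return EMOTIONS[int(np.argmax(fused))]

# Placeholder model outputs for one video clip.
print(late_fusion(np.array([0.1, 0.6, 0.2, 0.1]),
                  np.array([0.2, 0.5, 0.2, 0.1])))  # -> "happy"
```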
A survey of deep multimodal emotion recognition.
This repository provides the code for MMA-DFER, a multimodal (audio-visual) emotion recognition method. It is the official implementation of the paper "MMA-DFER: MultiModal Adaptation of unimodal models for Dynamic Facial Expression Recognition in-the-wild".
SERVER: Multi-modal Speech Emotion Recognition using Transformer-based and Vision-based Embeddings
A Tensorflow implementation of Speech Emotion Recognition using Audio signals and Text Data
This API uses a pre-trained model for emotion recognition from audio files: it accepts an audio file, processes it with the model, and returns the predicted emotion along with a confidence score. It is built on the FastAPI framework for easy development and deployment.
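As a rough illustration of that request/response flow, here is a minimal FastAPI endpoint; the route name, label set, and predict() stub are assumptions, not the API's actual code.

```python
# Hypothetical FastAPI endpoint: accept an uploaded audio file and return
# the predicted emotion with a confidence score (the model is stubbed out).
from fastapi import FastAPI, File, UploadFile

app = FastAPI()
LABELS = ["angry", "happy", "neutral", "sad"]  # placeholder label set

def predict(audio_bytes: bytes) -> tuple[str, float]:
    """Stand-in for the pre-trained model; returns (label, confidence)."""
    return LABELS[2], 0.87  # dummy output

@app.post("/predict")
async def predict_emotion(file: UploadFile = File(...)):
    audio_bytes = await file.read()
    label, confidence = predict(audio_bytes)
    return {"emotion": label, "confidence": confidence}
```

Once the stub is replaced with a real model, the app can be served locally with uvicorn and queried by POSTing an audio file to the endpoint.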
A Unimodal Valence-Arousal Driven Contrastive Learning Framework for Multimodal Multi-Label Emotion Recognition (ACM MM 2024 oral)
An audio-text multimodal emotion recognition model that is robust to missing data.
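One simple way to obtain that kind of robustness, sketched here under assumptions rather than as the repository's method, is to fuse only whichever modality embeddings are actually present:

```python
# Minimal missing-modality fusion sketch: average whatever embeddings are
# available instead of requiring both audio and text.
from typing import Optional
import numpy as np

def fuse(audio_emb: Optional[np.ndarray],
         text_emb: Optional[np.ndarray]) -> np.ndarray:
    present = [e for e in (audio_emb, text_emb) if e is not None]
    if not present:
        raise ValueError("at least one modality is required")
    return np.mean(present, axis=0)  # mean of the available embeddings

print(fuse(np.ones(4), None))         # audio only
print(fuse(np.ones(4), np.zeros(4)))  # both modalities
```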
Emotion recognition from Speech & Text using different heterogeneous ensemble learning methods
Published in Springer Multimedia Tools and Applications Journal.
All experiments were done to classify multimodal data.
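For readers unfamiliar with the term, a heterogeneous ensemble simply combines structurally different classifiers. The scikit-learn sketch below, with placeholder features and labels, illustrates the idea rather than the paper's exact setup.

```python
# Illustrative heterogeneous soft-voting ensemble over placeholder
# speech/text feature vectors (data and estimator choices are assumptions).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X = np.random.rand(100, 20)            # placeholder fused speech+text features
y = np.random.randint(0, 4, size=100)  # placeholder emotion labels

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("svm", SVC(probability=True)),
        ("rf", RandomForestClassifier(n_estimators=100)),
    ],
    voting="soft",  # average predicted probabilities across the models
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```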