IRIT-MFU Multi-modal systems for emotion classification for Odyssey 2024 challenge
Abstract
In this paper, we present our contribution to emotion classification in speech as part of our participation in the Odyssey 2024 challenge. We propose a hybrid system that takes advantage of both audio signal information and semantic information obtained from automatic transcripts. We propose several models for each modality and three different fusion methods for the classification task. The results show that multimodality significantly improves performance and allows us to surpass the challenge baseline, an audio-only system, raising the macro F1-score from 0.311 to 0.337.
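For reference, the macro F1-score cited above is the unweighted mean of per-class F1 scores, so each emotion class contributes equally regardless of its frequency. A minimal statement of the metric (the notation $P_c$, $R_c$, $C$ is ours, introduced here for illustration):

$$
\mathrm{F1}_{\text{macro}} = \frac{1}{C}\sum_{c=1}^{C}\frac{2\,P_c\,R_c}{P_c + R_c}
$$

where $P_c$ and $R_c$ are the precision and recall for class $c$, and $C$ is the number of emotion classes.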