Automatic Annotation and Segmentation of Sign Language Videos: Base-level Features and Lexical Signs Classification
Conference paper, Year: 2021

Abstract

Automatic recognition of Sign Languages is the main focus of most work in the field, which explains the growing demand for annotated data to train the dedicated models. In this paper, we present a semi-automatic annotation system for Sign Languages. Such automation will not only help to create training data, but will also reduce the processing time and the subjectivity of the manual annotations produced by linguists when studying sign language. The system analyses hand shapes, hand speed variations, and face landmarks to annotate base-level features and to separate the different signs. In a second stage, a probabilistic model classifies the signs into two types: lexical (i.e. present in a dictionary) or iconic (illustrative). The results show that our system is partially capable of automatically annotating the video sequence, with an F1 score of 0.68 for lexical sign annotation and an error of 3.8 frames for sign segmentation. Expert validation of the annotations is still needed.
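
To give an idea of how hand-speed variations can drive sign segmentation, the minimal sketch below proposes sign boundaries at frames where the dominant hand is nearly still. It assumes per-frame wrist coordinates from any pose estimator; the smoothing window, minimum spacing, and stillness threshold are illustrative assumptions, not the parameters or the exact method used in the paper.

```python
import numpy as np
from scipy.signal import find_peaks

def segment_by_hand_speed(wrist_xy, smooth_win=5, min_gap_frames=8):
    """Propose sign boundaries at local minima of the dominant-hand speed.

    wrist_xy: (T, 2) array of wrist positions per frame (any pose estimator).
    Returns candidate boundary frame indices.
    """
    # Per-frame speed (pixels/frame), padded so the signal keeps length T.
    speed = np.linalg.norm(np.diff(wrist_xy, axis=0), axis=1)
    speed = np.concatenate([[speed[0]], speed])

    # Moving-average smoothing to suppress keypoint-detector jitter.
    kernel = np.ones(smooth_win) / smooth_win
    speed = np.convolve(speed, kernel, mode="same")

    # Local minima of speed = peaks of the negated signal; a minimum spacing
    # avoids over-segmenting on very short pauses.
    minima, _ = find_peaks(-speed, distance=min_gap_frames)

    # Keep only minima where the hand is nearly still (illustrative threshold).
    still = speed[minima] < 0.2 * np.median(speed)
    return minima[still]
```

The segments delimited by such boundaries would then provide the units whose hand-shape, motion, and face-landmark features feed the lexical versus iconic classification stage.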

Dates and versions

hal-03375858, version 1 (13-10-2021)

Cite

Hussein Chaaban, Michèle Gouiffès, Annelies Braffort. Automatic Annotation and Segmentation of Sign Language Videos: Base-level Features and Lexical Signs Classification. 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2021), Feb 2021, Online streaming, France. pp.484-491, ⟨10.5220/0010247104840491⟩. ⟨hal-03375858⟩
