SIGN LANGUAGE RECOGNITION
About
The Task: We are organizing a challenge on isolated sign language recognition from signer-independent, non-controlled RGB-D data, covering a large number of sign categories (>200).

The Dataset: A new dataset, AUTSL, is used for this challenge. It contains 226 sign labels and 36,302 isolated sign video samples performed by 43 different signers in total. The dataset has been divided into three sub-datasets so that models can be evaluated in a signer-independent manner.
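The signer-independent protocol above means that no signer who appears in the training data also appears in the evaluation data. The sketch below illustrates the idea with a synthetic two-way split; the function name and record layout are illustrative assumptions, and AUTSL's actual three-way split is fixed by the organizers.

```python
import random

def signer_independent_split(samples, train_frac=0.8, seed=0):
    """Split (video_id, signer_id, label) records so that no signer
    appears in more than one subset. A sketch of the signer-independent
    protocol, not the official AUTSL partition."""
    signers = sorted({s for _, s, _ in samples})
    rng = random.Random(seed)
    rng.shuffle(signers)
    cut = int(len(signers) * train_frac)
    train_signers = set(signers[:cut])
    train = [r for r in samples if r[1] in train_signers]
    held_out = [r for r in samples if r[1] not in train_signers]
    return train, held_out

# Tiny synthetic example: 4 signers, 2 samples each.
samples = [(f"v{i}", f"signer{i % 4}", i % 2) for i in range(8)]
train, held_out = signer_independent_split(samples)

# No signer overlaps between the two subsets.
assert {s for _, s, _ in train}.isdisjoint({s for _, s, _ in held_out})
```

Splitting by signer rather than by sample is what makes the evaluation signer-independent: a model cannot score well simply by memorizing how specific signers move.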
Call for Papers
The Tracks/Phases: The challenge is divided into two competition tracks: RGB and multimodal RGB-D SLR. Participants are free to join either track. The two modalities have been temporally and spatially aligned.

Each track consists of two phases: a development phase and a test phase. In the development phase, public training data will be released, and participants will submit their predictions on a validation set. In the test (final) phase, participants will submit their results on the test data, which will be released just a few days before the end of the challenge. Final rankings will be computed on the test data.

It is important to note that this competition involves the submission of results, not code. However, after the challenge ends, participants will be required to share their code and trained models (with detailed instructions) so that the organizers can reproduce the results submitted in the test phase, in a "code verification stage". At the end of the challenge, top-ranked methods that pass the code verification stage will be considered valid submissions and will be eligible for any prize that may be offered.
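A results-only submission of the kind described above typically amounts to uploading a file with one predicted label per test video. The sketch below writes such a file as `sample_id,predicted_label` CSV rows; this format and the sample identifiers are hypothetical assumptions, and the actual submission format is defined by the challenge platform.

```python
import csv
import io

def write_predictions(pred, out):
    """Write one 'sample_id,predicted_label' row per test video.
    Hypothetical format for illustration; check the platform's
    submission instructions for the real one."""
    writer = csv.writer(out)
    for sample_id, label in sorted(pred.items()):
        writer.writerow([sample_id, label])

# Example with two made-up test samples.
buf = io.StringIO()
write_predictions({"signer1_sample0042": 17, "signer1_sample0043": 3}, buf)
```

Sorting by sample id gives a deterministic file, which makes it easier to diff successive submissions during the development phase.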
Credits and Sources
[1] ChaLearn LAP SLR Challenge: ChaLearn Looking at People RGB and RGB-D Sign Language Recognition Challenge