Track description

We are also organizing a track complementary to Track 2 on continuous gesture recognition from RGB-D data. The goal of this track is to develop methods that perform simultaneous segmentation and recognition of gesture categories from continuous RGB-D video. The newly created ChaLearn LAP RGB-D Continuous Gesture Dataset (ConGD) is used for this track (sample images taken from this dataset are shown in Figure 1). The dataset comprises a total of 47,933 gestures in 22,535 continuous RGB-D videos (about 4 GB).

Each RGB-D video depicts one or more gestures, and there are 249 gesture categories performed by 21 different individuals. The performance of methods will be judged by the Jaccard index.
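As a rough illustration only (the official evaluation script and protocol may differ in details such as per-video averaging), the Jaccard index for a single gesture instance can be computed as the intersection over union of the ground-truth and predicted frame sets:

```python
def jaccard_index(gt_frames, pred_frames):
    """Jaccard index (intersection over union) of two frame-index sets."""
    gt, pred = set(gt_frames), set(pred_frames)
    union = gt | pred
    if not union:  # both empty: define the score as 0
        return 0.0
    return len(gt & pred) / len(union)

# Example: ground truth spans frames 10..19, prediction spans 15..24.
# Overlap = 5 frames, union = 15 frames, so the Jaccard index is 1/3.
print(jaccard_index(range(10, 20), range(15, 25)))
```

A method's overall score would then aggregate such per-instance values across all gestures and videos.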


Figure 1: Sample images taken from the depth video for a set of the considered gestures.

