Track description


The focus of this track is action/interaction recognition on RGB data. For training, a labeled database of 235 action performances from 17 users is provided, covering 11 action categories: Wave, Point, Clap, Crouch, Jump, Walk, Run, Shake Hands, Hug, Kiss, and Fight.
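The label files described below refer to these categories by numeric actionID. As a convenience, here is a minimal lookup sketch; note that the identifier ordering is our assumption (that IDs 1-11 follow the order listed above) and should be verified against the official action table:

    # Hypothetical actionID -> category name lookup.
    # ASSUMPTION: identifiers 1-11 follow the order listed above;
    # verify against the official action table before using.
    ACTION_NAMES = {
        1: "Wave", 2: "Point", 3: "Clap", 4: "Crouch",
        5: "Jump", 6: "Walk", 7: "Run", 8: "Shake Hands",
        9: "Hug", 10: "Kiss", 11: "Fight",
    }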

Both the competition server and the resources for participating in this track can be found here: https://www.codalab.org/ChalearnLAP_Action


Track stages:

  • Development phase: Build a learning system for human action recognition from the annotated training performances. Practice with the development data (a database of 150 manually labelled action performances is available) and submit predictions online on the validation data (90 labelled action performances) to get immediate feedback on the leaderboard. Recommended: towards the end of the development phase, submit your code for verification purposes.

  • Final evaluation phase: Make predictions on the new final evaluation data (95 performances), revealed at the end of the development phase. Participants will have a few days to train their systems and upload their predictions.

We highly recommend that participants take advantage of this opportunity and regularly upload updated versions of their code during the development period. The last code submission before the deadline will be used for verification.

Data download and description:

Access to the data is password protected. Register and accept the terms and conditions on the CodaLab competition server to obtain the authentication information.

The data is organized as a set of sequences, each uniquely identified by a string SeqXX, where XX is a two-digit number. Each sequence is provided as a single ZIP file named with its identifier (e.g. SeqXX.zip).

Each sample ZIP file contains the following files:

  • SeqXX_color.mp4: Video with the RGB data.
  • SeqXX_data.csv: CSV file with the number of frames in the video.
  • SeqXX_labels: CSV file with the ground truth for the sample (only for labelled data sets). Each line corresponds to an action instance and provides the actionID, the start frame, and the end frame of the instance. The action identifiers are the ones given in the action list at the beginning of this page.
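As a starting point for loading the data, here is a minimal sketch that unpacks one sequence and parses its files. It assumes headerless comma-separated files and that the labels file is named exactly SeqXX_labels as listed above; both points should be checked against the actual downloads.

    import csv
    import zipfile
    from pathlib import Path

    def read_sequence(zip_path):
        """Unpack one SeqXX.zip and parse its contents.

        Assumes headerless comma-separated files; verify against
        the actual downloads before relying on this sketch.
        """
        seq_id = Path(zip_path).stem          # e.g. "Seq01"
        out_dir = Path(seq_id)
        with zipfile.ZipFile(zip_path) as zf:
            zf.extractall(out_dir)

        # SeqXX_data.csv holds the number of frames in the video.
        with open(out_dir / f"{seq_id}_data.csv", newline="") as f:
            num_frames = int(next(csv.reader(f))[0])

        # SeqXX_labels: one line per action instance, giving
        # actionID, start frame, end frame.
        labels = []
        labels_path = out_dir / f"{seq_id}_labels"
        if labels_path.exists():              # only labelled sets include it
            with open(labels_path, newline="") as f:
                for row in csv.reader(f):
                    action_id, start, end = (int(v) for v in row[:3])
                    labels.append((action_id, start, end))

        video_path = out_dir / f"{seq_id}_color.mp4"
        return video_path, num_frames, labels

The returned video path can then be read frame by frame, for example with OpenCV's cv2.VideoCapture.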
