2017 Looking at People CVPR/IJCNN Coopetition - Explainable impressions (quantitative phase)
Track description
This proposed challenge is part of a larger project on speed interviews: https://gesture.chalearn.org/speed-interviews. The overall goal of the project is to help both recruiters and job candidates through automatic recommendations based on multimedia CVs. As a first step, in 2016 we organized two rounds of a challenge on detecting personality traits from short videos: one for the ECCV 2016 conference (May 15, 2016 to July 1, 2016), see here, and one for the ICPR 2016 conference (June 30, 2016 to August 16, 2016), see here. The second round used the same data to evaluate a coopetition setting (a mixture of collaboration and competition) in which participants shared code. Both rounds demonstrated the feasibility of the task (AUC ~ 0.85) and the dominance of deep learning methods. These challenges have been very successful, attracting about 100 participants in total.
We propose for CVPR 2017 a new edition of the challenge with the more ambitious goals to:
- Predict whether candidates are promising enough that the recruiter would want to invite them to an interview (quantitative competition).
- Justify/explain the recommendation made with a TEXT DESCRIPTION, such that a human can understand it (qualitative coopetition).
To be considered for the prizes, participants have to share the fact sheets AND the code used in the competition (documented with instructions/requirements, under an open-source license) before the deadline (Feb 10). To this end, they have to send an email to juliojj (at) gmail (dot) com with the following information:
(1) Subject title: CVPR 2017 Competition;
(2) Source code;
(3) Link to the code (we strongly recommend using a repository such as GitHub);
(4) Codalab username used during the competition;
(5) Fact sheets; and
(6) Contact email.
The template for the fact sheets is available here.
Instructions to submit to the qualitative phase:
- Submissions will be made through Codalab (quantitative phase), which will be open to accept submissions from Feb 16.
- Submissions should keep the same format as the provided sample (link here).
- Each submission bundle must contain the following files (with the same filenames, not in a subdirectory, in a single zip file):
  - metadata.txt (mandatory) => may contain any useful information, such as the team's name, the code's URL, a few lines describing the developed method, etc.
  - prediction.pkl (mandatory) => the same pickle file as submitted in the quantitative phase.
  - description.pkl (mandatory) => a pickle file containing a dictionary {<str>: (<str>, <ndarray>)} that links a clip name (as in prediction.pkl) to a tuple. The first entry is a string: the text description. The second entry is optional and can be None or an <ndarray> corresponding to an image that illustrates the text description. The suggested image size is 355x200 pixels in RGB format. The image is optional for each text description, i.e., some descriptions may have an image and some may not. A minimal sketch of how such a file can be produced is given right after this list.
  - video.mp4 (optional) => a video of a few seconds that describes the method developed by the participants. The video must be encoded with the pixel format 'yuv420p', which is not the default used by ffmpeg. To make sure of this, or to convert to the appropriate format, you can use the following ffmpeg command: ffmpeg -i input.mp4 -pix_fmt yuv420p output.mp4
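The following is a minimal sketch of how description.pkl and the zip bundle could be produced with Python. It is not an official tool: the clip names, texts, and placeholder image are made up for illustration, and the (height, width, channels) array layout for the 355x200 RGB image is an assumption.

    import pickle
    import zipfile

    import numpy as np

    # Hypothetical clip names; in practice they must correspond to the clips in prediction.pkl.
    descriptions = {
        "example_clip_001.mp4": (
            "The candidate speaks clearly and maintains eye contact, which supports the positive recommendation.",
            np.zeros((200, 355, 3), dtype=np.uint8),  # optional illustration, assumed (height, width, RGB) layout
        ),
        "example_clip_002.mp4": (
            "Hesitant speech and low engagement lower the predicted score.",
            None,  # the image is optional; None is allowed
        ),
    }

    with open("description.pkl", "wb") as f:
        pickle.dump(descriptions, f)

    # Put the required files at the top level of a single zip file (no subdirectory).
    with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as bundle:
        for name in ("metadata.txt", "prediction.pkl", "description.pkl"):
            bundle.write(name)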
- When you submit your bundle, Codalab will first check whether the 4 files mentioned above were submitted. It will also check the format of description.pkl for consistency. If these checks pass, you will see the status "Finished". An external evaluation program will load all submissions later and check for inconsistencies (at that point, participants will not be able to receive any feedback). A sample code to generate descriptions is available here. You can download a sample description file here. A minimal local sanity check is also sketched below.
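Because no feedback is given at that later stage, it may be worth running a local sanity check before submitting. The snippet below is only a suggestion, not the official evaluation program; it assumes that prediction.pkl is also a pickled dictionary keyed by clip name.

    import pickle

    import numpy as np

    with open("prediction.pkl", "rb") as f:
        predictions = pickle.load(f)
    with open("description.pkl", "rb") as f:
        descriptions = pickle.load(f)

    assert isinstance(descriptions, dict), "description.pkl must contain a dictionary"

    # Assumption: prediction.pkl is keyed by the same clip names as description.pkl.
    missing = set(predictions) - set(descriptions)
    assert not missing, "clips without a description: %s" % sorted(missing)[:5]

    for clip, value in descriptions.items():
        assert isinstance(clip, str), "keys must be clip names (str)"
        assert isinstance(value, tuple) and len(value) == 2, "values must be (text, image) tuples"
        text, image = value
        assert isinstance(text, str), "the first entry must be the text description"
        assert image is None or isinstance(image, np.ndarray), "the second entry must be None or an ndarray"

    print("description.pkl passed the local checks")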
A template fact sheet for the qualitative phase is available here.