Challenge description


Overview

The task: The challenge will use an extension of the LTD Dataset [1], which consists of thermal footage that spans multiple seasons. For deployment and long-term use of machine-learning algorithms in a surveillance context, it is vital that the algorithm is robust to the concept drift that occurs as the conditions in the outdoor environment change. This challenge aims to spotlight the problem of concept drift in a surveillance context and to highlight the challenges and limitations of existing methods, in order to provide a direction for future research.

The Dataset: We will use an extension of an existing concept drift dataset, which spans 188 days in the period from the 14th of May 2020 to the 30th of April 2021, with a total of 1689 2-minute clips sampled at 1 fps with associated bounding box annotations for 4 classes (Human, Bicycle, Motorcycle, Vehicle). The dataset includes data from all hours of the day in a wide array of weather conditions, overlooking the harborfront of Aalborg, Denmark. It depicts the drastic changes in the appearance of the objects of interest, as well as of the scene, over time in a static surveillance context, with the goal of developing robust algorithms for real-world deployment.


The Tracks: This challenge is split into 3 different tracks associated with thermal object detection. Each track will have the same evaluation criteria/data but will vary in both the amount of data and the time span of the data. The training data is chosen by selecting the coldest day and the surrounding data, as cold environments introduce the least amount of concept drift. Each track aims at evaluating how robust a given detection method is to concept drift, by training on limited data from a specific time period (a day, a week, or the month of February) and evaluating performance across time, by validating and testing on months of unseen data (Jan., Mar., Apr., May, Jun., Jul., Aug. and Sep.).

The Phases: Each track will be composed of two phases, i.e., a development and a test phase. In the development phase, public training data will be released and participants will need to submit their predictions on a validation set. In the test (final) phase, participants will need to submit their results on the test data, which will be released just a few days before the end of the challenge. That is, in the initial development phase only the data from the month of February will have annotations, and the images for validation on the other months will be available. As the challenge progresses into the test phase, the validation annotations will become available together with the test images for the final submission.

Participants will be ranked, at the end of the challenge, using the test data. It is important to note that this competition involves the submission of results (and not code). Therefore, participants will be required to share their code and trained models after the end of the challenge (with detailed instructions) so that the organizers can reproduce the results submitted at the test phase, in a "code verification stage". At the end of the challenge, top-ranked methods that pass the code verification stage will be considered valid submissions and compete for any prize that may be offered.

[1] Nikolov, Ivan Adriyanov, et al. "Seasons in Drift: A Long-Term Thermal Imaging Dataset for Studying Concept Drift." Thirty-fifth Conference on Neural Information Processing Systems. 2021.


Important Dates

The schedule is already available.

Database

Detailed information about the dataset can be found here.

Baseline

The baseline is a YOLOv5 with the default configuration from the Ultralytics repository, including augmentations. In-depth logs and examples for the baseline can be found in the Weights and Biases repository. The baseline is trained with a batch size of 64 for 300 epochs, with an input image size of 384x288, and the best performing model is chosen. For the development phase, the performance is validated on the training set. For the test phase, new baseline models will be submitted (validated on the validation set). Naturally, the labels are converted to the normalized YOLO format ([cls] [cx] [cy] [w] [h]) for both training and evaluation. For submission they are converted back to absolute ([cls] [tl_x] [tl_y] [br_x] [br_y]) corner coordinates. The models were all trained on the same machine with 2x Nvidia RTX 3090 GPUs, with all training conducted as multi-GPU training using the PyTorch distributed training module.
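For reference, a minimal sketch of the coordinate conversion described above, assuming the 384x288 input resolution; the helper name and NumPy-based layout are our own and not part of the official baseline code:

    import numpy as np

    def yolo_to_corners(boxes, img_w=384, img_h=288):
        """Convert normalized YOLO rows [cls, cx, cy, w, h] into absolute
        [cls, tl_x, tl_y, br_x, br_y] pixel coordinates."""
        boxes = np.asarray(boxes, dtype=float)
        cls_id, cx, cy, w, h = boxes.T
        tl_x = (cx - w / 2.0) * img_w
        tl_y = (cy - h / 2.0) * img_h
        br_x = (cx + w / 2.0) * img_w
        br_y = (cy + h / 2.0) * img_h
        return np.stack([cls_id, tl_x, tl_y, br_x, br_y], axis=1)

    # Example: one detection of class 0 centred in the frame.
    print(yolo_to_corners([[0, 0.5, 0.5, 0.25, 0.5]]))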

How to enter the competition

The competition will be run on the CodaLab platform. Register on CodaLab via the following links to get access to the decryption keys for the training/validation/test data (according to our schedule), and submit predictions during the development and test phases of the challenge. Pick a track (or all tracks) to follow and train on the respective training splits. Depending on the track chosen, the training data will vary; however, the validation and test data remain the same across all tracks.
Submit a ".pkl" file to the CodaLab challenge following the format provided in the starting kit and complying with the challenge rules, and your submission will be listed on the leaderboard and ranked.

  • Track 1: Detection at day level (competition link): Train on data from a single predefined day and evaluate concept drift across time. The selected day is the 13th of February 2020, the coldest day in the recorded data; since the relative thermal appearance of objects varies the least in colder environments, this is our starting point.
  • Track 2: Detection at week level (competition link): Train on data from a single predefined week and evaluate concept drift across time. The selected week is the 13th – 20th of February 2020 (i.e., expanding from our starting point).
  • Track 3: Detection at month level (competition link): Train on data from a single predefined month and evaluate concept drift across time. The selected month is the entire month of February.

Participants will need to register through the platform, where they will be able to access the data, submit their predictions on the validation and test data (i.e., development and test phases), and obtain real-time feedback on the leaderboard. The development and test phases will open/close automatically based on the defined schedule.

Starting kit

We provide a submission template (".pkl" file) for each phase (development and test), with the evaluated samples and associated random predictions. Participants are required to make submissions using the defined templates, replacing the random predictions with the ones obtained by their models (a minimal sketch is shown after the list below). Note that the evaluation script will verify the consistency of submitted files and may invalidate the submission in case of any inconsistency.

  • Development phase: Submission template (".pkl" file) can be downloaded here.
  • Test phase: Submission template (".pkl" file) can be downloaded here.

A python script, "data_loader.py", with associated instructions is provided as part of the starting kit.
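A minimal sketch of filling in the template is given below. It assumes a dict-like structure mapping each evaluated sample to its list of [cls] [tl_x] [tl_y] [br_x] [br_y] rows; the exact structure is defined by the starting kit, so always keep the keys and layout of the provided template. The predict() placeholder is hypothetical.

    import pickle

    def predict(sample_id):
        # Placeholder for your detector; it should return the
        # [cls, tl_x, tl_y, br_x, br_y] rows for the given sample.
        return []

    # Load the provided submission template (structure defined by the starting kit).
    with open("predictions.pkl", "rb") as f:
        template = pickle.load(f)

    # Replace the random predictions with your model's outputs,
    # keeping every key of the template untouched.
    for sample_id in template:
        template[sample_id] = predict(sample_id)

    # Save with protocol 4, as required by the submission instructions.
    with open("predictions.pkl", "wb") as f:
        pickle.dump(template, f, protocol=4)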

Warning: the maximum number of submissions per participant at the test stage will be set to 3. Participants are not allowed to create multiple accounts to make additional submissions. The organizers may disqualify suspicious submissions that do not follow this rule.

Making a submission

To submit your predicted results (in each of the phases), you first have to compress your "predictions.pkl" file (please keep the filename as it is) as "the_filename_you_want.zip". To avoid any incompatibility between different python versions, please save your pickle file using protocol = 4. Then,

sign in on Codalab -> go to our challenge webpage (and associated track) on codalab -> go on the "Participate" tab -> "Submit / view results" -> "Submit" -> then select your "the_filename_you_want.zip" file and -> submit.

Warning: the last step ("submit") may take some minutes (e.g., >10min) with status "Running" due to the amount of computation and available Codalab resources (just wait). If everything goes fine, you will see the obtained results on the leaderboard ("Results" tab).

Note, Codalab will keep the last valid submission on the leaderboard. This helps participants receive real-time feedback on the submitted files. Participants are responsible for making the file they believe will rank them in the best position their last valid submission.
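For reference, a minimal sketch of the compression step described above (the archive name is arbitrary; the file inside must keep the name "predictions.pkl"):

    import zipfile

    # Package the pickle for upload on Codalab; any archive name is fine,
    # but the file inside must remain "predictions.pkl".
    with zipfile.ZipFile("the_filename_you_want.zip", "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write("predictions.pkl")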

Evaluation Metric

We follow the COCO evaluation scheme for mAP. The primary metric is mAP across 10 different IoU thresholds (ranging from 0.5 to 0.95 in 0.05 increments). This is calculated for each month in the validation/test set, and the model is then ranked based on a weighted average across months (more distant months have a larger weight, as more concept drift is present). The evaluation is performed using the official COCO evaluation tools. A detailed mAP explanation can be found here. The official repo can be found here.
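For orientation, a minimal sketch of this scheme using pycocotools, assuming per-month ground-truth and detection files in COCO format; the file names and the month weights below are placeholders, not the official ones used by the evaluation server:

    from pycocotools.coco import COCO
    from pycocotools.cocoeval import COCOeval

    # Hypothetical per-month files and weights (larger weight for more distant months).
    months = ["jan", "mar", "apr", "may", "jun", "jul", "aug", "sep"]
    weights = [1, 2, 3, 4, 5, 6, 7, 8]  # placeholder weights, not the official ones

    scores = []
    for month in months:
        coco_gt = COCO(f"gt_{month}.json")                      # ground truth for this month
        coco_dt = coco_gt.loadRes(f"detections_{month}.json")   # model detections
        ev = COCOeval(coco_gt, coco_dt, iouType="bbox")
        ev.evaluate()
        ev.accumulate()
        ev.summarize()
        scores.append(ev.stats[0])                              # mAP@[.5:.95]

    weighted_map = sum(w * s for w, s in zip(weights, scores)) / sum(weights)
    print(f"Weighted mAP across months: {weighted_map:.4f}")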

Basic Rules

According to the Terms and Conditions of the Challenge,

  • "the maximum number of submissions per participant at the test stage will be set to 3. Participants are not allowed to create multiple accounts to make additional submissions. The organizers may disqualify suspicious submissions that do not follow this rule."
  • "in order to be eligible for prizes, top ranked participants’ score must improve the baseline performance provided by the challenge organizers."
  • "the performances on test data will be verified after the end of the challenge during a code verification stage. Only submissions that pass the code verification will be considered to be in the final list of winning methods;"
  • "to be part of the final ranking the participants will be asked to fill out a survey (fact sheet) where a detailed and technical information about the developed approach is provided."

Note, at the test phase you may see on the Codalab leaderboard some participants with "Entries" > N (N=3, in our case, the "Max number of submissions"). This is because Codalab does not count failed submissions towards the "Max number of submissions" counter.

Winning solutions (post-challenge)

Important dates regarding code submission and fact sheets are defined in the schedule.

  • Code verification: After the end of the test phase, top participants are required to share with the organizers the source code used to generate the submitted results, with detailed and complete instructions (and requirements) so that the results can be reproduced locally (preferably using docker). Note, only solutions that pass the code verification stage are eligible for the prizes and to be announced in the final list of winning solutions. Participants are required to share both training and prediction code with pre-trained models. Participants are requested to share with the organizers a link to a code repository with the required instructions. This information must be detailed inside the fact sheets (detailed next).
    • Ideally, the instructions to reproduce the code should contain:

      1) how to structure the data (at train and test stage).
      2) how to run any preprocessing script, if needed.
      3) how to extract or load the input features, if needed.
      4) how to run the docker used to run the code and to install any required libraries, if possible/needed.
      5) how to run the script to perform the training.
      6) how to run the script to perform the predictions, that will generate the output format of the challenge.

  • Fact sheets: In addition to the source code, participants are required to share with the organizers a detailed scientific and technical description of the proposed approach using the fact-sheet template provided by the organizers. The Latex template of the fact sheets can be downloaded here.

Sharing the requested information with the organizers: Send the compressed project of your fact sheet (in .zip format), i.e., the generated PDF, .tex, .bib and any additional files to <asjo@create.aau.dk>, and put in the Subject of the E-mail "ECCV 2022 Seasons in Drift Challenge / Fact Sheets and Code repository"

IMPORTANT NOTE: we encourage participants to provide instructions that are as detailed and complete as possible so that the organizers can easily reproduce the results. If we face any problem during code verification, we may need to contact the authors; this can take time, and the release of the list of winners may be delayed.

Challenge Results (test phase)

We are happy to announce the top-2 winning solutions of the ECCV 2022 Seasons in Drift Challenge. These teams had their code verified in the code verification stage and are the top winning solutions on all challenge tracks (i.e., track 1, track 2, and track 3). Their fact sheets and links to code repositories are available at the following links: track 1, track 2, and track 3. The organizers would like to thank all the participants for making this challenge a success.

  • 1st place: GroundTruth - Team Leader: Xiaoqiang Lu (Xidian University). Team members: Tong Gou (Xi’an University of Technology), Zhongjian Huang (Xidian University), Yuting Yang (Xidian University), Xu Liu (Xidian University), LingLing Li (Xidian University), Fang Liu (Xidian University), Licheng Jiao (Xidian University)
  • 2nd place: heboyong - Team Leader: Boyong He (Xiamen University). Team members: Qianwen Ye (Xiamen University), Xianjiang Li (Xiamen University), and Weijie Guo (Xiamen University)

Associated Workshop

Check the associated Real-World Surveillance: Applications and Challenges Workshop.

News


ECCV2022 Seasons in Drift Challenge

The ChaLearn ECCV2022 Seasons in Drift Challenge has just opened on Codalab. Join us to push the boundaries of thermal object detection under concept drift on the largest annotated public thermal dataset.