Evaluation metrics


Starting kit

Download it here. It includes:

  • predictions.zip: contains predictions.pkl, the predictions of the baseline submission (generated by the tiny YOLOv3 baseline model) over the validation set. It can serve as an example to check the format of your submissions.
  • evaluation-metric.zip:
    • evaluate.py: the script used to evaluate submissions. We provide all the files needed to run the code and obtain the evaluation of predictions over the depth training data (the fact that it is depth data is irrelevant to understanding the metric computation). You only need to run the script with the following directories (in this order) as arguments; see the sketch after this list:
      • input_depth_trainpart/: contains two folders, ref and res, which hold the training-partition ground_truth.pkl (the reference) and the depth-based predictions.pkl over that ground truth, respectively.
      • output_depth_trainpart/: a scores.txt file will be generated here with the computed metrics.
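
As a minimal sketch, assuming evaluate.py and the two directories above sit in your working directory, you could run the metric computation locally and read the resulting scores.txt as follows. The inspection of the pickled objects is illustrative only; their exact structure is defined by the files in the starting kit.

```python
import pickle
import subprocess
from pathlib import Path

input_dir = Path("input_depth_trainpart")
output_dir = Path("output_depth_trainpart")

# Peek at the reference and the predictions to check the submission format.
with open(input_dir / "ref" / "ground_truth.pkl", "rb") as f:
    ground_truth = pickle.load(f)
with open(input_dir / "res" / "predictions.pkl", "rb") as f:
    predictions = pickle.load(f)
print(type(ground_truth), type(predictions))

# Run the evaluation script, passing the two directories in this order.
subprocess.run(["python", "evaluate.py", str(input_dir), str(output_dir)], check=True)

# evaluate.py writes the computed metrics to scores.txt in the output directory.
print((output_dir / "scores.txt").read_text())
```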

