Object detection is performed on the whole image across five different classes. Evaluation follows the COCO protocol: we use the mean average precision (mAP) averaged over intersection over union (IoU) thresholds from 0.50 to 0.95 in steps of 0.05 (the primary COCO challenge metric) and denote this metric by AP. We also report the mAP at an IoU threshold of 0.50 (the PASCAL VOC metric) and 0.75 (a stricter metric), denoted AP50 and AP75, respectively. We use the official pycocotools functions to calculate the performance.
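As a reference, a minimal sketch of how these scores can be obtained with pycocotools is shown below; the file names `instances_test.json` and `detections.json` are placeholders for the ground-truth annotations and the detection results in COCO format.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Placeholder paths: ground-truth annotations and detections in COCO format.
coco_gt = COCO("instances_test.json")
coco_dt = coco_gt.loadRes("detections.json")

# Bounding-box evaluation over the IoU thresholds 0.50:0.05:0.95.
coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()

# stats[0] = AP (IoU 0.50:0.95), stats[1] = AP50, stats[2] = AP75.
ap, ap50, ap75 = coco_eval.stats[0], coco_eval.stats[1], coco_eval.stats[2]
print(f"AP: {ap:.3f}  AP50: {ap50:.3f}  AP75: {ap75:.3f}")
```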
Infants and infant seats, as well as children and child seats, are treated as separate instances, i.e. the model should learn to separate the child from the child seat (and the infant from the infant seat). This also means that adults, children and infants should all be classified as a person, i.e. there is a single label for all of them.
Below is the public leaderboard for object detection for different training data and vehicles. We use the following abbreviations for the classes:
- IS = infant seat
- CS = child seat
- Person = adult passenger, child or infant
- Object = everyday object
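To make the label convention explicit, the following sketch maps fine-grained annotation labels to the detection classes described above; the label names and class IDs are illustrative assumptions, not the dataset's actual identifiers.

```python
# Hypothetical fine-grained labels -> detection classes; names and IDs
# below are assumptions for illustration, not the dataset's actual identifiers.
DETECTION_CLASSES = {"infant_seat": 1, "child_seat": 2, "person": 3, "object": 4}

LABEL_TO_CLASS = {
    "adult": "person",             # adults, children and infants all map to "person"
    "child": "person",
    "infant": "person",
    "infant_seat": "infant_seat",  # the seat itself is a separate instance
    "child_seat": "child_seat",
    "everyday_object": "object",
}

def to_detection_class_id(fine_grained_label: str) -> int:
    """Map a fine-grained annotation label to its detection class id."""
    return DETECTION_CLASSES[LABEL_TO_CLASS[fine_grained_label]]
```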
Train car *all* means that a separate model was trained for each vehicle. Each model is evaluated on the test set of every vehicle except the one it was trained on. Consequently, the overall performance of the method is the mean of these per-vehicle mean performances across all vehicles.
If a single car is mentioned as the training car, then a single model was trained only on that car and its performance is evaluated on the test images of all unseen/unknown vehicles. Consequently, we calculate the mean of the per-vehicle mean performances across all vehicles, excluding the test performance of the vehicle the model was trained on.
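As a clarification of this aggregation, a minimal sketch is given below; the vehicle names and AP values are placeholders and do not correspond to actual leaderboard entries.

```python
from statistics import mean

def overall_ap(per_vehicle_ap: dict, train_vehicles) -> float:
    """Mean of the per-model mean APs, where each model is evaluated only on
    the test sets of vehicles it was not trained on.

    per_vehicle_ap[train][test] is the AP of the model trained on `train`
    and evaluated on the test set of `test`.
    """
    per_model_means = [
        mean(ap for test, ap in per_vehicle_ap[train].items() if test != train)
        for train in train_vehicles
    ]
    return mean(per_model_means)

# Placeholder vehicle names and AP values.
vehicles = ["vehicle_a", "vehicle_b", "vehicle_c"]
per_vehicle_ap = {
    "vehicle_a": {"vehicle_a": 0.80, "vehicle_b": 0.55, "vehicle_c": 0.60},
    "vehicle_b": {"vehicle_a": 0.50, "vehicle_b": 0.82, "vehicle_c": 0.58},
    "vehicle_c": {"vehicle_a": 0.52, "vehicle_b": 0.57, "vehicle_c": 0.79},
}
print(overall_ap(per_vehicle_ap, vehicles))        # train car "all": one model per vehicle
print(overall_ap(per_vehicle_ap, ["vehicle_a"]))   # single training car: one model, unseen vehicles only
```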