Instance segmentation is performed on the whole image over five different classes. The evaluation follows the COCO evaluation protocol. We use the mean average precision (mAP) averaged over the intersection over union (IoU) thresholds 0.50:0.05:0.95 (the primary COCO challenge metric) and denote this metric by AP. We also report the mAP at a single IoU threshold of 0.50 (PASCAL VOC metric) and 0.75 (strict metric), denoted AP50 and AP75, respectively. The evaluation metrics for instance segmentation are exactly the same as for object detection, except that the intersection over union is calculated over the masks instead of the bounding boxes. We use the official pycocotools functions to calculate the performance.
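As a minimal illustration of the metric (not the official pycocotools implementation), the snippet below computes mask IoU for two hypothetical instance masks, represented as sets of pixel coordinates, and builds the 0.50:0.05:0.95 threshold grid over which AP is averaged:

```python
# Represent each instance mask as a set of (row, col) pixels.
# The masks and their shapes are made up for illustration.
gt   = {(r, c) for r in range(2) for c in range(4)}  # ground truth: 8 pixels
pred = {(r, c) for r in range(2) for c in range(3)}  # prediction:   6 pixels

# Mask IoU: intersection over union of the pixel sets
# (for object detection, the same ratio would be computed over boxes).
iou = len(gt & pred) / len(gt | pred)  # 6 / 8 = 0.75

# The 10 IoU thresholds 0.50:0.05:0.95 used for the primary COCO AP metric.
thresholds = [0.50 + 0.05 * i for i in range(10)]

# At each threshold, the prediction counts as a true positive
# only if its IoU meets that threshold; AP averages over all thresholds.
matches = [iou >= t for t in thresholds]
```

Here the prediction would be matched at the lower thresholds but rejected at the stricter ones, which is why AP over the full threshold range is harder to score on than AP50 alone.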
Infants and infant seats, as well as children and child seats, are treated as two different instances, i.e. the model should learn to separate the child from the child seat. This also means that adults, children and infants should all be classified as a person, i.e. one label covers all of them.
Below is the public leaderboard for instance segmentation for different training data and vehicles. We use the following abbreviations for the classes:
- IS = infant seat
- CS = child seat
- Person = Adult passenger, child or baby
- Object = everyday object
A train car of "all" means that one model was trained per vehicle. The general performance of the method is then evaluated by testing each model on the test set of every vehicle except the one it was trained on. Consequently, the overall performance of the method is the mean over models of each model's mean performance across those vehicles.
If a single car is listed as the train car, then a single model was trained only on that car, and its performance is evaluated on the test images of all unseen/unknown vehicles. Consequently, we calculate the mean of the per-vehicle mean performances across all vehicles, excluding the test performance of the vehicle the model was trained on.
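The single-car aggregation above can be sketched as follows; the vehicle names other than the X5 and all AP values are hypothetical placeholders, not leaderboard data:

```python
# Hypothetical per-vehicle AP scores for one model trained on a single car.
ap_per_vehicle = {
    "X5": 0.40,      # training vehicle -> excluded from the aggregate
    "CarB": 0.25,    # placeholder unseen vehicle
    "CarC": 0.22,    # placeholder unseen vehicle
}
train_car = "X5"

# Keep only the vehicles the model has never seen during training.
unseen = [ap for car, ap in ap_per_vehicle.items() if car != train_car]

# Overall score: mean of the per-vehicle means over unseen vehicles.
overall = sum(unseen) / len(unseen)  # (0.25 + 0.22) / 2 = 0.235
```

For the "all" setting, the same mean is computed once per model and those per-model means are averaged again, which is the "mean of the means" described above.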
|Name|Train Car|AP|AP50|AP75|AP (per class)|AP50 (per class)|AP75 (per class)|Paper|Code|RGB|Gray|Depth|Additional|Team|Title|Conference|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|SVIRO-Team|X5|0.258|0.52|0.221|IS: 0.33 CS: 0.23 Person: 0.24 Object: 0.23|IS: 0.78 CS: 0.38 Person: 0.60 Object: 0.32|IS: 0.25 CS: 0.27 Person: 0.09 Object: 0.28|No|Yes|No|Yes| | | | | |