Leaderboard

Instructions

This page displays the submitted results for the Detection Leaderboards. For each submission, we display several main metrics in the main table. For detailed information, additional metrics, per-sequence results and visualisation (coming soon), please click the submission name. For all tables, you can click the headers to sort the results. Note that you can also download the submission zip file. Legends, metric descriptions and references are displayed after the leaderboard tables.

2D Detection Submissions

Name  OSPA IoU  AP0.3  AP0.5  AP0.7

0.637 75.97% 67.895% 44.891%
Y. He, W. Yu, J. Han, X. Wei, X. Hong and Y. Gong Know Your Surroundings: Panoramic Multi-Object Tracking by Multimodality Collaboration in CVPRW, 2021

0.713 61.898% 52.157% 29.508%
S. Ren, K. He, R. Girshick and J. Sun Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. in NeurIPS, 2015

3D Detection Submissions

Name  OSPA IoU  AP0.3  AP0.5  AP0.7

0.761 70.724% 39.838% 4.59%
Duy-Tho Le, Hengcan Shi, Hamid Rezatofighi, Jianfei Cai. Accurate and Real-time 3D Pedestrian Detection Using an Efficient Attentive Pillar Network. in arXiv

0.778 69.209% 33.677% 2.209%
Alex H. Lang, Sourabh Vora, Holger Caesar, Lubing Zhou, Jiong Yang, Oscar Beijbom. PointPillars: Fast Encoders for Object Detection from Point Clouds. in CVPR, 2019

Additional Information Used

Symbol Description
Individual Image Method uses individual images from each camera
Stitched Image Method uses stitched images combined from the individual cameras
Pointcloud Method uses 3D pointcloud data
Online Tracking Method does frame-by-frame processing with no lookahead
Offline Tracking Method does not do in-order frame processing
Public Detections Method uses publicly available detections
Private Detections Method uses its own private detections

Evaluation Measures[1]

Measure Better Perfect Description
OSPA
OSPA2 lower 0.0 OSPA is a set-based metric which can directly capture a distance between two sets of trajectories without a thresholding parameter[2].
OSPA Localization lower 0.0 Represents different tracking errors such as displacement and size errors, track ID switches, track fragmentation, or even late track initiation/early termination[2].
OSPA Cardinality lower 0.0 Represents the cardinality mismatch between two sets, penalizing missed or false tracks without requiring an explicit definition for them[2].
AP
AP0.3 higher 100% Average Precision with a bounding-box intersection-over-union threshold of 30% [3].
AP0.5 higher 100% Average Precision with a bounding-box intersection-over-union threshold of 50% [3].
AP0.7 higher 100% Average Precision with a bounding-box intersection-over-union threshold of 70% [3].
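To illustrate the AP thresholds above, the sketch below (not the official evaluation code; box format and function name are our own assumptions) computes the intersection-over-union of two axis-aligned 2D boxes, the overlap criterion behind the AP0.3, AP0.5 and AP0.7 columns:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2), x1 < x2, y1 < y2."""
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of areas minus the shared intersection.
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 10x10 boxes overlapping by half their width share 50 of 150 units
# of union area, i.e. IoU = 1/3: a true positive at the 0.3 threshold,
# but a false positive at 0.5 and 0.7.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333...
```

A detection is matched to a ground-truth box only when their IoU exceeds the threshold, so AP0.7 rewards much tighter localization than AP0.3, which is why the AP0.7 column drops sharply in the tables above.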

Reference

  1. The style and content of the Evaluation Measures section are referenced from the MOT Challenges.
  2. Hamid Rezatofighi*, Tran Thien Dat Nguyen*, Ba-Ngu Vo, Ba-Tuong Vo, Silvio Savarese, and Ian Reid. How Trustworthy are Performance Evaluations for Basic Vision Tasks? arXiv, 2021.
  3. Mark Everingham, Luc Van Gool, Christopher K. I. Williams, John Winn, Andrew Zisserman. The PASCAL Visual Object Classes (VOC) Challenge. International Journal of Computer Vision, 88(2), 303-338, 2010.