Leaderboard

Instructions

This page displays the submitted results for the Detection Leaderboards. For each submission, the main metrics are shown in the tables below. For detailed information, additional metrics, per-sequence results and visualisations (coming soon), click the submission name. In all tables you can click the column headers to sort the results, and you can also download each submission's zip file. Legends, metric descriptions and references are given after the leaderboard tables.

2D Detection Submissions

Name  OSPA_IoU  AP0.3  AP0.5  AP0.7

0.592 75.916% 67.883% 44.897%
Y. He, W. Yu, J. Han, X. Wei, X. Hong and Y. Gong. Know Your Surroundings: Panoramic Multi-Object Tracking by Multimodality Collaboration. in CVPRW, 2021

0.607 75.766% 68.097% 45.121%
Anonymous Submission

0.626 75.075% 67.381% 43.79%
Anonymous Submission

0.629 74.219% 65.985% 40.896%
Anonymous Submission

0.668 65.424% 59.718% 40.158%
Anonymous Submission

0.682 61.787% 52.168% 29.515%
S. Ren, K. He, R. Girshick and J. Sun. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. in NeurIPS, 2015

0.721 63.884% 48.664% 22.769%
N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, S. Zagoruyko. End-to-End Object Detection with Transformers. in ECCV, 2020

0.733 61.861% 50.38% 27.165%
T. Lin, P. Goyal, R. Girshick, K. He and P. Dollár. Focal Loss for Dense Object Detection. in ICCV, 2017

0.74 55.569% 41.731% 20.906%
J. Redmon and A. Farhadi. YOLOv3: An Incremental Improvement. arXiv, 2018

0.957 2.031% 0.365% 0.022%
Anonymous Submission

3D Detection Submissions

Name  OSPA_IoU  AP0.3  AP0.5  AP0.7

0.557 76.282% 47.436% 7.053%
Anonymous Submission

0.56 75.987% 48.066% 6.564%
Anonymous Submission

0.572 76.905% 46.076% 5.296%
Jinzheng Guang, Zhengxi Hu, Shichao Wu, Qianyi Zhang and Jingtai Liu. RPEA: A Residual Path Network with Efficient Attention for 3D Pedestrian Detection from Point Clouds. in Expert Systems With Applications, 2024

0.572 76.153% 46.314% 5.366%
Anonymous Submission

0.582 76.351% 42.016% 2.649%
Dan Jia and Bastian Leibe. Person-MinkUNet: 3D Person Detection with LiDAR Point Cloud. in CVPRW, 2021

0.652 75.578% 20.238% 0.632%
Anonymous Submission

0.653 63.589% 17.054% 0.604%
Tengteng Huang, Zhe Liu, Xiwu Chen, and Xiang Bai. EPNet: Enhancing Point Features with Image Semantics for 3D Object Detection. in ECCV, 2020

0.655 66.631% 18.661% 0.671%
Tengteng Huang, Zhe Liu, Xiwu Chen, and Xiang Bai. EPNet: Enhancing Point Features with Image Semantics for 3D Object Detection. in ECCV, 2020

0.66 74.284% 42.617% 4.886%
Duy-Tho Le, Hengcan Shi, Hamid Rezatofighi and Jianfei Cai. Accurate and Real-time 3D Pedestrian Detection Using an Efficient Attentive Pillar Network. in IEEE Robotics and Automation Letters

0.677 59.252% 16.845% 0.418%
Tengteng Huang, Zhe Liu, Xiwu Chen, and Xiang Bai. EPNet: Enhancing Point Features with Image Semantics for 3D Object Detection. in ECCV, 2020.

0.709 39.781% 8.116% 0.186%
Anonymous Submission

0.732 63.922% 27.991% 1.842%
Cong Ma. TANet++: Triple Attention Network with Filtered Pointcloud on 3D Detection. arXiv preprint arXiv:2106.1536, 2021

0.764 38.205% 6.378% 0.081%
C. Qi, W. Liu, C. Wu, H. Su and L. Guibas. Frustum PointNets for 3D Object Detection from RGB-D Data. in CVPR, 2018

0.788 53.867% 4.175% 0.01%
Anonymous Submission

0.828 69.204% 16.7% 0.359%
Anonymous Submission

0.918 57.262% 8.963% 0.157%
Anonymous Submission

Additional Information Used

Symbol Description
Individual Image: Method uses individual images from each camera
Stitched Image: Method uses stitched images combined from the individual cameras
Pointcloud: Method uses 3D point cloud data
Online Tracking: Method does frame-by-frame processing with no lookahead
Offline Tracking: Method does not process frames strictly in order (may use future frames)
Public Detections: Method uses publicly available detections
Private Detections: Method uses its own private detections

Evaluation Measures[1]

Measure Better Perfect Description
OSPA
OSPA(2) lower 0.0 OSPA is a set-based metric that directly captures a distance between two sets of trajectories without a thresholding parameter [2] (see the formula sketch after this table).
OSPA Localization lower 0.0 Represents different tracking errors such as displacement and size errors, track ID switches, track fragmentation, or even late track initiation and early termination [2].
OSPA Cardinality lower 0.0 Represents the cardinality mismatch between two sets, penalizing missed or false tracks without requiring an explicit definition of them [2].
AP
AP0.3 higher 100% Average Precision with intersection-over-union of bounding boxes larger than 30% [3].
AP0.5 higher 100% Average Precision with intersection-over-union of bounding boxes larger than 50% [3].
AP0.7 higher 100% Average Precision with intersection-over-union of bounding boxes larger than 70% [3].
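
For reference, the base OSPA metric of order p with cut-off c between two finite sets X = {x_1, ..., x_m} and Y = {y_1, ..., y_n} with m <= n (swap the arguments otherwise) can be written as below, following [2]. For the OSPA_IoU numbers reported above, we read the base distance as d(x, y) = 1 - IoU(x, y) with c = 1; this is our reading of [2], not something stated on this page.

    d_p^{(c)}(X, Y) = \left[ \frac{1}{n} \left( \min_{\pi \in \Pi_n} \sum_{i=1}^{m} d^{(c)}\big(x_i, y_{\pi(i)}\big)^p + c^p (n - m) \right) \right]^{1/p},
    \qquad d^{(c)}(x, y) = \min\big(c,\, d(x, y)\big).

The OSPA Localization and OSPA Cardinality rows correspond to the two summands inside the brackets, and OSPA(2) applies the same construction to sets of trajectories by using a time-averaged OSPA between individual tracks as the base distance [2].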
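
The AP rows threshold the intersection-over-union (IoU) between predicted and ground-truth boxes at 0.3, 0.5 and 0.7 before computing Average Precision [3]. A minimal, illustrative sketch of an axis-aligned 2D IoU check is given below; it is not the benchmark's evaluation code, and the [x1, y1, x2, y2] box format is an assumption made for the example.

    # Illustrative sketch only, not the benchmark's evaluation code.
    # Boxes are assumed to be [x1, y1, x2, y2] in pixel coordinates.
    def iou(box_a, box_b):
        # Coordinates of the intersection rectangle.
        x1 = max(box_a[0], box_b[0])
        y1 = max(box_a[1], box_b[1])
        x2 = min(box_a[2], box_b[2])
        y2 = min(box_a[3], box_b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    # For AP0.5, a detection may only match a ground-truth box when their
    # IoU exceeds 0.5; AP is then the area under the precision-recall curve [3].
    print(iou([0, 0, 10, 10], [5, 0, 15, 10]))  # prints 0.333...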

References

  1. The style and content of the Evaluation Measures section are adapted from the MOT Challenge benchmarks.
  2. Hamid Rezatofighi, Tran Thien Dat Nguyen, Ba-Ngu Vo, Ba-Tuong Vo, Silvio Savarese, and Ian Reid. How Trustworthy are Performance Evaluations for Basic Vision Tasks? arXiv, 2021.
  3. Mark Everingham, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. The PASCAL Visual Object Classes (VOC) Challenge. International Journal of Computer Vision, 88(2), 303-338, 2010.