Leaderboard

Instructions

This page displays the submitted results for the Tracking Leaderboards. For each submission, the main table shows several key metrics. For detailed information (more metrics, per-sequence results and, coming soon, visualisations), click the submission name. In all tables, you can click a header to sort the results. Note that you can also download each submission's zip file. Legends, metric descriptions and references are displayed below the leaderboard tables.

2D Tracking Submissions

Name | MOTA↑ | OSPA(2)IoU↓ | IDF1↑ | HOTA↑
Anonymous Submission | 32.414% | 0.92 | 28.018% | 25.24%
Y. He, W. Yu, J. Han, X. Wei, X. Hong and Y. Gong, "Know Your Surroundings: Panoramic Multi-Object Tracking by Multimodality Collaboration", CVPRW 2021 | 32.326% | 0.886 | 33.691% | 29.931%
Anonymous Submission | 31.679% | 0.95 | 28.152% | 25.529%
Anonymous Submission | 24.05% | 0.892 | 27.312% | 23.894%
N. Wojke, A. Bewley and D. Paulus, "Simple Online and Realtime Tracking with a Deep Association Metric", ICIP 2017 | 23.878% | 0.892 | 28.031% | 24.379%
A. Shenoi, M. Patel, J. Gwak, P. Goebel, A. Sadeghian, H. Rezatofighi, R. Martín-Martín and S. Savarese, "JRMOT: A Real-Time 3D Multi-Object Tracker and a New Large-Scale Dataset", IROS 2020 | 23.132% | 0.945 | 22.168% | 21.09%
Anonymous Submission | 21.313% | 0.965 | 18.872% | 18.778%
P. Bergmann, T. Meinhardt and L. Leal-Taixé, "Tracking Without Bells and Whistles", ICCV 2019 | 20.28% | 0.945 | 23.327% | 20.865%

3D Tracking Submissions

Name | MOTA↑ | OSPA(2)IoU↓ | IDF1↑ | HOTA↑
Anonymous Submission | 24.21% | 0.975 | 21.342% | 18.93%
A. Kumar, J. Kini, A. Mian and M. Shah, "Self Supervised Learning for Multiple Object Tracking in 3D Point Clouds", IROS 2022 | 22.963% | 0.982 | 14.48% | 15.8%
A. Kumar, J. Kini, M. Shah and A. Mian, "PC-DAN: Point Cloud based Deep Affinity Network for 3D Multi-Object Tracking", arXiv, 2021 | 22.565% | 0.976 | 13.889% | 15.569%
A. Shenoi, M. Patel, J. Gwak, P. Goebel, A. Sadeghian, H. Rezatofighi, R. Martín-Martín and S. Savarese, "JRMOT: A Real-Time 3D Multi-Object Tracker and a New Large-Scale Dataset", IROS 2020 | 20.152% | 0.956 | 12.319% | 13.053%
X. Weng and K. Kitani, "A Baseline for 3D Multi-Object Tracking", IROS 2020 | 19.342% | 0.971 | 10.67% | 11.812%
Anonymous Submission | 14.941% | 0.991 | 5.934% | 8.695%

Additional Information Used

Symbol | Description
Individual Image | Method uses individual images from each camera
Stitched Image | Method uses stitched images combined from the individual cameras
Pointcloud | Method uses 3D point cloud data
Online Tracking | Method processes frames one by one with no lookahead
Offline Tracking | Method does not process frames in order
Public Detections | Method uses publicly available detections
Private Detections | Method uses its own private detections

Evaluation Measures [1]

Measure | Better | Perfect | Description

OSPA(2)IoU
OSPA(2)IoU | lower | 0.0 | OSPA is a set-based metric that directly captures a distance between two sets of trajectories without a thresholding parameter [2].
OSPA(2)IoU Localization | lower | 0.0 | Represents tracking errors such as displacement and size errors, track ID switches, track fragmentation, and late initiation or early termination of tracks [2].
OSPA(2)IoU Cardinality | lower | 0.0 | Represents the cardinality mismatch between the two sets, penalizing missed or false tracks without requiring an explicit definition for them [2].
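For intuition, the base OSPA distance between two finite sets combines per-point localization error (cut off at c) with a cardinality penalty for unmatched points; OSPA(2) applies this construction to sets of trajectories with an IoU-based base distance [2]. Below is a minimal sketch of the base OSPA distance for tiny 2-D point sets; the function name and defaults are illustrative, the assignment is brute force, and a practical implementation would use the Hungarian algorithm instead:

```python
from itertools import permutations
from math import dist

def ospa(X, Y, c=1.0, p=1):
    """Base OSPA distance (order p, cutoff c) between two finite
    sets of 2-D points. Per-point distances are cut off at c, and
    the |Y| - |X| unmatched points are penalized at the cutoff.
    Brute force, so only suitable for tiny sets."""
    if not X and not Y:
        return 0.0
    if len(X) > len(Y):          # ensure |X| <= |Y|
        X, Y = Y, X
    m, n = len(X), len(Y)
    # best assignment of the smaller set into the larger one
    best = min(
        sum(min(dist(x, y), c) ** p for x, y in zip(X, perm))
        for perm in permutations(Y, m)
    )
    # add the cutoff penalty for the n - m unassigned points
    return ((best + c ** p * (n - m)) / n) ** (1 / p)

# identical sets have distance 0; one extra point costs c/n
print(ospa([(0, 0), (1, 1)], [(0, 0), (1, 1)]))          # 0.0
print(ospa([(0, 0)], [(0, 0), (5, 5)], c=1.0, p=1))      # 0.5
```

Because the cutoff bounds every term by c, the distance always lies in [0, c], which is why the leaderboard values sit between 0 and 1.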
HOTA
HOTA | higher | 100% | Higher Order Tracking Accuracy [3]: the geometric mean of detection accuracy and association accuracy, averaged across localization thresholds.
AssA | higher | 100% | Association Accuracy [3]: association Jaccard index averaged over all matching detections and then over localization thresholds.
DetA | higher | 100% | Detection Accuracy [3]: detection Jaccard index averaged over localization thresholds.
AssRe | higher | 100% | Association Recall [3]: TPA / (TPA + FNA), averaged over all matching detections and then over localization thresholds.
AssPr | higher | 100% | Association Precision [3]: TPA / (TPA + FPA), averaged over all matching detections and then over localization thresholds.
DetRe | higher | 100% | Detection Recall [3]: TP / (TP + FN), averaged over localization thresholds.
DetPr | higher | 100% | Detection Precision [3]: TP / (TP + FP), averaged over localization thresholds.
LocA | higher | 100% | Localization Accuracy [3]: average localization similarity over all matching detections, averaged over localization thresholds.
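At a single localization threshold, HOTA is the geometric mean of the detection Jaccard index and the mean per-detection association Jaccard index; the published metric then averages this over a range of thresholds [3]. A hedged sketch (the function name is illustrative, and the per-detection association scores are assumed to be precomputed):

```python
from math import sqrt

def hota_single_alpha(tp, fn, fp, assoc_jaccards):
    """HOTA at one localization threshold alpha.

    tp, fn, fp: detection counts at this threshold.
    assoc_jaccards: one TPA / (TPA + FNA + FPA) score per matched
    detection. Returns sqrt(DetA * AssA) for this threshold only;
    the full metric averages this over many thresholds.
    """
    det_a = tp / (tp + fn + fp)                        # DetA
    ass_a = sum(assoc_jaccards) / len(assoc_jaccards)  # AssA
    return sqrt(det_a * ass_a)

# e.g. 80 TPs, 10 FNs, 10 FPs, association Jaccard 0.5 everywhere
score = hota_single_alpha(80, 10, 10, [0.5] * 80)
print(round(score, 4))  # 0.6325, i.e. sqrt(0.8 * 0.5)
```

The geometric mean means a tracker cannot compensate for poor association with strong detection alone, which is the motivation for HOTA over MOTA.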
Clear-MOT
MOTA | higher | 100% | Multi-Object Tracking Accuracy (± denotes the standard deviation across all sequences) [4]. Combines three error sources: false positives, missed targets and identity switches.
MOTP | higher | 100% | Multi-Object Tracking Precision [4]: the total error in estimated position for matched object-hypothesis pairs over all frames, averaged by the total number of matches made.
MT | higher | 100% | Mostly tracked targets: the ratio of ground-truth trajectories covered by a track hypothesis for at least 80% of their life span.
ML | lower | 0% | Mostly lost targets: the ratio of ground-truth trajectories covered by a track hypothesis for at most 20% of their life span.
Recall | higher | 100% | Ratio of correct detections to the total number of ground-truth boxes.
Precision | higher | 100% | TP / (TP + FP).
FP | lower | 0 | Total number of false positives.
FN | lower | 0 | Total number of false negatives (missed targets).
ID Switches | lower | 0 | Number of identity switches (ID switch ratio = #ID switches / recall) [4]. Note that we follow the stricter definition of identity switches described in [5].
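MOTA folds the three error counts into a single score relative to the total number of ground-truth detections: MOTA = 1 - (FN + FP + IDSW) / GT [4]. A minimal illustration (function name is ours, not from the benchmark code):

```python
def mota(fn, fp, idsw, num_gt):
    """MOTA = 1 - (FN + FP + IDSW) / total ground-truth detections.
    Note it can go negative when the error count exceeds the
    number of ground-truth boxes."""
    return 1.0 - (fn + fp + idsw) / num_gt

# 1000 GT boxes, 200 misses, 100 false positives, 20 ID switches
print(round(mota(fn=200, fp=100, idsw=20, num_gt=1000), 2))  # 0.68
```

Because errors are summed without weighting, MOTA is dominated by detection errors (FP and FN usually far outnumber ID switches), which is a known limitation addressed by HOTA.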
Identity
IDF1 | higher | 100% | ID F1 score [6]: the ratio of correctly identified detections over the average number of ground-truth and computed detections.
IDP | higher | 100% | Identification precision: IDTP / (IDTP + IDFP) [6].
IDR | higher | 100% | Identification recall: IDTP / (IDTP + IDFN) [6].
others | - | 100% | See the publication [6].
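The identity measures are computed from identity-level true/false positive and negative counts obtained from an optimal trajectory-level matching: IDP = IDTP / (IDTP + IDFP), IDR = IDTP / (IDTP + IDFN), and IDF1 is their harmonic mean, 2·IDTP / (2·IDTP + IDFP + IDFN). A small sketch assuming those counts are already computed:

```python
def id_scores(idtp, idfp, idfn):
    """IDP, IDR and IDF1 from identity-level TP/FP/FN counts
    (themselves obtained from an optimal trajectory matching)."""
    idp = idtp / (idtp + idfp)                 # identification precision
    idr = idtp / (idtp + idfn)                 # identification recall
    idf1 = 2 * idtp / (2 * idtp + idfp + idfn) # harmonic mean of the two
    return idp, idr, idf1

vals = id_scores(idtp=600, idfp=200, idfn=400)
print(tuple(round(v, 4) for v in vals))  # (0.75, 0.6, 0.6667)
```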
Count
Dets | - | - | Total number of detected objects in a sequence.
IDs | - | - | Total number of tracked IDs in a sequence.

References

  1. The style and content of the Evaluation Measures section are referenced from the MOT Challenge benchmarks.
  2. Hamid Rezatofighi*, Tran Thien Dat Nguyen*, Ba-Ngu Vo, Ba-Tuong Vo, Silvio Savarese and Ian Reid. How Trustworthy are Performance Evaluations for Basic Vision Tasks? arXiv, 2021.
  3. Jonathon Luiten, Aljosa Osep, Patrick Dendorfer, Philip Torr, Andreas Geiger, Laura Leal-Taixé and Bastian Leibe. HOTA: A Higher Order Metric for Evaluating Multi-Object Tracking. International Journal of Computer Vision, 2020.
  4. Keni Bernardin and Rainer Stiefelhagen. Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. EURASIP Journal on Image and Video Processing, 2008(1):1-10, 2008.
  5. Yuan Li, Chang Huang and Ram Nevatia. Learning to Associate: HybridBoosted Multi-Target Tracker for Crowded Scene. In CVPR, 2009.
  6. Ergys Ristani, Francesco Solera, Roger S. Zou, Rita Cucchiara and Carlo Tomasi. Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking. In ECCV, 2016.