Leaderboard

Instructions

This page displays the submitted results for the Trajectory Forecasting leaderboard. For each submission, the main metrics are shown in the main table; for additional metrics, per-sequence results and visualisations (coming soon), click the submission name. All tables can be sorted by clicking their column headers, and each submission's zip file can be downloaded. Legends, metric descriptions and references are given after the leaderboard table. For more information on submission preparation, click here.

The challenge is open and running again (after the ICCV workshop). We show the leading submission from each group on the trajectory forecasting leaderboard. For more information on the dataset, metrics and benchmark, please refer to the JRDB-Traj paper. Our code to reproduce the baseline is available here.

Trajectory Forecasting Submissions

| Submission | EFE↓ | OSPA(2)↓ | IDF1↑ |
|---|---|---|---|
| Tim Salzmann, Lewis Chiang, Markus Ryll, Dorsa Sadigh, Carolina Parada, and Alex Bewley. Robots That Can See: Leveraging Human Pose for Trajectory Prediction. IEEE Robotics and Automation Letters, 2023. | 2.506 | 2.627 | 58.392 |
| Yang Gao. Social-pose: Human Trajectory Prediction using Input Pose. 2022. | 2.558 | 2.675 | 56.298 |
| Saeed Saadatnejad, Yang Gao, Hamid Rezatofighi, and Alexandre Alahi. JRDB-Traj: A Dataset and Benchmark for Trajectory Forecasting in Crowds. arXiv, 2023. | 2.646 | 2.76 | 54.673 |
| Anonymous Submission | 2.646 | 2.76 | 54.673 |
| Anonymous Submission | 2.666 | 2.778 | 54.441 |
| Francesco Marchetti, Federico Becattini, Lorenzo Seidenari, and Alberto Del Bimbo. SMEMO: Social Memory for Trajectory Forecasting. Pre-print (in submission). | 2.671 | 2.786 | 55.258 |
| Anonymous Submission | 2.688 | 2.8 | 56.352 |
| Anonymous Submission | 3.077 | 3.172 | 47.689 |
| Javad Amirian, Bingqing Zhang, Francisco Valente Castro, Juan Jose Baldelomar, and Jean-Bernard Hayet. OpenTraj: Assessing Prediction Complexity in Human Trajectories Datasets. 2020. | 3.498 | 3.579 | 37.127 |
| Karttikeya Mangalam, Harshayu Girase, Shreyas Agarwal, Kuan-Hui Lee, Ehsan Adeli, Jitendra Malik, and Adrien Gaidon. It Is Not the Journey but the Destination: Endpoint Conditioned Trajectory Prediction. 2020. | 3.939 | 3.961 | 23.813 |

Additional Information Used

| Symbol | Description |
|---|---|
| Individual Image | Method uses individual images from each camera |
| Stitched Image | Method uses stitched images combined from the individual cameras |
| Pointcloud | Method uses 3D pointcloud data |
| Public Tracking | Method uses publicly available tracks as observations |
| Private Tracking | Method uses its own private tracks |

Evaluation Measures [1]

EFE

| Measure | Better | Perfect | Description |
|---|---|---|---|
| EFE | lower | 0.0 | End-to-end Forecasting Error, which directly measures the distance between two sets of trajectories without requiring IDs [2]. We drew inspiration from the current trend in end-to-end evaluation metrics for trajectory forecasting and tracking [3]. |
| EFE Loc | lower | 0.0 | The localization component: the average displacement error between the two sets of trajectories. |
| EFE Card | lower | 0.0 | The cardinality component: the mismatch in the number of trajectories, penalizing missed or extra forecasts. |
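The official evaluation code is linked above; purely for intuition, here is a minimal sketch of an ID-free, set-based forecasting error in this spirit. It pairs forecast and ground-truth trajectories by a linear assignment on clipped average displacement, and charges a fixed penalty for every unmatched trajectory. The cutoff `c` and all names here are illustrative assumptions, not the benchmark's definitions.

```python
# Minimal, illustrative sketch of an ID-free set-to-set forecasting error
# in the spirit of EFE [2]. NOT the official JRDB-Traj evaluation code;
# the cutoff `c` is an assumed parameter for illustration only.
import numpy as np
from scipy.optimize import linear_sum_assignment

def efe_sketch(preds, gts, c=10.0):
    """preds, gts: lists of (T, 2) arrays, one (x, y) row per forecast step.

    Returns (total, localization, cardinality) components of the error.
    """
    n, m = len(preds), len(gts)
    if max(n, m) == 0:
        return 0.0, 0.0, 0.0
    # Pairwise cost: average displacement between two trajectories,
    # clipped at c so a single bad pairing cannot dominate the score.
    cost = np.zeros((n, m))
    for i, p in enumerate(preds):
        for j, g in enumerate(gts):
            cost[i, j] = min(np.linalg.norm(p - g, axis=-1).mean(), c)
    rows, cols = linear_sum_assignment(cost)  # best ID-free pairing
    loc = cost[rows, cols].sum()              # matched-pair displacement
    card = c * (max(n, m) - len(rows))        # missed/extra trajectories
    return (loc + card) / max(n, m), loc / max(n, m), card / max(n, m)
```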
OSPA(2)

| Measure | Better | Perfect | Description |
|---|---|---|---|
| OSPA(2) | lower | 0.0 | A set-based metric that directly captures a distance between two sets of trajectories without a thresholding parameter [3]. |
| OSPA(2) Localization | lower | 0.0 | The prediction error, covering displacement, track ID switches, track fragmentation, and even late initiation or early termination of tracks [3]. |
| OSPA(2) Cardinality | lower | 0.0 | The cardinality mismatch between the two sets, penalizing missed or false tracks without requiring an explicit definition for them [3]. |
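As a concrete reference point, the classic single-level OSPA distance between two point sets is sketched below; OSPA(2) applies the same construction to sets of tracks, using a time-averaged base distance between tracks [3]. The cutoff `c` and order `p` follow the usual OSPA convention, but the code itself is our illustration, not the benchmark implementation.

```python
# Single-level OSPA distance between two point sets (cutoff c, order p).
# OSPA(2) applies this construction at the track level with a time-averaged
# base distance [3]; this sketch is illustrative, not the benchmark code.
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c=10.0, p=1):
    """X: (n, d) array, Y: (m, d) array. Returns a value in [0, c]."""
    n, m = len(X), len(Y)
    if max(n, m) == 0:
        return 0.0
    if min(n, m) == 0:
        return c  # one set is empty: pure cardinality error
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    d = np.minimum(d, c) ** p                 # cut off large distances
    rows, cols = linear_sum_assignment(d)     # optimal point assignment
    loc = d[rows, cols].sum()                 # localization component
    card = (c ** p) * abs(n - m)              # cardinality component
    return ((loc + card) / max(n, m)) ** (1.0 / p)
```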
Identity

| Measure | Better | Perfect | Description |
|---|---|---|---|
| IDF1 | higher | 100% | ID F1 score [4]: the ratio of correctly identified detections over the average number of ground-truth and computed detections. |
| IDP | higher | 100% | Identification precision: IDTP/(IDTP+IDFP) [5]. |
| IDR | higher | 100% | Identification recall: IDTP/(IDTP+IDFN) [5]. |
| others | - | 100% | See the publication [5]. |
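Given the globally matched counts IDTP, IDFP and IDFN defined in [5], the identity scores above reduce to simple ratios. A short worked example (the counts themselves are made up):

```python
# Identity scores from globally matched counts [5]; example counts are made up.
def id_scores(idtp, idfp, idfn):
    idp = idtp / (idtp + idfp)                   # identification precision
    idr = idtp / (idtp + idfn)                   # identification recall
    idf1 = 2 * idtp / (2 * idtp + idfp + idfn)   # harmonic mean of IDP, IDR
    return idp, idr, idf1

print(id_scores(580, 140, 280))  # IDP≈0.806, IDR≈0.674, IDF1≈0.734
```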
Count

| Measure | Better | Perfect | Description |
|---|---|---|---|
| Dets | - | - | Total number of detected objects in a sequence. |
| IDs | - | - | Total number of tracked IDs in a sequence. |

References

  1. The style and content of the Evaluation Measures section are adapted from the MOT Challenge benchmarks.
  2. Saeed Saadatnejad, Yang Gao, Hamid Rezatofighi, and Alexandre Alahi. JRDB-Traj: A Dataset and Benchmark for Trajectory Forecasting in Crowds. arXiv, 2023.
  3. Hamid Rezatofighi∗, Tran Thien Dat Nguyen∗, Ba-Ngu Vo, Ba-Tuong Vo, Silvio Savarese, and Ian Reid. How Trustworthy are Performance Evaluations for Basic Vision Tasks? IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 2023.
  4. Keni Bernardin and Rainer Stiefelhagen. Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. EURASIP Journal on Image and Video Processing, 2008(1):1-10, 2008.
  5. Yuan Li, Chang Huang, and Ram Nevatia. Learning to Associate: HybridBoosted Multi-Target Tracker for Crowded Scene. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.