JackRabbot Dataset and Benchmark (JRDB)

Visual Perception for Navigation in Human Environments


JRDB is a novel dataset collected from our social mobile manipulator, JackRabbot. Our goal is to provide training and benchmarking data for research in the areas of autonomous robot navigation and all perceptual tasks related to social robotics in human environments.

Key Statistics

  • 60,000 annotated frames
  • 2.4 million 2D human bounding box annotations
  • 1.8 million 3D human bounding box annotations
  • 2.8 million individual human action labels
  • 600,000 body pose annotations
... and more! Check out the dataset details page for more information.
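After downloading, a quick sanity check over a label file takes only a few lines of Python. The JSON layout below (a per-frame mapping to lists of labeled boxes) is an illustrative assumption, not the exact JRDB schema; field names like `label_id` and `box` are hypothetical, so consult the dataset details page for the real annotation format.

```python
import json

def count_boxes_per_frame(labels):
    """Return {frame_name: number of annotated boxes} from a parsed label dict.

    Assumes a hypothetical layout: {"labels": {frame: [ {...box...}, ... ]}}.
    """
    return {frame: len(boxes) for frame, boxes in labels["labels"].items()}

# Tiny in-memory example standing in for a downloaded label file.
example = {
    "labels": {
        "000000.jpg": [
            {"label_id": "pedestrian:1", "box": [10, 20, 50, 120]},
            {"label_id": "pedestrian:2", "box": [60, 25, 40, 110]},
        ],
        "000001.jpg": [
            {"label_id": "pedestrian:1", "box": [12, 21, 50, 119]},
        ],
    }
}

counts = count_boxes_per_frame(example)
print(counts)  # {'000000.jpg': 2, '000001.jpg': 1}
```

The same pattern extends to real files via `json.load(open(path))` once the actual field names are known.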


Please note that you are required to log in before downloading JRDB. If you don't have an account, you can sign up for one here.

See Downloads
Submit to Benchmark

The JackRabbot social mobile manipulator.

News and Announcements

October 10, 2022

We have extended the Pose Estimation Challenge deadline to October 15 AoE (Anywhere on Earth). You can find the submission leaderboard here. We look forward to seeing all of your submissions!

August 26, 2022

2D and 3D test set detections, as well as head bounding boxes for both 2D and stitched images, have been released. See downloads.

August 25, 2022

A visualisation toolkit supporting multiple visualisation settings has been published; find it here.

August 24, 2022

We're excited to launch the challenge leaderboards for JRDB 2022! Check out the leaderboards here. We're running a challenge for our new dataset, JRDB-Pose, and the winners will present their work at our workshop at ECCV 2022.

June 27, 2022

As part of our ECCV22 workshop, we have released new JRDB 2022 annotations, including 2D human skeleton poses and head bounding boxes, along with improved 2D & 3D bounding box, action, and social grouping annotations. New JRDB 2022 leaderboards will be launched soon.

March 30, 2022

We will organise a new JRDB workshop in conjunction with ECCV22. Click here for more information.

March 2, 2022

The JRDB-Act paper has been accepted and will be presented at CVPR22. Find the paper here.

November 26, 2021

The JRDB dataset and benchmark have officially moved to Monash University! The previous server at Stanford will shut down soon. All previous submissions have been uploaded by the dataset admins. Please register a new account to make submissions.


JRDB 2022 (JRDB, JRDB-Act, & JRDB-Pose) New!

The newest iteration of JRDB, featuring all of our video scenes and annotations.

Please log in to view these downloads.

JRDB 2019 (JRDB & JRDB-Act)

Please log in to view these downloads.

Our Papers

JRDB: A Dataset and Benchmark of Egocentric Robot Visual Perception of Humans in Built Environments

Roberto Martín-Martín*, Mihir Patel*, Hamid Rezatofighi*, Abhijeet Shenoi, JunYoung Gwak, Eric Frankel, Amir Sadeghian, Silvio Savarese.
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021.

See paper on arXiv


@article{martin2021jrdb,
    title={JRDB: A dataset and benchmark of egocentric robot visual perception of humans in built environments},
    author={Martin-Martin, Roberto and Patel, Mihir and Rezatofighi, Hamid and Shenoi, Abhijeet and Gwak, JunYoung and Frankel, Eric and Sadeghian, Amir and Savarese, Silvio},
    journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
    year={2021}
}

JRDB-Act: A Large-scale Dataset for Spatio-temporal Action, Social Group and Activity Detection

Mahsa Ehsanpour, Fatemeh Saleh, Silvio Savarese, Ian Reid, Hamid Rezatofighi.
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.

See paper on arXiv


@inproceedings{ehsanpour2022jrdb,
    title={JRDB-Act: A Large-Scale Dataset for Spatio-Temporal Action, Social Group and Activity Detection},
    author={Ehsanpour, Mahsa and Saleh, Fatemeh and Savarese, Silvio and Reid, Ian and Rezatofighi, Hamid},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    year={2022}
}