JackRabbot Dataset and Benchmark (JRDB)

Visual Perception for Navigation in Human Environments

Overview

JRDB is a novel dataset collected from our social mobile manipulator, JackRabbot. Our goal is to provide training and benchmarking data for research in the areas of autonomous robot navigation and all perceptual tasks related to social robotics in human environments.

Key Statistics

  • 60,000 annotated frames
  • 2.4 million 2D human bounding box annotations
  • 1.8 million 3D human bounding box annotations
  • 2.8 million individual human action labels
  • 600,000 body pose annotations
... and more! Check out the dataset details page for more information.

Download

Please note that you are required to log in before downloading JRDB. If you don't have an account, you can sign up for one here.

See Downloads
Submit to Benchmark

The JackRabbot social mobile manipulator.

News and Announcements

Jun. 27, 2022

As part of our ECCV22 workshop, we have released the new JRDB2022 annotations, including 2D human skeleton poses and head bounding boxes, as well as improved 2D & 3D bounding box, action, and social grouping labels. New JRDB22 leaderboards will be launched soon.

March 30, 2022

We will organise a new JRDB workshop in conjunction with ECCV22. Click here for more information.

March 2, 2022

The JRDB-Act paper has been accepted and will be presented at CVPR22. Find the paper here.

November 26, 2021

The JRDB dataset and benchmark have officially moved to Monash University! The previous server at Stanford will shut down soon. All previous submissions have been uploaded by the dataset admins. Please register a new account to make submissions.

Downloads

JRDB 2022 (JRDB, JRDB-Act, & JRDB-Pose) New!

Please log in to view these downloads.

JRDB 2019 (JRDB & JRDB-Act)

Please log in to view these downloads.

Our Papers

JRDB: A Dataset and Benchmark of Egocentric Robot Visual Perception of Humans in Built Environments

Roberto Martín-Martín*, Mihir Patel*, Hamid Rezatofighi*, Abhijeet Shenoi, JunYoung Gwak, Eric Frankel, Amir Sadeghian, Silvio Savarese.
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021.

See paper on arXiv

Citation:

@article{martin2021jrdb,
    title={JRDB: A Dataset and Benchmark of Egocentric Robot Visual Perception of Humans in Built Environments},
    author={Martin-Martin, Roberto and Patel, Mihir and Rezatofighi, Hamid and Shenoi, Abhijeet and Gwak, JunYoung and Frankel, Eric and Sadeghian, Amir and Savarese, Silvio},
    journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
    year={2021},
    publisher={IEEE}
}

JRDB-Act: A Large-scale Dataset for Spatio-temporal Action, Social Group and Activity Detection

Mahsa Ehsanpour, Fatemeh Saleh, Silvio Savarese, Ian Reid, Hamid Rezatofighi.
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.

See paper on arXiv

Citation:

@inproceedings{ehsanpour2022jrdb,
    title={JRDB-Act: A Large-Scale Dataset for Spatio-Temporal Action, Social Group and Activity Detection},
    author={Ehsanpour, Mahsa and Saleh, Fatemeh and Savarese, Silvio and Reid, Ian and Rezatofighi, Hamid},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    year={2022}
}