JRDB Dataset Information

Overview

JRDB is a large-scale benchmark dataset designed for perceptual tasks necessary for a robot to understand a scene and human behaviour in indoor and outdoor environments.

Dataset Contents

Key Features

Crowded sequences

Some sequences include up to 260 annotated people in a single environment.

Novel environments

The dataset includes both indoor and outdoor environments, and is the first large-scale dataset with annotated indoor scenes. In outdoor scenes, the data is acquired from a pedestrian perspective (e.g. from curbs) rather than from a vehicle perspective.

Novel perspective

The data is acquired from a robot platform of human-comparable height, so its viewpoint is similar to the egocentric view of a person. This produces scenes with frequent occlusions and poses a significant challenge for detection, tracking and other perception tasks.

Stationary and dynamic sensor perspectives

The sequences combine sensor signals acquired from both stationary and moving robot perspectives.

JRDB-Pose (New!)

JRDB-Pose adds new annotations for body pose and head bounding boxes across the entire train and test video set: 600,000 human body pose annotations and 600,000 head bounding box annotations. The annotations include heavily occluded poses, making JRDB-Pose both challenging and representative of real-world environments.
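As an illustration only, pose annotations of this kind are often stored as JSON with per-person keypoint lists and boxes. The sketch below parses such a record, assuming a hypothetical COCO-style layout (`annotations`, `track_id`, flat `[x, y, visibility]` keypoint triples, `[x, y, w, h]` head boxes); the actual JRDB-Pose schema is described on the JRDB-Pose webpage and may differ.

```python
import json

# Hypothetical annotation record, loosely following a COCO-style keypoint
# layout. The real JRDB-Pose format is documented on the JRDB-Pose webpage.
sample = {
    "annotations": [
        {
            "track_id": 3,
            # Flat list of (x, y, visibility) triples, one per body joint.
            "keypoints": [410.0, 212.5, 1, 415.2, 230.1, 2],
            # Head bounding box as (x, y, width, height).
            "head_box": [402.0, 198.0, 26.0, 30.0],
        }
    ]
}

def load_pose_annotations(raw: str):
    """Parse a JSON annotation string and group keypoints into triples."""
    data = json.loads(raw)
    people = []
    for ann in data["annotations"]:
        kp = ann["keypoints"]
        triples = [tuple(kp[i:i + 3]) for i in range(0, len(kp), 3)]
        people.append({
            "track_id": ann["track_id"],
            "keypoints": triples,
            "head_box": ann["head_box"],
        })
    return people

people = load_pose_annotations(json.dumps(sample))
print(people[0]["keypoints"][0])  # (410.0, 212.5, 1)
```

A visibility flag per joint is what makes heavily occluded poses representable at all: occluded joints keep their estimated coordinates but carry a distinct visibility value, so evaluation code can treat them separately.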

Documentation

For JRDB and JRDB-Act, the following documents describe the sensor setup, the most relevant parameters, and the annotation structure.

JRDB-Pose

For information about JRDB-Pose, please see the JRDB-Pose webpage, which provides details about the annotations and data format.

Downloads

See the downloads page.