ECCV 2022: 3rd Workshop on Visual Perception for Navigation in Human Environments

The JackRabbot Human Body Pose Dataset and Benchmark


Workshop Goals

This is the third workshop in the JRDB workshop series, dedicated to the perceptual problems an autonomous robot must solve to operate, interact and navigate in human environments. These perception tasks include 2D and 3D visual scene understanding, as well as problems pertinent to understanding human action, intention and social behaviour, such as 2D-3D human detection, tracking and forecasting; 2D-3D human body skeleton pose estimation, tracking and forecasting; and human social grouping and activity recognition.

The JRDB dataset contains 67 minutes of annotated sensory data acquired from the JackRabbot mobile manipulator, comprising 54 indoor and outdoor sequences in a university campus environment. The sensory data include a stereo RGB 360° cylindrical video stream, 3D point clouds from two LiDAR sensors, audio, and GPS positions. In our first workshop, we introduced JRDB with annotations for 2D bounding boxes and 3D oriented cuboids around pedestrians. In our second workshop, we introduced new annotations for individual actions, human social group formation, and the social activity of each group. In this workshop, we additionally release annotations for 2D human body pose, comprising 650,000 annotated human body skeletons with visibility and occlusion labels. We have also invited speakers in the field of visual perception for understanding human action and behaviour.

Call for Papers


We invite researchers to submit their papers addressing topics related to autonomous (robot) navigation in human environments. Relevant topics include, but are not limited to:

  • 2D or 3D human detection and tracking
  • 2D or 3D skeleton pose estimation and tracking
  • Human motion/body skeleton pose prediction
  • Social group formation and identification
  • Individual, group and social activity recognition
  • Abnormal activity recognition
  • Motion prediction and social models
  • Human walking behaviour analysis
  • 2D and 3D semantic, instance or panoptic segmentation
  • Visual and social navigation in crowded scenes
  • Dataset proposals and bias analysis
  • New metrics and performance measures for visual perception problems related to autonomous navigation


Important Dates

  • Submission deadline for full papers:
    July 20 (Anywhere on Earth)
  • Acceptance notification of full papers:
    August 4
  • Camera-ready deadline for the full papers:
    August 18
  • JRDB-Pose competition deadline:
    October 15


Full-paper submissions should follow the ECCV format (maximum 14 single-column pages excluding references), while extended abstracts have a maximum of 2 single-column pages, excluding references. Accepted papers will have the opportunity to be presented as a poster during the workshop; however, only papers in ECCV format will appear in the proceedings. By submitting to this workshop, the authors agree to the review process and understand that we will do our best to match papers to the most suitable reviewers. The reviewing process is double-blind. Submission to the challenges is independent of the paper submission, but we encourage authors to submit to one of the challenges.

Submissions can be made here. If you have any questions about submitting, please contact us here.



In addition to the existing benchmarks and challenges on JRDB (2D-3D person detection and tracking, human social group identification, individual action detection, and social activity recognition), in this workshop we organise two new challenges using our new pose annotations.

The winner of each challenge will be awarded a prize (TBD) and a certificate. The winners will also have the opportunity to present their work as a spotlight talk (5 minutes) and a poster during the workshop.

Guidelines for participation

Participants should strictly follow the submission policy provided on the main JRDB webpage, which can be found here. In addition, to distinguish challenge submissions from regular submissions, each submission name should be followed by an ECCV22 tag, e.g., "submissionname_ECCV22". Submissions without this tag will not be considered for the challenge.

Evaluation & Toolkit

We use the first metric after "name" in each leaderboard as the main evaluation metric for ranking entries. For each benchmark, we have also created toolkits to work with the dataset, perform evaluation, and create submissions. These toolkits are available here.

The challenge deadline is October 15.



Workshop Schedule

Time Speaker Title
14:00 - 14:10 Introduction
14:10 - 14:40 Dana Kulić / Pamela Carreno-Medrano Human motion measurement and modeling for navigation
14:40 - 15:10 Otmar Hilliges Human-Centric 3D Computer Vision for Future AI Systems
15:10 - 15:40 Dima Damen Opportunities in Egocentric Vision
15:40 - 16:10 Gerard Pons-Moll Capturing and Modeling 3D Human Behavior
16:10 - 16:25 Coffee Break & Paper presentation video demo
16:25 - 16:50 Hamid Rezatofighi and Edward Vendrow Introduction to JRDB Pose Dataset and Challenge
16:50 - 17:20 Yaser Sheikh Photorealistic Telepresence
17:20 - 17:50 Adrien Gaidon TBA
17:50 - 18:00 Discussion, Closing Remarks and Awards

Invited Speakers

Gerard Pons-Moll

Professor of Computer Science, University of Tübingen

Yaser Sheikh

Associate Professor in the Robotics Institute, Carnegie Mellon University and Director, Facebook Reality Lab

Dima Damen

Professor of Computer Vision at the University of Bristol

Otmar Hilliges

Associate Professor of Computer Science, ETH Zurich

Dana Kulić

Professor at Monash University

Adrien Gaidon

Head of Machine Learning Research at Toyota Research Institute (TRI)

Program Committee

Name Organization
Aakash Kumar University of Central Florida
Dan Jia RWTH Aachen University
Edwin Pan Stanford University
Ehsan Adeli Stanford University
Haofei Xu University of Tübingen
Hsu-Kuang Chiu Waymo
Huangying Zhan The University of Adelaide
Karttikeya Mangalam UC Berkeley
Michael Wray University of Bristol
Michael Villamizar Idiap Research Institute
Mohsen Fayyaz Microsoft
Nathan Tsoi Yale University
Nikos Athanasiou Max Planck Institute for Intelligent Systems
Saeed Saadatnejad EPFL
Sandika Biswas IIT Bombay - Monash University
Shyamal Buch Stanford University
Tianyu Zhu Monash University
Vida Adeli University of Toronto
Vineet Kosaraju Stanford University
Ye Yuan Carnegie Mellon University


Organizers

Hamid Rezatofighi

Monash University

Edward Vendrow

Stanford University

Ian Reid

The University of Adelaide

Silvio Savarese

Stanford University