PRORETA 4 Datasets

During the course of the PRORETA 4 project, two datasets were published. Here you can find information about them and how to access them.

1. Object-of-Fixation Dataset

Download the dataset

The dataset was created by Julian Schwehr and Moritz Knaust of Technische Universität Darmstadt.

This dataset was created in order to evaluate different models for detecting the driver's current object of fixation, i.e., finding the object the driver is looking at when using a remote gaze tracking system. Measuring only the tracking quality of the remote gaze tracking system does not reveal the advantages and drawbacks of specific algorithmic fusion approaches. Furthermore, when estimating the driver's point of regard (PoR) and the gaze target, all algorithmic approaches share the problem that there is no ground truth for where the driver is truly looking.

For this purpose, a wearable gaze tracking device was operated in parallel with the vehicle-integrated head-eye tracking system, serving as the source of reference data on the driver's visual attention.
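To make this evaluation idea concrete, here is a minimal sketch (not the authors' evaluation code) of how frame-wise object-of-fixation estimates from the remote system could be compared against the reference obtained from the wearable device. The per-frame object-ID representation and all names are assumptions for illustration.

    # Minimal sketch, NOT the authors' evaluation code: the per-frame
    # object-ID representation is an assumption for illustration.
    def fixation_accuracy(estimated, reference):
        """Fraction of frames in which the estimated object of fixation
        agrees with the reference from the wearable gaze tracker."""
        assert len(estimated) == len(reference)
        hits = sum(est == ref for est, ref in zip(estimated, reference))
        return hits / len(estimated)

    # Hypothetical per-frame object IDs (None = no fixated object detected)
    estimated = ["car_3", "car_3", None, "pedestrian_1"]
    reference = ["car_3", "car_3", "car_3", "pedestrian_1"]
    print(f"frame-wise agreement: {fixation_accuracy(estimated, reference):.2f}")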

The dataset contains:

  • remote gaze direction measurements, stereo image recordings, and object lists of several artificial and real-world scenarios, as recorded by the PRORETA 4 test vehicle,
  • images and the point of regard as measured by the wearable eye tracking device,
  • labels for some sequences, as outlined in the associated paper,
  • raw data of the real-world drive (~5 min),
  • further information in the readme of the dataset.
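As a rough orientation for working with these contents, one synchronized sample could be modeled as sketched below. This structure is an illustrative assumption, not the official schema; the authoritative data layout is documented in the readme of the dataset.

    from dataclasses import dataclass
    from typing import Optional

    import numpy as np

    @dataclass
    class Frame:
        """One synchronized sample; field names are illustrative assumptions."""
        timestamp: float                    # recording time [s]
        gaze_direction: np.ndarray          # remote gaze direction measurement (3D vector)
        image_left: np.ndarray              # left image of the stereo recording
        image_right: np.ndarray             # right image of the stereo recording
        objects: list                       # object list from the test vehicle's perception
        wearable_por: Optional[np.ndarray]  # point of regard from the wearable device [px]
        fixation_label: Optional[str]       # labeled object of fixation, where available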

If you use this dataset in your research, please cite the associated publication:

Julian Schwehr, Moritz Knaust, and Volker Willert: “How to Evaluate Object-of-Fixation Detection”, IEEE Intelligent Vehicles Symposium (IV), 2019.

Read the Paper at IEEE Xplore

BibTeX:

@inproceedings{Schwehr.2019,
  author    = {Schwehr, Julian and Knaust, Moritz and Willert, Volker},
  title     = {How to Evaluate Object-of-Fixation Detection},
  booktitle = {IEEE Intelligent Vehicles Symposium (IV)},
  year      = {2019}
}

The gaze target tracking model mentioned above is introduced in:
Multi-Hypothesis Multi-Model Driver's Gaze Target Tracking.

2. Visual Feature Track Dataset (P4 VFTD)

Download the dataset

The dataset was created by Stefan Luthardt and Christoph Ziegler of Technische Universität Darmstadt.

This dataset contains 282 visual feature tracks. A visual feature track is a sequence of feature observations of the same real 3D landmark in consecutive image frames. Such tracks are the typical output of a classical feature matching system, e.g., a visual odometry system or a system with bundle adjustment.

The feature tracks were recorded on three different days in spring 2017 in a suburban area. The dataset provides each track as a sequence of square image patches that contain the surroundings of the observed feature. Since a stereo camera setup was used, there are two patches per feature observation. In total, the dataset contains 3162 of these image patches.
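To make this structure concrete, a feature track could be represented as in the following sketch. The class and field names are assumptions for illustration, not the dataset's actual loading code.

    from dataclasses import dataclass

    import numpy as np

    @dataclass
    class FeatureObservation:
        """One stereo observation of a landmark; names are illustrative."""
        patch_left: np.ndarray   # square image patch around the feature, left camera
        patch_right: np.ndarray  # corresponding patch from the right camera
        distance: float          # distance to the observed feature [m]

    @dataclass
    class FeatureTrack:
        """Sequence of observations of the same 3D landmark in consecutive frames."""
        track_id: int
        recording_day: str       # one of the three recording days in spring 2017
        observations: list       # list of FeatureObservation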

If you use this dataset in your research, please cite the associated publication:

Stefan Luthardt, Christoph Ziegler, Volker Willert, and Jürgen Adamy: “How to Match Tracks of Visual Features for Automotive Long-Term SLAM”, IEEE 22nd International Conference on Intelligent Transportation Systems (ITSC), 2019.

Download the Paper

The paper also provides further explanation of the track matching task and describes possible approaches to solving it.

BibTeX:

@inproceedings{Luthardt.2019,
  author    = {Luthardt, Stefan and Ziegler, Christoph and Willert, Volker and Adamy, Jürgen},
  title     = {How to Match Tracks of Visual Features for Automotive Long-Term {SLAM}},
  booktitle = {IEEE 22nd International Conference on Intelligent Transportation Systems (ITSC)},
  year      = {2019}
}

Paper describing the associated SLAM algorithm:
LLama-SLAM: Learning High-Quality Visual Landmarks for Long-Term Mapping and Localization.

The dataset was created to investigate the task of long-term feature track matching, i.e., finding all tracks that belong to the same landmark. Therefore, the dataset also contains “ground truth” labels indicating which tracks from the different days belong together. Furthermore, the distance to the feature is given for each observation.
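As an illustration of how these ground truth labels could be used, the sketch below scores a predicted grouping of tracks against the labeled correspondences via pairwise precision and recall. The grouping format and all numbers are assumptions for demonstration.

    from itertools import combinations

    def pairs(groups):
        """All unordered track-ID pairs that lie in the same group."""
        return {frozenset(p) for g in groups for p in combinations(g, 2)}

    def pairwise_precision_recall(predicted_groups, ground_truth_groups):
        """Compare a predicted track grouping against the ground truth
        labels of which tracks belong to the same landmark."""
        pred, gt = pairs(predicted_groups), pairs(ground_truth_groups)
        tp = len(pred & gt)
        precision = tp / len(pred) if pred else 1.0
        recall = tp / len(gt) if gt else 1.0
        return precision, recall

    # Hypothetical groupings: each inner list holds track IDs of one landmark
    ground_truth = [[1, 17, 203], [5, 42]]
    predicted = [[1, 17], [5, 42, 203]]
    print(pairwise_precision_recall(predicted, ground_truth))  # (0.5, 0.5)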

Like all real-world data, this dataset is not perfect. If you identify a major bug, please send an e-mail with the track ID and a description of the problem.