Fernando Amodeo, Noé Perez-Higueras, Luis Merino, Fernando Caballero
Service Robotics Lab, Universidad Pablo de Olavide, Seville, Spain
You can read the published article at the URL given in the citation below.
Cite us as:
@article{frog2025,
title = {FROG: a new people detection dataset for knee-high 2D range finders},
author = {Amodeo, Fernando and Pérez-Higueras, Noé and Merino, Luis and Caballero, Fernando},
year = 2025,
journal = {Frontiers in Robotics and AI},
volume = {12},
doi = {10.3389/frobt.2025.1671673},
issn = {2296-9144},
url = {https://www.frontiersin.org/journals/robotics-and-ai/articles/10.3389/frobt.2025.1671673},
}

Mobile robots require knowledge of the environment, especially of humans located in their vicinity. While the most common approaches for detecting humans involve computer vision, an often overlooked hardware feature of robots for people detection is their 2D range finder. These sensors were originally intended for obstacle avoidance and mapping/SLAM tasks. In most robots they are conveniently located at a height approximately between the ankle and the knee, so they can be used for detecting people too, with a larger field of view and depth resolution than cameras. In this paper, we present FROG, a new dataset for people detection using knee-high 2D range finders. This dataset has greater laser resolution, a higher scanning frequency, and more complete annotation data than existing datasets such as DROW (Beyer et al., 2018). In particular, the FROG dataset contains annotations for 100% of its laser scans (unlike DROW, which only annotates 5%), 17x more annotated scans, 100x more people annotations, and over twice the distance traveled by the robot. We propose a benchmark based on the FROG dataset and analyze a collection of state-of-the-art people detectors based on 2D range finder data. We also propose and evaluate a new end-to-end deep learning approach for people detection. Our solution works directly with the raw sensor data (no hand-crafted input features are needed), thus avoiding CPU preprocessing and relieving the developer of the need to understand domain-specific heuristics. Experimental results show that the proposed people detector attains results comparable to the state of the art, while an optimized implementation for ROS can operate at more than 500 Hz.
The data is provided in HDF5 format, containing several arrays:

- scans: Laser scan data (N, 720).
- timestamps: Timestamps associated with each laser scan (N).
- circles: Person annotations (M, 6): x, y, radius, distance, angle, angular radius.
- circle_idx and circle_num: Mapping between each scan and its corresponding slice of the circles array (N).
- split: (Only present in the training/validation set) Split associated with each scan (N; 0=training, 1=testing).

A full detailed description of the dataset format can be found in our paper. We also make available the raw unaligned odometry data from the robot for each recorded sequence. Finally, the original raw ROS bag files from which this dataset was created are also available. They contain data for the other sensors present in the robot platform that are not considered in this work, such as the cameras and the back/tilted lasers.
The current best model for FROG is: DR-SPAAM (T = 5)
All model weights are available for download. If you have a new people detector model, please send us an email with a link to your paper and a GitHub repository so that we can include it here!
The author(s) declare that financial support was received for the research and/or publication of this article. FA is supported by the predoctoral grant PRE2022-105119 as part of the INSERTION project (PID2021-127648OB-C31), funded by the Ministerio de Ciencia e Innovación. This work is partially supported by the project PICRAH4.0 (PLEC2023-010353), funded by the Transmisiones 2023 program of the Ministerio de Ciencia e Innovación, and by the project NORDIC (TED2021-132476B-I00), funded by MCIN/AEI/10.13039/501100011033 and the European Union "NextGenerationEU"/"PRTR".