In the project, we are contributing to the development of the ROS-based supporting software architecture for a social tabletop robot. This architecture integrates robot drivers and sensor interfacing, motion control and high-level decision making, multi-modal perception for social interaction, dialog and non-verbal communication, and memory for long-term interaction. As part of the project, we are also carrying out research on social motion planning and control, including motion generation for robot expressivity, motion transfer from animation, social robot decision making, and the application of the robot to different scenarios.
DeepBot: Novel Perception and Navigation Techniques for Service Robots based on Deep Learning
The last decade has seen unprecedented advances in the development of machine learning techniques with the advent of so-called deep networks. The enormous potential of deep learning has not gone unnoticed in the field of robotics, and it has recently been applied with success, especially to environment perception tasks.
However, deep learning is not exempt from problems in its application to robotics: the learning process requires large amounts of data and/or interactions to converge to generalizable solutions, which in many cases are not feasible to obtain. Furthermore, in most cases the available data does not cover the environments and conditions of robotic systems. Additionally, the approaches must explicitly consider the dynamic and computational constraints to which robotic systems are subject. In this sense, there are two promising lines for addressing these problems. On the one hand, self-supervised approaches significantly reduce the labeling effort. On the other hand, new model-based differentiable layers, capable of carrying out analytical operations known a priori, also reduce the labeled-data requirements and the training time, due to the reduction in trainable parameters, while explicitly accounting for system constraints.
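To illustrate the second line: a model-based layer replaces a generic learned mapping with an operation whose analytic form is known a priori, leaving only a handful of physically meaningful parameters to train. The following NumPy sketch is ours, not DeepBot code; it fits a planar rigid-body transform (3 trainable parameters, instead of the many a generic dense layer mapping R^2 to R^2 would need) using the model's analytic gradients:

```python
import numpy as np

class Rigid2DLayer:
    """A model-based 'layer': a planar rigid-body transform whose
    structure (rotation + translation) is known a priori. Only the
    heading angle and the translation are trainable (3 parameters)."""

    def __init__(self, theta=0.0, t=(0.0, 0.0)):
        self.theta = float(theta)
        self.t = np.asarray(t, dtype=float)

    def forward(self, p):
        c, s = np.cos(self.theta), np.sin(self.theta)
        R = np.array([[c, -s], [s, c]])
        return p @ R.T + self.t

    def grad_theta(self, p):
        """Analytic derivative of the output w.r.t. theta, known from
        the model itself -- no generic autodiff graph is needed."""
        c, s = np.cos(self.theta), np.sin(self.theta)
        dR = np.array([[-s, -c], [c, -s]])
        return p @ dR.T


def fit(layer, p, target, lr=0.1, steps=300):
    """Gradient descent on the squared error, using the analytic
    Jacobians of the known model instead of learned weights."""
    for _ in range(steps):
        err = layer.forward(p) - target            # (N, 2) residuals
        layer.theta -= lr * np.mean(np.sum(err * layer.grad_theta(p), axis=1))
        layer.t -= lr * err.mean(axis=0)
    return layer
```

With a few point correspondences, `fit` recovers the underlying rotation and translation directly; the structural prior does the generalization work that raw data would otherwise have to supply.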
The DeepBot project moves in this direction. It pursues the idea of obtaining mixed solutions that combine the best of classical techniques for localization, planning and people perception with the latest advances in deep learning, to create solutions that go beyond the current state of the art in terms of precision, speed or generality, thus opening up new possible applications for service robots.
TELEPoRTA: Machine Learning Techniques for Assistive Telepresence Robots
The project focuses on the development of new technologies for telepresence robots based on machine learning, with special emphasis on assistive robotics applications. The main objective is to develop a semi-autonomous telepresence robot that allows a human-aware and natural interaction with people, as well as to develop methods that enable telepresence interaction for cognitively impaired people through automatic scene description and image captioning. The project thus considers the following subgoals: 1) new machine learning techniques for social robot navigation; 2) new cognitive computing techniques for the perception of social robots; 3) the development of new semi-autonomous capabilities for telepresence assistive robots.
Collaboration with the Joint Research Centre, EU Commission, HUMAINT project
The HUMAINT project aims to provide a multidisciplinary understanding of the state of the art and future evolution of machine intelligence and its potential impact on human behaviour, with a focus on cognitive and socio-emotional capabilities and decision making. The project has three main goals: a) advance the scientific understanding of machine and human intelligence; b) study the impact of algorithms on humans, focusing on cognitive and socio-emotional development and decision making; c) provide insights to policy makers with respect to the previous issues.
Our group collaborates with the HUMAINT core team on robotics-related technologies and on studies of child-robot interaction.
COMplex Coordinated Inspection and SEcurity missions by UAVs in cooperation with UGVs (COMCISE)
The COMCISE project proposes a smart multi-robot system for inspection in GPS-denied areas, composed of a ground robot with ample computation and battery capacity, and a detachable tethered flying robot equipped with sensors for inspection and navigation. The main hypothesis is that such a system offers the main benefits of both platforms: long inspection cycles and high manoeuvrability. UPO's role is centered on cooperative planning, perception and navigation in the coordinated UAV-UGV system.
The specific objectives are:
1) Precise multi-robot cooperative estimation, localization and mapping. The objective is to localize the UGV+UAV system in GPS-denied areas based on the local sensors installed on both vehicles. The approaches will take advantage of the synergies between both systems, and also of the chance to estimate the relative position of one robot with respect to the other.
2) Multi-robot motion planning and control for tethered and untethered configurations. The objective is to develop new methods for motion planning and reactive collision avoidance for the whole UGV-UAV system, taking into account the restrictions induced by the tether, communications, sensor payloads, etc.
3) Cooperative planning and perception for efficient inspection applications. The objective is to develop new active sensing techniques for multi-robot perception tasks, able to handle uncertainties and constraints and to improve the perception results.
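The first objective rests on pose composition: if the UGV is localized globally and the UAV is observed relative to it, the UAV's global pose and its uncertainty follow by composing the two estimates. A minimal SE(2) sketch with first-order covariance propagation (illustrative only, not COMCISE code):

```python
import numpy as np

def compose(pose_a, cov_a, pose_ab, cov_ab):
    """SE(2) pose composition with first-order covariance propagation.

    pose_a  : global pose (x, y, heading) of robot A (e.g. the UGV)
    pose_ab : pose of robot B measured relative to A (e.g. the UAV
              as seen from the UGV's onboard sensors)
    Returns the global pose of B and its covariance, which combines
    the uncertainty of both estimates through the Jacobians."""
    xa, ya, ta = pose_a
    xb, yb, tb = pose_ab
    c, s = np.cos(ta), np.sin(ta)
    pose_b = np.array([xa + c * xb - s * yb,
                       ya + s * xb + c * yb,
                       ta + tb])
    # Jacobians of the composition w.r.t. each input pose.
    J_a = np.array([[1.0, 0.0, -s * xb - c * yb],
                    [0.0, 1.0,  c * xb - s * yb],
                    [0.0, 0.0,  1.0]])
    J_ab = np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])
    cov_b = J_a @ cov_a @ J_a.T + J_ab @ cov_ab @ J_ab.T
    return pose_b, cov_b
```

Note how the UGV's heading uncertainty is amplified by the lever arm to the UAV, which is exactly why joint, cooperative estimation pays off over localizing each vehicle in isolation.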
The NIx project proposes an ATEX-certifiable navigation module for ground robotic inspection. The navigation module will be developed so that it can be easily integrated into different robot platforms. NIx development will be divided into two stages. Phase I will focus on the implementation and validation, in real experiments, of the whole localization and navigation stack. Phase II will focus on the industrialization of the localization and navigation system according to the ATEX certification requirements, and on the usability of the system.
Our lab will participate in the competition as part of the CVAR-UPM / SRLab-UPO / UAVRG-PUT team, together with our colleagues from the Universidad Politecnica de Madrid and the Poznan University of Technology.
The objective of this project is the development and integration of a human-robot co-working system for warehouse picking applied to production line feeding. The proposed system will bring out the best of both: the extreme flexibility and adaptability of humans, and the safety and control of ground robots. This technology is particularly interesting for small and medium factories, where fully robotized warehouses are unaffordable or overkill, while still optimizing the picking task with respect to manual operation.
The project is based on two main pillars to reach its objective: i) robust person detection and tracking for safe robot navigation: this new HORSE software component for the detection and tracking of workers in the factory will integrate ultra-wide-band localization systems, image-based people detection and LIDAR-based people detection to build resilient software that exploits the synergies between the different sensor modalities; ii) increased AGV autonomy to perform autonomous navigation with people awareness in dynamic/changing scenarios.
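One standard way to exploit such synergies between modalities, shown here as a generic sketch rather than the actual HORSE component, is information-form fusion: each modality (UWB, camera, LIDAR) reports a position estimate of the worker with its own covariance, and the more confident sensors automatically dominate the fused estimate:

```python
import numpy as np

def fuse(estimates):
    """Inverse-covariance (information) fusion of independent position
    estimates of the same person from different sensor modalities.

    estimates : list of (mean, cov) pairs, one per modality, where
                mean is a 2D position and cov its 2x2 covariance.
    Returns the fused mean and covariance.  Precise sensors (small
    covariance, large information) weigh more in the result."""
    info = np.zeros((2, 2))        # accumulated information matrix
    info_vec = np.zeros(2)         # accumulated information vector
    for mean, cov in estimates:
        inv = np.linalg.inv(cov)
        info += inv
        info_vec += inv @ np.asarray(mean, dtype=float)
    fused_cov = np.linalg.inv(info)
    return fused_cov @ info_vec, fused_cov
```

For example, fusing a coarse UWB fix with a sharp LIDAR detection pulls the result toward the LIDAR position while the fused covariance shrinks below either input, which is the resilience argument: losing one modality degrades, but does not break, the track.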
The SIAR project will develop a fully autonomous ground robot able to navigate and inspect the sewer system with minimal human intervention, while keeping the possibility of manually controlling the vehicle or the sensor payload when required.
The project uses IDMind's robot platform RaposaNG as its starting point. A new robot will be built based on this know-how, with the following key steps beyond the state of the art required to properly address the challenge: a robust IP67 robot frame designed to work in the harshest environmental conditions, with increased power autonomy and flexible inspection capabilities; robust and increased communication capabilities; onboard autonomous navigation and inspection capabilities; and usability and cost-effectiveness of the developed solution. UPO leads the navigation tasks in the project.
TERESA aims to develop a telepresence robot of unprecedented social intelligence, thereby helping to pave the way for the deployment of robots in settings such as homes, schools, and hospitals that require substantial human interaction. In telepresence systems, a human controller remotely interacts with people by guiding a remotely located robot, allowing the controller to be more physically present than with standard teleconferencing. We will develop a new telepresence system that frees the controller from low-level decisions regarding navigation and body pose in social settings. Instead, TERESA will have the social intelligence to perform these functions automatically. The project's main result will be a new partially autonomous telepresence system with the capacity to make socially intelligent low-level decisions for the controller. Sometimes this requires mimicking the human controller (e.g., nodding the head) by translating human behavior to a form suitable for a robot. Other times, it requires generating novel behaviors (e.g., turning to look at the speaker) expected of a mobile robot but not exhibited by a stationary controller. TERESA will semi-autonomously navigate among groups, maintain face-to-face contact during conversations, and display appropriate body-pose behavior.
Achieving these goals requires advancing the state of the art in cognitive robotic systems. The project will not only generate new insights into socially normative robot behavior, but also produce new algorithms for interpreting social behavior, navigating in human-inhabited environments, and controlling body poses in a socially intelligent way.
The project culminates in the deployment of TERESA in an elderly day center. Because such day centers are a primary social outlet, many people become isolated when they cannot travel to them, e.g., due to illness. TERESA’s socially intelligent telepresence capabilities will enable them to continue social participation remotely.
UPO leads WP4 within TERESA, “Socially Navigating the Environment”, that deals with the development of the navigation stack (including localization, path planning and execution) required to enable enhanced, socially normative, autonomous navigation of the telepresence robot.
OCELLIMAV: New sensing and navigation systems based on Drosophila's ocelli for Micro Aerial Vehicles
Micro unmanned Aerial Vehicles (MAVs) may open up a plethora of new applications for aerial robotics, in both indoor and outdoor scenarios. However, the limited payload of these vehicles restricts the sensors and processing power that MAVs can carry, and thus the level of autonomy they can achieve without relying on external sensing and processing. Flying insects like Drosophila, on the other hand, carry out impressive maneuvers with a relatively small neural system. This project will explore the biological fundamentals of Drosophila's ocelli sensory-motor system, one of the mechanisms most likely related to fly stabilization, and the possibility of deriving new sensing and navigation systems for MAVs from it. The project will do so through a complete reverse engineering of the ocelli system, estimating the structure and functionality of its neural processing network, and then modeling it through the interaction of biological and engineering research. Novel genetic-based neural tracing methods will be employed to extract the topology of the neural network, and behavioral experiments will be devised to determine the functionalities of specific neurons. The findings will be used to derive a model, and this model will be used to characterize the relevant aspects from the point of view of estimation and control. The model will also serve to determine the adaptation of this sensory-motor system to current MAV platforms, and to design a proof-of-concept sensing and navigation system.
The expected results of the project are two-fold: to corroborate or challenge current assumptions on the use of ocelli in flies, and to create a proof-of-concept sensing device for a MAV system based on the project's findings.
The project's main objective is the development of efficient methods for dealing with uncertainties in robotic systems, and, in particular, in teams of multiple robots. One of the objectives is the development of a scalable cooperative perception system able to reason about the uncertainties associated with the sensing system in the multi-robot platform. The efficient application of online Partially Observable Markov Decision Processes (POMDPs) to this problem will be analyzed.
In the project, we aim to demonstrate the developed methods in real robotic systems for applications like surveillance.
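At the core of any POMDP approach is the belief update: a probability distribution over hidden states is propagated through the transition model for the chosen action, then corrected by the likelihood of the received observation. A minimal generic sketch (not the project's planner):

```python
import numpy as np

def belief_update(b, T, O, a, o):
    """One step of the Bayes filter underlying a POMDP.

    b : current belief, a probability vector over states
    T : T[a][s, s'] = P(s' | s, a), the transition model
    O : O[a][s', o] = P(o | s', a), the observation model
    a : action taken;  o : index of the observation received
    Returns the posterior belief over states."""
    predicted = T[a].T @ b              # predict through the dynamics
    corrected = O[a][:, o] * predicted  # weigh by observation likelihood
    return corrected / corrected.sum()  # renormalize
```

For instance, with two hidden states, an information-gathering action that does not change the state, and a sensor that reports the true state 85% of the time, a uniform belief sharpens to (0.85, 0.15) after one consistent observation; an online POMDP solver plans over exactly these belief states.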
Autonomous systems and unmanned aerial vehicles (UAVs) can play an important role in many applications, including disaster management and the monitoring and measurement of events, such as the volcanic ash cloud of April 2010. Currently, many missions cannot be accomplished or involve a high level of risk for the people involved (pilots and drivers), as unmanned vehicles are not available or not permitted. This also applies to search and rescue missions, particularly in stormy conditions, where pilots need to risk their lives. These missions could be performed or facilitated by using autonomous helicopters with accurate positioning and the ability to land on mobile platforms such as ship decks. These applications strongly depend on the UAV's reliability to react in a predictable and controllable manner in spite of perturbations, such as wind gusts. On the other hand, the cooperation, coordination and traffic control of many mobile entities are relevant issues for applications such as the automation of industrial warehousing, surveillance by aerial and ground vehicles, and transportation systems. EC-SAFEMOBIL is devoted to the development of sufficiently accurate common motion estimation and control methods and technologies to reach the levels of reliability and safety that facilitate unmanned vehicle deployment in a broad range of applications. It also includes the development of a secure architecture and the middleware to support the implementation. Two different kinds of applications are included in the project.
UPO’s team is involved through a subcontract, dealing with the development of multi-vehicle planning under uncertainty and decentralized data fusion algorithms, led by Luis Merino. In addition, we developed novel strategies for the automatic control of rotary-wing helicopters in landing operations, work led by Manuel Béjar.
FROG proposes to develop a guide robot with a winning personality and behaviours that will engage tourists in a fun exploration of outdoor attractions. The work encompasses innovation in the areas of vision-based detection, robotics design and navigation, human-robot interaction, affective computing, intelligent agent architecture and dependable autonomous outdoor robot operation. The FROG robot's fun personality and social visitor-guide behaviours aim to enhance the user experience. FROG's behaviours will be designed based on the findings of systematic social behavioural studies of human interaction with robots. FROG adapts its behaviour to the users through vision-based detection of human engagement and interest. Interactive augmented reality overlay capabilities in the body of the robot will enhance the visitors' experience and increase knowledge transfer, as information is offered through multi-sensory interaction. Gesture detection capabilities further allow the visitors to manipulate the augmented reality interface to explore specific interests.
This is a unique project with respect to Human-Robot Interaction in that it considers the development of a robot's personality and behaviours to engage users and optimise the user experience. We will design and develop those robot behaviours that complement a guide robot's personality, so that users experience and engage with the robot truly as a guide.
We lead WP2 within FROG, dealing with robot localization and social navigation. We are developing algorithms for robust localization and navigation in outdoor scenarios. Furthermore, we are working on models and tools for social navigation.
The vision of Cooperating Objects is relatively new and needs to be understood in more detail and extended with inputs from the relevant individual communities that compose it. This will enable us to better understand the impact on the research landscape and to steer the available resources in a meaningful way. The main goal of CONET is to build a strong community in the area of Cooperating Objects, capable of conducting the research needed to achieve the vision of Mark Weiser.
Therefore, the CONET project defines a set of objectives towards this goal.
Within this project, we work in the research cluster on Mobility of Cooperating Objects, mainly on the tasks of multi-robot planning under uncertainty and aerial object coordination and control.
URUS: Ubiquitous networking Robotics in Urban Settings
The URUS project focused on designing a network of robots that cooperatively interact with human beings and the environment for tasks of assistance, transportation of goods, and surveillance in urban areas. Specifically, the objective of the project was to design and develop a cognitive network robot architecture that integrates cooperating urban robots, intelligent sensors, intelligent devices and communications. Among the specific technologies developed in the project are: navigation coordination; cooperative perception; cooperative map building; task negotiation; human-robot interaction; and wireless communication strategies between users (mobile phones), the environment (cameras), and the robots. Moreover, to facilitate the tasks in the urban environment, commercial platforms specifically designed to navigate and assist humans in such urban settings were given autonomous mobility capabilities.
Proof-of-concept tests of the developed systems took place on the UPC campus, a car-free area of Barcelona.
The participation of UPO in the project was two-fold. On the one hand, we developed the navigation algorithms for our robot Romeo, to allow it to navigate safely in a pedestrian environment.
On the other hand, we participated in the development of a decentralized fusion system for collaborative person tracking and estimation using all the elements of the system (robots, camera network and sensor network).
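A key difficulty in such decentralized fusion is that two nodes (e.g. a robot and the camera network) may unknowingly share past information, so naively multiplying their estimates becomes overconfident. One widely used remedy, shown here only as a generic sketch rather than the URUS implementation, is covariance intersection:

```python
import numpy as np

def covariance_intersection(mean_a, cov_a, mean_b, cov_b, omega=0.5):
    """Fuse two estimates of the same target (e.g. a tracked person)
    whose cross-correlation is unknown, as happens between nodes of a
    decentralized tracker that may have exchanged data in the past.

    omega in [0, 1] weights the two sources.  Unlike naive
    inverse-covariance fusion, the fused covariance never claims more
    information than is actually available, so the decentralized
    filter stays consistent."""
    inv_a = np.linalg.inv(cov_a)
    inv_b = np.linalg.inv(cov_b)
    info = omega * inv_a + (1.0 - omega) * inv_b   # convex combination
    fused_cov = np.linalg.inv(info)
    fused_mean = fused_cov @ (omega * inv_a @ mean_a
                              + (1.0 - omega) * inv_b @ mean_b)
    return fused_mean, fused_cov
```

In practice omega can be chosen to minimize the trace or determinant of the fused covariance; with equal covariances and omega = 0.5, the fused mean is simply the average and the covariance does not shrink, reflecting the fact that the common information between the nodes is unknown.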
The main objective of COMETS is to design and implement a distributed control system for cooperative activities using heterogeneous Unmanned Aerial Vehicles (UAVs). In particular, both helicopters and airships are included.
A key aspect of this project is experimentation: local UAV experiments and general multi-UAV demonstrations.
Fernando Caballero and Luis Merino were directly involved in the project.