Localization and Mapping

Localization and mapping is a fundamental competence for the design of any mobile robot system. The research is focused on the design of localization and mapping systems that are robust to data-association errors, have scalable complexity, and generalize across multiple sensory modalities. Graphical models are leveraged for estimation, Bayesian models are used for multi-sensory fusion, and systems are evaluated across indoor and outdoor scenes. Current research considers the use of vision as a primary modality for mapping and localization. Research also includes the integration of vision, IMU, and RGB-D sensors. Finally, recent research includes opportunistic localization. See GitHub for details on code, ....
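The Bayesian multi-sensory fusion mentioned above can be illustrated with a minimal sketch: two independent Gaussian estimates of the same quantity (say, a position fix from vision and one from IMU dead reckoning) are combined by precision-weighted averaging. All numbers and names here are illustrative, not the laboratory's actual software.

```python
def fuse_gaussians(mu_a, var_a, mu_b, var_b):
    """Fuse two independent Gaussian estimates of the same quantity."""
    # Precisions (inverse variances) add; the fused mean is the
    # precision-weighted average of the two input means.
    precision = 1.0 / var_a + 1.0 / var_b
    var = 1.0 / precision
    mu = var * (mu_a / var_a + mu_b / var_b)
    return mu, var

# Hypothetical example: a confident vision fix (mean 2.0 m, variance 0.1)
# and a noisier IMU estimate (mean 2.6 m, variance 0.4).
mu, var = fuse_gaussians(2.0, 0.1, 2.6, 0.4)
```

The fused estimate lands closer to the lower-variance sensor and always has a smaller variance than either input, which is what makes fusing modalities worthwhile.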

Autonomous Vehicle Laboratory

The Autonomous Vehicle Laboratory studies Level 4-5 autonomous cars for urban transportation. The laboratory collaborates with a number of industrial companies to field real systems. We study perception, systems integration, local mapping, adaptive planning, and interaction with other road users such as pedestrians and bicyclists. The research is performed in collaboration with UCSD facilities in a trial of automated logistics services across campus.

Cognitive Robotics

The next challenge in mobile robotics is to endow systems with cognitive capabilities. Cognition implies the competence to represent knowledge about the external world in terms of objects, events, and agents; to autonomously acquire such knowledge; and to reason about the world to facilitate action generation. The research is focused on several aspects of cognitive robots, such as recognition of objects and activities, and dialog generation to enable interaction with humans as part of learning and the execution of tasks.
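A toy sketch of the object/event/agent representation described above may make the idea concrete. The class names and the query method are hypothetical illustrations, not the laboratory's actual knowledge-representation system.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """An object or agent in the robot's world model."""
    name: str
    kind: str                               # e.g. "object" or "agent"
    attributes: dict = field(default_factory=dict)

@dataclass
class Event:
    """An observed interaction between an actor and a target."""
    action: str
    actor: str
    target: str

class KnowledgeBase:
    """Minimal store supporting acquisition and simple queries."""
    def __init__(self):
        self.entities = {}
        self.events = []

    def add_entity(self, entity):
        self.entities[entity.name] = entity

    def observe(self, event):
        self.events.append(event)

    def events_involving(self, name):
        # Reasoning primitive: which events does this entity take part in?
        return [ev for ev in self.events
                if ev.actor == name or ev.target == name]

# Hypothetical usage: the robot recognizes a cup, a person, and an action.
kb = KnowledgeBase()
kb.add_entity(Entity("cup", "object", {"color": "red"}))
kb.add_entity(Entity("alice", "agent"))
kb.observe(Event("pick_up", actor="alice", target="cup"))
```

A real system would ground such symbols in perception (object and activity recognition) and use the event history to answer questions during dialog.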

Sensor Based Manipulation

Traditionally, robot manipulators have achieved their accuracy through the use of excellent mechanisms and strong models for control. This has enabled the design of robots with accuracies below 1 mm. To achieve repeatable accuracies better than 0.1 mm, there is a need to integrate sensors into the outer feedback loop. A number of different sensory modalities can be utilized, such as force-torque, tactile, and computer vision. We are particularly interested in vision and range data for non-contact sensing and in the use of force-torque sensing for control in contact configurations. The objective here is to integrate multiple models into hybrid dynamic control models that optimize accuracy and robustness.
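The outer sensor feedback loop can be sketched with a toy 1-D contact task: a proportional controller adjusts the commanded position along the contact normal so that the measured force-torque reading tracks a desired contact force. The stiffness model, gains, and function name are assumptions for illustration only.

```python
def simulate_force_control(f_desired=5.0, stiffness=1000.0,
                           kp=0.0005, steps=200):
    """Toy 1-D contact: reaction force = stiffness * penetration depth.

    The inner position loop is assumed ideal; the outer loop corrects
    the position command from the force error (hypothetical gains).
    """
    x_cmd = 0.0                            # commanded penetration (m)
    for _ in range(steps):
        f_measured = stiffness * x_cmd     # simulated force-torque reading (N)
        error = f_desired - f_measured
        x_cmd += kp * error                # outer-loop position correction
    return stiffness * x_cmd               # final contact force (N)

final_force = simulate_force_control()
```

With these values the force error shrinks by a factor of (1 - kp * stiffness) = 0.5 per cycle, so the contact force converges to the setpoint; a hybrid scheme would switch between such force control along the contact normal and position or vision-based control in the free directions.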

Human Robot Interaction

The acceptance of robots by non-experts is essential to the wide adoption and utilization of robot systems, and human-robot interaction (HRI) is central to such acceptance. This requires consideration of all aspects of HRI, from design through social interaction to physical interaction. In our research we focus in particular on physical HRI and the interplay between design and interaction.