Learning from experience: the way humans evolve, the way robots will

People have great expectations of robots. We would like them to be independent, smart, and capable of understanding our requests; we would like them to accomplish tasks that are part of our daily life and to help us with actions that we consider fairly easy. Unfortunately, robots and humans do not share the same notion of “easy”, and every day researchers from all over the world cooperate to push the boundaries of what a robot can do in a human context.

A common problem that roboticists face on a daily basis is how to let robots expand their task portfolio, i.e. learn to do new tasks as required. Knowledge-static robots do not provide the flexibility demanded by heavily dynamic environments such as human ones. What if a nurse in a retirement home needs a robot to physically support an elderly person in a specific way, according to preferences of the elderly person that only the nurse is aware of? What if a chef asks their robot assistant to help them julienne the vegetables, when in fact the robot only knows how to chop them? Robots clearly need to learn throughout their everyday life, adapting to new situations that were not considered when they were programmed. More importantly, it is crucial that robots learn from humans themselves, so that they can support them effectively based on their actual needs and preferences. In short, they should learn from human experience.

The learning concept has had quite an impact on robotics research, since the idea of robots autonomously (or with a limited degree of supervision) acquiring new behaviors could be one of the keys to a truly pervasive presence of robots in everyone’s daily life. Researchers all over the world are currently working on techniques to give robots smart learning skills, covering the whole behavioral spectrum: from physical tasks (e.g. manipulating objects, walking, jumping) and perceptual abilities (e.g. recognizing objects and actions) to social interaction (e.g. communicating effectively with humans). Roboticists are exploring learning paradigms developed within the AI field, such as Reinforcement Learning and Learning from Demonstration, where previous research has shown robots successfully acquiring new skills, or refining already known ones, but only in very specific tasks. These advances open up a major challenge: will the research community be able to smoothly integrate such different learning approaches, each proven effective in its own specific area, into a general learning paradigm for daily human-robot interaction? The mission is as complicated as it is exciting, and successful results could have a remarkable effect on society. The PERSEO project will be part of this journey, one more reason (if needed) to follow the evolution of the work done!
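To give a flavor of what one of these paradigms looks like in practice, here is a minimal, self-contained sketch of tabular Q-learning, one classic Reinforcement Learning algorithm. The toy environment (a five-cell corridor with a rewarded goal cell), and all hyperparameters, are illustrative assumptions for this post only; they are not taken from the PERSEO project or any specific robot system.

```python
# Minimal tabular Q-learning sketch on a hypothetical 1-D corridor:
# the agent starts at cell 0 and earns a reward upon reaching cell 4.
# All names and parameters below are illustrative, not from PERSEO.
import random

N_STATES = 5          # corridor cells 0..4; cell 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

# Q-table: estimated return for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply the action; reward 1.0 only when the goal cell is reached."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit the Q-table, sometimes explore
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            nxt, r, done = step(s, a)
            best_next = max(Q[(nxt, b)] for b in ACTIONS)
            # The Q-learning update rule
            Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
            s = nxt

train()
# After training, the greedy policy should move right from every state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)])
          for s in range(N_STATES - 1)}
```

The agent is never told "move right": it discovers that behavior purely from trial, error, and reward. Real robot tasks, of course, involve vastly larger state and action spaces, which is exactly why integrating such paradigms into general everyday learning remains an open challenge.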
