Wednesday, April 20, 2011

Leonardo funny robot


Social Learning Overview
Rather than requiring people to learn a new form of communication to interact with robots or to teach them, our research focuses on developing robots that can learn from natural human interaction in human environments.

Learning by Spatial Scaffolding
Spatial scaffolding is a naturally occurring human teaching behavior, in which teachers use their bodies to spatially structure the learning environment to direct the attention of the learner. Robotic systems can take advantage of simple, highly reliable spatial scaffolding cues to learn from human teachers.
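For concreteness, here is a minimal sketch of how such a cue might be used, assuming a hypothetical setup (object and hand positions in meters, an exponential falloff) rather than the actual Leonardo perception pipeline: objects near the teacher's hand are weighted as more salient, so the learner attends to whatever the teacher is physically structuring.

```python
# A minimal sketch (not the actual system) of using a teacher's hand position
# as a spatial scaffolding cue: objects closer to the hand receive higher
# attention weight, biasing the learner toward what the teacher highlights.
import math

def attention_weights(object_positions, teacher_hand, falloff=0.5):
    """Return a normalized attention weight per object, biased toward
    objects near the teacher's hand (3D positions in meters)."""
    raw = []
    for pos in object_positions:
        dist = math.dist(pos, teacher_hand)
        raw.append(math.exp(-dist / falloff))  # closer -> exponentially more salient
    total = sum(raw)
    return [w / total for w in raw]

# Example: the teacher's hand hovers near the second object,
# so it dominates the learner's attention.
objects = [(0.0, 0.0, 0.0), (0.4, 0.1, 0.0), (1.2, 0.0, 0.0)]
hand = (0.45, 0.1, 0.05)
print(attention_weights(objects, hand))
```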

Learning by Socially Guided Exploration
Personal robots must be able to learn new skills and tasks on the job from ordinary people. How can we design robots that learn effectively and opportunistically on their own, but are also receptive to human guidance, both to customize what the robot learns and to improve how it learns?
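As a toy illustration of that balance, the sketch below lets an optional human suggestion bias an otherwise self-driven exploration policy. The action names, value function, and trust parameter are all hypothetical, not part of our architecture.

```python
# A minimal sketch of socially guided exploration: the robot explores actions
# on its own, but a human suggestion (when present) biases what it tries next.
import random

def choose_action(actions, estimated_value, suggestion=None,
                  trust=0.8, explore_rate=0.2):
    """Pick an action: follow the teacher's suggestion with probability
    `trust` when one is given; otherwise explore randomly or exploit the
    robot's own value estimates."""
    if suggestion is not None and random.random() < trust:
        return suggestion                      # customize: follow human guidance
    if random.random() < explore_rate:
        return random.choice(actions)          # explore opportunistically
    return max(actions, key=estimated_value)   # exploit what has worked so far

actions = ["press-button", "pull-lever", "open-box"]
values = {"press-button": 0.1, "pull-lever": 0.6, "open-box": 0.3}
print(choose_action(actions, values.get, suggestion="open-box"))
```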


Learning by Tutelage
Learning by human tutelage leverages the structure provided through interpersonal interaction. For instance, teachers direct the learner's attention, structure their experiences, support their learning attempts, and regulate the complexity and difficulty of the information presented. The teacher maintains a mental model of the learner's state (e.g., what is understood so far, what remains confusing or unknown) in order to structure the learning task appropriately and provide timely feedback and guidance. Meanwhile, the learner aids the instructor by expressing his or her current understanding through demonstration and through a rich variety of communicative acts such as facial expressions, gestures, shared attention, and dialog.
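One way to picture the teacher's side of this loop, purely as an illustration rather than a model taken from this work, is a simple learner-state estimate used to regulate difficulty. The skills, mastery scores, and threshold below are all made up.

```python
# A toy illustration of the teacher's mental model of the learner: track what
# the learner appears to understand and pick the next exercise just beyond
# current mastery, rather than presenting material at random.
def next_exercise(mastery, exercises):
    """mastery: dict mapping skill -> estimated competence in [0, 1].
    exercises: list of (name, required_skill, difficulty) tuples.
    Returns the easiest exercise the learner has not yet mastered."""
    candidates = [(name, skill, diff) for name, skill, diff in exercises
                  if mastery.get(skill, 0.0) < 0.8]          # not yet understood
    if not candidates:
        return None                                           # everything mastered
    return min(candidates, key=lambda e: e[2])                # smallest step up

mastery = {"grasp": 0.9, "stack": 0.4, "sort-by-color": 0.1}
exercises = [("stack two blocks", "stack", 0.3),
             ("stack five blocks", "stack", 0.6),
             ("sort red from blue", "sort-by-color", 0.5)]
print(next_exercise(mastery, exercises))
```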

Learning to Mimic Faces
This work presents a biologically inspired implementation of early facial imitation based on the AIM model proposed by Meltzoff & Moore. There are competing theories of early facial imitation, such as an innate releasing mechanism in which fixed-action patterns are triggered by the demonstrator's behavior, or the view that imitation is a by-product of neonatal synesthesia in which the infant confuses visual and proprioceptive input. Meltzoff, however, presents a compelling account of the representational nature and goal-directedness of early facial imitation, and of how it enables further social growth and understanding.
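The sketch below caricatures the AIM-style loop (a shared representation of the observed and self-produced facial configuration, compared and corrected over time). The facial parameters, gain, and update rule are illustrative assumptions, not Leonardo's actual face controller.

```python
# A schematic sketch of an Active Intermodal Mapping (AIM) style imitation
# loop: the perceived facial configuration and the robot's own proprioception
# are compared in a shared space, and motor updates reduce the mismatch.
def imitation_step(observed, proprioception, gain=0.3):
    """observed / proprioception: dicts of facial parameters (e.g. mouth
    opening, tongue protrusion) normalized to [0, 1]. Returns the updated
    proprioceptive state after one corrective step."""
    updated = {}
    for feature, target in observed.items():
        current = proprioception.get(feature, 0.0)
        updated[feature] = current + gain * (target - current)  # reduce mismatch
    return updated

state = {"mouth_open": 0.0, "tongue_protrusion": 0.0}
demo = {"mouth_open": 0.1, "tongue_protrusion": 0.9}   # demonstrator protrudes tongue
for _ in range(5):
    state = imitation_step(demo, state)
print(state)   # converges toward the demonstrated configuration
```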



Learning to Mimic Bodies
This section describes how Leo's perceptions of a human's movements are used to determine which motion from the robot's repertoire the human might be performing. The technique allows the human's joint angles to be mapped onto the robot's geometry even when the two have different morphologies, as long as the human has a consistent sense of how the mapping should work and is willing to go through a quick, imitation-inspired process to learn this body mapping. Once the perceived data is in the robot's joint space, the robot matches the human's movement to one of its own movements (or to a weighted combination of prototype movements). Representing the human's movement as one of the robot's own movements is more useful for further inference by the goal-directed behavior system than a raw collection of joint angles would be.
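A minimal sketch of that matching step, with made-up prototype trajectories rather than Leonardo's motion repertoire, might look like the following: the observed trajectory (already mapped into the robot's joint space) is scored against each prototype and represented as a normalized weighting over the robot's own movements.

```python
# A minimal sketch of matching an observed trajectory against the robot's own
# motion prototypes; data shapes and prototypes are illustrative only.
import numpy as np

def match_movement(observed, prototypes, temperature=0.1):
    """observed: (T, J) array of joint angles already mapped into the robot's
    joint space. prototypes: dict name -> (T, J) array. Returns normalized
    similarity weights over the robot's own movements."""
    dists = {name: np.mean((observed - proto) ** 2)
             for name, proto in prototypes.items()}
    scores = {name: np.exp(-d / temperature) for name, d in dists.items()}
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

T, J = 20, 4                                       # 20 frames, 4 joints (illustrative)
wave = np.sin(np.linspace(0, 2 * np.pi, T))[:, None] * np.ones((1, J))
reach = np.linspace(0, 1, T)[:, None] * np.ones((1, J))
observed = wave + 0.05 * np.random.randn(T, J)     # noisy observation of a wave
print(match_movement(observed, {"wave": wave, "reach": reach}))
```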
