Wednesday, April 20, 2011

Leonardo, the Funny Robot


Social Learning Overview
Rather than requiring people to learn a new form of communication to interact with robots or to teach them, our research concerns developing robots that can learn from natural human interaction in human environments.

Learning by Spatial Scaffolding
Spatial scaffolding is a naturally occurring human teaching behavior, in which teachers use their bodies to spatially structure the learning environment to direct the attention of the learner. Robotic systems can take advantage of simple, highly reliable spatial scaffolding cues to learn from human teachers.

Learning by Socially Guided Exploration
Personal robots must be able to learn new skills and tasks while on the job from ordinary people. How can we design robots that learn effectively and opportunistically on their own, but are also receptive to human guidance: both to customize what the robot learns, and to improve how the robot learns?


Learning by Tutelage
Learning by human tutelage leverages the structure provided through interpersonal interaction. For instance, teachers direct the learners' attention, structure their experiences, support their learning attempts, and regulate the complexity and difficulty of information for them. The teacher maintains a mental model of the learner's state (e.g., what is understood so far, what remains confusing or unknown) in order to appropriately structure the learning task with timely feedback and guidance. Meanwhile, the learner aids the instructor by expressing his or her current understanding through demonstration and a rich variety of communicative acts such as facial expressions, gestures, shared attention, and dialog.

Learning to Mimic Faces
This work presents a biologically inspired implementation of early facial imitation based on the AIM model proposed by Meltzoff & Moore. Although there are competing theories to explain early facial imitation (such as an innate releasing mechanism model where fixed-action patterns are triggered by the demonstrator's behavior, or viewing it as a by-product of neonatal synesthesia where the infant confuses input from visual and proprioceptive modalities), Meltzoff presents a compelling account for the representational nature and goal-directedness of early facial imitation, and how this enables further social growth and understanding.
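At its core, the AIM model is a perceive-compare-correct loop: the demonstrator's face and the robot's own proprioceptive feedback are encoded in a shared "supramodal" space, and the imitator nudges its actuators to shrink the mismatch between the two. Here is a minimal Python sketch of that loop; the three-feature face encoding and the gain value are illustrative assumptions, not taken from the Leonardo implementation:

import numpy as np

def aim_imitation_step(target_features, own_pose, gain=0.3):
    # One perceive-compare-correct step: move the face toward the
    # demonstrated expression by a fraction of the remaining mismatch.
    error = target_features - own_pose
    return own_pose + gain * error

# Hypothetical 3-feature encoding: [mouth_open, tongue_out, brow_raise]
target = np.array([0.9, 0.8, 0.0])  # demonstrator opens mouth, protrudes tongue
pose = np.zeros(3)                  # imitator starts from a neutral face
for _ in range(10):
    pose = aim_imitation_step(target, pose)
print(pose)  # close to the target expression after a few corrections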



Learning to Mimic Bodies
This section describes the process of using Leo's perceptions of the human's movements to determine which motion from the robot's repertoire the human might be performing. The technique described here allows the joint angles of the human to be mapped to the geometry of the robot even if they have different morphologies, as long as the human has a consistent sense of how the mapping should work and is willing to go through a quick, imitation-inspired process to teach the robot this body mapping. Once the perceived data is in the joint space of the robot, the robot tries to match the movement of the human to one of its own movements (or a weighted combination of prototype movements). Representing the human's movements as one of the robot's own movements is more useful for further inference by the goal-directed behavior system than a raw collection of joint angles.
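The matching step can be sketched as a distance computation in joint space: once the human's trajectory has been mapped onto the robot's joints, score it against each prototype movement and turn the distances into blend weights. This toy Python version shows the idea; the softmax weighting and the tiny two-joint prototypes are our own illustrative assumptions, not the actual Leonardo code:

import numpy as np

def match_to_prototypes(observed, prototypes, temperature=1.0):
    # observed:   (T, J) trajectory already mapped into robot joint space
    # prototypes: dict name -> (T, J) trajectory from the robot's repertoire
    # Returns softmax weights over prototypes (closer = heavier).
    names = list(prototypes)
    dists = np.array([np.linalg.norm(observed - prototypes[n]) for n in names])
    weights = np.exp(-dists / temperature)
    weights /= weights.sum()
    return dict(zip(names, weights))

# Hypothetical repertoire of two 5-frame, 2-joint movements:
wave = np.array([[0, 0], [0.5, 0], [1, 0], [0.5, 0], [0, 0]])
point = np.array([[0, 0], [0, 0.5], [0, 1], [0, 1], [0, 1]])
seen = wave + np.random.normal(0, 0.05, wave.shape)  # noisy perception
print(match_to_prototypes(seen, {"wave": wave, "point": point}))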

Tuesday, April 19, 2011

ASIMO Robot


From Honda Motor Co. comes a new small, lightweight humanoid robot named ASIMO that is able to walk in a manner closely resembling that of a human being.

One area of Honda's basic research has involved the pursuit of developing an autonomous walking robot that can be helpful to humans as well as be of practical use in society. Research and development on this project began in 1986. In 1996 the prototype P2 made its debut, followed by P3 in 1997.

"ASIMO" is a further evolved version of P3 in an endearing people-friendly size which enables it to actually perform tasks within the realm of a human living environment. It also walks in a smooth fashion which closely resembles that of a human being. The range of movement of its arms has been significantly increased and it can now be operated by a new portable controller for improved ease of operation.

ASIMO Special Features:
Smaller and Lightweight
More Advanced Walking Technology
Simple Operation
Expanded Range of Arm Movement
People-Friendly Design

Small & Lightweight: Compared to P3, ASIMO's height was reduced from 160cm to 120cm and its weight from 130kg to a mere 43kg. A height of 120cm was chosen because it was considered optimum for operating household switches, reaching doorknobs in a human living space, and performing tasks at tables and benches. The compact size and remarkably low weight were achieved by redesigning ASIMO's skeletal frame, reducing the frame's wall thickness, and specially designing the control unit to be small and light.

Advanced Walking Technology: Predicted Movement Control (for predicting the next move and shifting the center of gravity accordingly) was combined with existing walking control know-how to create i-WALK (intelligent real-time flexible walking) technology, permitting smooth changes of direction. Additionally, because ASIMO walks like a human, with instant response to sudden movements, its walking is natural and very stable.
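Honda doesn't spell out the math here, but the core idea of predicted movement control is easy to caricature: instead of reacting once a turn has already begun, look ahead at the planned headings and pre-shift the center of gravity. A toy Python sketch, with a made-up lean gain:

def predicted_cog_shift(current_heading, planned_headings, lean_gain=0.05):
    # Look at the next planned heading and pre-shift the centre of
    # gravity toward the upcoming turn, rather than after it starts.
    # Returns a lateral CoG offset in metres (positive = lean right).
    if not planned_headings:
        return 0.0
    predicted_turn = planned_headings[0] - current_heading  # degrees
    return lean_gain * predicted_turn

# Walking straight (heading 0) with a 30-degree right turn planned:
print(predicted_cog_shift(0.0, [30.0, 30.0]))  # lean right before turning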

Simple Operation: To improve the operation of the robot, flexible walking control and button operation (for gesticulations and hand waving) can be carried out from either a workstation or the handy portable controller.

Expanded Range of Movement: By installing ASIMO's shoulders 20 degrees higher than P3's, elbow height was increased to 15 degrees above horizontal, allowing a wider range of work capability. Also, ASIMO's range of vertical arm movement has been increased to 105 degrees, compared to P3's 90-degree range.

People-Friendly Design: In addition to its compact size, ASIMO features a people-friendly design that is attractive in appearance and easy to live with.

About the Name
ASIMO is an abbreviation for "Advanced Step in Innovative Mobility"; revolutionary mobility progressing into a new era.

Specifications
Weight: 43kg
Height: 1,200mm
Depth: 440mm
Width: 450mm
Walking Speed: 0 - 1.6km/h
Operating Degrees of Freedom
Head: 2 degrees of freedom
Arm: 5 x 2 = 10 degrees of freedom
Hand: 1 x 2 = 2 degrees of freedom
Leg: 6 x 2 = 12 degrees of freedom
TOTAL: 26 degrees of freedom
Actuators: Servomotor + Harmonic Speed Reducer + Drive ECU
Controller: Walking/Operation Control ECU, Wireless Transmission ECU
Sensors: Foot: 6-axis sensor; Torso: Gyroscope & Acceleration Sensor
Power Source: 38.4V/10AH (Ni-MH)
Operation: Work Station & Portable Controller

Monday, April 18, 2011

Robots Can Be Full of Love



Whether they are assisting the elderly or simply popping human skulls like ripe fruit, robots aren't usually known for their light touch. And while this may be fine as long as they stay relegated to cleaning floors and assembling cars, as robots perform more tasks that put them in contact with human flesh, be it surgery or helping the blind, their touch sensitivity becomes increasingly important.

Thankfully, researchers at the University of Ghent, Belgium, have solved the problem of delicate robot touch.

Unlike the mechanical sensors currently used to regulate robotic touching, the Belgian researchers used optical sensors to measure the feedback. Under the robot skin, they created a web of optical beams. Even the faintest break in those beams registers in the robot's computer brain, making the skin far more sensitive than mechanical sensors, which are prone to interfering with each other.
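As a rough illustration, imagine the beams laid out as a grid of rows and columns: a touch breaks one row beam and one column beam, and their intersection localizes the contact. This little Python sketch is our own simplification of the idea, not the Ghent team's actual sensor layout:

def locate_touches(row_beams, col_beams):
    # row_beams, col_beams: lists of booleans, True = beam intact.
    # A touch interrupts one row beam and one column beam, so every
    # (broken row, broken column) intersection is a candidate contact.
    broken_rows = [i for i, ok in enumerate(row_beams) if not ok]
    broken_cols = [j for j, ok in enumerate(col_beams) if not ok]
    return [(r, c) for r in broken_rows for c in broken_cols]

# A light touch at grid cell (2, 5):
rows = [True, True, False, True]
cols = [True] * 5 + [False] + [True] * 2
print(locate_touches(rows, cols))  # [(2, 5)]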

Robots like the da Vinci surgery station already register feedback from touch, but a coating of this optical sensor-laden skin could vastly enhance the sensitivity of the machine. Additionally, a range of Japanese robots designed to help the elderly could gain a lighter touch with their sensitive charges if equipped with the skin.

Really, any interaction between human flesh and robot surfaces could benefit from the more lifelike touch provided by this sensor array. And to answer the question you're all thinking but won't say: yes. But please, get your mind out of the gutter. This is a family site.



Sunday, April 17, 2011

Swarm Robot

Use Microsoft Surface to Control Robot Swarms With Your Fingertips

This sharp-looking tabletop touchscreen can be used to command robots and combine data from various sources, potentially improving military planning, disaster response and search-and-rescue operations.

Mark Micire, a graduate student at the University of Massachusetts-Lowell, proposes using Surface, Microsoft's interactive tabletop, to unite various types of data, robots and other smart technologies around a common goal. It seems so obvious and so simple, you have to wonder why this type of technology is not already widespread.

In defending his graduate thesis earlier this week, Micire showed off a demo of his swarm-control interface, which you can watch below.

You can tap, touch and drag little icons to command individual robots or robot swarms. You can leave a trail of crumbs for them to follow, and you can draw paths for them in a way that looks quite like Flight Control, one of our favorite iPod/iPad games. To test his system, Micire steered a four-wheeled vehicle through a plywood maze.
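Under the hood, a drawn path boils down to a list of waypoints that each robot chases one at a time. Here's a minimal Python sketch of such a follow-the-crumbs controller; the gains and the (forward speed, turn rate) command style are our assumptions, not Micire's actual code:

import math

def follow_path(pose, waypoints, reach_dist=0.2, speed=0.5):
    # One control tick: steer toward the first remaining waypoint,
    # dropping it once the robot gets close enough ("crumb" consumed).
    if not waypoints:
        return 0.0, 0.0  # path finished: stop
    x, y, heading = pose
    wx, wy = waypoints[0]
    if math.hypot(wx - x, wy - y) < reach_dist:
        waypoints.pop(0)
        return follow_path(pose, waypoints, reach_dist, speed)
    desired = math.atan2(wy - y, wx - x)
    # wrap the heading error into [-pi, pi] before turning
    turn = math.atan2(math.sin(desired - heading), math.cos(desired - heading))
    return speed, 2.0 * turn  # (forward velocity, turn rate) command

# A robot at the origin facing east, following a short drawn stroke:
path = [(1.0, 0.0), (1.5, 0.5), (1.5, 1.5)]
print(follow_path((0.0, 0.0, 0.0), path))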

Control This Robot With a Touchscreen: Mark Micire/UMass Lowell Robotics Lab
The system can integrate a variety of data sets, like city maps, building blueprints and more. You can pan and zoom in on any map point, and you can even integrate video feeds from individual robots so you can see things from their perspective.

As Micire describes it, current disaster-response methods can’t automatically compile and combine information to search for patterns. A smart system would integrate data from all kinds of sources, including commanders, individuals and robots in the field, computer-generated risk models and more.

Emergency responders might not have the time or opportunity to get in-depth training on new technologies, so a simple touchscreen control system like this would be more useful. At the very least, it seems like a much more intuitive way to control future robot armies.



Underwater Robot

Robots Controlled by Underwater Tablets Show Off Their Swimming Skills
The New Scientist has some great new video of our flippered friends.

The Aqua robots can be used in hard-to-reach spots like coral reefs, shipwrecks or caves. 

Though the diver remains at a safe distance, he can see everything the robot sees. Check out this robot’s-eye-view of a swimming pool.

Aqua robots are controlled by tablet computers encased in a waterproof shell. Motion sensors can tell how the waterproofed computer is tilted, and the robot moves in the same direction, New Scientist reports.
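In other words, the tablet's accelerometer reading effectively becomes the swim command. A minimal Python sketch of that mapping; the axis conventions and gain are our guesses, since the Aqua control code isn't described in detail:

def tilt_to_swim_command(accel_x, accel_y, gain=1.0):
    # accel_x, accel_y: gravity components from the waterproofed
    # tablet's accelerometer, in g. Tipping the tablet forward pitches
    # the robot down; tipping it sideways yaws it, so the robot moves
    # the same way the tablet is tilted.
    pitch_cmd = gain * accel_y  # nose up/down
    yaw_cmd = gain * accel_x    # turn left/right
    return pitch_cmd, yaw_cmd

print(tilt_to_swim_command(0.0, -0.3))  # tablet tipped forward -> dive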

As we wrote earlier this summer, tablet-controlled robots working in concert with human divers would be much easier to command than undersea robots controlled from a ship. Plus, they just look so cute.




Willow Garage Robot

Willow Garage's PR2 Plays Billiards
Proving that robots really do have a place at the pub (time to change your archaic anti-droid policies, Mos Eisley Cantina), the team over at Willow Garage has programmed one of its PR2 robots to play a pretty impressive game of pool.

More impressively, they did it in just under one week.

In order to get the PR2 to make pool-shark-worthy shots, the team had to figure out how to make it recognize both the table and the balls, things that come easily to all but the thirstiest pool hall patrons.

PR2 used its high-res camera to locate and track balls and to orient itself to the table via the diamond markers on the rails.

 It further oriented itself by identifying the table legs with its lower laser sensor.

Once the bot learned how to spatially identify the balls and the table, the team simply employed an open-source pool physics program to let the PR2 plan and execute its shots.
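The physics side of shot planning leans on geometry any pool player half-knows: to pocket a ball, the cue ball must be driven at the "ghost" point one ball diameter behind the object ball along the line to the pocket. This Python sketch shows that classic calculation; it illustrates the kind of math such a planner relies on, not the Willow Garage team's actual code:

import math

BALL_RADIUS = 0.028575  # metres, standard pool ball

def ghost_ball_aim(cue, obj, pocket):
    # The ghost point sits one ball diameter behind the object ball
    # along the object-ball-to-pocket line; the cue ball is aimed so
    # its centre passes through that point at contact.
    px, py = pocket[0] - obj[0], pocket[1] - obj[1]
    d = math.hypot(px, py)
    ghost = (obj[0] - 2 * BALL_RADIUS * px / d,
             obj[1] - 2 * BALL_RADIUS * py / d)
    angle = math.atan2(ghost[1] - cue[1], ghost[0] - cue[0])
    return ghost, angle  # ghost point and cue aiming angle (radians)

ghost, angle = ghost_ball_aim(cue=(0.5, 0.5), obj=(1.0, 0.8), pocket=(1.27, 1.27))
print(ghost, math.degrees(angle))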


Slim HRP-4 Humanoid Robot

Japan’s newest RoboCop-looking humanoid robot practices yoga, tracks faces and objects and, in what seems to be a robo-requirement these days, pours drinks.

The industrial HRP-4 robot was designed to coexist with people, and its thin, athletic frame is meant to be more appealing, according to Kawada Industries, which built the robot with Japan’s National Institute of Advanced Industrial Science and Technology.

The 5-foot-tall, 86-pound robot is a deliberately downsized version of its larger sibling, the HRP-2.

Kawada first developed HRP-2 seven years ago, and wanted to design an updated version, according to a press release.

HRP-4 has 34 degrees of freedom, including seven in each arm. It can carry about a pound in each arm. All of its joint motors are less than 80 watts, as CNET reports.

A small laptop can be installed in HRP-4’s back to increase its data-processing capabilities.


Murata Girl Robot

Murata Girl And Her Beloved Unicycle

Following in the footsteps of many robots we’ve seen who perform awesome but random feats, Japanese electronics company Murata has revealed an update of their Little Seiko humanoid robot for 2010. 

Murata Girl, as she is known, is 50 centimeters tall, weighs six kilograms, and can unicycle backwards and forwards. Whereas in her previous iteration she could only ride across a straight balance beam, she is now capable of navigating an S-curve as thin as 2.5 centimeters, only one centimeter wider than the tire of her unicycle.

The secret is a balancing mechanism that calculates the angle she needs to turn to safely maneuver around the curves. She also makes use of a perhaps more rudimentary, but nonetheless effective, balancing mechanism: she holds her arms stretched out to her sides, Nastia Liukin-style. Murata Girl is battery-powered, outfitted with a camera, and controllable via Bluetooth or Wi-Fi.
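Murata hasn't published the controller, but the textbook recipe for this kind of balancing is a PD law on the lean angle: torque the wheel so the contact point gets pushed back under the body. A minimal Python sketch with purely illustrative gains:

def balance_torque(lean_angle, lean_rate, kp=80.0, kd=12.0):
    # PD balance law: drive the wheel to push the contact point back
    # under the body. lean_angle in radians from vertical (positive =
    # falling forward), lean_rate in radians/second. Gains are made up.
    return kp * lean_angle + kd * lean_rate

print(balance_torque(0.02, 0.1))  # small forward lean -> forward torque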

Also, because we know you were wondering, she’s a Virgo and her favorite pastime is (naturally) practicing riding her unicycle at the park.




Archer Robot Learns How To Aim and Shoot A Bow and Arrow

By using a learning algorithm, Italian researchers taught a child-like humanoid robot archery, even outfitting it with a spectacular headdress to celebrate its new skill.

Petar Kormushev, Sylvain Calinon and Ryo Saegusa of the Italian Institute of Technology developed an algorithm called “Archer,” for Augmented Reward Chained Regression.

The iCub robot is taught how to hold the bow and arrow, but then learns by itself how to aim and shoot the arrow so it hits the center of a target. Watch it learn below.

The researchers say this type of learning algorithm would be preferable even to their own reinforcement learning techniques, which require more input from humans.

The team used an iCub, a small humanoid robot designed to look like a 3-year-old child. It was developed by a consortium of European universities with the goal of mimicking and understanding cognition, according to Technology Review.

It has several physical and visual sensors, and “Archer” takes advantage of them to provide more feedback than other learning algorithms, the researchers say.
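The trick is that a miss isn't just "bad": the vector from the bullseye to where the arrow landed says exactly which way to correct. Here's a heavily simplified Python sketch of that idea; the real ARCHER algorithm chains a regression over several past shots, and the toy shoot() function below is a stand-in for the actual robot and bow:

import numpy as np

def shoot(aim, rng):
    # Stand-in for the real robot: maps a 2-D aim parameter to where
    # the arrow lands, with noise. The learner never sees this mapping.
    return aim * 1.3 + np.array([0.4, -0.2]) + rng.normal(0, 0.02, 2)

rng = np.random.default_rng(0)
best_aim = np.zeros(2)
best_miss = shoot(best_aim, rng)  # vector from bullseye to hit point
for trial in range(20):
    aim = best_aim - 0.6 * best_miss  # correct against the miss vector
    miss = shoot(aim, rng)
    if np.linalg.norm(miss) < np.linalg.norm(best_miss):
        best_aim, best_miss = aim, miss  # keep the best shot so far
print(best_miss)  # shrinks toward (0, 0) as the aim converges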

The team will present their findings with the archery learning algorithm at the Humanoids 2010 conference in December.