Wednesday, April 20, 2011

TOFU, the Funny Robot


TOFU is a project to explore new ways of robotic social expression by leveraging techniques that have 
been used in 2d animation for decades.


Disney Animation Studios pioneered animation tools such as "squash and stretch" and "secondary motion" in the 1950s. Such techniques have since been used widely by animators, but are not commonly used to design robots.


TOFU, who is named after the squashing and stretching food product, can also squash and stretch.


Clever use of compliant materials and elastic coupling provides an actuation method that is vibrant yet robust. Instead of eyes actuated by motors, TOFU uses inexpensive OLED displays, which offer highly dynamic and lifelike motion.

Leonardo, the Funny Robot


Social Learning Overview
Rather than requiring people to learn a new form of communication to interact with robots or to teach them, our research concerns developing robots that can learn from natural human interaction in human environments.

Learning by Spatial Scaffolding
Spatial scaffolding is a naturally occurring human teaching behavior, in which teachers use their bodies to spatially structure the learning environment to direct the attention of the learner. Robotic systems can take advantage of simple, highly reliable spatial scaffolding cues to learn from human teachers.

Learning by Socially Guided Exploration
Personal robots must be able to learn new skills and tasks while on the job from ordinary people. How can we design robots that learn effectively and opportunistically on their own, but are also receptive to human guidance --- both to customize what the robot learns, and to improve how the robot learns?


Learning by Tutelage
Learning by human tutelage leverages the structure provided through interpersonal interaction. For instance, teachers direct the learner's attention, structure their experiences, support their learning attempts, and regulate the complexity and difficulty of information for them. The teacher maintains a mental model of the learner's state (e.g., what is understood so far, what remains confusing or unknown) in order to appropriately structure the learning task with timely feedback and guidance. Meanwhile, the learner aids the instructor by expressing his or her current understanding through demonstration and using a rich variety of communicative acts such as facial expressions, gestures, shared attention, and dialog.

Learning to Mimic Faces
This work presents a biologically inspired implementation of early facial imitation based on the AIM model proposed by Meltzoff & Moore. Although there are competing theories to explain early facial imitation (such as an innate releasing mechanism model where fixed-action patterns are triggered by the demonstrator's behavior, or viewing it as a by-product of neonatal synesthesia where the infant confuses input from visual and proprioceptive modalities), Meltzoff presents a compelling account for the representational nature and goal-directedness of early facial imitation, and how this enables further social growth and understanding.



Learning to Mimic Bodies
This section describes the process of using Leo's perceptions of the human's movements to determine which motion from the robot's repertoire the human might be performing. The technique described here allows the joint angles of the human to be mapped to the geometry of the robot even if they have different morphologies, as long as the human has a consistent sense of how the mapping should be and is willing to go through a quick, imitation-inspired process to learn this body mapping. Once the perceived data is in the joint space of the robot, the robot tries to match the movement of the human to one of its own movements (or a weighted combination of prototype movements). Representing the human's movements as one of the robot's own movements is more useful for further inference using the goal-directed behavior system than a collection of joint angles.
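The matching step described above can be sketched as a nearest-prototype lookup in the robot's joint space. This is only an illustrative sketch, not Leo's actual implementation: the prototype names, the three-joint poses, and the inverse-distance weighting are all assumptions.

```python
import math

# Hypothetical prototype movements in the robot's joint space
# (each entry: one pose of joint angles, in radians).
PROTOTYPES = {
    "wave":  (0.9, 1.2, 0.1),
    "point": (0.4, 0.0, 1.1),
    "shrug": (1.4, 1.4, 0.0),
}

def distance(a, b):
    """Euclidean distance between two joint-angle poses."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_movement(mapped_pose):
    """Match a human pose (already mapped into robot joint space)
    to the nearest prototype movement."""
    return min(PROTOTYPES, key=lambda name: distance(mapped_pose, PROTOTYPES[name]))

def blend_weights(mapped_pose):
    """Soft alternative: represent the pose as a weighted combination
    of prototypes, using inverse-distance weights that sum to 1."""
    inv = {name: 1.0 / (distance(mapped_pose, p) + 1e-9)
           for name, p in PROTOTYPES.items()}
    total = sum(inv.values())
    return {name: w / total for name, w in inv.items()}
```

The soft variant corresponds to the "weighted combination of prototype movements" mentioned above: instead of snapping to one movement, the robot keeps a distribution over its repertoire.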

AR Drone to fly


We have seen the AR Drone here before. This is the quadcopter that can be flown around over WiFi using your iPhone, iPad or iPod Touch.

The accelerometers in the Apple product are used to fly the device around: you simply tip it in the direction you would like the AR Drone to fly.

The quadcopter has a sophisticated onboard processor which allows the AR Drone to maintain predictable flight. 
There is an ultrasonic sensor on the bottom to allow the height of the quadcopter to be easily maintained. 
Movement of the AR Drone is tracked by a bottom-facing camera; by analyzing each passing frame, the drone can determine how far it has flown. This is similar to the technology in an optical mouse, which detects in what direction and how far the mouse has been moved.
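The optical-mouse comparison can be illustrated with a toy block-matching sketch: try every small shift of the previous frame and keep the one that best lines up with the current frame. This is a simplified stand-in for the idea, not Parrot's actual navigation code; real optical-flow estimators work on patches and at much higher rates.

```python
def estimate_shift(prev, curr, max_shift=2):
    """Estimate (dx, dy) motion between two grayscale frames
    (lists of lists) by brute-force block matching: try every shift
    within +/-max_shift and keep the one with the lowest pixel error."""
    h, w = len(prev), len(prev[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = n = 0
            for y in range(h):
                for x in range(w):
                    sy, sx = y + dy, x + dx
                    if 0 <= sy < h and 0 <= sx < w:
                        err += (prev[y][x] - curr[sy][sx]) ** 2
                        n += 1
            err /= n  # mean squared error over the overlapping region
            if err < best_err:
                best_err, best = err, (dx, dy)
    return best

# A bright square in the first frame...
frame1 = [[0] * 8 for _ in range(8)]
frame1[2][2] = frame1[2][3] = frame1[3][2] = frame1[3][3] = 255

# ...appears shifted 2 pixels right and 1 pixel down in the next frame,
# so the ground has moved under the camera.
frame2 = [[0] * 8 for _ in range(8)]
frame2[3][4] = frame2[3][5] = frame2[4][4] = frame2[4][5] = 255
```

Summing the per-frame shifts over time gives a rough odometry estimate, which is essentially what an optical mouse does with its surface sensor.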

The front-facing camera allows the AR Drone to be flown out of visual range; you simply watch the action on the iPhone, iPad or iPod Touch.

Some of the AR Drones you see have colored bands; these enable fun augmented reality games in which you can follow another drone and even make it look different.

Watch the video for some technical details of what is inside the AR Drone and continue watching the second half of the video for some flight action. 

The people flying the quadcopters are drawn from the crowd, which shows how easy it is to just pick up and play.

I can see all sorts of extended applications beyond the fun flying aspects. Just imagine a security guard who needs to make long patrols: instead of walking a mile, he could make a quick flight around to make sure nothing is wrong.




Sunday, April 17, 2011

Slim HRP-4 Humanoid Robot

Japan’s newest RoboCop-looking humanoid robot practices yoga, tracks faces and objects and, in what seems to be a robo-requirement these days, pours drinks.

The industrial HRP-4 robot was designed to coexist with people, and its thin athlete frame is meant to be more appealing, according to Kawada Industries, which built the robot with Japan’s National Institute of Advanced Industrial Science and Technology.

The 5-foot-tall, 86-pound robot is a deliberately downsized version of its larger sibling, the HRP-2.

Kawada first developed HRP-2 seven years ago, and wanted to design an updated version, according to a press release.

HRP-4 has 34 degrees of freedom and can move each arm seven ways. It can carry about a pound in each arm. All joint motors are less than 80 watts, as CNET reports.

A small laptop can be installed in HRP-4’s back to increase its data-processing capabilities.


Murata Girl Robot

Murata Girl and her beloved unicycle. Image: Murata

Following in the footsteps of many robots we’ve seen who perform awesome but random feats, Japanese electronics company Murata has revealed an update of their Little Seiko humanoid robot for 2010. 

Murata Girl, as she is known, is 50 centimeters tall, weighs six kilograms, and can ride her unicycle backwards and forwards. Whereas in her previous iteration she could only ride across a straight balance beam, she is now capable of navigating an S-curve as thin as 2.5 centimeters, only one centimeter wider than the tire of her unicycle.

The secret is a balancing mechanism that calculates the degree she needs to turn to safely maneuver around the curves. She also makes use of a perhaps more rudimentary, but nonetheless effective, balancing technique and holds her arms stretched out to her sides, Nastia Liukin-style. Murata Girl is battery-powered, outfitted with a camera, and controllable via Bluetooth or Wi-Fi.

Also, because we know you were wondering, she’s a Virgo and her favorite pastime is (naturally) practicing riding her unicycle at the park.




Robot Guides

Robot guides make life easier for the directionally challenged

Robot guides help visitors navigate a banking center in Madrid. Image: YDreams

In an innovative solution to the problem of crowd control in a business complex filled with 5,500 employees, a banking center in Madrid has assembled a team of stylish, helpful robots to help people navigate.

According to the robots' designers, the helmet-shaped Santander Interactive Guest Assistants (SIGA) are the first machines to use swarm robotics in a commercial context, as opposed to, say, submarine exploration or flying art.

After meeting the robots, guests choose their language and destination on the console's touchscreen. The robotic butlers then take them anywhere, from the meeting room to the auditorium to the exit toward a bus stop.




Dancing Cell Phone Robots

They can cry, throw tantrums, and talk to each other
They can act out phone users' feelings


Canadian researchers trying to integrate robots into our lives have come up with a pair of dancing, crying cell phone 'bots. The robots, called Callo and Cally, are cell phones with limbs.



Cally stands about 7 inches high and walks, dances and mimics human behavior. Callo stands about 9 inches tall, and his face, which is a cell phone display screen, shows human facial expressions when he receives text-messaged emotions. When he receives a smile emoticon, Callo stands on one leg, waves his arms and smiles. If he receives a frown, his shoulders slump and he will cry. If he gets an urgent message, or a really sad one, he'll wave his arms frantically.
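The emoticon-triggered reactions described above amount to a dispatch table from message symbols to physical behaviors. A minimal sketch of the idea, assuming a made-up emoticon set and action names (this is not the actual Callo firmware):

```python
# Hypothetical mapping from text-message emoticons to robot behaviors.
EMOTICON_ACTIONS = {
    ":)": ["stand_on_one_leg", "wave_arms", "smile"],
    ":(": ["slump_shoulders", "cry"],
    ":O": ["wave_arms_frantically"],  # urgent or very sad message
}

def react_to_message(text):
    """Scan an incoming message and return the list of behaviors
    the robot should perform, in the order the emoticons appear."""
    actions = []
    for token in text.split():
        actions.extend(EMOTICON_ACTIONS.get(token, []))
    return actions
```

In a real phone-robot, each returned action name would drive a servo routine and a facial animation on the display.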


Ji-Dong Yim, a PhD student in interactive arts and technology at Simon Fraser University in Vancouver, says it's basically a simple avatar system.
The robots can communicate with each other, for instance when their masters are on a video call.
"When you move your robot, my robot will move the same, and vice versa, so that we can share emotional feelings using (physically smart) robot phones," he says in an SFU release.
The robots, which are made from Nokia N82 phone parts and components from a Bioloid robot kit, can detect human faces using OpenCV software.
Cally can even track users' facial expressions during a phone call.

The robots can also be preprogrammed to move in certain ways when receiving calls from specific phone numbers.
The same concepts could be used to make other helper robots communicate with people and build long-term intimacy with them, researchers say.


Saturday, April 16, 2011

Autom Robot Helps You Lose Weight

Autom wants to make you healthier

This small robot keeps track of your eating and exercise habits and encourages you to stay in shape.
Autom speaks with a synthetic female voice, and you interact with it using its touch-screen belly. It won't scold you if you ate two desserts last night; Autom is a very kind robot.

But can it really help you lose weight?
We met Autom and one of its creators, Cory Kidd, co-founder and CEO of Intuitive Automata, at CES earlier this month.
Kidd claims that, yes, Autom can help people lose weight. The robot is more effective than weight-loss websites and smartphone apps, he says, because people develop a bond with the robot and stick with it longer.
Kidd started developing Autom a few years ago while a grad student at MIT, and with two colleagues he founded Intuitive Automata, which is based in Hong Kong, to commercialize the robot.

I think they are onto something here, but I see some limitations in the current robot. First, the speech synthesis is very robotic. Second, the robot has no voice recognition at all. It would be nice if the robot could speak more naturally and if at least basic interactions -- like answering "yes" or "no" -- could happen via voice. The good thing is that the company might be able to improve these features in the future with software updates.

Another question is whether consumers want a robotic weight-loss coach in the first place, and how much they're willing to shell out.
Intuitive Automata plans to start selling Autom on its website later this year for around US $500 or $600. But in the video Kidd mentions something interesting: They plan to sell the robot also via health insurance companies and employers, which would give -- or subsidize -- the robots to customers and employees.



Aldebaran Robotics

seeking beta-testers for its Nao humanoid robot
Image: Aldebaran Robotics

The French robotics company Aldebaran Robotics, which introduced its Nao humanoid last year, is conducting a beta-test for people interested in helping improve the robot.
Looks like a great opportunity for robo-loving people, but there are a few things to note. First, the trial is open only to individuals living in France and the UK. The other thing: beta-testers have to pay. And it's not cheap: 4800 euros for two robots. At least taxes are included!
From their site:
As a beta-tester, you will really be at the core of Nao’s adventure. Your experience, your feedback, your suggestions and your requests will be the inputs enabling us to improve Nao. We will build a special and close relationship with every beta-tester: you will be invited to exclusive events; you will be the first to know the latest developments on Nao; you will have access to a dedicated forum to share with us and the other beta-testers; you will be involved in challenges (not only for advanced programmers) and show us your creativity and skills. You will help us make Nao!

Now, here are the details of the beta-test:
- it is open for individual customers living in France or UK only
- it is priced at 4800€, all taxes included
- for the price, you'll get 2 NAOs: a first one to be beta-tested and, as a gift for your participation and help, a second one as soon as we release the product to the general public
- the Nao version to be beta-tested is called "V3+" - this is the most advanced Nao we have ever designed
- the package also includes all necessary documentation and software, including Choregraphe.
- moreover, if we feel an upgrade of your beta-tested Nao is necessary, we will do it and won't charge you for this

Spider-Bot With Six Legs, Customizable and Cheap + VIDEO

Hexapod Spider-Bot via Robots Dreams
The KMR-M6 is a fresh arachnid-like robot from Japanese robot maker Kondo that you can own for just under $900. It scurries about like a curious spider, pulling a leg back when it encounters an obstacle and stepping carefully to ensure even footing.
It has only two servos per leg, one for vertical control and one for horizontal, which reduces costs. A system of springs and bar linkages gives the robot extra flexibility, according to the Japanese robot blog Robots Dreams. It is designed to handle rough terrain, although it's pretty entertaining to watch it march, goose-step-style, on a flat surface.
Along with the hexapod kit, Kondo will sell individual legs and parts, so home robot builders can design whatever they want, such as mounts for cameras, sensors, grippers and other uses. The spider-bot will set you back 76,000 yen, or about $880. Kondo expects to begin shipments in early May, targeting the education and hobbyist markets.


Happy Workerbot



A new cheerful factory robot aims to keep European industry competitive by working alongside humans and smiling when it accomplishes a task, so its bosses can ensure it stays busy. The pi4_workerbot, developed at Fraunhofer labs, has fingertip sensitivity (it can complete the perennially difficult robot task of grasping an egg) and a variety of facial expressions.

It has three cameras and two arms and stands as tall as an average human. It can be seamlessly integrated into assembly lines, according to Fraunhofer’s Research News.

The Happy Workerbot can pick up two pieces, a gear wheel and a housing, and carefully fiddle with them until the two pieces engage.

The robot smiles and places the correctly assembled part on the conveyor belt, a Fraunhofer news release explains. The robot’s shoulders swivel, affording it several degrees of freedom, and it also has a rotating wrist, which allows precise hand movements.

It has a 3-D camera to see its surroundings, and two other cameras allow it to inspect factory items with greater precision than a human eye. In an automotive factory, for instance, it could examine a chrome-plated object by studying how light reflects off the material. The robot is a product of the European Union-funded PISA project, which aims for greater industrial efficiency using robots.

“If a company needs to produce something fast but has no worker resources, the idea would be to rent the workerbot and integrate it into human working spaces,” Fraunhofer researcher Dragoljub Surdilovic told The Engineer.

The robot can work for 24 hours at a stretch, and it would prefer to stay busy: if its work is going smoothly, it will smile happily. If it looks bored, it's waiting for work, and the production manager knows the production process can be sped up.

Via Fraunhofer Research News.