Wednesday, April 20, 2011

TOFU Robot


TOFU is a project to explore new ways of robotic social expression by leveraging techniques that have been used in 2D animation for decades.


Disney Animation Studios pioneered animation techniques such as "squash and stretch" and "secondary motion" in the 1950s. Such techniques have since been used widely by animators, but are not commonly used to design robots.


TOFU, who is named after the squashing and stretching food product, can also squash and stretch.


Clever use of compliant materials and elastic coupling provides an actuation method that is vibrant yet robust. Instead of using eyes actuated by motors, TOFU uses inexpensive OLED displays, which offer highly dynamic and lifelike motion.
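
If you are curious what "squash and stretch" boils down to, here is a minimal sketch (my own toy example, not TOFU's actual control code): when a character stretches along one axis, the other axis is scaled down so the overall volume stays roughly constant.

# A minimal sketch of the "squash and stretch" idea from 2D animation:
# when a character stretches along one axis, the other axis is scaled
# down so the area (or volume) stays roughly constant. Values are illustrative.

def squash_stretch(stretch_y):
    """Return (x_scale, y_scale) that preserve area for a 2D character."""
    x_scale = 1.0 / stretch_y   # compensate so x_scale * y_scale == 1
    return x_scale, stretch_y

for s in (0.6, 1.0, 1.5):       # squashed, neutral, stretched
    x, y = squash_stretch(s)
    print(f"stretch_y={s:.1f} -> x_scale={x:.2f}, area={x * y:.2f}")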

Leonardo Robot


Social Learning Overview
Rather than requiring people to learn a new form of communication to interact with robots or to teach them, our research concerns developing robots that can learn from natural human interaction in human environments.

Learning by Spatial Scaffolding
Spatial scaffolding is a naturally occurring human teaching behavior, in which teachers use their bodies to spatially structure the learning environment to direct the attention of the learner. Robotic systems can take advantage of simple, highly reliable spatial scaffolding cues to learn from human teachers.

Learning by Socially Guided Exploration
Personal robots must be able to learn new skills and tasks on the job from ordinary people. How can we design robots that learn effectively and opportunistically on their own, but are also receptive to human guidance, both to customize what the robot learns and to improve how the robot learns?


Learning by Tutelage
Learning by human tutelage leverages the structure provided through interpersonal interaction. For instance, a teacher directs the learner's attention, structures their experiences, supports their learning attempts, and regulates the complexity and difficulty of information for them. The teacher maintains a mental model of the learner's state (e.g. what is understood so far, what remains confusing or unknown) in order to appropriately structure the learning task with timely feedback and guidance. Meanwhile, the learner aids the instructor by expressing his or her current understanding through demonstration and using a rich variety of communicative acts such as facial expressions, gestures, shared attention, and dialog.

Learning to Mimic Faces
This work presents a biologically inspired implementation of early facial imitation based on the AIM model proposed by Meltzoff & Moore. Although there are competing theories to explain early facial imitation (such as an innate releasing mechanism model where fixed-action patterns are triggered by the demonstrator's behavior, or viewing it as a by-product of neonatal synesthesia where the infant confuses input from visual and proprioceptive modalities), Meltzoff presents a compelling account for the representational nature and goal-directedness of early facial imitation, and how this enables further social growth and understanding.



Learning to Mimic Bodies
This section describes the process of using Leo's perceptions of the human's movements to determine which motion from the robot's repertoire the human might be performing. The technique described here allows the joint angles of the human to be mapped to the geometry of the robot even if they have different morphologies, as long as the human has a consistent sense of how the mapping should be and is willing to go through a quick, imitation-inspired process to learn this body mapping. Once the perceived data is in the joint space of the robot, the robot tries to match the movement of the human to one of its own movements (or a weighted combination of prototype movements). Representing the human's movements as one of the robot's own movements is more useful for further inference using the goal-directed behavior system than a collection of joint angles.
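
As a rough illustration of that matching step (my own sketch, not the actual Leonardo code), the snippet below weights a handful of made-up prototype poses by how close each one is to an observed pose already expressed in the robot's joint space; the nearest prototype, or a weighted blend of all of them, can then feed the goal-directed behavior system.

import math

# Hypothetical prototype movements, each summarized here as a single
# joint-angle pose in the robot's own joint space (radians).
prototypes = {
    "wave":  [0.0, 1.2, 0.4],
    "reach": [0.9, 0.3, 0.1],
    "shrug": [0.0, 0.0, 0.8],
}

def blend_weights(observed, temperature=0.5):
    """Weight each prototype by how close it is to the observed pose."""
    scores = {}
    for name, pose in prototypes.items():
        dist = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, pose)))
        scores[name] = math.exp(-dist / temperature)
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

observed_pose = [0.1, 1.0, 0.5]        # human pose mapped into robot joints
weights = blend_weights(observed_pose)
print(max(weights, key=weights.get), weights)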

Flying the AR Drone


We have seen the AR Drone here before. This is the quadcopter that can be flown around over WiFi from your iPhone, iPad or iPod Touch.

The accelerometers in the Apple device are used to fly the quadcopter around: you simply tip it in the direction you would like the AR Drone to fly.

The quadcopter has a sophisticated onboard processor which allows the AR Drone to maintain predictable flight. 
There is an ultrasonic sensor on the bottom to allow the height of the quadcopter to be easily maintained. 
Movement of the AR Drone is tracked by a downward-facing camera; by analyzing each passing frame, the drone can work out how far it has flown. This is similar to the technology used in an optical mouse to detect the direction and distance the mouse has moved.
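
Here is a toy version of that optical-mouse idea (my own sketch, not Parrot's actual algorithm): given two tiny grayscale frames, it searches for the pixel shift that best aligns them, and that shift is the displacement estimate.

# A toy version of the optical-mouse idea: compare two small grayscale
# frames and find the (dx, dy) shift that best aligns them. The real AR
# Drone does something far more sophisticated; these frames are made up.

def best_shift(prev, curr, max_shift=2):
    h, w = len(prev), len(prev[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err, n = 0, 0
            for y in range(h):
                for x in range(w):
                    sy, sx = y + dy, x + dx
                    if 0 <= sy < h and 0 <= sx < w:
                        err += (prev[y][x] - curr[sy][sx]) ** 2
                        n += 1
            if err / n < best_err:
                best_err, best = err / n, (dx, dy)
    return best

frame1 = [[0, 0, 0, 0], [0, 9, 9, 0], [0, 9, 9, 0], [0, 0, 0, 0]]
frame2 = [[0, 0, 0, 0], [0, 0, 9, 9], [0, 0, 9, 9], [0, 0, 0, 0]]
print(best_shift(frame1, frame2))   # -> (1, 0): the image moved one pixel right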

The front-facing camera allows the AR Drone to be flown out of visual range: you simply watch the action on the iPhone, iPad or iPod Touch.

Some of the AR Drones you see have colored bands; these enable fun augmented reality games where you can follow another drone and even make it look different.

Watch the video for some technical details of what is inside the AR Drone, and keep watching into the second half for some flight action.

The people flying the quadcopters are picked from the crowd, which shows how easy it is to just pick up and play.

I can see all sorts of applications beyond the fun flying aspects. Just imagine a security guard who needs to make long patrols: instead of walking a mile, he could make a quick flight around the site to make sure nothing is wrong.




DIY Pulse Laser Gun: Movies Become Real


Remember all the people warning you never to look directly at the sun because you will go blind!!

Well believe me when I say do not look into the barrel of this gun because it will make you blind.

This DIY Pulse Laser Gun was built by Patrick Priebe who you might remember from the Iron Man Repulsor Light Laser Glove Project.

This is what you get when you convert a large amount of stored energy into light in a fraction of a second.

Looks fun enough to build but I probably won’t since I would probably end up with a few holes in my hand or something.

It holds a small pulse laser head, capable of generating a megawatt pulse of coherent infrared light.

One shot, when focused, can punch through a razor blade, plastic, or 5 mm of styrofoam.
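
The "megawatt" figure is less mysterious than it sounds: peak power is just pulse energy divided by pulse duration. The numbers below are purely illustrative assumptions, not measurements from Patrick's laser.

# Rough arithmetic behind a "megawatt pulse": peak power is pulse energy
# divided by pulse duration. Both values here are assumed for illustration.

pulse_energy_j = 1.0        # assumed pulse energy in joules
pulse_duration_s = 1e-6     # assumed pulse duration: one microsecond

peak_power_w = pulse_energy_j / pulse_duration_s
print(f"peak power ~= {peak_power_w / 1e6:.1f} MW")   # -> 1.0 MW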

Effective range is about 3 m (on dark surfaces)…you will see a stinging flame and a 5 mm burn mark will remain on the target. The goal was to create a handheld device…AS COMPACT as possible.

It's 320 mm long and weighs about 2 pounds.

Materials used: plexiglass for the center plate, and brass/aluminum for the casing. Each and every part was handmade…it took about 70 hours of work.





Iron Man Laser Glove Becomes Real


If you have seen the Iron Man movie the image above is sure to be familiar to you.

Patrick from Germany shared this project with us in the Hacked Gadgets forum. We have seen other cool Iron Man Repulsor Light projects before, but as far as I know this is the first that is truly dangerous.

So a word of warning, do not attempt to copy this build unless you know what you are doing! Patrick already has plans for version 2 in his head, I look forward to seeing that in the future.

The goal was to create a hand-held laser…powerful…balloons pop across the room…cuts plastic…
I have made SOME laser-guns before, and the most “useless”, space-eating part was the grip.

So I had to get rid of it. I am a HUGE fan of the new Iron Man movies, so I decided to try my own design and make a glove. It took a whole weekend to make, and another 2 days for the paint-job (I made EVERYTHING myself…from metal-work, wiring, paint-job).

Please note: this is not a toy for kids to play with; it is more like a super weapon.

Technical info:

# Made of 2 mm brass sheet
# Constant-current LM317 driver
# 445 nm, 1000 mW laser diode
# 2x 3.7 V Li-ion cells (7.4 V total)
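
For the curious, the LM317 constant-current trick is simple: the regulator holds about 1.25 V across a single sense resistor, so the output current is 1.25/R. The target current below is an assumed example value, not necessarily what Patrick used.

# LM317 in its constant-current configuration: it regulates about 1.25 V
# across one sense resistor, so I = 1.25 / R. The drive current below is
# an assumption for illustration, not a value from this build.

V_REF = 1.25                 # LM317 reference voltage in volts
target_current_a = 1.0       # assumed drive current for the laser diode

sense_resistor_ohms = V_REF / target_current_a
resistor_power_w = V_REF * target_current_a   # power dissipated in the resistor
print(f"R = {sense_resistor_ohms:.2f} ohm, dissipating {resistor_power_w:.2f} W")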






3D iPad


This 3D iPad demo is a glimpse of the future of portable media. Laurence Nigay, a professor from France, was inspired by the 3D head-tracking work done by Johnny Lee, who now works for Google.

We track the head of the user with the front-facing camera in order to create a glasses-free monocular 3D display. Such a spatially aware mobile display expands the possibilities for interaction.

It does not use the accelerometers and relies only on the front camera.
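
The underlying geometry is straightforward: as the tracked head moves sideways, anything drawn "behind" the screen shifts by simple similar-triangle parallax. The sketch below is my own illustration with made-up numbers, not Nigay's code.

# A minimal sketch of head-coupled perspective: as the tracked head moves
# sideways, a point "behind" the screen is redrawn with a parallax shift
# from similar triangles. All numbers are illustrative assumptions.

def on_screen_x(head_x_cm, head_dist_cm, depth_behind_screen_cm):
    """Where a point directly behind the screen centre appears on screen."""
    return head_x_cm * depth_behind_screen_cm / (head_dist_cm + depth_behind_screen_cm)

for head_x in (-10.0, 0.0, 10.0):               # head moves left to right
    x = on_screen_x(head_x, head_dist_cm=40.0, depth_behind_screen_cm=20.0)
    print(f"head at {head_x:+.0f} cm -> point drawn at {x:+.2f} cm")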








Ant Navigation for Robots



Next time you find yourself lost despite having a map and satellite navigation, spare a thought for the unfortunate ant that must take regular trips home to avoid losing its way. Dr Markus Knaden, from the University of Zurich, will report that a visit back to the nest is essential for ants to reset their navigation equipment and avoid getting lost on foraging trips. "Knowledge about path integration and landmark learning gained from our experiments with ants has already been incorporated in autonomous robots. Including a 'reset' of the path integrator at a significant position could make the orientation of the robot even more reliable", says Dr Knaden who will speak on Tuesday 4th April at the Society for Experimental Biology's Main Annual Meeting in Canterbury, Kent [session A4]

Ants that return from foraging journeys can use landmarks to find their way home, but in addition they have an internal backup system that allows them to create straight shortcuts back to the nest even when the outbound part of the forage run was very winding. This backup system is called the 'path integrator' and constantly reassesses the ant's position using an internal compass and measure of distance travelled. Knaden and his colleagues hypothesised that because the path integrator is a function of the ant's brain, it is prone to accumulate mistakes with time. That is, unless it is regularly reset to the original error-free template; which is exactly what the researchers have found.
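
A toy simulation makes the point (my own sketch, not the researchers' model): a path integrator that accumulates a little compass noise on every step drifts steadily, while one whose error is periodically wiped out, as happens for the ant at the nest, stays accurate.

import math
import random

# A toy path integrator: the ant keeps a running (x, y) estimate of its
# position from heading and step length, with a little compass noise per
# step. Without an occasional reset, the error just keeps accumulating.

def forage(steps, reset_every=None):
    true_x = true_y = est_x = est_y = 0.0
    for i in range(1, steps + 1):
        heading = random.uniform(0, 2 * math.pi)
        true_x += math.cos(heading)
        true_y += math.sin(heading)
        noisy_heading = heading + random.gauss(0, 0.05)   # compass noise
        est_x += math.cos(noisy_heading)
        est_y += math.sin(noisy_heading)
        if reset_every and i % reset_every == 0:          # back at the nest:
            est_x, est_y = true_x, true_y                 # error wiped out
    return math.hypot(est_x - true_x, est_y - true_y)

random.seed(1)
print("final error, no reset:      ", round(forage(2000), 2))
print("final error, reset at nest: ", round(forage(2000, reset_every=300), 2))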

When they moved ants from a feeder back to a position either within the nest or next to the nest, they found that only those ants that were placed in the nest were able to set off again in the right direction to the feeder. Those left outside the nest set off in a feeder-to-home direction (i.e. away from the nest in completely the opposite direction to the source of food) as if they still had the idea of 'heading home' in their brains. "We think that it must be the specific behaviour of entering the nest and releasing the food crumb that is necessary to reset the path integrator", says Knaden. "We have designed artificial nests where we can observe the ants after they return from their foraging trips in order to test this."

What next? The group plan to study other ant species that live in landmark rich areas. "Maybe we will find that such ants rate landmarks more highly and use them, not the nest, to reset the path integrator", explains Knaden. 
A 'NASA explores' article on testing robots for farming



Tuesday, April 19, 2011

Sony AIBO Robot Dog



Following on from the sale of the first ever autonomous entertainment robot, the AIBO ERS-110, Sony now introduces a second-generation "AIBO", the ERS-210, which has a greater ability to express emotion for more intimate communication with people. It is available now, with no restriction on the number of units produced or the time period for orders: all customers ordering the "AIBO" ERS-210 will be able to purchase a unit.

The new AIBO has additional movement in both ears and an increased number of LEDs (face x 4, tail x 2) and touch sensors (head, chin, back), which means that it can show an abundant array of emotions such as "joy" and "anger". In order to increase interaction with people, the ERS-210 series' most distinctive feature, its autonomous robot technology (reacting to external stimuli and making its own judgements), which allows AIBO to learn and mature, has been enhanced. It now includes features frequently requested by AIBO owners, such as a Name Recording Function (recognizes its own name), Voice Recognition (recognizes simple words) and Photo Taking.

The technologies that allow the ERS-210 to communicate, such as the autonomous feature which gives AIBO the ability to learn and mature, plus the voice recognition technologies, will be available on a special flash memory AIBO Memory Stick software application (Autonomous AIBO-ware) called "AIBO Life" [ERF-210AW01] (*sold separately).

So that people can enjoy using AIBO in a variety of new ways, two additional software applications (AIBO-ware), the "Hello AIBO! Type A" [ERF-210AW02] demonstration software and the "Party Mascot" [ERF-210AW03] game software (*both sold separately), are also being introduced. A new line-up of AIBO accessories, such as a carrying case and software that enables owners to perform simple edits to AIBO's movements and tonal sounds on a home PC, will also be offered to personalize the way owners can enjoy interacting with their AIBO.

Main Features of "AIBO" ERS-210

Three Different Color Variations

The [ERS-210] is available in three colour variations (silver, gold and black) so customers can choose the one that suits them best.

Autonomous Robot AIBO - actions based on own judgement

When used with the Memory Stick application (AIBO-ware) "AIBO Life" (*sold separately) [ERF-210AW01], AIBO acts as a fully autonomous robot and can make independent decisions about its own actions and behavior. AIBO grows up with abundant individuality by interacting with its environment and communicating with people, responding to its own instincts such as "the desire to play with people" and "the desire to look for the objects it loves".

Enhanced Features to Express Emotions

When used in conjunction with "AIBO Life" (*sold separately) AIBO [ERS-210] owners can enjoy the following features to their full capacity:

Touch Sensors on the head, chin and back
In addition to the sensor on the head, new touch sensors have been added to the back and under the chin for more intimate interaction with people.
20 Degrees of Freedom
A greater variety of expressions is possible due to an increase in the degrees of freedom of movement, from 18 on the [ERS-110] and [ERS-111] (mouth x 1, head x 3, tail x 2, legs x 3 each) to 20 degrees of freedom on the [ERS-210], with new movement added to the ears.
LED on the Tail
In addition to the LEDs (light-emitting diodes) on the face, LEDs have been added to the tail. A total of 4 LEDs on the face (expressing emotions such as "joy" and "anger") plus 2 on the tail (expressing emotions like "anxiety" and "agitation") allow AIBO to express a greater variety of emotions.

Enhanced Communication Ability with New Advanced Features

When used in conjunction with "AIBO Life" (*sold separately), the AIBO [ERS-210] has the following features:
Personalized Name (name recording & recognition)
Owners can record a personal name for AIBO, and it will respond to this name with actions and a special electronic sound.
Word Recognition (voice recognition function)
Depending on AIBO's level of development and maturity, the number of words and numbers AIBO can recognize will change as it grows up, until it can recognize about 50 simple words. In response to the words it recognizes, AIBO will perform a variety of actions and emit electronic sounds.
Response to Intonation of Speech (synthetic AIBO language)
When spoken to, AIBO can imitate the intonation (musical scale) of the words it hears using its own "AIBO language" (tonal electronic language).

Photo Taking Function
If used in conjunction with "AIBO Life" and "AIBO Fun Pack" software applications (*both sold separately) AIBO will take a photograph of what it can see using a special colour camera when it hears someone say "Take a photo". Using "AIBO Fun Pack" software [ERF-PC01] photographs taken by AIBO can be seen on a home PC screen.

Wireless LAN Card
By purchasing a separate IEEE802.11b Wireless LAN card [ERA-201D1], inserting it into the PC card slot and using the "AIBO Master Studio" software (*sold separately), the movements and sounds AIBO makes can be created on a home PC and sent wirelessly to control AIBO's movements.
Other Features
Open-R v1.1 architecture
Uses Sony's original real-time Operating System "Aperios".
The head and legs can be removed from the body and changed. 
The Official Sony AIBO Website
http://www.aibo-europe.com




ASIMO Robot


From Honda Motor Co. comes a new small, lightweight humanoid robot named ASIMO that is able to walk in a manner which closely resembles that of a human being.

One area of Honda's basic research has involved the pursuit of developing an autonomous walking robot that can be helpful to humans as well as be of practical use in society. Research and development on this project began in 1986. In 1996 the prototype P2 made its debut, followed by P3 in 1997.

"ASIMO" is a further evolved version of P3 in an endearing people-friendly size which enables it to actually perform tasks within the realm of a human living environment. It also walks in a smooth fashion which closely resembles that of a human being. The range of movement of its arms has been significantly increased and it can now be operated by a new portable controller for improved ease of operation.

ASIMO Special Features:
Smaller and Lightweight
More Advanced Walking Technology
Simple Operation
Expanded Range of Arm Movement
People-Friendly Design

Small & Lightweight Compared to P3, ASIMO's height was reduced from 160cm to 120cm and its weight was reduced from 130kg to a mere 43kg. A height of 120cm was chosen because it was considered the optimum to operate household switches, reach doorknobs in a human living space and for performing tasks at tables and benches. By redesigning ASIMO's skeletal frame, reducing the frame's wall thickness and specially designing the control unit for compactness and light weight, ASIMO was made much more compact and its weight was reduced to a remarkable 43kg.

Advanced Walking Technology Predicted Movement Control (for predicting the next move and shifting the center of gravity accordingly) was combined with existing walking control know-how to create i-WALK (intelligent real-time flexible walking) technology, permitting smooth changes of direction. Additionally, because ASIMO walks like a human, with instant response to sudden movements, its walking is natural and very stable.

Simple Operation To improve the operation of the robot, flexible walking control and button operation (for gesticulations and hand waving) can be carried out by either a workstation or from the handy portable controller.

Expanded Range of Movement By installing ASIMO's shoulders 20 degrees higher than on P3, elbow height was increased to 15 degrees over horizontal, allowing a wider range of work capability. Also, ASIMO's range of vertical arm movement has been increased to 105 degrees, compared to P3's 90-degree range.

People-Friendly Design In addition to its compact size, ASIMO features a people-friendly design that is attractive in appearance and easy to live with.

About the Name
ASIMO is an abbreviation for "Advanced Step in Innovative Mobility"; revolutionary mobility progressing into a new era.

Specifications
Weight: 43kg
Height: 1,200mm
Depth: 440mm
Width: 450mm
Walking Speed: 0 - 1.6km/h
Operating Degrees of Freedom*
Head: 2 degrees of freedom
Arm: 5 x 2 = 10 degrees of freedom
Hand: 1 x 2 = 2 degrees of freedom
Leg: 6 x 2 = 12 degrees of freedom
TOTAL: 26 degrees of freedom
Actuators: Servomotor + Harmonic Speed Reducer + Drive ECU
Controller: Walking/Operation Control ECU, Wireless Transmission ECU
Sensors:
Foot: 6-axis sensor
Torso: Gyroscope & Acceleration Sensor
Power Source: 38.4V/10AH (Ni-MH)
Operation: Work Station & Portable Controller



















Monday, April 18, 2011

Robots Can Be Full of Love



Whether they are assisting the elderly or simply popping human skulls like ripe fruit, robots aren't usually known for their light touch. And while this may be fine as long as they stay relegated to cleaning floors and assembling cars, as robots perform more tasks that put them in contact with human flesh, be it surgery or helping the blind, their touch sensitivity becomes increasingly important.

Thankfully, researchers at the University of Ghent, Belgium, have solved the problem of delicate robot touch.

Instead of the mechanical sensors currently used to give robots a sense of touch, the Belgian researchers used optical sensors to measure the feedback. Under the robot skin, they created a web of optical beams. Even the faintest break in those beams registers in the robot's computer brain, making the skin far more sensitive than mechanical sensors, which are prone to interfering with each other.
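
The principle is easy to sketch (this is my own toy example, not the Ghent implementation): run beams along rows and columns under the skin, and a touch reveals itself as whichever row beam and column beam go dark; their intersection is the touch location.

# A toy of the "web of optical beams" idea: beams run along rows and
# columns under the skin, and a touch shows up as whichever row beam and
# column beam are interrupted. The layout and values here are made up.

def locate_touch(row_beams, col_beams):
    """Each list holds True where the beam still arrives, False where blocked."""
    blocked_rows = [i for i, ok in enumerate(row_beams) if not ok]
    blocked_cols = [j for j, ok in enumerate(col_beams) if not ok]
    return [(r, c) for r in blocked_rows for c in blocked_cols]

rows = [True, True, False, True]        # row beam 2 interrupted
cols = [True, False, True, True, True]  # column beam 1 interrupted
print(locate_touch(rows, cols))         # -> [(2, 1)]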

Robots like the da Vinci surgery station already register feedback from touch, but a coating of this optical sensor-laden skin could vastly enhance the sensitivity of the machine. Additionally, a range of Japanese robots designed to help the elderly could gain a lighter touch with their sensitive charges if equipped with the skin.

Really, any interaction between human flesh and robot surfaces could benefit from the more lifelike touch provided by this sensor array. And to answer the question you're all thinking but won't say: yes. But please, get your mind out of the gutter. This is a family site.



Honda U3-X: No More Traffic


It's a nifty little device: essentially a sit-down Segway unicycle that looks like a figure-8-shaped boombox, with a pop-out seat and footrests.
The machine balances itself, with or without a rider. 

You move and steer by leaning where you want to go: forward, backward and, in a unique twist, side to side. That's thanks to an impressive new wheel Honda has developed, which is actually constructed from a bunch of much smaller wheels that rotate perpendicular to the main wheel. Balancing is very easy and intuitive, possibly too much so, as overconfidence can lead to a sideways pratfall.
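
Conceptually, the control mapping is simple: lean angle becomes a velocity command on each axis, with a small dead band so standing still is easy. The gains and limits below are invented for illustration and are not Honda's values.

# A rough sketch of lean-to-motion control on an omnidirectional wheel:
# pitch (lean forward/back) maps to forward speed, roll (lean sideways)
# maps to lateral speed, with a dead band around upright. Gains and limits
# are assumptions for illustration only.

def lean_to_velocity(pitch_deg, roll_deg, gain=0.3, dead_band=1.0, max_kmh=6.0):
    def channel(angle):
        if abs(angle) < dead_band:
            return 0.0
        return max(-max_kmh, min(max_kmh, gain * angle))
    return channel(pitch_deg), channel(roll_deg)   # (forward, sideways) km/h

print(lean_to_velocity(5.0, 0.5))    # lean forward only  -> (1.5, 0.0)
print(lean_to_velocity(-2.0, 8.0))   # lean back and left -> (-0.6, 2.4)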


It's pretty compact and weighs in at roughly 22 pounds, which makes it easy to pick up by the handle and lug up a flight of stairs. But the fastest it'll go is about 4 miles an hour, just a brisk walking pace, and the lithium-ion battery runs for just about an hour, so it's hard to imagine what the potential market for this thing would be.

A Honda spokesman suggested it could be used by security guards who need to patrol a site, or rented out to museumgoers so they can browse from painting to painting for an hour or so without tiring their tootsies, although the high-pitched vacuum-cleaner-like whine from the motor might be a bit distracting to other art lovers.

Probably the most likely nearish-term use of the technology on display here would be to re-purpose the innovative wheels onto conventional wheelchairs, allowing for far greater lateral mobility. For now, Honda's got no plans to bring this to market, and no guess at what the price would be if and when it did.

Honda's Omni Traction Drive System: the wheel of the U3-X is made up of a series of smaller independent wheels that rotate perpendicular to the main one. (Image: Honda)

via popsci






Sunday, April 17, 2011

Swarm Robot

Use Microsoft Surface to Control Robots With Your Fingertips

This sharp-looking tabletop touchscreen can be used to command robots and combine data from various sources, potentially improving military planning, disaster response and search-and-rescue operations.

Mark Micire, a graduate student at the University of Massachusetts-Lowell, proposes using Surface, Microsoft's interactive tabletop, to unite various types of data, robots and other smart technologies around a common goal. It seems so obvious and so simple, you have to wonder why this type of technology is not already widespread.

In defending his graduate thesis earlier this week, Micire showed off a demo of his swarm-control interface, which you can watch below.

You can tap, touch and drag little icons to command individual robots or robot swarms. You can leave a trail of crumbs for them to follow, and you can draw paths for them in a way that looks quite like Flight Control, one of our favorite iPod/iPad games. To test his system, Micire steered a four-wheeled vehicle through a plywood maze.
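
To get a feel for the draw-a-path interaction, here is a minimal sketch (mine, not Micire's actual system): the finger stroke becomes a list of waypoints, and each simulated robot simply drives toward the next waypoint until it gets close enough.

import math

# A minimal sketch of the draw-a-path idea: the finger stroke is sampled
# into waypoints, and a simulated robot drives toward each waypoint in
# turn until it is within a tolerance. Values are illustrative.

def follow_path(start, waypoints, speed=0.5, tolerance=0.3):
    x, y = start
    trace = [(x, y)]
    for wx, wy in waypoints:
        while math.hypot(wx - x, wy - y) > tolerance:
            heading = math.atan2(wy - y, wx - x)
            x += speed * math.cos(heading)
            y += speed * math.sin(heading)
            trace.append((round(x, 2), round(y, 2)))
    return trace

stroke = [(2, 0), (4, 1), (4, 3)]        # finger stroke sampled into waypoints
print(follow_path((0, 0), stroke)[-1])   # robot ends near the stroke's last point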

Control This Robot With a Touchscreen: Mark Micire/UMass Lowell Robotics Lab
The system can integrate a variety of data sets, like city maps, building blueprints and more. You can pan and zoom in on any map point, and you can even integrate video feeds from individual robots so you can see things from their perspective.

As Micire describes it, current disaster-response methods can’t automatically compile and combine information to search for patterns. A smart system would integrate data from all kinds of sources, including commanders, individuals and robots in the field, computer-generated risk models and more.

Emergency responders might not have the time or opportunity to get in-depth training on new technologies, so a simple touchscreen control system like this would be more useful. At the very least, it seems like a much more intuitive way to control future robot armies.