These robots are teaching themselves to walk, run and jump 

This year marks 20 years since IBM's Deep Blue computer beat chess world champion Garry Kasparov in a six-game match. It was undoubtedly a milestone: for years, chess had been the yardstick by which progress in artificial intelligence (AI) was measured.

Honda's Asimo humanoid robot runs during a presentation near Brussels in July 2014. Honda says the improved Asimo has enhanced intelligence and hand dexterity, and can run at about 9 kilometres per hour (5.6 miles per hour).

Image: REUTERS/Francois Lenoir

There have been predictions of a robo-future where machines move like humans. In reality, in the two decades since Deep Blue, machine intelligence has advanced faster and further than machine mobility. The robot Olympics is probably still a long way off.

Alphabet, Google’s parent company, is one of the organizations sinking significant time and money into developing robots’ physical intelligence. Its AI subsidiary, DeepMind, has produced artificially intelligent agents that can walk, run and jump in simulated environments. Crucially, they can learn to do this by themselves, without prior guidance.

World chess champion Garry Kasparov reaches to make a move early in game six of his $1.1 million re-match against IBM's Deep Blue supercomputer in New York, May 1997. Deep Blue's overall victory was the first time a computer had beaten a reigning world champion in a classical match; Kasparov resigned after 19 moves.

Image: Reuters/Peter Morgan

In a series of three papers, DeepMind researchers have demonstrated how simulated robots can use AI to adapt and respond to various obstacles in a virtual environment.

In the first paper, the researchers show how they got a variety of simulated robot types to learn to jump, turn and crouch without specific instructions to do so. The simulated agents were given only high-level objectives, such as moving forwards without falling.
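To give a flavour of what "only high-level objectives" means, here is a minimal, hypothetical sketch of such a reward signal. The function names, thresholds and values are illustrative assumptions, not DeepMind's actual implementation: the agent is scored on forward progress and on not falling over, with nothing said about how to move its limbs.

```python
# Hypothetical sketch of a high-level objective of the kind described:
# reward forward progress, penalize falling, say nothing about gait.
def reward(forward_velocity, torso_height, fallen_threshold=0.3):
    """Return a scalar reward for one simulation step.

    forward_velocity: metres/second in the desired direction (assumed input).
    torso_height: height of the agent's torso; below the threshold
    the agent is treated as having fallen (illustrative threshold).
    """
    if torso_height < fallen_threshold:
        return -1.0  # fell over: fixed penalty
    return forward_velocity  # otherwise, reward forward motion


# Upright and moving forward earns positive reward; fallen earns a penalty.
print(reward(2.0, 1.0))   # moving forward while upright
print(reward(2.0, 0.1))   # fallen, regardless of speed
```

Trained against rewards like this alone, a reinforcement-learning agent can discover jumping, turning and crouching as side effects of making progress, which is the behaviour the paper reports.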

The second paper shows how this movement learning can be applied to more human-like robots, using motion-capture data of human behaviour to pre-learn certain skills, such as walking, getting up from the ground, running and turning. These skills can then be reused to overcome other virtual obstacles, so the humanoid AI can learn to climb stairs or navigate walled corridors.

The final paper shows how the researchers developed a model that learns the relationships between particular behaviours and can imitate actions it is shown. This means, for example, it can switch between different walking styles and adapt its movements, despite never having been shown how to.
