
Robots learn faster with human help

Debora Pape
4.2.2025
Translation: machine translated

In the real world, robots have to react to unpredictable events. A team of researchers has developed a training method that enables robots to do this better and faster.

Researchers at the University of California, Berkeley, have developed an efficient learning method for robots. The aim is for robots to learn more quickly how to perform tasks in the real world that require dexterity and precision. Machines trained in this way can, for example, assemble Ikea shelves, flip a fried egg by flinging it into the air or use a whip to knock individual blocks out of a Jenga tower with precision.

"But robots have been able to assemble cars for decades," you might be thinking. Yes, because they are programmed for the individual steps and always follow the same programme sequences. However, they cannot react to changing circumstances or take on new tasks without detailed instructions.

The new method is called "Human-in-the-Loop Sample-Efficient Robotic Reinforcement Learning", or HIL-SERL for short. It combines reinforcement learning, i.e. learning by trial and error, with human feedback and the imitation of human demonstrations. Because people remain involved throughout the training, the approach is called "human in the loop".
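To illustrate the basic idea, here is a minimal sketch in Python of a human-in-the-loop training loop. It is not the researchers' HIL-SERL implementation: the toy environment, the tabular Q-learning agent, the simulated "human" who overrides the agent's actions and the schedule for how often that happens are all simplified assumptions. What it does show is the core mechanism described above: the agent learns from whichever action is actually executed, whether it chose it itself or a human corrected it, and the human's interventions become rarer as training progresses.

```python
# Minimal human-in-the-loop RL sketch (illustrative only, not the HIL-SERL code).
# A Q-learning agent moves along a 1-D line towards a goal. A simulated "human"
# can override the agent's action with a correction; the probability of
# intervening decays over training, mirroring the idea that the robot needs
# close supervision only at the start.

import random

N_STATES = 10          # positions 0..9, goal at position 9
ACTIONS = [-1, +1]     # move left or right
EPISODES = 300
ALPHA, GAMMA, EPSILON = 0.5, 0.95, 0.1

q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}


def human_correction(state):
    """Simulated human operator: always points towards the goal."""
    return +1 if state < N_STATES - 1 else -1


def step(state, action):
    """Toy environment: reward 1 only when the goal is reached."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    done = next_state == N_STATES - 1
    reward = 1.0 if done else 0.0
    return next_state, reward, done


for episode in range(EPISODES):
    state = 0
    # The human intervenes often at the beginning, then hands over control.
    intervention_rate = max(0.0, 1.0 - episode / 100)
    done = False
    while not done:
        # The agent proposes an action (epsilon-greedy).
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])
        # Human in the loop: occasionally override with a correction.
        if random.random() < intervention_rate:
            action = human_correction(state)
        next_state, reward, done = step(state, action)
        # Standard Q-learning update on whatever action was actually executed.
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += ALPHA * (
            reward + GAMMA * best_next - q_table[(state, action)]
        )
        state = next_state

# After training, the learnt policy should walk straight to the goal.
print([max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(N_STATES)])
```

In the real system, the policy is a neural network trained on camera images and the human gives corrections with an input device rather than a hard-coded rule, but the structure of the loop is the same.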

Learning in the real world is more complex

The difficulty with learning in the real world lies in its variability. Physics is a major factor: to flip a fried egg, the AI has to take forces and masses into account, and the egg's position in the pan matters just as much as its size and shape. The robots used by the researchers are therefore equipped with a camera.

Another example is Jenga whipping, a trend in which skilful people use a whip to knock individual wooden blocks out of the tower. For the robot to do the same, it has to hit exactly the right spot, judge the movement of the whip and strike with the right force. The researchers use Jenga whipping purely as a test of dexterity for the robot.

Another problem is that training scenarios in the real world cannot be repeated as quickly as a virtual chess game. If the fried egg falls on the floor, the robot needs a new egg. If the Jenga tower topples over, someone has to rebuild it. This makes training time-consuming and expensive.

Humans help the robot to learn

That's why the researchers bring humans into the training. They can take control of the robot with a special mouse and show it which strategies it should try, and they evaluate its attempts and give it feedback. As a result, the robot needs close supervision only at the beginning, to get it on the right track; after that, it manages with less and less intervention. By the end of the training, the robot achieves a 100 per cent success rate. The researchers have published videos of these experiments.

Practical tasks are also among the skills the robot picks up after a short time: among other things, it can assemble an Ikea shelf, mount a toothed belt on rollers and fit components onto a computer motherboard, which it then tests for function.

The researchers deliberately incorporate disruptions into the learning process, such as moving objects or causing the robot to drop them. The robot learns to react to these unexpected situations and still perform its task.

The study is basic research: it is intended to show that HIL-SERL can be applied to many different kinds of tasks. The results should make it easier to develop robust and versatile robots.

Header image: UC Berkeley


Feels just as comfortable in front of a gaming PC as she does in a hammock in the garden. Likes the Roman Empire, container ships and science fiction books. Focuses mostly on unearthing news stories about IT and smart products.

