One robot was able to watch another bot and predict its actions

This could be the first demonstration of a robot showing basic empathy.

We humans start out as self-centered little things. It’s only around the age of three that most of us realize that other people have feelings, wants, and needs different from our own.

Not long after developing that skill — called “theory of mind” — we learn another: empathy. That’s the ability to put ourselves in another’s shoes, to understand their perspective even when it differs from our own.

It turns out, robots may be capable of displaying a kind of empathy, too — a discovery that could help teams of bots better serve us in the future.

Empathetic Robots

Many experts predict that we’re headed toward a future in which scores of AI robots live among us — they’ll be our colleagues, our cooks, our maids, and even our drivers.

If those robots know to some degree what the bots around them are going to do, it will help them work together and also stay out of one another’s way.

“Self-driving cars, for example, can better plan ahead if they can understand what other autonomous vehicles will do next,” Columbia University engineer Boyuan Chen told Freethink. “When two robots are tasked to assemble a table, if one anticipates that the other is going to put on the leg, it can help by picking up the table leg outside the reachable space of that robot.”

But training every robot to anticipate every other robot's actions in every situation wouldn't be feasible, and equipping every bot with the systems needed for real-time communication with all the others would be expensive.

If a robot were able to demonstrate theory of mind and empathize with other robots — naturally putting itself in their shoes and predicting their actions — it could learn how to work within the larger network just by observing.

Now, a new study by Chen and his colleagues suggests that empathy between robots may be possible.

Predicting the Future

For their study, the Columbia researchers started by building a six-square-foot “playpen” for the robots.

One of the bots could roll around the playpen on its wheels and was trained to move toward any green circle it saw on the floor.

However, a red cube in the playpen would sometimes block the robot’s view of a green circle. In those instances, the bot either wouldn’t move, or it would move toward a different circle that it could see.

The other robot in the experiment was positioned above the center of the playpen. It couldn’t move, but it could see everything happening down below: the other robot, the cube, and every green circle.

For two hours, the “observer” robot watched the bot below as it rolled toward one green circle after another or stood still.

After that, it was able to predict the path of its partner robot 98 out of 100 times — even though it had never been told that the robot was programmed to move toward green circles or that it couldn’t see past the red cube.
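The study itself trained a deep network to predict the actor's future path from raw video, but the core idea — inferring another agent's behavior purely from observed examples, without ever being told its rules — can be illustrated with a toy sketch. Everything below is an assumption for illustration: the scene geometry, the occlusion test, and the nearest-neighbour "observer" are stand-ins, not the researchers' method. A hidden policy steers an actor toward the nearest circle not blocked by a cube; the observer sees only (scene, outcome) pairs and learns to predict the choice by recalling similar scenes.

```python
import math, random

random.seed(0)

def seg_dist(p, a, b):
    """Distance from point p to the segment a-b (a crude occlusion test)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def actor_choice(actor, circles, cube):
    """The actor's hidden policy: head for the nearest circle the cube doesn't block."""
    visible = [i for i, c in enumerate(circles)
               if seg_dist(cube, actor, c) > 0.4]
    if not visible:
        return -1  # nothing visible: stay still
    return min(visible, key=lambda i: math.hypot(actor[0] - circles[i][0],
                                                 actor[1] - circles[i][1]))

def random_scene():
    """One random playpen layout plus the actor's (unobserved-rule) decision."""
    pt = lambda: (random.uniform(0, 6), random.uniform(0, 6))
    actor, cube = pt(), pt()
    circles = [pt(), pt()]
    features = [*actor, *cube, *circles[0], *circles[1]]
    return features, actor_choice(actor, circles, cube)

# The observer never sees actor_choice -- only these example outcomes.
train = [random_scene() for _ in range(2000)]
test = [random_scene() for _ in range(200)]

def predict(x):
    """1-nearest-neighbour: recall the outcome of the most similar past scene."""
    best = min(train, key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], x)))
    return best[1]

accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"observer prediction accuracy: {accuracy:.2f}")
```

Even this simplistic memorize-and-recall observer predicts the actor's choice well above chance, which is the spirit of the experiment: the rule ("go to the nearest visible green circle") is never stated, yet it becomes predictable from observation alone.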

“Our findings begin to demonstrate how robots can see the world from another robot’s perspective,” Chen said in a press release.

“The ability of the observer to put itself in its partner’s shoes, so to speak, and understand, without being guided, whether its partner could or could not see the green circle from its vantage point, is perhaps a primitive form of empathy,” he continued.

