If you train robots like dogs, they learn faster

Instead of needing a month of practice, a robot mastered new “tricks” in just days using reinforcement learning.

Treats-for-tricks works for training dogs — and apparently AI robots, too.

That’s the takeaway from a new study out of Johns Hopkins, where researchers developed a training system that allows a robot to quickly learn multi-step tasks in the real world by mimicking the way dogs learn new tricks.

Reinforcement Learning

One day, AI robots could clean our homes, care for our elderly, and do all of the dull, dirty, and dangerous jobs we don’t want to do.

But the real world is complicated. Developers will need to train robots to learn on the job — it’d be impossible to program a dish-cleaning robot to recognize every possible dirty dish, for example, but it still needs to know what to do when an unfamiliar one turns up in the sink.

One way developers train AIs is by letting them explore a virtual world and “rewarding” them when they do something right. This technique is called reinforcement learning, and it’s not unlike how we train dogs — they do a trick, they get a treat.

While it can be effective, reinforcement learning can also be time-consuming — the AI might try a lot of things before landing on the reward-worthy trick.
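In code, the treats-for-tricks loop can be sketched as a toy reinforcement learner: an agent tries actions, gets a reward only for the right one, and gradually settles on it. The action names and numbers below are illustrative only, not from the JHU study.

```python
import random

ACTIONS = ["spin", "sit", "roll_over"]
REWARDING_ACTION = "sit"  # only this trick earns a "treat"

def reward(action):
    return 1.0 if action == REWARDING_ACTION else 0.0

def train(episodes=500, epsilon=0.1, lr=0.5, seed=0):
    rng = random.Random(seed)
    values = {a: 0.0 for a in ACTIONS}  # estimated reward per action
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit the best-known action,
        # occasionally explore a random one
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)
        else:
            action = max(values, key=values.get)
        # nudge the running estimate toward the observed reward
        values[action] += lr * (reward(action) - values[action])
    return values

values = train()
best = max(values, key=values.get)  # the learner converges on "sit"
```

Note the time cost the article describes: before the learner stumbles onto the rewarding action during exploration, every other trial teaches it nothing.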

To overcome this limitation, the JHU team developed a new reinforcement learning framework they call Schedule for Positive Task (SPOT).

“The question here was how do we get the robot to learn a skill?” lead author Andrew Hundt said in a press release. “I’ve had dogs so I know rewards work and that was the inspiration for how I designed the learning algorithm.”

See SPOT Stack

In the SPOT framework, the robot’s “reward” isn’t a tasty treat but numerical points. The “trick,” meanwhile, is stacking multiple blocks on top of one another.

One way to speed up training, the researchers discovered, was to reward their AI for completing “subtasks.” This is the equivalent of training a dog to sit and giving it a treat when it starts to lower its rear: the dog hasn’t done exactly what you wanted, but it’s on the right path.
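This partial-credit idea is known in reinforcement learning as reward shaping. A minimal sketch for block stacking might look like the following; the reward values and the three-block goal are illustrative, not the study’s actual numbers.

```python
def shaped_reward(blocks_stacked, goal=3):
    """Pay partial credit for progress toward a full stack."""
    if blocks_stacked >= goal:
        return 1.0                       # full trick: big treat
    return 0.5 * blocks_stacked / goal   # partial progress: partial treat

def sparse_reward(blocks_stacked, goal=3):
    """By contrast, a sparse reward pays nothing until the task is done."""
    return 1.0 if blocks_stacked >= goal else 0.0
```

With the sparse version, stacking one or two blocks earns the robot nothing, so it gets no signal that it is on the right path; the shaped version rewards every step in the right direction.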

It also helped if the AI lost points for doing something that negated its previous progress, like knocking over the blocks after stacking them — this is called “progress reversal.”
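A progress-reversal penalty can be sketched by rewarding the change in stack height between steps, so that knocking blocks over actively costs points. Again, the specific values here are hypothetical, not the study’s reward scheme.

```python
def step_reward(prev_height, new_height, goal=3):
    """Score one action by how it changed the stack."""
    if new_height >= goal:
        return 1.0                 # task complete: full reward
    delta = new_height - prev_height
    if delta < 0:
        return 2.0 * delta         # reversal: lose double the progress
    return 0.1 * delta             # small bonus for forward progress
```

Because undoing progress is penalized more heavily than making it is rewarded, the learner is steered away from actions, like toppling its own stack, that would otherwise look harmless.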

They also coded some common sense into the AI, pre-programming it with intuitions to avoid wasting time on dead ends and recognize what it was supposed to do more quickly.

“[G]rasping at thin air isn’t worth a robot’s time, but [since] robots learn through trial and error, they would not typically have this intuition, until now,” Hundt told Freethink. “We have developed a practical way for the robot to incorporate this common sense knowledge into a safety check, which skips the actions which are definitely not worth trying.”
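One common way to implement such a safety check is an action mask: before the learner scores its options, any grasp aimed at empty space is filtered out, so no trial-and-error time is wasted on it. The grid cells and names below are hypothetical, not the study’s actual implementation.

```python
def mask_actions(candidate_grasps, occupancy):
    """Keep only grasps that target a cell known to contain a block."""
    return [g for g in candidate_grasps if occupancy.get(g, False)]

# Hypothetical workspace: two cells hold blocks, the rest are empty air.
occupancy = {(0, 0): True, (1, 2): True, (3, 3): False}
grasps = [(0, 0), (1, 2), (3, 3), (5, 5)]

allowed = mask_actions(grasps, occupancy)  # grasps at empty cells dropped
```

The learner then explores only among `allowed`, which shrinks the search space before any trial-and-error begins.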

The Future of the SPOT Framework

In total, their framework allowed them to train an actual robot — not just an AI in a virtual world — to accurately complete multi-step tasks much faster than another common reinforcement learning method.

“(The robot) quickly learns the right behavior to get the best reward,” Hundt said in the press release. “In fact, it used to take a month of practice for the robot to achieve 100% accuracy. We were able to do it in two days.”

His hope is that the SPOT framework might one day help AI developers train robots to do things far more complicated than stacking blocks.

“We believe that with further development, this technology has the potential to change a variety of industries for the better, from home care and surgery to warehousing and even self-driving cars,” he told Freethink.

