Deep learning vs. machine learning: Explained

Both are powerful forms of AI, but one’s more mysterious than the other.

Well, you clicked this, so obviously you’re interested in some of the finer nuances of artificial intelligence. Little wonder; it’s popping up everywhere, taking on applications as far-ranging as catching asymptomatic COVID-19 infections from the sound of a cough, mapping wildfires faster, and beating up on esports pros.

It also listens when you ask Alexa or summon Siri, and unlocks your phone with a glance.

But artificial intelligence is an umbrella term, and when we start moving down the specificity chain, things can get confusing — especially when the names are so similar, e.g. deep learning vs. machine learning.

Deep Learning vs. Machine Learning

Let’s draw the distinction between deep learning and machine learning, because the two are closely related. Machine learning is the broader category here, so let’s define that first.

Machine learning is a field of AI wherein a program “learns” from data. It existed on paper in the 1950s and in rudimentary forms by the 1990s, but only recently has the computing power it needs to really shine been available.

That learning data can come from a large set labeled by humans, called a ground truth, or it can be generated by the AI itself.

For example, to train a machine learning algorithm to recognize what is and isn’t a cat (you knew the cat was coming), you could feed it an immense collection of images, labeled by humans as cats, to act as the ground truth. By churning through it all, the AI learns what makes something a cat and what doesn’t, and can then identify cats on its own.
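If you want a rough sense of what that looks like in code, here’s a minimal sketch in Python using scikit-learn. The features and labels are invented purely for illustration; a real system would learn from thousands of human-labeled images, not six hand-typed rows.

```python
# A toy sketch of supervised machine learning, not a real cat detector:
# the feature vectors and labels below are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Pretend each image has been boiled down to two numeric features
# (say, "ear pointiness" and "whisker density").
X = np.array([
    [0.9, 0.8], [0.85, 0.9], [0.8, 0.75],   # cats
    [0.2, 0.1], [0.3, 0.15], [0.1, 0.2],    # not cats
])
y = np.array([1, 1, 1, 0, 0, 0])  # human-supplied labels: the "ground truth"

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0
)

model = LogisticRegression()
model.fit(X_train, y_train)            # "learning" from the labeled data

print(model.predict([[0.88, 0.82]]))   # likely [1], i.e. "cat"
print(model.score(X_test, y_test))     # accuracy on held-out examples
```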

The key difference between deep learning and machine learning is that deep learning is a specific form of machine learning powered by what are called neural nets.

As their name suggests, neural nets are inspired by the human brain. Between your ears, neurons work in concert; a deep learning algorithm does essentially the same thing. It uses neural networks with multiple layers of artificial neurons to process information, delivering, from deep within this complicated system, the output we ask of it.
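As a rough illustration (not how any production system is built), here’s what “multiple layers” looks like in plain Python: each layer transforms the output of the one before it, and the answer only emerges after the input has passed through the whole stack. The layer sizes and random weights here are made up; a real network learns its weights from data.

```python
# A toy forward pass through a small multi-layer network, using plain NumPy.
# The weights are random, so the output is meaningless; real deep learning
# adjusts these weights over many training passes.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

# Three layers of weights: input (4 features) -> 8 units -> 8 units -> 1 output.
layers = [
    rng.normal(size=(4, 8)),
    rng.normal(size=(8, 8)),
    rng.normal(size=(8, 1)),
]

def forward(x):
    """Pass the input through each layer in turn; the depth is the number of layers."""
    for w in layers[:-1]:
        x = relu(x @ w)        # each hidden layer transforms the previous one's output
    return x @ layers[-1]      # the final layer produces the answer we asked for

sample = np.array([0.5, -1.0, 0.3, 2.0])
print(forward(sample))         # a single number, produced deep inside the stack
```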

Take the computer program AlphaGo. By playing the strategy board game Go against itself countless times, AlphaGo developed its own unique playing style. Its technique was so unsettling and alien that during a game against Lee Sedol, one of the best Go players in the world, it made a move so discombobulating that Sedol had to leave the room. When he returned, he took another 15 minutes to think of his next step.

He has since announced his retirement. “Even if I become the number one, there is an entity that cannot be defeated,” Sedol told Yonhap News Agency.

Notice how Sedol called AlphaGo an “entity”? That’s because it didn’t play like a run-of-the-mill Go program, or even a typical AI. It made itself into something … else.

Deep learning systems like AlphaGo are, well, deep. And complex. They create programs we really do call entities because they take on a “thinking” pattern so complex that we don’t know how they arrive at their output. In fact, deep learning is often referred to as a “black box.”

The Black Box Problem

Since deep learning neural nets are so complex, they can actually become too complex to comprehend: we know what we put into the AI and we know what it gives us, but in between, we don’t know how it arrived at that output. That’s the black box.

This may not seem too concerning when the AI in question is recognizing your face to open your iPhone, but the stakes are considerably higher when it’s recognizing your face for the police. Or when it’s trying to determine a medical diagnosis. Or when it’s keeping autonomous vehicles safely on the road. While not necessarily dangerous, black boxes pose a problem in that we don’t know how these entities arrive at their decisions, and if the medical diagnosis is wrong or the autonomous vehicle goes off the road, we may not know exactly why.

Does this mean we shouldn’t use black boxes? Not necessarily. Deep learning experts are divided on how to handle the black box.

Some researchers, like Auburn University computer scientist Anh Nguyen, want to crack open these boxes and figure out what makes deep learning tick. Meanwhile, Duke University computer scientist Cynthia Rudin thinks we should focus on building AI that doesn’t have a black box problem in the first place, like more traditional algorithms. Still other computer scientists, like the University of Toronto’s Geoff Hinton and Facebook’s Yann LeCun, think we shouldn’t be worried about black boxes at all. Humans, after all, are black boxes as well.

It’s a problem we’ll have to wrestle with, because it can’t really be avoided; more complex problems require more complex neural nets, which means more black boxes. In deep learning vs. machine learning, the former’s going to wipe the floor with the latter when problems get tough, and it uses that black box to do so.

As Nguyen told me, there’s no free lunch when it comes to AI.
