Google built a neural network to warn ships of whales. Will it help?

Google trained an artificial neural network to locate whales and alert ships — cautioning them to slow down. Getting them to do it is another story.

In 2018, researchers from the National Oceanic and Atmospheric Administration (NOAA) approached Google and asked whether the company could make sense of hours upon hours of underwater recordings. The audio files contained the sounds of dolphins, ships, and low-frequency sonar.

Amidst those sounds — sometimes indistinguishable from the rest of the underwater noise — were the calls of southern resident orcas, a population of killer whales in the Salish Sea (the coastal waterways near Puget Sound).

The NOAA team wanted to know if Google could find a way to identify the orcas amidst the cacophony of noise. So engineers at Google began working on an artificial intelligence model that could identify the whales by their calls, reports the New York Times. Researchers can use the system to locate injured or lost whales and to alert shipping traffic when a whale is present nearby, cautioning vessels to slow down or alter course.

Ship Strikes Threaten the Whales

The southern resident orcas are a distinct population of killer whales. They live in the northwest waterways surrounding Vancouver and Seattle. The population is listed as endangered under the Canadian Species at Risk Act — only 72 remain in the wild.

Like other whales, this population suffers from a dwindling food supply (chinook salmon), climate change, and noise pollution: anthropogenic underwater noise that interferes with the whales' ability to echolocate, that is, to use sound to navigate and find food. But collisions with ships are among the largest threats to the southern resident killer whales.

There is good reason to be especially concerned about these animals: most of their essential habitat overlaps with major shipping channels. With industrial developments in the works, like a port expansion and an oil pipeline terminal, large-ship traffic is expected to increase.

Training an Artificial Neural Network to Listen for Whales

NOAA researchers handed over about 1,800 hours of annotated underwater recordings that contained the calls of different whales. Engineers at Google used the recordings to train an artificial neural network to identify and classify unknown sounds. The neural network is similar to the one Google created to identify different objects in images (essentially, how Google learns what images to show you when you search for text).
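Systems like this typically convert slices of audio into spectrograms and then classify those spectrograms much as an image classifier labels photos. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; the architecture, input shapes, and class labels are assumptions for illustration, not Google's actual model.

```python
# Hypothetical sketch: whale-call detection framed as image classification
# on audio spectrograms. Layer sizes and labels are illustrative assumptions.
import torch
import torch.nn as nn

class CallClassifier(nn.Module):
    """Small CNN that labels a spectrogram window as 'orca call' or 'background'."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, spectrogram: torch.Tensor) -> torch.Tensor:
        # spectrogram: (batch, 1, mel_bins, time_frames)
        return self.head(self.features(spectrogram))

# A batch of 1-channel spectrogram windows stands in for slices of hydrophone audio.
dummy_batch = torch.randn(4, 1, 64, 128)
logits = CallClassifier()(dummy_batch)
print(logits.shape)  # torch.Size([4, 2]) -> one score per class for each window
```

In practice, the annotated NOAA recordings would supply the labeled spectrogram windows used to train a model of this kind.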

To date, the neural network has successfully identified the type of whale, but it cannot yet distinguish between a resident killer whale (which has a smaller, more local range) and a transient one. The two populations look nearly identical to the untrained eye and their ranges overlap. But they are very different: the two groups do not interbreed, do not compete for the same food, and do not interact socially, and they are genetically distinct.

This whale detection model is the latest in a lineup of similar efforts from Google AI. Other examples include an artificial neural network that listens for and identifies chainsaw sounds in the rainforest, a potential sign of illegal logging.

The Google AI team pairs the artificial neural network with an array of hydrophones (underwater microphones) already in place across the Salish Sea. The hydrophones essentially “listen for” killer whales and capture the sound of their calls.

When the system hears a whale, it can determine whether the animal is in distress and alert the Marine Mammal Unit at Fisheries and Oceans Canada to the whale's whereabouts; the unit then decides if and how to assist. Detection alerts are also sent to the smartphones of Fisheries and Oceans Canada (DFO) officials, who could ultimately alert large ships, asking them to slow down to reduce noise pollution and the chance of a fatal whale-ship collision, reports VentureBeat.
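A rough sketch of how a single detection might flow from a hydrophone window to an alert is below. The confidence threshold, class index, notification hook, and stand-in model are all assumptions for illustration, not the deployed system.

```python
# Illustrative alert flow: score one spectrogram window from a hydrophone,
# then notify officials if an orca call looks likely. All names are hypothetical.
from dataclasses import dataclass
from typing import Callable
import torch
import torch.nn as nn

@dataclass
class Detection:
    hydrophone_id: str
    confidence: float

def monitor(window: torch.Tensor, model: nn.Module, hydrophone_id: str,
            notify: Callable[[Detection], None], threshold: float = 0.9) -> None:
    """Score one spectrogram window; raise an alert if an orca call is likely."""
    with torch.no_grad():
        probs = torch.softmax(model(window.unsqueeze(0)), dim=-1)
    confidence = probs[0, 1].item()  # assume class index 1 = "orca call"
    if confidence >= threshold:
        notify(Detection(hydrophone_id, confidence))

# Stand-in for a trained classifier: flattens the window and maps it to 2 scores.
stand_in = nn.Sequential(nn.Flatten(), nn.Linear(64 * 128, 2))

# Demo: threshold lowered to 0 so the untrained stand-in always triggers a print.
monitor(torch.randn(1, 64, 128), stand_in, "salish-sea-07", threshold=0.0,
        notify=lambda d: print(f"Possible orca near {d.hydrophone_id} "
                               f"(confidence {d.confidence:.2f})"))
```

In a real deployment, the notify hook would push to officials' phones rather than print, but the shape of the pipeline (stream, score, threshold, alert) is the same.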

“One of the primary threats that the whales are facing are entanglement and just difficulty foraging due to vessel noise. So they can use this (alert system) to advise traffic or the vessels themselves that orcas are in this location, consider slowing down or consider a change of course,” Matt Harvey, software engineer with Google AI, told the CBC.

Slowing Down Is Hard to Do

But getting ships to slow down for whales is another story.

In 2015, NOAA launched a voluntary speed limit off the California coast, and even created a program that offered incentives to shipping companies that slowed to 10 knots (nautical miles per hour) in these whale regions. But most vessels didn't comply: 54% of the shipping companies broke the speed limit, a figure that held steady for several years. In 2019, only 15 of about 74 shipping companies complied with the California slow-down request.

In 2017, researchers looked into the impact of a voluntary commercial vessel slow-down in shipping lanes in the Salish Sea. They conducted a 3-month trial, asking captains of large ships to slow to 11 knots. Analyzing ship traffic data from the Automatic Identification System (AIS), they found that 350 of 951 ships (37%) slowed to the target speed.
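As a back-of-the-envelope illustration, a compliance figure like that 37% comes down to counting transits whose recorded speed met the target. The snippet below uses made-up records and field names, not the study's actual AIS data.

```python
# Hypothetical compliance check over per-transit AIS speed summaries.
# Record structure and values are invented for illustration.
transits = [
    {"ship": "MV Example A", "median_speed_knots": 10.4},
    {"ship": "MV Example B", "median_speed_knots": 13.2},
    {"ship": "MV Example C", "median_speed_knots": 11.0},
]

TARGET_KNOTS = 11.0
compliant = sum(t["median_speed_knots"] <= TARGET_KNOTS for t in transits)
print(f"{compliant} of {len(transits)} transits "
      f"({compliant / len(transits):.0%}) met the {TARGET_KNOTS}-knot target")
```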

Researchers Are Optimistic About AI

Ocean research is one field that could benefit from massive amounts of data. The sheer volume of water that remains unsampled and undocumented means that much is still unknown, even with endless sampling.

Artificial intelligence can fill in the gaps. For example, Google AI is also working on an artificial neural network to detect humpback whales. The New York Times reports that Kakani Katija, an engineer at the Monterey Bay Aquarium Research Institute, is using AI and lasers to study giant larvaceans, tiny ocean creatures that look like tadpoles but build homes with the structural integrity of a loogie. Larvaceans carry the homes like a turtle carries a shell. But the homes dissolve easily, which makes the sea creature difficult to capture and study in a lab. Katija's solution is to use lasers and AI for her research.

“What I love about technology or the progress we’re seeing in AI, I think it’s a hopeful time because if we get this right, I think it will have profound effects on how we observe our environment and create a sustainable future,” Katija told the New York Times.
