

A Twitter bot that translates jungle sounds to existential questions might just help save the rainforest



On Thursday, March 21, at 00:36, a cricket in Borneo chirped “fuck me and you and he he he and his and what he really?” At least, that is what a not-particularly-good speech-to-text algorithm turned some live jungle sounds into, and then posted to Twitter.

The next entry, from the next day, reads “to whirl hey go big for will whoa whoa whoa whoa whoa whoa we hugged will rule of luo by it who hit him when he will blow a call will hit?” When, indeed.

Existential Jungle Bot is a bot created by PhD researcher Sarab Sethi who, apart from making this beautiful bit of internet, is currently working on a project developing low-cost devices that autonomously monitor jungle sounds in real time, listening for changes in biodiversity.

The idea is that the devices can create a kind of audio fingerprint of their surroundings. These fingerprints can then be compared to, for example, tell the difference between a pristine bit of forest, an area that’s being logged, and one that’s on the mend.

In addition, the system might be able to pick out sounds made by individual species of animals, to check for their presence in – or absence from – certain areas.

The hope is that a system like this, once operational, can assist ecologists in monitoring the health of ecosystems – a task that nowadays is labor-intensive, highly specialized, expensive, and slow.

Credit: The SAFE project Acoustics team

I spoke to Sethi on the phone the day after he was awarded a NETEXPLO Innovation Forum Award for his project. The award is organized by UNESCO to celebrate the top 10 projects it deems innovative enough: “It’s a pretty extreme interdisciplinary project, it’s run across three different departments: forest ecology, applied math, and design engineering,” Sethi explained.

The award-winning project started about three years ago. “What I started out doing, was tackling the hardware side of it,” he told me. That was a real challenge, because electronics and jungle do not like each other. Jungles are wet and hot, and teeming with critters that can crawl in and jam up even the most carefully designed hardware.

“It seemed feasible, but most people in this field are coming from an ecology background, whereas we approached it as an engineering problem.” So together with Dr Lorenzo Picinali from Imperial’s Dyson School of Design Engineering, Sethi managed to iterate his way toward a device that was impervious to insects and dampness, and could even function on the jungle’s weak cell reception for data transfer.

Along the way their devices were smashed by orangutans and invaded by ants that found their way into waterproof enclosures and ate the microphones. But after a number of tries they ended up with a device that was both sturdy and low-cost, and which could be produced ready-made or assembled on-site with minimal training.

Credit: The SAFE project Acoustics team

The kit is based on a Raspberry Pi, and is connected to the internet through a 3G phone signal. Power comes from solar panels installed in the upper tree canopy. If you’d like to see exactly how it works – and how to build one – step-by-step instructions are available online.
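For a rough sense of what such a kit has to do, here’s a minimal sketch of a capture-and-upload loop in Python. It assumes a Raspberry Pi with a USB microphone (recording via ALSA’s arecord) and a hypothetical HTTP endpoint standing in for wherever the project actually sends its audio – the real devices may well work differently.

```python
# A minimal sketch of the kind of capture-and-upload loop such a device
# might run. Assumes a Raspberry Pi with a USB microphone (recorded via
# ALSA's `arecord`) and a hypothetical HTTP endpoint standing in for
# wherever the real project sends its audio.
import os
import subprocess
import time
from datetime import datetime, timezone

import requests

UPLOAD_URL = "https://example.org/upload"  # hypothetical endpoint
CLIP_SECONDS = 300                         # record in five-minute chunks

while True:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = f"/tmp/{stamp}.wav"

    # Record one clip from the default ALSA capture device.
    subprocess.run(
        ["arecord", "-f", "S16_LE", "-r", "16000", "-d", str(CLIP_SECONDS), path],
        check=True,
    )

    # Push it over the slow, flaky 3G link; on failure, back off and
    # simply move on to the next clip.
    try:
        with open(path, "rb") as f:
            requests.post(UPLOAD_URL, files={"audio": f}, timeout=120)
    except requests.RequestException:
        time.sleep(60)
    finally:
        os.remove(path)  # don't let clips pile up on the SD card
```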

“Our overall goal, of course, is to do automated ecosystem monitoring,” Sethi tells me, which means that apart from the hardware, the system should be able to analyze and categorize the real-time audio coming in – which is where the applied mathematics enters the picture.

Basically, the system takes in audio from a certain type of surroundings and tries to find a fingerprint – a set of characteristics in the data – for that type of surroundings. “That fingerprint captures all the information of all the animals calling, and then you try to see how that changes between different types of land use,” Sethi said.
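The article doesn’t spell out what goes into those fingerprints, so the sketch below is only an illustration of the idea: summarize each recording as an averaged MFCC vector (using librosa as a stand-in for whatever features the project actually uses) and compare recordings from different sites with cosine similarity. The file names are hypothetical.

```python
# Toy illustration of the "acoustic fingerprint" idea: one feature vector
# per recording, compared across sites. MFCCs are a stand-in, not the
# project's actual feature set.
import librosa
import numpy as np

def fingerprint(path: str) -> np.ndarray:
    """Average MFCC features over a clip to get one vector per recording."""
    audio, sr = librosa.load(path, sr=16000, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two fingerprints (1.0 means identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical clips: one from an untouched plot, one from a logged plot.
pristine = fingerprint("pristine_plot.wav")
logged = fingerprint("logged_plot.wav")
print(f"similarity between sites: {similarity(pristine, logged):.2f}")
```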

Once those fingerprints have been established, they will hopefully help predict when an area is under threat of degradation – or at least alert rainforest conservationists when it is changing.

“At the moment it’s a research study. We’re talking a good ten years down the line here, but hopefully researchers will be able to look at the data and say ‘this bit of forest is acting weird’ and do something about it,” Sethi said.

He told me that ecosystems tend to act in a non-linear way: they’re fine one moment and suddenly collapse the next. “So if you can see a strange pattern emerging before that happens, you can point your attention there and hopefully prevent it from happening.”
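In code, that early-warning idea could look something like the sketch below: build a baseline from fingerprints recorded while a site was healthy, then flag the site when a new fingerprint drifts well outside that baseline. The z-score test and threshold are assumptions made for illustration, not the project’s actual method.

```python
# Crude "is this bit of forest acting weird?" check: compare today's
# fingerprint against the spread of a healthy baseline period.
import numpy as np

def drift_alert(baseline: np.ndarray, latest: np.ndarray, threshold: float = 3.0) -> bool:
    """Flag a site when its latest fingerprint sits far outside the baseline.

    baseline: shape (n_days, n_features), fingerprints from a healthy period.
    latest:   shape (n_features,), today's fingerprint.
    """
    mean = baseline.mean(axis=0)
    std = baseline.std(axis=0) + 1e-9   # avoid division by zero
    z = np.abs((latest - mean) / std)   # per-feature deviation in "sigmas"
    return bool(z.mean() > threshold)   # average deviation too large -> alert
```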

In the meantime, it’s actually possible for us normal people to listen to the result of Sethi’s achievements. Thanks to the SAFE Project, which also facilitated the deployment of the devices, a real-time stream from different audio monitoring stations is available for perusal. It’s very soothing. (Editor’s note: the real-time streaming is buggy at the moment of publishing, but that should be resolved soon, according to Sethi)

And of course you can follow the Existential Jungle Bot on Twitter, which relies on the same livestreams, but feeds those to a speech-to-text algorithm, albeit a not very good one. Sethi: “I tried Google’s algorithm, but the issue was that it was too good. It would tell us there was no speech in the clip. So we needed an intentionally not great one and ended up with PocketSphinx.”

“What happened was that mostly question words rolled out, because bird sounds normally sound like ‘who’ or ‘why’ or ‘where’. So I just stuck a question mark at the end to make it more existential. That’s the whole story of that bot,” Sethi says.
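As described, the bot’s recipe is simple enough to sketch. The snippet below uses the speech_recognition wrapper around PocketSphinx to transcribe a clip and tacks a question mark onto whatever comes out; the file name is made up, and whether Sethi’s bot is wired up exactly like this isn’t something he spelled out.

```python
# Rough sketch of the bot's recipe: deliberately mediocre speech-to-text
# (PocketSphinx, here via the speech_recognition wrapper) plus a question
# mark. "jungle_clip.wav" is a placeholder for a clip from the livestream.
import speech_recognition as sr

def existential_question(wav_path: str) -> str:
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    try:
        text = recognizer.recognize_sphinx(audio)
    except sr.UnknownValueError:
        # Even a bad recognizer sometimes hears nothing at all.
        text = "who"
    return text.strip() + "?"

print(existential_question("jungle_clip.wav"))
```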

“I’m secretly an artist, trying to get by as a scientist,” Sethi jokes. But in an email he sent me after our conversation, with a “slightly more serious artistic message to the bot rather than stupid humour,” he writes: “I’d say it’s a comment on how fallible seemingly advanced machine learning techniques are and how far we are from a robot apocalypse.”

He unfortunately didn’t know which animals produced the best words. “The bot normally tends to tweet in the night in Borneo, so the animals that you hear are mostly frogs or crickets. Short chirps that are coming out as ‘who’ or ‘why.’”

Which means we’ll probably never know who or what called out “hey hey hey hey and hurt his okay okay?” at four in the morning on a Tuesday in March. But the technology that recorded it might just help save the rainforest, some day.
