Hearing loss sucks. It’s exhausting. I’ve suffered from partial hearing loss in both ears since the mid-2000s. To function in the real world, I’m forced to exist in a state of constant vigilance. I have to actively listen all the time to avoid creating an environment where people are constantly raising their voices at me.
Over time I developed a methodology for interpreting physical and verbal cues to understand what people were saying to me in situations where it was difficult to hear – such as at a conference or at a table with multiple conversations happening at once.
Then COVID-19 happened and everyone started wearing masks. It was like starting all over again because I couldn’t watch people’s lips to fill in the blanks my hearing left out.
It’s estimated over 5% of the world’s population suffers from hearing loss. While it’s most commonly associated with the elderly, hearing loss is also the most prevalent service-related disability among US military veterans.
The fact of the matter is that hearing loss affects people of all demographics, from children with congenital conditions, to otherwise-healthy adults who’ve suffered injury or illness, to the elderly who experience age-related onset.
Unfortunately, as the CEO and cofounder of Whisper, Dwight Crow recently told me, “It isn’t a very sexy problem to solve.”
AI for good
Whisper’s an interesting company. It builds niche hardware as a means to onboard potential customers to its subscription-based update service. That’s probably not how the company’s marketing team would like its work described, but it’s challenging to reconcile the startup’s ambition with its simplicity.
The big idea here is pretty basic: You get hardware into people’s hands and then use your algorithms to keep them coming back for more. Usually, this model is reserved for entities such as YouTube and Twitter. The end game is typically to keep your attention for as long as possible so you’ll watch as many ads as the big tech bosses can shove down your throat.
But Whisper’s not trying to dupe you into infinitely scrolling in order to soften you up for impulse purchases; it’s trying to solve all of the problems with the hearing aid market.
Hearing aids suck
Hearing devices and the examinations necessary for a medical professional to recommend them are not covered by Medicare or most insurers in the US. This means people with hearing loss – a significant percentage of whom live on low or fixed incomes – have to pay out of pocket for their devices more often than not. And that means paying anywhere from one to six thousand dollars per device on average.
The high-end devices using traditional hearing aid tech are okay – once you surpass the cost of a pair of audiophile-worthy music headphones, it stands to reason you’ll get more than just “it makes things louder.” Okay is better than nothing, but it still means people have to live with substandard hearing, even when it’s augmented.
Whisper’s solution to hearing loss offers the prospect of not only augmenting your hearing, but reaching superhuman levels when it comes to distinguishing targeted sounds from noise.
How it works
In a nutshell, algorithms pick apart audio to find all the salient sounds through a process called segmentation. This works similarly to how AI figures out what’s in a photograph. If, for example, you snap a selfie in front of a sunset, Google’s AI can pick apart different pieces of the image to label. It might decide there’s a you, a sunset, a beach, some clouds, and some birds in the picture.
Later, if you’ve got the proper hardware and you’re using Google Photos, you can simply say “hey Google, show me all my beach pics,” or “hey Google, find images with clouds,” and the AI can surface those results.
It works the same way with audio segmentation, though it’s much trickier to work with overlapping, noisy sounds than with flat images.
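To make the analogy concrete, here’s a toy numpy sketch of frequency-domain masking, one of the simplest ways to pull a target sound out of noise. This is an illustration only, not Whisper’s actual pipeline – the tone frequency, sample rate, and mask width are all invented for the demo.

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels."""
    return 10 * np.log10(np.sum(signal**2) / np.sum(noise**2))

# Toy "scene": a 440 Hz voice-like tone buried in broadband noise.
rng = np.random.default_rng(0)
sr = 8000                                # sample rate, 1 second of audio
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 440 * t)
noise = 0.8 * rng.standard_normal(sr)
mixture = voice + noise

# Crude spectral "segmentation": keep only frequency bins near the target.
spectrum = np.fft.rfft(mixture)
freqs = np.fft.rfftfreq(sr, 1 / sr)
mask = np.abs(freqs - 440) < 20          # pass band around the target sound
cleaned = np.fft.irfft(spectrum * mask, n=sr)

print(snr_db(voice, mixture - voice))    # SNR before masking (low)
print(snr_db(voice, cleaned - voice))    # SNR after masking (much higher)
```

Real systems face the hard part this toy skips: speech and background noise overlap in frequency, so a fixed mask won’t do – which is where learned models come in.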
Whisper didn’t invent the technology it’s using – natural language processing and audio detection, segmentation, and isolation have been around for as long as there have been audio devices – but it’s among the first companies to develop it into an immediately useful solution to an age-old problem.
Whisper uses a proprietary ear device that’s designed to be more comfortable than average hearing aids. It connects wirelessly to a “Whisper Brain” that processes the audio using modern algorithms, which keeps the earpiece from being bulky. What’s revolutionary, aside from the tech implementation, is how Whisper solves the surrounding problems concerning hearing loss.
Rather than charge thousands for the device, Whisper works on a subscription plan. This not only allows customers to experience hearing improvements without investing thousands up front, but ensures they’ll receive regular updates as the company improves its AI.
Better still, Whisper offers full damage and loss replacement for three years, so you don’t have to worry about you or a loved one going without one of the five senses just because something bad happens or the emergency fund isn’t big enough.
Why it’s important
Numerous studies have shown a direct link between hearing loss and dementia. Yet there have been few longitudinal studies on long-term outcomes for Alzheimer’s patients who’ve had hearing loss interventions. The research shows that people suffering from hearing loss experience isolation, which correlates with worsening dementia symptoms, but exactly how much cognitive benefit a better hearing device could provide remains unclear.
When I spoke to Whisper CEO Dwight Crow, he explained that the time was right for disruption:
We’ve seen an explosion in the ability to extract semantic sense from language … ultimately, we want to provide people with a better signal to noise ratio.
But how much difference can “better” make when it comes to hearing aids? The status quo isn’t too far removed, in purpose, from the old stick-a-horn-in-your-ear method of the pre-electronics age. Now, hearing aids use specialty microphones to pick up sounds and an onboard audio processor to boost the signals the device judges to be in the right frequency range — but the end benefit for users isn’t all that much greater than just turning the volume up.
It turns out that hearing aids can not only get a lot better, but that even a tiny bit of added clarity makes a huge difference. According to Crow:
Modern AI algorithms can deliver 2-3 decibels better signal to noise ratio than any existing hearing aid. That’s the difference, for many people, between comprehensible and incomprehensible.
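To put those decibels in perspective: decibels are a log scale, so a 2–3 dB gain is bigger than it sounds. The 2–3 dB figure is Crow’s claim; the arithmetic below is just the standard decibel-to-power-ratio conversion.

```python
# Decibels measure power ratios on a log scale:
# gain_db = 10 * log10(power_ratio), so power_ratio = 10 ** (gain_db / 10).
def db_to_power_ratio(db: float) -> float:
    return 10 ** (db / 10)

# A 3 dB improvement in signal-to-noise ratio roughly doubles the
# signal's power relative to the noise.
print(f"{db_to_power_ratio(2):.2f}x")  # 1.58x
print(f"{db_to_power_ratio(3):.2f}x")  # 2.00x
```

In other words, the claimed improvement amounts to the target voice carrying roughly one-and-a-half to two times more power relative to the background – enough, per Crow, to cross the line between incomprehensible and comprehensible speech.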
This isn’t a turnkey AI solution where some fly-by-night startup taps into a hardware market to peddle repackaged university AI (looking at you, Amazon’s second-page AI smart gadgets market).
Whisper has built a lab in California and worked with the Mitsubishi group on research, and its product development process includes working closely with groups of people who live with hearing loss. And, from what I could tell from my conversation with Crow, the company really cares.
When I asked why they wanted to build a better hearing aid instead of taking the same technology and know-how and building spy-tech with DARPA for the Pentagon or something like that, Crow said it was because with Whisper “there’s just such an opportunity to help people.” Both Crow and his co-founder decided to create the company after watching loved ones struggle with hearing loss and the status quo.
You can find out more about Whisper here.