The faces spark when the switch is flipped. For some, hearing noise for the first time, it's a look of joyful awe; for others, it's startlement at the sudden coloring of the vibrations that have always surrounded them.
This child’s face can explain the miracle of the cochlear implant—a device that bypasses the ear’s damaged hair cells and stimulates the hearing nerve directly, creating a sense of sound in the brain—better than I can.
That child will probably grow up with a near-normal ability to decipher speech in relative quiet. He will probably have a natural-sounding voice. But as of now, he will not be able to hear music—or at least not all of it. He’ll hear the rhythm, the beat of the music, but he won’t be able to discern pitch or melody. He won’t be able to differentiate between a flute and an oboe. To him, it will all be noise. But maybe not for his whole life: science is getting closer to giving music to the deaf.
“You do polls of people who have cochlear implants, the first thing they want to do is hear speech,” says Jay Rubinstein, a surgeon who installs the implants (he’s also fittingly the brother of Jon Rubinstein, a coinventor of the iPod). “The second thing they want to do generally is hear speech in background noise. The third thing is music.” But we can really only give them the first. Even our best software can’t distinguish one voice among many—just try giving Siri a command in a noisy room. And the reduced pitch information makes understanding tonal languages like Chinese much harder. “All the electrical is all done rather crudely compared to normal listening, quite crudely,” says Les Atlas, an audiological-systems researcher at the University of Washington.
Let’s take a listen. This is what a person with a traditional implant might hear when listening to a song (these are all simulations, as it is impossible to actually hear what another human hears, save for outright telepathy).
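Simulations like this are typically built with a noise vocoder: the signal is split into a handful of frequency bands, everything but each band’s slow loudness envelope is thrown away, and those envelopes are used to modulate bands of noise. That mirrors what an implant delivers—coarse loudness per channel, no fine pitch detail. Here is a minimal sketch of that idea (the band count, filter orders, and cutoff are illustrative choices, not the parameters of the simulation in the clip above):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def vocoder_sim(x, fs, n_bands=8, lo=100.0, hi=7000.0):
    """Crude noise-vocoder simulation of implant-mediated hearing:
    keep only each band's slow amplitude envelope and use it to
    modulate band-limited noise. Parameters are illustrative."""
    # Log-spaced band edges, roughly mimicking cochlear frequency spacing
    edges = np.geomspace(lo, hi, n_bands + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(band_sos, x)
        # Envelope: rectify, then low-pass at ~50 Hz (this step discards
        # the fast fluctuations that carry pitch)
        env_sos = butter(2, 50.0, btype="lowpass", fs=fs, output="sos")
        env = np.clip(sosfilt(env_sos, np.abs(band)), 0.0, None)
        # Replace the band's fine structure with band-limited noise
        carrier = sosfilt(band_sos, rng.standard_normal(len(x)))
        out += env * carrier
    return out
```

Run a pure tone through this and the output still rises and falls with the tone’s loudness, but the tone itself is gone—only noisy bands remain, which is why melody and timbre collapse.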
This is what the rest of us hear.
The difference is stark. The first is eerie noise; the second is music. And that’s the gap that Atlas, Rubinstein, and their collaborators are trying to bridge with a new approach to cochlear-implant processing.
In modulating the sounds coming through the implant, Atlas and his team have been able to increase the perception of music among eight test subjects. “Imagine that instead of playing a piano with your fingers, you are playing it with your forearms,” Atlas says of the music that comes through with a traditional processor. Now “imagine you can take those piano keys and you can push them up and down really fast, at the rate of the pitch. It’s 100 times per second, 200 times, 300 times. That’s what we added.” And in adjusting the noise in that manner, those forearm smashes of the keys begin to sound more like fist smashes of the keys. There’s a bit more resolution to the sound.
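The piano analogy describes re-imposing fast modulation—at the rate of the pitch, a few hundred times per second—onto the slow loudness envelope each implant channel normally carries. A toy sketch of that concept is below; the function name, interface, and raised-cosine modulator shape are my own illustrative guesses, not the team’s actual algorithm:

```python
import numpy as np

def add_pitch_modulation(envelope, f0, fs, depth=0.5):
    """Illustrative sketch: wiggle a channel's slow amplitude envelope
    up and down at the estimated pitch rate f0 (Hz), so the implant's
    stimulation carries some pitch information. `depth` in [0, 1] sets
    how deep the pitch wiggle goes; this is a hypothetical interface."""
    t = np.arange(len(envelope)) / fs
    # Raised-cosine modulator oscillating at the pitch frequency,
    # ranging between (1 - depth) and 1
    mod = 1.0 - depth * 0.5 * (1.0 + np.cos(2.0 * np.pi * f0 * t))
    return envelope * mod
```

At `f0 = 200`, the envelope now dips and recovers 200 times a second—the “pushing the keys up and down really fast” in Atlas’s analogy—while its overall shape, the part a traditional processor already conveys, is preserved.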
Here’s that same file from above, but with Atlas’s new process.
Obviously, it isn’t all the way there. But it’s the beginning of melody. “They’d like to take it home,” Rubinstein said of the reactions of the eight subjects tested with the new process. Seven of them showed a substantial ability to differentiate musical instruments. Three of the subjects showed better recognition of melody (though not statistically significant, given the small sample size). “But two out of those three showed huge improvements in melody perception.”
This isn’t a superficial enhancement so implant users can listen to the latest Top 40s on their Beats by Dre. Life with less stimulus is less stimulating. Because of the degraded signals that come through the cochlear implants, it takes more mental power to process speech, which can be isolating.
Hearing loss, write researchers in the International Journal of Audiology, often leads to “withdrawal from social activities, rejection of invitations to parties, and no more visits to theatres, cinemas, churches, lectures, etc. This, in turn, leads to reduced intellectual and cultural stimulation, and an increasingly passive and isolated citizen.” Atlas knows this firsthand, having watched his father and grandfather struggle with hearing loss in a musical family. “They seemed at first isolated because it was hard to understand people,” he says, “but felt even more isolated because they couldn’t enjoy music, which was a pretty big part of our family.” I know it too, remembering my nearly deaf grandpa retreating to the world between the couch and closed-caption television. Hearing loss as we age is linked to cognitive decline, with the hard of hearing declining about 40 percent faster than those with normal hearing.
For now, Atlas and Rubinstein’s developments will be confined to the lab. “It’s too computationally complex to do on a wearable speech processor,” Rubinstein said. They’ll turn now to devising a simplified approach.
Hearing music for the first time can be a profound experience, because music is so integral to being human. “We humans are a musical species no less than a linguistic one,” writes the neurologist and author Oliver Sacks. “And to this largely unconscious structural appreciation of music is added an often intense and profound emotional reaction to music.”
Austin Chapman didn’t have implants, but when he got an advanced pair of hearing aids that allowed him to hear music for the first time, it was a revelation—in an almost spiritual sense. “Being able to hear the music for the first time ever was unreal,” he wrote. “When Mozart’s ‘Lacrimosa’ came on, I was blown away by the beauty of it. At one point of the song, it sounded like angels singing and I suddenly realized that this was the first time I was able to appreciate music. Tears rolled down my face and I tried to hide it.”