Understanding human speech recognition

4/14/2008 Lauren Eichmann, ECE Illinois

ECE Associate Professor Jont Allen is researching speech processing patterns that may help detect hearing loss and improve functioning of hearing devices such as hearing aids and cochlear implants.

Hearing impairments affect approximately 28 million Americans, according to the National Institute on Deafness and Other Communication Disorders. ECE Associate Professor Jont Allen is trying to reduce that number by researching speech processing patterns that may help detect hearing loss and improve functioning of hearing devices such as hearing aids and cochlear implants.

"I am researching human-speech recognition, as opposed to machine-speech recognition," said Allen. "We fundamentally don’t understand how the cochlea - the inner ear - and the brain, process speech. We just don’t have the information."

Through his research, Allen has discovered that adding background noise, or delaying certain frequency components of a sound, can change the brain’s perception of it, so that a "ba," for example, is heard as a "pa." "Since we’ve identified what exactly is the acoustic feature you’re listening for, we can enhance it selectively and thereby make speech more robust to noise," said Allen. "That we have done."
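
To make the idea of selective enhancement concrete, here is a minimal sketch in Python: isolate one frequency band of a signal and boost it before mixing it back in. The band edges, gain, and sampling rate below are purely illustrative; the article does not describe the details of Allen’s actual processing.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def enhance_band(x, fs, lo_hz, hi_hz, gain_db=6.0):
        """Boost one frequency band of a signal and mix it back in."""
        sos = butter(4, [lo_hz, hi_hz], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)            # isolate the target band
        extra = 10 ** (gain_db / 20.0) - 1.0  # gain above unity, linear units
        return x + extra * band               # original plus extra band energy

    fs = 16000                                # 16 kHz sampling rate (assumed)
    x = np.random.randn(fs)                   # stand-in for one second of speech
    y = enhance_band(x, fs, 2000.0, 4000.0)   # boost 2-4 kHz by about 6 dB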

Allen said he feels his research team has made significant progress since he started at the University four years ago. "We have actually now identified what’s going on for about half a dozen, or maybe more, sounds of language," he said. "We understand the sounds and how they’re coded by the brain. I consider this to be quite an important thing."

According to Allen, how people with normal hearing process speech may best be understood by studying how the system fails with hearing loss. "Disease is also a good vector towards understanding problems," he said. "Looking at how something is broken can tell you how it works."

Allen and his students have in fact done just that: they are currently running extensive tests on an elderly hearing-impaired patient. "We’ve discovered there are just a small number of sounds that these hearing-impaired people can’t hear," said Allen. "Usually people just assumed it was all sounds that you couldn’t hear. But it’s very specific sounds, and each ear appears to be different, according to some research by one of my students. This is really a shocking result in some ways, that there are only a small number of sounds that a hearing-impaired person has trouble with, and that each person is different. It’s not the same sounds. By taking advantage of this fact, we would like to ultimately give people back their 16-year-old hearing."

One of Allen’s students is fine-tuning hearing aid signal processing so that it can be adjusted to a person’s individual consonant-sound-loss profile. Phonak, a hearing aid company, began funding Allen’s research six months ago.

Allen said related research on speech perception had been done in the 1950s, but the work "died" because people failed to see how it was relevant to hearing problems. "Of course, signal processing and computer technology [have] come a long way in the meantime, allowing us to do things we never dreamed of before," Allen said. "No one has taken this approach, where they look at specific sounds. But the goal is to make speech more robust to noise. There are some very basic observations that should have been made years ago," he continued. "The problem really hasn’t been properly investigated, but it could lead to a lot of interesting things. The timing, edges and rhythms of sounds are very important in human speech processing."

With even a small amount of hearing loss, people have trouble understanding speech in noisy environments, said Allen. "This is a well-known phenomenon. It commonly becomes such a problem that they stop going to social events because they just can’t communicate. It’s just too difficult," he said.

Children’s hearing loss has also become a research focus, in collaboration with Cynthia Johnson, a professor in the Speech and Hearing Science Department with whom Allen has worked for nearly three years. They also work with The Reading Group in Champaign, an organization that helps children who have reading disorders. Allen theorizes that when young children have a temporary middle-ear hearing problem, they may have difficulty reading because they can’t hear certain distinct sounds at a critical learning time.

Johnson and Allen have teamed up to characterize the speech perception of children with hearing difficulties. Allen said many of the children in their testing cannot hear certain sounds, and tests on half of the third graders indicated there may be a tangible correlation between their hearing and reading difficulties. "This is a very interesting problem," said Allen. "I’m very pleased to be involved in such a thing."

In the test, eight third graders use three blocks to indicate which of three speech sounds differs from the others: they must identify the spoken sound and choose the block whose sound was not spoken. This "oddball task," as they call it, takes 10 to 15 minutes and is entirely perceptual, said Allen. They are also testing students who serve as controls. Besides The Reading Group, Allen and Speech and Hearing Science Professor David Gooler study subjects at Carle Clinic, including extensive testing of a few individuals. Much of this work has so far been published only at conferences and is being written up for peer-reviewed journals.

Through another aspect of his research, Allen developed the Middle Ear Power Analyzer, commercialized by Champaign-based Mimosa Acoustics after FDA approval in February 2006. He owns the company with his wife, who serves as its president. The device measures the acoustic impedance of the ear canal, that is, how much sound is reflected off the eardrum as a function of frequency. When Allen first developed the instrument, he was associated with AT&T Bell Labs, the City University of New York, and Columbia University.
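
The physics behind the measurement is standard acoustics: from the measured ear-canal impedance and the canal’s characteristic impedance, one computes a pressure reflection coefficient whose squared magnitude is the fraction of sound power bounced back off the eardrum. A sketch, with assumed values standing in for a real measurement:

    import numpy as np

    def power_reflectance(Z_ear, Z0):
        """Fraction of incident sound power reflected off the eardrum (0..1)."""
        gamma = (Z_ear - Z0) / (Z_ear + Z0)   # pressure reflection coefficient
        return np.abs(gamma) ** 2

    rho_c = 415.0            # characteristic impedance of air, ~415 Pa*s/m
    area = 4.4e-5            # assumed adult ear-canal cross-section, m^2
    Z0 = rho_c / area        # canal characteristic impedance, Pa*s/m^3
    Z_ear = 1.2e7 - 2.0e7j   # hypothetical measured impedance at one frequency
    R = power_reflectance(Z_ear, Z0)
    print(f"reflected power: {R:.2f}, absorbed: {1.0 - R:.2f}")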

According to Allen, three newborns in 1,000 are born deaf. "Parents often cannot tell the child is deaf until the child doesn’t start to develop speech," he said. "So it’s years before they find out, which is unbelievable, but true."

Allen said it is necessary to find ways of accurately testing for and identifying those three kids in 1,000. "You can’t find the three if you’re getting a positive on 100," he said of the former screening system, which often mistook minor, temporary hearing problems for serious ones. "You’re getting a false positive on 97, and the true positives are buried in the false positives. So this is a serious problem."
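
Spelled out with the article’s own numbers, per 1,000 newborns screened, the arithmetic is stark:

    true_positives = 3              # newborns per 1,000 who really are deaf
    flagged = 100                   # screens per 1,000 that come back positive
    false_positives = flagged - true_positives   # 97 false alarms
    ppv = true_positives / flagged  # chance that a positive screen is real
    print(f"{false_positives} false positives; PPV = {ppv:.0%}")  # 97; 3%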

Allen’s test using the Middle Ear Power Analyzer can now accurately separate false positives from true positives. The instrument is also being applied to brain-trauma cases, for monitoring pressure inside the skull, at Harvard University and by one of Allen’s former students, Susan Voss of Smith College of Engineering. The fluid inside the cochlea is at the same pressure as the fluid in the brain; thus it is possible to estimate brain pressure by measuring how much sound comes out of the cochlea.

Allen worked for 30 years in the acoustics division of Bell Laboratories in Murray Hill, N.J., where he helped create an improved hearing aid technology called wide-dynamic-range multi-band compression, now used in most hearing aids. He received a bachelor’s degree in electrical engineering from Illinois in 1966 and his master’s and PhD from the University of Pennsylvania in 1968 and 1970, respectively.
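
At its core, wide-dynamic-range compression is a level-dependent gain rule applied in each frequency band: quiet sounds get full gain, loud sounds progressively less, so speech fits within a listener’s reduced dynamic range. A minimal sketch of the static rule for one band follows; the knee point, ratio, and gain are illustrative, not the parameters of any actual hearing aid.

    def wdrc_output_db(level_db, gain_db=30.0, knee_db=45.0, ratio=3.0):
        """Static input/output rule of a one-band WDRC compressor (levels in dB)."""
        if level_db <= knee_db:
            return level_db + gain_db        # linear region: full gain
        # above the knee, each extra dB in yields only 1/ratio dB out
        return knee_db + gain_db + (level_db - knee_db) / ratio

    # A 60 dB range of inputs (30-90 dB) is squeezed into a 30 dB output range.
    for spl in (30, 45, 60, 90):
        print(f"{spl} dB in -> {wdrc_output_db(spl):.0f} dB out")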

For more information about his work, visit his Web site at www.auditorymodels.org/jba/.

