Hearing the call: Jont Allen brings industry experience to Illinois

2/24/2010 Steve McGaughey, Beckman Institute

After spending 32 years working on speech, hearing, and signal processing issues at what was then known as Bell Labs, ECE Associate Professor Jont Allen left his comfortable position in the business world to join the University of Illinois and the Beckman Institute. His decision was based partly on professional reasons, but was also heavily influenced by a desire to mentor and interact with those who could benefit from the knowledge he had gained from more than three decades of doing research.


“It was an easygoing life because I could do whatever I wanted to do,” Allen said of his time at Bell, later known as AT&T Labs. “But it was much less rewarding because I didn’t teach and I didn’t have students.”

Allen joined Illinois in 2003, having never taught a college course and with no funding or research group with which to continue his work.

“When I came here, I had never taught, especially undergraduates,” he said. “It was pretty stressful to teach the first couple of undergraduate courses and I am understating stressful. Then I slowly got the funding and built up the research group and now it’s not so stressful. As a matter of fact, it’s very rewarding.”

Allen is also a full-time faculty member in Beckman’s Artificial Intelligence group. His Human Speech Recognition research group has five PhD candidates, and a few more graduate students who study issues in biomedical imaging, bioengineering, and acoustics.

Much of their research follows on work Allen did at Bell Labs, including mathematical modeling of cochlear function, human speech recognition, and speech processing for hearing aids. Allen is still focusing on many of the same research areas he did at Bell, but finding more satisfaction with the process.

“I love teaching and I love having the interaction with the students,” Allen said. “It’s very rewarding. A good student, you start off teaching them and after they’ve been doing the research for a while, they teach you. I really like it when a student goes beyond me on a particular topic and they start informing me instead of the other way around. That’s when you know you’ve hit paydirt.”

Allen earned his undergraduate degree in electrical engineering from Illinois in 1966 and his master’s and PhD in electrical engineering from the University of Pennsylvania. He then went to work for Bell Labs, spending 32 years in its renowned acoustic research department that also featured future ECE and Illinois colleagues such as ECE Professor Stephen E. Levinson.

Allen has built an impressive professional résumé during his time spent in the business and academic worlds. He is the recipient of the IBM Faculty Award and the IEEE Third Millennium Award and is a Fellow of IEEE and the Acoustical Society of America.

Allen maintains a broad and substantial research program. One five-year project involves a collaboration with Speech and Hearing Science faculty member Cynthia Johnson looking at a possible correlation between children’s hearing problems and their later difficulties in learning how to read. The researchers have been collecting data from elementary-age students at a not-for-profit learning center called the Reading Group.

“We want to demonstrate that one key reason why young children don't learn how to read is because they didn’t learn how to hear speech,” Allen said. “We test their hearing confusions, which sounds they confuse, and we correlate that against their reading ability. For example, imagine a child who failed to learn the distinction between b and d, say in the first six months of life, and who then tries to learn to read.

“Imagine you were this kid, and your teacher shows you a ba and a da, and you hear ‘Johnny, this is a xa and that’s a xa.’ That would confuse you. You would assume that b and d are the same, and that strange little loop should be ignored. A few such confusions would be very disheartening I would think.”

As with his work at Bell, Allen’s research at Illinois has real-world applications when it comes to diagnosing and aiding those with hearing problems. He was a major contributor to the development of the popular Resound hearing aid while at Bell, and has continued to work in the hearing aid development area with his research group.

Allen said his focus is on problems of human speech perception, which critically depends on the workings of the cochlea, the portion of the inner ear that decodes sounds. The research seeks to understand how humans decode basic speech sounds and how that deteriorates through cochlear hearing loss due to factors like aging or noise trauma.

“As you age, the cochlea deteriorates and your ability to understand speech drops off,” Allen said. “People consider getting hearing aids but hearing aids, which do work in quiet, can seriously fail in noisy environments.

“It’s an important problem and nobody has resolved the key issues because they didn’t know how. But my experience and my background in cochlear modeling gave me the insight to see what the nature of the problem might be. It’s a problem of auditory, cochlear processing, or signal processing. We have collected a huge database of confusions in hearing impaired ears. This is not dissimilar to the reading problem I just mentioned, but here, unlike the kids, the problem is with the aging or damaged inner ear.”

One of the major discoveries coming out of Allen’s work is that there are differences in hearing loss between the left and right ears of those with the condition.

“What we discovered that is really very important is that in the past people have only characterized the average loss and two people can have an identical average loss,” Allen said. “But the individual losses for each ear are completely different and across subjects it is even more so.”

That discovery has implications for accurate diagnoses of hearing problems and for the development of efficient hearing aids.

“In many ears it’s a small number of sounds, say four or five, that they can’t hear and that wasn’t appreciated before,” Allen said. “You don’t want to concentrate on the sounds they can hear; you want to concentrate on the sounds they can’t hear. You want to design and tune your hearing aid to what they can’t hear, without affecting the sounds they can hear.”

Allen said existing amplification systems frequently reduce the ability to hear some sounds while helping with others.

“This can represent a net loss of sounds,” he said. “If a hearing aid hurts more than it helps, if it increases the error on sounds the person can hear without it, then the user won’t use it.

“And each ear is different. It’s like a fingerprint. Same brain, same person, but right versus left, they are significantly different. That is a deep insight because it means we want to change the way we diagnose the ear. We want to measure particular sounds and find out which sounds they are having trouble with and then go after those sounds for a particular ear.”

Another project Allen leads is attacking the problem through the creation of a large database of confusions in noise for normal hearing ears.

“We’ve made detailed comparisons of their speech confusions due to background noise,” Allen said. “We’ve done it on the scale of hundreds of normal hearing subjects, students here at the university.”
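Analyses like the ones Allen describes are commonly summarized in a consonant confusion matrix: rows are the sounds presented, columns the sounds listeners reported hearing. As a rough illustration only, here is a minimal sketch in Python; the trial data and function names are invented for the example and are not drawn from Allen's actual database:

```python
from collections import defaultdict

# Hypothetical listener responses: (consonant presented, consonant heard).
# Illustrative toy data only -- not from the Human Speech Recognition group.
trials = [
    ("b", "b"), ("b", "d"), ("b", "b"), ("b", "d"),
    ("d", "d"), ("d", "d"), ("d", "b"), ("d", "d"),
    ("p", "p"), ("p", "p"), ("p", "p"), ("p", "t"),
]

def confusion_matrix(trials):
    """Count how often each presented sound was heard as each response."""
    matrix = defaultdict(lambda: defaultdict(int))
    for presented, heard in trials:
        matrix[presented][heard] += 1
    return matrix

def error_rate(matrix, sound):
    """Fraction of presentations of `sound` that were heard as something else."""
    counts = matrix[sound]
    total = sum(counts.values())
    return (total - counts[sound]) / total

m = confusion_matrix(trials)
print(error_rate(m, "b"))  # in this toy data, b is misheard half the time
```

In this invented data, /b/ is misheard (mostly as /d/) on half its presentations while /p/ rarely fails, the kind of sound-specific, listener-specific pattern an averaged hearing score would hide.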

Allen said he was strongly influenced by the work of Harvey Fletcher, who in the 1920s designed speech perception experiments involving thousands of listeners in support of the design of the Bell Telephone system.

“Nobody does experiments like that today, but we need to think about the problem in a more grandiose way,” Allen said. “So I really went after big databases. My personal contribution is in figuring out how to design a set of experiments that will really get at the real issues.

“If we had the funding, we would use hundreds of talkers, and a thousand listeners, or even more. Once we saw patterns in the hearing and listening, we would stop collecting data. But just think about the variation in the way people speak. The same must be true of how they hear speech, don’t you think?”

Allen’s students are analyzing bits of that database, extracting more and more information toward the development of improved analytical tools that will provide a better understanding of hearing problems. Allen’s research in this area has convinced him of how best to approach the problem of hearing loss and, eventually, perhaps reading problems.

The issues Allen currently works on aren’t the only aspect of his work at Beckman that is similar to his time at Bell Labs; the approach to solving research problems at both places is the same. Allen believes that is one reason former Beckman Institute Director Pierre Wiltzius, another Bell Labs alumnus, helped recruit him to Illinois.

“I was always doing interdisciplinary work,” Allen said. “I think that was one of the arguments in favor of bringing me here, that Bell scientists are a natural fit in this environment. Pierre Wiltzius went out looking for Bell people who would contribute to this environment. From my point of view, it was a great match.”


This story was published February 24, 2010.