Making communication more personal

6/19/2008 Bridget Maiellaro, ECE Illinois

ECE researchers start with an image and create an animated avatar to deliver a message. (Pictured: the Monroe avatar.)

Imagine receiving an e-mail from a friend or colleague that does not include any text. Instead, when you open it, a synthetic talking face appears on the screen and delivers the message, complete with facial expression and emotion. While it may seem far-fetched, ECE Professor Thomas S. Huang and his graduate students are developing ways to make it a reality.

The researchers currently have a variety of facial recognition algorithms and animation projects that aid their studies. Using a generic avatar produced by one of these algorithms, they can construct a 3D model of any person from a single photograph.

With the preliminary software, Huang and his students, Hao Tang and Yuxiao Hu, can reconstruct a 3D face from an input image and then personalize the model. For each new avatar, the researchers load an image, generate the model, and locate key points on the person's face, such as the corners of the eyes, lips, and nose. By clicking and dragging those key points, they can reshape the structure and texture of each model.
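
The article does not detail the team's own key-point algorithm, but the step is analogous to modern facial landmark detection. As an illustration only, here is a minimal sketch using dlib's off-the-shelf 68-point predictor; the model file name and image path are assumptions, and this is not the researchers' software:

    # Illustrative only: dlib's off-the-shelf 68-point landmark model,
    # not the team's own algorithm. Assumes the predictor file and a
    # portrait image are available locally.
    import cv2
    import dlib

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    image = cv2.imread("portrait.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    for face in detector(gray):
        shape = predictor(gray, face)
        # In the 68-point scheme, indices 27-35 cover the nose,
        # 36-47 the eyes, and 48-67 the lips: the same kinds of
        # key points the researchers adjust by hand.
        points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]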

Thomas S. Huang

Once created, the avatars have certain features that make them similar to real people. For instance, they can blink, move their heads in natural, realistic motions, and speak with emotional inflection.

“When we have the 3D face model, we can animate the 3D face model by providing differences in its expressions and other features,” Huang said.

Currently, the demo includes anger, happiness, sadness, surprise, and a neutral state. If desired, the researchers have found a way for the avatar to express more than one emotion while speaking the words the user types into the text box.
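
One common way to implement this kind of multi-emotion animation is blendshape interpolation: store a neutral mesh plus a per-emotion offset, and form a weighted sum so the face can mix, say, happiness with a touch of surprise. The sketch below is a minimal illustration with toy data, not the team's actual model:

    import numpy as np

    # Toy data: a neutral mesh of N vertices plus per-emotion offsets
    # (blendshape deltas). Real deltas would come from captured faces.
    N = 1000
    rng = np.random.default_rng(0)
    neutral = rng.random((N, 3))
    deltas = {name: rng.normal(scale=0.01, size=(N, 3))
              for name in ("anger", "happiness", "sadness", "surprise")}

    def blend(weights):
        """Return a mesh mixing emotions by weight (0 = neutral, 1 = full)."""
        face = neutral.copy()
        for emotion, w in weights.items():
            face += w * deltas[emotion]
        return face

    # Mostly happy, with a hint of surprise: two emotions at once.
    mesh = blend({"happiness": 0.7, "surprise": 0.3})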

As a way to study how to make the avatar’s emotions more realistic, Huang and Zhihong Zeng, a Beckman Fellow, are working with Psychology Professor Glenn Roisman. Roisman has a database with hours of footage of adult attachment interviews, which involve a psychologist asking an adult participant questions about his or her childhood.

“During the interview, the person will exhibit real emotion,” Huang said. “Distinguishing between positive and negative emotions is OK. However, distinguishing different degrees of negative, like fear, anger, and disgust, tends to be more subtle and person independent… So thus far we have only focused on positive and negative.”
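
The article does not name the group's classifier, but the positive-versus-negative task Huang describes is a standard binary classification problem. The sketch below is illustrative only, with random placeholder features and labels standing in for the real interview footage:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Placeholder data: each row stands for features extracted from one
    # interview frame (e.g., key-point displacements); labels are
    # 1 = positive emotion, 0 = negative.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 16))
    y = rng.integers(0, 2, size=200)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = SVC(kernel="rbf").fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))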

Huang has been developing emotion recognition algorithms for more than ten years. He hopes to extend his research to different areas, such as “Electronic Consumer Relations Management.” He believes the technology may be able to measure a person’s response to a variety of displays, including advertisements.

“More and more we have public displays, like in the elevator, and it would be nice if the display was adaptive to the audience,” Huang said.

Huang said that through the use of a camera, it would be possible to recognize whether a viewer’s emotion is positive or negative when he or she looks at the display.

“If you can sense that the audience is not happy, you can change the display to a different commercial. If they are happy, you can show more of the same type of thing,” he said.
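
Conceptually, this is a small feedback loop: classify the viewer's reaction to the current commercial, keep it if the reaction is positive, switch if it is negative. A toy sketch under those assumptions, with a random stub standing in for a real camera-based recognizer:

    import random

    def classify_emotion(frame):
        # Random stub standing in for a camera-based recognizer.
        return random.choice(["positive", "negative"])

    def adaptive_display(ads, frames):
        """Show each ad until the viewer reacts negatively, then switch."""
        current = 0
        for frame in frames:
            print("showing:", ads[current])
            if classify_emotion(frame) == "negative":
                current = (current + 1) % len(ads)  # try a different commercial
            # A positive reaction keeps the same type of content on screen.

    adaptive_display(["sports ad", "travel ad", "tech ad"], frames=range(10))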

Even after all his goals are met, Huang said research in this area will continue to develop.

“The real motivation behind good research is always curiosity,” he said. “I may at some point get interested in something else, but I think this project will continue going on as long as we can keep improving it.”


This story was published June 19, 2008.