ECE 417 - Multimedia Signal Processing

Fall 2023

Title                         Rubric  Section  CRN    Type  Hours  Times        Days  Location                             Instructor
Multimedia Signal Processing  ECE417  A        68112  LEC   4      0930 - 1050  T R   2013 Electrical & Computer Eng Bldg  Mark Hasegawa-Johnson
Multimedia Signal Processing  ECE417  ONL      75645  OLC   4      0930 - 1050  T R                                        Mark Hasegawa-Johnson

Official Description

Characteristics of speech and image signals; important analysis and synthesis tools for multimedia signal processing including subspace methods, Bayesian networks, hidden Markov models, and factor graphs; applications to biometrics (person identification), human-computer interaction (face and gesture recognition and synthesis), and audio-visual databases (indexing and retrieval). Emphasis on a set of MATLAB machine problems providing hands-on experience. Course Information: 4 undergraduate hours. 4 graduate hours. Prerequisite: ECE 310 or ECE 401; one of ECE 313, CS361, or STAT 400.

Subject Area

  • Signal Processing

Description

Basic characteristics of speech and image signals; important analysis and synthesis tools for multimedia signal processing including subspace methods, Bayesian networks, hidden Markov models, and factor graphs; applications to biometrics (person identification), human-computer interaction (face and gesture recognition and synthesis), and audio-visual databases (indexing and retrieval). Emphasis is on a set of MATLAB machine problems that provide hands-on experience.

Lab Projects

Machine problems are the main educational tool in ECE 417; there are seven of them, each covering a different aspect of video and audio synthesis and understanding.

Lab Equipment

None

Topical Prerequisites

Prerequisite: ECE 313 and ECE 310.

Texts

Distributed on the course web page

References

Distributed on the course web page

Required, Elective, or Selected Elective

Elective

Course Goals

The goal of the course is to prepare students for industry positions in the emerging field of multimedia and for graduate study in signal processing. Through a set of carefully designed machine problems, students learn important tools in audio-visual signal processing, analysis, and synthesis, and their applications to biometrics, human-computer interaction, and multimedia indexing and search.

Instructional Objectives

After Machine Problem 1 (MP1), Week 3 of the semester, the students should be able to analyze and synthesize speech signals using a multiband excitation model (a sub-band-filtered combination of pulse-train and white-noise signals) and a linear predictive (LPC) model of the spectrum (1,6).
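As a rough illustration of the LPC half of this objective, the sketch below estimates predictor coefficients by the autocorrelation (Yule-Walker) method in numpy. The AR(2) test signal, function names, and model order are illustrative assumptions, not taken from the actual machine problem.

```python
import numpy as np

def lpc_coefficients(signal, order):
    """Estimate LPC coefficients via the autocorrelation method.

    Solves the Yule-Walker normal equations R a = r directly with
    numpy linear algebra (a Levinson-Durbin recursion would be the
    classical, more efficient alternative).
    """
    # One-sided autocorrelation of the frame
    r = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    return a  # predictor: s[n] ~ sum_k a[k] * s[n-1-k]

# Synthetic check: the predictor should approximately recover a
# known stable AR(2) process driven by white noise.
rng = np.random.default_rng(0)
true_a = np.array([1.5, -0.9])
s = np.zeros(5000)
e = 0.1 * rng.standard_normal(5000)
for n in range(2, 5000):
    s[n] = true_a[0] * s[n - 1] + true_a[1] * s[n - 2] + e[n]

a_hat = lpc_coefficients(s, order=2)
```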

After MP2, Week 5 of the semester, the students should be able to understand principal component analysis and linear discriminant analysis, and their applications to face recognition (1,6).
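The PCA part of this objective can be sketched in a few lines of numpy: compute the principal directions of centered data via SVD, then project onto them to get low-dimensional features (the "eigenfaces" idea; LDA would additionally use class labels to pick discriminative directions). The toy data below is an assumption for illustration only.

```python
import numpy as np

def pca(X, k):
    """Top-k principal components of the data matrix X.

    X: (n_samples, n_features). Returns (mean, components), where
    components is (k, n_features); rows are unit principal directions.
    """
    mu = X.mean(axis=0)
    Xc = X - mu
    # Right singular vectors of centered data = eigenvectors of covariance
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mu, Vt[:k]

# Toy data with two dominant directions of variance
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5)) @ np.diag([5.0, 2.0, 1.0, 0.1, 0.1])
mu, W = pca(X, k=2)
Z = (X - mu) @ W.T  # k-dimensional features for a downstream classifier
```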

After MP3, Week 7 of the semester, the students should be able to understand maximum likelihood (ML) classifiers, Gaussian mixture models, and multimodal fusion, and their applications to audio-visual person identification (1,6).
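A minimal sketch of the ML-classifier idea follows, using one diagonal-covariance Gaussian per class; a GMM generalizes this by modeling each class density as a weighted sum of Gaussians. The two toy classes and all names are illustrative assumptions.

```python
import numpy as np

def fit_gaussian(X):
    """ML estimates of mean and (diagonal) variance for one class."""
    return X.mean(axis=0), X.var(axis=0) + 1e-6

def log_likelihood(x, mu, var):
    """Log density of a diagonal-covariance Gaussian at x."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

# Two well-separated toy classes
rng = np.random.default_rng(2)
X0 = rng.normal(0.0, 1.0, size=(200, 2))
X1 = rng.normal(4.0, 1.0, size=(200, 2))
params = [fit_gaussian(X0), fit_gaussian(X1)]

def classify(x):
    """ML classification: pick the class whose model likes x best."""
    scores = [log_likelihood(x, mu, var) for mu, var in params]
    return int(np.argmax(scores))
```

Multimodal fusion, in this framing, amounts to combining per-modality log-likelihoods (e.g. a weighted sum of audio and visual scores) before the argmax.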

After MP4, Week 9 of the semester, the students should be able to understand hidden Markov models (HMMs), including algorithms for learning, inference, and decoding, and their application to audio-visual speech recognition (1,6).
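The inference piece of this objective is the forward algorithm; a minimal numpy sketch on a hypothetical two-state, two-symbol model is below (the matrices are made up for illustration; decoding would swap the sum for a max to get Viterbi).

```python
import numpy as np

def forward(A, B, pi, obs):
    """HMM forward algorithm: P(observation sequence) by dynamic programming.

    A: (S, S) state-transition matrix, B: (S, V) emission matrix,
    pi: (S,) initial state distribution, obs: list of symbol indices.
    """
    alpha = pi * B[:, obs[0]]          # joint prob of state and first symbol
    for t in obs[1:]:
        alpha = (alpha @ A) * B[:, t]  # propagate, then absorb next symbol
    return alpha.sum()                 # marginalize over the final state

# Tiny hypothetical model
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.5, 0.5])
p = forward(A, B, pi, [0, 1, 0])
```

A quick sanity check on such a model: the probabilities of all possible observation sequences of a fixed length must sum to one.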

After MP5, Week 11 of the semester, the students should be able to train and test a convolutional neural network for face detection, and visualize the receptive fields learned by the network kernels (1,6).
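Training a full face-detection network is beyond a short sketch, but the core operation whose kernels develop the receptive fields mentioned here is 2D cross-correlation. The numpy toy below (image, kernel, and sizes are all illustrative assumptions) shows a hand-built edge kernel responding at an intensity boundary, the kind of pattern early convolutional layers typically learn.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the core op of a convolutional layer."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image: dark left half, bright right half
img = np.zeros((8, 8))
img[:, 4:] = 1.0
# A 3x3 vertical-edge kernel responds where intensity rises left-to-right
edge_kernel = np.array([[-1.0, 0.0, 1.0]] * 3)
response = conv2d(img, edge_kernel)
```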

After MP6, Week 13 of the semester, the students should be able to understand 3D face modeling and animation, and their application to speech-driven lip movement in an audio-visual avatar (a synthetic talking head) (1,6).
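One common way to animate such an avatar is a linear blendshape (morph-target) model: the animated mesh is the neutral mesh plus weighted deformations toward target shapes, with the weights driven over time by the speech signal. The tiny mesh and the "jaw open" target below are hypothetical, purely for illustration.

```python
import numpy as np

def blend(neutral, targets, weights):
    """Linear blendshape model: neutral mesh plus weighted deformations.

    neutral: (V, 3) vertex array; targets: list of (V, 3) morph targets;
    weights: per-target activations in [0, 1] (e.g. speech-driven visemes).
    """
    out = neutral.copy()
    for w, t in zip(weights, targets):
        out += w * (t - neutral)
    return out

# Hypothetical 4-vertex mesh with one "jaw open" target
neutral = np.zeros((4, 3))
open_jaw = neutral.copy()
open_jaw[:, 1] = -1.0          # target lowers every vertex by 1 unit in y
mesh = blend(neutral, [open_jaw], [0.5])  # half-open jaw
```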

After MP7, Week 15 of the semester, the students should be able to understand deep canonical correlation analysis (DCCA) for audio-visual event detection (1,6).
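DCCA builds on classical CCA: it trains two neural networks, one per modality, so that their outputs are maximally correlated under the CCA objective. The numpy sketch below computes that objective in its linear form, the first canonical correlation between two views that share a latent "event" variable; the toy data and regularizer are assumptions for illustration.

```python
import numpy as np

def cca_first_correlation(X, Y, eps=1e-8):
    """First canonical correlation between views X and Y (classical CCA).

    Whitens each view's covariance via Cholesky factors; the singular
    values of the whitened cross-covariance are the canonical correlations.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = len(X)
    Sxx = Xc.T @ Xc / n + eps * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / n + eps * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / n
    Lx = np.linalg.cholesky(Sxx)
    Ly = np.linalg.cholesky(Syy)
    M = np.linalg.inv(Lx) @ Sxy @ np.linalg.inv(Ly).T
    return np.linalg.svd(M, compute_uv=False)[0]

# Two toy "modalities" observing the same latent event plus noise
rng = np.random.default_rng(3)
z = rng.standard_normal((500, 1))
X = np.hstack([z, rng.standard_normal((500, 1))])
Y = np.hstack([z + 0.1 * rng.standard_normal((500, 1)),
               rng.standard_normal((500, 1))])
rho = cca_first_correlation(X, Y)  # near 1: the views share an event
```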

Last updated

7/20/2018 by James Andrew Hutchinson