ECE 418 - Introduction to Image and Video Processing
Spring 2024
| Title | Rubric | Section | CRN | Type | Hours | Times | Days | Location | Instructor |
|---|---|---|---|---|---|---|---|---|---|
| Image & Video Processing | ECE418 | NB1 | 33737 | LAB | 0 | - | - | - | Pei You |
| Image & Video Processing | ECE418 | NL1 | 33738 | LEC | 4 | 1400 - 1520 | T R | 2017 Electrical & Computer Eng Bldg | Pierre Moulin |
See full schedule from Course Explorer
Official Description
Subject Area
- Signal Processing
Course Director
Description
Goals
To introduce students to both the fundamentals and emerging techniques in image and video processing.
Topics
- Introduction
- Basic multidimensional signal processing
  - Sampling
  - Fourier transform
  - Filtering
  - Interpolation and decimation
- Human visual perception
- Image scanning and display
- Video scanning and display
- Image enhancement
- Image compression
- Video compression
- Image analysis
Texts
Lecture Notes (required)
Recommended text:
A. Jain, Fundamentals of Digital Image Processing, Prentice-Hall, 1989.
Course Goals
Instructional Objectives
A. By the time of Exam No. 1 (after 10 lectures), the students should be able to do the following:
1. Compute a two-dimensional (2-D) Fourier transform in both discrete and continuous spatial coordinates, and implement the 2-D discrete Fourier transform (see the sketch after this list). (1,2,6)
2. Identify an appropriate sampling resolution, given the 2-D spectrum of a continuous image. (1,2,6)
3. Perform 2-D finite-impulse-response (FIR) filtering of images. (1,2,6)
4. Design 2-D decimation and interpolation schemes. (1,2,6)
5. Understand basic properties of the human visual system. (1,2,6)
6. Quantitatively evaluate image quality based on Frei and Baxter’s color vision model. (1,2,6)
7. Determine appropriate specifications for a charge-coupled-device (CCD) camera. (1,2,6)
8. Determine appropriate specifications for the amplitude and spatial resolutions of an image display system based on the characteristics of the images and the contrast sensitivity and spatio-temporal modulation transfer function of the human visual system. (1,2,4,6)
9. Design appropriate specifications for a color quantization system based on the color vision properties of the human visual system. (1,2,4,6)
10. Design appropriate gamma correction techniques. (1,2,4,6)
11. Perform halftoning and error diffusion operations. (1,2,4,6)
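The sketch below is not part of the course materials; it is a minimal NumPy illustration of objective A.1, showing the 2-D discrete Fourier transform computed separably as 1-D DFTs along the rows and then the columns. The function name `dft2_separable` and the 8x8 test image are hypothetical.

```python
# Minimal sketch (not from the course materials): the 2-D DFT computed
# separably as 1-D FFTs along rows and then columns.
import numpy as np

def dft2_separable(x):
    """2-D DFT via row-wise then column-wise 1-D FFTs."""
    return np.fft.fft(np.fft.fft(x, axis=1), axis=0)

# Illustrative check on a random 8x8 "image"
rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
assert np.allclose(dft2_separable(img), np.fft.fft2(img))
```

This separability is what makes row-column FFT implementations of the 2-D DFT practical for large images.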
B. By the time of Exam No. 2 (after 20 lectures), the students should be able to do all of the items listed under A, plus the following:
1. Know basic features of digital video standards for high-definition television (HDTV), standard-definition television (SDTV), videoconferencing (CIF) and videophones (QCIF). (1,2,4,6)
2. Select an appropriate format for various video applications. (1,2,4,6)
3. Design a point operation to improve contrast or modify the dynamic range of grayscale and color images. (1,2,4,6)
4. Perform histogram equalization and histogram specification on an image (see the sketch after this list). (1,2,4,6)
5. Design an edge-enhancing linear filter. (1,2,4,6)
6. Design low-pass and median filters for reducing the noise level in an image without drastically affecting image contents, and select an appropriate filter depending on noise characteristics. (1,2,4,6)
7. Derive and implement the linear Wiener restoration filter, given a measurement model and image and noise statistics. (1,2,4,6)
8. Derive and implement an adaptive Wiener filter. (1,2,4,6)
9. Estimate the entropy of a sequence of statistically independent symbols. (1,2,4,6)
10. Apply the concept of entropy to estimate the bit rate of a lossless image coder. (1,2,4,6)
11. Perform run-length coding of a binary image and bit plane encoding of a grayscale image. (1,2,4,6)
12. Perform lossless predictive coding of a grayscale image. (1,2,4,6)
13. Derive the Karhunen-Loève transform based on second-order image statistics. (1,2,4,6)
14. Perform lossy predictive coding and transform coding of images. (1,2,4,6)
15. Know basic features of the JPEG compression standard (baseline, lossless, progressive and hierarchical modes) and wavelet image compression. (1,2,4,6)
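As a rough illustration of objective B.4, the following NumPy sketch (not taken from the course) performs histogram equalization of an 8-bit grayscale image by mapping each gray level through the normalized cumulative histogram. The function name `histogram_equalize` and the low-contrast test ramp are made up for this example.

```python
# Minimal sketch (assumes an 8-bit grayscale image as a 2-D uint8 array):
# histogram equalization maps each gray level through the normalized
# cumulative histogram so the output levels spread over the full range.
import numpy as np

def histogram_equalize(img):
    """Equalize an 8-bit grayscale image (uint8 array)."""
    hist = np.bincount(img.ravel(), minlength=256)   # gray-level counts
    cdf = np.cumsum(hist) / img.size                 # normalized CDF in [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)       # mapping T(r) = 255 * CDF(r)
    return lut[img]                                  # apply as a lookup table

# Example: a low-contrast ramp confined to levels 100..150
img = np.tile(np.linspace(100, 150, 64, dtype=np.uint8), (64, 1))
out = histogram_equalize(img)
print(img.min(), img.max(), "->", out.min(), out.max())  # contrast is stretched
```

Histogram specification, also named in B.4, follows the same idea by composing this mapping with the inverse of the target distribution's cumulative histogram.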
C. By the time of the Final Exam (after 28 lectures), the students should be able to do all of the items listed under A and B, plus the following:
1. Perform motion compensation of video sequences using mean-squared-error and mean-absolute-error block matching criteria, and full or fast search techniques (see the sketch after this list). (1,2,4,6)
2. Perform motion-compensated predictive coding of video using forward, backward, and bidirectional predictive methods. (1,2,4,6)
3. Select an appropriate encoding method for each macroblock in a video sequence. (1,2,4,6)
4. Know basic features of MPEG-2, MPEG-4, and H.264 video compression standards. (1,2,4,6)
5. Implement an edge detection algorithm. (1,2,4,6)
6. Represent a region boundary using chain codes and Fourier descriptors, and evaluate the effects of geometric image transformations on Fourier descriptors. (1,2,4,6)
7. Select appropriate features for image segmentation. (1,2,4,6)
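For objective C.1, here is a minimal NumPy sketch (not the course's reference implementation) of full-search block matching with a mean-absolute-error criterion. The function name `block_match`, the 8x8 block size, and the ±4-pixel search range are illustrative assumptions.

```python
# Minimal sketch: full-search block matching with a mean-absolute-error
# criterion, returning one motion vector per block of the current frame.
import numpy as np

def block_match(cur, ref, block=8, search=4):
    """Full-search motion estimation; cur and ref are 2-D float arrays."""
    H, W = cur.shape
    vectors = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            target = cur[by:by + block, bx:bx + block]
            best, best_mv = np.inf, (0, 0)
            # Exhaustive search over a (2*search+1)^2 window in the reference
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > H or x + block > W:
                        continue
                    candidate = ref[y:y + block, x:x + block]
                    mae = np.mean(np.abs(target - candidate))
                    if mae < best:
                        best, best_mv = mae, (dy, dx)
            vectors[by // block, bx // block] = best_mv
    return vectors

# Example: a reference frame shifted by (2, 3) pixels yields vectors near (-2, -3)
rng = np.random.default_rng(1)
ref = rng.standard_normal((32, 32))
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
print(block_match(cur, ref)[1, 1])   # interior block; prints [-2 -3]
```

Exhaustive search costs (2 * search + 1)^2 error evaluations per block, which is what motivates the fast search techniques named in the same objective.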