
ECE 448 - Introduction to Artificial Intelligence

Fall 2015

Title                    Rubric  Section  CRN    Type  Hours  Times        Days  Location                         Instructor
Artificial Intelligence  CS440   ONL      63671  ONL   -      -            -     -                                Svetlana Lazebnik
Artificial Intelligence  CS440   Q3       36047  LCD   3      1530 - 1645  T R   1404 Siebel Center for Comp Sci  Svetlana Lazebnik
Artificial Intelligence  CS440   Q4       36053  LCD   4      1530 - 1645  T R   1404 Siebel Center for Comp Sci  Svetlana Lazebnik
Artificial Intelligence  ECE448  ONL      63709  ONL   -      -            -     -                                Svetlana Lazebnik
Artificial Intelligence  ECE448  Q3       36055  LCD   3      1530 - 1645  T R   1404 Siebel Center for Comp Sci  Svetlana Lazebnik
Artificial Intelligence  ECE448  Q4       36059  LCD   4      1530 - 1645  T R   1404 Siebel Center for Comp Sci  Svetlana Lazebnik

Official Description

Course Information: Same as CS 440. See CS 440.

Subject Area

Computer Engineering

Description

Introductory description of the major subjects and directions of research in artificial intelligence; topics include AI languages (LISP and PROLOG), basic problem solving techniques, knowledge representation and computer inference, machine learning, natural language understanding, computer vision, robotics, and societal impacts.

Notes

Same as: CS 440

Goals

This course is designed to give students an overview of major results and current research directions in artificial intelligence, along with an in-depth treatment of a number of representative systems, through programming exercises and class discussions.

Topics

  • Introduction
  • AI languages and formalisms
  • Problem solving
  • Knowledge representation
  • Deductive inference
  • Inductive inference and machine learning
  • Natural language understanding
  • Computer vision
  • Robotics
  • Societal impacts
  • Exams

Detailed Description and Outline

This course is designed to give students an overview of major results and current research directions in artificial intelligence, along with an in-depth treatment of a number of representative systems, through programming exercises and class discussions.

Topics:

  • Introduction
  • AI languages and formalisms
  • Problem solving
  • Knowledge representation
  • Deductive inference
  • Inductive inference and machine learning
  • Natural language understanding
  • Computer vision
  • Robotics
  • Societal impacts
  • Exams

Same as: CS 440

Lab Projects

Design and implementation of Python programs for: (1) search; (2) constraint satisfaction problems; (3) two-player games; (4) naive Bayes; (5) perceptron; (6) reinforcement learning (Q-learning); (7) deep Q-learning.
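
As a flavor of project (6), tabular Q-learning can be sketched in a few lines. This is an illustrative sketch, not course-provided code; the environment interface `env_step(s, a) -> (next_state, reward, done)` and all names are assumptions for the example.

```python
import random

def q_learning(env_step, states, actions, start, episodes=500,
               alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning with an epsilon-greedy policy.

    env_step(s, a) -> (next_state, reward, done); `states` lists the
    non-terminal states. Returns the learned Q table.
    """
    Q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = start
        for _ in range(100):  # cap episode length so every episode is finite
            if random.random() < eps:
                a = random.choice(actions)                 # explore
            else:
                a = max(actions, key=lambda a: Q[(s, a)])  # exploit
            s2, r, done = env_step(s, a)
            # one-step temporal-difference update toward r + gamma * max_a' Q(s', a')
            target = r if done else r + gamma * max(Q[(s2, a2)] for a2 in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            if done:
                break
            s = s2
    return Q
```

On a toy chain environment (move right twice to reach a goal), the learned Q values prefer "right" in every state.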

Topical Prerequisites

  • Stored-program concepts
  • Data structures
  • High-level programming languages
  • Interpretation vs. compilation
  • Editing
  • Debugging and break packages

Texts

Required, Elective, or Selected Elective

Elective

ABET Category

Engineering Science: 2 credits or 67%
Engineering Design: 1 credit or 33%

Instructional Objectives

1. By the end of the second lecture (tested on the first exam), students will understand key phases in the history of AI (e.g., the boom/bust cycle and the "AI winters"), and key differences between the research goals and evaluation criteria of different AI researchers and popular writers (7).

2. By the end of the fifth lecture (tested on MP1 and exam 1), students will understand search algorithms including BFS, DFS, and A* search (1,5,6).
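
A* can be sketched as best-first search ordered by f(n) = g(n) + h(n). This is a minimal illustration, not course-provided code; the `neighbors` and `heuristic` callables are assumptions for the example.

```python
import heapq

def astar(start, goal, neighbors, heuristic):
    """A* search: returns a lowest-cost path from start to goal, or None.

    neighbors(state) yields (next_state, step_cost) pairs; heuristic(state)
    must never overestimate the true remaining cost (admissibility).
    """
    frontier = [(heuristic(start), 0, start, [start])]  # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt, cost in neighbors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):  # found a cheaper route to nxt
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + heuristic(nxt), g2, nxt, path + [nxt]))
    return None
```

With BFS, the same loop uses a FIFO queue and drops the heuristic; with DFS, a stack.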

3. By the end of the seventh lecture (tested on MP2 and exam 1), students will understand special applications of search for puzzle-solving and deterministic planning (2,5).

4. By the end of the ninth lecture (tested on MP3 and exam 1), students will understand the alpha-beta pruning algorithm for two-player games, the historical uses of two-player games as a proxy measure of intelligence, and the fundamental measures of equilibrium output in game theory (1,5).
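
The pruning idea is that a branch can be abandoned as soon as its value is provably worse than an alternative already available to one of the players. A minimal sketch, not course code; the `moves` and `value` callables are assumptions for the example.

```python
def alphabeta(state, depth, alpha, beta, maximizing, moves, value):
    """Minimax with alpha-beta pruning.

    moves(state) returns successor states; value(state) scores a leaf.
    alpha/beta bound the best outcomes each player can already guarantee.
    """
    succ = moves(state)
    if depth == 0 or not succ:
        return value(state)
    if maximizing:
        best = float("-inf")
        for s in succ:
            best = max(best, alphabeta(s, depth - 1, alpha, beta, False, moves, value))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # opponent will never allow this line; prune the rest
        return best
    best = float("inf")
    for s in succ:
        best = min(best, alphabeta(s, depth - 1, alpha, beta, True, moves, value))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best
```

Pruning never changes the minimax value; it only skips subtrees that cannot affect it.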

5. By the end of the fifteenth lecture (tested on MP4 and exam 2), students will understand the use of binary random variables to encode events in artificial intelligence, Bayesian inference, and the naive Bayes algorithm (1,5).
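
Naive Bayes picks the label maximizing log P(label) plus the sum of log-likelihoods of the observed features, assuming features are conditionally independent given the label. A from-scratch sketch with Laplace smoothing, not course code; the function names and data layout are assumptions for the example.

```python
import math
from collections import Counter

def train_nb(examples):
    """examples: list of (feature_set, label). Returns smoothed log-prob tables."""
    labels = Counter(lbl for _, lbl in examples)
    vocab = set().union(*(feats for feats, _ in examples))
    counts = {lbl: Counter() for lbl in labels}
    for feats, lbl in examples:
        counts[lbl].update(feats)
    model = {}
    for lbl, n in labels.items():
        prior = math.log(n / len(examples))
        # Laplace smoothing: (count + 1) / (n + 2) for each binary feature
        likes = {w: math.log((counts[lbl][w] + 1) / (n + 2)) for w in vocab}
        model[lbl] = (prior, likes)
    return model, vocab

def classify_nb(model, vocab, feats):
    def score(lbl):
        prior, likes = model[lbl]
        return prior + sum(likes[w] for w in vocab if w in feats)
    return max(model, key=score)
```

Working in log space avoids numerical underflow when many feature probabilities are multiplied.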

6. By the end of the seventeenth lecture (tested on MP5 and exam 2), students will understand at least four methods for inferring or training linear classifiers: (a) as logic gates, (b) using naive Bayes, (c) using the perceptron learning algorithm, (d) using the softmax function and gradient descent (2,5).
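
Method (c), the perceptron learning algorithm, can be sketched in a few lines: on each misclassified example, move the weight vector toward (or away from) that example. An illustrative sketch under the usual {-1, +1} label convention, not course code.

```python
def perceptron(data, epochs=10):
    """Online perceptron: data is a list of (features, label), label in {-1, +1}.

    On each mistake the weight vector is nudged toward the misclassified point;
    for linearly separable data this converges in finitely many updates.
    """
    dim = len(data[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:  # misclassified (or on the boundary)
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b
```

The same linear decision rule sign(w·x + b) also implements AND/OR logic gates, method (a).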

7. By the end of the twentieth lecture (tested on exam 2), students will understand the use of Bayes nets to represent conceptual relationships among random variables, and to infer their values and conditional distributions, especially for hidden Markov models. (6)
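
For hidden Markov models, the basic inference step is the forward algorithm, which sums over hidden-state paths one observation at a time. A minimal sketch, not course code; the dictionary-based parameter layout is an assumption for the example.

```python
def forward(obs, states, start, trans, emit):
    """HMM forward algorithm: probability of an observation sequence.

    start[s], trans[s][s2], and emit[s][o] are probabilities; alpha[s] holds
    the probability of the observations so far ending in hidden state s.
    """
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s2: emit[s2][o] * sum(alpha[s] * trans[s][s2] for s in states)
                 for s2 in states}
    return sum(alpha.values())
```

Dynamic programming makes this O(T·N²) instead of summing over all N^T hidden paths.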

8. By the end of the twenty-second lecture (tested on MP6 and exam 2), students will understand how to formulate Markov decision processes (MDP), how to solve a given MDP using value iteration or policy iteration, and how to learn a partially unknown or unobservable MDP using discrete-state reinforcement learning (1,5,6).
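
Value iteration repeatedly applies the Bellman optimality update until the value function stops changing. A sketch under an assumed state-reward formulation (reward depends only on the current state), not course code.

```python
def value_iteration(states, actions, transitions, reward, gamma=0.9, tol=1e-6):
    """Value iteration for a finite MDP.

    transitions[s][a] is a list of (prob, next_state); reward[s] is the
    immediate reward for being in s. Returns the optimal value function V.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman update: V(s) = R(s) + gamma * max_a E[V(s')]
            best = max(sum(p * V[s2] for p, s2 in transitions[s][a])
                       for a in actions(s))
            v = reward[s] + gamma * best
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < tol:
            return V
```

The discount gamma < 1 makes the update a contraction, which guarantees convergence.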

9. By the end of the twenty-fourth lecture (tested on MP7 and exam 2), students will understand how to train multi-layer neural networks for nonlinear regression or softmax classification, and how to use neural networks to represent value functions and policy functions for reinforcement learning (1,5,6).
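
The core training mechanism here is backpropagation: the chain rule pushes the output error back through each layer to produce weight gradients. A from-scratch sketch of a two-layer sigmoid network, not course-provided code; the function name, architecture, and hyperparameters are assumptions for the example.

```python
import math
import random

def train_two_layer(data, hidden=4, epochs=2000, lr=0.5, seed=0):
    """Train a tiny two-layer network (sigmoid hidden and output units) by
    per-example gradient descent on squared error; return per-epoch MSE.
    """
    rng = random.Random(seed)
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    n_in = len(data[0][0])
    W1 = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    losses = []
    for _ in range(epochs):
        total = 0.0
        for x, y in data:
            # forward pass
            h = [sig(sum(w * xi for w, xi in zip(W1[j], x)) + b1[j])
                 for j in range(hidden)]
            out = sig(sum(w * hj for w, hj in zip(W2, h)) + b2)
            total += (out - y) ** 2
            # backward pass: chain rule through the sigmoid output
            d_out = (out - y) * out * (1 - out)
            for j in range(hidden):
                d_h = d_out * W2[j] * h[j] * (1 - h[j])
                W2[j] -= lr * d_out * h[j]
                for i in range(n_in):
                    W1[j][i] -= lr * d_h * x[i]
                b1[j] -= lr * d_h
            b2 -= lr * d_out
        losses.append(total / len(data))
    return losses
```

In deep Q-learning, a network like this replaces the tabular Q function, with the Bellman target supplying the training signal.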

10. By the end of the twenty-seventh lecture (tested on exam 2), students will understand key issues in the modern literature on the ethics of artificial intelligence, including the use and abuse of AI as a tool available only to some users and not others, the possible dangers (and present intrinsic biases) of autonomous AI, and the role of professional computer engineers in guaranteeing the safety of AI. (4)

Last updated

9/26/2019 by Mark Hasegawa-Johnson