Illinois and NVIDIA prepare students for parallel computing revolution

4/18/2007 Brad Petersen, ECE ILLINOIS



ECE graduate student and class TA John Stratton, NVIDIA Chief Scientist David Kirk, and ECE Professor Wen-mei Hwu participate in a discussion during a recent meeting of ECE 498.

ECE students are part of a pioneering new class this semester that’s preparing them to be leaders in the parallel computing revolution.

In a unique arrangement, the class is co-taught by David Kirk, chief scientist at graphics-processing industry leader NVIDIA, and Wen-mei Hwu, the AMD Jerry Sanders Chair of Electrical and Computer Engineering at Illinois. The course, titled ECE 498: Programming Massively Parallel Processors, meets a growing industry need for better-prepared students as parallel processors quickly become the computing standard.

“We want to help students tap into the massive computing power of these processors to allow them to do work that was too computationally expensive to do before,” Hwu says. “We also want to help them design future massively parallel processors and programming tools.”

In parallel processing, multiple processors attack a problem simultaneously, or in parallel, each handling a portion of the data. More processors mean greater speed. But parallel processing also demands a different approach from the programmer.
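
As an illustration only (this sketch is not course material, and the array size and processor count are assumed values), the same computation can be written first as a single sequential recipe and then split into independent chunks, one per processor:

```c
/* Illustrative sketch: one loop written as a single sequential "recipe",
 * then decomposed into independent chunks, one per processor.
 * N and P are assumed values; the chunks run one after another here only
 * to keep the example self-contained -- on parallel hardware each chunk
 * would execute at the same time on its own processor. */
#include <stdio.h>

#define N 16  /* number of data elements (assumed) */
#define P 4   /* number of processors (assumed)    */

int main(void)
{
    int in[N], out[N];
    for (int i = 0; i < N; ++i)
        in[i] = i;

    /* Sequential: one processor visits every element in order. */
    for (int i = 0; i < N; ++i)
        out[i] = in[i] * in[i];

    /* Parallel decomposition: processor p owns elements [p*chunk, (p+1)*chunk). */
    int chunk = (N + P - 1) / P;
    for (int p = 0; p < P; ++p)                   /* one iteration per processor's share */
        for (int i = p * chunk; i < N && i < (p + 1) * chunk; ++i)
            out[i] = in[i] * in[i];

    printf("out[%d] = %d\n", N - 1, out[N - 1]);  /* prints out[15] = 225 */
    return 0;
}
```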

“It’s a different way of thinking to, in your mind, lay out a problem into thousands of pieces rather than laying out a problem simply as a recipe sequence of a single path. One of the tools we use in the class is NVIDIA’s new CUDA C programming environment that gives students practical experience in developing applications that use 128 parallel processors and thousands of computing threads in the GPU,” Kirk explains.
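
To give a flavor of the kind of program Kirk describes, here is a minimal CUDA C sketch; the kernel name, data size, and launch configuration are assumptions chosen for illustration, not material from ECE 498. Each of roughly a million threads scales one array element, and the GPU schedules those threads across its parallel processors.

```cuda
// Minimal CUDA C sketch (illustrative; names and sizes are assumed).
// Each GPU thread handles exactly one element of the array.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's global index
    if (i < n)
        data[i] *= factor;                          // one element per thread
}

int main()
{
    const int n = 1 << 20;                          // about one million elements
    const size_t bytes = n * sizeof(float);

    // Prepare input on the host (CPU).
    float *h_data = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i)
        h_data[i] = 1.0f;

    // Copy to the GPU, launch thousands of threads, copy the result back.
    float *d_data;
    cudaMalloc(&d_data, bytes);
    cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);

    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    scale<<<blocks, threadsPerBlock>>>(d_data, 2.0f, n);

    cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);
    printf("data[0] = %f\n", h_data[0]);            // expect 2.000000

    cudaFree(d_data);
    free(h_data);
    return 0;
}
```

Compiled with NVIDIA’s nvcc compiler, the single kernel launch above creates on the order of a million lightweight threads, which the hardware distributes across the GPU’s parallel processors.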

When programmed correctly, massively parallel processors can work more than 100 times faster than current high-end personal computers for a fraction of the cost. According to Hwu, whenever such a dramatic jump in cost-performance occurs, many opportunities arise for new applications or new usage of existing applications.

“For example, a high-resolution MRI 3D image that used to take 10 hours to reconstruct now can take only about six minutes,” Hwu notes. “This makes it cost-effective to use MRI for real-time diagnosis. We want to make sure that our students are the ones that take advantage of these opportunities to revolutionize their own fields with these new capabilities.”

The shift to parallel processing presents a challenge, however, because most universities are not teaching students how to use the technology.

“Traditionally, computer science and computer engineering education has not really addressed parallel processing as an important part of programming. It’s a graduate course and it’s an elective,” Kirk says. “As we move forward with multi-core processors and highly parallel GPUs, everybody will need to know how to program massively parallel processors because that’s all there will be. There won’t be any single-core processors anymore.”

Both Kirk and Hwu hope the class will eventually become a core part of the curriculum. They believe all computer engineering and computer science students should be required to learn about parallel processors early in their undergraduate education so they can use the knowledge during the course of their studies. 



This story was published April 18, 2007.