Looking up: scaling up high-level synthesis for faster, more efficient chip design

9/7/2023

As large-scale AI models evolve, we need more advanced chips to support this technology. ECE graduate student Hanchen Ye is building a program to enable faster, more efficient chip design to power these AI systems. His work on high-level synthesis (HLS) received the First Place Winner Award at the 60th Design Automation Conference (DAC) PhD Forum in July 2023. DAC is the premier conference in Electronic Design Automation (EDA) and attracted more than 5,000 attendees this year.

Hanchen Ye

High-level synthesis is a design methodology that enables chip designers to use “high-level” programming languages such as C++ or Python to design computer chips rapidly and efficiently. Once users create a high-level description of the circuit, the high-level synthesis tool can automatically explore the design space and generate the low-level circuit design.
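To make the idea concrete, here is a minimal, plain-Python sketch of the kind of loop-based kernel a designer might hand to an HLS tool. The function, its shapes, and the comments about pipelining and unrolling are illustrative assumptions rather than code from Ye's work; real flows add tool-specific directives that are omitted here.

# A plain-Python sketch of a "high-level" kernel an HLS tool could compile:
# the loop nest describes *what* to compute, and the tool decides how to
# pipeline, unroll, and map it onto hardware.
def matvec(matrix, vector):
    """Dense matrix-vector multiply expressed as ordinary loops."""
    rows = len(matrix)
    cols = len(vector)
    result = [0.0] * rows
    for i in range(rows):          # an HLS tool might pipeline this loop
        acc = 0.0
        for j in range(cols):      # ...and unroll or partition this one
            acc += matrix[i][j] * vector[j]
        result[i] = acc
    return result

if __name__ == "__main__":
    A = [[1.0, 2.0], [3.0, 4.0]]
    x = [5.0, 6.0]
    print(matvec(A, x))  # [17.0, 39.0]

The HLS tool's job is to turn a description like this into a low-level, register-transfer-level circuit, choosing how much parallelism to build into the hardware.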

While most high-level synthesis tools cover only small- or mid-scale applications, Ye’s team is working on high-level synthesis that can scale up to build larger circuits, such as hardware accelerators for deep learning models defined in PyTorch. These tools are vital for today’s large-scale cloud computations, notably those driven by artificial intelligence.
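As a rough illustration of the input side of such a flow, the snippet below defines a small PyTorch model of the kind a scalable HLS tool chain could take as a starting point; the specific layers and sizes are hypothetical and are not taken from ScaleHLS's examples.

import torch
import torch.nn as nn

# A generic PyTorch model of the kind a scalable HLS flow could accept as
# input; the layer sizes here are illustrative, not from Ye's work.
class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.act = nn.ReLU()
        self.pool = nn.MaxPool2d(2)
        self.fc = nn.Linear(8 * 14 * 14, num_classes)

    def forward(self, x):
        x = self.pool(self.act(self.conv(x)))
        return self.fc(torch.flatten(x, start_dim=1))

if __name__ == "__main__":
    model = SmallCNN()
    # A 28x28 single-channel input, e.g. an MNIST-sized image.
    dummy = torch.randn(1, 1, 28, 28)
    print(model(dummy).shape)  # torch.Size([1, 10])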

The team’s first tool chain, ScaleHLS, is fully open source. It has already received around 3,000 downloads and over 40,000 views from chip designers and researchers around the world. The outcomes of this project were reported at the 28th IEEE International Symposium on High-Performance Computer Architecture (HPCA’22) and the Design Automation Conference 2022 (DAC’22). The team is also working on a newer version of ScaleHLS, which will appear at the 2024 ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS’24).

When you scale up high-level synthesis, Ye explains, “you need to think about how to handle a large number of small modules in your large application. We have a complicated hierarchy in large-scale applications, and traditional solutions cannot handle this.” Ye’s team worked out a multi-level solution, splitting the high-level synthesis process into three levels of growing complexity and performing suitable optimizations at each level of abstraction. They are the first group to use the Multi-Level Intermediate Representation (MLIR) compiler framework for high-level synthesis.
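The toy Python sketch below illustrates the general shape of this multi-level approach: a design is progressively lowered from coarse operators to loop nests to loops annotated with hardware directives. The class names, directive names, and passes here are hypothetical stand-ins for illustration only, not ScaleHLS's actual MLIR dialects or API.

# A toy illustration of progressive lowering through three abstraction
# levels. Everything here is a hypothetical stand-in, not ScaleHLS code.
from dataclasses import dataclass

@dataclass
class GraphLevel:
    """Level 1: whole operators on tensors (e.g., 'matmul', 'relu')."""
    ops: list

@dataclass
class LoopLevel:
    """Level 2: each operator expanded into an explicit loop nest."""
    loop_nests: list

@dataclass
class DirectiveLevel:
    """Level 3: loop nests tagged with hardware directives."""
    annotated: list

def lower_to_loops(graph: GraphLevel) -> LoopLevel:
    # Graph-level optimizations (e.g., fusing operators) would run before
    # each operator is expanded into loops.
    return LoopLevel([f"loop nest for {op}" for op in graph.ops])

def lower_to_directives(loops: LoopLevel) -> DirectiveLevel:
    # Loop-level optimizations (tiling, reordering) would run before each
    # nest is annotated for the hardware backend.
    return DirectiveLevel([(nest, {"pipeline": True, "unroll": 4})
                           for nest in loops.loop_nests])

if __name__ == "__main__":
    design = GraphLevel(ops=["matmul", "relu"])
    print(lower_to_directives(lower_to_loops(design)))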

His team is currently collaborating with scientists who work on high-energy physics, as well as industrial collaborators from both AMD and Intel. The high-level synthesis tool will be able to create high-performance chips that can handle the heavy workloads required for advanced scientific computations. Ye also emphasized that creating these efficient chips will help reduce carbon emissions and energy costs in data centers.

“Taking a long-term view, it’s exciting to work on these technologies that can really change something,” Ye comments. “And in my day-to-day research, I enjoy the balance of working with 50% familiar things and 50% unknown.”

Professor Deming Chen

Ye’s advisor, Abel Bliss Professor of Engineering Deming Chen, comments: “ScaleHLS and its follow-up works represent an exciting new direction for high-level synthesis that raises the design abstraction level even higher than what conventional high-level languages can offer. For the first time, we can directly take large PyTorch models, go through multiple levels of optimizations, and generate high-quality hardware circuits. Hanchen’s work is groundbreaking from this aspect, and I am very proud of his achievements!”

This project is funded by the NSF Accelerated Artificial Intelligence Algorithms for Data-Driven Discovery (A3D3) Center and the Semiconductor Research Corporation. Ye’s team also received resources from the IBM-Illinois Discovery Accelerator Institute and the AMD-Xilinx Center of Excellence.

Hanchen Ye's award-winning poster for the PhD Forum at DAC 2023.

This story was published September 7, 2023.