Energy-efficient AI Hardware

Artificial Intelligence (AI) has been widely studied due to its high performance in applications such as classification, translation, and recognition. With the help of AI hardware platforms such as GPUs, model inference runs much faster than on CPUs. However, in many power-limited scenarios such as edge computing, these power-hungry platforms face serious constraints on energy consumption. Developing energy-efficient AI hardware systems is therefore very important. I am very interested in this direction, and it is the focus of my PhD research.

Generally, an AI hardware system needs highly optimized Processing Elements (PEs) for computation, a domain-specific architecture, and an efficient compiler to schedule data mapping and movement. At the architecture level, the key optimization points are the interconnection and dataflow of PEs within a hierarchical memory architecture. For a given multi-level hardware architecture, there are thousands to millions of possible data mapping and data movement strategies, so finding the best mapping and dataflow for different goals, such as power or latency, is the task of the compiler. The compiler represents an AI model as loop nests, memory addresses, and computational functions, then optimizes and schedules these tasks to run efficiently on each PE. Although AI hardware has been deeply studied in academia, several problems and bottlenecks remain open, and solving them is the objective of my PhD research.
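To make the mapping-search idea above concrete, here is a minimal sketch, not taken from any specific accelerator or compiler, of how a compiler might enumerate tiling choices for a matrix-multiply loop nest and score each one with a toy off-chip-traffic model. The problem sizes, buffer capacity, and cost formula are all illustrative assumptions.

```python
# Hypothetical sketch: exhaustively search tile sizes for
# C[M,N] += A[M,K] * B[K,N], keeping only tilings whose working set
# fits the on-chip buffer, and picking the one with the least
# estimated DRAM traffic. The cost model is a deliberate simplification.
from itertools import product

M, N, K = 64, 64, 64      # assumed problem size
BUFFER_WORDS = 2048       # assumed on-chip buffer capacity (in words)

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def footprint(tm, tn, tk):
    # Words of A, B, and C tiles resident on chip at once.
    return tm * tk + tk * tn + tm * tn

def traffic(tm, tn, tk):
    # Toy cost: every tile is fetched/written once per tile iteration.
    tiles = (M // tm) * (N // tn) * (K // tk)
    return tiles * footprint(tm, tn, tk)

best = None
for tm, tn, tk in product(divisors(M), divisors(N), divisors(K)):
    if footprint(tm, tn, tk) > BUFFER_WORDS:
        continue  # this tiling does not fit in the on-chip buffer
    cost = traffic(tm, tn, tk)
    if best is None or cost < best[0]:
        best = (cost, (tm, tn, tk))

print(best)  # (estimated traffic in words, (tile_M, tile_N, tile_K))
```

Real mapping spaces also include loop order, parallelization across PEs, and multi-level buffers, which is why practical compilers replace this brute-force loop with pruning heuristics or analytical models.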

This project is also related to the Mixed-Precision AI Hardware and Architecture-Compiler Co-Design projects.

Jiajun Wu
PhD Student

My research interests include hardware accelerators, reconfigurable computing, and computer architecture.
