VVIP Lab

Vertically-integrated VLSI Information Processing

Instruction-set Architecture (ISA) with Mixed-signal Computing

Project motivation: Mixed-signal processing offers a significant opportunity to achieve aggressive energy and delay efficiency in AI systems by exploiting the inherent error resiliency of machine learning algorithms. These benefits motivated us to develop a programmable instruction-set architecture with mixed-signal computing (Fig. 1 and 2). However, it was crucial to provide a user-friendly programming interface without exposing the mixed-signal hardware details to the programmer. This was achieved through LLVM-based compiler support, developed in collaboration with Professor Vikram Adve and his graduate student Prakalp Srivastava in the Computer Science Department at UIUC.

Project description: Analog/mixed-signal machine learning (ML) accelerators exploit the unique computing capability of analog/mixed-signal circuits and the inherent error tolerance of ML algorithms to obtain higher energy efficiency than digital ML accelerators. Unfortunately, these analog/mixed-signal ML accelerators lack programmability, and even instruction-set interfaces, to support diverse ML algorithms or to enable essential software control over the energy-vs-accuracy trade-off. We proposed PROMISE, the first end-to-end design of a PROgrammable MIxed-Signal accElerator, spanning from the instruction-set architecture (ISA) to a high-level language compiler, for acceleration of diverse ML algorithms. First, we identified prevalent operations in widely used ML algorithms and the key constraints in supporting these operations on a programmable mixed-signal accelerator. Second, based on that analysis, we proposed an ISA and the PROMISE architecture, built from silicon-validated components for mixed-signal operations. Third, we developed a compiler (Fig. 3 and 4) that takes an ML algorithm described in a high-level programming language (Julia) and generates PROMISE code, using an IR design that is both language-neutral and abstracts away unnecessary hardware details. Fourth, we showed how the compiler can map an application-level error-tolerance specification for neural network applications down to low-level hardware parameters (swing voltages for each application task) to minimize energy consumption. Our experiments show that PROMISE can accelerate diverse ML algorithms with energy efficiency competitive even with fixed-function digital ASICs designed for specific ML algorithms, and that the compiler optimization achieves significant additional energy savings at the cost of only 1% additional error (Fig. 5).
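The compiler's energy-vs-accuracy step can be illustrated with a small sketch. This is not the actual PROMISE compiler pass: the swing-voltage levels and the energy and error models below are hypothetical placeholders, used only to show the shape of the optimization (pick the lowest per-task swing voltage whose modeled error still fits the application-level error budget).

```python
# Illustrative sketch only -- the voltage levels and the energy/error
# models here are hypothetical, not PROMISE's actual hardware parameters.

# Candidate swing voltages (volts) assumed to be exposed by the hardware.
SWING_LEVELS = [0.2, 0.4, 0.6, 0.8, 1.0]

def task_energy(v):
    """Hypothetical model: mixed-signal compute energy scales with V^2."""
    return v ** 2

def task_error(v):
    """Hypothetical model: analog-noise-induced error shrinks as swing grows."""
    return 0.05 / v

def pick_swings(num_tasks, error_budget):
    """Assign each task the lowest swing voltage whose modeled error fits
    an equal share of the application-level error budget."""
    per_task_budget = error_budget / num_tasks
    swings = []
    for _ in range(num_tasks):
        feasible = [v for v in SWING_LEVELS if task_error(v) <= per_task_budget]
        # Fall back to the maximum swing if no level meets the budget.
        swings.append(min(feasible) if feasible else max(SWING_LEVELS))
    return swings

# Example: 4 tasks sharing a (hypothetical) total error budget of 0.4.
swings = pick_swings(num_tasks=4, error_budget=0.4)
energy = sum(task_energy(v) for v in swings)
```

A real compiler pass would replace these closed-form models with profiled or silicon-measured error and energy characteristics per operation, but the selection logic (lowest-energy setting that satisfies the accuracy constraint) is the same idea.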

[1] P. Srivastava*, M. Kang* (*equal contribution), S. Gonugondla, J. Choi, N. Kim, V. Adve, and N. R. Shanbhag, "PROMISE: An End-to-End Design of a Programmable Mixed-Signal Accelerator for Machine-Learning Algorithms," IEEE/ACM International Symposium on Computer Architecture (ISCA), June 2018. (Honorable Mention, IEEE Micro Top Picks 2019.)

[2] M. Kang, P. Srivastava, V. Adve, N. Kim, and N. R. Shanbhag, "An Energy-Efficient Programmable Mixed-Signal Accelerator for Machine Learning Algorithms," IEEE Micro, vol. 39, no. 5, pp. 64-72, July 2019.

The contents are adapted from IEEE publications. © 2014-2020 IEEE


UCSD Electrical and Computer Engineering (ECE) Department