Sparse Distributed Memory

Project description: The Sparse Distributed Memory (SDM) (Figs. 1 and 2) is a computational model of the human brain. An SDM can be trained to store sparse data vectors and to retrieve them when presented with noisy or incomplete versions of the stored vectors. This resembles the human brain's ability to recall related memories from noisy sensory input by conceptualizing and categorizing incomplete information. Moreover, the sparse and distributed nature of the data processed and stored in an SDM provides inherent robustness to noise and imprecision in the input. However, a straightforward SDM implementation is slow and energy-hungry because the SDM operates in a high-dimensional (hyperdimensional) space requiring large amounts of data processing. In addition, conventional digital processing fails to exploit the inherent error resiliency of high-dimensional vectors. A mixed-signal in-memory computing platform is a promising solution for such memory-intensive algorithms, achieving aggressive energy and throughput benefits by exploiting this inherent algorithmic robustness. The proposed architecture achieves 25× delay and 12× energy reductions, with no loss of output accuracy, for handwritten digit recognition with a 25% input bad-pixel ratio.
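As a rough illustration of the store/recall behavior described above, the sketch below implements a minimal Kanerva-style SDM in Python. The hard-location count, dimensionality, and activation radius are illustrative choices, not the parameters of the proposed architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

class SDM:
    """Minimal Kanerva-style Sparse Distributed Memory sketch.

    All sizes below are illustrative, not the paper's hardware
    parameters.
    """

    def __init__(self, n_locations=2000, dim=256, radius=115):
        self.radius = radius
        # Fixed random binary addresses of the hard locations.
        self.addresses = rng.integers(0, 2, size=(n_locations, dim))
        # One integer counter per location per bit.
        self.counters = np.zeros((n_locations, dim), dtype=np.int32)

    def _active(self, addr):
        # Activate every location within Hamming radius of the address.
        dist = np.sum(self.addresses != addr, axis=1)
        return dist <= self.radius

    def write(self, addr, data):
        # Add the data in bipolar (+1/-1) form into active locations.
        self.counters[self._active(addr)] += 2 * data - 1

    def read(self, addr):
        # Sum counters over active locations and threshold at zero.
        s = self.counters[self._active(addr)].sum(axis=0)
        return (s > 0).astype(int)

# Autoassociative demo: store a vector, recall it from a noisy cue.
mem = SDM()
pattern = rng.integers(0, 2, 256)
mem.write(pattern, pattern)

noisy = pattern.copy()
flip = rng.choice(256, size=25, replace=False)  # ~10% bit flips
noisy[flip] ^= 1

recalled = mem.read(noisy)
accuracy = float(np.mean(recalled == pattern))
```

Because the stored pattern is written into many hard locations at once, the noisy cue still activates a large overlap of those locations, and the thresholded counter sums reproduce the original vector.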

[1] M. Kang and N. R. Shanbhag, “In-memory Computing Architectures for Sparse Distributed Memory,” IEEE Transactions on Biomedical Circuits and Systems [Invited], vol. 10, no. 4, pp. 855-863, Aug. 2016.

[2] M. Kang, E. P. Kim, M. S. Keel, and N. R. Shanbhag, “Energy-efficient and High Throughput Sparse Distributed Memory Architecture,” in Proc. IEEE International Symposium on Circuits and Systems (ISCAS), May 2015, pp. 2505-2508. Best Paper Award in “Neural Systems and Applications.”

Deep In-memory Computing for Convolutional Neural Network

Research description: This research employs deep in-memory computing (DIMA) (Fig. 1) to implement energy-efficient and high-throughput convolutional neural networks (CNNs). Specifically, the contributions of this research are: 1) a multi-bank DIMA (Fig. 2) to implement LeNet-5, 2) an efficient data storage format for parallel processing, 3) a mixed-signal multiplier (Fig. 3) to enable the reuse of functional READ outputs, 4) efficient data movement techniques for input and output FMs (Figs. 2 and 4), 5) energy, delay, and behavioral models validated in silicon for large-scale system simulations to estimate the application-level benefits, and 6) the use of CNN retraining to minimize the impact of non-ideal analog behavior. The proposed DIMA-based CNN architecture is shown to provide up to a 4.9× energy saving, a 2.4× throughput improvement, and an 11.9× energy-delay product (EDP) reduction compared to a conventional architecture (SRAM + digital processor).
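The behavioral-modeling idea in contribution 5) can be sketched as follows: the analog dot product performed by the functional READ and mixed-signal multiplier is approximated as the ideal result plus additive Gaussian noise, and a convolution is built from such dot products. The noise level and function names here are illustrative assumptions, not the paper's silicon-calibrated models:

```python
import numpy as np

rng = np.random.default_rng(1)

def dima_dot(x, w, sigma=0.05):
    """Behavioral sketch of one DIMA analog dot product: the ideal
    result plus Gaussian noise proportional to its magnitude.
    sigma is an illustrative noise level, not a measured value."""
    y = float(x @ w)
    return y + rng.normal(0.0, sigma * (abs(y) + 1e-9))

def conv_valid(img, kernel, dot=lambda x, w: float(x @ w)):
    """'Valid' 2-D convolution built from dot products, so the same
    code can run with the ideal or the noisy (DIMA) dot product."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    kvec = kernel.ravel()
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = dot(img[i:i + kh, j:j + kw].ravel(), kvec)
    return out

# Compare ideal and noisy feature maps on random data.
img = rng.standard_normal((8, 8))
kernel = rng.standard_normal((3, 3))
ideal = conv_valid(img, kernel)
noisy = conv_valid(img, kernel, dot=dima_dot)
rel_err = np.abs(noisy - ideal).mean() / np.abs(ideal).mean()
```

Running the full network through such a noisy behavioral model is what makes noise-aware retraining (contribution 6) possible: the CNN weights can be adapted to absorb the modeled analog non-ideality.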

[1] M. Kang, S. Lim, S. Gonugondla, and N. R. Shanbhag, “An In-Memory VLSI Architecture for Convolutional Neural Networks,” IEEE Journal on Emerging and Selected Topics in Circuits and Systems, [Invited], Apr. 2018.

[2] M. Kang, S. K. Gonugondla, and N. R. Shanbhag, “An Energy-efficient Memory-based High-Throughput VLSI Architecture for Convolutional Networks,” in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Apr. 2015, pp. 1037-1041.

The contents are adapted from IEEE publications, © 2014-2020 IEEE.

UCSD Electrical and Computer Engineering (ECE) Department