VVIP Lab

Vertically-integrated VLSI Information Processing

Book

[1] M. Kang, S. K. Gonugondla, and N. R. Shanbhag, “Deep In-memory Architectures for Machine Learning,” Vol. 1, pp. 1–197, Springer, Dec. 2019 (first printing).

E-book: https://link.springer.com/book/10.1007%2F978-3-030-35971-3

Journals

[1] M. Kang, S. Gonugondla, and N. R. Shanbhag, “Deep In-memory Architectures in SRAM: An Analog Approach to Approximate Computing,” Proceedings of the IEEE, [Invited], Vol. 108, No. 12, pp. 2251–2275, Dec. 2020.

[2] S. Venkataramani, X. Sun, N. Wang, C. Y. Chen, J. Choi, M. Kang, et al., “Efficient AI System Design with Cross-layer Approximate Computing,” Proceedings of the IEEE, [Invited], Vol. 108, No. 12, pp. 2232–2250, Dec. 2020.

[3] M. Kang, Y. Kim, A. Patil, and N. R. Shanbhag, “Deep In-Memory Architectures for Machine Learning: Accuracy Versus Efficiency Trade-Offs,” IEEE Transactions on Circuits and Systems I (TCAS-I), Vol. 67, No. 5, May 2020.

[4] M. Kang, P. Srivastava, V. Adve, N. S. Kim, and N. R. Shanbhag, “An Energy-Efficient Programmable Mixed-Signal Accelerator for Machine Learning Algorithms,” IEEE Micro, Vol. 39, No. 5, pp. 64–72, Jul. 2019.

[5] S. Gonugondla, M. Kang, and N. R. Shanbhag, “A Variation-Tolerant In-Memory Machine Learning Classifier via On-Chip Training,” IEEE Journal of Solid-State Circuits (JSSC), [Invited], Sept. 2018.

[6] M. Kang, S. Lim, S. Gonugondla, and N. R. Shanbhag, “An In-Memory VLSI Architecture for Convolutional Neural Networks,” IEEE Journal on Emerging and Selected Topics in Circuits and Systems, [Invited], Apr. 2018.

[7] M. Kang, S. Gonugondla, S. Lim, and N. R. Shanbhag, “A 19.4 nJ/decision, 364K decisions/s, In-memory Random Forest Multi-class Inference Accelerator,” IEEE Journal of Solid-State Circuits (JSSC), [Invited], Jul. 2018.

[8] Y. Kim, M. Kang, L. R. Varshney, and N. R. Shanbhag, “Generalized Water-filling for Source-Aware Energy-Efficient SRAMs,” IEEE Transactions on Communications (TCOM), May 2018.

[9] M. Kang, S. Gonugondla, A. Patil, and N. R. Shanbhag, “A Multi-Functional In-Memory Inference Processor Using a Standard 6T SRAM Array,” IEEE Journal of Solid-State Circuits (JSSC), Vol. 53, No. 2, pp. 642–655, Jan. 2018.

[10] Y. Kim, M. Kang, L. R. Varshney, and N. R. Shanbhag, “Generalized Water-filling for Source-Aware Energy-Efficient SRAMs,” arXiv preprint, https://arxiv.org/pdf/1710.07153, Nov. 2017.

[11] M. Kang and N. R. Shanbhag, “In-memory Computing Architectures for Sparse Distributed Memory,” IEEE Transactions on Biomedical Circuits and Systems, [Invited], Vol. 10, No. 4, pp. 855–863, Aug. 2016.

[12] S. Zhang, M. Kang, C. Sakr, and N. R. Shanbhag, “Reducing the Energy Cost of Inference via In-sensor Information Processing,” arXiv preprint, https://arxiv.org/abs/1607.00667, Jul. 2016.

[13] M. Kang, H. K. Park, J. Wang, G. Yeap, and S. O. Jung, “Asymmetric Independent-Gate MOSFET SRAM for High Stability,” IEEE Transactions on Electron Devices (TED), Vol. 58, No. 9, pp. 2959–2965, Sept. 2011.

[14] M. Kang, M. H. Abu-Rahma, L. Ge, B. M. Han, J. Wang, G. Yeap, and S. O. Jung, “FinFET SRAM Optimization with Fin Thickness and Surface Orientation,” IEEE Transactions on Electron Devices (TED), Vol. 57, No. 11, pp. 2785–2793, Nov. 2010.

[15] M. Kang and S. O. Jung, “Serial-Parallel Content Addressable Memory with a Conditional Driver (SPCwCD),” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, Vol. E92-A, No. 1, pp. 318–321, Nov. 2009.

[16] M. Kang, S. H. Woo, and S. O. Jung, “Dynamic Mixed Serial-Parallel Content Addressable Memory (DMSP CAM),” International Journal of Circuit Theory and Applications, Vol. 41, No. 7, pp. 721–731, Jul. 2013.

Conferences

[1] J. Joo, M. Yoon, J. Choi, M. Kang, et al., “Understanding and Reducing Block-Load Overhead of Systolic Deep Learning Accelerators,” International SoC Conference (ISOCC), Oct. 2021.

[2] S. Venkataramani, et al. (incl. M. Kang and K. Gopalakrishnan), “RaPiD: AI Accelerator for Ultra-low Precision Training and Inference,” International Symposium on Computer Architecture (ISCA), Jun. 2021.

[3] A. Agrawal, S. K. Lee, J. Silberman, M. Ziegler, M. Kang, et al., “A 7nm 4-Core AI Chip with 25.6TFLOPS Hybrid FP8 Training, 102.4TOPS INT4 Inference and Workload-Aware Throttling,” IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC), Feb. 2021.

[4] J. Oh, S. Lee, M. Kang, et al., “A 3.0 TFLOPS 0.62V Scalable Processor Core for High Compute Utilization AI Training and Inference,” IEEE Symposium on VLSI Circuits (VLSI Symposium), Jun. 2020.

[5] A. D. Patil, H. Hua, M. Kang, and N. R. Shanbhag, “An MRAM-Based Deep In-Memory Architecture for Deep Neural Networks,” IEEE International Symposium on Circuits and Systems (ISCAS), May 2019.

[6] Y. Kim, M. Kang, L. R. Varshney, and N. R. Shanbhag, “SRAM Bit-line Swings Optimization using Generalized Waterfilling,” IEEE International Symposium on Information Theory (ISIT), Jun. 2018.

[7] P. Srivastava*, M. Kang* (*equal contribution), S. Gonugondla, J. Choi, N. S. Kim, V. Adve, and N. R. Shanbhag, “PROMISE: An End-to-End Design of a Programmable Mixed-Signal Accelerator for Machine-Learning Algorithms,” International Symposium on Computer Architecture (ISCA), Jun. 2018. (IEEE Micro Top Picks Honorable Mention, 2019.)

[8] S. K. Gonugondla, M. Kang, and N. R. Shanbhag, “Energy-Efficient Deep In-memory Architecture for NAND Flash Memories,” IEEE International Symposium on Circuits and Systems (ISCAS), May 2018. (Best Paper Award, Neural Systems and Applications track.)

[9] S. K. Gonugondla, M. Kang, and N. R. Shanbhag, “A 42pJ/Decision 3.12TOPS/W Robust In-Memory Machine Learning Classifier with On-Chip Training,” IEEE International Solid-State Circuits Conference (ISSCC), Feb. 2018.

[10] M. Kang, S. K. Gonugondla, and N. R. Shanbhag, “A 19.4 nJ/decision 364 K decisions/s in-memory random forest classifier in 6T SRAM array,” IEEE European Solid-State Circuits Conference (ESSCIRC), Sep. 2017, pp. 263–266.

[11] M. Kang, S. Gonugondla, A. D. Patil, and N. R. Shanbhag, “A 481pJ/decision 3.4M decision/s Multifunctional Deep In-memory Inference Processor,” arXiv preprint, https://arxiv.org/abs/1610.07501, Oct. 2016.

[12] M. Kang, S. K. Gonugondla, and N. R. Shanbhag, “An Energy-efficient Memory-based High-Throughput VLSI Architecture for Convolutional Networks,” IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Apr. 2015, pp. 1037–1041.

[13] M. Kang, E. P. Kim, M. S. Keel, and N. R. Shanbhag, “Energy-efficient and High Throughput Sparse Distributed Memory Architecture,” IEEE International Symposium on Circuits and Systems (ISCAS), May 2015, pp. 2505–2508. (Best Paper Award, Neural Systems and Applications track.)

[14] M. Kang, M. S. Keel, N. R. Shanbhag, S. Eilert, and K. Curewitz, “An Energy-efficient VLSI Architecture for Pattern Recognition via Deep Embedding of Computation in SRAM,” IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May 2014, pp. 8326–8330.

[15] Y. Choi, et al. (incl. M. Kang), “A 20nm 1.8V 8Gb PRAM with 40MB/s Program Bandwidth,” IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC), Feb. 2012, pp. 46–48.

[16] H. K. Park, S. C. Song, M. H. Abu-Rahma, L. Ge, M. Kang, B. M. Han, J. Wang, R. Choi, S. O. Jung, and G. Yeap, “Accurate Projection of Vccmin by Modeling “Dual Slope” in FinFET-based SRAM, and Impact of Long Term Reliability on End of Life Vccmin,” IEEE International Reliability Physics Symposium, May 2010, pp. 1008–1013.


UCSD Electrical and Computer Engineering (ECE) Department