Accelerator Architectures for Training Deep Neural Networks

Prof. Keshab K. Parhi, Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis

Event Organized By:

Circuits and Systems Society (CASS) of the IEEE Santa Clara Valley Section

Abstract:

Machine learning and data analytics continue to drive the fourth industrial revolution and affect many aspects of our lives. This talk will explore hardware accelerator architectures for deep neural networks (DNNs). I will begin with a brief review of the history of neural networks. Training deep neural networks requires orders of magnitude more energy and latency than inference, so there is a need to reduce both the latency and the energy consumption of training. I will first describe InterGrad, an approach that reduces latency and memory accesses in accelerator architectures for training DNNs by interleaving gradient computations on systolic arrays. I will then present our recent work on LayerPipe, an approach to training deep neural networks that enables simultaneous intra-layer and inter-layer pipelining. LayerPipe improves processor utilization and speeds up training without increasing communication costs.
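
For readers unfamiliar with why training is costlier than inference and why gradient computations lend themselves to interleaving and pipelining, the short NumPy sketch below traces one training step of a tiny two-layer network. It is purely illustrative and is not code from the InterGrad or LayerPipe work; the comments simply mark which backward-pass matrix products are independent and could therefore be overlapped on an accelerator.

```python
# Illustrative only: one training step for a tiny two-layer MLP, annotated
# with the gradient products that hardware interleaving/pipelining can overlap.
# (Hypothetical example; not code from the InterGrad or LayerPipe papers.)
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16))       # batch of 8 inputs, 16 features
t = rng.standard_normal((8, 4))        # targets
W1 = rng.standard_normal((16, 32)) * 0.1
W2 = rng.standard_normal((32, 4)) * 0.1

# Forward pass (the same work inference performs)
h = x @ W1
a = np.maximum(h, 0.0)                 # ReLU
y = a @ W2
loss = 0.5 * np.sum((y - t) ** 2)

# Backward pass (the extra work training adds)
dy  = y - t                            # output error
dW2 = a.T @ dy                         # weight gradient, layer 2
da  = dy @ W2.T                        # activation gradient sent to layer 1
# dW2 and da share the input dy but are otherwise independent matrix
# products, so an accelerator can compute them concurrently (interleaved)
# rather than serially; weight updates for later layers can likewise be
# overlapped with computation for earlier layers (pipelined).
dh  = da * (h > 0.0)
dW1 = x.T @ dh

lr = 1e-2
W1 -= lr * dW1                         # gradient-descent weight updates
W2 -= lr * dW2
print(f"loss = {loss:.4f}")
```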

Bio:

Keshab K. Parhi received the B.Tech. degree from the Indian Institute of Technology (IIT), Kharagpur, in 1982, the M.S.E.E. degree from the University of Pennsylvania, Philadelphia, in 1984, and the Ph.D. degree from the University of California, Berkeley, in 1988. He has been with the University of Minnesota, Minneapolis, since 1988, where he is currently Distinguished McKnight University Professor and Edgar F. Johnson Professor of Electronic Communication in the Department of Electrical and Computer Engineering. He has published over 680 papers, is the inventor of 32 patents, and has authored the textbook VLSI Digital Signal Processing Systems (Wiley, 1999) and coedited the reference book Digital Signal Processing for Multimedia Systems (Marcel Dekker, 1999). His current research addresses VLSI architecture design of machine learning systems, hardware security, data-driven neuroscience, and molecular/DNA computing. Dr. Parhi is the recipient of numerous awards, including the 2017 Mac Van Valkenburg Award and the 2012 Charles A. Desoer Technical Achievement Award from the IEEE Circuits and Systems Society, the 2004 F. E. Terman Award from the American Society for Engineering Education, and the 2003 IEEE Kiyo Tomiyasu Technical Field Award. He served as the Editor-in-Chief of the IEEE Transactions on Circuits and Systems, Part I, during 2004 and 2005. He is a Fellow of the IEEE, ACM, AAAS, and the National Academy of Inventors.

Admission Fee:

Admission is free for all. Suggested donations:

Non-IEEE:  $5, Students (non-IEEE): $3, IEEE Members (not members of CASS or SSCS): $3

Date and time

Wed, Jan. 20, 2021, 6:00 PM – 7:00 PM PST

Location

Online event

Register:

https://www.eventbrite.com/e/accelerator-architectures-for-training-deep-neural-networks-tickets-240701744397

 

Dr. Keshab K. Parhi