Case ID: M21-157P

Published: 2022-02-21 12:59:47

Last Updated: 2023-02-23 07:12:16


Inventor(s)

Sai Kiran Cherupally
Jae-Sun Seo
Deliang Fan
Shihui Yin
Jian Meng

Technology categories

Computing & Information Technology
Intelligence & Security
Physical Science

Technology keywords

Algorithm Development
Artificial Intelligence
Electronics
Neural Computing


Licensing Contacts

Shen Yan
Director of Intellectual Property - PS
[email protected]

Hardware-Noise-Aware Training for Improved Accuracy of In-Memory-Computing-Based Deep Neural Networks

Background
Deep neural networks (DNNs) have been highly successful in large-scale recognition tasks, but they impose heavy computation and memory requirements. To address the memory bottleneck of digital DNN hardware accelerators, in-memory computing (IMC) designs have been proposed that perform analog DNN computations inside the memory array. Recent IMC designs have demonstrated high energy efficiency, but at the expense of noise margin: as more rows are activated in parallel, analog noise can degrade DNN inference accuracy.
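To illustrate why high parallelism erodes the noise margin, the following is a minimal sketch of one IMC bitline operation: many rows are activated at once, the accumulated analog partial sum picks up noise, and a low-resolution ADC quantizes the result. The function name, the additive Gaussian noise model, and the ADC parameters are illustrative assumptions, not details of the ASU design.

import numpy as np

def imc_column_mac(weights, activations, adc_bits=5, noise_std=0.02, rng=None):
    """Simulate one IMC bitline: an analog dot product over all activated
    rows, perturbed by analog noise, then quantized by a low-resolution ADC.
    The Gaussian noise model and parameter values are illustrative only."""
    if rng is None:
        rng = np.random.default_rng()
    ideal = float(weights @ activations)            # analog partial sum over all rows
    # Assume noise grows with the number of rows summed on the bitline.
    noisy = ideal + rng.normal(0.0, noise_std * len(weights))
    # The ADC clips and quantizes the accumulated value to a few bits.
    levels = 2 ** adc_bits
    full_scale = len(weights)                       # max sum for binary weights/activations in {0, 1}
    code = np.clip(np.round(noisy / full_scale * (levels - 1)), 0, levels - 1)
    return code / (levels - 1) * full_scale

rows = 256                                          # rows activated together, as in the text
w = np.random.randint(0, 2, rows)
x = np.random.randint(0, 2, rows)
print(imc_column_mac(w, x), "vs ideal", w @ x)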

Invention Description
Researchers at Arizona State University have developed a novel hardware-noise-aware DNN training scheme that largely recovers the accuracy loss of highly parallel (e.g., 256 rows activated together) IMC hardware. Performance results were obtained with noise-aware training and inference on several DNNs, including ResNet-18, AlexNet, and VGG, with binary, 2-bit, and 4-bit activation/weight precision on the CIFAR-10 dataset. Furthermore, using noise data measured from five different chips, the method's effectiveness was evaluated with individual chips' noise data versus the ensemble noise from multiple chips. Across these DNNs and IMC chip measurements, the proposed hardware-noise-aware training consistently improves DNN inference accuracy on actual IMC hardware, by up to 17% on CIFAR-10.
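As a rough illustration of noise-aware training, the sketch below injects noise into a layer's partial sums during the forward pass so the learned weights tolerate IMC read-out variation. The NoisyLinear name, the Gaussian noise model, and noise_std are assumptions made for illustration; the actual scheme trains with noise statistics measured from IMC chips rather than a fixed synthetic distribution.

import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    """Linear layer that adds noise to its output during training,
    emulating analog IMC partial-sum noise. Illustrative sketch only:
    a real noise-aware scheme would draw from measured chip noise."""
    def __init__(self, in_features, out_features, noise_std=0.05):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.noise_std = noise_std

    def forward(self, x):
        out = self.linear(x)
        if self.training and self.noise_std > 0:
            # Zero-mean Gaussian perturbation of each partial sum; the
            # network learns weights robust to this hardware-like noise.
            out = out + torch.randn_like(out) * self.noise_std
        return out

layer = NoisyLinear(256, 10, noise_std=0.05)
layer.train()
y_noisy = layer(torch.randn(4, 256))    # noise injected during training
layer.eval()
y_clean = layer(torch.randn(4, 256))    # deterministic at evaluation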

Potential Applications
•    Deep neural networks
•    In-memory computing

Research Homepage of Professor Jae-Sun Seo