As AI processing moves from the cloud to the edge of the network, battery-powered and deeply embedded devices are challenged to perform AI functions such as computer vision and voice recognition. Microchip Technology, via its Silicon Storage Technology (SST) subsidiary, is meeting this challenge by significantly decreasing power with its analog memory technology, the memBrain neuromorphic memory solution. Based on the company's SuperFlash technology and optimized to perform vector-matrix multiplication (VMM) for neural networks, the analog flash memory solution improves system-level VMM implementation through an analog in-memory compute approach, enhancing AI inference at the edge.
Because current neural network models may require 50M or more synaptic weights for processing, providing sufficient bandwidth to off-chip DRAM becomes difficult, creating a bottleneck for neural-network computing and driving up overall compute power. In contrast, the memBrain solution stores synaptic weights in on-chip floating-gate cells, delivering notable improvements in system latency. Compared with conventional digital DSP and SRAM/DRAM-based approaches, it achieves 10 to 20 times lower power and a significantly reduced overall bill of materials (BOM).
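To make the operation concrete, the sketch below shows the vector-matrix multiplication at the heart of neural-network inference in plain Python. This is purely illustrative and not Microchip's implementation: memBrain performs the same mathematics in the analog domain, with weights held as floating-gate charge and the multiply-accumulate realized by summing cell currents inside the memory array. All function and variable names here are hypothetical.

```python
# Illustrative sketch (not Microchip's implementation) of the vector-matrix
# multiply (VMM) that dominates neural-network inference workloads.

def vmm(inputs, weights):
    """Multiply an input activation vector by a synaptic weight matrix.

    In an analog in-memory compute array, each weight is stored as charge
    on a floating-gate cell, and every multiply-accumulate below happens
    in parallel inside the array rather than via DRAM fetches.
    """
    rows = len(weights)       # one row per output neuron
    cols = len(weights[0])    # one column per input activation
    assert len(inputs) == cols, "input length must match weight columns"
    return [sum(weights[r][c] * inputs[c] for c in range(cols))
            for r in range(rows)]

# Example: a 2-output layer over a 3-element input vector.
layer_weights = [[0.5, -1.0, 2.0],
                 [1.0,  0.0, -0.5]]
activations = [1.0, 2.0, 3.0]
print(vmm(activations, layer_weights))  # [4.5, -0.5]
```

A full model repeats this operation across many layers with millions of weights, which is why keeping the weights in the compute fabric, rather than shuttling them from off-chip DRAM, pays off in both power and latency.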
“As technology providers for the automotive, industrial and consumer markets continue to implement VMM for neural networks, our architecture helps these forward-facing solutions realize power, cost and latency benefits,” said Mark Reiten, vice president of the license division at SST. “Microchip will continue to deliver highly reliable and versatile SuperFlash memory solutions for AI applications.”
“Microchip’s memBrain solution enables ultra-low-power in-memory computation for our forthcoming analog neural network processors,” said Kurt Busch, CEO of Syntiant Corp. “Our partnership with Microchip continues to offer Syntiant many critical advantages as we support pervasive machine learning for always-on applications in voice, image and other sensor modalities in edge devices.”