Developing an analog compute platform to accelerate edge AI/ML inferencing

21-09-2023 | Microchip Technology | Semiconductors

Microchip Technology, through its Silicon Storage Technology (SST) subsidiary, is helping to develop this platform by supplying IHWK with an evaluation system for its SuperFlash memBrain neuromorphic memory solution. The solution is based on Microchip’s industry-proven SuperFlash nonvolatile memory (NVM) technology and is optimised to perform vector-matrix multiplication (VMM) for neural networks via an analog in-memory compute approach.
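VMM is the kernel that dominates neural-network inference: each layer multiplies an activation vector by a weight matrix. A minimal digital sketch of that operation (illustrative only, using NumPy; in the memBrain approach the weight matrix resides in flash cells and the multiply-accumulate happens in the analog domain):

```python
import numpy as np

# Illustrative model of the vector-matrix multiply (VMM) at the heart of
# neural-network inference. In analog in-memory compute, the weight matrix
# stays in the memory array and the accumulation happens in place; here we
# simply compute the same result digitally. Sizes are arbitrary examples.

rng = np.random.default_rng(0)
activations = rng.standard_normal(128)      # input vector x
weights = rng.standard_normal((128, 64))    # synaptic weights W (stored on-chip in memBrain)

outputs = activations @ weights             # y = x . W -- one VMM per layer
print(outputs.shape)                        # (64,)
```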

The memBrain technology evaluation kit is designed to let IHWK demonstrate the power efficiency of its neuromorphic computing platform for running inferencing algorithms at the edge. The goal is to develop an ultra-low-power analog processing unit (APU) for applications such as generative AI models, autonomous cars, medical diagnosis, voice processing, security/surveillance and commercial drones.

Because existing neural network models for edge inference may require 50 million or more synapses (weights), providing enough bandwidth to the off-chip DRAM that purely digital solutions require becomes demanding, creating a bottleneck that throttles overall compute throughput. In contrast, the memBrain solution stores synaptic weights in on-chip floating-gate cells operated in ultra-low-power sub-threshold mode and uses the same memory cells to perform the computations, offering considerable improvements in power efficiency and system latency. Compared with conventional digital DSP and SRAM/DRAM-based approaches, it delivers 10 to 20 times lower power usage per inference decision and can greatly reduce the overall bill of materials (BoM).
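A rough back-of-envelope calculation shows why off-chip weight traffic becomes a bottleneck. The 50-million-synapse figure is from the article; the 8-bit weight width and the 30-inferences-per-second rate are illustrative assumptions, not quoted specifications:

```python
# Sketch of the DRAM traffic a purely digital edge accelerator could incur
# if every weight is streamed from off-chip memory on each inference.
# Assumptions (not from the article): 8-bit weights, 30 inferences/s.

weights = 50_000_000          # synapses (weights), per the article
bytes_per_weight = 1          # assumed 8-bit quantised weights
inferences_per_s = 30         # assumed rate, e.g. a 30 fps vision workload

bandwidth_bytes = weights * bytes_per_weight * inferences_per_s
print(f"{bandwidth_bytes / 1e9:.1f} GB/s")   # 1.5 GB/s of weight traffic alone
```

Storing the weights in the compute array, as memBrain does, removes this traffic entirely rather than merely reducing it.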

To develop the APU, IHWK is also working with the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, for device development and Yonsei University, Seoul, for device design assistance.

The final APU is expected to optimise system-level algorithms for inferencing and to operate at between 20 and 80 TeraOPS per watt, the best performance obtainable for a computing-in-memory solution designed for battery-powered devices.

“By using proven NVM rather than alternative off-chip memory solutions to perform neural network computation and store weights, Microchip’s memBrain computing-in-memory technology is poised to eliminate the massive data communications bottlenecks otherwise associated with performing AI processing at the network’s edge,” said Mark Reiten, vice president of SST, Microchip’s licensing business unit. “Working with IHWK, the universities, and early adopter customers is a great opportunity to further prove our technology for neural processing and advance our involvement in the AI space by engaging with a leading R&D company in Korea.”

“Korea is an important hotspot for AI semiconductor development,” said Sanghoon Yoon, IHWK branch manager. “Our experts on nonvolatile and emerging memory have validated that Microchip’s memBrain product based on proven NVM technology is the best option when it comes to creating in-memory computing systems.”

Permanently storing neural models inside the solution’s processing element also enables instant-on functionality for real-time neural network processing. IHWK is leveraging the nonvolatility of SuperFlash memory’s floating-gate cells to set a new benchmark in low-power edge computing devices that run inference with advanced ML models.


By Seb Springall