NXP Semiconductors has released eIQ Machine Learning software support for the Glow neural network compiler, delivering what the company claims is the industry's first NN compiler implementation to offer higher performance with a low memory footprint on its i.MX RT crossover MCUs. Glow, developed by Facebook, can apply target-specific optimisations; NXP leveraged this capability with NN operator libraries for Arm Cortex-M cores and the Cadence Tensilica HiFi 4 DSP to maximise inferencing performance on its i.MX RT685, i.MX RT1050 and i.MX RT1060 devices. This support is integrated into the eIQ Machine Learning Software Development Environment, available free of charge within the company's MCUXpresso SDK.
“The standard, out-of-the-box version of Glow from GitHub is device agnostic to give users the flexibility to compile neural network models for basic architectures of interest, including the Arm Cortex-A and Cortex-M cores, as well as RISC-V architectures,” said Dwarak Rajagopal, software engineering manager at Facebook. “By using purpose-built software libraries that exploit the compute elements of their MCUs and delivering a 2-3x performance increase, NXP has demonstrated the wide-ranging benefits of using the Glow NN compiler for machine learning applications, from high-end cloud-based machines to low-cost embedded platforms.”
“NXP is driving the enablement of machine learning capabilities on edge devices, leveraging the robust capabilities of our highly integrated i.MX application processors and high-performance i.MX RT crossover MCUs with our eIQ ML software framework,” said Ron Martino, senior vice president and general manager, NXP Semiconductors. “The addition of Glow support for our i.MX RT series of crossover MCUs allows our customers to compile deep neural network models and give their applications a competitive advantage.”