What is TinyML?

20-01-2022 | By Robin Mitchell

Machine learning is an extremely powerful tool, but it is seldom found on microcontrollers. Why is executing AI on microcontrollers a difficult task, what is TinyML, and could future microcontrollers see AI capabilities?


Why is executing AI on microcontrollers a difficult task?


AI has been one of the fastest-growing industries for the past decade, and its capabilities have moved well beyond basic scientific curiosity. AI is used daily to monitor industrial systems for predictive maintenance, traffic systems are becoming more intelligent, and facial recognition systems can pick individuals out of an entire crowd.

The computing and memory demands of AI have rarely been an issue when deploying AI solutions to the cloud or on large computing devices, but deploying AI onto smaller devices presents some major challenges. To make matters more frustrating, it is these smaller devices that could benefit the most from machine learning algorithms.


So, what makes executing ML on microcontrollers challenging?


By far, the two most significant factors that limit ML on microcontrollers are CPU power and memory. AI algorithms often require multiple stages of convolution, each of which consumes large amounts of memory that must be accessed quickly. Even if an AI algorithm only requires 10MB of storage, this must be in the form of RAM, so the use of an external SD card with a microcontroller is not an option. This memory requirement instantly disqualifies the vast majority of microcontrollers on the market, whose total RAM rarely exceeds 256KB.
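As a rough illustration of the scale of the problem, the sketch below uses hypothetical layer sizes (not figures from any particular model) to estimate how much storage a single fully connected layer needs, and why 8-bit quantisation is such a common trick in TinyML.

```cpp
// Back-of-the-envelope estimate (hypothetical layer sizes) of why model memory
// is the first obstacle: one modest fully connected layer stored as 32-bit
// floats already consumes roughly half of a 256KB microcontroller's RAM.
#include <cstdint>
#include <cstdio>

int main() {
    const uint32_t inputs  = 128;
    const uint32_t outputs = 256;
    const uint32_t params  = inputs * outputs + outputs;      // weights + biases

    const uint32_t bytes_float32 = params * sizeof(float);    // ~129KB as 32-bit floats
    const uint32_t bytes_int8    = params * sizeof(int8_t);   // ~32KB once quantised to 8 bits

    printf("float32: %u bytes, int8: %u bytes\n",
           (unsigned)bytes_float32, (unsigned)bytes_int8);
    return 0;
}
```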

AI algorithms are also highly reliant on complex matrix computation, which is why graphics processors are often used with AI (they contain hardware designed to operate on matrices). CPUs, however, do not contain matrix-specific computational hardware, meaning they instead rely on repetitive instructions that break matrix calculations down into a series of basic steps. Thus, AI running on a CPU benefits greatly from increased execution speed, and it is this fact that makes microcontrollers, whose core speeds can struggle to go beyond 100MHz, unsuitable for many AI algorithms.
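To see why clock speed matters so much, the sketch below shows a naive matrix multiply written as plain scalar loops, which is roughly what a CPU without matrix hardware ends up executing.

```cpp
// Naive matrix multiply: without dedicated matrix hardware, a CPU breaks the
// operation down into scalar multiply-accumulate steps like these.
void matmul(const float* a, const float* b, float* c, int n) {
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            float sum = 0.0f;
            for (int k = 0; k < n; ++k) {
                sum += a[i * n + k] * b[k * n + j];  // one multiply-accumulate
            }
            c[i * n + j] = sum;
        }
    }
}
```

An n-by-n multiply needs on the order of n³ multiply-accumulate operations, so at around 100MHz even modest layers add noticeable latency to every inference, whereas a GPU spreads that work across many parallel units.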


What is TinyML?


TinyML is a field of study concerned with running machine learning on microcontrollers with power, processing, and memory constraints. The use of TinyML on a microcontroller allows for low-latency, low-power, and low-bandwidth AI algorithms that can work alongside the other tasks a microcontroller needs to run, creating a low-cost system that can respond intelligently. In other words, TinyML is a buzzword like “IoT” and “cloud” that simply means making AI smaller.

What makes TinyML interesting is that it already exists in multiple products and applications worldwide, including predictive maintenance sensors and home automation systems. But such algorithms are either proprietary or unable to run on the smaller systems commonly available to the masses, which is why many developers have yet to implement AI in their microcontroller projects.

However, the release of TensorFlow Lite, combined with its compatibility with the Arduino Nano 33 BLE Sense, means that users can now start to experiment with AI algorithms on microcontrollers that use a fraction of the power of larger computer systems.
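As an idea of what this looks like in practice, below is a minimal sketch of the TensorFlow Lite for Microcontrollers inference flow on an Arduino-class board. Exact header paths and constructor arguments differ between library versions, and g_model is a placeholder for a model that has already been converted into a C array.

```cpp
// Minimal sketch of TensorFlow Lite for Microcontrollers inference on an
// Arduino-class board. Headers and constructor arguments vary between library
// versions; "g_model" stands in for a model compiled into the sketch as a C array.
#include <TensorFlowLite.h>
#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "model.h"  // provides g_model[], the converted model data

namespace {
constexpr int kTensorArenaSize = 8 * 1024;   // fixed working memory for all tensors
uint8_t tensor_arena[kTensorArenaSize];

const tflite::Model* model = nullptr;
tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;
TfLiteTensor* output = nullptr;
}  // namespace

void setup() {
  model = tflite::GetModel(g_model);

  // AllOpsResolver keeps the example short; in practice a MicroMutableOpResolver
  // registering only the operators the model uses saves flash.
  static tflite::AllOpsResolver resolver;

  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize);
  interpreter = &static_interpreter;

  interpreter->AllocateTensors();     // lays out every tensor inside the arena
  input = interpreter->input(0);
  output = interpreter->output(0);
}

void loop() {
  // Fill the input tensor with sensor readings (placeholder value here).
  input->data.f[0] = 0.5f;

  // Run the model and read back the result.
  if (interpreter->Invoke() == kTfLiteOk) {
    float score = output->data.f[0];
    (void)score;  // use the score to drive an output, e.g. an LED or actuator
  }
}
```

The key design point is the statically allocated tensor arena: all of the model's working memory comes from one fixed buffer sized to fit the microcontroller's RAM, rather than from dynamic allocation.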

While there are many advantages to running AI on microcontrollers, one of the biggest by far is protecting privacy. Smart cameras and home management systems have often been the source of criticism due to their use of remote cloud computing to process potentially private data (such as images of individuals and recorded conversations). This remote computing was needed because AI could not run locally on a microcontroller. But on-board AI, a form of “edge computing”, could prevent private data from ever leaving the device.



Will future microcontrollers be able to natively run AI?


There is no doubt that future microcontrollers will integrate dedicated hardware circuitry for the sole purpose of executing AI algorithms efficiently. This pattern can already be seen with other technologies that have proven their worth in microcontrollers, hardware security and dedicated peripherals being examples. Eventually, the demand for AI on microcontrollers will become so great that integrated AI hardware will become a major selling point.

Of course, AI that runs on future microcontrollers will not be able to compete with AI running on dedicated computers and cloud servers, but this is largely irrelevant, as AI on microcontrollers will be geared towards real-time data and real-time responses. For example, AI on microcontrollers could allow IoT devices to intelligently control home systems without the need for remote computing. On-board AI could also be useful for drones, which could use it to navigate better in all kinds of conditions.


By Robin Mitchell

Robin Mitchell is an electronic engineer who has been involved in electronics since the age of 13. After completing a BEng at the University of Warwick, Robin moved into the field of online content creation, developing articles, news pieces, and projects aimed at professionals and makers alike. Currently, Robin runs a small electronics business, MitchElectronics, which produces educational kits and resources.