“We see a new world with trillions of intelligent devices enabled by tinyML technologies that sense, analyze, and autonomously act together to create a healthier and more sustainable environment for all.”
This is a quote by Evgeni Gousev, senior director at Qualcomm and co-chair of the tinyML Foundation, in his opening remarks at a recent conference.
Machine learning is a subset of artificial intelligence. tinyML (tiny machine learning) means running machine learning algorithms locally on embedded devices.
TinyML is closely related to Edge AI, but tinyML takes Edge AI one step further, making it possible to run machine learning models on the smallest microcontrollers (MCUs).
Embedded systems normally include a microcontroller. A microcontroller is a compact integrated circuit designed to govern a specific operation in an embedded system. A typical microcontroller includes a processor, memory and input/output (I/O) peripherals on a single chip.
Microcontrollers are cheap, with average selling prices under $0.50, and they are everywhere, embedded in consumer and industrial devices. At the same time, they lack the resources found in general-purpose computing devices. Most of them don't even have an operating system.
They have a small CPU, are limited to a few hundred kilobytes of low-power memory (SRAM) and a few megabytes of storage, and have no networking hardware. Most have no mains power source and must run for years on coin-cell batteries.
There are a number of advantages to running machine learning models on embedded devices:
Low Latency: Since the machine learning model runs on the edge, the data doesn't have to be sent to a server to run inference. This reduces the latency of the output.
Low Power Consumption: Microcontrollers are ultra-low-power devices that consume very little energy, which enables them to run for a very long time without being charged.
Low Bandwidth: As the data doesn’t have to be sent to the server constantly, less internet bandwidth is used.
Privacy: Since the machine learning model runs on the edge, the data never has to leave the device or be stored in the cloud.
Deep learning models require a lot of memory and consume a lot of power. There have been multiple efforts to shrink deep learning models to a size that fits on tiny devices and embedded systems. Most of these efforts focus on reducing the number of parameters in the model. For example, "pruning," a popular class of optimization algorithms, compresses neural networks by removing the parameters that contribute least to the model's output.
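To make the idea concrete, here is a minimal magnitude-pruning sketch in Python/NumPy. It is a simplified illustration with arbitrary example shapes, not the implementation of any particular framework:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

# Example: prune 80% of a randomly initialized layer's weights.
rng = np.random.default_rng(0)
w = rng.standard_normal((128, 64)).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.8)
print(f"zeroed weights: {np.mean(w_pruned == 0):.0%}")  # ~80%
```

In practice the pruned model is usually fine-tuned for a few epochs afterwards to recover accuracy, and the zeroed weights only save memory if they are stored in a sparse format.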
The problem with pruning methods is that they don't address the memory bottleneck of neural networks. Standard implementations of embedded machine learning require an entire network layer and its activation maps to be loaded into memory. Unfortunately, classic optimization methods don't make big changes to the early layers of the network, especially in convolutional networks.
Quantization is a branch of computer science and data science that has to do with the actual implementation of the neural network on a digital computer, e.g., a tiny device. Conceptually, we can think of it as approximating a continuous function with a discrete one.
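A minimal sketch of 8-bit affine quantization makes this concrete. The scale/zero-point representation below mirrors the scheme used by common frameworks, though the exact formulas vary between implementations:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map a float array onto the 256 levels of int8."""
    scale = (x.max() - x.min()) / 255.0            # real value per int8 step
    zero_point = int(round(-128 - x.min() / scale))  # aligns x.min() with -128
    q = np.clip(np.round(x / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

x = np.linspace(-1.0, 1.0, 5, dtype=np.float32)  # the "continuous" signal
q, scale, zp = quantize_int8(x)
x_hat = dequantize(q, scale, zp)
print("max quantization error:", np.abs(x - x_hat).max())  # ~scale / 2
```

The maximum error is roughly half a quantization step, which is why wider integers (16 or 32 bits) give smaller errors than 8-bit ones.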
Depending on the bit width of the chosen integers (64, 32, 16 or 8 bits) and the implementation, this can be done with more or less quantization error. All modern PCs, smartphones, tablets, and the more powerful microcontrollers (MCUs) for embedded systems have a so-called floating-point unit (FPU).
This is a piece of hardware, next to or integrated with the main processor, whose purpose is to make floating-point operations fast. The tiniest MCUs have no FPU, so one must resort to one of two options: transform the data to integers and perform all the calculations with integer arithmetic, or perform the floating-point calculations in software.
The latter approach takes considerably more processor time, so to get good performance on these chips (e.g., Arm's Cortex-M0 to Cortex-M3 cores), the former approach is preferred. If memory is scarce, which is usually the case on these devices, one can specify the integers to be 16- or 8-bit, reducing the RAM and flash memory used by the program to a half or a quarter.
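The integer-only approach is often implemented with fixed-point arithmetic. Here is an illustrative Q15 sketch in Python; real firmware would do the same thing in C with int16_t/int32_t types, but the integer operations are identical:

```python
# Q15 fixed point: a float in [-1, 1) is stored as a 16-bit integer,
# where value = stored_int / 2**15. Everything below uses only integer
# operations, which an FPU-less core like a Cortex-M0 executes efficiently.

Q = 15  # number of fractional bits

def to_q15(x: float) -> int:
    return int(round(x * (1 << Q)))

def from_q15(n: int) -> float:
    return n / (1 << Q)

def q15_mul(a: int, b: int) -> int:
    # The 32-bit product carries 30 fractional bits; shift back to 15.
    return (a * b) >> Q

a, b = to_q15(0.5), to_q15(-0.25)
print(from_q15(q15_mul(a, b)))  # -0.125, computed without any float math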
tinyML software typically consists of ultra-low-power applications that run on a physical hardware device. A large number of machine learning algorithms are used, but the trend is that deep learning is becoming more and more popular. The people who develop tinyML applications are normally data scientists, machine learning engineers or embedded developers. Normally, the tinyML models are developed and trained in the cloud. When the tinyML application then runs on the hardware device, the model applies what it was trained for to new data and produces predictions. This is called inference.
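Before deploying to the device, the same inference step can be exercised on a desktop with TensorFlow Lite's Python interpreter. A minimal sketch follows; the model file name and the zero-filled input are hypothetical placeholders:

```python
import numpy as np
import tensorflow as tf

# Load a previously converted model (file name is hypothetical).
interpreter = tf.lite.Interpreter(model_path="gesture_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# One input window, shaped and typed to match the model (placeholder data).
window = np.zeros(input_details["shape"], dtype=input_details["dtype"])
interpreter.set_tensor(input_details["index"], window)
interpreter.invoke()
print(interpreter.get_tensor(output_details["index"]))  # e.g. class scores
```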
tinyML can run on a range of hardware platforms, from Arm Cortex-M series processors to advanced neural processing devices. Deep learning normally requires more powerful hardware than other types of machine learning algorithms. IoT devices are a typical example of tinyML hardware, and many people use an Arduino board to demonstrate machine learning applications.
A tinyML framework is a software platform that makes it easier for developers to build tinyML applications. TensorFlow Lite for Microcontrollers is one of the most popular embedded machine learning frameworks.
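A typical step in that workflow is converting a trained model to a fully int8-quantized TensorFlow Lite file that the on-device runtime can execute. The sketch below assumes a trained tf.keras model named `model` and a small array of representative inputs named `sample_windows`; both names are hypothetical:

```python
import numpy as np
import tensorflow as tf

# `model` (a trained tf.keras model) and `sample_windows` (representative
# training inputs) are assumed to exist; both are hypothetical here.
def representative_dataset():
    # A small sample of real inputs lets the converter calibrate
    # the quantization scales and zero points of each tensor.
    for window in sample_windows[:100]:
        yield [window[np.newaxis].astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model.tflite", "wb") as f:
    f.write(converter.convert())
# On a microcontroller, the bytes are typically embedded as a C array,
# e.g. with: xxd -i model.tflite > model_data.cc
```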
A tinyML development platform is an end-to-end software platform where users can develop tinyML applications: they start by collecting data, can use AutoML to develop the machine learning models, and eventually deploy the model on a tiny device. The main benefit of a tinyML platform is that it reduces time-to-market and lowers costs for the developer.
The benefits of tinyML are huge, and the number of use cases for the technology is almost unlimited. Popular use cases include computer vision, visual wake words, keyword spotting, predictive maintenance of industrial machines, gesture recognition and many more.
The tinyML Foundation is a non-profit organisation that coordinates many of the activities in the tinyML market. Learn more about the tinyML Foundation here.
Imagimob offers DEEPCRAFT™ Studio, a tinyML development platform for machine learning on edge devices, and ready-to-go AI models.
Imagimob will continue to support hardware components from various vendors, while now providing an even stronger portfolio based on Infineon's advanced hardware, with its PSOC™, XMC™ and AURIX™ families, and software (ModusToolbox™). This will give our existing and new customers the best experience by complementing Imagimob's advanced AI platform and AI models with Infineon's hardware.
Download DEEPCRAFT™ Studio via the button below.