TinyML delivers machine learning to the Edge, where battery-powered, MCU-based embedded devices perform ML tasks in real time. Machine learning is a subset of artificial intelligence, and tiny machine learning (TinyML) runs ML algorithms locally on embedded devices.
TinyML makes it possible to run machine learning models on the smallest microcontrollers (MCUs). By embedding ML, each microcontroller gains substantial intelligence without needing to transfer data to the cloud to make decisions.
TinyML is designed to solve the power and space constraints of embedding AI into these devices. By running compact deep learning models directly on small units of hardware, devices can stay small and avoid the latency of sending data to the cloud.
TinyML also eliminates the need to manually recharge devices or change batteries because of power constraints. Instead, you have a device that runs at less than one milliwatt and operates on a battery for years, or uses energy harvesting. The idea behind TinyML is to make it accessible, foster mass proliferation, and scale to virtually trillions of inexpensive, independent sensors built on 32-bit microcontrollers that cost $0.50 or less.
Another TinyML advantage is combining voice interfaces with visual signals, allowing a device to recognize when you are looking at it and to filter out background noise from people or equipment in industrial settings.
What exactly is TinyML? The tiny machines used in TinyML are task-specific MCUs. They run on ultra-low power, provide almost immediate analysis (very low latency), feature embedded machine-learning algorithms, and are inexpensive.
TinyML delivers artificial intelligence to ubiquitous MCUs and IoT devices, performing on-device analytics on the huge amounts of data they collect, exactly where they reside. TinyML optimizes ML models to run on edge devices. Keeping data on the edge device minimizes the risk of it being compromised, and TinyML-powered smart edge devices make inferences without an internet connection.
What happens when we embed TinyML algorithms?
- Devices run inference on a pre-trained model, which is far less resource-intensive than full model training.
- The neural networks behind TinyML models are pruned, removing synapses and neurons that contribute little to accuracy.
- Quantization reduces the bit size so that the model takes up less memory, requires less power, and runs faster—with minimal impact on accuracy.
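The pruning and quantization steps above can be sketched in a few lines. This is a minimal illustration, not any particular framework's implementation: it assumes a hypothetical weight matrix standing in for one trained layer, prunes the smallest-magnitude half of the weights, and maps the rest from float32 to int8 using a scale and zero point, cutting storage by 4x with a bounded error.

```python
import numpy as np

# Hypothetical weight matrix standing in for one layer of a trained model.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.5, size=(8, 8)).astype(np.float32)

# Pruning: zero out the smallest-magnitude weights (here, 50% of them).
threshold = np.quantile(np.abs(weights), 0.5)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0).astype(np.float32)

# Quantization: map float32 values to int8 with a scale and zero point.
w_min, w_max = float(pruned.min()), float(pruned.max())
scale = (w_max - w_min) / 255.0              # one int8 step, in float units
zero_point = int(np.round(-w_min / scale)) - 128
q = np.clip(np.round(pruned / scale) + zero_point, -128, 127).astype(np.int8)

# Dequantize to check how much the 4x-smaller representation costs in accuracy.
deq = (q.astype(np.float32) - zero_point) * scale
max_err = float(np.max(np.abs(deq - pruned)))
print(f"storage: {pruned.nbytes} -> {q.nbytes} bytes, max error {max_err:.4f}")
```

Real toolchains (e.g., TensorFlow Lite's post-training quantization) follow the same scale/zero-point idea per tensor or per channel, which is why int8 models retain accuracy while fitting in MCU-sized memory.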
TinyML brings deep learning models to microcontrollers. Deep learning in the cloud is already successful, but many applications require on-device inference. Internet availability is not a given for applications such as drone rescue missions. In healthcare, HIPAA regulations add to the difficulty of safely sending data to the cloud. And the latency of a round trip to the cloud is a showstopper for applications that require real-time ML inference.
Where is TinyML Used?
TinyML aims to bring machine learning to the Edge, where battery-powered MCU-based embedded devices perform ML tasks in real time. Applications include:
- Keyword spotting
- Object recognition and classification
- Audio detection
- Gesture recognition
- Machine monitoring
- Machine predictive maintenance
- Retail inventory
- Real-time monitoring of crops or livestock
- Personalized patient care
- Hearing aid hardware
TinyML is making its way into billions of microcontrollers, enabling previously impossible applications.
Today, TinyML represents a fast-growing field of machine learning technologies and applications, including hardware, algorithms, and software capable of performing on-device sensor data analytics and enabling “always-on” battery-operated edge devices. TinyML brings enhanced capabilities to established edge-computing and IoT systems with low cost, low latency, low power, and minimal connectivity requirements.
While conventional machine learning continues to grow more sophisticated and resource-intensive, TinyML addresses the other end of the spectrum. It represents an immediate, concrete opportunity for developers to get involved.
Check out the two opportunities at Embedded World, and learn how you can capitalize on TinyML now.
- Bringing TinyML to RISC-V With Specialized Kernels and a Static Code Generator Approach, June 21, 11:00 – 12:45
- An Introduction to TinyML: Bringing Deep Learning to Ultra-Low Power Micro-Controllers, part of Session 8.1—Autonomous & Intelligent Systems—Embedded Machine Learning Hardware, June 22, 10:00 – 13:00
Then expand your knowledge for FREE by going to:
- Hands-on Google Codelabs.
- Harvard University and edX, which have partnered to deliver a series of free courses ranging from basic to advanced.
To say TinyML is catching on is an understatement. It’s all over the headlines, including these that appeared within a two-week span:
- Imagimob Announces tinyML for Sound Event Detection Applications on Synaptics AI Chip
- Renesas to Buy Reality AI for Embedded and TinyML Products in Non-visual Sensing
- Edge Processing with Embedded Artificial Intelligence
- Embedded News: First Demo of ARM Cortex-M85 Automotive IMU with ML
Given the ease of access to the technology, the power to capitalize on TinyML is here and now. Implementing such technology on MCUs and IoT devices changes people’s lives for the better.