Date 09/06/19

The New Path to Better Edge AI Applications

With the number of IoT devices expected to jump from 26.66 billion in 2019 to 75 billion in 2025, many product developers have taken the leap beyond the cloud to the edge. For many, however, the work involved in actually getting an Edge AI development project off the ground is enough to stop them in their tracks, or at least to cause months of frustrating delays.

Thankfully, a new and better development path has already been forged. If you want to make the most out of your next Edge AI project, it’s time to abandon the old method and embrace the new.


The old way of doing Edge AI
The typical Edge AI development process is a constant struggle. First, you need to get hold of a vast amount of the right kind of data. Once it is captured, the tedious, time-consuming task of manually labeling and organizing it begins. Then comes the verification stage, which typically involves generating statistics from large datasets, and even that does not reveal the whole picture: true performance is not evident until you perform integration testing. And if performance isn't good enough for Edge deployment, a new project has to be set up involving firmware and machine learning engineers.

Step 1: Data capture
Collect raw time-series data from several devices, struggle with synchronization problems between the devices, then build custom tooling to organize the data and collect metadata.
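
Even the simplest piece of that custom tooling, resampling one device's drifting clock onto another's timestamps, is glue code you end up writing and maintaining yourself. A minimal sketch of the idea, assuming NumPy and hypothetical timestamp arrays:

```python
import numpy as np

def align_streams(t_ref, t_other, x_other):
    """Resample a second device's samples onto a reference device's
    timestamps by linear interpolation (same clock domain assumed)."""
    return np.interp(t_ref, t_other, x_other)

# Hypothetical example: two accelerometers whose clocks drift apart.
t_a = np.arange(0.0, 10.0, 0.01)         # device A samples at 100 Hz
t_b = np.arange(0.003, 10.0, 0.0101)     # device B: offset plus slight drift
x_b = np.sin(2 * np.pi * 1.5 * t_b)      # device B's signal
x_b_on_a = align_streams(t_a, t_b, x_b)  # device B resampled onto A's clock
```

Real deployments then add clock-skew estimation, dropped packets, and per-sensor latencies on top of this.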

Step 2: Data labeling
Label vast amounts of captured data, either by hand or with custom scripts built on simple rules. Then struggle to label raw data without proper visualization.
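
Those "simple rules" are typically hand-tuned thresholds, as in the hypothetical script below; they hold up on clean recordings and quietly mislabel everything else. A sketch, assuming a precomputed acceleration-magnitude array:

```python
import numpy as np

def label_by_rule(magnitude, threshold=2.5, window=50):
    """Naive rule: mark a window 'active' when its mean acceleration
    magnitude exceeds a hand-tuned threshold, otherwise 'idle'."""
    labels = []
    for start in range(0, len(magnitude) - window + 1, window):
        chunk = magnitude[start:start + window]
        labels.append("active" if chunk.mean() > threshold else "idle")
    return labels

# Made-up capture; borderline windows are exactly the edge cases
# that are hard to judge without visualizing the raw data.
signal = np.abs(np.random.randn(100_000)) * 1.5
print(label_by_rule(signal)[:20])
```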

Step 3: Verification
Generate statistics from large datasets and hope that you measured the right things. Manually go through data samples to look at edge cases.

Step 4: Edge optimization and deployment
Set up a separate optimization project involving firmware and machine learning engineers. Wait months for a first result.


The new way of doing Edge AI
The process of developing an Edge AI product does not have to be so costly, impractical, and frustrating. We believe it can and should be a whole lot faster and more fun. Now, imagine the same development framework, but with a lot less friction:

Step 1: Data capture
Capture perfectly synchronized data from multiple devices, together with metadata such as video and sound. This context will make the labeling and verification steps far easier.

Step 2: Data labeling
Label only a subset of your data, then let your AI models label the rest as they learn and improve. Labeling is much easier thanks to synchronized playback of time-series data and metadata.
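
Imagimob does not publish the internals of its model-assisted labeling, but the general pattern is semi-supervised pseudo-labeling: train on the hand-labeled subset, let the model predict labels for the rest, and keep only the predictions it is confident about. A generic sketch with scikit-learn and synthetic data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: a small hand-labeled set and a large unlabeled pool.
rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(100, 8))
y_labeled = (X_labeled[:, 0] > 0).astype(int)
X_pool = rng.normal(size=(5000, 8))

model = LogisticRegression().fit(X_labeled, y_labeled)

# Keep only pool samples the model labels with high confidence,
# fold them into the training set, and retrain.
proba = model.predict_proba(X_pool)
confident = proba.max(axis=1) > 0.95
X_train = np.vstack([X_labeled, X_pool[confident]])
y_train = np.concatenate([y_labeled, proba[confident].argmax(axis=1)])
model = LogisticRegression().fit(X_train, y_train)
```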

Step 3: Verification
Automatically visualize all of your models and their predictions superimposed on top of one another. Now you’ll know not only what they predict, but when and with what level of confidence.
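
Imagimob Studio renders this overlay natively. To give a flavor of the idea, here is a rough matplotlib equivalent, with made-up confidences from two hypothetical models plotted against the input signal on a shared timeline:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical input signal and per-timestep confidences of two models.
t = np.linspace(0, 10, 1000)
signal = np.sin(np.pi * t) + 0.1 * np.random.randn(len(t))
conf_a = 1 / (1 + np.exp(-5 * np.sin(np.pi * t)))        # model A
conf_b = 1 / (1 + np.exp(-4 * np.sin(np.pi * t - 0.3)))  # model B

fig, (ax_sig, ax_conf) = plt.subplots(2, 1, sharex=True)
ax_sig.plot(t, signal, label="input signal")
ax_conf.plot(t, conf_a, label="model A confidence")
ax_conf.plot(t, conf_b, label="model B confidence")
ax_conf.axhline(0.5, linestyle="--", color="gray")  # decision threshold
ax_sig.legend()
ax_conf.legend()
ax_conf.set_xlabel("time (s)")
plt.show()
```

Seeing where the confidence curves disagree, and how far behind an event a prediction lands, is exactly what summary statistics hide.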

Step 4: Edge optimization and deployment
Happy with the accuracy of your AI application? Just press a button to optimize it for the Edge and get a final application that can be integrated in minutes.
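
Imagimob's one-button export produces an optimized C model; the pipeline itself is proprietary, but a standard ingredient of this kind of edge optimization is post-training quantization. As a generic analogy (not Imagimob's toolchain; the model and calibration data here are hypothetical), TensorFlow Lite does it roughly like this:

```python
import numpy as np
import tensorflow as tf

# Hypothetical trained classifier for 64-sample, 3-axis accelerometer windows.
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(8, 5, activation="relu", input_shape=(64, 3)),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

def representative_data():
    # Calibration samples determine the integer quantization ranges.
    for _ in range(100):
        yield [np.random.randn(1, 64, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
tflite_model = converter.convert()  # compact model ready for a microcontroller
```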

How? The Imagimob AI Software Suite 
Our software suite covers the entire development process, from data collection to the deployment of an application on an edge device. To make this possible, we've developed two products: Imagimob Capture and Imagimob Studio.

IMAGIMOB CAPTURE
For fast and painless data collection. For years, Imagimob's AI engineers have used this tool to collect data from battery-powered sensors in the field. Now it's available to you as a mobile app.

Imagimob Capture comprises a mobile app and several capture devices, and its support for data collection over WiFi and Bluetooth makes it well suited to Edge/IoT devices. It's used to capture and label synchronized sensor data and videos (or other metadata) during the data-capture phase, and it is accurate enough to collect data even from high-speed sensors such as radars or very high-frequency accelerometers. The mobile app enables immediate labeling in the field, and once the data is ready, it's sent to a dedicated cloud service for further processing. Simply start the app, connect to a sensor device over WiFi or Bluetooth, and press record.

What you can do with it:

• Connect to and record sensor data from any edge device 
• Record data over WiFi or Bluetooth
• Record video and sound for visual reference
• Label data live using your phone or a remote control
• Generate time-synchronized data, videos, and labels 
• Upload capture sessions to the Cloud

Results:

• Faster and more accurate data collection
• Significantly less development time
• A data stream of videos, sensor data, and labels, perfectly synced down to the millisecond

IMAGIMOB STUDIO
For faster development and better performance. Most AI development tools let you visualize a wide range of performance metrics. Very few let you visualize all of your data, all of your predictions, plus the confidence levels of every one of your models—all on a single timeline.

Imagimob Studio speeds up the entire process of building an AI application. Import and organize your data. Label part of your data, then let your AI models label the rest, down to the millisecond. Build new AI models or import existing ones, then get direct feedback on their performance in parallel. Imagimob Studio lets you see, hear, and understand how your AI models are performing at every step in time. Quickly identify and correct errors, and visualize the improved results in an instant. Happy with what you see? Package it for your Edge platform with the press of a button.

What you can do with it:

• Build Edge AI applications for time-series/sensor data input
• Access all of your data in one place
• Automatically split and manage different data sets
• Efficiently label all or parts of your data
• Generate new AI models or import models built with other tools (TensorFlow, Caffe, etc.)
• Output optimized C models for Edge AI applications (battery-powered hardware, etc.)
• Evaluate the performance of all your models in parallel
• Visualize the predictions of all models in real-time on top of the input data
• Visualize the confidence of predictions
• Display all the standard metrics (confusion matrices, F1 score, recall, precision, etc.), as illustrated in the sketch after this list
• Import/Export AI models to/from other tools
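
For reference, the standard metrics in the list above all derive from the confusion matrix. A minimal sketch with made-up counts for a hypothetical three-class gesture model:

```python
import numpy as np

def metrics_from_confusion(cm):
    """Per-class precision, recall, and F1 from a confusion matrix
    whose rows are true classes and columns are predicted classes."""
    tp = np.diag(cm).astype(float)
    precision = tp / cm.sum(axis=0)
    recall = tp / cm.sum(axis=1)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for a three-class gesture model.
cm = np.array([[50,  3,  2],
               [ 4, 45,  6],
               [ 1,  5, 49]])
precision, recall, f1 = metrics_from_confusion(cm)
```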

Results:

• Faster overall development time 
• Significantly less labeling time
• Faster performance verification
• Lower power use
• Reduced bandwidth
• Higher speeds 
• Constant insight into how your AI models are performing out in the real world
• More time to direct your efforts where they matter most

What can Imagimob do for your next Edge AI project?

Sign up for a Free Trial of Imagimob AI here.
