Date 12/07/21

Don’t build your embedded AI pipeline from scratch

Developing embedded real-time applications is, in itself, one of the most complex and time-consuming software disciplines.

However tempting it may be to start incorporating artificial intelligence (AI) models into your products, there are many hurdles along the way and a significant risk that the project fails.

As an alternative, there are software platforms that can take care of the data science parts for embedded developers: machine learning model creation, training and evaluation. This lets embedded developers focus on their own areas of expertise and vastly speeds up getting AI applications onto embedded systems.
 

Artificial intelligence is here to stay

It is now a bit more than five years since Google DeepMind's AlphaGo beat the human Go world champion Lee Sedol. This marked an important breakthrough: computers could now take on far more challenging cognitive tasks than ever before.

Less than a year later, the Asilomar AI Principles for “Beneficial Intelligence” were established and signed by more than 1,000 AI researchers, to prevent AI research from leading to harmful superintelligence. There was no doubt that expectations for AI and the related field of machine learning were high.

Between then and now, a lot has happened. We don’t have artificial general intelligence, but that was never the expectation over such a short time frame.

One significant change is in mindset: it is now common knowledge that algorithms will become much more present in our lives and play a larger role, and that they need data to work efficiently.

Secondly, some of the most mature machine learning techniques, such as classification and anomaly detection, are already generating value and revenue.

Challenges with embedded AI

Still, most of the value is generated in online businesses, where the algorithms can run on big servers with a lot of compute power. 

Numerous reports indicate that there is an undiscovered treasure hiding in the ten billion IoT, or edge, devices out there, most of which rarely use anything other than simple thresholding or rudimentary signal processing.

Not only are these devices numerous, their number is also expected to more than double, to 25 billion by 2025. Many companies see an opportunity to make their IoT devices smarter and more autonomous, thereby cutting field-personnel costs and reducing downtime.

However, the rapid adoption of machine learning that we see in connected cloud businesses has not yet reached the edge.

Something is slowing down progress. To get machine learning models and software onto edge devices, we must first bridge a gap:

These companies have their main expertise in embedded software development, a field that requires knowledge spanning many complex areas.

The teams are composed of firmware developers, hardware engineers with sensor and microcontroller knowledge, testers, and sometimes software engineers with signal processing knowledge. They may have 30+ years of experience. They are good, really good, at what they do.

So, what happens when a large corporation decides to launch a new digital initiative where automation, insight and value are to be created from embedded AI?

They now have three options: build all of this in-house, hire a team of consultants for a few months to implement a solution, or buy a solution off the shelf. Let’s take a closer look at these alternatives.

Embedded AI - do it yourself

Doing it on your own from scratch may seem like a tempting approach. You own the software, have full control over it, and you have all the knowledge in your team. 

But let’s look at what it actually means. First, the group needs to hire data scientists and/or machine learning engineers. 

Of the roughly 25 million developers in the world, machine learning engineers number only in the hundreds of thousands, so finding the right people is hard.

As soon as you have your team in place, with everything that entails, you can start building the model-building infrastructure.

This involves writing software for the data collection pipeline, data management, labelling or annotation of data, model building, model training and model deployment. 

That is many steps and a lot of software to write. Consider also that machine learning in general, and “tinyML” (the branch of ML for resource-constrained devices) in particular, is a lively research field where you constantly need to stay on top of things to get the most out of your system.
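To make the scale of the work concrete, here is a minimal sketch of just one of those pieces: on-device collection of labelled sensor data for later training. The sensor_read() stub and the CSV-over-stdout transport are hypothetical stand-ins for a real sensor driver and a UART, SD-card or BLE link; a production pipeline would also need host-side tooling to receive, store, version and annotate this data.

/* Sketch: stream labelled accelerometer windows off the device as CSV.
 * All hardware-facing pieces are placeholders so the example compiles. */
#include <stdint.h>
#include <stdio.h>

#define SAMPLE_RATE_HZ 50u
#define WINDOW_SAMPLES 100u          /* 2-second window at 50 Hz */

/* Hypothetical sensor driver: returns one accelerometer sample. */
static void sensor_read(float *ax, float *ay, float *az)
{
    *ax = 0.0f; *ay = 0.0f; *az = 9.81f;   /* synthetic data for the sketch */
}

int main(void)
{
    /* In practice the label would be set by a button press or a host command. */
    const char *label = "idle";

    /* CSV header so the host-side tooling knows what it is receiving. */
    printf("timestamp_ms,ax,ay,az,label\n");

    for (uint32_t i = 0; i < WINDOW_SAMPLES; i++) {
        float ax, ay, az;
        sensor_read(&ax, &ay, &az);
        printf("%u,%.3f,%.3f,%.3f,%s\n",
               (unsigned)(i * (1000u / SAMPLE_RATE_HZ)), ax, ay, az, label);
    }
    return 0;
}

Multiply this by every sensor, every label, and every preprocessing, training and deployment step, and the amount of plumbing adds up quickly.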


Embedded AI - hire consultants

Hiring consultants to work for a few months or half a year to set up the system can look like a nice shortcut. One typical hurdle is, as before, finding good people for the work.

Second, the consultants need to gain a deep understanding of the specific problem, and that work is often not included in the time frame of the project.

Lastly, when the project is over and the experts have left, there is usually no one at the company who deeply understands what was done or how to make changes to it.

 

Embedded AI - subscribe to a service

Buying a solution or service off the shelf gives your project a head start: you build on a platform that experts in the field have developed over years and put a lot of time, energy and clever ideas into. Let’s look at such a solution below.

 

Let the embedded developers do what they are best at

A tool that does the job of the machine learning expert lets the team of embedded developers keep doing what they are best at.

At Imagimob we have worked with embedded AI for many years, and in numerous real customer projects we have found solutions that can speed up the model development process, from data collection to deployment. 

We have gone through all the painful steps of creating code for data management, labelling, training and iterating on models, and, last but not least, translating the model to C code that can easily be deployed on an edge device. We know where people spend their time, because we have been there ourselves.

After a few years we decided to make this embedded AI platform – Imagimob AI – available to the public. What we have in mind is the team of embedded developers who are curious to start deploying AI models on embedded systems but don’t have the expertise to build it all from scratch.

Imagimob AI takes the user through a few steps of data management, labelling, model generation and training, where the end result is plain, well-written C code that is easy to read and modify.

We believe that this transparency and flexibility is something that embedded developers appreciate. Tutorials with step-by-step instructions and example data are included for a range of different projects such as human motion recognition, radar gesture detection and audio projects with sound event detection.

Users can get started quickly and generate the embedded code with a click of a button.

What does Imagimob AI offer the embedded developer?

With Imagimob AI, the workflow of building embedded AI models is greatly simplified: embedded developers can iterate quickly over the process and receive C code as output that is easily integrated into their embedded projects, along the lines of the sketch below.
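To illustrate what that integration can look like, here is a minimal, hypothetical sketch of an embedded main loop feeding sensor samples into a generated model. The function names (model_init, model_enqueue, model_dequeue), the buffer sizes and the return-value convention are illustrative assumptions, not the actual generated API; in a real project they would be replaced by whatever the generated header exposes, and the stubs below exist only so the sketch compiles on its own.

/* Sketch: calling a generated model from an embedded main loop.
 * The model_* functions are hypothetical placeholders for the generated C API. */
#include <stdint.h>
#include <stdio.h>

#define MODEL_DATA_IN   3   /* e.g. accelerometer x, y, z   (assumption) */
#define MODEL_DATA_OUT  4   /* e.g. four gesture classes    (assumption) */

/* Placeholders standing in for the generated model code. */
static void model_init(void) {}
static void model_enqueue(const float *sample) { (void)sample; }
static int  model_dequeue(float *out)
{
    for (int i = 0; i < MODEL_DATA_OUT; i++) out[i] = 0.0f;
    return 0; /* 0 = new prediction available (assumed convention) */
}

/* Placeholder for the application's own sensor driver. */
static void read_accelerometer(float sample[MODEL_DATA_IN])
{
    sample[0] = 0.0f; sample[1] = 0.0f; sample[2] = 9.81f;
}

int main(void)
{
    float sample[MODEL_DATA_IN];
    float scores[MODEL_DATA_OUT];

    model_init();

    for (int tick = 0; tick < 100; tick++) {     /* stands in for the RTOS/sensor loop */
        read_accelerometer(sample);
        model_enqueue(sample);                   /* push one time step into the model */

        if (model_dequeue(scores) == 0) {        /* a full window has been classified */
            int best = 0;
            for (int i = 1; i < MODEL_DATA_OUT; i++)
                if (scores[i] > scores[best]) best = i;
            printf("predicted class %d (score %.2f)\n", best, scores[best]);
        }
    }
    return 0;
}

The point is that the embedded developer’s job reduces to familiar C: wiring a sensor driver to a small, well-defined model interface inside the existing firmware loop.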


Benefits for embedded developers with no previous experience in embedded AI 

  • A starter project helps you get up and running in minutes.
  • Suggestions for which model architecture to use, based on the type of project and data.
  • Suggestions for the type of preprocessor that best fits the data.
  • No need to understand the frameworks for model training, or to log in to and use cloud services or other training clusters.
  • No need to implement software for data collection and labelling.
  • No need to stay on top of the latest in machine learning.

 

Benefits for embedded developers with experience in machine learning

  • Get suggestions for model building and the ability to edit the architecture and add your own model layers and preprocessor code.
  • Import models (trained or untrained) from other machine learning frameworks, train if necessary, and convert them into C code.
  • Focus on data quality and careful labelling instead of tracking bugs in your data management scripts.
  • Spend time integrating efficient code on the edge device instead of tracking and implementing the latest AI research.