Date 10/19/22

Edge ML Project Time Estimates

Getting time estimates right is one of the most difficult things in the world, in my experience, says Alexander Samuelsson, CTO and co-founder at Imagimob and author of this blog.

The most important factor in determining the budget for your ML project is the amount of data needed to create a model fulfilling your performance requirements.

If the required data doesn’t already exist (and believe me, it almost never does), getting hold of and refining this data will account for most of your work. Hopefully you can automate a lot of the collection process, but you still need calendar time to collect it, and you always need to clean it, analyze it and annotate it.


Anyway. Figuring out exactly what data you need, and how much of it, before a project starts is almost impossible if you haven’t built a similar model in a similar domain before.


However, there are some simple rules and frameworks we can use to figure out the ballpark, or order of magnitude, of the data we will be dealing with.


You can think of a good ML model as one that classifies real-world data well enough to fulfill your accuracy requirements. The issue the model faces is that it has only ever seen its training set.


This training set is a subset of whatever data the model will face outside the lab, in the real world. If this subset accurately captures the properties of most of the real-world data, the model will perform well.


Here lies the key to estimating/budgeting for your data collection!
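To make this a bit more concrete, here is a minimal back-of-envelope sketch in Python. The idea is simply to enumerate the axes of real-world variation your deployed model must cover, guess how many distinct conditions you need along each axis, and multiply. The helper name ballpark_recordings and every axis and count in it are my own hypothetical illustration, not a formula from this post.

```python
# Back-of-envelope sketch: the training set has to sample every major
# axis of real-world variation the deployed model will face.
# All names and numbers here are hypothetical, purely for illustration.
from math import prod

def ballpark_recordings(variation_axes: dict, samples_per_condition: int = 10) -> int:
    """Rough order-of-magnitude estimate of how many recordings to collect.

    variation_axes maps an axis of variation (speakers, acoustic
    environments, noise types, ...) to the number of distinct
    conditions you want covered along that axis.
    """
    return prod(variation_axes.values()) * samples_per_condition

# Example: 50 speakers x 5 environments x 4 noise types, 10 clips each.
print(ballpark_recordings({"speakers": 50, "environments": 5, "noise types": 4}))
# -> 10000, i.e. on the order of 10^4 recordings
```

The point is not the exact number, only the power of ten it lands on.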



Model A, a wakeword detector. Photo by Lazar Gugleta on Unsplash

Let’s consider two example ML models:

Model A is a wakeword detector. It is constantly listening for the phrase “Hey Alexa!” and wakes up to receive further commands if it captures this phrase.

Such a model is always on and active, and it targets the general consumer. This means the model will be subject to a huge variety of real-world data.

It will pick up audio from different acoustic environments with different background noises, and it has to understand many different kinds of voices and accents.

Model A will be very expensive to build if you have to collect the data yourself. You will need to collect data from thousands of people in many different environments, and you also need a huge dataset of background noise, other utterances and normal conversation to separate from the “Hey Alexa!” phrase we are looking for.

Consider instead Model B.

Model B listens to hear whether the assembly of two parts in a factory is correct. Model B lives in a very different reality. It is deployed in one or a few factories. It is placed in a known location in that factory. Let’s say that it can even be protected from a lot of background noise through some clever placement and shielding.

The variety of sounds and soundscapes that Model B will experience is minuscule in comparison to Model A.

This snow fling is minuscule, like the data needed by Model B. Photo by Aaron Burden on Unsplash

Collecting sufficient data to build Model B will be orders of magnitude faster and cheaper.
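Plugging purely hypothetical numbers into the same kind of back-of-envelope product shows the gap. None of these counts come from the post; they only illustrate how quickly the axes of variation multiply for Model A and how small they stay for Model B.

```python
# Hypothetical counts, purely to illustrate the order-of-magnitude gap.

# Model A: wakeword detector aimed at the general consumer.
speakers, environments, noise_conditions, clips_each = 1000, 20, 10, 5
model_a = speakers * environments * noise_conditions * clips_each    # 1,000,000 clips

# Model B: assembly-sound checker in a known spot in a few factories.
machines, outcomes, noise_conditions, clips_each = 3, 2, 3, 50
model_b = machines * outcomes * noise_conditions * clips_each        # 900 clips

print(f"Model A ballpark: {model_a:,} clips")   # Model A ballpark: 1,000,000 clips
print(f"Model B ballpark: {model_b:,} clips")   # Model B ballpark: 900 clips
```

Roughly three orders of magnitude apart, which is exactly the kind of gap that should show up in the budget.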

When budgeting for an ML project, reasoning about the “life” of the model like this will seriously help place you in the right ballpark for your budget estimate.

Happy machine learning!

This blog was originally posted on https://alexsamuelsson.com/2022/09/15/ml-project-time-estimates/

Please contact me at alex@imagimob.com if you have any thoughts or experience on this subject!
