Date 01/27/23

Deploying Quality SED models in a week

Imagimob has the expertise, software tools and methods to deploy a good-quality sound event detection (SED) tinyML model on a target device within one week, with good real-life performance. This is a major improvement: previously the same work could take weeks or even months, even with qualified personnel and advanced software tools. Customers can therefore quickly reach a point where they can evaluate the SED application and decide whether to go to production.

Imagimob utilizes high-quality, pre-classified sound libraries covering a broad variety of categorized audio classes, which helps to rapidly create sound event detection machine learning models. To achieve high accuracy in the real world, various augmentation techniques are used, such as mixing in background noise from different locations and simulating different distances to the sound source. These greatly improve the ability to detect sounds in real-life or realistic environments.
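The two augmentation techniques mentioned above can be sketched in a few lines of NumPy. This is a minimal illustration of the general idea, not Imagimob's actual pipeline: `mix_at_snr` scales a background-noise clip to a chosen signal-to-noise ratio before adding it, and `simulate_distance` applies a simple 1/r amplitude attenuation to roughly mimic a more distant source (real distance simulation would also involve reverberation and filtering).

```python
import numpy as np

def mix_at_snr(signal: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix a target sound with background noise at a chosen SNR in dB."""
    # Loop or trim the noise so it matches the signal length.
    if len(noise) < len(signal):
        noise = np.tile(noise, int(np.ceil(len(signal) / len(noise))))
    noise = noise[: len(signal)]
    # Scale noise power so that 10*log10(P_signal / P_noise) == snr_db.
    sig_power = np.mean(signal ** 2)
    noise_power = np.mean(noise ** 2)
    target_noise_power = sig_power / (10 ** (snr_db / 10))
    noise = noise * np.sqrt(target_noise_power / (noise_power + 1e-12))
    return signal + noise

def simulate_distance(signal: np.ndarray, distance_m: float,
                      ref_distance_m: float = 1.0) -> np.ndarray:
    """Attenuate amplitude ~1/r to approximate a source moved farther away."""
    return signal * (ref_distance_m / distance_m)
```

Applying such transforms with randomized SNRs, noise sources and distances multiplies a small labeled library into a much larger, more varied training set.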

The SED model can be deployed on a range of devices, such as Synaptics DBM10L, Syntiant NDP120, Texas Instruments Sitara AM243X, Infineon PSoC 6, Renesas RA, STM32 or a Raspberry Pi.

This concept is a result of our ongoing efforts in rapid prototyping and productization.

Contact us to learn more.

Background on Sound Event Detection (SED)

In society today, sound event detection and classification are commonplace in many application fields, from speech recognition and noise reduction to security systems and wildlife monitoring. In recent years, artificial intelligence (AI) and machine learning (ML) models have become increasingly important in sound detection, allowing for more accurate and efficient results.

Most people already know of sound and speech detection in commercial applications, including voice-controlled devices such as Amazon Alexa and Google Home, noise reduction built into video and audio recording, and recording equipment for wildlife monitoring in conservation efforts. In public spaces such as airports, malls, and parks, sound detection systems continuously monitor for security threats, detecting gunshots and other unusual noises, as well as identifying specific sounds like a baby crying or a person shouting for help. Additionally, sound detection technology can monitor noise pollution in public spaces, for example measuring decibel levels in a park to make sure they stay within safe limits, or across a city to identify the sources of noise and take action accordingly.
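The noise-pollution use case above reduces to a simple calculation on the device. As a hedged sketch (the microphone-specific sensitivity value here is a placeholder; a real deployment would calibrate it and typically apply A-weighting), the sound pressure level in dB SPL can be estimated from a block of raw samples like this:

```python
import numpy as np

def estimate_db_spl(samples: np.ndarray,
                    pa_per_unit: float = 1.0,
                    ref_pressure_pa: float = 20e-6) -> float:
    """Estimate sound pressure level (dB SPL) from a block of mic samples.

    pa_per_unit converts raw sample units to pascals; its value depends on
    the microphone and ADC, and is an assumed placeholder here. 20 uPa is
    the standard reference pressure for dB SPL.
    """
    pressure = samples * pa_per_unit
    rms = np.sqrt(np.mean(pressure ** 2))          # root-mean-square pressure
    return 20.0 * np.log10(rms / ref_pressure_pa + 1e-300)
```

A monitoring node would compute this per block (e.g. once per second) and only report levels or threshold crossings, rather than streaming raw audio.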

In the private sector, these systems can detect the sound of breaking glass in a store, which could indicate a break-in or theft. They can also detect the sound of a car crash, which could indicate a traffic accident, or the sound of a fire alarm, which could indicate a fire.

Another application of sound detection technology is monitoring for industrial accidents or malfunctions. In a factory or industrial plant, sound detection can pick up unusual sounds or vibrations that could indicate a problem with machinery, such as a broken conveyor belt or a malfunctioning pump. Several of the big industry players are fitting sound and vibration detection functionality into their offerings as a support system for preventive-maintenance capabilities.

Sound detection technology can also be used outside of factories and cities, for example to detect the sounds of animals such as birds or bats in order to monitor and protect wildlife. This can be particularly important in conservation efforts, as it allows accurate and continuous monitoring of animal populations and habitats.

Most people agree that sound detection is helping to revolutionize many fields and industries, with Edge AI and tinyML making it more accurate and efficient than ever before. One issue that has kept sound detection from being as straightforward as measuring other environmental data, like temperature, pressure, illumination or humidity, is the need for post-processing and analysis. Traditionally this took place on a cloud server and required good, continuous connectivity, because sophisticated sound detection was not available in edge computing at reasonable cost.

The availability of low-cost, high-volume edge devices and on-board Edge AI capabilities has changed all that, allowing the roll-out of distributed systems for sound detection hardware at competitive prices.
