Date 02/17/25

February 2025 Studio Release

Imagimob's System Application Engineer Lukas Wirkestrand shares how the latest release of DEEPCRAFT™ Studio enables companies to develop Edge AI solutions for their products.

Click here to skip ahead to the full release notes.


The last couple of years have brought powerful AI tools that are no longer limited to governments and large companies, but available even on the handheld devices of everyday people. Impressive generative agents like ChatGPT and adaptable image classifiers like YOLO offer a route for small companies and entrepreneurs with dreams of start-up stardom. My name is Lukas Wirkestrand, and I am an engineer at Imagimob with a background in Machine Learning. In this technical blog I will cover how the latest release of DEEPCRAFT™ Studio provides that same opportunity - for companies big and small - to bring their Edge AI products into development.

The February release of DEEPCRAFT™ Studio introduces support for the AI Evaluation Kit: a super lightweight PSOC™ 6 board that allows for direct streaming of data into DEEPCRAFT™ Studio from a myriad of sensors. 

Fits in your hand

The board contains a microphone, accelerometer, magnetometer, gyroscope, pressure sensor, thermometer, and radar, allowing you to easily collect whatever data you need for your use case. As a Machine Learning (ML) engineer, I know the challenge of finding solid data and the importance of becoming intimately familiar with your dataset, a process that often takes weeks. Collecting your own data gives you first-hand knowledge of exactly what you're training your model on, but often comes with the limitation of quantity. However, with the streamlined process provided by DEEPCRAFT™ Studio and the AI Evaluation Kit, even those without ML experience can collect data - and in a much more timely manner! Below is a short video of me recording data and applying labels using DEEPCRAFT™ Studio and the AI Evaluation Kit.

Having one MCU collect data from multiple sensors simultaneously opens the door for many projects and lowers the barrier to creating sensor fusion models. Different modalities capture different details and trends, but by combining them we can create models that harness the benefits of each. Below you can see a Graph UX visualization of the data flow I used to collect data for a dual-sensor model using both the audio (16,000 Hz) and IMU (50 Hz) data. This required preprocessing the data to make the sampling frequencies match. Because DEEPCRAFT™ Studio already has a built-in feature for audio preprocessing, the 'Mel Spectrogram', I used that to downsample the audio to 50 Hz. After scaling and concatenating, the multimodal data can be saved, ready for training a sensor fusion model. This example is just one of many; theoretically, any combination of sensors could be supported, although some are more useful than others.
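For readers who prefer to see the idea in code, here is a minimal Python sketch of the same preprocessing done outside Studio, using librosa and numpy. The parameter choices (40 mel bins, per-feature standardization) are illustrative assumptions, not the exact settings of my Graph UX pipeline:

```python
# Minimal sketch: bring 16 kHz audio down to the IMU's 50 Hz frame rate via a
# mel spectrogram, then scale and concatenate the two modalities for fusion.
import numpy as np
import librosa

SR = 16000            # audio sample rate (Hz)
IMU_RATE = 50         # IMU sample rate (Hz)
HOP = SR // IMU_RATE  # 320 samples per hop -> 50 mel frames per second

def fuse(audio: np.ndarray, imu: np.ndarray) -> np.ndarray:
    """audio: 1-D waveform at 16 kHz; imu: (n_samples, n_channels) at 50 Hz."""
    mel = librosa.feature.melspectrogram(y=audio, sr=SR, hop_length=HOP, n_mels=40)
    mel = librosa.power_to_db(mel).T           # (n_frames, 40), now at ~50 frames/s
    n = min(len(mel), len(imu))                # align the two 50 Hz streams
    mel, imu = mel[:n], imu[:n]
    # Standardize per feature so neither modality dominates the other
    mel = (mel - mel.mean(axis=0)) / (mel.std(axis=0) + 1e-8)
    imu = (imu - imu.mean(axis=0)) / (imu.std(axis=0) + 1e-8)
    return np.concatenate([mel, imu], axis=1)  # (n, 40 + n_channels)
```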

Another benefit of using the AI Evaluation Kit for data acquisition is the ease of on-device model evaluation, since the kit is built around the PSOC™ 6. Once you're satisfied with your model in DEEPCRAFT™ Studio and have generated the hardware-optimized C code, you can flash and test the model directly on the board. You never have to worry about the sensor mismatch that can come from collecting data with one microphone and then deploying the model to a board with a different one. In fact, you don't even need to plug in any additional device; you can collect data, train a model, and deploy it onto a PSOC™ 6 all with just one USB-C connection.
 
The new streaming protocol isn't just a boon for PSOC™ 6; it was developed to be hardware-agnostic and can just as easily stream data from any sensor that provides time-series data! If you're already committed to another sensor or board, this means you only need to develop firmware for it in order to stream its data into Studio. This can be done by following this guide.

Using just DEEPCRAFT™ Studio and an AI Evaluation Kit, I was able to go through every step of creating a brand-new model - from data collection to model building and hyperparameter optimization - and forge a proof of concept in just 8 hours. The new streaming support for the AI Evaluation Kit is a game changer for efficiently creating proofs of concept for Edge AI models. More guides to get you started can be found on our YouTube channel. If you're interested and want to schedule a technical training covering data collection, model creation, and evaluation with DEEPCRAFT™ Studio, contact us here, and we'll get you rolling!

Release notes

Here are the full release notes for the February release of DEEPCRAFT™ Studio:

New Features and Enhancements

Improved Tensor Streaming Protocol for Real-Time Data Collection

The Streaming Protocol is a comprehensive set of guidelines designed to facilitate the streaming of data from any sensor or development kit into DEEPCRAFT™ Studio. We have introduced an enhanced version of this protocol, known as Tensor Streaming Protocol version 2, to simplify the implementation of custom firmware for data collection and model evaluation in real-time.

Tensor Streaming Protocol version 2 defines a streaming mechanism used for communication between a client and a board. The protocol is intended to work over TCP, UDP, serial port, and Bluetooth communication, and is designed to handle multiple data streams from sensors, models, and playback devices, enabling efficient data transfer and processing in embedded systems. The protocol is based on protobuf3, and we provide the DotNetCli test client for evaluating your implementation. Refer to Tensor Streaming Protocol for Real-Time Data Collection to learn more.
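To make the transport pattern concrete, here is a minimal Python sketch of a client receiving protobuf payloads from a board over a serial port. The 4-byte length prefix, port name, and message class are illustrative assumptions for this sketch, not the actual Tensor Streaming Protocol version 2 framing or schema; refer to the protocol documentation for the real definitions:

```python
# Hypothetical sketch of a streaming client: read length-prefixed protobuf
# frames from a board over a serial port. Framing and names are assumptions,
# NOT the actual Tensor Streaming Protocol v2 wire format.
import struct
import serial  # pyserial

def read_frames(port: str, baudrate: int = 1000000):
    """Yield raw protobuf payloads, assuming each frame is prefixed with a
    4-byte little-endian length (an illustrative convention)."""
    with serial.Serial(port, baudrate, timeout=1) as ser:
        while True:
            header = ser.read(4)
            if len(header) < 4:
                continue  # read timed out; keep polling
            (length,) = struct.unpack("<I", header)
            payload = ser.read(length)
            if len(payload) == length:
                yield payload  # decode with the protocol's generated classes

# Usage (hypothetical generated class from the protocol's .proto files):
# for msg_bytes in read_frames("COM3"):
#     stream_msg = StreamMessage.FromString(msg_bytes)
```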

Tensor Streaming Protocol version 1 is deprecated, and we recommend using Tensor Streaming Protocol version 2 for improved performance and easier implementation. However, we continue to support backward compatibility: if your firmware is currently implemented using Tensor Streaming Protocol version 1, you will still be able to stream data into Studio without any issues.

Improved and Enhanced Streaming Firmware for Infineon PSOC™ 6 Artificial Intelligence Evaluation Kit

The PSOC™ 6 AI Evaluation Kit comes pre-programmed with streaming firmware. Using Tensor Streaming Protocol version 2, we have developed new streaming firmware for the kit that simplifies data collection and addresses known issues. The new firmware offers enhanced flexibility in collecting data from the kit's various sensors at different rates and allows for simultaneous data collection from multiple sensors. To learn how to collect data using the new streaming firmware, refer to Real-Time Data Streaming with PSOC™ 6 AI Evaluation Kit.

We recommend flashing the kits with the new streaming firmware to take advantage of these improvements. Note that kits manufactured after February 2025 will come pre-programmed with the new streaming firmware. For instructions on how to flash the new streaming firmware onto the kit, refer to the Infineon PSOC™ 6 Artificial Intelligence Evaluation Kit.

Starter Models

DEEPCRAFT™ Starter Models are designed to kickstart your Edge AI journey. They are deep learning-based projects that cover various use cases and serve as starting points for building custom applications. The DEEPCRAFT™ Starter Models are open-source and include all the necessary datasets, preprocessing steps, model architectures, and instructions to help you develop production-ready Edge AI models.

You can download DEEPCRAFT™ Starter Models from DEEPCRAFT™ Studio and start fine-tuning them to suit your specific needs. DEEPCRAFT™ Studio offers 3,000 minutes of compute time per month, free for development, evaluation, and testing purposes. This provides a valuable opportunity to gain hands-on experience in creating and deploying machine learning models from start to finish. To get started, refer to the DEEPCRAFT™ Starter Models section.

Formerly known as Starter Projects, these are now referred to as DEEPCRAFT™ Starter Models. We have also expanded the portfolio with a number of new models. Refer to Starter Models to learn more.

Fixes

Overall bug fixes and increased stability
