Date 01/16/24

Introducing Graph UX: A new way to visualize your workflow and explore more possibilities on the Edge

Gone are the days of top-to-bottom, fixed-pipeline workflows. Our latest Imagimob Studio release contains a major user experience upgrade we call Graph UX. 

Graph UX is a visual interface designed to give you a better overview of your machine learning (ML) modeling process while offering exciting new capabilities such as built-in data collection and real-time model evaluation for edge devices.

More freedom to operate 

The new Graph UX allows you and your ML engineering team to gain a complete visual overview of your modeling canvas, with the ability to zoom in and out and work at different levels of complexity. 

Because everyone works in the same workspace, at whatever level of abstraction suits them, Graph UX also makes ML modeling projects more accessible to team members with varying skill sets and experience levels, and makes it easier to work efficiently as a team.

 

The Graph UX difference:

  • Zoom in and out through abstraction levels to support beginners and experienced users
  • Multiple teams and even end-users can contribute to the platform by implementing nodes
  • Far more advanced preprocessing, models, post-processing, and data processing  
  • Supports more complex hardware, advanced models, and more use cases
  • Branches in graphs allow: 
    • Flexible, more advanced preprocessing and post-processing
    • Combining output from multiple models (voting and other concepts)
    • Using models as pre-processors 
    • and more!
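To make the branching idea concrete, here is a minimal sketch of what a branched graph with a voting node might look like if expressed in code. The node names, toy "models," and thresholds are all hypothetical illustrations, not the actual Imagimob Studio API:

```python
# Hypothetical sketch of a branching model graph: one preprocessing
# node feeds two model branches, and a vote node combines their outputs.
# (Illustrative only -- not the Imagimob Studio interface.)

def preprocess(sample):
    # Shared preprocessing node: normalize the raw window by its peak.
    peak = max(abs(x) for x in sample) or 1.0
    return [x / peak for x in sample]

def model_a(features):
    # Toy "model" branch: classify by mean amplitude.
    return 1 if sum(features) / len(features) > 0.2 else 0

def model_b(features):
    # Toy "model" branch: classify by peak amplitude.
    return 1 if max(features) > 0.8 else 0

def vote(*predictions):
    # Combine the branch outputs by majority vote.
    return 1 if sum(predictions) > len(predictions) / 2 else 0

def run_graph(sample):
    features = preprocess(sample)  # one node feeding two branches
    return vote(model_a(features), model_b(features))

print(run_graph([0.1, 0.9, 0.4, 0.2]))  # both branches fire -> 1
```

The same structure generalizes: a model's output can itself feed another node, which is how a model can act as a pre-processor for a downstream model.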

 

Brand new capabilities to support your ML modeling process   

Imagimob Studio’s Graph UX update not only enhances user-friendliness but introduces a collection of new capabilities to the ML design process. Let’s take a closer look. 

Built-in data collection

Skip the tedious work of collecting data from your device. Now you can just connect it to your PC with a USB cable, record data, and see and annotate it live.    


Real-time model evaluation

No more “black box” dilemmas that made it difficult to evaluate and debug models already deployed on your device. Now you can connect your device to your PC and run the ML models to understand what is happening live, anywhere in the model.

 

Ability to evaluate and run multiple models in parallel or in sequence

We’re leaving rigid modeling procedures in the past. Now you are free to explore deeper levels of complexity and experiment with new possibilities in a fraction of the time. Run multiple models in parallel, process data in different ways, combine them however you want, and reuse things you’ve already done without having to rewrite the same code twice. 

 

More capable processing of data and model outputs

Machine learning is about more than the trained model. It is about the complete model—a combination of data processing, modeling, and model output filtering. Graph UX simplifies these parts of the modeling workflow and gives users greater insight into how to get more performance out of their models on the edge.
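One common form of model output filtering on the edge is smoothing a stream of per-window class predictions so that a single misclassified frame does not trigger a spurious event. As an assumed example (not the Studio API), a sliding majority vote over recent predictions looks like this:

```python
from collections import Counter, deque

# Illustrative post-processing filter: smooth a stream of per-window
# class labels with a sliding majority vote. A single-frame outlier
# is outvoted by its neighbors, suppressing prediction flicker.
class MajorityFilter:
    def __init__(self, window=5):
        self.buf = deque(maxlen=window)

    def push(self, label):
        self.buf.append(label)
        # The most common label in the current window wins.
        return Counter(self.buf).most_common(1)[0][0]

f = MajorityFilter(window=3)
stream = ["idle", "idle", "wave", "idle", "idle"]
print([f.push(x) for x in stream])  # the lone "wave" is filtered out
```

In a graph workflow, a filter like this is just another node placed after the model, on equal footing with preprocessing and the model itself.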

Future-proof 

Be ready for what is coming. Graph UX supplies a vital foundation for cutting-edge industry developments such as advanced models and hardware with multiple ML accelerators and processing units, like Infineon’s PSOC™ Edge.  

Soon supporting your entire ML modeling workflow 

This first release of Graph UX covers data collection, evaluation, and code generation. In future releases, we plan to cover the entire end-to-end workflow step by step, including data management, data cleaning, augmentation, and training.

Imagimob Studio’s Graph UX update is now live. The update is automatic for existing users. 

Haven’t discovered Imagimob Studio yet? Click here to download

Or get in touch with us at .
