Date 03/05/22

The Future is Touchless: Radical Gesture Control Powered by Radar and Edge AI

Radical gesture control

As the pandemic has swept over the world, companies have searched for ways to reduce the risk of transmission. Gesture control (also known as gesture recognition) and touchless user interfaces have become a hot topic and sparked a technological revolution.

Both provide the ability to interact with devices without physically touching them. In addition to rising hygiene awareness, another primary driver of the touchless gesture control market today is the demand for lower maintenance costs. Gesture control can be used in a wide range of applications in automotive, industry, and gaming.

Here, we’ll explore the different types of gesture control technologies, provide some examples of how they are used in specific cases, and explain why, in our opinion, radar powered by Edge AI stands out above the rest for certain use cases.

What is gesture control?

Gesture control is a topic in both computer science and language technology, where the primary goal is the interpretation of human gestures via algorithms.

Gesture control devices have the ability to recognize and interpret movements of the human body, allowing users to interact with and control a system without direct physical contact. Gestures can originate from any bodily motion or state, but they most commonly come from the hands.

Types of touchless gesture control technologies

There are several different types of touchless technologies used today to enable devices to recognize and respond to gestures and movements. These range from cameras to radar, and they each come with pros and cons depending on the application.

Cameras (2D/3D)

Many gesture control applications use a camera as input. In fact, there are already a number of products available on the market that use smartphone cameras to build mobile apps with gesture control features.

In the automotive sector, BMW has led the way, featuring gesture control in some of their latest models. Their solution allows drivers to control select functions in the infotainment (iDrive) system by using hand gestures which are captured by a 3D camera.

Located in the roof lining, the camera scans an area in front of the dashboard to detect any gestures performed. Various functions can be operated, depending on the equipment.

Infrared sensors

An infrared (IR) sensor is an electronic device that measures and detects infrared radiation in the surrounding environment. Anything that emits heat gives off infrared radiation. IR is invisible to the human eye, since its wavelength is longer than that of visible light.

There are two types of infrared sensors: active and passive. Active infrared sensors both emit and detect infrared radiation, while passive infrared sensors only detect infrared radiation.

Active IR sensors have two parts: a light emitting diode (LED) and a receiver. When an object gets close to the sensor, the infrared light from the LED reflects off of the object and is detected by the receiver. Active IR sensors make excellent proximity sensors, and are commonly used in obstacle detection systems.
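To make the principle concrete, here is a minimal Python sketch of active-IR proximity detection. The read_receiver callable and the threshold value are hypothetical placeholders for a real ADC driver and a level tuned to the specific sensor and enclosure.

```python
# Minimal sketch of the active-IR proximity principle.
# read_receiver is assumed to return the reflected-intensity reading
# from the IR receiver's ADC (e.g. 0-4095); the threshold is illustrative.

REFLECTION_THRESHOLD = 1800

def object_nearby(read_receiver) -> bool:
    # The IR LED is assumed to be driven continuously; a strong reflection
    # reaching the receiver indicates an object close to the sensor.
    return read_receiver() > REFLECTION_THRESHOLD
```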

Due to the ability to detect a variety of simple gestures at a low cost, infrared sensing technology can be a good match for many industrial, consumer, and automotive applications.

Radar

Of all the touchless gesture control technologies, radar is the most robust. Radar has some unique properties. One is that it is extremely accurate: even the tiniest motions or gestures can be detected by a radar device.

Another unique property of radar is that it works through materials, such as plastic and glass. Furthermore, with radar, there are no lenses that can become dirty—which is not the case with cameras or infrared technologies. The business possibilities are endless.

If a camera or infrared sensor becomes dirty, it stops working. In fact, cameras often face many of the same limitations we find with the human eye. They require a clear lens in order to see properly, limiting where you can position them, and they don't always provide a crisp or reliable picture in bad weather, particularly in heavy rain or snow.

We believe that gesture control using radar is a great solution that can be applied in many use cases. In-ear headphones are one great example, and there are many other excellent application examples in consumer electronics, automotive, and industry 4.0.

Other gesture control projects

There are a couple of known projects that use radar for gesture control. For instance, Project Soli by Google was announced at the Google I/O developers’ conference back in 2015. Project Soli includes a radar sensor and gesture control software, and was launched commercially in the Google Pixel 4 smartphone in 2019.

The role of Edge AI in gesture control applications

Sensors capture the data, but, of course, in order for gesture control to work, that data needs to be decoded. Today, the vast majority of software for processing and interpreting sensor data is based on traditional methods which include transformation, filtering, and statistical analysis.

Such methods are designed by humans who, referencing their personal domain knowledge, are looking for some kind of “fingerprint” in the data.

Quite often, this fingerprint is a complex combination of events in the data, and machine learning is needed to solve the problem successfully.
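As a rough illustration of what such a hand-designed fingerprint can look like, the Python sketch below transforms a window of sensor samples, filters the result, and computes a few statistics. The frame layout, smoothing width, and rule thresholds are all assumptions made for illustration, not any specific product's pipeline.

```python
import numpy as np

# Traditional, hand-crafted approach: transform, filter, then summarize the
# signal into a "fingerprint" that a domain expert can write rules against.

def handcrafted_fingerprint(frame: np.ndarray) -> dict:
    """frame: 1-D array of sensor samples for one time window."""
    spectrum = np.abs(np.fft.rfft(frame))                           # transformation
    smoothed = np.convolve(spectrum, np.ones(5) / 5, mode="same")   # filtering
    return {                                                        # statistics
        "peak_bin": int(np.argmax(smoothed)),
        "peak_power": float(smoothed.max()),
        "energy": float(np.sum(smoothed ** 2)),
    }

def looks_like_swipe(features: dict) -> bool:
    # A human-designed rule; it becomes brittle once gestures get complex,
    # which is where machine learning takes over.
    return features["peak_power"] > 3.0 and 2 <= features["peak_bin"] <= 10
```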

To be able to process sensor data in real time, the machine learning model needs to run locally on the chip, close to the sensor itself—usually called “the edge.” This concept is called Edge AI or tinyML.
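As one common way of getting a trained model onto such a chip, the sketch below uses the standard TensorFlow Lite int8 quantization flow. This is a generic tinyML workflow, not a description of Imagimob's toolchain; the trained model and calibration frames are assumed to come from your own training step.

```python
import numpy as np
import tensorflow as tf

# Generic tinyML deployment sketch: quantize a trained Keras model to int8
# so it can run on a microcontroller next to the sensor.

def quantize_for_mcu(trained_model: tf.keras.Model,
                     calibration_frames: np.ndarray,
                     out_path: str = "gesture_model.tflite") -> None:
    def representative_data():
        # A small sample of real sensor windows calibrates the int8 ranges.
        for frame in calibration_frames[:100]:
            yield [frame.astype(np.float32)[np.newaxis, ...]]

    converter = tf.lite.TFLiteConverter.from_keras_model(trained_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_data
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8

    with open(out_path, "wb") as f:
        f.write(converter.convert())
```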

Combining Edge AI with Acconeer radar for an exciting new class of embedded gesture control applications

In the pursuit of creating radically new and creative gesture control embedded applications, we found a great match in Acconeer, a leading innovator in radar sensors.

Both Imagimob and Acconeer share the mission of supplying solutions for small battery-powered devices, with extreme requirements on energy efficiency, processing capacity, and cost.

Acconeer produces the world's smallest and most energy-efficient radar sensor, the A1. The data from the sensor contains a lot of information and, for advanced use cases such as gesture control, complex interpretation is needed: a perfect task for Imagimob's ultra-efficient Edge AI software. The most radical gesture control definitely requires Edge AI.

With its small size, the sensor can be placed inside a pair of headphones, and gestures can function as virtual buttons to control functionality that is usually assigned to physical buttons.

Gesture recognition in TWS earbuds

In 2020, we set out to create something not yet seen on the market: a fully functional prototype of gesture-controlled in-ear headphones, built together with Acconeer and OSM Group (an ODM).

Gesture control is a perfect fit for in-ear headphones, since the earbud is small and invisible to the user, which makes physical buttons difficult to use.

This project elaborated on our original concept to include MCU selection and firmware development, while keeping everything small enough to fit into the form factor of in-ear headphones.

The result? The end product, and the great step forward in the exciting future of touchless technology it represents, is now available as fully working prototypes that customers can test with Spotify running on a mobile phone. The prototypes are convenient to use in everyday life.


Combining tinyML with Texas Instruments IWR6843AOP/AWR6843AOP mmWave Radar


Different radars have different properties

Different radars have different properties. Where we successfully demonstrated fully functional true wireless in-ear earbuds with the Acconeer radar, we pursued more industrial and automotive applications using the Texas Instruments radar. There we took on a new challenge: comparing and building models for opposing gesture pairs.

We began working on the TI radar and completed an integration with Imagimob AI in Q2 2021. We created a data pipeline solution that streamlines and simplifies the process of data collection.

Using this, we built a simple model that identifies one of four classes (two gesture pairs): clockwise and counterclockwise finger rotation, and left and right hand swipes.
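For readers who want a feel for what such a model can look like, below is a minimal Keras sketch of a four-class gesture classifier. The input shape (32 frames of 64 features each) and the layer sizes are assumptions for illustration; the actual TI radar feature layout and the model in the starter project may differ.

```python
import tensorflow as tf

# Minimal sketch of a four-class gesture classifier over windows of radar frames.
NUM_CLASSES = 4  # cw rotation, ccw rotation, swipe left, swipe right

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 64)),              # 32 frames x 64 features (assumed)
    tf.keras.layers.Conv1D(16, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(windows, labels, epochs=30, validation_split=0.2)
```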

TI radar starter project

The end result is a starter project that allows new users to get up and running, building and testing models in minutes. The starter project features a lightweight model that runs efficiently on the TI radar firmware and differentiates between opposing gestures that are typically difficult to tell apart.

Download the starter project and you get the Python tools for swift data collection, as well as pre-existing data and models, so that you can experiment with building and testing existing models. Finally, you get a testing tool that allows you to test the Python models in real time on your device.
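As a hypothetical example of how such real-time testing can work in principle, the sketch below slides a fixed-length window over streaming radar frames and runs a trained model on each window. The read_radar_frame callable, window length, and confidence threshold are assumptions; this is not the starter project's actual testing tool.

```python
import numpy as np

# Hypothetical real-time test loop. read_radar_frame() is assumed to return one
# frame of radar features as a 1-D NumPy array; model is a trained Keras classifier.

GESTURES = ["rotate_cw", "rotate_ccw", "swipe_left", "swipe_right"]
WINDOW = 32  # number of frames per classification window

def stream_predictions(model, read_radar_frame):
    buffer = []
    while True:
        buffer.append(read_radar_frame())
        buffer = buffer[-WINDOW:]                    # keep only the most recent frames
        if len(buffer) < WINDOW:
            continue
        window = np.stack(buffer)[np.newaxis, ...]   # shape (1, WINDOW, features)
        probs = model.predict(window, verbose=0)[0]
        if probs.max() > 0.8:                        # simple confidence gate
            print(GESTURES[int(probs.argmax())])
```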

Imagimob in gesture control

Imagimob has integrated the Acconeer radar and the TI radar in Imagimob AI, which is a development platform for machine learning on edge devices. We have developed content packs for both radars that make it possible to get started with gesture control applications in minutes. Learn more about Imagimob AI and sign up for a free account here.



