Machine learning (ML) engineers working with Edge AI face several challenges today, from getting lost in the modeling process to wading through enormous datasets in search of the right training data. One tool that can ease your workflow and help you get your model up and running faster is Generative Artificial Intelligence (Generative AI).
While many associate Generative AI with large language models and the ability to ask questions and get answers back, our Head of Machine Learning Development, Sam Al-Attiyah, says the possibilities for the ML modeling workflow go well beyond that.
“Generative AI's ability to effectively process and learn from data extends beyond text processing,” says Sam. “It can be applied to various modalities such as image, sound, radar, and other signals. Being able to understand the world through any lens is extremely valuable in the data modeling process.”
Although Generative AI is still evolving, there are already several ways ML engineers can take advantage of its capabilities today. This article explores some of the possibilities.
1. Let it assist your development journey
How should you formulate your Edge AI project? Ask a large language model (LLM) like ChatGPT or Llama. Not only will it offer answers, it can also guide you through the entire process and act as a participant in your feedback loop, helping you make the right choices along the way. You can also ask an LLM to propose different architectures that work under specific constraints, for example asking it to make the architecture smaller and more efficient or to limit the number of layers.
“I think of it like having an assistant or a responsive blackboard you can bounce ideas around with,” says Sam. “Of course, keep in mind, it’s an assistant, and you need to treat it like one. Don’t go down the dangerous path of thinking an LLM knows all of the answers. In the end, you will still need to filter through what it gives you and use your own good judgment.”
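To make this concrete, here is a minimal sketch of what such a constrained architecture prompt might look like in code, using the OpenAI Python client as one example. The model name, constraints, and prompt wording are illustrative assumptions, not recommendations.

```python
# Minimal sketch: asking an LLM to propose an edge-friendly model architecture.
# Assumes the OpenAI Python client and an API key in the environment; the model
# name, prompt, and constraints below are illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Propose a convolutional architecture for keyword spotting on a Cortex-M4 "
    "microcontroller. Constraints: under 100 kB of parameters, at most 6 layers, "
    "8-bit quantization friendly. List the layers and explain each choice briefly."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)

# Treat the answer as a starting point to review, not a final design.
print(response.choices[0].message.content)
```

However you phrase the prompt, the response is a draft to check against your own constraints, not a finished design.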
2. Find relevant data in big datasets
Sifting through a huge sample set in search of the specific data needed to train a model is one of the biggest speed bumps ML engineers face in the modeling process. Handing the task to a Generative AI model can spare you countless tedious hours. The multi-modal Generative AI models available today can process text as well as video, images, and audio, and identify what you’re looking for across all of them.
“A great use case for this technology is enhancing safety monitoring on a factory floor,” says Sam. “You can interact with Generative AI and instruct it to, for example, ‘Analyze all of this extensive video footage and identify images where someone isn't wearing their safety equipment or where a person is too close to a machine.’ By doing this, you can quickly extract crucial data from a massive dataset that would have otherwise taken an enormous amount of time to review.”
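As one hedged illustration of this kind of data triage, the sketch below scores extracted video frames against plain-language descriptions using a multi-modal model (CLIP via the Hugging Face transformers library). The frame directory, labels, and threshold are placeholders for illustration, not part of any particular product workflow.

```python
# Minimal sketch: scoring video frames against text descriptions with CLIP,
# a multi-modal model, to surface candidate frames for human review.
from pathlib import Path

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a worker wearing a hard hat", "a worker without a hard hat"]
frames = sorted(Path("extracted_frames").glob("*.jpg"))  # frames dumped from footage

flagged = []
for frame_path in frames:
    image = Image.open(frame_path)
    inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]
    # Flag frames where "without a hard hat" is the more likely description.
    if probs[1] > 0.6:
        flagged.append(frame_path)

print(f"{len(flagged)} frames flagged for human review")
```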
3. Generate training data for a scenario you don’t have
A key to building a robust and accurate ML model is training it on a wide variety of scenarios. However, data for the scenarios you need is not always readily available or resource-efficient to source. Generative AI can help solve this problem by making it possible to train models on synthetic data. You can ask it to generate, for example, specific images or sounds that fill in the gaps.
“We had a thesis student at Imagimob who gave an audio model prompts and hints to see whether it was possible to improve the model using only synthetic data, and the results were exciting. They showed us that you can,” says Sam.
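As a hedged illustration of the image case, the sketch below uses a text-to-image model (Stable Diffusion via the diffusers library) to generate candidate images for a scenario missing from a dataset. The checkpoint, prompt, and output paths are assumptions made for illustration only.

```python
# Minimal sketch: generating synthetic images for a scenario that is missing
# from the training set, using Stable Diffusion via the diffusers library.
from pathlib import Path

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt = "a forklift operating in a dimly lit warehouse, industrial camera view"
Path("synthetic").mkdir(exist_ok=True)

# Generate a small batch of candidate training images for the missing scenario.
for i in range(8):
    image = pipe(prompt).images[0]
    image.save(f"synthetic/forklift_lowlight_{i:02d}.png")
```

Synthetic samples like these still need to be reviewed and validated before they are mixed into a training set.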
4. Increase model accuracy with advanced computational blocks
Although this is a less direct way to enjoy the Generative AI advantage, it is worth noting that advancements in Generative AI have spurred the development of hardware capable of training and running large models by accelerating their core computational blocks. Engineers have also developed new algorithms and layers within these models, and those building blocks are now being integrated into Edge AI and embedded hardware, where they can be accelerated as well.
“Lately, we have been experimenting with adding support that would enable these building blocks to be used in smaller models,” says Sam. “It has been proven that, even in a small model, they will improve model accuracy compared to previous generations of Edge AI models.”
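To illustrate the general idea, the sketch below adds a single lightweight self-attention block, one of the building blocks popularized by large generative models, to a compact PyTorch time-series classifier. The layer sizes and input shapes are illustrative and not a reference design for any particular Edge AI toolchain.

```python
# Minimal sketch: a small time-series classifier with a compact convolutional
# front end and one lightweight self-attention block on top.
import torch
import torch.nn as nn


class TinyAttentionClassifier(nn.Module):
    def __init__(self, in_channels: int = 3, num_classes: int = 5, dim: int = 32):
        super().__init__()
        # Compact convolutional feature extractor, typical of small edge models.
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, dim, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(dim, dim, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
        )
        # Single lightweight self-attention block over the downsampled sequence.
        self.attention = nn.MultiheadAttention(embed_dim=dim, num_heads=2, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        features = self.conv(x).transpose(1, 2)        # (batch, time, dim)
        attended, _ = self.attention(features, features, features)
        features = self.norm(features + attended)      # residual connection + norm
        return self.head(features.mean(dim=1))         # pool over time, then classify


# Example: a batch of 3-axis accelerometer windows, 128 samples long.
model = TinyAttentionClassifier()
logits = model(torch.randn(4, 3, 128))
print(logits.shape)  # torch.Size([4, 5])
```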
Generative AI on the Edge is yet to reach its full potential
As an ML engineer, you can leverage Generative AI to work smarter and faster: streamline your development process, extract the right data from vast datasets, generate training data for scenarios that are not readily available, and integrate advanced computational blocks into your models to achieve greater accuracy.
While there are many valuable ways to enjoy the benefits of Generative AI today, the future holds even more exciting possibilities once Generative AI can be run on Edge devices. At Imagimob, we are eager to explore them all.
Subscribe to our monthly newsletter to stay up-to-date on all the latest blogs, news, events, webinars and more from Imagimob.