IBM’s research technology makes cutting-edge artificial intelligence applications scalable

In context: Edge computing is one of the most intriguing topics driving the evolution of the world of technology. After all, how can you not be excited about a concept that promises to bring distributed intelligence to a multitude of interconnected computing resources, all working together toward a single goal?

Trying to distribute IT tasks across multiple sites and then coordinate those different efforts into a cohesive and meaningful whole is a lot harder than it looks. This is especially true when trying to turn small proof-of-concept projects into full-scale production.

Problems such as moving huge amounts of data from place to place – which, ironically, edge computing was supposed to make unnecessary – as well as overwhelming demands to label that data are just two of the many factors that have conspired to make successful edge computing deployments the exception rather than the rule.

IBM’s research group has been working for several years to help overcome some of these challenges. Recently, it has started to see success in industrial settings like automotive manufacturing by taking a different approach to the problem. In particular, the company has redesigned how data is analyzed at various edge locations and how AI models are shared with other sites.

In car manufacturing plants, for example, most companies have started using AI-powered visual inspection models to help spot manufacturing defects that would be difficult or too expensive for humans to catch. Appropriate use of tools such as the Zero D (zero defects or downtime) visual inspection solution in IBM’s Maximo Application Suite can help automakers both save significant amounts of money by avoiding defects and keep production lines running as quickly as possible. Given the supply chain constraints that many automotive companies have been facing recently, this point has become particularly critical.

The real trick, however, is to get to the Zero D aspect of the solution, because inconsistent results based on misinterpreted data can actually have the opposite effect, especially if that erroneous data ends up being propagated across multiple manufacturing sites via inaccurate AI models. To avoid costly and unnecessary production line downtime, it is essential to ensure that only the appropriate data is used to generate the AI models and that the models themselves are regularly checked for accuracy, in order to avoid any flaws that could mislabel the data.

This “recalibration” of AI models is the essential secret sauce that IBM Research brings to manufacturers – and, in particular, to a large American automotive supplier. IBM has been working on out-of-distribution (OOD) detection algorithms that can determine whether the data being used to refine visual models falls outside an acceptable range and could, therefore, cause the model to make inaccurate inferences on incoming data. Most importantly, it does this work in an automated fashion, avoiding the slowdowns that would result from time-consuming human labeling efforts and allowing the work to span multiple manufacturing sites.
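IBM hasn’t published the details of its algorithms here, but a minimal sketch of one common OOD-detection approach – a Mahalanobis-distance check on feature embeddings, with illustrative data and a made-up threshold – conveys the general idea:

```python
import numpy as np

# Hypothetical sketch of OOD detection via Mahalanobis distance.
# This is a generic technique, not IBM's specific algorithm.

def fit_reference(train_embeddings: np.ndarray):
    """Estimate mean and inverse covariance of in-distribution features."""
    mu = train_embeddings.mean(axis=0)
    cov = np.cov(train_embeddings, rowvar=False)
    cov_inv = np.linalg.pinv(cov)  # pseudo-inverse for numerical safety
    return mu, cov_inv

def ood_scores(embeddings: np.ndarray, mu, cov_inv) -> np.ndarray:
    """Mahalanobis distance of each embedding from the training distribution."""
    diff = embeddings - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

# Stand-ins for real image embeddings from the inspection cameras.
train_feats = np.random.randn(1000, 64)
new_feats = np.random.randn(50, 64) * 2.5  # deliberately drifted samples

mu, cov_inv = fit_reference(train_feats)
threshold = np.percentile(ood_scores(train_feats, mu, cov_inv), 99)
scores = ood_scores(new_feats, mu, cov_inv)

keep = new_feats[scores <= threshold]     # safe to feed into model refinement
flagged = new_feats[scores > threshold]   # held out for human review
```

The key design point is the routing at the end: flagged samples never silently enter retraining, which is what prevents a bad model from being propagated to other sites.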

A by-product of OOD detection, called data summarization, is the ability to select which data is worth manual inspection, labeling, and model updating. In fact, IBM is working toward a 10 to 100x reduction in the amount of data traffic that currently occurs with many edge computing deployments. Additionally, by eliminating redundant data (nearly identical images), this approach makes 10x better use of the man-hours spent on manual inspection and labeling.
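Again, the specifics of IBM’s summarization pipeline aren’t public, but a toy sketch shows how near-duplicate images might be filtered out before labeling – here a greedy pass over embedding similarities, with an invented threshold:

```python
import numpy as np

# Hypothetical data summarization sketch: drop near-duplicate images
# (by embedding cosine similarity) so humans only label distinct frames.
# Real systems might use clustering or coreset selection instead.

def summarize(embeddings: np.ndarray, sim_threshold: float = 0.98):
    """Return indices of a de-duplicated subset of the embeddings."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    kept = []
    for i, vec in enumerate(normed):
        # Keep this frame only if it isn't nearly identical to one already kept.
        if all(vec @ normed[j] < sim_threshold for j in kept):
            kept.append(i)
    return kept

frames = np.random.randn(500, 128)  # stand-in for image embeddings
subset = summarize(frames)
print(f"labeling workload: {len(subset)} of {len(frames)} frames")
```

Only the surviving subset would need to leave the factory floor, which is where the claimed reductions in data traffic and labeling hours would come from.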


Combined with state-of-the-art techniques such as Once-for-All (OFA) model architecture exploration, the company also hopes to reduce model sizes by up to 100x, enabling more efficient edge computing deployments. Additionally, in conjunction with automation technologies designed to distribute these models and datasets more easily and accurately, it enables companies to build cutting-edge AI-powered solutions that can successfully scale from small POCs to full production deployments.
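The Once-for-All idea, roughly, is to train one large “supernet” once and then extract smaller subnetworks suited to each edge device’s budget, rather than retraining from scratch per device. A highly simplified sketch of that final selection step, with invented candidate numbers:

```python
from dataclasses import dataclass

# Hypothetical sketch of OFA-style deployment: pre-measured subnetworks
# extracted from one supernet, matched to each device's memory budget.

@dataclass
class Subnet:
    name: str
    size_mb: float   # model footprint on the device
    accuracy: float  # validation accuracy of this extracted subnetwork

CANDIDATES = [  # illustrative numbers, not real benchmark results
    Subnet("full", 400.0, 0.95),
    Subnet("medium", 40.0, 0.93),
    Subnet("small", 4.0, 0.90),  # ~100x smaller than the full model
]

def pick_for_device(budget_mb: float) -> Subnet:
    """Best-accuracy subnetwork that fits a device's memory budget."""
    fitting = [s for s in CANDIDATES if s.size_mb <= budget_mb]
    if not fitting:
        raise ValueError("no subnetwork fits this device")
    return max(fitting, key=lambda s: s.accuracy)

print(pick_for_device(50.0))  # -> picks the 'medium' subnet for this budget
```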

Efforts like the one underway at a major US automotive supplier are an important step toward proving the viability of these solutions for markets like manufacturing. However, IBM also sees an opportunity to apply these AI model refinement concepts to many other industries, including telecommunications, retail, industrial automation, and even autonomous driving. The trick is to create solutions that work despite the inevitable heterogeneity of edge computing deployments while taking advantage of the unique value that each edge site can produce on its own.

As edge computing evolves, it’s clear that it’s not necessarily about collecting and analyzing as much data as possible, but rather about finding the right data and using it as wisely as possible.
