IBM Research Technology Makes Cutting-Edge Artificial Intelligence Applications Scalable

In context: Edge computing is one of the most intriguing topics driving the evolution of the world of technology. After all, how can you not be excited about a concept that promises to bring distributed intelligence to a multitude of interconnected computing resources all working together toward a single goal?

Trying to distribute IT tasks across multiple sites and then coordinate those various efforts into a cohesive and meaningful whole is a lot harder than it looks. This is especially true when trying to turn small proof-of-concept projects into full-scale production.

Problems such as moving huge amounts of data from place to place – which ironically was supposed to be unnecessary with edge computing – as well as overwhelming demands to label that data are just two of the many factors that have conspired to make successful edge computing deployments the exception as opposed to the rule.


IBM’s research group has been working for several years to help overcome some of these challenges. Recently, they have started to see success in industrial settings like automotive manufacturing by taking a different approach to the problem. In particular, the company has redesigned how data is analyzed at various edge locations and how AI models are shared with other sites.

In car manufacturing plants, for example, most companies have started using AI-powered visual inspection models that help spot manufacturing defects that may be difficult or too expensive for humans to recognize. Appropriate use of tools such as the visual inspection solution in IBM’s Maximo Application Suite, which targets “Zero D” (zero defects and zero downtime), can help automakers both save significant amounts of money by avoiding defects and keep production lines running as quickly as possible. Given the supply chain constraints that many automotive companies have been facing, this point has become particularly critical.

The real trick, however, is getting to the Zero D aspect of the solution, because inconsistent results based on misinterpreted data can actually have the opposite effect, especially if that erroneous data ends up being propagated across multiple manufacturing sites via inaccurate AI models. To avoid costly and unnecessary production line downtime, it is essential to ensure that only the appropriate data is used to generate the AI models and that the models themselves are regularly checked for accuracy, in order to catch any flaws that could lead to mislabeled data.

This “recalibration” of AI models is the essential secret sauce that IBM Research brings to manufacturers, and in particular to a large American automotive supplier. IBM is working on out-of-distribution (OOD) detection algorithms that can help determine whether the data used to refine visual models falls outside an acceptable range and could, therefore, cause the model to perform inaccurate inference on incoming data. Most importantly, this work happens in an automated fashion, avoiding the potential slowdowns that would result from time-consuming human labeling efforts and allowing the work to span multiple manufacturing sites.
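For readers who want a sense of how score-based OOD detection can work in practice, the sketch below shows one common approach: flag images for which a classifier’s confidence falls below a threshold and route only those for human review. It is a minimal illustration assuming a PyTorch image classifier; the function names and threshold are hypothetical and do not represent IBM’s actual algorithms.

```python
# Minimal sketch of confidence-based OOD detection (assumed approach, not IBM's).
import torch
import torch.nn.functional as F

@torch.no_grad()
def ood_scores(model, images):
    """Return an OOD score per image: lower softmax confidence means more OOD."""
    logits = model(images)                          # shape: (batch, num_classes)
    confidence = F.softmax(logits, dim=1).max(dim=1).values
    return 1.0 - confidence                         # higher score = more likely OOD

def split_for_refinement(model, images, threshold=0.5):
    """Keep in-distribution samples for automated model refinement; flag the rest."""
    scores = ood_scores(model, images)
    in_dist = scores < threshold
    return images[in_dist], images[~in_dist]        # (usable data, needs human review)
```

In a setup like this, only the flagged images would ever need to leave an edge site or be seen by a human labeler, which is what keeps the process fast and scalable.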

A by-product of OOD detection, called data summarization, is the ability to select data for manual inspection, labeling, and model updating. In fact, IBM is working on a 10 to 100x reduction in the amount of data traffic that currently occurs with many edge computing deployments. Additionally, this approach enables 10x greater utilization of man-hours spent on manual inspection and labeling by eliminating redundant data (nearly identical images).
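The snippet below is a small sketch of one way such summarization can work: compare image embeddings and greedily keep only one representative of each group of near-duplicates. The embedding source and similarity threshold are illustrative assumptions, not details IBM has disclosed.

```python
# Sketch of data summarization via near-duplicate removal (illustrative assumptions).
import numpy as np

def summarize(embeddings, similarity_threshold=0.95):
    """Greedily keep one representative per group of nearly identical images.

    embeddings: (num_images, dim) array of L2-normalized image features.
    Returns the indices of images worth forwarding for manual labeling.
    """
    kept = []
    for i, emb in enumerate(embeddings):
        # Skip this image if it is nearly identical to one we already kept.
        if all(float(np.dot(emb, embeddings[j])) < similarity_threshold for j in kept):
            kept.append(i)
    return kept
```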


Combined with state-of-the-art techniques such as Once-for-All (OFA) model architecture exploration, the company also hopes to reduce model size by up to 100x, enabling more efficient edge computing deployments. Additionally, in conjunction with automation technologies designed to distribute these models and datasets more easily and accurately, this approach lets companies create cutting-edge AI-powered solutions that can successfully scale from smaller POCs to full production deployments.
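As a rough illustration of the Once-for-All idea, the sketch below samples sub-networks of different depths and widths from a single trained supernet and keeps the smallest one that still meets an accuracy target. The supernet interface shown here is hypothetical; it is not IBM’s implementation or the published OFA API.

```python
# Conceptual sketch of OFA-style sub-network selection (hypothetical supernet interface).
import itertools

def search_smallest_subnet(supernet, eval_fn, accuracy_target=0.95):
    """Return the config of the smallest sub-network meeting the accuracy target.

    supernet: assumed to expose set_active_subnet(depth, width) and param_count().
    eval_fn:  callable(supernet) -> validation accuracy of the active sub-network.
    """
    best = None
    for depth, width in itertools.product([2, 3, 4], [0.5, 0.75, 1.0]):
        supernet.set_active_subnet(depth=depth, width=width)
        accuracy = eval_fn(supernet)
        size = supernet.param_count()
        if accuracy >= accuracy_target and (best is None or size < best[0]):
            best = (size, {"depth": depth, "width": width, "accuracy": accuracy})
    return best[1] if best else None
```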

Efforts like the one being explored at a major US automotive supplier are an important step in making these solutions viable for markets like manufacturing. However, IBM also sees the opportunity to apply these AI model refinement concepts to many other industries, including telecommunications, retail, industrial automation and even autonomous driving. The trick is to create solutions that work despite the inevitable heterogeneity that occurs with edge computing and to take advantage of the unique value that each edge computing site can produce on its own.

As edge computing evolves, it’s clear that it’s not necessarily about collecting and analyzing as much data as possible, but rather about finding the right data and using it as wisely as possible.

Bob O’Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting firm that provides strategic consulting and market research services to the technology industry and the professional financial community. You can follow him on Twitter @bobodtech.
