IoT Technology Designers Need Not Be Fooled on AI and Edge Computing

Keep your head out of the cloud. It isn't perfect.

The Internet of Things (IoT) community has long touted the benefits of the cloud. Its ability to crunch, store and distribute large amounts of data is unparalleled. However, the cloud suffers from one significant limitation that bottlenecks the world's ever-growing flow of bits and bytes: the latency introduced by uploading data over limited bandwidth, processing it remotely and downloading the results.

Consider an engineer designing automated industrial systems or an autonomous vehicle. These Industrial IoT (IIoT) systems create vast amounts of data, while their work environments demand seamless communication and fast reaction times. These products can't wait hundreds of milliseconds for data to be sent to the cloud, processed, interpreted and then sent back. Any delay could spell disaster.

However, if most of that data could be processed via edge computing, on or near the device, it could be interpreted 10 times faster. This latency problem is one reason Dimension Market Research estimates that the edge computing market will reach $702.8 billion by 2033.

The edge computing market is set to reach $702.8 billion by 2033, at a 40.0% CAGR. (Image source: Dimension Market Research.)

Though consumer electronics operate in lower-stakes environments, engineers can still expect consumer demands to push these devices along a similar trend.

In short, as AI, machine learning (ML), augmented reality (AR) and virtual reality (VR) become more popular among consumers and industry alike, less data per device will be sent to the cloud, and more processing will be done at the edge.

What engineers need to get ahead of edge computing trends

Clearly, engineers designing IoT devices can no longer sleep on edge computing. Though it will remain optimal for some devices to send all of their data to the cloud, edge computing will become a necessity, or at least a consideration, for most product designs.

The EdgeBox-RPI-200 from Seeed Technology is a single-board computer with a 1.5 GHz quad-core processor and 4 GB of RAM. It is based on the Raspberry Pi CM4 with an Arm Cortex-A72. (Image source: Seeed Technology.)

This means that engineers will need to consider suppliers like Seeed Technology, which offers its EdgeBox-RPI-200. According to the device's datasheet, the single-board computer is optimized for cloud and IoT applications within harsh industrial environments. It's based on the Raspberry Pi CM4 with an Arm Cortex-A72 and offers a 1.5 GHz quad-core processor and 4 GB of RAM.

So, this device can crunch numbers at the edge, make its own decisions based on that data, and then inform the cloud what happened.
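As a rough illustration of that division of labor, the sketch below shows an edge node that processes raw readings locally and uploads only a compact summary. This is hypothetical Python, not from any EdgeBox documentation: the threshold value, field names and sample readings are all invented for the example.

```python
import json
import statistics

SPEC_LIMIT = 2.5  # assumed out-of-spec threshold (illustrative only)

def process_at_edge(readings):
    """Crunch raw readings locally; return only a compact summary for the cloud."""
    flagged = [r for r in readings if r > SPEC_LIMIT]  # local decision, no round trip
    summary = {
        "count": len(readings),
        "mean": round(statistics.mean(readings), 3),
        "out_of_spec": len(flagged),
    }
    return json.dumps(summary)  # a few dozen bytes instead of the raw stream

payload = process_at_edge([2.1, 2.4, 2.7, 2.2, 3.0])
print(payload)  # {"count": 5, "mean": 2.48, "out_of_spec": 2}
```

The device makes its decisions from the raw data on the spot; the cloud only ever sees the JSON summary of what happened.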

How will engineers develop the future of AI, IoT and edge computing?

This all raises the question: What might a smart, AI-enabled IIoT system with edge computing capabilities look like? Well, it might resemble our own brains when we encounter a spider.

Edge computing can act like the inner parts of the brain: it receives information and responds immediately, without the rest of the brain needing to think about the spider in much detail. The person yelps, their heart pumps faster and they go on high alert. On the IoT device, this resembles an AI or ML algorithm responding to current data: it sees that a part on the production floor is out of spec and tells the equipment to take it off the line, and a human can then inspect the part to see whether it's as bad as the algorithm says.

Now consider the brain again. After the initial scare and response, the outer, thinking parts of the brain soon kick in. They evaluate what happened and how the body responded, then instruct the inner brain either to calm down (it's just a house spider) or to keep panicking (it looks venomous). For the IoT device, this could resemble the processed data being sent to the cloud. The cloud could then take the algorithm's computed response and the eventual outcome of the part taken off the line (good or bad) and use this information to refine the AI algorithm.

Now the only data sent to the cloud is the minimal amount needed to update the AI model, and all the cloud needs to send back to the device is the newly optimized algorithm. This setup makes the most of the limited bandwidth IoT engineers have to work with.
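The feedback loop described above can be sketched in a few lines. Everything here is an illustrative assumption: the "model" is a single pass/fail threshold standing in for real AI weights, and the function names, record format and adjustment step are invented for the example.

```python
def edge_inference(measurement, threshold):
    """Fast local decision at the edge: flag the part if it exceeds the threshold."""
    return measurement > threshold

def cloud_refine(threshold, records):
    """Cloud side: nudge the model using the labeled outcomes sent up from the edge.

    Each record is (measurement, predicted_bad, actually_bad) -- the compact
    data the device uploads after a human inspects the flagged part.
    """
    for measurement, predicted, actual in records:
        if predicted and not actual:      # false alarm: loosen slightly
            threshold += 0.05
        elif actual and not predicted:    # missed defect: tighten slightly
            threshold -= 0.05
    return round(threshold, 2)

threshold = 2.5
records = [
    (2.6, edge_inference(2.6, threshold), False),  # flagged, but the part was fine
    (2.4, edge_inference(2.4, threshold), True),   # passed, but the part was bad
]
threshold = cloud_refine(threshold, records)  # cloud returns the updated "model"
```

Only the tiny `records` list goes up and only the refined threshold comes back down, which is the bandwidth saving the article describes.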

About this author


For over 10 years, Shawn Wasserman has informed, inspired and engaged the engineering community through online content. As a senior writer at WTWH media, he produces branded content to help engineers streamline their operations via new tools, technologies and software. While a senior editor at Engineering.com, Shawn wrote stories about CAE, simulation, PLM, CAD, IoT, AI and more. During his time as the blog manager at Ansys, Shawn produced content featuring stories, tips, tricks and interesting use cases for CAE technologies. Shawn holds a master’s degree in Bioengineering from the University of Guelph and an undergraduate degree in Chemical Engineering from the University of Waterloo.

TechForum

Have questions or comments? Continue the conversation on TechForum, DigiKey's online community and technical resource.
