The Paradigm-Changing Effects Of AI Innovation At The Edge

By Jason Compton
Whenever an innovation catches on in the data center, it's just a matter of time before it branches out to devices at the edge of computing. Business applications, storage and data processing have all grown in power and popularity on edge devices even as their cloud and data center counterparts have continued to evolve.
The latest wave is the emergence of on-device artificial intelligence (AI). Instead of relying entirely on the cloud for AI insights, a new generation of specialized algorithms and chips is delivering deep insights wherever work is done. According to ABI Research, shipments of devices with edge AI capabilities will grow fifteenfold by 2023, to 1.2 billion units. The share of AI tasks that take place on edge devices instead of in the cloud will grow more than sevenfold, from 6 percent in 2017 to 43 percent in 2023.
Why Edge AI Is So Crucial 
The case for edge AI is intuitive if you’ve ever wondered why consumer AI agents take a while to answer requests. When AI relies entirely on the cloud, it can be bottlenecked by network connections and availability.
Internet traffic, whether wired, Wi-Fi or cellular, is always subject to some amount of latency, typically at least 10 milliseconds. On a congested cellular network the delays can be much longer, and handshaking and authentication between the edge device and cloud AI might slow things down even more.
At the edge, AI can access unfiltered, full-fidelity data. A single internet of things (IoT) sensor, such as a camera or thermometer, can generate several gigabytes of data daily. Those high volumes are often impractical to share and store in the cloud, so a cloud AI may have access only to stripped-down, compressed or aggregated reports. When that’s the case, important nuances can be lost, and that limits the potential to improve AI training in the cloud or data center.
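To put that volume in perspective, a quick back-of-envelope calculation helps; the 0.5-megabit-per-second camera stream below is an illustrative assumption, not a figure from the article or from ABI Research.

# Rough estimate of the daily data volume from one always-on camera.
bitrate_mbps = 0.5                  # assumed average compressed video bitrate
seconds_per_day = 24 * 60 * 60      # 86,400 seconds in a day
megabits_per_day = bitrate_mbps * seconds_per_day
gigabytes_per_day = megabits_per_day / 8 / 1000   # bits to bytes, then MB to GB
print(f"~{gigabytes_per_day:.1f} GB per camera per day")   # roughly 5.4 GB

Even at that modest rate, continuously uploading every sensor in a building quickly becomes impractical, which is why cloud systems so often see only summaries.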
When AI workloads are processed on edge devices using edge data inputs, they avoid network delays entirely. Tasks can start in a matter of microseconds (one microsecond equals one millionth of a second), and the device can act as soon as the answer is ready instead of waiting for that answer to come back from the cloud. That means robots, cameras, computers, and edge servers and gateways can make faster, better-informed judgments without having to phone home for every inquiry.
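As a minimal sketch of the two paths, assuming a hypothetical person-detection task (the function names, the placeholder model and the 10-millisecond figure are illustrative, not details from the article):

import time

def local_inference(frame):
    # Stand-in for an on-device model call (a real system might use a runtime
    # such as TensorFlow Lite or ONNX Runtime; that choice is an assumption here).
    return "person_detected"

def cloud_inference(frame, one_way_latency_s=0.010):
    # Simulate the cloud path: roughly 10 ms of network latency in each
    # direction before the device can act on the answer.
    time.sleep(one_way_latency_s * 2)
    return "person_detected"

frame = b"\x00" * 1024  # placeholder for raw sensor data

start = time.perf_counter()
local_inference(frame)
print(f"edge path:  {(time.perf_counter() - start) * 1_000_000:.0f} microseconds")

start = time.perf_counter()
cloud_inference(frame)
print(f"cloud path: {(time.perf_counter() - start) * 1_000:.1f} milliseconds")

The edge path finishes as soon as the model does; the cloud path cannot answer before the simulated round trip completes.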
Taking AI Everywhere 
AI at the edge makes particular sense when data is generated in the same physical location where the decision needs to be made.
In a security system, the same camera that captures a visitor’s image can use on-device deep learning techniques to validate his or her access or request additional credentials. The cameras can coordinate with local smart home or smart office controllers to share their experiences and register all access attempts. And the entire process happens at the edge, rather than at a far-off data center.
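A minimal sketch of how that on-device decision might look, assuming a hypothetical embedding model and similarity threshold (none of these names or values come from the article):

import time

ENROLLED = {"alice": [0.12, 0.87, 0.45]}  # toy on-device gallery of face embeddings
MATCH_THRESHOLD = 0.9                      # assumed similarity cutoff

def embed(frame):
    # Stand-in for the on-device deep learning model that turns a camera frame
    # into a face embedding; the real model is not specified in the article.
    return [0.11, 0.88, 0.44]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norms = sum(x * x for x in a) ** 0.5 * sum(y * y for y in b) ** 0.5
    return dot / norms

def handle_visitor(frame, controller_log):
    # Everything here runs on the camera; only the log entry is shared with
    # the local smart home or smart office controller.
    embedding = embed(frame)
    for name, reference in ENROLLED.items():
        if cosine_similarity(embedding, reference) >= MATCH_THRESHOLD:
            controller_log.append((time.time(), name, "granted"))
            return "unlock"
    controller_log.append((time.time(), "unknown", "challenge"))
    return "request_additional_credentials"  # e.g. a PIN or badge swipe

log = []  # register of all access attempts, kept at the edge
print(handle_visitor(b"raw-frame-bytes", log))

In practice the model would be a compact neural network tuned to the camera's hardware, but the decision flow stays the same.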
Striking A Cloud-Edge Balance 
AI will always have a home in the data center. But at the edge, it plays a more active role in reshaping the way we live and work, with in-the-moment insights and timely interventions. Edge AI can improve safety, flag a defect or boost productivity at the exact moment of need. By responding quickly to patterns and anomalies that might take humans much longer to notice, it can adjust our surroundings to head off stresses, breakdowns and interruptions. The cloud-edge difference is much like the difference between having an expert on call and having an expert on site.
In the end, AI is largely about deepening our relationship with the world around us. In its mutually reinforcing relationship with the cloud, AI at the edge will let us interact with our environment in a more natural, flexible and powerful way.




