Moving AI Processing to the Edge Will Shake Up the Semiconductor Industry

Revenue from the sale of Artificial Intelligence (AI) chipsets for edge inference and edge training will grow at 65 percent and 137 percent, respectively, between 2018 and 2023, creating massive new potential revenue streams for chip vendors, according to ABI Research.

In 2018, shipment revenues from edge AI processing were $1.3 billion; by 2023, this figure will grow to $23 billion. That is a massive increase, but one that doesn’t necessarily favor current market leaders Intel and NVIDIA. There will be intense competition to capture this revenue between established players and several prominent startups.


“Companies are looking to the edge because it allows them to perform AI inference without transferring their data. The act of transferring data is inherently costly and in business-critical use cases where latency and accuracy are key, and constant connectivity is lacking, applications can’t be fulfilled,” said Jack Vernon, Industry Analyst at ABI Research. “Locating AI inference processing at the edge also means that companies don’t have to share private or sensitive data with cloud providers, something that is problematic in the healthcare and consumer sectors.”

What is clear from ABI Research’s latest Artificial Intelligence and Machine Learning Market Data is that edge AI is going to have a significant impact on the semiconductor industry. The biggest winners from the growth in edge AI are going to be those vendors that either own or are currently building intellectual properties for AI-related Application-Specific Integrated Circuits (ASICs).

Traditional processing architectures based on the scalar approach to processing, like CPUs, are set to lose out to tensor-based processing architectures in fulfilling the demand for edge AI processing, as the latter are far more efficient and scalable at performing Deep Learning (DL) tasks. By 2023, ASICs will overtake even GPUs as the architecture supporting AI inference at the edge, both in terms of annual shipments and revenues.

ASICs are already used by smartphone manufacturers like Apple and Huawei for image recognition processing in their devices. Other ASICs, such as those produced by Intel’s Movidius division, are used widely for image recognition inferencing. Unmanned Aerial Vehicle (UAV) vendor DJI uses Movidius chips to help support flight and the tracking of objects and people.

Security camera vendor Hikvision is also using Movidius’s AI chips in its security cameras to support facial recognition and tracking. ASICs are also being adopted by companies developing autonomous driving systems, industrial automation, and robotics.

In terms of market competition, on the AI inferencing side, Intel will be competing with several prominent AI start-ups such as Cambricon Technology, Horizon Robotics, Hailo Technologies, and Habana Labs for dominance of this segment. NVIDIA with its GPU-based AGX platform has also been gaining momentum in industrial automation and robotics.

While FPGA leader Xilinx can also expect an uptick in revenues on the back of companies using FPGAs to perform inference at the edge, Intel, itself an FPGA vendor, is also pushing its Movidius and Mobileye chipsets. For AI training, NVIDIA will hold on to its position as the market leader. “Cloud vendors are deploying GPUs for AI training in the cloud due to their high performance,” Vernon said. “However, NVIDIA will see its market share chipped away by ASIC vendors focused on AI training, like Graphcore, which are building high-performance and use-case-specific chipsets.”

(For more information visit https://www.abiresearch.com).

Airrion Andrews