In two years, 100% of enterprise PC purchases will be AI computers

Global revenue from AI semiconductors is expected to grow to $71 billion this year, an increase of 33% over 2023, according to the latest forecast from Gartner.

By the end of 2025, AI chip industry revenue is projected to top $91.5 billion, and that revenue will continue to see double-digit growth through at least 2028, according to the report released today.

By the end of 2026, 100% of enterprise PC purchases will be AI PCs: computers that include a neural processing unit (NPU) enabling on-device AI operations. Those PCs run longer, quieter, and cooler, and can run AI tasks continually in the background, creating new opportunities for leveraging AI in everyday activities, according to Gartner’s report. The firm predicts that AI PC shipments will reach 22% of total PC shipments in 2024.

This year, nearly half of all AI chip revenue is expected to come from sales of AI-enabled personal computers. By the end of this year, AI chip revenue from computer electronics is projected to total $33.4 billion, accounting for 47% of total AI semiconductor revenue, according to Gartner.
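The figures quoted above hang together; a quick sketch (using only the numbers reported in this article) shows the implied 2023 baseline, the PC share of revenue, and the 2024-to-2025 growth rate:

```python
# Sanity-check the Gartner figures quoted above (all values in billions of USD).
total_2024 = 71.0          # projected 2024 AI semiconductor revenue
growth_over_2023 = 0.33    # reported year-over-year increase for 2024
total_2025 = 91.5          # projected 2025 revenue
pc_revenue_2024 = 33.4     # 2024 AI chip revenue from computer electronics

implied_2023 = total_2024 / (1 + growth_over_2023)
pc_share = pc_revenue_2024 / total_2024
growth_2025 = total_2025 / total_2024 - 1

print(f"Implied 2023 revenue: ${implied_2023:.1f}B")  # ~$53.4B
print(f"PC share of 2024 revenue: {pc_share:.0%}")    # 47%, matching the report
print(f"2024 -> 2025 growth: {growth_2025:.0%}")      # ~29%, i.e. double-digit
```

The implied 2023 baseline is not stated in the report; it simply follows from dividing the 2024 figure by the reported growth rate.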

“Today, generative AI (genAI) is fueling demand for high-performance AI chips in data centers. In 2024, the value of AI accelerators used in servers, which offload data processing from microprocessors, will total $21 billion, and increase to $33 billion by 2028,” said Alan Priestley, a vice president analyst at Gartner.

This year, AI chip revenue is also expected to reach $7.1 billion from automotive electronics and $1.8 billion from consumer electronics.

Sixty-six percent of enterprises worldwide said they would be investing in genAI over the next 18 months, according to IDC research. Among organizations indicating they will increase IT spending for genAI in 2024, infrastructure will account for 46% of the total spend.

The problem: a key piece of hardware needed to build out that AI infrastructure is in short supply. While GPUs are in high demand to run the largest large language models (LLMs) behind genAI, the market also needs high-performance memory chips for AI applications. Supplies of both are tight, for now.

GPUs used for training and inference on LLMs consume vast numbers of processor cycles and can be costly to run. Smaller, more industry- or business-focused models can often deliver better results tailored to business needs, and they can run on common x86 processors with NPUs.

“While much of the focus is on the use of high-performance GPUs for new AI workloads, the major hyperscalers (AWS, Google, Meta and Microsoft) are all investing in developing their own chips optimized for AI,” Priestley said.

While chip development is expensive, using custom-designed chips can improve operational efficiencies, reduce the costs of delivering AI-based services to users, and lower costs for users to access new AI-based applications, according to Priestley.

“As the market shifts from development to deployment we expect to see this trend continue,” Priestley said.

Last month, Intel CEO Pat Gelsinger said he sees the company’s future embedded in an AI-everywhere concept, with NPUs bolstering its new family of Intel Core Ultra processors. The chipmaker expects to ship 40 million AI PC processors in 2024 and 100 million in 2025.

Partly driving the uptick in AI on edge devices is the shortening average lifespan of mobile phones, as both consumers and enterprises replace their devices sooner.

“This change allows device spending to achieve $688 billion during 2024, up from 2023 spending lows of $664 billion, which will represent a 3.6% growth rate,” the report stated. “The integration of genAI capabilities in premium and basic phones sustains, more than drives, this change.”