Generative AI 2024 – Impact on Processors, Memory, Advanced Packaging and Substrates

This market research report was originally published on the Yole Group’s website. It is reprinted here with the permission of the Yole Group.

  • Deep dive into the supply chain, its bottlenecks, and its resilience. Which companies are benefiting from generative AI, and which are missing out?

  • Datacenter GPU and AI ASIC revenue could reach $156B by 2025 and $233B by 2029.

In 2023, datacenter processor shipments for AI acceleration experienced strong growth, a trend expected to continue through 2024 and 2025. Both flagship GPUs and AI ASICs are expected to see strong growth, and the associated scenarios are discussed in the report. Combined datacenter GPU and AI ASIC revenue is expected to grow from $50 billion in 2023 to more than $200 billion in 2029.

Moreover, the entire supply chain is expected to feel the effects of this expanding market, from wafer and memory production to substrates and 2.5D/3D packaging. Notably, the HBM market for AI accelerators is projected to grow by a factor of 8 between 2023 and 2029, while IC substrate revenues are anticipated to increase by a factor of 10 over the same period.
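
As a rough sanity check, the compound annual growth rates implied by these figures can be worked out directly. The short Python sketch below does so; it uses only the endpoints and multipliers quoted above, and the resulting percentages are approximations for illustration.

    # Implied CAGR from the figures quoted above (approximate, for illustration only).
    def cagr(start, end, years):
        """Compound annual growth rate between two values over `years` years."""
        return (end / start) ** (1 / years) - 1

    # Combined datacenter GPU + AI ASIC revenue: ~$50B (2023) -> >$200B (2029)
    print(f"GPU + AI ASIC revenue: ~{cagr(50, 200, 6):.0%}/year")   # ~26%

    # HBM for AI accelerators: ~8x growth between 2023 and 2029
    print(f"HBM for AI accelerators: ~{cagr(1, 8, 6):.0%}/year")    # ~41%

    # IC substrate revenue: ~10x growth over the same period
    print(f"IC substrates: ~{cagr(1, 10, 6):.0%}/year")             # ~47%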

Nvidia, SK hynix, and TSMC were leading in 2023 – but what about the coming years?

Nvidia is the frontrunner in the generative AI market with its flagship GPUs, witnessing substantial growth in its datacenter business line, while AMD’s MI300 is gaining momentum. Hyperscalers such as Google, Amazon, and the Chinese BATX companies are developing AI ASICs as custom chips for internal use and cloud services, aiming to reduce reliance on datacenter GPUs from fabless companies, lower costs, and use processors tailored to their needs. These AI ASICs represent the primary competition for Nvidia. Intel’s Gaudi and several startups with diverse approaches are also entering the market.

The foundry market is dominated by TSMC, with Samsung, Intel Foundry Services, and SMIC striving to capture a share of this vast opportunity. These foundries cater to various clients and offer different technologies to meet the growing demand for AI accelerators.

Samsung, SK hynix, and Micron are expanding their wafer capacity for HBM production to seize the opportunities in the AI market. SK hynix is currently at the forefront of the HBM market, but competition with Samsung is heating up.

In the realm of advanced packaging, Intel, Samsung, and TSMC are prominent leaders, providing distinctive 2.5D and 3D technologies for high-performance applications. These companies are driving innovation in the high-end packaging market. Despite a challenging year for IC substrate makers, the AI hype is expected to positively impact the industry in the long term, driven by recent investments, capacity expansions, and glass core substrate developments.

Technology innovation at every level of the supply chain to support the growing need for computing.

The launch of ChatGPT in November 2022 sparked significant interest in AI accelerators, which are specialized chips designed for highly parallelizable calculations. As AI models require extensive vector and matrix calculations, GPUs and AI ASICs have become increasingly important. AI accelerators have since diverged into two categories: those specialized for training and those for inference.
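
To make “highly parallelizable vector and matrix calculations” concrete, the minimal NumPy sketch below shows the kind of batched matrix multiplication that dominates these workloads; the dimensions are arbitrary placeholders, not figures from the report.

    # Illustrative only: the core operation AI accelerators are built to speed up
    # is large matrix multiplication, shown here on a CPU with NumPy.
    import numpy as np

    batch, seq_len, d_model = 8, 1024, 4096  # arbitrary example dimensions
    activations = np.random.rand(batch, seq_len, d_model).astype(np.float32)
    weights = np.random.rand(d_model, d_model).astype(np.float32)

    # One dense layer's worth of work: (batch, seq, d_model) @ (d_model, d_model).
    outputs = activations @ weights

    # Every output element is an independent dot product, which is why this work
    # maps so well onto the thousands of parallel ALUs in a GPU or AI ASIC.
    flops = 2 * batch * seq_len * d_model * d_model
    print(f"output shape: {outputs.shape}, ~{flops / 1e9:.0f} GFLOPs for this layer")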

AI models in data centers are becoming more complex, with growing parameter counts and training datasets, driving the evolution of chip architectures. Training chips require higher computing power, memory capacity, and bandwidth, while inference chips prioritize high throughput, I/O and HBM bandwidth, and sufficient memory.
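
One way to see why memory capacity matters so much is to estimate how much memory is needed just to hold a model’s weights at different numeric precisions. The sketch below does this; the parameter counts are arbitrary illustrative sizes, not figures from the report.

    # Rough weight-memory estimate: parameter count x bytes per value.
    # Parameter counts are arbitrary illustrative sizes, not report data.
    BYTES_PER_VALUE = {"FP32": 4, "FP16/BF16": 2, "INT8": 1}

    for params_billion in (7, 70, 175):
        for precision, num_bytes in BYTES_PER_VALUE.items():
            weight_gb = params_billion * num_bytes  # billions of params x bytes/value = GB
            print(f"{params_billion}B parameters @ {precision}: ~{weight_gb} GB of weights")

Even before activations and key-value caches are counted, the weights of a large model can exceed the memory of a single accelerator, which is one reason capacity and bandwidth figure so prominently in inference chip design.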

Memory technology such as HBM is essential for rapid data transfer in AI accelerators. HBM3 was introduced in 2022, with HBM3E and HBM4 expected to follow in 2024 and 2026, respectively.

On the substrate side, AI accelerator trends point toward larger and more diverse form factors with minimal layer counts, enabling the use of chiplets for custom AI accelerators. Glass core substrates are expected to become the preferred choice thanks to their flexibility, cost-effectiveness, and mechanical stability.

Advanced packaging technologies such as 2.5D and 3D platforms are crucial for meeting the performance and efficiency requirements of AI accelerators in datacenter applications. These platforms deliver low latency, high speed, and low power consumption, and will evolve toward denser integration in future generations.
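
Among the technologies above, HBM’s contribution is the easiest to quantify: per-stack bandwidth can be approximated from the interface width and the per-pin data rate. The sketch below uses the 1024-bit interface common to recent HBM generations and commonly quoted peak per-pin rates; these numbers are approximations, not figures from the report.

    # Approximate per-stack HBM bandwidth: interface width x per-pin data rate.
    # Per-pin rates are commonly quoted peak values and are approximate.
    PER_PIN_RATE_GBPS = {"HBM2E": 3.6, "HBM3": 6.4, "HBM3E": 9.6}
    INTERFACE_WIDTH_BITS = 1024  # bits per stack, common to these generations

    for generation, rate in PER_PIN_RATE_GBPS.items():
        bandwidth_gbs = INTERFACE_WIDTH_BITS * rate / 8  # GB/s per stack
        print(f"{generation}: ~{bandwidth_gbs:.0f} GB/s per stack")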

