SEOUL: Samsung Electronics said it is on track to begin delivering its next-generation HBM4 high-bandwidth memory products in the first quarter of 2026. The company said the HBM4 lineup will include products running at 11.7 gigabits per second per pin, as it expands sales of memory used in artificial intelligence servers and accelerators. Samsung did not name customers for the initial HBM4 deliveries or disclose shipment volumes in its earnings materials.

In its fourth-quarter and full-year 2025 results, Samsung said its memory business posted record quarterly revenue and operating profit, supported by higher sales of high-value products including HBM, server DDR5 and enterprise solid-state drives. The company said tight supply remained a constraint even as demand for AI computing continued to lift consumption of advanced memory and storage in data centers.
High-bandwidth memory is vertically stacked DRAM designed to deliver higher data throughput than conventional DRAM, and HBM4 is the newest generation, following HBM3E. In an investor presentation accompanying the earnings release, Samsung said it plans to start delivering HBM4 “mass products,” including an 11.7 Gbps version, and cited “timely shipment” of HBM4 as part of its near-term outlook for AI-related products.
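To put the 11.7 Gbps figure in context, a per-pin data rate translates into per-stack bandwidth through the width of the memory interface. The sketch below assumes the 2,048-bit per-stack interface defined for HBM4 in the JEDEC standard; Samsung's materials quote only the per-pin speed, so the interface width here is an assumption, not a figure from the company.

```python
# Back-of-the-envelope HBM4 per-stack bandwidth from the quoted per-pin speed.
# The 2,048-bit I/O width is assumed from the JEDEC HBM4 specification;
# Samsung's earnings materials state only the 11.7 Gbps figure.

PIN_SPEED_GBPS = 11.7   # per-pin data rate, gigabits per second (Samsung figure)
INTERFACE_BITS = 2048   # I/O width per HBM4 stack (assumed, per JEDEC spec)

stack_bandwidth_gbs = PIN_SPEED_GBPS * INTERFACE_BITS / 8  # bits -> bytes
print(f"Per-stack bandwidth: {stack_bandwidth_gbs:,.1f} GB/s "
      f"(~{stack_bandwidth_gbs / 1000:.2f} TB/s)")
# Per-stack bandwidth: 2,995.2 GB/s (~3.00 TB/s)
```

On those assumptions, a single 11.7 Gbps HBM4 stack would deliver roughly 3 terabytes per second, up from the mid-1 TB/s range typical of HBM3E stacks.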
Nvidia, the largest supplier of AI data center accelerators, has introduced its Rubin platform, which it says uses HBM4 across multiple system configurations. On Nvidia’s product specifications page for the Vera Rubin NVL72 rack-scale system, the company lists 20.7 terabytes of HBM4 with 1,580 terabytes per second of bandwidth for the full rack, and 288 gigabytes of HBM4 with 22 terabytes per second of bandwidth for a single Rubin GPU, noting the figures are preliminary and subject to change.
Rubin platform memory requirements
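The rack-level and GPU-level figures are consistent with each other if the NVL72 rack carries 72 Rubin GPUs, a count inferred here from the product name rather than stated in the article. A quick arithmetic check, with Nvidia's caveat that the numbers are preliminary:

```python
# Sanity check: do Nvidia's per-rack HBM4 figures match 72x the per-GPU
# figures quoted above? The 72-GPU count is inferred from the "NVL72"
# product name, not stated in the article.

GPUS_PER_RACK = 72        # inferred from the NVL72 name (assumption)
HBM4_PER_GPU_GB = 288     # gigabytes per Rubin GPU (Nvidia's listed figure)
BW_PER_GPU_TBS = 22       # terabytes per second per GPU (Nvidia's listed figure)

rack_capacity_tb = GPUS_PER_RACK * HBM4_PER_GPU_GB / 1000
rack_bandwidth_tbs = GPUS_PER_RACK * BW_PER_GPU_TBS

print(f"Rack HBM4 capacity:  {rack_capacity_tb:.1f} TB (Nvidia lists 20.7 TB)")
print(f"Rack HBM4 bandwidth: {rack_bandwidth_tbs:,} TB/s (Nvidia lists 1,580 TB/s)")
# Rack HBM4 capacity:  20.7 TB (Nvidia lists 20.7 TB)
# Rack HBM4 bandwidth: 1,584 TB/s (Nvidia lists 1,580 TB/s)
```

The capacity figure matches exactly, and the bandwidth figure lands within rounding of Nvidia's published 1,580 TB/s.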
Samsung’s results statement also pointed to broader work across advanced semiconductor manufacturing and packaging linked to AI computing. It said its foundry business began mass production of first-generation 2-nanometer products and started shipments of 4-nanometer HBM base-die products, the logic die that sits at the bottom of a high-bandwidth memory stack. Samsung said it plans to provide optimized solutions by integrating logic, memory and advanced packaging technologies.
Other major memory makers have also published timelines for their HBM4 readiness. SK hynix said in September 2025 that it had completed HBM4 development and readied a mass-production system. Micron said in a December 2025 investor presentation that its HBM4, with per-pin speeds above 11 Gbps, is on track to ramp with high yields in the second calendar quarter of 2026, consistent with customers’ platform ramp plans.
Competing HBM4 road maps
In describing its own HBM4 program, Micron said its HBM4 uses advanced CMOS and metallization technologies on the base logic die and DRAM dies, designed and manufactured in-house, and pointed to packaging and test capability as critical to meeting performance and power targets. SK hynix has described HBM4 as part of a generational progression in stacked memory built for ultra-high-performance AI, where bandwidth and power efficiency are central requirements for data center operation.
Samsung’s earnings materials did not link its HBM4 delivery schedule to any specific AI processor program or customer deployment. Nvidia’s Rubin announcements and published specifications do not identify HBM4 suppliers, and Nvidia has not disclosed vendor allocations for the HBM4 used in Rubin systems. Samsung’s confirmed timeline, as stated in its results release, is that HBM4 deliveries are expected to begin within the first quarter of 2026. – By Content Syndication Services.
