Samsung Details HBM4 and AI Hardware at NVIDIA GTC 2026 | The Pickool

Source: Samsung Electronics


Samsung Electronics showcases mass-produced HBM4 memory, PCIe 6.0 SSDs, and digital twin AI factory integration at NVIDIA GTC 2026.

by Philip Lee

SAN JOSE, CA — Samsung Electronics unveiled its latest High Bandwidth Memory (HBM) products and artificial intelligence infrastructure components at the NVIDIA GTC 2026 conference, providing specifications for data-center, manufacturing, and on-device applications.

The company announced that its sixth-generation HBM4 memory is in mass production for NVIDIA’s Vera Rubin platform.

HBM4, built on Samsung’s 10-nanometer-class (1c) DRAM process, operates at 11.7 gigabits per second (Gbps), exceeding the industry standard of 8 Gbps, and is designed to scale up to 13 Gbps.

Samsung also disclosed specifications for its HBM4E technology, which is designed to deliver 16 Gbps per pin and a total bandwidth of 4.0 terabytes per second (TB/s).
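The per-stack bandwidth figures above can be sanity-checked with simple arithmetic, assuming the 2,048-bit interface width defined for HBM4 by JEDEC (the interface width is not stated in the article):

```python
# Back-of-the-envelope check of the HBM4/HBM4E bandwidth figures cited above.
# Assumption: a 2,048-bit (2,048 I/O) stack interface, per the JEDEC HBM4 standard.
IO_WIDTH_BITS = 2048

def stack_bandwidth_tbps(pin_speed_gbps: float, io_width: int = IO_WIDTH_BITS) -> float:
    """Aggregate stack bandwidth in terabytes per second (decimal units)."""
    # Gbit/s per pin * pins -> Gbit/s total; /8 -> GB/s; /1000 -> TB/s
    return pin_speed_gbps * io_width / 8 / 1000

print(f"HBM4  @ 11.7 Gbps/pin: {stack_bandwidth_tbps(11.7):.2f} TB/s")
print(f"HBM4E @ 16.0 Gbps/pin: {stack_bandwidth_tbps(16.0):.2f} TB/s")
```

Under that assumption, HBM4 at 11.7 Gbps per pin works out to roughly 3.0 TB/s per stack, and HBM4E at 16 Gbps per pin to about 4.1 TB/s, consistent with the stated 4.0 TB/s figure once rounding is accounted for.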

To address thermal limitations in high-density memory stacking, the company highlighted its hybrid copper bonding (HCB) technology.

According to Samsung, HCB enables the stacking of 16 or more memory layers and reduces thermal resistance by more than 20 percent compared to thermal compression bonding (TCB).

For server and storage applications, Samsung introduced its SOCAMM2 low-power DRAM memory module, which is currently in mass production.

The company also showcased its PCIe 6.0-based PM1753 solid-state drive (SSD), demonstrating it on servers running NVIDIA’s SCADA software.

The PM1753 SSD was presented as part of NVIDIA’s BlueField-4 STX reference architecture, with a focus on energy efficiency for inference workloads.

In manufacturing, Samsung stated that it plans to adopt NVIDIA Omniverse libraries and accelerated computing to advance its “AI Factory” digital twin model.

Yong Ho Song, executive vice president and head of the AI Center at Samsung Electronics, discussed the integration, including the use of agentic AI in electronic design automation (EDA) and computational lithography.

For on-device and local AI processing, the company presented its PM9E3 and PM9E1 NAND solutions developed for NVIDIA’s DGX Spark.

Samsung also detailed its mobile DRAM roadmap, including LPDDR5X, which provides up to 25 Gbps per pin and reduces power consumption by up to 15 percent.

Its successor, LPDDR6, is designed to scale bandwidth to 30–35 Gbps per pin and introduces adaptive voltage scaling and dynamic refresh control to manage power in edge-AI workloads for mobile and wearable devices.

