Intel and SAIMEMORY reveal ZAM memory with double HBM4 bandwidth
10 articles · Updated · Wccftech · May 3
  • The design, previewed ahead of VLSI Symposium 2026, targets 10 GB per stack, 30 GB per package, and 5.3 TB/s of bandwidth from a nine-layer stacked architecture.
  • Intel says ZAM is being developed with SoftBank-backed SAIMEMORY as a lower-power, higher-density alternative to HBM for AI accelerators, with improved thermal performance through its vertical structure.
  • The technology is aimed at a 2028-2030 production window and seeks to challenge HBM as demand for high-bandwidth memory rises with generative AI workloads.
With ZAM promising twice HBM4's bandwidth at lower power, will hyperscalers and DRAM giants actually adopt it in future data centers?
Is advanced memory packaging like ZAM the true key to AI acceleration, or will emerging memory-cell technologies soon make it obsolete?