NEO Semiconductor 3D X-DRAM passes proof-of-concept validation for high-density AI memory

Updated · Tom's Hardware · Apr 24
  • The POC chips, produced at Taiwan's NIAR-TSRI with NYCU, achieved under 10 ns latency and over 1-second retention at 85°C, with endurance exceeding 10¹⁴ cycles.
  • A strategic investment led by Acer founder Stan Shih was announced alongside the milestone, highlighting industry interest in scalable, energy-efficient AI memory using mature 3D NAND processes.
  • This development addresses AI memory bottlenecks as conventional DRAM scaling nears physical limits, offering a potential cost-effective alternative to HBM and underscoring the importance of industry–academia collaboration in memory innovation.

Breakthrough in 3D X-DRAM: Sub-10ns Latency and 15x Retention Validated for Next-Gen AI Computing

Overview

In April 2026, NEO Semiconductor validated its 3D X-DRAM technology in proof-of-concept silicon, demonstrating sub-10 ns latency, 15 times longer data retention, and endurance exceeding 10¹⁴ cycles. The milestone was reached in collaboration with leading academic and research institutes and was announced alongside a strategic investment led by Acer founder Stan Shih. The vertically stacked architecture, combined with advanced cell designs and manufacturing techniques, delivers high performance at significantly lower production cost by leveraging existing 3D NAND infrastructure. By targeting the memory bottlenecks that constrain AI systems, the technology promises faster training and inference; sampling is planned for 2027 and volume production for 2028, though manufacturing challenges remain.

...