The void left by 3D XPoint technology (later branded as Optane by Intel) is a literal one. But is the gap it leaves in the memory/storage hierarchy one that needs to be filled?
Micron Technology, which co-developed the phase-change memory (PCM) technology with Intel Corp., never ramped up 3D XPoint production, opting to exit the business to focus on opportunities around the Compute Express Link (CXL) protocol. Intel, however, remained bullish for a time with its Optane offerings—both solid-state drives (SSDs) and dual in-line memory modules (DIMMs)—but ultimately closed the book on the technology in the summer of 2022.
As noted by Jim Handy, principal analyst with Objective Analysis, the lifespan of 3D XPoint, from its launch to Intel’s Q2 2022 earnings call at the end of July last year, was exactly seven years to the day.
Intel positioned Optane as a persistent storage-class memory filling the gap between NAND flash storage and volatile dynamic random-access memory (DRAM), which remains the fastest conventional memory option. In an exclusive interview with EE Times, Handy acknowledged that the gap between NAND flash and DRAM does exist, explaining that “it’s just a question of whether or not there’s something appropriate to fill it.” He does not believe the intent of 3D XPoint was ever to replace DRAM, or that the gap necessarily needed filling.
Before the NAND–DRAM gap, there was an even larger divergence between hard-disk drives and DRAM, which is where NAND flash–based SSDs now reside. “DRAM was speeding up more than hard drives were,” Handy said. “When NAND flash pricing fell below DRAM pricing in 2004, it became natural to try to stuff NAND flash into that gap.”
The other gap that has been growing is between DRAM and processor bandwidth needs, which has been partly addressed by using static random-access memory (SRAM)-based caches on the processor chip.
Smarter SSDs, CXL could narrow the gap
Memory-semantic SSDs, including those offered by Samsung Electronics and Kioxia, aim to fill the gap between NAND flash storage and DRAM by augmenting the SSD with an onboard DRAM cache; by also adding data-processing capabilities to the drive, they help bridge the SSD and DRAM layers.
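To make the idea concrete, here is a minimal, hypothetical sketch of a read-through DRAM cache sitting in front of a slower flash-backed store. It is not modeled on Samsung’s or Kioxia’s designs; the block size, cache geometry and the simulated flash_read() backing store are all assumptions for illustration.

```c
/* Hypothetical sketch: a direct-mapped DRAM read cache in front of a
 * slower flash-backed store. Illustrative only; not any vendor's design. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define BLOCK_SIZE   4096        /* bytes per cached block (assumed)   */
#define CACHE_BLOCKS 1024        /* 4 MB of "DRAM" cache (assumed)     */

static uint8_t cache_data[CACHE_BLOCKS][BLOCK_SIZE];
static int64_t cache_tag[CACHE_BLOCKS];   /* which flash block each slot holds; -1 = empty */

/* Stand-in for a slow flash read; a real drive would issue a NAND access here. */
static void flash_read(int64_t block, uint8_t *dst)
{
    memset(dst, (int)(block & 0xff), BLOCK_SIZE);
}

/* Read one block, serving it from the DRAM cache when possible. */
static void cached_read(int64_t block, uint8_t *dst)
{
    size_t slot = (size_t)(block % CACHE_BLOCKS);   /* direct-mapped placement */
    if (cache_tag[slot] != block) {                 /* miss: fetch from flash  */
        flash_read(block, cache_data[slot]);
        cache_tag[slot] = block;
    }
    memcpy(dst, cache_data[slot], BLOCK_SIZE);      /* hit path: DRAM-speed copy */
}

int main(void)
{
    for (size_t i = 0; i < CACHE_BLOCKS; i++)
        cache_tag[i] = -1;

    uint8_t buf[BLOCK_SIZE];
    cached_read(42, buf);   /* miss: goes to the simulated flash */
    cached_read(42, buf);   /* hit: served from DRAM */
    printf("first byte of block 42: %u\n", (unsigned)buf[0]);
    return 0;
}
```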

Had Optane continued, Handy expects that in the longer term it would have been used to reduce the amount of DRAM needed in a system—similar to what SSDs did as flash prices came down. But that scenario assumed the economics of Optane made sense, and Intel could not make the numbers work.
As Handy noted, Intel was in a position to sell Optane alongside its processors. In the meantime, other companies were looking to help build the ecosystem necessary for Optane to succeed. For a while, it looked as though the important hurdles had been cleared, much as 3D NAND had a shakeout period to prove it could be as cost-effective as its planar predecessor; that took about three years, according to Handy.
In the summer of 2020, Intermolecular Inc., a wholly owned subsidiary of Darmstadt, Germany–based Merck KGaA, announced that it had developed what it believed was the first quaternary atomic layer deposition (ALD) GeAsSeTe ovonic threshold switch (OTS) device for 3D vertical memory arrays. The new combination of materials offered the potential to enable 3D vertical non-volatile memory architectures, and it could even have shaped a second iteration of 3D XPoint technology.

MemVerge, a provider of memory-converged infrastructure solutions, is another company that hitched itself to the Optane wagon. Back in the summer of 2021, the company anticipated a tsunami of opportunities to enhance a wide range of persistent memories, such as Optane, as more and more applications—especially data-centric applications—came to be run from memory.
With the shuttering of Intel’s Optane business, MemVerge was able to pivot its focus toward CXL opportunities, MemVerge CEO Charles Fan told EE Times in an exclusive interview. This change in focus has contributed to the emergence of a new memory performance pyramid, driven by applications demanding that larger amounts of data be processed faster. Fan explained that it does not matter whether the innovation is Optane or CXL; a more scalable, feature-rich memory system is required to improve the performance of these applications.
Filling the gap between NAND flash and DRAM is one way to improve performance, and CXL is another because it is additive on the bandwidth side, Fan said. It aggregates memory and allows the entire infrastructure to be composable, because memory is no longer a resource tightly coupled to the CPU. “A whole server can be disaggregated and composable,” he said.
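One concrete way this shows up today: on Linux, CXL-attached memory expanders are typically exposed as CPU-less NUMA nodes, so software can place data on them with standard NUMA APIs. The sketch below, which assumes the expander appears as node 1, allocates a buffer on that far node with libnuma; a real deployment would discover the node from the system topology rather than hard-code it.

```c
/* Sketch: placing a buffer on a CXL memory expander that Linux exposes as a
 * CPU-less NUMA node. Node 1 is an assumption; query the topology in practice.
 * Build with: gcc cxl_alloc.c -lnuma */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA support not available\n");
        return 1;
    }

    const int cxl_node = 1;                    /* assumed CXL-backed node */
    const size_t len = 64UL << 20;             /* 64 MB demo buffer */

    /* Allocate the buffer on the far (CXL) node instead of local DRAM. */
    char *buf = numa_alloc_onnode(len, cxl_node);
    if (!buf) {
        perror("numa_alloc_onnode");
        return 1;
    }

    memset(buf, 0xA5, len);                    /* touch pages so they are faulted in */
    printf("placed %zu bytes on node %d\n", len, cxl_node);

    numa_free(buf, len);
    return 0;
}
```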

The CXL ecosystem is much broader and more vibrant than Optane’s, Fan added, in large part because it is not a single-vendor technology and because it enables memory expansion, pooling and sharing through an open standard. “The transition the industry is going through is from Optane to CXL.”
CXL does not fill the gap in the same way Optane did, but it does help optimize data movement and placement in a tier that is slower than DRAM yet faster than NAND flash SSDs.
“CXL introduces architectural change that covers this gap in other ways,” Fan said, and the transition for MemVerge from Optane to CXL was natural and smooth. The company’s main product, Memory Machine, enables tiering for memory expansion, which aligns well with the concept of a memory/storage hierarchy that had included Optane.
“Tiering actually lends itself really well [to] CXL, and the performance of CXL is similar to Optane in terms of latency,” Fan said. “It’s a pretty natural transition for us.”
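For a rough sense of what tiering software does under the hood (this is not MemVerge’s Memory Machine API, only an illustrative sketch), the example below promotes a page that has become hot from an assumed far CXL node (node 1) back to local DRAM (node 0) using the Linux move_pages() system call.

```c
/* Sketch: promoting a hot page from a far (CXL) NUMA node to local DRAM.
 * Node numbers are assumptions; real tiering software tracks access frequency
 * to decide which pages to move. Build with: gcc tier_promote.c -lnuma */
#define _GNU_SOURCE
#include <numa.h>
#include <numaif.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    if (numa_available() < 0)
        return 1;

    const int far_node  = 1;                  /* assumed CXL-backed node */
    const int dram_node = 0;                  /* assumed local DRAM node */
    const long page_sz  = sysconf(_SC_PAGESIZE);

    /* Start the page on the far node, as a tiering layer might for cold data. */
    char *page = numa_alloc_onnode((size_t)page_sz, far_node);
    if (!page)
        return 1;
    memset(page, 1, (size_t)page_sz);         /* fault it in on the far node */

    /* The page is now "hot": ask the kernel to migrate it to local DRAM. */
    void *pages[1] = { page };
    int nodes[1]   = { dram_node };
    int status[1]  = { -1 };
    if (move_pages(0 /* this process */, 1, pages, nodes, status, MPOL_MF_MOVE) != 0)
        perror("move_pages");
    else
        printf("page now resides on node %d\n", status[0]);

    numa_free(page, (size_t)page_sz);
    return 0;
}
```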
In the meantime, Fan noted that he is aware of several companies working on persistent memory technologies that are complementary to the CXL interconnect, which raises the question: Do persistent memories other than PCM have a shot at filling the gap?
Smaller transistors might enable emerging memory candidates
Magnetoresistive random-access memory (MRAM) could be a candidate to fill the NAND–DRAM gap, and Atomera’s Mears Silicon Technology (MST) could help it get there. Atomera was founded in 2001 by Robert Mears, who also serves as CTO, with a vision to develop a platform of materials technologies for use across multiple industries. MST was spurred by the slowdown in advancement of Moore’s Law and uses atomic-level materials science to deliver multiple power, performance, area and cost (PPAC) benefits.
MST’s ability to improve PPAC is showing promise in advancing MRAM to the point at which it could finally transition from a niche memory to the mainstream and potentially fill the gap between NAND and DRAM left by 3D XPoint, Jeff Lewis, Atomera’s SVP for business development and marketing, told EE Times in a briefing. The company enhances transistors, getting more capability from them by applying a quantum-engineered film. “It applies to all nodes, so it’s not just memory,” Lewis said, adding that Atomera licenses the technology to device makers to incorporate into their products.

MST does not apply only to memory; the underlying technology could also be applied to analog power switches, for example. In general, Lewis said, MST increases carrier mobility and drive current, which is applicable to a variety of integrated-circuit types. Improved mobility at both high and low fields has already been demonstrated in third-party evaluations, he said, along with drive and effective-current increases of 10% to 20%.

MST also enhances the reliability of the device, Lewis added. “You can actually overdrive it without wearing it out.” The many benefits add up to the point where it is possible to increase the current drive of the same-sized transistor by as much as 52%. In addition, an MRAM array could be significantly shrunk if the transistor is smaller, and shrinking is a well-understood value proposition because it reduces cost, according to Lewis.
The benefits of MST all play into addressing the key challenge for any emerging memory that hopes to displace an incumbent technology, such as DRAM—cost per bit. “People have talked about MRAM as being a potential replacement, but from a cost-per-bit perspective, it still hasn’t approached near-DRAM levels,” Lewis said, adding that he thinks the non-volatility and low-power profile of MRAM are undervalued. “We are focused on MRAM because that seems to be taking over as the de facto standard, certainly for embedded memories.”
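A simplified way to see why cell size dominates cost per bit: with processed-wafer cost and usable array area held constant, cost per bit scales with cell area, so a stronger access transistor that permits a smaller cell translates directly into cheaper bits. The relation and numbers below are purely hypothetical, not Atomera’s or any vendor’s data.

```latex
% C_wafer = processed-wafer cost, A_wafer = usable array area per wafer,
% A_cell = memory-cell area, eta = array efficiency. All values hypothetical.
\[
  \text{cost/bit} \;\approx\; \frac{C_{\text{wafer}}}{N_{\text{bits}}}
  \;=\; \frac{C_{\text{wafer}}\, A_{\text{cell}}}{\eta\, A_{\text{wafer}}}
\]
% Example with made-up numbers: if a stronger access transistor lets the cell
% area shrink by 30%, with wafer cost and usable area unchanged, cost per bit
% falls by the same 30%:
\[
  \frac{\text{cost/bit}_{\text{new}}}{\text{cost/bit}_{\text{old}}}
  \;=\; \frac{A_{\text{cell,new}}}{A_{\text{cell,old}}} \;=\; 0.7
\]
```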
Handy, meanwhile, does not see either resistive random-access memory (ReRAM) or MRAM as being potential candidates for filling the gap left by Optane. The best that could happen is if the wafer volume for one of those technologies gets high enough when designers are making them for embedded memories—that would potentially make it price-competitive with DRAM, he said. “But that’s a big, big if.”

The work done by Intel and Micron on 3D XPoint is still out there, but for now, their abandonment of the technology means there is no commercial PCM technology on the market. Handy said that is a shame, because Intel produced more PCM wafers than everybody else combined and therefore holds a great deal of knowledge about not only how to make 3D XPoint work but also how to get it to yield.
Intel also has a lot of proprietary knowledge about how to make an XPoint array using PCM. “I have not seen anything that indicates that anybody is going to take that knowledge and use it for some other application,” Handy said.
But filling the NAND–DRAM gap with another layer is tough because it is really difficult to make something cheaper than DRAM, he added. “And if it isn’t cheaper, there’s no reason to put a layer in between memory and NAND flash.”