What Is Persistent Memory — and Why Do We Need It?

February 26, 2018

By Jean S. Bozman

Some may think it’s a bit too soon, but Silicon Valley is already thinking about persistent memory (PM) and its use cases. What is persistent memory – and why is it needed? Persistent memory is silicon-based solid-state memory that stores data – even if there is a power failure – ensuring ongoing data access for high-performance computing, Big Data analytics and data transfer for large datasets.

Why is it important? Persistent memory addresses some of the new issues that are cropping up in a world of scale-out, software-defined infrastructure where containers are “spun up” and “spun down” – accessing data in hybrid cloud and multi-cloud deployments. Chief benefits include the ability to keep working data closer to the CPU, and to move toward wider addressability of data across network fabrics, increasing the size of addressable data namespaces.

PM is a niche product today, just entering the marketplace. Nevertheless, many vendors see persistent memory as a building block for next-generation in-memory computing, high-speed data transfers and scale-out clusters. It has the potential to change the way data flows work in scale-out architecture – and as such its evolution should be watched closely.

PM technology is in active development at the semiconductor companies that manufacture DRAM today. Early use cases include memory-as-storage NVDIMMs, which are already being shipped by some systems vendors; in-memory databases; fast processing for large datasets; and support for super-fast networking interconnects. We expect in-memory databases and fast interconnects to be among the leading uses for PM in coming years.
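The “memory-as-storage” idea behind NVDIMMs is that applications read and write persistence-capable media with ordinary load/store memory semantics rather than block I/O. A minimal sketch of that programming model, using Python’s standard mmap module against an ordinary file – on real PM hardware the file would sit on a DAX-mounted persistent-memory filesystem and CPU cache flushes would take the place of the msync-style flush; the file name here is purely illustrative:

```python
import mmap

# Hypothetical backing file; on real PM hardware this would live on a
# DAX-mounted persistent-memory filesystem.
PATH = "pm_demo.bin"

# Create and size the region (PM is byte-addressable; 4 KiB for the demo).
with open(PATH, "wb") as f:
    f.truncate(4096)

# Map the file into the address space: reads and writes become plain
# memory operations instead of read()/write() system calls.
with open(PATH, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)
    mem[0:5] = b"state"   # store data with memory (load/store) semantics
    mem.flush()           # analogous to flushing CPU caches to the media
    mem.close()

# After a (simulated) restart, the data is still there.
with open(PATH, "rb") as f:
    data = f.read(5)
print(data)  # b'state'
```

The point of the sketch is the access model, not the durability guarantee: a real PM stack adds instructions and library support to ensure stores actually reach persistent media before they are considered committed.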

Vendors and customers have seen new technologies emerge, year after year. For persistent memory, the ecosystem cannot grow rapidly until vendors and customers become convinced of its long-term utility. Advocates must show how persistent memory enables next-generation Big Data analytics – stateful workloads that need a “persistent” dataset for extended computation and for fast data access and retrieval. Many of today’s software developers, working in the Java programming world, are looking for on-ramps to develop applications that leverage PM. Some experts have suggested that developers “try” building PM-friendly apps in the cloud, using public cloud workspaces to write proof-of-concept (PoC) applications.


The Persistent Memory Summit

The Storage Networking Industry Association (SNIA) held the one-day Persistent Memory Summit conference on Jan. 24, 2018, in San Jose, California, to allow vendors to get on the “same page” about next steps as the technology evolves.

The conference explored the impact of persistent memory. Presentations centered on new memory technologies related to the digital transformation of traditional data centers into scale-out pools of compute, storage and networking. Speakers came from (in alphabetical order) ARM, Cray, HPE, IBM, Intel, Mellanox, Micron, Microsoft, NetApp, Oracle, VMware, Western Digital, and others.

To make end-to-end solutions work, speakers from a series of high-profile tech companies said, updates will have to be made in the following technology areas:

  • Hardware features for in-memory processing (e.g., NVDIMM memory-as-storage devices)
  • Network interconnect standards (e.g., NVMe and RDMA), with wider use of standardized APIs to these fast data-transfer technologies
  • Software support in operating systems and hypervisors (e.g., Linux distributions and Microsoft Windows)
  • Developer on-ramps, so that programmers – many of them working in the Java world – can build applications that leverage persistent memory


Systems infrastructure will have to evolve over time to support persistent memory – and at first, even for the most compelling use-cases, PM will account for only a fraction of all storage devices shipped. That means PM will likely be a fast-growing market segment, but one that lags SSDs and HDDs in unit shipments and worldwide revenue.

A growth pattern like that is typical of new and emerging technologies in the IT marketplace. Older storage technologies will likely remain widely deployed, even at sites using PM for some applications. It’s worth noting that solid-state drives (SSDs), now seeing rapid growth in 2018, have not stopped hard-disk drive (HDD) shipments – a demonstration of how organizations leverage multiple storage technologies, as needed. Organizations will continue to use HDDs and SSDs even as they add PM for some applications.

As for adoption, PM will likely take off first in certain sectors: high-performance computing (e.g., oil/gas, defense and financial services applications); in-memory computing (e.g., Big Data analytics); and workloads that combine high-speed networking with extremely large datasets (e.g., media/entertainment’s large files and high-frequency trading in financial markets).


How to Grow the PM Ecosystem

In our view, three keys to the emerging ecosystem being built around persistent memory in the next few years (2018-2020) are:

  • Understanding PM’s value, and how it works. Unlike so many of the early cloud apps, which were stateless (such as search), stateful workloads (based on transaction updates and serial calculations) need data to persist, even if queries are restarted due to a power outage or workload migration. Stateful workloads need persistent storage even though the containers and VMs serving them are transient, continually being spun up and spun down as they traverse the software-defined infrastructure. The need for fast memory access to large datasets will become self-evident – though early adopters will have to prove it first.
  • Building to standards is essential. New forms of storage, including persistent memory, NVMe and Fibre Channel over Ethernet (FCoE), all depend on the development of industry standards. Leveraging standards allows an expanding ecosystem of technology providers to build interoperable components, including interconnects, memory and software interfaces. The emergence of “memory fabrics” supporting the RDMA standard is another impetus to adopt PM, as is the use of NVDIMMs. (NOTE: SNIA develops and supports worldwide industry standards for storage and networking – and many of the Summit speakers are SNIA members).
  • Increasing awareness among ISV software developers. More of the industry’s software applications will have to be adapted to work with persistent memory. One way to do this is to have software developers use PM capabilities on the large public clouds (e.g., AWS, Microsoft Azure and Google Cloud Platform) for proofs of concept (PoCs) and fail-fast initiatives as they climb the learning curve for leveraging PM. Persistent memory will become more widely adopted when more flagship or lighthouse use-cases are proven and socialized to IT managers – and to the business managers who will fund PM-based memory-as-storage installations. That’s why university customers and corporate developers will be important early adopters for PM.
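The stateful-workload point above can be made concrete: because a container may be spun down at any moment, a worker must checkpoint its progress to persistent media and resume from that checkpoint after restart. A minimal, hypothetical sketch (the file name and batch logic are invented for illustration; with PM hardware, the checkpoint could be a persisted memory region rather than a file):

```python
import json
import os

# Hypothetical checkpoint location; with PM this would be a persisted
# memory region rather than a JSON file on disk.
CHECKPOINT = "worker_state.json"

def load_state():
    # Resume from the last persisted checkpoint, or start fresh.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"processed": 0}

def process_batch(state, batch_size=10):
    # Do some work, then persist the new state before acknowledging it.
    state["processed"] += batch_size
    with open(CHECKPOINT, "w") as f:
        json.dump(state, f)
    return state

state = process_batch(load_state())
print(state["processed"])  # 10 on a fresh run; resumes across restarts
```

The faster the persistence medium, the cheaper each checkpoint becomes – which is why stateful, frequently-checkpointing workloads are a natural fit for PM.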



Persistent memory is seen as an important building block for next-generation in-memory computing, high-speed data transfers and scale-out clusters for computation and storage. As “hyperscale” technologies move beyond stateless apps, it’s clear that fast, persistent memory will be an important addition to the portfolio of next-generation technologies. The use of PM will likely bubble up in more generalized discussions about evolving data center infrastructure – and those discussions will pick up steam as the need for faster memory grows over the next three years.










Cloud Computing, Jean Bozman
About Jean Bozman

Jean is a senior industry analyst focusing her research on server technology, storage technology, database software and the emerging market for Software Defined Infrastructure (SDI).