3D XPoint Memory (Intel Optane)

Some of us have been waiting for several years on various promising fast & more durable non-volatile memory techs which have been 'on the way real soon now™', like MRAM & the various follow-up 'Spintronics' techs.

I understand certain commercial uses have eventuated but not real consumer level stuff.

MRAM suffered from quite a large minimum cell size, by my understanding, & the various spintronics follow-ups don't seem to have managed to make it to volume production.

The latest is Intel & Micron's '3D XPoint',
eg http://www.anandtech.com/show/9470/...-1000x-higher-performance-endurance-than-nand

On the face of it this appears to solve a lot of those scalability issues & to be one of those incredibly simple 'why did nobody think of it before' solutions that may finally bring this kind of tech to real consumer-grade mass availability.

What are peoples' thoughts on this?
 
I wonder if we may need to see XPoint proliferate in the high performance computing and server markets before economies of scale set in and prices come down enough for even high-end system builders/gamers to swallow.

I think we'll need new motherboards. To take advantage of the speed and low latency of this technology it can't be hindered by SATA 6Gb/s, PCI-E or anything on the motherboard chipset, and it will probably require a faster, lower-latency, more direct connection to the CPU, like RAM has to the integrated memory controller on your CPU. Perhaps something akin to additional DIMM slots on high-end motherboards.

I'd love 128GB worth of XPoint DIMMs in my 2017 motherboard, with Windows and my most used apps installed on it. A few seconds from powering up to running those apps.
 
I still haven't gone down the SSD route because of my concerns over write endurance, so when I first read about this I was enthused. If this comes to market within the next 6-7 years it'll be my first outing for solid-state mass storage. The speed improvement is always nice too. The only question is price and density... isn't it less dense than NAND flash?
 
I still haven't gone down the SSD route because of my concerns over write endurance, so when I first read about this I was enthused. If this comes to market within the next 6-7 years it'll be my first outing for solid-state mass storage. The speed improvement is always nice too. The only question is price and density... isn't it less dense than NAND flash?
Anandtech estimated the die size for the 128Gbit (16GB) XPoint dies at 210-220mm² from 300mm wafer photos.
http://images.anandtech.com/doci/9470/128Gbit-3D-XPoint-Wafer.jpg

That wouldn't pose a density problem for laptops or desktops, though it would be an issue for mobile devices as a mass storage solution. However, XPoint could offer power savings if used to complement DRAM in some portable devices, given it doesn't need to be constantly refreshed like DRAM does, and, like SRAM/DRAM but unlike NAND, it is byte-level addressable, which is important for memory functionality and direct CPU access.
 
Yeah, I had only very briefly scanned that article before posting.
That die density does seem like an issue for mass adoption & they talk about 'materials' being an issue with manufacturing.
Since the die in question is 2-layer I'm hoping it'll be possible to migrate to a solution with more layers fairly rapidly.
Also being able to produce this tech on 20nm now is a big advantage over some of the other techs.

I still haven't gone down the SSD route because of my concerns over write endurance, so when I first read about this I was enthused.
MRAM type tech was already on the cards back when the first SSDs came out.
I was also concerned about write endurance, but if I recall correctly Intel officially claims something like a 10yr lifespan for their consumer-level SSDs, and that's the sort of thing they couldn't state publicly without serious technical evidence to back it up.

Perhaps something akin to additional DIMM slots on high-end motherboards.
It's definitely clear that it needs to be close to the CPU/GPU to make maximum use of it.
Where max performance is not required, it has been suggested that these techs could potentially operate in place of conventional DRAM, with the possibility to freeze & resume the current state near instantly since the cells are non-volatile.

But I believe there are other factors that mean DRAM is still likely to be needed.
Something like a stack or two across an HBM bus, or in place of Crystalwell eDRAM, seems like the kind of implementation that would be likely, at least initially.


I want to believe, but what bothers me is that there have been a whole heap of these breathless 'gonna revolutionise computing with a new fast, high-durability non-volatile tech' announcements over the years & none has actually made it to affordable mass consumer implementations yet.
As with the durability of their SSDs (& the success of their entry to that previously niche market), I feel the claim carries more weight when it's Intel saying it.
 
It's definitely clear that it needs to be close to the CPU/GPU to make maximum use of it.
Where max performance is not required, it has been suggested that these techs could potentially operate in place of conventional DRAM, with the possibility to freeze & resume the current state near instantly since the cells are non-volatile.

But I believe there are other factors that mean DRAM is still likely to be needed.
Oh for sure, I mentioned in my 2nd post that it would certainly need to complement DRAM in PCs, mobile phones, and most other devices. While density is an issue for mass storage in mobile devices, I don't see density being an issue with desktops or laptops. Currently it's 16GB per die, and with 8 dies in 8 chips on a small DIMM-like circuit board, just like current DRAM DIMM configurations, you have 128GB.
 
10x the density and 1000x the speed of SSD => it could be used as the sole memory in your system (price notwithstanding), therefore replacing RAM, SSD and HDD, leaving only optical for long-term storage and data exchange...
That would be interesting: you wouldn't load anything into memory, since it already stores everything and you don't have any other data store (just caches).
That would change programming quite a bit...

[Not saying it will, I have no real information yet regarding latency, bandwidth, format, cost...]
 
Its endurance is not good enough though (only 1000x that of NAND). To be used as main memory, it needs to have practically infinite endurance.
 
Its endurance is not good enough though (only 1000x that of NAND). To be used as main memory, it needs to have practically infinite endurance.
Indeed, we don't have much information regarding that though...
As I said it was more of an interesting prospect than a realistic approach.
 
I think there were some discussions somewhere on this topic when everyone was talking about MRAM. :p

However, one interesting possibility right now is, with SSD already pretty mature and affordable, should there be a new way to make permanent storage? Most filesystems right now are basically designed with HDD in mind, where seek times are very high and random access should be minimized.

With SSDs, which allow access patterns more similar to random access memory, it'd be possible to make some kind of object-based filesystem. Right now, relations between files in a traditional file system must be in the form of a path, but that can't be the most efficient way when dealing with an SSD. In theory, it should be possible to have a "pointer" in an object which points to another object on the same SSD, and the CPU should be able to access it like reading from main memory. Right now, one still has to use memory-mapped files to do something remotely similar to this in a smartphone app. That's just not the most efficient way. Ironically, in the old days of ROM cartridges, programs did read data from ROM like reading from main memory...
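For reference, here's roughly what that memory-mapped-file workaround looks like on a POSIX system today; the file name and the "first 4 bytes are an offset" layout are just invented for the example:

```c
/* Minimal sketch of today's memory-mapped-file approach on a POSIX
 * system. "assets.bin" and its layout are made-up examples. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("assets.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

    /* Map the whole file. Today reads are serviced via the page cache
     * and the block device; on directly addressable storage they could
     * in principle hit the device itself. */
    const uint8_t *base = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    /* The file contents are now ordinary memory: an "object" at one
     * offset can hold the offset of another object, and following it
     * is just pointer arithmetic rather than a read() call. */
    uint32_t first_offset = *(const uint32_t *)base;   /* invented layout */
    const uint8_t *object = base + first_offset;
    printf("first byte of object: %u\n", object[0]);

    munmap((void *)base, st.st_size);
    close(fd);
    return 0;
}
```

The point being that once the mapping exists, chasing a "pointer" between objects is plain pointer arithmetic; the filesystem and block layer only get involved behind the scenes.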

Once we have something like that, it'll be much easier to migrate to the "NVRAM as main memory" model, and it could open a lot new opportunities.
 
With ROM cartridge consoles you don't have to think about access permissions, writes, backups, having more than one SSD in a system (or being able to share that SSD between multiple systems), configuration changes, wear levelling, allocation/deallocation, fragmentation, or garbage collection.

With non-volatile memory that is fine-grain addressable (i.e. cache line addressable) you could certainly imagine foregoing a file system altogether and mapping it directly into process virtual address spaces, in the same way volatile memory is handled now. But the non-volatile nature of the memory means there are new questions that need to be answered. How does a process gain access to memory that another process (possibly the same executable) allocated? What happens if you switch off your PC and move the non-volatile memory around? Will you have to re-map pointers in that memory because of conflicting base addresses? How do you clean up this non-volatile memory?
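On the base-address question specifically, one technique that gets suggested is to never store raw pointers in the persistent region at all, only offsets from the region's start, so the region can be mapped anywhere without fixing anything up. A rough sketch, with all names invented and not tied to any particular product:

```c
/* Rough sketch of offset-based ("relative") pointers inside a
 * persistent region, so the region can be mapped at any base address.
 * Names and layout are illustrative only. */
#include <stddef.h>
#include <stdint.h>

typedef uint64_t pm_off_t;          /* offset from the region base, 0 = null */

struct pm_node {
    pm_off_t next;                  /* persistent "pointer" to another node */
    uint32_t payload;
};

/* Turn a stored offset into a usable pointer for this mapping. */
static inline struct pm_node *pm_deref(void *region_base, pm_off_t off)
{
    return off ? (struct pm_node *)((char *)region_base + off) : NULL;
}

/* Turn a pointer back into an offset before storing it persistently. */
static inline pm_off_t pm_offset(void *region_base, struct pm_node *p)
{
    return p ? (pm_off_t)((char *)p - (char *)region_base) : 0;
}
```

The cost is an extra addition on every dereference, which seems a fair trade for being able to move the memory between machines or mappings.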
 
Well, most of the problems you mentioned are not present in smartphones, yet smartphones all use traditional filesystems. Of course, it's more difficult for traditional PCs to take this route.

Access permission is not a hard problem to solve either. Normal OSes already have different access permissions for main memory (and some for memory-mapped I/O), so it shouldn't be too hard to have a similar setup for a memory-like permanent storage system.

Currently the biggest problem with flash SSDs is that they are not really fine-grain writable, as flash needs to be erased before writing. However, MRAM does not have this problem, and I think maybe with some clever controller design it's possible to make a flash SSD look fine-grain accessible.
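Purely as a thought experiment on the "clever controller" idea: byte-granular writes over erase-before-write flash basically come down to read-modify-write into a fresh page plus a remapping table, which is more or less what a flash translation layer already does. Everything below (page size, hardware hooks, map) is made up for illustration:

```c
/* Toy sketch of how a controller could fake byte-granular writes on
 * top of erase-before-write flash: read the old page, patch the byte,
 * program the result into a fresh page, and remap. A real flash
 * translation layer is far more involved. */
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Hypothetical low-level flash operations provided by the hardware. */
extern void flash_read_page(uint32_t phys_page, uint8_t *buf);
extern void flash_program_page(uint32_t phys_page, const uint8_t *buf);
extern uint32_t flash_alloc_erased_page(void);

/* Logical-to-physical page map maintained by the controller. */
static uint32_t page_map[1 << 20];

void controller_write_byte(uint64_t logical_addr, uint8_t value)
{
    uint32_t lpage  = (uint32_t)(logical_addr / PAGE_SIZE);
    uint32_t offset = (uint32_t)(logical_addr % PAGE_SIZE);
    uint8_t buf[PAGE_SIZE];

    flash_read_page(page_map[lpage], buf);   /* read-modify-write */
    buf[offset] = value;

    uint32_t fresh = flash_alloc_erased_page();
    flash_program_page(fresh, buf);
    page_map[lpage] = fresh;                 /* old page reclaimed by GC later */
}
```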
 
However, one interesting possibility right now is, with SSD already pretty mature and affordable, should there be a new way to make permanent storage? Most filesystems right now are basically designed with HDD in mind, where seek times are very high and random access should be minimized.

With SSDs, which allow access patterns more similar to random access memory, it'd be possible to make some kind of object-based filesystem. Right now, relations between files in a traditional file system must be in the form of a path, but that can't be the most efficient way when dealing with an SSD. In theory, it should be possible to have a "pointer" in an object which points to another object on the same SSD, and the CPU should be able to access it like reading from main memory. Right now, one still has to use memory-mapped files to do something remotely similar to this in a smartphone app. That's just not the most efficient way. Ironically, in the old days of ROM cartridges, programs did read data from ROM like reading from main memory...

Paths are just human-understandable identifiers that the filesystem somehow maps into locations in storage. You almost certainly want to keep some path-like mechanism even in a system where all storage is directly addressable, just to keep the system understandable by normal human beings. On Unix the normal way to do IO is generally to memory-map all the files you access. I expect that in the future of single-level storage, the only thing that changes for software is that memory-mapping files becomes faster.
 
Well, most of the problems you mentioned are not present in smartphones
Which problems aren't present? You can't move the internal flash around, but you can add SD cards or storage via USB, and you can connect the phone itself as storage to another device.

Access permission is not a hard problem to solve either. Normal OSes already have different access permissions for main memory (and some for memory-mapped I/O), so it shouldn't be too hard to have a similar setup for a memory-like permanent storage system.
Main memory is allocated to a process and usually not shared between processes, and the user has no control over it. File systems are based on the premise that persistent data belongs to the user, as processes are not persistent (and even with non-volatile memory they won't be), and the user can copy and share files as well as feed them to different processes. Arguably the latter is very useful and I would hope that we can continue along that line, even if that means using pointers to objects in non-volatile memory is problematic. (and even though I think the concept of files as generic unstructured blob containers is stupid).

What is it you'd like to do that memory mapped files don't let you?
 
In some ways, what's old is new again.

I've worked with a number of proprietary database software types, some with a legacy in an era that predates the ready availability of large amounts of volatile working memory. Some map their structures to parallel namespaces (edit: volatile and non-volatile versions), which can allow for better performance and provide some tools for maintaining transactional behavior.

I would say I'm interested in seeing what mindset developers and designers would need to take with single-level storage if unlimited-endurance non-volatile memory became available.
Most of the documented proposals for non-volatile memory have a much higher resistance to upsets since they do not rely on charge, so sources of error would probably be more consistent with gradual drift or other forms of bit rot or cell failure. That might encourage some level of block tracking or eventual writeback to a lower level of storage.

The shift from an environment where execution is handled in a transient and private format to one where state can persist effectively forever can reveal a lot of assumptions made in programs across individual processes (or instantiations of a program or service), across crashes, across (not-really atomic) transactions, across epochs of a system's growth, and across design changes.
GIGO takes on additional meanings when a lot more garbage becomes archived for a lot of naive programs, or when new designs months or years later fail to handle all the corner cases, or when what was once valid becomes garbage due to similar gaps in time or thinking.
Just as humans don't commonly handle concurrency well, they don't handle thinking outside of the here and now particularly well either.
Every patch that breaks game saves is a small example of this.

I'm also rather cautious, but I would expect that either volatile memory remains, or on-die or in-stack non-volatile stores along with pervasive encryption and error-checking comes into play.
Non-volatile main memory is a much, much bigger attack vector in the temporal dimension, and some of the actions taken that can leak data to ephemeral memory that were already dangerous become cemented.
Things like aggressively overwriting deallocated memory, cycling encrypted data, and securing some of the error-recovery measures posited for non-volatile storage might be necessary, since the data is physically persistent in a way that survives disruptions that DRAM data most likely could not.
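As one small, concrete example of the "aggressively overwriting deallocated memory" point: a plain memset() before free() can legally be optimized away, so something like explicit_bzero() (available in glibc 2.25+ and the BSDs; other platforms would need an equivalent) would probably have to become routine when the backing memory never forgets:

```c
/* Sketch: scrub a secret before releasing it, so it does not linger in
 * persistent memory. A plain memset() here can be removed by the
 * optimizer; explicit_bzero() is one way to keep the wipe. */
#define _DEFAULT_SOURCE
#include <stdlib.h>
#include <string.h>

struct session_key {
    unsigned char bytes[32];
};

void destroy_session_key(struct session_key *key)
{
    explicit_bzero(key->bytes, sizeof key->bytes);  /* wipe survives -O2 */
    free(key);
}
```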
 
Which problems aren't present? You can't move the internal flash around, but you can add SD cards or storage via USB, and you can connect the phone itself as storage to another device.

And these can be, and are, managed as external storage. You don't really want your apps randomly writing their internal data to SD cards or USB-connected storage anyway.

Main memory is allocated to a process and usually not shared between processes, and the user has no control over it. File systems are based on the premise that persistent data belongs to the user, as processes are not persistent (and even with non-volatile memory they won't be), and the user can copy and share files as well as feed them to different processes. Arguably the latter is very useful and I would hope that we can continue along that line, even if that means using pointers to objects in non-volatile memory is problematic. (and even though I think the concept of files as generic unstructured blob containers is stupid).

What is it you'd like to do that memory mapped files don't let you?

If someone wants to "copy and share" files, they can still do that. SD cards and USB flash drives will still be using traditional filesystems for compatibility reasons (and for interface reasons... they are not fine-grain accessible at the interface level). It just doesn't have much to do with how applications manage their data internally.

The problem with memory-mapped files is that they're not efficient enough on an SSD. After all, all they do is translate memory accesses into SATA commands, which is really not necessary if you have something like MRAM, or an SSD with a smart controller.
 
And these can be, and are, managed as external storage. You don't really want your apps randomly writing their internal data to SD cards or USB-connected storage anyway.
They're not accessed differently from an application perspective, though. External storage is mounted into the same file system tree as internal storage.

The problem with memory-mapped files is that they're not efficient enough on an SSD. After all, all they do is translate memory accesses into SATA commands, which is really not necessary if you have something like MRAM, or an SSD with a smart controller.
But that's not a problem of the memory mapped file API. If your SSD could map its entire contents into physical memory address space, you'd still need an API to govern access to those contents and map physical addresses into process virtual address space.
What particular limitations of memory mapped file APIs do you see?
 
They're not accessed differently from an application perspective, though. External storage is mounted into the same file system tree as internal storage.

But that's not a problem of the memory mapped file API. If your SSD could map its entire contents into physical memory address space, you'd still need an API to govern access to those contents and map physical addresses into process virtual address space.
What particular limitations of memory mapped file APIs do you see?

What I am thinking about is more about how an app accesses its own data. For example, a game app needs to load a lot of static image data which never changes. Non-game apps generally also have quite a bit of static image data.
Using memory-mapped files, it's possible to reduce some of the overhead of loading texture data into the GPU. However, if the flash memory is directly accessible to the SoC, it's even possible to let the GPU access the texture data directly without even touching the CPU, and that'd be much more efficient than ever. Non-game apps also benefit because many apps have a lot of images going around. Performance of switching between apps should also improve quite a lot, and there's much less need to load apps from storage.

Of course, I understand that with a 32-bit OS the lack of address space is a serious problem, but now 64-bit CPUs are everywhere, and I think it's about time that at least some mobile OS tried this approach.
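To make the texture example concrete, this is roughly the halfway measure available today: map the asset file and hand the mapped pointer straight to the GL driver rather than read()ing it into a staging buffer first. The file name, its layout (raw RGBA pixels after a 16-byte header), and the dimensions are invented, error handling is omitted, and a current OpenGL ES context is assumed:

```c
/* Sketch: mmap a texture file and pass the mapped pointer to the GL
 * driver, skipping an explicit read() into a staging buffer.
 * "texture.raw" and its layout are invented for the example. */
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <GLES2/gl2.h>

GLuint load_texture_mapped(const char *path, int width, int height)
{
    int fd = open(path, O_RDONLY);
    struct stat st;
    fstat(fd, &st);                               /* error handling omitted */

    const uint8_t *base = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    const void *pixels = base + 16;               /* skip the invented header */

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* The driver reads the pixels directly from the mapping; with truly
     * byte-addressable storage this copy could in principle disappear. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    munmap((void *)base, st.st_size);
    close(fd);
    return tex;
}
```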
 
I don't think there's anything in memory mapped file APIs that would prevent these use cases.
 
The problem with memory-mapped files is that they're not efficient enough on an SSD. After all, all they do is translate memory accesses into SATA commands

I don't think you understand what memory-mapping a file does. On a SATA device, it allocates some RAM and copies the content of the device into that RAM when accessed. On any device that allows direct random access into it (such as NOR flash currently), memory-mapping just attaches that memory at that point in your address space. When you memory-map a NOR (or XPoint in the future) page into memory and access it, there is no translation of access beyond the one that is always done for RAM; it's just a direct bus access into that device.

As I said before, the only thing that the visible VFS layer of filesystems provides is a translation of human-readable paths into pointers into devices. All the other features of filesystems are completely orthogonal to this. I never see this system going away; it's just too practical.
 