3D Xpoint Memory (Intel Optane)

In what situation do you find data centers with extremely low write loads AND a gigantic number of CPU sockets?

Sounds like a very niche combination to support, especially since if you screw up you can wreck your memory modules permanently. Also, as it would seem, this tech has ~10x DRAM access latency, so it wouldn't be great as main memory anyway in a high-read scenario, I would think.
 
Looks like this memory will still be used primarily for long-term storage, only hooked up differently to the system for vastly higher performance than traditional storage subsystems.

Using it directly as CPU/GPU random access memory or cache seems like a pretty terrible idea, as you could wear it out in a matter of days in a write-intensive application, even if you have a very big pool of memory to work from...
Also, the latency and memory bandwidth would cripple video cards. Just looking at memory bandwidth: 6GB/s for 1st-generation Xpoint memory DIMMs, with an eventual goal of 25GB/s for future 3D-stacked Xpoint memory.
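To put a rough number on the wear-out worry, here's a quick back-of-envelope sketch in Python. The endurance figure is purely my assumption (no per-cell cycle counts have been published), it assumes perfect wear levelling, and the write rate is just the first-gen DIMM bandwidth figure mentioned above:

# Rough Xpoint wear-out estimate for a small, write-hammered pool.
# All figures are assumptions for illustration, not published specs.
endurance_cycles = 1e6       # assumed write cycles per cell
pool_gb          = 8         # small pool used as a GPU/CPU-style cache
write_gb_per_s   = 6         # first-gen Xpoint DIMM bandwidth figure
secs_per_full_overwrite = pool_gb / write_gb_per_s
lifetime_days = endurance_cycles * secs_per_full_overwrite / 86_400
print(f"~{lifetime_days:.0f} days to exhaust the cells")  # ~15 days here
# Lifetime scales linearly with pool size and assumed endurance, and
# inversely with the sustained write rate.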
 
It would be great for consumer-level motherboards. Boot up much faster than with an SSD.
Even with regular SATA SSDs, boot time isn't the major component of system startup time; rather, it's the POST... Or well, it is with my mobo anyway. Maybe more consumer-oriented, mass-market mobos than my ASUS RoG-series board have UEFI firmware that gets through POST quicker, I'm not sure. :)

My older Nehalem-based Socket 1366 board has a BIOS with quite a substantial POST. Windows bootup, even Windows 7, is relatively quick in comparison from that box's quite old Intel X25-E SSD (although it now runs Win10, and thus boots even quicker).
 
Also, the latency and memory bandwidth would cripple video cards. Just looking at memory bandwidth: 6GB/s for 1st-generation Xpoint memory DIMMs, with an eventual goal of 25GB/s for future 3D-stacked Xpoint memory.

I was thinking more in addition to, rather than replacing, current GDDR implementations. For instance, used as a texture cache sitting closer to the GDDR memory, much faster than getting data from the HDD?

Also, the 6GB/s number appears to be per DIMM; with multiple DIMMs/memory banks, wouldn't it be possible to have much more memory/bandwidth available?

http://www.kitguru.net/components/m...nt-ssds-will-feature-up-to-6gbs-of-bandwidth/
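If it does scale, the back-of-envelope math is straightforward. This is a sketch that assumes the 6GB/s is per DIMM and that bandwidth adds up linearly across DIMMs, which a real memory controller may well not deliver:

# Naive aggregate-bandwidth estimate; assumes perfect scaling across DIMMs.
bw_per_dimm_gbps = 6     # first-gen Xpoint DIMM figure from the article
dimms            = 8     # e.g. a well-populated board
aggregate_gbps = dimms * bw_per_dimm_gbps
print(f"{aggregate_gbps} GB/s aggregate")   # 48 GB/s with these numbers
# For reference, a single DDR4-2400 channel is 2400 MT/s * 8 bytes ~= 19.2 GB/s,
# so even with perfect scaling Xpoint stays well behind DRAM per channel.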
 
I was thinking more in addition to, rather than replacing, current GDDR implementations. For instance, used as a texture cache sitting closer to the GDDR memory, much faster than getting data from the HDD?

Also, the 6GB/s number appears to be per DIMM; with multiple DIMMs/memory banks, wouldn't it be possible to have much more memory/bandwidth available?

http://www.kitguru.net/components/m...nt-ssds-will-feature-up-to-6gbs-of-bandwidth/
Nice link I've never seen this photo before.

[Image: intel_3d_xpoint_projections.png]


Hmm, I think there is little reason to place Xpoint physically on cards as a texture cache, at least for video games. Games are generally designed around the X1/PS4, and thus cards don't need that much more memory than they currently have. Right now many games cache textures in system memory, or directly in VRAM if the card has a lot of VRAM. Our current PCIe 3.0 is good enough for streaming textures from all these X1/PS4 ports, and PCIe 4.0 is coming in 2017. HBM2 will further increase the amount of memory on cards as the technology gets cheaper. Why not just have the Xpoint memory elsewhere in the system and not burden card makers with additional costs?
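To put numbers on the "PCIe is good enough" point, here's a rough sketch; the link rates are the standard PCIe figures, and the 4GB texture working set is just an arbitrary example I picked:

# Rough texture-streaming math; the 4 GB working set is an arbitrary example.
pcie3_x16_gbps = 16 * 0.985            # ~15.8 GB/s per direction for PCIe 3.0 x16
pcie4_x16_gbps = 2 * pcie3_x16_gbps    # PCIe 4.0 doubles the per-lane rate
texture_set_gb = 4
print(f"PCIe 3.0 x16: {texture_set_gb / pcie3_x16_gbps:.2f} s to stream the set")
print(f"PCIe 4.0 x16: {texture_set_gb / pcie4_x16_gbps:.2f} s")
# ~0.25 s and ~0.13 s respectively -- and either link already outruns the
# 6 GB/s a first-gen Xpoint DIMM could feed it anyway.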
 
The ideal for a gaming PC as far as I can see is 3D Xpoint SSD (main storage) -> DDR4 (system memory) -> HBM2 (graphics). That's one crazy fast memory pipeline.
 
Why not HMC or HBM RAM for the CPU as well? Solder on 16GB, and you won't have to upgrade - there's totally no need to for regular schmoes, or even hardcore gamers. :)
 
Nice link I've never seen this photo before.

[Image: intel_3d_xpoint_projections.png]


Hmm, I think there is little reason to place Xpoint physically on cards as a texture cache, at least for video games. Games are generally designed around the X1/PS4, and thus cards don't need that much more memory than they currently have. Right now many games cache textures in system memory, or directly in VRAM if the card has a lot of VRAM. Our current PCIe 3.0 is good enough for streaming textures from all these X1/PS4 ports, and PCIe 4.0 is coming in 2017. HBM2 will further increase the amount of memory on cards as the technology gets cheaper. Why not just have the Xpoint memory elsewhere in the system and not burden card makers with additional costs?

Xpoint would provide much larger quantities of pretty fast memory compared to HBM (very fast memory), at a price point that is much lower.
Setting aside the XB1/PS4 argument, would having an order of magnitude more pretty-fast memory alongside the HBM allow the PC to do anything it couldn't do easily today? (Much larger, totally destructible worlds, much higher texture quality, etc.?)
 
Why not HMC or HBM RAM for the CPU as well? Solder on 16GB, and you won't have to upgrade
Indeed.
That rumored 250W APU on 14nm FinFET with HBM2, new-gen GPU and Zen CPU cores could be absolutely epic if the execution is good.
 
While cool, that would still be much weaker than a 95W CPU + 250W GPU discrete alternative.
Why not compare it to a combined 250W for CPU+GPU? I don't think the performance would differ that much. The current-gen APU problem is mostly the memory bandwidth, and also the fact that it is targeting sub-100W. If someone were to actually make a 250W APU with HBM, I believe it would compare favorably vs a combined CPU + GPU @ 250W.
For comparison, an R7 250 with 6 CUs @ 1GHz has a 75W max TDP, while the Kaveri A10-7850K with 8 CUs @ 720MHz has a 95W max TDP. Of course a straight comparison is hard, because one has on-board RAM (and thus more memory bandwidth) and the other contains not only the CPU but also the north bridge. Also, the APU will drop the CPU clock under a heavy GPU load, which a CPU + GPU combination wouldn't do. But my point is that overall an APU should at least perform at a similar level vs CPU+GPU within the same power budget. Heck, it would probably perform better, at the expense of flexibility in the power budget.
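As a rough sanity check on those two parts, peak GPU throughput comes out pretty close. This sketch uses the usual GCN figures of 64 shaders per CU and 2 FLOPs (FMA) per shader per clock, and it obviously ignores memory bandwidth, which is the APU's real handicap:

# Peak single-precision GPU throughput for the two parts mentioned above.
# GCN: 64 shaders per CU, 2 FLOPs (FMA) per shader per clock.
def peak_gflops(cus, ghz):
    return cus * 64 * 2 * ghz

r7_250    = peak_gflops(6, 1.0)     # discrete card, 75W board TDP
a10_7850k = peak_gflops(8, 0.72)    # APU's GPU portion, 95W for the whole chip
print(f"R7 250:    {r7_250:.0f} GFLOPS")     # 768
print(f"A10-7850K: {a10_7850k:.0f} GFLOPS")  # ~737
# Similar raw throughput despite the different power envelopes -- the APU's
# budget also has to cover the CPU cores, but it's DDR3 bandwidth that starves it.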
 
Why not compare it to a combined 250W for CPU+GPU?

The point was to illustrate that discrete CPU+GPU will always be able to afford a higher power budget than an APU. Thus no matter how awesome your APU is, there's always going to be a more powerful discrete option.
 
Or you could compare with a quad-CrossFire setup, I guess :p

I have been kinda hoping that top end discrete may go under 250W on 14nm.
 
Thus no matter how awesome your APU is, there's always going to be a more powerful discrete option.

I wouldn't be so sure. The graphics AIB market has been eroded from below by integrated graphics to the point where Intel now has almost three quarters of the market (in units). HBM/HMC will allow the CPU/APUs to kill the next tier of AIBs.

Cheers
 
I thought Intel had over 80% back in the day when they had the really, really bad onboard graphics, so arguably 75% is actually down from that.
 
I wouldn't be so sure. The graphics AIB market has been eroded from below by integrated graphics to the point where Intel now has almost three quarters of the market (in units). HBM/HMC will allow the CPU/APUs to kill the next tier of AIBs.

Cheers
I think there will always be system builders who spend $250+ on video cards; how far can APUs and integrated GPUs push their performance into these higher price brackets?
 
I thought Intel had over 80% back in the day when they had the really, really bad onboard graphics, so arguably 75% is actually down from that.

That was also before AMD had integrated graphics. AMD likely ate into some portion of that integrated market share while at the same time losing discrete market share to Nvidia, with the end result of them having a lower market share than in the past.

That was also when Intel's integrated basically destroyed most of the IHVs making cheap graphics chips for notebook computers (like Trident Microsystems, for example). Those low end graphics chip makers dominated the mobile space with extremely cheap chips until Intel came along. The relatively more expensive and robust ATI chips and later Nvidia chips were never a challenge to either those low end chip makers or Intel's integrated for the vast majority of the notebook market (where cost and not performance was the key determining factor).

I'm also fairly certain it was never as high as 80% of all PC graphics. 80% of mobile graphics might have happened, but not overall PC graphics shipments. But I could be wrong.

Regards,
SB
 
I'm a bit late to see this as I've been a bit busy, but the technology looks to be really promising.

Enterprise Drive - http://www.anandtech.com/show/11209...ep-dive-into-3d-xpoint-enterprise-performance
Low capacity consumer "caching" drive - http://www.anandtech.com/show/11210/the-intel-optane-memory-ssd-review-32gb-of-kaby-lake-caching

Obviously first generation products (no idle power states on the consumer drive, for example) and quite expensive compared to mature NAND based drives.

Performance, however, is impressive for a first generation implementation, especially access latency. It'll be interesting to see if this can mature to the point where it would be suitable (price per GB) for the consumer market. And if it can, how long it'll take Intel to get there.

Also interesting is that it doesn't suffer from NAND's dependence on multiple channels for high performance. Thus low-capacity drives with only a few chips don't suffer large performance penalties compared to larger-capacity drives with more chips.
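A quick illustration of why channel count matters so much for NAND and so little here. The NAND page-read time is my assumption for a typical TLC die, and the Xpoint figure is roughly the device-level latency the review measured:

# Why NAND needs many dies/channels for random-read IOPS and Xpoint doesn't.
# Latency figures are rough assumptions for illustration.
nand_read_us   = 70        # assumed 4KB page read time for one TLC NAND die
xpoint_read_us = 10        # roughly the latency seen in the P4800X review
target_iops    = 400_000   # ballpark for a high-end enterprise SSD

def dies_needed(read_us):
    iops_per_die = 1_000_000 / read_us     # one outstanding read per die
    return target_iops / iops_per_die

print(f"NAND dies needed:   {dies_needed(nand_read_us):.0f}")    # ~28
print(f"Xpoint dies needed: {dies_needed(xpoint_read_us):.0f}")  # 4
# A small, low-capacity Xpoint drive can hit high IOPS with only a few chips,
# where a small NAND drive simply runs out of parallelism.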

Would like to see a hybrid HDD with an integrated Optane cache, similar to Seagate's SSHDs.

Regards,
SB
 
The Intel Optane SSD DC P4800X is apparently released now; the 375GB 2.5" U.2 version shows up at one retailer on the search site I'm using, and it costs 'only' about €2200. :runaway:

Jesus! Wasn't Optane supposed to be at roughly price parity with flash, or am I entirely mistaken here?
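For what it's worth, the quick math on that listing, with the NAND price being my rough ballpark for a mainstream consumer SATA drive rather than an exact quote:

# Price-per-GB for that P4800X listing; the NAND figure is a rough ballpark.
optane_eur, optane_gb = 2200, 375
nand_eur_per_gb       = 0.35          # assumed mainstream SATA SSD price today
optane_eur_per_gb = optane_eur / optane_gb
print(f"P4800X: ~{optane_eur_per_gb:.2f} EUR/GB")                        # ~5.87
print(f"That's roughly {optane_eur_per_gb / nand_eur_per_gb:.0f}x NAND") # ~17x
# Nowhere near price parity, at least not for this enterprise part.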
 