Blazing Fast NVMEs and Direct Storage API for PCs *spawn*

I just remembered a theory I had from the console world. If I pretend that the 6GB of lower-speed RAM is 3 channels, maybe that is dedicated to the OS and a 'bounce buffer' for the GPU to output to the high-speed memory. Maybe PC GPU/OS drivers can do the same thing?

edit - Xbox Series X: 16GB RAM - 10GB high speed, 6GB lower speed.
 
So an overprovisioned pool/chunk of memory is out of the question?
edit - as in pool allocators, and for anything that doesn't fit into those, a power-of-two malloc that runs off pools.
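For what it's worth, here's a rough sketch of the kind of scheme I mean: fixed-size pools, with power-of-two size classes as a fallback that itself runs off pools. The sizes and block counts are just placeholders, not a real engine budget.

```cpp
// Minimal sketch of an overprovisioned pool scheme: fixed-size pools, plus a
// power-of-two size-class fallback that also runs off pools, so transient
// allocations never touch the general heap. Capacities are placeholders.
#include <cstddef>
#include <vector>

class FixedPool {
public:
    FixedPool(size_t blockSize, size_t blockCount)
        : blockSize_(blockSize), storage_(blockSize * blockCount) {
        for (size_t i = 0; i < blockCount; ++i)
            freeList_.push_back(storage_.data() + i * blockSize);
    }
    void* alloc() {
        if (freeList_.empty()) return nullptr;  // pool exhausted (overprovision to avoid this)
        void* p = freeList_.back();
        freeList_.pop_back();
        return p;
    }
    void free(void* p) { freeList_.push_back(static_cast<std::byte*>(p)); }
    size_t blockSize() const { return blockSize_; }
private:
    size_t blockSize_;
    std::vector<std::byte> storage_;
    std::vector<std::byte*> freeList_;
};

// Power-of-two size classes from 4 KiB to 4 MiB, each backed by its own pool.
class PooledAllocator {
public:
    PooledAllocator() {
        for (size_t sz = 4 * 1024; sz <= 4 * 1024 * 1024; sz *= 2)
            pools_.emplace_back(sz, /*blockCount=*/16);
    }
    void* alloc(size_t bytes) {
        for (auto& pool : pools_)
            if (bytes <= pool.blockSize()) return pool.alloc();
        return nullptr;  // larger than the biggest class: handle separately
    }
    void free(void* p, size_t bytes) {
        for (auto& pool : pools_)
            if (bytes <= pool.blockSize()) { pool.free(p); return; }
    }
private:
    std::vector<FixedPool> pools_;
};
```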

No.

I can imagine that PS5 style "continuously streaming textures" actually reduces the pressure on VRAM. With ultra-low-latency texture streaming the problem becomes how much space there is on the disk for the game install, not VRAM.

For PC games I doubt latencies will ever allow for PS5 style ultra-streaming game engines, because the lowest common denominator of a PC with 300MB/s max disk bandwidth is unavoidable.

Why 'No.'? It seems to me that it is the 'temp' data of decompression that is susceptible to memory fragmentation.
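If that's the worry, one way around it, as a rough sketch (sizes made up): give decompression its temp space out of a fixed scratch arena that gets recycled every frame, so the transient buffers never hit the general heap at all.

```cpp
// Rough sketch: a fixed scratch arena for decompression temp data.
// Buffers are carved out linearly and the whole region is reclaimed once per
// frame, so transient decompression data never fragments the general heap.
// The 64 MiB budget below is an arbitrary placeholder.
#include <cstddef>
#include <vector>

class ScratchArena {
public:
    explicit ScratchArena(size_t bytes) : buffer_(bytes) {}

    // Returns scratch space for one decompression job, or nullptr if the
    // arena is full for this frame (the caller would then defer the job).
    void* alloc(size_t bytes, size_t align = 256) {
        size_t start = (head_ + align - 1) & ~(align - 1);
        if (start + bytes > buffer_.size()) return nullptr;
        head_ = start + bytes;
        return buffer_.data() + start;
    }

    // Called once per frame after all decompression jobs are done;
    // everything is reclaimed at once, no per-buffer frees.
    void reset() { head_ = 0; }

private:
    std::vector<std::byte> buffer_;
    size_t head_ = 0;
};

// Usage: ScratchArena arena(64 * 1024 * 1024);
//        void* tmp = arena.alloc(compressedChunkSize);
//        ...decompress into tmp, upload the result...
//        arena.reset();  // at end of frame
```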
 
So is this going to be the first time with some mass cut-off or exodus because games would run horribly on HDD/slow SSDs? I still remember playing Crysis on a Pentium.

I think a lot of multi-plats might still target those much slower 560MB/s SATA SSDs next gen if those are the majority, the next step up from HDDs.
 
A collection of ideas and thoughts about DirectStorage (DS), made just by reading and trying to understand.
Waiting for your corrections, pointers, explanations, etc.

- (DS) is an API based on NVMe capabilities that are underused at the moment, plus a new way of packaging assets.
- The NVMe side is universal. The possibilities are there, but legacy APIs ignore them; consoles are free of this burden.
- The asset handling can be fully hardware (Xbox Series), hardware assisted (RTX IO, hopefully AMD IO too) or pure software (CPU) unpacking and installing, like today's HDD games.
- (DS) is part of DirectX. Can it be detached and used alone (Direct3D+(DS), Vulkan+(DS)), or is (DS) so embedded in Direct3D that the combo Direct3D+(DS) is needed?

The way I see it, (DS) is the DX12/Vulkan of disk storage access: a low-level I/O API controlled by the game engine to access clearly defined and structured asset packages.
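To make that concrete, here's a minimal sketch of what an engine-controlled, batched read straight into a GPU buffer could look like. I'm assuming a dstorage.h-style SDK header here; the interface and field names are my guess at the eventual PC API, not anything confirmed by Microsoft.

```cpp
// Hedged sketch: a batched, engine-controlled read straight into a GPU buffer,
// assuming a dstorage.h-style SDK. Names are illustrative; error handling omitted.
#include <cstdint>
#include <dstorage.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void LoadAssetBlob(ID3D12Device* device, ID3D12Resource* gpuBuffer,
                   ID3D12Fence* fence, uint64_t fenceValue,
                   const wchar_t* pakPath, uint64_t offset, uint32_t size)
{
    ComPtr<IDStorageFactory> factory;
    DStorageGetFactory(IID_PPV_ARGS(&factory));

    ComPtr<IDStorageFile> file;
    factory->OpenFile(pakPath, IID_PPV_ARGS(&file));

    // One queue per priority class; the engine batches requests and submits them together.
    DSTORAGE_QUEUE_DESC queueDesc{};
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.Device     = device;
    ComPtr<IDStorageQueue> queue;
    factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));

    // One request: read 'size' bytes at 'offset' from the asset package into a GPU buffer.
    DSTORAGE_REQUEST request{};
    request.Options.SourceType      = DSTORAGE_REQUEST_SOURCE_FILE;
    request.Options.DestinationType = DSTORAGE_REQUEST_DESTINATION_BUFFER;
    request.Source.File.Source      = file.Get();
    request.Source.File.Offset      = offset;
    request.Source.File.Size        = size;
    request.Destination.Buffer.Resource = gpuBuffer;
    request.Destination.Buffer.Offset   = 0;
    request.Destination.Buffer.Size     = size;
    request.UncompressedSize            = size;

    queue->EnqueueRequest(&request);
    queue->EnqueueSignal(fence, fenceValue);  // the engine waits on this fence
    queue->Submit();
}
```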
 
I was agreeing that an overprovisioned pool/chunk of memory is a potential solution :)

Oh well there goes my confidence in my reading comprehension. Any opinion on my stupid theory that I re-quoted below?

I just remembered a theory I had from the console world. If I pretend that the 6GB of lower-speed RAM is 3 channels, maybe that is dedicated to the OS and a 'bounce buffer' for the GPU to output to the high-speed memory. Maybe PC GPU/OS drivers can do the same thing?
edit - Xbox Series X: 16GB RAM - 10GB high speed, 6GB lower speed.

Oh, and since we're talking about moving fast, did anyone suggest mip chain compression as a piece of what they are doing? With potentially special importance on finding commonalities between all textures in the game for the further-away mip slices.
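Not that I've seen, but the second part would be easy to test: hash the raw bytes of the last few mips of every texture and count exact duplicates. A throwaway sketch; the MipLevel struct is just a stand-in for whatever the engine already has, and texture loading itself is out of scope.

```cpp
// Throwaway sketch: estimate cross-texture commonality of the small mip tail
// by hashing the raw bytes of each texture's lowest mips and counting duplicates.
#include <cstdint>
#include <unordered_map>
#include <vector>

struct MipLevel {
    std::vector<uint8_t> pixels;  // raw (or block-compressed) bytes of one mip
};

// FNV-1a over the mip bytes: cheap and good enough for a duplicate count.
static uint64_t hashBytes(const std::vector<uint8_t>& bytes) {
    uint64_t h = 1469598103934665603ull;
    for (uint8_t b : bytes) { h ^= b; h *= 1099511628211ull; }
    return h;
}

// Returns how many tail mips (across all textures) are exact duplicates of
// some other tail mip already seen.
size_t countDuplicateTailMips(const std::vector<std::vector<MipLevel>>& textures,
                              size_t tailMips = 4) {
    std::unordered_map<uint64_t, size_t> seen;
    size_t duplicates = 0;
    for (const auto& chain : textures) {
        size_t start = chain.size() > tailMips ? chain.size() - tailMips : 0;
        for (size_t i = start; i < chain.size(); ++i) {
            if (seen[hashBytes(chain[i].pixels)]++ > 0) ++duplicates;
        }
    }
    return duplicates;
}
```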
 
So it looks like they've brought GPUDirect Storage to the desktop. Awesome.
May I remind you that GPUDirect Storage / RDMA uses peer-to-peer DMA, which requires a PCIe switch in between NIC/SSD and GPU.

So far such a setup is only found on Radeon Pro SSG (Solid State Graphics) cards and NVidia DGX-2 supercomputers - nothing like this would be possible on regular PCs without proper support from the chipset/CPU, and it's certainly not how DirectStorage actually works on the Xbox Series X, because its custom Ryzen APU only has system memory (though a very fast one) and no dedicated video memory.

So unless upcoming mainstream desktop platforms from AMD and Intel support P2P DMA across separate PCIe root ports (which I really doubt), data from the NVMe SSD would still have to travel through the system RAM before it ends up in the video RAM.
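In other words, the likely PC path stays the classic one: read from the SSD into a system-memory staging buffer (a D3D12 upload heap), then have the GPU copy it into its own memory. Roughly like the sketch below, with device/command-list setup and synchronization omitted.

```cpp
// Sketch of the conventional "bounce through system RAM" path on PC:
// file -> upload heap (system memory) -> GPU copy into a DEFAULT-heap buffer.
// Device/queue/command-list creation and fencing are omitted for brevity.
#include <cstring>
#include <d3d12.h>
#include <fstream>
#include <vector>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void UploadThroughSystemRam(ID3D12Device* device,
                            ID3D12GraphicsCommandList* cmdList,
                            ID3D12Resource* vramBuffer,   // DEFAULT-heap destination
                            const char* path, size_t size)
{
    // 1. Read from the SSD into ordinary system memory.
    std::vector<char> fileData(size);
    std::ifstream file(path, std::ios::binary);
    file.read(fileData.data(), static_cast<std::streamsize>(size));

    // 2. Create a staging buffer in an UPLOAD heap (CPU-writable system RAM).
    D3D12_HEAP_PROPERTIES heapProps{};
    heapProps.Type = D3D12_HEAP_TYPE_UPLOAD;
    D3D12_RESOURCE_DESC desc{};
    desc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width = size;
    desc.Height = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels = 1;
    desc.SampleDesc.Count = 1;
    desc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;
    ComPtr<ID3D12Resource> staging;
    device->CreateCommittedResource(&heapProps, D3D12_HEAP_FLAG_NONE, &desc,
                                    D3D12_RESOURCE_STATE_GENERIC_READ, nullptr,
                                    IID_PPV_ARGS(&staging));

    // 3. CPU copy into the staging buffer.
    void* mapped = nullptr;
    staging->Map(0, nullptr, &mapped);
    std::memcpy(mapped, fileData.data(), size);
    staging->Unmap(0, nullptr);

    // 4. GPU copies staging -> VRAM; this is the second trip over the bus.
    cmdList->CopyBufferRegion(vramBuffer, 0, staging.Get(), 0, size);
}
```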

The following quotes from the article really make it sound like you'll need an NVMe drive that supports DirectStorage, and maybe other components as well.
https://devblogs.microsoft.com/directx/directstorage-is-coming-to-pc/
My thoughts too - supported/recommended SSDs would probably need to implement some block-size related NVMe 1.3 features in their firmware, like LBA Data Size (LBADS) for native 4 KByte sectors (4Kn instead of 512e) and maybe even larger sector sizes, and I/O block alignment/granularity hints for the OS storage driver, like Namespace Optimal I/O Boundary (NOIOB), Write Size (NOWS), Namespace Preferred Write Granularity (NPWG), Write Alignment (NPWA), Deallocate Granularity (NPDG), and Deallocate Alignment (NPDA).
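On the Windows side you can already see part of this from user mode, e.g. by querying the logical/physical sector size a drive reports (which is where 512e vs 4Kn shows up). A small sketch, with the drive path hard-coded for brevity:

```cpp
// Small sketch: query the logical/physical sector size Windows sees for a drive
// via IOCTL_STORAGE_QUERY_PROPERTY. The drive path is hard-coded for brevity.
#include <windows.h>
#include <winioctl.h>
#include <cstdio>

int main()
{
    // Access rights of 0 are enough for this query IOCTL.
    HANDLE drive = CreateFileW(L"\\\\.\\PhysicalDrive0", 0,
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               nullptr, OPEN_EXISTING, 0, nullptr);
    if (drive == INVALID_HANDLE_VALUE) return 1;

    STORAGE_PROPERTY_QUERY query{};
    query.PropertyId = StorageAccessAlignmentProperty;
    query.QueryType  = PropertyStandardQuery;

    STORAGE_ACCESS_ALIGNMENT_DESCRIPTOR align{};
    DWORD bytesReturned = 0;
    if (DeviceIoControl(drive, IOCTL_STORAGE_QUERY_PROPERTY,
                        &query, sizeof(query), &align, sizeof(align),
                        &bytesReturned, nullptr)) {
        printf("Logical sector:  %lu bytes\n", align.BytesPerLogicalSector);
        printf("Physical sector: %lu bytes\n", align.BytesPerPhysicalSector);
    }
    CloseHandle(drive);
    return 0;
}
```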
 
Why does that image have a box labelled "NIC"? WTF does networking have to do with this?
I assume the "NIC" should be more like the Storage Interface Controller, like Northbridge or SouthBridge that connects to the PCI-Express layer?
The entire diagram is an error. The actual RTX IO news article only talks about GPU decompression, with no mention of direct data transfer to video memory as in GPUDirect.


I guess they had to present some visuals for product unveiling, and they've just reused the diagrams from the GPUDirect Storage presentation without giving it much thought.

https://devblogs.nvidia.com/gpudirect-storage/#attachment_15420
https://devblogs.nvidia.com/gpudirect-storage/#attachment_15426

Note how the RTX IO slide copies the first GPUDirect slide, where a box in the same position is actually labeled 'PCIe Switch', and note how the left part of the second GPUDirect slide presents the same connection flow from SSD through the NIC - because it's actually networked storage that uses NVM Express over Fabrics (NVMe-oF).
The RTX IO slide looks like an improper synthesis of these two GPUDirect slides.


The actual API implementation is still in its early stages, since the PC version of DirectStorage has to co-exist with the driver stack in the Windows I/O Manager, which handles filesystems and disk devices.


[Attached images: GPUDirect-Fig-1-New.png, GPUDirect-Fig-2_new.png]
 
May I remind you that GPUDirect Storage / RDMA uses peer-to-peer DMA, which requires a PCIe switch between NIC/SSD and GPU.

So far such a setup is only found on Radeon Pro SSG (Solid State Graphics) cards and NVidia DGX-2 supercomputers - nothing like this would be possible on regular PCs without proper support from the chipset/CPU, and it's certainly not how DirectStorage actually works on the Xbox Series X because it only has system memory (though a very fast one) and no dedicated video memory.

Yes, I suspect you may be right. I speculated further up the thread that the diagram may merely be representative of the reduction in CPU overhead rather than specifically representing a P2P DMA. I've certainly not seen any mention of DMA in any of the literature around this, nor mention of any additional hardware requirements beyond an RTX GPU and Windows 10. Even the NVMe requirement is questionable as I've seen mention of SATA SSDs and even mechanical drives, although those reports may be inaccurate.

Unless upcoming mainstream desktop platforms from AMD and Intel support P2P DMA across separate PCIe slots, which I really doubt, all data from the NVMe SSD would still have to go through the system RAM before it ends up in the video RAM.

Well, per our previous discussion it does seem that AMD has supported this since Zen; Intel is likely another matter though.

Regardless of whether the data still needs to go via system memory or not though, the reduction in CPU overhead outside of the decompression requirements is still very significant. From 2 cores down to 0.5 according to Nvidia. But that could easily be the DirectStorage effect.
 
Well, per our previous discussion it does seem that AMD has supported this since Zen; Intel is likely another matter though.
No, it's only supported for chipset ports that share the same CPU root port. It's not really possible to initiate P2P between different CPU root ports in the PCIe hierarchy, such as an NVMe M.2 SSD on an x4 link and a GPU on a different x16 link.


To recap, P2P support has to be manifested with a PCIe Switch in the topology - a collection of one Upstream Port with multiple Downstream Ports. This typically concerns devices connected to the same root port, such as a chipset sharing an x4 link from the CPU to multiple PCIe devices like NICs/audio/M.2 SSDs and additional PCIe slots.

AMD X470/X570 chipsets have a two-level hierarchy of switches for NIC ports, but the Linux driver only walks through the nearest upstream port and never reaches the upper-level upstream port.

So when the Linux driver detects several devices on the same root port, but can't find a connection between them through the hierarchy of upstream/downstream ports, it would enable P2P using the whitelist of recent CPUs.

EDIT: Actually it is possible to initiate P2P DMA transfers between root ports on the same PCIe Root Complex (PCI Host Bridge).
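If anyone wants to check their own box, here's a quick user-space sketch that resolves two PCI devices' sysfs paths and prints their nearest common ancestor in the hierarchy; pass your own GPU and SSD addresses (the ones in the comment are just examples).

```cpp
// Quick check of where two PCI devices meet in the hierarchy, by comparing
// their resolved sysfs device paths (Linux). Run with your own GPU/SSD
// addresses, e.g.: ./pcie_common 0000:01:00.0 0000:04:00.0
// If the nearest common ancestor is only the root complex (pci0000:00),
// there is no shared switch/root port between the two devices.
#include <filesystem>
#include <iostream>
#include <string>
#include <vector>

namespace fs = std::filesystem;

static std::vector<std::string> pathComponents(const std::string& bdf) {
    // /sys/bus/pci/devices/<bdf> is a symlink into the full device hierarchy.
    fs::path resolved = fs::canonical("/sys/bus/pci/devices/" + bdf);
    std::vector<std::string> parts;
    for (const auto& p : resolved) parts.push_back(p.string());
    return parts;
}

int main(int argc, char** argv) {
    if (argc != 3) { std::cerr << "usage: pcie_common <bdf1> <bdf2>\n"; return 1; }
    auto a = pathComponents(argv[1]);
    auto b = pathComponents(argv[2]);

    // Walk the two paths in parallel and remember the last shared component.
    std::string common;
    for (size_t i = 0; i < a.size() && i < b.size() && a[i] == b[i]; ++i)
        common = a[i];

    std::cout << "Nearest common ancestor: " << common << "\n";
    return 0;
}
```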

the diagram may merely be representative of the reduction in CPU overhead rather than specifically representing a P2P DMA
Then why is a Network Interface Card pictured in the data path? I'd think they just needed pretty graphics with some vague message of great things to come, so they've reused the GPUDirect slide.

 
The entire diagram is an error. The actual RTX IO news article only talks about GPU decompression, with no mention of direct data transfer to video memory as in GPUDirect.


I guess they had to present some visuals for product unveiling, and they've just reused the diagrams from the GPUDirect Storage presentation without giving it much thought.

https://devblogs.nvidia.com/gpudirect-storage/#attachment_15420
https://devblogs.nvidia.com/gpudirect-storage/#attachment_15426

Note how the RTX IO slide copies the first GPUDirect slide, where a box in the same position is actually labeled 'PCIe Switch', and note how the left part of the second GPUDirect slide presents the same connection flow from SSD through the NIC - because it's actually networked storage that uses NVM Express over Fabrics (NVMe-oF).
The RTX IO slide looks like an improper synthesis of these two GPUDirect slides.


The actual API implementation is still in its early stages, since the PC version of DirectStorage has to co-exist with the driver stack in the Windows I/O Manager, which handles filesystems and disk devices.
Jensen specifically states that there are 3 new advances with RTX I/O. "New I/O APIs for direct transfer from SSD to GPU memory" being one of them.

I can't believe Nvidia would be so lazy and inept to simply use some slides that aren't even remotely representative of what is actually possible.

You bring up a good point about them not mentioning it in their blogs on their site, but I think that's because they don't want to talk about hardware requirements at this time: IF it does require new hardware, people might be inclined to hold off upgrading until motherboards which support it are out.

I mean, the fact that they've mentioned "certain NVMe drives" are required, should tell you that there's going to be some other requirements as well. If this was simply about GPU decompression, then there's no reason why certain NVMe drives would work, and others not.

I think they'll be ready to talk about hardware requirements sometime mid next year.
 
Jensen specifically states that there are 3 new advances with RTX I/O. "New I/O APIs for direct transfer from SSD to GPU memory" being one of them.
It does not mean peer-to-peer DMA between SSD and GPU is involved. I was specifically responding to the suggestion that RTX IO is similar to GPUDirect Storage as found on the DGX-2 with a two-level PCIe switch complex.

I can't believe Nvidia would be so lazy and inept to simply use some slides that aren't even remotely representative of what is actually possible.
I can find no other viable explanation of how the NIC block ended up on the RTX IO diagram.
 
I just gave you one. New hardware will be required... and they aren't ready to talk about that yet.
How exactly is the network controller going to be required for the DirectStorage / RTX IO data path on the PC, other than having been taken from a different GPUDirect RDMA drawing, where it belongs?

the fact that they've mentioned "certain NVMe drives" are required, should tell you that there's going to be some other requirements as well
Sorry, I don't follow the logic.

If this was simply about GPU decompression, then there's no reason why certain NVMe drives would work, and others not.
They may require firmware support for certain NVMe 1.3 features, like 4K sector size and optimal I/O boundary hints. Or they may certify certain NVMe drives with specific minimum read/write/IOPS performance to match the Xbox Series X.
 
How exactly is the network controller going to be required for the DirectStorage / RTX IO data path on the PC, other than having been taken from a different GPUDirect RDMA drawing, where it belongs?
You're looking at the picture too literally. "NIC" could refer to a new controller on the motherboard designed for the specific purpose of routing data from the SSD to the GPU. If they aren't ready to talk about it... of course they're going to use the same terminology as their GPUDirect implementation. Stating anything else at the moment would clue people in that they're going to need a new motherboard... and right now they have reasons for most definitely not wanting to do that.

Sorry, I don't follow the logic.

They may require firmware support for certain NVMe 1.3 features, like 4K sector size and optimal I/O boundary hints. Or they may certify certain NVMe drives with specific minimum read/write/IOPS performance to match the Xbox Series X.

From the DirectStorage blog:
That’s where DirectStorage for PC comes in. This API is the response to an evolving storage and IO landscape in PC gaming. DirectStorage will be supported on certain systems with NVMe drives and work to bring your gaming experience to the next level. If your system doesn’t support DirectStorage, don’t fret; games will continue to work just as well as they always have.

"Certain systems with NVMe drives"...not "systems with certain NVMe drives" It implies something more is changing.
 
No, it's only supported for chipset ports that share the same CPU root port. It's not really possible to initiate P2P between different CPU root ports in the PCIe hierarchy, such as an NVMe M.2 SSD on an x4 link and a GPU on a different x16 link.


To recap, P2P support has to be manifested with a PCIe Switch in the topology - a collection of one Upstream Port with multiple Downstream Ports. This typically concerns devices connected to the same root port, such as a chipset sharing an x4 link from the CPU to multiple PCIe devices like NICs/audio/M.2 SSDs and additional PCIe slots.

AMD X470/X570 chipsets have a two-level hierarchy of switches for NIC ports, but the Linux driver only walks through the nearest upstream port and never reaches the upper-level upstream port.

So when the Linux driver detects several devices on the same root port, but can't find a connection between them through the hierarchy of upstream/downstream ports, it would enable P2P using the whitelist of recent CPUs.

But that quote specifically states P2P DMA can be enabled using a whitelist. So the hardware does support that functionality. It's Linux and the Linux driver that don't natively support it. AMD also states the hardware supports it here: https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.2-AMD-Zen-P2P-DMA

If Microsoft wanted to support that hardware capability natively in Windows 10 through something like Direct Storage, then I'm not sure why they'd be unable to.

Then why is a Network Interface Card pictured in the data path? I'd think they just needed pretty graphics with some vague message of great things to come, so they've reused the GPUDirect slide.

Even if the slide doesn't accurately reflect the data flow, it can still be representative of the vastly reduced CPU load resulting from RTX IO.
 
"NIC" could refer to a new controller on the motherboard
The chances for this are close to zero.

NIC is an established acronym for "Network Interface Card/Controller", the term has been widely used since early 1980s. There are no alternative meanings for this acronym.

Storage controllers are typically referred to as "NVMe/SATA/SAS/SCSI controller".

"New Interface Controller" would be a terrible name for any device.


"Certain systems with NVMe drives"...not "systems with certain NVMe drives"
It makes no meaningful difference. The NVMe drive is part of the system, so NVMe drive requirements are part of the system requirements.


They further elaborate on these "certain systems" in the blog post: With a supported NVMe drive and properly configured gaming machine, DirectStorage etc...

This would probably require a certain class of SSDs, but is unlikely to require an entirely new motherboard.

If they aren't ready to talk about it... of course they're going to use the same terminology as their GPUDirect implementation.
They are not using "the same terminology"; in fact, the GPUDirect Storage news and the RTX IO news have almost nothing in common textually.
 
But that quote specifically states P2P DMA can be enabled using a whitelist. So the hardware does support that functionality.
Only for devices on the same CPU root port. SSD and GPU are typically connected to the CPU on two different root ports, so the driver won't enable peer-to-peer for them.

EDIT: Actually P2P DMA is enabled for devices on the same PCIe Root Complex (PCI Host Bridge in Linux terms) and even between different Root Complexes (in multiprocessor systems and NUMA nodes).

it can still be representative of the vastly reduced CPU load resulting from RTX IO.
Can't really see how this diagram could be interpreted in terms of reduced CPU load.
 
The chances for this are close to zero.

NIC is an established acronym for "Network Interface Card/Controller", the term has been widely used since early 1980s. There are no alternative meanings for this acronym.

Storage controllers are typically referred to as "NVMe/SATA/SAS/SCSI controller".

"New Interface Controller" would be a terrible name for any device.


It makes no meaningful difference. The NVMe drive is part of the system, so NVMe drive requirements are part of the system requirements.


They further elaborate on these "certain systems" in the blog post: With a supported NVMe drive and properly configured gaming machine, DirectStorage etc...

This would probably require a certain class of SSDs, but is unlikely to require an entirely new motherboard.

They are not using "the same terminology"; in fact, the GPUDirect Storage news and the RTX IO news have almost nothing in common textually.
Ok... but Jensen specifically stating during the presentation "new APIs for fast loading and streaming directly from SSD to GPU memory" ... you can explain some slides being copied (which I still don't buy) but Jensen specifically stating that? Nah.
 
I think for now we don't have enough specificity to have a clear idea of what we'll be getting. The only clear aspect was that Microsoft would have some sort of internal checks done to know if they can safely bypass several layers of legacy in order to optimize operations.
 