Xbox Series X [XBSX] [Release November 10 2020]

Please tell me you don't mean the control flow integrity layer that MS developed for Azure, which happens to be called Griffin. It's like flowguard, but made by Microsoft.
 
The return of the ROM cartridge?

One important difference between MS's and Sony's I/O solutions is MS's claim to be able to transfer data directly from the SSD to the GPU. The claim of 100 GB of NAND SSD being instantly available comes to mind. The questions are then:
(1) What does the qualifier "instantly" mean in this context?
(2) What exactly is being made "available"?
(3) For what purpose?

The careless observer will just wave it away by saying that this is just good old virtual memory paging, which is not in fact a direct transfer of data from the SSD to the GPU. However, the idea of virtual memory paging does not stand up to scrutiny in this case.
Reason 1: There is nothing particularly instantaneous about virtual memory paging. It describes a tortuous circuit whereby the CPU has to acknowledge a page fault, look through the filesystem to find the requested page on the SSD, find an empty frame in main memory or evict a stale page to create one, and then swap the correct page in from the SSD (see the sketch after Reason 2). Yeah, nothing to brag about in terms of instantaneousness.
Reason 2: Phil Spencer, in an otherwise mundane interview in December 2019, drops an absolute bombshell: the SSD of the upcoming Xbox can be used as virtual RAM. Now, this can mean either the matching of a page on the SSD to an address in a physical memory address space which remains unchanged (virtual memory paging), OR the memory mapping of a portion (100 GB) of the SSD and its addition to the physical memory address space, contiguous with system RAM. Phil Spencer specifically mentions that the SSD will act virtually as RAM by significantly increasing the physical memory address space, comparing it to the 32- to 64-bit transition for good measure. Thus, it becomes highly probable that MS has succeeded in making part of an NVMe SSD byte-addressable, which cuts down significantly on the CPU overhead associated with virtual memory, as the CPU likely can't differentiate between system RAM and the SSD.
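To make Reason 1 concrete, here is a heavily simplified sketch of that round trip; none of the function names below (lookup_pte, find_block_on_ssd, evict_victim_frame, read_block_into_frame) are real kernel APIs, they just label the steps being described.

```c
/* Heavily simplified sketch of a demand-paging fault path; none of these
 * functions are real kernel APIs, they just label the steps in Reason 1. */
typedef struct { unsigned long frame; int present; } pte_t;

extern pte_t *lookup_pte(void *vaddr);                    /* walk the page tables         */
extern long   find_block_on_ssd(void *vaddr);             /* filesystem lookup            */
extern long   find_free_frame(void);                      /* returns -1 if RAM is full    */
extern long   evict_victim_frame(void);                   /* may write a dirty page first */
extern void   read_block_into_frame(long blk, long frm);  /* DMA the page in from the SSD */

void handle_page_fault(void *faulting_vaddr)
{
    pte_t *pte   = lookup_pte(faulting_vaddr);        /* 1. CPU traps; the kernel takes over */
    long   block = find_block_on_ssd(faulting_vaddr); /* 2. find the requested page on disk  */

    long frame = find_free_frame();                   /* 3. find an empty frame...           */
    if (frame < 0)
        frame = evict_victim_frame();                 /*    ...or evict a stale page         */

    read_block_into_frame(block, frame);              /* 4. swap the correct page in         */
    pte->frame   = (unsigned long)frame;              /* 5. update the mapping and resume    */
    pte->present = 1;
}
```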

This type of technology is not unprecedented in consoles; it's how the ROM cartridge of the good old NES functioned. Nowadays it finds an echo in a field far removed from gaming: big data and AI systems. The byte-addressable SSD is what can be described as persistent memory, a technology now common in dual-socket servers used for RDMA. Tom Talpey of Microsoft is actually a good source on the ongoing effort to develop a new filesystem API for persistent memory when used in memory mode. So much for the term of art "instantly".
Now, what is this data available for? I speculate that it is available to be duplicated back to another portion of the physical memory space, namely system RAM (the CPU will view it just as a duplication of data from one RAM address to another), and/or for streaming of textures from the SSD to the GPU as part of SFS. One interesting consequence of this aspect of the XVA is that it doesn't actually require the use of coherency engines or GPU scrubbers.
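If the 100 GB really is presented to the CPU as ordinary addresses, that "duplication" could look, from the game's side, like nothing more than a copy between two pointers. A minimal sketch, using POSIX mmap purely as an analogy -- the actual XVA interface is not public, and the file name, offsets and sizes below are invented:

```c
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Analogy only: map part of the game package so its contents appear as
     * ordinary addresses. On the console any such mapping would presumably be
     * set up by the OS/XVA runtime, not by the game calling mmap. Error
     * handling is omitted to keep the sketch short. */
    int fd = open("GamePackage.bin", O_RDONLY);           /* invented file name    */
    size_t window = 256UL * 1024 * 1024;                  /* 256 MiB of the 100 GB */
    const unsigned char *pkg = mmap(NULL, window, PROT_READ, MAP_PRIVATE, fd, 0);

    /* To the CPU, pulling an asset into system RAM is now just a copy from one
     * address to another -- no open/seek/read sequence in the hot path. */
    size_t tex_offset = 4UL * 1024 * 1024;                /* invented offset */
    size_t tex_size   = 1UL * 1024 * 1024;                /* invented size   */
    unsigned char *staging = malloc(tex_size);
    memcpy(staging, pkg + tex_offset, tex_size);

    free(staging);
    munmap((void *)pkg, window);
    close(fd);
    return 0;
}
```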
 
I'll help one more time since you guys didn't read my follow-up post. It's not tied to a gen.
Ah, it's the backwards compatibility tech they have developed, using ML and such to uprez and improve game assets.
Another one of my fine predictions. :p
 

Isn't this what HBCC does (with system memory only in Vega)?
 
HBCC concerns system RAM being made contiguous with GPU dedicated RAM.
The MS claim (in my view) is that an SSD has been made byte-addressable. As far as I know, there is only a paper from Samsung (Bae et al., IEEE, 2018) that provides insight into how it could possibly be implemented. NVIDIA's GPUDirect Storage framework aims to cut the CPU out of the conduct of I/O processes, a claim also different from that of MS.
 
I don't think "instantly" conveys anything other than not having to access the data through the typical low-level file I/O means -- the developer no longer needs to create their own buffer, open the file, issue seek commands, then perform the desired reads into the buffer. I think it's as simple as reading from mapped memory location X and the data for the specified game is made available (roughly the difference sketched at the end of this post). This is likely how SFS is set up to work as well, from within this 100 GB memory-addressable segment for the game.

The Digital Foundry write up on this:

The idea, in basic terms at least, is pretty straightforward - the game package that sits on storage essentially becomes extended memory, allowing 100GB of game assets stored on the SSD to be instantly accessible by the developer.

Since the phrase is "instantly accessible by the developer", that makes me all the more confident that it's to make the developer's job easier by not having to mess around with typical I/O.
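For concreteness, the contrast being drawn is roughly the following; the path, offsets and the pkg_base pointer are placeholders, since MS hasn't shown the real API:

```c
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

/* The "typical low-level file I/O" dance: developer-managed buffer,
 * open, seek, read. Error handling trimmed so the shape stays visible. */
unsigned char *load_asset_classic(const char *pkg_path, off_t offset, size_t size)
{
    unsigned char *buf = malloc(size);          /* create your own buffer */
    int fd = open(pkg_path, O_RDONLY);          /* open the file          */
    lseek(fd, offset, SEEK_SET);                /* issue the seek         */
    if (read(fd, buf, size) < 0) { /* perform the read into the buffer */ }
    close(fd);
    return buf;
}

/* Versus: if the 100 GB segment is mapped into the address space, "loading"
 * collapses to pointer arithmetic on whatever base address the runtime hands
 * the game. pkg_base is hypothetical; the real interface hasn't been shown. */
const unsigned char *load_asset_mapped(const unsigned char *pkg_base, size_t offset)
{
    return pkg_base + offset;                   /* the bytes are already addressable */
}
```

Which of the two SFS actually builds on is, of course, speculation.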
 
And how is that achieved exactly? An SSD is normally not byte-addressable.
 
HBCC concerns system RAM being made contiguous with GPU dedicated RAM.

AMD suggests it would be the same for SSDs too in their marketing slides, even though that function isn't exposed in the drivers at present. I'm assuming that's not the same as being byte-addressable though?
 
That 100 GB figure is a head scratcher unless it was just meant to be illustrative of a typical next-gen game size. Maybe it's something like an 8K block size with 24-bit addressing on the storage, but even so nothing adds up to a 100 GB limit. Or 256K blocks on 19 bits. 128 GB is the closest we get, no matter what block size we choose, in case it's an addressing limitation: it's a 37-bit address in bytes regardless of where the limit might be, whether in the storage addressing or the virtual mapping.
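Running the arithmetic explicitly (just a sanity check on the combinations above):

```c
#include <stdio.h>

int main(void)
{
    /* Block size in bytes x 2^(address bits), for the combinations guessed above. */
    struct { unsigned long long block; int bits; } combos[] = {
        { 8ULL << 10,   24 },  /* 8 KiB blocks, 24-bit block addressing   */
        { 256ULL << 10, 19 },  /* 256 KiB blocks, 19-bit block addressing */
        { 1ULL,         37 },  /* plain 37-bit byte addressing            */
    };

    for (size_t i = 0; i < sizeof combos / sizeof combos[0]; i++) {
        unsigned long long total = combos[i].block << combos[i].bits;
        printf("%9llu-byte blocks, %2d bits -> %llu GiB (~%.1f GB)\n",
               combos[i].block, combos[i].bits, total >> 30, total / 1e9);
    }
    /* Every combination lands on 137,438,953,472 bytes = 128 GiB (~137.4 GB),
     * so a hard addressing limit doesn't obviously explain a round 100 GB. */
    return 0;
}
```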
 
Something VR/AR could be not tied to a generation and in Eastmen's field of current interest. And it would be long overdue for Microsoft to come up with something there.
 