Velocity Architecture - more than 100GB available for game assets

Discussion in 'Console Technology' started by invictis, Apr 22, 2020.

  1. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■)
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    17,035
    Likes Received:
    17,101

    It is more than 100GB; from earlier in this thread: https://forum.beyond3d.com/posts/2139624/

    https://news.xbox.com/en-us/2020/07/14/a-closer-look-at-xbox-velocity-architecture/

    Some choice highlighting of what they say near the end makes it seem like it's not limited to 100 GB.

    Through the massive increase in I/O throughput, hardware accelerated decompression, DirectStorage, and the significant increases in efficiency provided by Sampler Feedback Streaming, the Xbox Velocity Architecture enables the Xbox Series X to deliver effective performance well beyond the raw hardware specs, providing direct, instant, low level access to more than 100GB of game data stored on the SSD just in time for when the game requires it. These innovations will unlock new gameplay experiences and a level of depth and immersion unlike anything you have previously experienced in gaming.​
     
    PSman1700 and AzBat like this.
  2. Metal_Spirit

    Regular Newcomer

    Joined:
    Jan 3, 2007
    Messages:
    587
    Likes Received:
    361
    I stand corrected ;)
    Regardless, is it a partition or not? And does it work this way or not?

    If it does, and it is indeed a partition, I'm sure the textures are not decompressed on a usage basis, due to the constant writing on the SSD; so textures are decompressed once, on first game boot, and stay on the SSD. How much space from that "above 100 GB" does each game get? Is the space divided according to game size? And if so, are those few GB per game enough?

    Those are my doubts at the time.
     
  3. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■)
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    17,035
    Likes Received:
    17,101
    We'll certainly find out once the XSX is in the hands of enthusiasts, exactly like how we got partition layouts and dumps from the Xbox One internal hard drive. I think.

    It might be more complicated to get this info if the NVME is physically attached internally and no way of connecting it to a PC exists.
     
  4. scently

    Regular

    Joined:
    Jun 12, 2008
    Messages:
    998
    Likes Received:
    208
    We might very well get more info at HotChips in less than 2 weeks from now. On the 17th to be exact.
     
    Lalaland and chris1515 like this.
  5. Jay

    Jay
    Veteran Regular

    Joined:
    Aug 3, 2013
    Messages:
    2,910
    Likes Received:
    2,072
    Wish I had more hope that we'd get the sort of details we want, but I doubt it. Maybe I'm just jaded at the moment.
    I think we'll get: a couple more k to help with the VM, profiling to know where they could cut cache, maybe something to do with size and heat management.
     
  6. rntongo

    Newcomer

    Joined:
    May 23, 2020
    Messages:
    11
    Likes Received:
    7
    I honestly doubt there is a 100GB partition on the SSD. With modern OSs that use demand paging you don't need that. That's what makes the "up to 100GB" / "above 100GB" "as virtual RAM" comments all the more interesting to me. What improvements can they make to virtual memory now that they have such a fast SSD? I think at Hot Chips we're going to learn that there are larger TLB caches on the CPU, and I think more custom controllers for finer granularity in paging to increase RAM utilization. If they do not address the virtual RAM question it will be surprising, to be honest.

    The location of the DMA controller will also be interesting. Sony decided to put theirs in a separate I/O core on the die, so they need to address issues related to cache coherence, while if I remember correctly the DMA engines on the Xbox One were in the processors (the GPU), so they don't have to worry about cache coherence with regard to file I/O through the DMA. It will also be interesting to know what advantage there is in offloading check-in and load management from the CPU like on the PS5. Will the XSX CPU still direct the DMA where to put the data in RAM, or is that also offloaded to the DMA engine?
     
  7. Metal_Spirit

    Regular Newcomer

    Joined:
    Jan 3, 2007
    Messages:
    587
    Likes Received:
    361
    If it is not a partition, then what is it?
    If you can read from the SSD, then you would have the full SSD available. So why the reference to "more than 100 GB" and not 1 TB?
     
  8. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    11,691
    Likes Received:
    12,696
    Location:
    The North
    Each game is its own sandbox. It can't access files outside of that sandbox, so that's why you can't reference the whole drive.
     
    PSman1700 and BRiT like this.
  9. Metal_Spirit

    Regular Newcomer

    Joined:
    Jan 3, 2007
    Messages:
    587
    Likes Received:
    361
    Please explain a bit better. What is that sandbox exactly?
     
  10. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    11,691
    Likes Received:
    12,696
    Location:
    The North
    It's basically a container, if you think about it from a Docker perspective.
    To provide additional security and to ensure that the game runs with its own drivers etc., they package a lightweight VM where the OS, drivers, and game are bundled into one. The game cannot see outside of its own container.

    That way you don't break older games if you make dramatic changes to the dev kit.
    You ensure that the software is secure from hacking.
    You ensure that the installed software cannot affect the rest of the system.
    It's easy to suspend multiple types of VMs by booting them up and down (which is how I imagine the XSX will be able to retain 360, XBO, and XSX games in memory for quick swaps).

    It comes with some overhead, but it secures the system.
     
    #130 iroboto, Aug 6, 2020
    Last edited: Aug 6, 2020
    tinokun, DSoup, Metal_Spirit and 7 others like this.
  11. function

    function None functional
    Legend Veteran

    Joined:
    Mar 27, 2003
    Messages:
    5,475
    Likes Received:
    3,166
    Location:
    Wrong thread
    Also, if you're doing something like MS's FlashMap (and I think they are) to reduce latency and access overhead (which they have said they've done), it's going to require an area of system RAM to store the translation table for the physical addresses of the data in question. And that's going to need to be part of the system-reserved/protected RAM for security purposes, IMO.

    IIRC the FlashMap paper gave a figure of 2GB of ram needed for a 1TB SSD. So there's probably a balance between SSD space accessible this way ("instantly available") and the amount of system reserved memory you want to allocate for this purpose. The balance MS have chosen probably equates to "more than 100GB". 100GB would be about 200MB, going by the FlashMap paper, but 1TB would require a huge 2GB chunk of reserved memory which would be unacceptable. So MS will have assessed the right kind of tradeoff to make between capacity and ram cost.
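    Those figures are easy to sanity-check. A back-of-the-envelope sketch, assuming 4 KiB pages and 8-byte page-table entries (my assumptions, not numbers from the paper excerpt):

    ```python
    # Rough check of the FlashMap-style translation-table overhead quoted
    # above: one page-table entry per SSD page held in system RAM.
    # Assumes 4 KiB pages and 8-byte entries -- both are assumptions.

    PAGE_SIZE = 4 * 1024    # 4 KiB per page
    ENTRY_SIZE = 8          # 8 bytes per page-table entry

    GiB = 1024 ** 3
    TiB = 1024 ** 4

    def mapping_overhead(mapped_bytes):
        """RAM needed to hold one translation entry per mapped SSD page."""
        return (mapped_bytes // PAGE_SIZE) * ENTRY_SIZE

    print(mapping_overhead(1 * TiB) / GiB)    # -> 2.0 (GiB for a full 1 TB SSD)
    print(mapping_overhead(100 * GiB) / GiB)  # -> ~0.195 GiB, i.e. ~200 MB for 100 GB
    ```

    So the paper's 2GB-per-1TB figure and the ~200MB-for-100GB estimate in the post are consistent with ordinary 8-byte entries over 4 KiB pages.
    
    
    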

    So I don't think the "more than 100GB" figure relates to any physical aspect of the hardware, or to a partition, or a separate SLC flash unit. I think it's about how software manages access to a particular part of a game's package on the SSD. The only limit would be the ram cost MS have decided is acceptable for facilitating the type of access described in their FlashMap research paper.

    That's not to say you'd only be limited to "[whatever] more than 100GB". I expect you could still access additional data using whatever current systems are in place, much like back compat games will do. I doubt third party cross gen games will care about using DirectStorage or "Virtual Memory" at all, so I wouldn't expect to see seamlessly streaming games everywhere at first.

    Edit: Link to the MS paper:

    https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/flashmap_isca2015.pdf

    "Applications can map data on SSDs into virtual memory totransparently scale beyond DRAM capacity, permitting themto leverage high SSD capacities with few code changes. Ob-taining good performance for memory-mapped SSD content,however, is hard because the virtual memory layer, the filesystem and the flash translation layer (FTL) perform addresstranslations, sanity and permission checks independently fromeach other. We introduce FlashMap, an SSD interface that isoptimized for memory-mapped SSD-files. FlashMap combinesall the address translations into page tables that are used toindex files and also to store the FTL-level mappings withoutaltering the guarantees of the file system or the FTL. It usesthe state in the OS memory manager and the page tables toperform sanity and permission checks respectively. By com-bining these layers, FlashMap reduces critical-path latencyand improves DRAM caching efficiency. We find that this in-creases performance for applications by up to 3.32x comparedto state-of-the-art SSD file-mapping mechanisms. Additionally,latency of SSD accesses reduces by up to 53.2%."
     
    tinokun, Lalaland, turkey and 7 others like this.
  12. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    17,281
    Likes Received:
    6,872
    I'm not so sure it's about MS choosing a balance but just offering that as something akin to what they expect the average Next Gen AAA game size might be. Depending on game, it's likely to be less, but it could be more.

    The size will vary by game depending on how large the game's assets are and whether the assets in question require or would benefit from "almost instantaneous" access. Game files that are used mostly to initialize the game state, the engine, voice files, etc. don't need that level of instant access. Meanwhile, texture assets would likely all be packaged in such a way that they reside in the FlashMap area (assuming this is what they are using, or something similar to it).

    Regards,
    SB
     
    tinokun and function like this.
  13. function

    function None functional
    Legend Veteran

    Joined:
    Mar 27, 2003
    Messages:
    5,475
    Likes Received:
    3,166
    Location:
    Wrong thread
    Yeah, with balance I was meaning what they'll need to offer a next gen game in most circumstances (there are always outliers, especially later on) and how much ram they reserve for this. Assuming, like you say, they're using something similar to FlashMap.

    Textures and geometry would seem to be the most important things for the system to be able to treat like they're (almost) in memory. Very large data sets that you might want a small piece of for the next frame, frame after frame..
     
    tinokun likes this.
  14. rntongo

    Newcomer

    Joined:
    May 23, 2020
    Messages:
    11
    Likes Received:
    7
    It honestly just sounds like, thanks to the SSD, they can have a larger usable virtual address space than was possible with the HDD. You don't need to partition the SSD. If I understand correctly, that would increase latency (transferring files from the game install, to a special partition, to RAM). Unless they had large non-volatile memory (say 128-256GB) between the SSD and RAM, this doesn't make sense. So what they most likely did was increase the size of the TLB caches, improve the OS's handling of demand paging, and maybe add extra hardware for more efficient paging.
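    A rough illustration of what demand paging buys you here: once a file is memory-mapped, the OS faults individual pages in from the drive the moment the program touches them, with no explicit reads and no staging partition. A minimal sketch in Python (the file name and sizes are invented for illustration; this is not any Xbox API):

    ```python
    # Sketch of demand paging via memory-mapping: a file is mapped into the
    # virtual address space, and touching a byte faults in just that page.
    # "assets.bin" and its size are made-up illustration values.
    import mmap
    import os
    import tempfile

    path = os.path.join(tempfile.mkdtemp(), "assets.bin")
    with open(path, "wb") as f:
        f.truncate(64 * 1024 * 1024)   # sparse 64 MiB stand-in "asset package"

    with open(path, "r+b") as f:
        view = mmap.mmap(f.fileno(), 0)    # map the whole file, no data read yet
        # Indexing triggers a page fault; the OS pulls in only the page that
        # holds this byte -- no read() call, no copy into a staging area.
        first = view[32 * 1024 * 1024]     # touch one byte mid-file
        view.close()

    print(first)   # -> 0 (untouched pages of a sparse file read as zeros)
    ```

    The point is that the "more than 100GB" could simply be the portion of a game's package exposed through a mapping like this, rather than any physical partition.
    
    
    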
     
    Metal_Spirit likes this.
  15. rntongo

    Newcomer

    Joined:
    May 23, 2020
    Messages:
    11
    Likes Received:
    7
    This is very interesting and is most likely similar to what's in the Xbox Series X. Definitely not a separate 100GB partition of the SSD.
     
    function and Metal_Spirit like this.
  16. rntongo

    Newcomer

    Joined:
    May 23, 2020
    Messages:
    11
    Likes Received:
    7
    I've only read the abstract, but it is impressive how deep it goes beyond virtual memory to provide a unified abstraction layer for the file index, the page tables and the FTL. The latency improvement alone would be a significant benefit compared to a system with higher throughput but high latency.
     
  17. rntongo

    Newcomer

    Joined:
    May 23, 2020
    Messages:
    11
    Likes Received:
    7
    Also, they should definitely have custom firmware for the SSD controller, and a custom controller just like the PS5. It will be interesting to see if they elaborate a bit on that for marketing purposes.
     
  18. DSoup

    DSoup meh
    Legend Veteran Subscriber

    Joined:
    Nov 23, 2007
    Messages:
    13,361
    Likes Received:
    8,902
    Location:
    London, UK
    Something I've never quite understood is how Xbox games take advantage of optimisations rolled out by Xbox OS updates. I.e., on PS4, Sony roll out new firmware which may make some OS/API functions run faster, and all games that use those API functions immediately gain a performance boost without being updated. On Xbox, the drivers are part of the VM package, so does that mean that all VM packages need to be updated/patched to take advantage of optimisations in the OS? It can't, surely? There must be a smart implementation behind the scenes?
     
  19. turkey

    Regular Newcomer

    Joined:
    Oct 21, 2014
    Messages:
    987
    Likes Received:
    707
    Location:
    London
    I cannot see how this would work on Xbox outside of Microsoft patching the title themselves with changes to the container and VM drivers.

    How would Sony be sure this change does not have unintended consequences on PS4? Given how cautious they were with boost mode on PS4 this seems counter to that.
     
  20. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■)
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    17,035
    Likes Received:
    17,101
    It all depends on what exactly is performed inside the Game Container and what is thunked to the OS/HyperVisor.
     