Velocity Architecture - more than 100GB available for game assets

Discussion in 'Console Technology' started by invictis, Apr 22, 2020.

  1. dobwal

    Legend Veteran

    Joined:
    Oct 26, 2005
    Messages:
    5,436
    Likes Received:
    1,498
    My first impression was that you were calling the mere mention of instant access to 100 GB of data on the SSD “fluff”.

    I still have trouble with the term in reference to the article. The article adds nothing new for us but our knowledge about the situation is built from consuming and sharing a plethora of articles, interviews, tweets and other sources. The purpose of the article is to enlighten the reader with the gist of the knowledge we have gathered without all the effort.

    It adds no practical value to our discussion because that’s not its intent. Its purpose isn’t to better inform the already well informed.

    Marketing fluff often involves a ton of wordiness and superlatives.
     
    scently, PSman1700 and BRiT like this.
  2. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    43,577
    Likes Received:
    16,029
    Location:
    Under my bridge
    Which is all I meant by 'fluff'. It's not a reference point for us in this discussion. I wasn't commenting on its value as a piece of public-facing marketing material; that's OT for this discussion.
     
    dobwal and DSoup like this.
  3. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    7,841
    Likes Received:
    1,160
    Location:
    Guess...
    This would be my assumption too. HBCC could describe what the XSX is doing perfectly based on the limited information that we have: it's AMD tech, which is largely what the XSX is composed of, and there's precedent for its use in this fashion in the form of the ProSSG. Future PCs with a similarly evolved HBCC controller could potentially do the same, but with DRAM acting as an extra level of cache between the SSD and VRAM to make up for any shortcomings of lower-end PC I/O.
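    To make that concrete, here is a minimal sketch of that kind of tiered residency. This is my own illustration, not AMD's actual HBCC logic, and every name in it is invented: a page table tracks whether each page of one unified address space currently lives in VRAM (fast segment), a DRAM staging cache, or only on the SSD, and pages get promoted as they are touched.

    ```cpp
    // Hypothetical sketch only: promote a page one tier per access, SSD -> DRAM -> VRAM.
    #include <cstdint>
    #include <iostream>
    #include <unordered_map>

    enum class Tier { Ssd, Dram, Vram };

    class PagedResidency {
    public:
        // Returns the tier this access was served from, then promotes the page.
        Tier touch(uint64_t page) {
            Tier& t = table_.try_emplace(page, Tier::Ssd).first->second;
            Tier served = t;
            if (t == Tier::Ssd)       t = Tier::Dram;  // stage into DRAM after a miss
            else if (t == Tier::Dram) t = Tier::Vram;  // hot page, promote to VRAM
            return served;
        }
    private:
        std::unordered_map<uint64_t, Tier> table_;  // one entry per resident page
    };

    int main() {
        PagedResidency res;
        for (int i = 0; i < 3; ++i) {
            Tier t = res.touch(42);  // repeated access to the same page
            std::cout << "access " << i << " served from "
                      << (t == Tier::Vram ? "VRAM" : t == Tier::Dram ? "DRAM" : "SSD")
                      << "\n";
        }
    }
    ```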
     
  4. dobwal

    Legend Veteran

    Joined:
    Oct 26, 2005
    Messages:
    5,436
    Likes Received:
    1,498
    Makes me wonder how HBCC (if the consoles employ similar functionality) really works. Does the DRAM actually mimic a traditional cache? It would seem to me that if you wanted to avoid writes to the SSD as much as possible, that would seriously influence the cache design of the DRAM/SSD portions of the hierarchy.
     
    #84 dobwal, Jul 16, 2020
    Last edited: Jul 16, 2020
  5. t0mb3rt

    Joined:
    Jun 8, 2020
    Messages:
    8
    Likes Received:
    8
    From one of James Stanard's recent tweets, he seems to allude to the "100GB of assets" wording simply referring to the fact that the average game install size is around 100GB.
     
  6. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    7,841
    Likes Received:
    1,160
    Location:
    Guess...
    From what I can tell from the Vega whitepaper, HBCC specifically allows the game to see the SSD and video memory as a single unified pool of memory. The HBCC controller automatically moves the most recently or commonly used pages from the "slow memory segment" (SSD) to the "fast memory segment" (VRAM) to keep things running as fast as possible, but pages can be called directly from the slow memory segment if for whatever reason they haven't been pre-cached in the fast memory segment.

    This coupled with relatively fast, low latency access to the slow memory segment (SSD) would certainly fit the Microsoft description of what the XSX is doing with "100GB of instantly accessible game data" given the presumed game size of 100GB.

    The game effectively sees the entire SSD as VRAM.
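    As a loose everyday analogue (not necessarily how the console implements it), plain OS memory mapping gives an application the same illusion of storage-as-memory: the whole file becomes one flat address range and the OS pulls individual pages off the drive only when they are first touched. The file name and details below are just for illustration.

    ```cpp
    // POSIX sketch: map a large asset file and demand-fault a single page from it.
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>
    #include <cstdint>
    #include <cstdio>

    int main() {
        int fd = open("assets.bin", O_RDONLY);            // hypothetical asset file
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        fstat(fd, &st);

        // One contiguous virtual range backed by the whole file; nothing is read yet.
        auto* base = static_cast<const uint8_t*>(
            mmap(nullptr, static_cast<size_t>(st.st_size), PROT_READ, MAP_PRIVATE, fd, 0));
        if (base == MAP_FAILED) { perror("mmap"); return 1; }

        // Touching a byte deep inside the file faults in just that page on demand,
        // much like a page being fetched from the "slow memory segment" on a miss.
        uint64_t mid = static_cast<uint64_t>(st.st_size) / 2;
        printf("byte at offset %llu = %u\n",
               static_cast<unsigned long long>(mid), static_cast<unsigned>(base[mid]));

        munmap(const_cast<uint8_t*>(base), static_cast<size_t>(st.st_size));
        close(fd);
        return 0;
    }
    ```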
     
  7. t0mb3rt

    Joined:
    Jun 8, 2020
    Messages:
    8
    Likes Received:
    8
    Is there some special way the game files have to be packaged on the SSD for this all to work?
     
  8. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    7,841
    Likes Received:
    1,160
    Location:
    Guess...
    I can't see why there would be.
     
    PSman1700 likes this.
  9. Ronaldo8

    Newcomer

    Joined:
    May 18, 2020
    Messages:
    233
    Likes Received:
    232
    It seems that the DirectStorage riddle has been resolved:

    We have long suspected that MS has figured out a way of memory-mapping a portion of the SSD and reducing the I/O overhead considerably. I looked for research on SSD storage from Xbox research members with no success, until I realised that I was looking in the wrong place to begin with. Microsoft Research counts Anirudh Badam among its ranks as a Principal Research Scientist. He has a paper published by IEEE on the concept of FlashMap, which subsumes three layers of address translation into one (https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/flashmap_isca2015.pdf). The claimed performance gain is a reduction in SSD access latency of up to 54%.
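    To illustrate what "subsuming three layers of address translation into one" means in practice (my own toy illustration, not the data structures from the paper), compare a layered lookup chain with a single combined map:

    ```cpp
    // Toy comparison: three chained translation tables vs one combined table.
    #include <cstdint>
    #include <unordered_map>

    struct FlashPage { uint32_t channel, block, page; };

    // Layered translation: virtual page -> file offset -> logical block -> flash page.
    struct LayeredTranslation {
        std::unordered_map<uint64_t, uint64_t> vpageToFileOffset;
        std::unordered_map<uint64_t, uint64_t> fileOffsetToLba;
        std::unordered_map<uint64_t, FlashPage> lbaToFlash;

        FlashPage translate(uint64_t vpage) {
            uint64_t off = vpageToFileOffset.at(vpage);   // page table / mmap layer
            uint64_t lba = fileOffsetToLba.at(off);       // filesystem layer
            return lbaToFlash.at(lba);                    // flash translation layer
        }
    };

    // Combined translation: one lookup straight from virtual page to flash location,
    // which is where the latency saving would come from.
    struct CombinedTranslation {
        std::unordered_map<uint64_t, FlashPage> vpageToFlash;
        FlashPage translate(uint64_t vpage) { return vpageToFlash.at(vpage); }
    };

    int main() {
        CombinedTranslation combined;
        combined.vpageToFlash[7] = {0, 12, 3};
        return combined.translate(7).page == 3 ? 0 : 1;  // single lookup
    }
    ```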
     
    milk, Nisaaru, function and 8 others like this.
  10. PSman1700

    Veteran Newcomer

    Joined:
    Mar 22, 2019
    Messages:
    2,743
    Likes Received:
    926
    Yet many thought they knew better than MS and were sure they couldn't resolve this :p
     
  11. DSoup

    DSoup meh
    Legend Veteran Subscriber

    Joined:
    Nov 23, 2007
    Messages:
    12,794
    Likes Received:
    8,190
    Location:
    London, UK
    Currently many assets are compressed and bundled into .PAK files (very similar to .zip) that exist in the NTFS filesystem, so to access a particular asset you are dealing with both the NTFS filesystem and file I/O. I'd wager that not putting all the assets into massive .PAK files that need to be navigated would improve latency. How much though? ¯\_(ツ)_/¯
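    To sketch the indirection involved (the pack format here is invented purely for illustration), every asset read goes through the pack's own directory before the filesystem even gets involved:

    ```cpp
    // Sketch of reading one asset out of a monolithic pack file.
    #include <cstdint>
    #include <fstream>
    #include <string>
    #include <unordered_map>
    #include <vector>

    struct PakEntry { uint64_t offset; uint64_t size; };

    class PakFile {
    public:
        explicit PakFile(const std::string& path) : file_(path, std::ios::binary) {
            // A real engine would parse the pack's directory from its header here;
            // assume dir_ has been filled in by that step.
        }

        // Two levels of lookup per asset: the pack directory, then a seek + read that
        // the filesystem resolves through its own metadata (e.g. the NTFS MFT).
        std::vector<char> read(const std::string& name) {
            const PakEntry& e = dir_.at(name);
            std::vector<char> buf(e.size);
            file_.seekg(static_cast<std::streamoff>(e.offset));
            file_.read(buf.data(), static_cast<std::streamsize>(e.size));
            return buf;
        }

    private:
        std::ifstream file_;
        std::unordered_map<std::string, PakEntry> dir_;
    };
    ```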
     
    function likes this.
  12. Allandor

    Regular Newcomer

    Joined:
    Oct 6, 2013
    Messages:
    386
    Likes Received:
    192
    You wouldn't use any packaging at all for optimal performance. Well, maybe a small package for things that must always be loaded. As long as you don't get any bandwidth problems (which I don't expect at all at the high bandwidth the new consoles will give), there is no need to pack anything at all. Packing would only lead to wasted bandwidth (because you will load things you don't need) or to wasted CPU cycles. "Files" (or whatever we want to call the data) can still be compressed individually. E.g. textures that get divided into small chunks can still be compressed individually, or you use one file and only load the parts of the texture you need, but then you can't compress the file without wasting CPU or bandwidth when you only want to read small parts of it.
    This would be overkill for an HDD but not for an SSD.
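    A rough sketch of the layout being described, with invented structures: tiles compressed independently and indexed by a chunk table, so a streamer reads and decompresses only the tiles it needs rather than inflating a whole packed file.

    ```cpp
    // Sketch: read one independently compressed tile of a texture, nothing more.
    #include <cstdint>
    #include <fstream>
    #include <vector>

    struct ChunkEntry {
        uint64_t fileOffset;      // where this compressed tile starts on disk
        uint32_t compressedSize;  // bytes to read for this tile
        uint32_t rawSize;         // bytes after decompression (e.g. a 64 KiB tile)
    };

    std::vector<char> readTile(std::ifstream& file, const ChunkEntry& e) {
        std::vector<char> compressed(e.compressedSize);
        file.seekg(static_cast<std::streamoff>(e.fileOffset));
        file.read(compressed.data(), static_cast<std::streamsize>(compressed.size()));

        std::vector<char> raw(e.rawSize);
        // Per-tile decompression (zlib, Kraken, BCPack, ...) would go here; keeping
        // each tile an independent unit is what makes partial reads cheap.
        return raw;
    }
    ```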
     
    DSoup likes this.
  13. Allandor

    Regular Newcomer

    Joined:
    Oct 6, 2013
    Messages:
    386
    Likes Received:
    192
    So, that is why the OS still needs that much memory?

    Or is that the reason why they say 100GB and not 1TB ^^
     
    BRiT likes this.
  14. DSoup

    DSoup meh
    Legend Veteran Subscriber

    Joined:
    Nov 23, 2007
    Messages:
    12,794
    Likes Received:
    8,190
    Location:
    London, UK
    Yeah, this is a tricky one. If Series X's filesystem is predicated on NTFS, then pulling tens of thousands of individual asset files out of .PAK files for every game would result in massive filesystem bloat. I think Microsoft probably have some sensible middle ground.
     
    BRiT likes this.
  15. function

    function None functional
    Legend Veteran

    Joined:
    Mar 27, 2003
    Messages:
    5,368
    Likes Received:
    2,877
    Location:
    Wrong thread
    Superb find!

    This is more or less what I was trying to suggest earlier in this thread, as it would explain the lower overhead, a finite size for the mapped space (the 100GB) and, crucially, the talk of low latency. Except this is a lot more detailed. And done by people who are actually clever.

    My speculation a little while back was that the "100 GB" comment was entirely due to limiting the amount of reserved OS space required. ~200MB would be a small price to pay in terms of reserved memory, and as it's in the OS space the developer never needs to worry about it. Thinking about it, being able to store parts of the dash and in-game user interface in a similar fashion might well actually allow a smaller OS reserve overall. It's down from 3GB to 2.5GB despite potentially storing a "Flash Map" for the game.
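    As a back-of-envelope check on that ~200MB figure (entirely my own assumptions about page and entry sizes, nothing confirmed), a table mapping 100GB at 4KiB granularity with 8 bytes per entry lands in the same ballpark:

    ```cpp
    // ~100 GB mapped / 4 KiB pages * 8 bytes per entry ≈ 186 MiB of table.
    #include <cstdint>
    #include <iostream>

    int main() {
        constexpr uint64_t mapped     = 100ull * 1000 * 1000 * 1000;  // 100 GB mapped
        constexpr uint64_t pageSize   = 4096;                          // 4 KiB pages
        constexpr uint64_t entryBytes = 8;                             // per-page entry
        constexpr uint64_t tableBytes = (mapped / pageSize) * entryBytes;
        std::cout << tableBytes / (1024.0 * 1024.0) << " MiB\n";       // ~186 MiB
    }
    ```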

    And I'm going to hold to my guess that they'd build the "Flash Map" during install, and simply load it along with the game, or when you switch to a "recently played" game from a resume slot.

    Shit, I'm missing the big presentation!
     
    Allandor, DSoup and PSman1700 like this.
  16. DSoup

    DSoup meh
    Legend Veteran Subscriber

    Joined:
    Nov 23, 2007
    Messages:
    12,794
    Likes Received:
    8,190
    Location:
    London, UK
    I'd suggest that unless Microsoft have plans for Series X that we don't know about, like allowing it to run Windows, they just don't need the full implementation and metadata-storage bloat of their filesystem. You can save a ton of space, and increase I/O, by chucking out everything you don't need.
     
    function likes this.
  17. MistaPi

    Regular

    Joined:
    Jun 12, 2002
    Messages:
    371
    Likes Received:
    11
    Location:
    Norway
    Regarding SFS, does the "approximately 2.5x the effective I/O throughput and memory usage above and beyond the raw hardware capabilities on average" statement come on top of the 4.8GB/s compressed number?
     
  18. BRiT

    BRiT Verified (╯°□°)╯
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    16,022
    Likes Received:
    15,010
    Location:
    Cleveland
    Yes. It is multiplicative, in that it minimizes how much data you need to send.

    Assume you originally need to load 2.5 GB of data without compression and without SFS; here is how they interact...

    So with compression of 2x, that original 2.5 GB of data may be down to 1.25 GB of data.
    So with SFS, that original 2.5 GB of data may be down to 1 GB of data.
    So with SFS and compression, that 2.5 GB of data may be down to 0.5 GB of data.
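    Restating that as arithmetic, the two reductions simply multiply (the 2x compression ratio is an assumption for the example):

    ```cpp
    // The bytes actually pulled off the SSD shrink by both factors together.
    #include <iostream>

    int main() {
        const double neededGB     = 2.5;  // raw asset data the scene requires
        const double compression  = 2.0;  // assumed typical compression ratio
        const double sfsReduction = 2.5;  // "approximately 2.5x" from SFS

        std::cout << "compression only: " << neededGB / compression  << " GB\n";  // 1.25
        std::cout << "SFS only:         " << neededGB / sfsReduction << " GB\n";  // 1
        std::cout << "both:             " << neededGB / (compression * sfsReduction)
                  << " GB\n";                                                     // 0.5
    }
    ```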
     
    Silenti and MistaPi like this.
  19. MistaPi

    Regular

    Joined:
    Jun 12, 2002
    Messages:
    371
    Likes Received:
    11
    Location:
    Norway
    Thanks for your answer. Is SFS something that adds more work for developers, in that they have to manually manage which textures to load and when, based on the sampler feedback?

    I guess, given how effective SFS is, the vast majority of the data being streamed from the SSD is textures?
     
    #99 MistaPi, Jul 26, 2020
    Last edited: Jul 26, 2020
  20. disco_

    Newcomer

    Joined:
    Jan 4, 2020
    Messages:
    246
    Likes Received:
    189
    6GB/s, the max number they advertised, is "approximately 2.5x the effective I/O throughput and memory usage above and beyond the raw hardware capabilities on average". Is that just a coincidence? Asking those who know.
     