Velocity Architecture - Limited only by asset install sizes

It's on demand streaming because it is purely reactive and totally deterministic.
Yes. I agree with the 'on demand' bit. I disagree with the 'instant' bit. It takes a moment to fulfil that demand. But 'instant' isn't a scientific term, and relative to how long it'd take to fetch that same data from an optical disk, it is instant. We had the same discussion regarding Cerny talking about PS5's data access being 'instant'. Also, the '100 GB instant access' claim has inspired theories on how that could be achieved, including this thread. The concept of 100 GB accessible as quickly as DRAM, say, is a misunderstanding derived from the literal interpretation.
 
Well, yes, so is everything else from both Sony and MS so far.
Some of the numbers are real and important for technical discussions. In this case, getting to the bottom of the '100 GB' figure is important. I think that's done now and the discussion concluded unless there's something new suggesting otherwise.
 
Yes. I agree with the 'on demand' bit. I disagree with the 'instant' bit. It takes a moment to fulfil that demand. But 'instant' isn't a scientific term, and relative to how long it'd take to fetch that same data from an optical disk, it is instant. We had the same discussion regarding Cerny talking about PS5's data access being 'instant'. Also, the '100 GB instant access' claim has inspired theories on how that could be achieved, including this thread. The concept of 100 GB accessible as quickly as DRAM, say, is a misunderstanding derived from the literal interpretation.

Maybe it's instant most of the time? When SFS directs the loading of the higher mip before it would cause a visual issue? :runaway: :p

I agree. It seems a good bit more about syntactic sugar, so developers don't have to worry about loading assets using common IO methods, than it is about expanding the memory pool size with ultra-low-latency figures through technical magic.
 
Yes. I agree with the 'on demand' bit. I disagree with the 'instant' bit. It takes a moment to fulfil that demand. But 'instant' isn't a scientific term, and relative to how long it'd take to fetch that same data from an optical disk, it is instant. We had the same discussion regarding Cerny talking about PS5's data access being 'instant'. Also, the '100 GB instant access' claim has inspired theories on how that could be achieved, including this thread. The concept of 100 GB accessible as quickly as DRAM, say, is a misunderstanding derived from the literal interpretation.

Never said it was instantaneous or as quick as DRAM. In I/O terms, one frametime is an eternity, and MS has inbuilt texture filters to address this particular aspect. It remains to be seen whether the GPU can indeed be fed directly from the SSD (my reason for doubting it is that I have zero knowledge of a byte-addressable SSD in actual existence. NVIDIA has no qualms about direct DMA from SSD to GPU through GPUDirect Storage in RDMA applications.)
 
Some of the numbers are real and important for technical discussions. In this case, getting to the bottom of the '100 GB' figure is important. I think that's done now and the discussion concluded unless there's something new suggesting otherwise.

How so? Do we actually know what DirectStorage (which addresses the 100 GB part of the Velocity Architecture) actually is after Ronald's latest bout of fluff prose? Can you actually describe it in technical terms? If not, I don't see how any conclusion can be drawn from it. Anyway, the real memory multiplier is SFS when benchmarking against PCs with equal SSD throughput. I usually make fun of Jason Ronald's PR, but his prose destined for the non-technical public is backed by real technology, just like his 2.5x multiplier claim, which frankly is the boldest one.
 
How so? Do we actually know what DirectStorage (which addresses the 100 GB part of the Velocity Architecture)
Realistically, I think yes.

If MS had a feature that made their IO faster than just low-level SSD access, they'd have been explicit in talking about it. They've devoted far and away the most PR words to SFS as a differentiator. If DS were doing more than just being an optimised file system, we'd have heard something about it by now beyond unquantifiable remarks regarding speed.

This is how DirectStorage was described in March:

DirectStorage – DirectStorage is an all new I/O system designed specifically for gaming to unleash the full performance of the SSD and hardware decompression. It is one of the components that comprise the Xbox Velocity Architecture. Modern games perform asset streaming in the background to continuously load the next parts of the world while you play, and DirectStorage can reduce the CPU overhead for these I/O operations from multiple cores to taking just a small fraction of a single core; thereby freeing considerable CPU power for the game to spend on areas like better physics or more NPCs in a scene. This newest member of the DirectX family is being introduced with Xbox Series X and we plan to bring it to Windows as well.
Nothing there about a dedicated portion of data; just accessing the whole of the game data far more efficiently than the 30-year-old FS.

In light of no-one anywhere presenting even a hint of a real technical explanation of 'more than 100 GB', there's little reason to think there is more.
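
Reading the quoted description literally, the concrete claim is about how requests are issued, not about any special pool of data. A rough sketch of that reading (purely illustrative types; nothing here is MS's actual API): the game queues up lots of tiny asset reads and hands them off in one go, with decompression handled off the CPU, instead of paying per-request CPU cost through the old file system path.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical stand-ins, NOT the real DirectStorage API.
struct AssetRequest {
    uint64_t fileOffset;      // where the compressed tile lives in the package
    uint32_t compressedSize;  // e.g. one ~64 KB texture tile
    void*    destination;     // where the decompressed bytes should land
};

struct StreamingQueue {
    std::vector<AssetRequest> pending;

    void enqueue(const AssetRequest& r) { pending.push_back(r); }

    // One submit hands the whole batch to the storage + decompression path;
    // the CPU cost is per batch, not per request, which is the "fraction of
    // a single core" claim in the quote above.
    void submit() {
        std::printf("submitting %zu small reads in one batch\n", pending.size());
        pending.clear();
    }
};

int main() {
    StreamingQueue queue;
    std::vector<char> tile(64 * 1024);

    // A frame's worth of tiny tile reads: cheap to enqueue, one submit.
    for (int i = 0; i < 256; ++i)
        queue.enqueue({uint64_t(i) * 65536, 65536, tile.data()});

    queue.submit();
}
```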
 
However, this is a marketing piece. Loading a mip level on demand, just in time, is a matter of latency, not bandwidth.

This is in a piece with the words "instant access to 100 GB". Instant here isn't a technical measure. I don't know that 'just in time' should be taken verbatim. SFS blends mip levels as I understand it; it doesn't add more than SF does. And sampler feedback happens during drawing, at which point it's too late to fetch a higher mip level, because you need that texture in RAM now as it's being drawn. Hence the desire for SFS to blend between streamed textures; that would be a redundant feature if the correct mip were always loaded.
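
To make that argument concrete, here's a toy CPU-side sketch of the loop I'm describing (my illustration of the general technique, not MS's actual SFS implementation): feedback from drawing tells you which mip was wanted, the data arrives a frame or more later, and in the meantime you sample the best resident mip and cross-fade when the finer one lands.

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    int   requestedMip = 2;   // what sampler feedback recorded during drawing
    int   residentMip  = 5;   // finest mip currently in RAM (higher = coarser)
    float blend        = 1.f; // cross-fade progress toward the newest mip

    for (int frame = 0; frame < 6; ++frame) {
        // Can only sample what is actually resident *now*.
        int sampleMip = std::max(requestedMip, residentMip);
        std::printf("frame %d: wanted mip %d, sampling mip %d, blend %.2f\n",
                    frame, requestedMip, sampleMip, blend);

        if (residentMip > requestedMip) {
            --residentMip;   // streaming delivered one finer level this frame
            blend = 0.f;     // restart the cross-fade so the new level fades in
        } else {
            blend = std::min(1.f, blend + 0.25f);  // hide the pop over a few frames
        }
    }
}
```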

I trust the 2.5x effective IO (for textures) from the selective loading. That's a realistic improvement. The whole '100 GB instant access' is just fluff. It's overall a fast SSD-driven IO system, collectively called the 'Velocity Architecture', that allows the game data ("100 GB") to be accessed freely at low latency. The 100 GB figure isn't a measure of anything and doesn't represent some specific 100 GB portion of VM or caching or clever paging.
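
For what it's worth, the 2.5x reads straightforwardly as a sampling-efficiency claim (my interpretation; MS hasn't broken the number down): if, on average, only around 40% of the texture data a conventional streamer would pull in is ever actually sampled, then fetching just the sampled portions gives 1 / 0.4 = 2.5x the effective texture throughput from the same raw SSD bandwidth.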

Why is it fluff? I just look at it as a preloaded page file. Correct me if I'm wrong, but currently GPUs allocate a portion of system memory and of HDD/SSD storage for their virtual memory system. Anything that ends up in the pagefile has to be initially processed by the CPU and pulled into RAM.

If you preallocate SSD storage and prefill the pagefile with data, then that data doesn't need to be processed by the CPU at runtime. Hence the use of the term "instant access".
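
Something like ordinary memory mapping is the mechanism I have in mind (a minimal Win32 sketch of the general idea; the file name is made up and none of this is confirmed about the Velocity Architecture): the asset file already sits on the SSD, you map it, and the OS pages data in on first touch without an explicit read-and-copy pass on the CPU.

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    // "assets.pak" is a placeholder name for a pre-built asset package on the SSD.
    HANDLE file = CreateFileA("assets.pak", GENERIC_READ, FILE_SHARE_READ, nullptr,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (file == INVALID_HANDLE_VALUE) return 1;

    HANDLE mapping = CreateFileMappingA(file, nullptr, PAGE_READONLY, 0, 0, nullptr);
    const char* data = mapping
        ? static_cast<const char*>(MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0))
        : nullptr;
    if (!data) return 1;

    // Touching the mapped range is what triggers the page-in: no ReadFile call,
    // no explicit CPU copy loop, but still nowhere near DRAM latency.
    std::printf("first byte of the package: %d\n", data[0]);

    UnmapViewOfFile(data);
    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}
```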
 
Why is it fluff? I just look at it as a preloaded page file. Correct me if I'm wrong, but currently GPUs allocate a portion of system memory and of HDD/SSD storage for their virtual memory system. Anything that ends up in the pagefile has to be initially processed by the CPU and pulled into RAM.

If you preallocate SSD storage and prefill the pagefile with data, then that data doesn't need to be processed by the CPU at runtime. Hence the use of the term "instant access".
It was fluff because they said the same thing they've been saying for the past few months. No information about what extra hardware exists in terms of, say, controllers, but I guess that's what the Hot Chips talk in August is for.
 
How so? Do we actually know what DirectStorage (which addresses the 100 GB part of the Velocity Architecture) actually is after Ronald's latest bout of fluff prose? Can you actually describe it in technical terms? If not, I don't see how any conclusion can be drawn from it. Anyway, the real memory multiplier is SFS when benchmarking against PCs with equal SSD throughput. I usually make fun of Jason Ronald's PR, but his prose destined for the non-technical public is backed by real technology, just like his 2.5x multiplier claim, which frankly is the boldest one.

I was hoping he would address the virtual RAM / 100 GB thing but he did not. Hopefully they'll address it sometime soon.
 
Why is it fluff?
You think describing the storage system of a computer as its 'soul' isn't fluff? ;)

Explained more here: https://forum.beyond3d.com/posts/2139679/

If you preallocate SSD storage and prefill the pagefile with data, then that data doesn't need to be processed by the CPU at runtime. Hence the use of the term "instant access".
And if you are doing something clever like that, why not say so? You and everyone else can present theories, and you're of course free to do so, but this piece doesn't add legitimacy to them because it's not introducing any insight. It's repeating numbers we've already had. The only difference between this and previous pieces is that it's focussed on the VA, so you'd think it'd cover all its benefits, but it hasn't mentioned anything about clever load systems beyond DirectStorage, which anyone's free to imagine any Special Sauce for.
 
You think describing the storage system of a computer as its 'soul' isn't fluff? ;)

Explained more here: https://forum.beyond3d.com/posts/2139679/

And if you are doing something clever like that, why not say so? You and everyone else can present theories, and you're of course free to do so, but this piece doesn't add legitimacy to them because it's not introducing any insight. It's repeating numbers we've already had. The only difference between this and previous pieces is that it's focussed on the VA, so you'd think it'd cover all its benefits, but it hasn't mentioned anything about clever load systems beyond DirectStorage, which anyone's free to imagine any Special Sauce for.

No. It's just an analogy which is applicable to both the PS5 and XSX. If the chip is the "heart" then the memory system is the "soul", which makes sense for any computing device. The biggest improvement this gen compared to last gen comes from the memory system. Therefore it's easy for me to understand why MS as well as Sony are marketing the improvements on this aspect of their consoles. Outside of just jumping to an SSD, both MS and Sony have poured resources into designing their memory systems.

How often do we see Sony or MS willing to provide a truly deep dive into their hardware? Outside of leaks, Hot Chips and random tidbits of info or slides that make it out into the wild, information put forth by MS or Sony tends to be very superficial in nature. It's just a byproduct of the market.
 
No. It's just an analogy which is applicable to both the PS5 and XSX. If the chip is the "heart" then the memory system is the "soul", which makes sense for any computing device.

So everyone who does not use instant power resume kills their console every time it's turned off. :runaway:

Just being a bit silly.
 
No. It's just an analogy which is applicable to both the PS5 and XSX.
For marketing purposes. It's a piece not designed to inform about the product, but to sell the product. The choice of words isn't to lay bare its inner workings so we can understand them, but to make the console sound appealing on various levels. It's not a piece to be discussed in a B3D Technology Thread so much as a B3D Console Industry thread. ;)

To understand the tech, we need to go to something like a DF interview or a dev conference. Those aren't fluff pieces designed to be attractively snuggly; they are... wire wool pieces?
 
For marketing purposes. It's a piece not designed to inform about the product, but to sell the product. The choice of words isn't to lay bare its inner workings so we can understand them, but to make the console sound appealing on various levels. It's not a piece to be discussed in a B3D Technology Thread so much as a B3D Console Industry thread. ;)

To understand the tech, we need to go to something like a DF interview or a dev conference. Those aren't fluff pieces designed to be attractively snuggly; they are... wire wool pieces?

It's to do both. Not everybody interested in these new consoles is going to be a forum dweller like us. Like if I go online and want to buy some pots for the kitchen, I might want to know what's relevant about the marketed features (non-stick surface) of a specific set of pots, but I'm not going to read a white paper that dives into the actual chemistry.
 
It's to do both. Not everybody interested in these new consoles is going to be a forum dweller like us.
Is your complaint that I used the word 'fluff'? Do you consider that derogatory? This is a B3D technical thread in a B3D technical forum. The quality of our reference materials is important in understanding how to interpret the language to gain a technical understanding. As this piece is not designed for forum dwellers like us, it doesn't have much technical merit in this discussion, no?

I don't get what you're arguing. I'm pointing out that because this isn't a technical document, "just in time" doesn't necessarily mean "with minimal latency" and is likely simplified language to communicate with the audience "quickly" in relation to how things were done before.

I might want to know what's relevant about the marketed features (non-stick surface) of a specific set of pots, but I'm not going to read a white paper that dives into the actual chemistry.
In a technical discussion about the potential non-stick chemistry of a new range of pots, would you look to the language of the marketing materials to try and understand what they are doing?
 
You think describing the storage system of a computer as its 'soul' isn't fluff? ;)

Explained more here: https://forum.beyond3d.com/posts/2139679/

And if you are doing something clever like that, why not say so? You and everyone else can present theories, and you're of course free to do so, but this piece doesn't add legitimacy to them because it's not introducing any insight. It's repeating numbers we've already had. The only difference between this and previous pieces is that it's focussed on the VA, so you'd think it'd cover all its benefits, but it hasn't mentioned anything about clever load systems beyond DirectStorage, which anyone's free to imagine any Special Sauce for.

I think you've clearly addressed the lack of clear, new and detailed information. A lot of people still think there is some sort of separate 100 GB of storage or something weird like that. Although the way they've mentioned virtual RAM makes me believe they've added extra hardware to make this much more efficient. I thought they'd add an HBCC for finer granularity in paging and RAM utilisation for all data. For example, when Phil says that the developer doesn't have to think about the limits of RAM when developing a game, it just screams HBCC, and that's not a trivial statement. Request any file and the controller will determine which page is actually in physical memory. And I know game devs like low-level control of the hardware, and AFAIK AMD's HBCC is programmable. If there's one thing where I think they need to add more clarification, it's the virtual RAM. If it was just normal virtual RAM they shouldn't have made it a marketing point.
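
Roughly what I mean by the controller determining which page is actually in physical memory, as a toy sketch (my illustration of the general HBCC idea, not AMD's or MS's actual design): the game addresses the whole asset space as if it were memory, and a residency table decides whether a touch hits RAM or has to pull a page from the SSD first.

```cpp
#include <cstdint>
#include <cstdio>
#include <unordered_map>

constexpr uint64_t kPageSize = 64 * 1024;             // granularity is a guess

std::unordered_map<uint64_t, const void*> residency;  // virtual page -> RAM page

const void* fetchPageFromSsd(uint64_t page) {         // stand-in for the real I/O path
    std::printf("page fault: streaming page %llu from SSD\n",
                static_cast<unsigned long long>(page));
    static char ram[kPageSize];
    return ram;
}

// The controller's job: translate a "virtual RAM" address into resident data.
const void* access(uint64_t virtualAddress) {
    uint64_t page = virtualAddress / kPageSize;
    auto it = residency.find(page);
    if (it == residency.end())                          // not resident: fault and fetch
        it = residency.emplace(page, fetchPageFromSsd(page)).first;
    return it->second;
}

int main() {
    access(5ull * 1024 * 1024 * 1024);  // first touch pages in from the SSD
    access(5ull * 1024 * 1024 * 1024);  // second touch hits RAM, no I/O
}
```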
 
As per Radeon's Pro SSG card, it could be similar to how they potentially use their onboard NVMe... a 100 GB memory-mapped space.

"AMD indicated that it is not using the NAND pool as a memory-mapped space currently, but it can do so, and it is a logical progression of the technology. The company also commented that some developers are requesting a flat memory model. Even though the company provides the entire pool as storage, in the future it can partition a portion of the pool, or all of it, to operate as memory, which will open the door to many more architectural possibilities."
 