Velocity Architecture - Limited only by asset install sizes

6GB/s, the max number they advertised, is "approximately 2.5x the effective I/O throughput and memory usage above and beyond the raw hardware capabilities on average". Is that just a coincidence? Asking those who know.

Yes, it's just a coincidence; read BRiT's post above.
 
6GB/s, the max number they advertised, is "approximately 2.5x the effective I/O throughput and memory usage above and beyond the raw hardware capabilities on average". Is that just a coincidence? Asking those who know.

It was said that 6 GB/s is a conservative estimate using typical compression levels. Perhaps one of those Twitter discussion threads.
 
Yes. It is multiplicative, in that it minimizes how much data you need to send.

Assume you originally need to load 2.5 GB of data without compression and without SFS; here is how they interact (sketched in code below)...

With 2x compression, that original 2.5 GB of data may be down to 1.25 GB.
With SFS, that original 2.5 GB of data may be down to 1 GB.
With SFS and compression, that 2.5 GB of data may be down to 0.5 GB.
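
A minimal arithmetic sketch of how the two reductions multiply; the ratios are the illustrative figures above, not measured values:

```python
# Illustrative only: how the two reductions multiply.
# The ratios are the example figures above, not measured values.
original_gb = 2.5
compression_ratio = 2.0  # e.g. LZ/BCPack on the I/O block
sfs_ratio = 2.5          # only the sampled portion of each texture is read

print(original_gb / compression_ratio)                # 1.25 GB: compression only
print(original_gb / sfs_ratio)                        # 1.0 GB: SFS only
print(original_gb / (compression_ratio * sfs_ratio))  # 0.5 GB: both combined
```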

How???
How do you compress textures without reading them uncompressed?
And if you are reading them uncompressed, what's the use of compressing them if they are already in memory?
 

Those numbers are for illustrative and comparative purposes, showing what things would look like without one aspect or another being used.

To illustrate how the techniques combine, I provided three different scenarios.

Scenario A) Developer did not use compression or SFS
Scenario B) Developer only used compression
Scenario C) Developer used compression and SFS

In real-world games, the developer should never use Scenario A. They would always have the textures compressed and packed. So games would fall under Scenario B or C.

The game would also have its textures already compressed and packed into small chunks. I believe we know from a developer Twitter discussion that the segment/chunk size is around 64 KB. So if the GPU determines it only requires access to 10 segments out of a 64-segment texture, then only those 10 segments need to be read, not all 64.

In Scenario B, all 64 segments would be read in (4 MB).
In Scenario C, only the needed 10 segments would be read in (640 KB); a sketch follows below.
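
A minimal sketch of segment-granular reads, assuming a hypothetical packed file of fixed 64 KB chunks; the names and layout are illustrative, not the actual on-disk format:

```python
import io

# Hypothetical packed texture file of fixed 64 KB segments; the layout and
# names are illustrative, not the actual on-disk format.
SEGMENT_SIZE = 64 * 1024

def read_segments(f, needed_ids):
    """Read only the segments the GPU's feedback marked as needed."""
    out = {}
    for seg_id in needed_ids:
        f.seek(seg_id * SEGMENT_SIZE)
        out[seg_id] = f.read(SEGMENT_SIZE)
    return out

# Scenario C: 10 of 64 segments -> 640 KB read instead of the full 4 MB.
texture = io.BytesIO(bytes(64 * SEGMENT_SIZE))
needed = [0, 1, 5, 9, 12, 20, 33, 40, 41, 63]
chunks = read_segments(texture, needed)
print(sum(len(c) for c in chunks.values()) // 1024, "KB")  # 640 KB
```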
 
How???
How do you compress textures without reading them uncompressed?
And if you are reading them uncompressed, what's the use of compressing them if they are already in memory?
I'm not sure I understand the question.

You would have compressed data that you transfer, then process/decompress in memory.
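
A rough sketch of that flow, using Python's zlib purely as a software stand-in for the console's hardware decompressor:

```python
import zlib

# Sketch of "transfer compressed, decompress in memory". zlib is only a
# software stand-in here; on the console this is a fixed-function hardware
# block (LZ/BCPack), not CPU work.
def load_asset(path):
    with open(path, "rb") as f:
        compressed = f.read()           # only compressed bytes cross the bus
    return zlib.decompress(compressed)  # expanded once it is in memory
```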
 
It was said that 6 GB/s is a conservative estimate using typical compression levels. Perhaps one of those Twitter discussion threads.
That convo with the guy in charge of BCPack and the discussion that followed it? I was under the impression the outcome of that discussion was that BCPack allowed it to reach 1:1 compression and I/O got it to 2.5x.
 
That convo with the guy in charge of BCPack and the discussion that followed it? I was under the impression the outcome of that discussion was that BCPack allowed it to reach 1:1 compression and I/O got it to 2.5x.

No. There was a different convo talking about decompression limits.

And naturally I can't find it despite searching. :(
 
I'm not sure I understand the question.

You would have compressed data that you transfer, then process/decompress in memory.

First, an explanation. What I'm describing is what the YouTube channel XboxBr explained as being the way the Velocity Engine works:
SFS and its ID buffers connected to the GPU, and the 100 GB virtual RAM partition that would hold uncompressed textures as if they were in RAM.
Was the explanation correct? I don't know, and that is why I'm questioning it, because if it is, I don't see it working on compressed texture data.

That is why I asked the question.
The explanation seemed logical to me, since SFS is a GPU feature and as such works with GPU memory, and this partition is meant to work as virtual memory, keeping data in the same state as if it were in real RAM.
As such, this can give effective gains by only supplying (reading) the required parts of the textures to the GPU's real RAM.
But if the system works like that, it seems to pose a question: can these gains be applied to compressed data?
Because to compress data you would have to read the uncompressed data into RAM, and this would kill the need for compression, since the data would already be in RAM and the SSD transfer capacity already used. Besides, these hardware compressors are able to decompress in real time, but can they compress with the same efficiency? And can they compress and decompress in sequence while keeping things real time?

As I see it, the gains can only be applied to uncompressed data. And that was why I asked, so that if what was explained is wrong, or if what I understood is wrong, someone could explain the process to me.
 
You would need way more than 100 GB for textures if they were stored uncompressed. Like WAAAAAY more.

Regards,
SB
depends on the textures and the game :mrgreen:

But yes, you are right. We should also distinguish between texture compression and data compression. Texture compression is the same on storage as in memory. Data compression is general-purpose compression; before the data can be used, it must be decompressed.
Well, OK, there are also texture compression formats that are really just data compression optimized for textures, but I count those as data compression.
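
A small sketch of the distinction, using BC1 as the fixed-rate texture format and zlib as the general-purpose data compressor; both are just examples:

```python
import zlib

# Texture compression (BCn): fixed-rate and GPU-native, so the bytes are
# identical on disk and in memory. BC1 packs a 4x4 texel block into 8 bytes.
texels_per_block = 4 * 4
bc1_block_bytes = 8
rgba8_block_bytes = texels_per_block * 4
print(rgba8_block_bytes / bc1_block_bytes)  # 8.0 -> fixed 8:1 vs 32-bit RGBA

# Data compression (zlib as the example): variable-rate, and the bytes are
# unusable until they are decompressed.
payload = b"some asset bytes" * 100
packed = zlib.compress(payload)
assert zlib.decompress(packed) == payload
print(len(payload), "->", len(packed))  # ratio depends entirely on the data
```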
 
depends on the textures and the game :mrgreen:

But yes, you are right. We should also distinguish between texture compression and data compression. Texture compression is the same on storage as in memory. Data compression is general-purpose compression; before the data can be used, it must be decompressed.
Well, OK, there are also texture compression formats that are really just data compression optimized for textures, but I count those as data compression.

Of course. :) A small indie game likely doesn't have over 100 GB of uncompressed textures, whether in specific texture compression formats or otherwise.

Imagine something like RDR2 without any compression for its texture set. :oops:

Regards,
SB
 
First, an explanation. What I'm describing is what the YouTube channel XboxBr explained as being the way the Velocity Engine works:
SFS and its ID buffers connected to the GPU, and the 100 GB virtual RAM partition that would hold uncompressed textures as if they were in RAM.
Was the explanation correct? I don't know, and that is why I'm questioning it, because if it is, I don't see it working on compressed texture data.

That is why I asked the question.
The explanation seemed logical to me, since SFS is a GPU feature and as such works with GPU memory, and this partition is meant to work as virtual memory, keeping data in the same state as if it were in real RAM.
As such, this can give effective gains by only supplying (reading) the required parts of the textures to the GPU's real RAM.
But if the system works like that, it seems to pose a question: can these gains be applied to compressed data?
Because to compress data you would have to read the uncompressed data into RAM, and this would kill the need for compression, since the data would already be in RAM and the SSD transfer capacity already used. Besides, these hardware compressors are able to decompress in real time, but can they compress with the same efficiency? And can they compress and decompress in sequence while keeping things real time?

As I see it, the gains can only be applied to uncompressed data. And that was why I asked, so that if what was explained is wrong, or if what I understood is wrong, someone could explain the process to me.

Pretty sure it's not a separate partition. Willing to bet on that, actually! I'm also expecting that you can read this data from the game install, compressed.

SFS seems to work at the level of a mip page (a sampler miss on the page triggers its loading), and that makes a great deal of sense, including in how you use the SSD efficiently. Fetching a compressed page (at least at the level of DXTC, hopefully supporting BCPack too) is actually what you want in terms of speed and bandwidth used, and that seems to be exactly what XSX is set up to offer.

Hopefully we'll find out more at Hot Chips, but so far it looks like MS are working along the lines of accelerating access to "100GB" of assets in the install. And that should, I think, support various levels of compression (DXTC etc. on the GPU, LZ and BCPack on the SSD I/O block).
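
To make that concrete, a very rough sketch of feedback-driven streaming under those assumptions; every name here is hypothetical, not a real D3D12 or XSX API:

```python
# Every name here is hypothetical; this is not a real D3D12 or XSX API,
# just the shape of feedback-driven streaming as described above.
def decompress(blob):            # stand-in for the hardware decompressor
    return blob

def upload_to_gpu(page, data):   # stand-in for a tile/page upload
    pass

resident = set()  # pages currently in GPU memory

def on_frame(feedback_pages, fetch_page):
    """Fetch only pages that sampler feedback touched but are not resident."""
    for page in feedback_pages:
        if page not in resident:
            compressed = fetch_page(page)  # compressed read from the install
            upload_to_gpu(page, decompress(compressed))
            resident.add(page)
```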
 
It is not limited to 100 GB. We've covered this in the tech threads a couple weeks ago when the Xbox Blogs and Videos were released.

It's reasonable to assume it's not a separate partition because of that.
 
It was said that 6 GB/s is a conservative estimate using typical compression levels. Perhaps one of those Twitter discussion threads.

I saw that tweet. He said the 4.8 GB/s figure was conservative; the actual effective throughput will be higher than this. And Andrew Goossen did say it's over 6 GB/s in the Eurogamer interview, which lines up with 2.4 GB/s raw × 2.5.
 
Superb find!

This is more or less what I was trying to suggest earlier in this thread, as it would explain the lower overhead, a finite size for the mapped space (the 100 GB), and crucially the talk of low latency. Except this is a lot more detailed. And done by people who are actually clever.



My speculation a little while back was that the "100 GB" comment was entirely due to limiting the amount of reserved OS space required. ~200 MB would be a small price to pay in terms of reserved memory, and as it's in the OS space the developer never needs to worry about it. Thinking about it, being able to store parts of the dash and in-game user interface in a similar fashion might well allow a smaller OS reserve overall. It's down from 3 GB to 2.5 GB despite potentially storing a "Flash Map" for the game.

And I'm going to hold to my guess that they'd build the "Flash Map" during install, and simply load it along with the game, or when you switch to a "recently played" game from a resume slot.
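
Purely to illustrate that guess, such a "Flash Map" could be little more than a table from page index to where that page's compressed bytes sit in the install. Entirely speculative, not a documented XSX structure:

```python
# Entirely speculative: a "Flash Map" as a table from page index to the
# location of that page's compressed bytes in the install, built once at
# install time and loaded alongside the game.
def build_flash_map(segments):
    """segments: iterable of (page_index, file_offset, compressed_len)."""
    return {page: (offset, length) for page, offset, length in segments}

flash_map = build_flash_map([(0, 0, 31208), (1, 31208, 60011)])
offset, length = flash_map[1]  # where to read page 1, and how many bytes
```

At 64 KB pages, mapping 100 GB is about 1.6 million entries, so even a few dozen bytes per entry fits comfortably inside a ~200 MB reserve.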

Shit, I'm missing the big presentation!
So, I remember reading somewhere (but I can't find it for the life of me) that the XSX reserves 200MB (at least I think I read 200MB) of the "fast" RAM for the OS in addition to the 2.5GB of "slow RAM".

Could that have anything to do with this? Unless I'm crazy... but I swear I read that somewhere.
 
So, I remember reading somewhere (but I can't find it for the life of me) that the XSX reserves 200MB (at least I think I read 200MB) of the "fast" RAM for the OS in addition to the 2.5GB of "slow RAM".

It's only 32 MB. It's used for video compositing of dashboard notifications (achievement pop-ups, friend logins, messages received), the guide overlay, or even the full dashboard screens.

https://forum.beyond3d.com/threads/...ne-x-and-xbox-series-x-2019-12-2020-03.61513/
https://forum.beyond3d.com/threads/...lease-holiday-2020.61502/page-22#post-2139858
 
It is not limited to 100 GB. We've covered this in the tech threads a couple weeks ago when the Xbox Blogs and Videos were released.

It's reasonable to assume it's not a separate partition because of that.

I was holding off on adding more data because I could not find the video in question (they changed its name).
The channel is XboxBr, and they claim to be an official Xbox channel.

I leave the video link (it’s in Portuguese)

The diagram they show is this

[Diagram from the video (DFToO9u.png): the 100 GB area shown holding uncompressed textures, with the rest of the SSD labeled "dados compactados" (compressed data)]


As I mentioned, they refer to the 100 GB as a place where textures are uncompressed, hence the reference to the rest of the SSD as "dados compactados", or compressed data.
SFS only gets data from this area, not anywhere else, and that was why I asked you how compressed data could gain anything, since compression would give no gain in SSD speed because the reads would have to be of uncompressed data.

I do not know if this diagram is correct, but the video is online, and they claim to be an official Xbox channel.
 
It was said that 6 GB/s is a conservative estimate using typical compression levels. Perhaps one of those Twitter discussion threads.

James Stanard said 4.8 GB/s is conservative; a bit over 6 GB/s is the limit of the hardware decompressor.
 