Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

Please, not this again. I am not sure about the launch specs, but it quickly changed until flexible memory was more or less optional, leaving 5GB (5.5GB on Pro) of normal RAM for developers if they wanted it. I posted a **cough** source somewhere in this forum.
 
Flexible memory isn't interesting to me. I don't care about 4.5 or 5.5 GB of direct memory access.

I only care that virtual memory is being used for memory addresses outside of GDDR5. It's present, and apparently there is a paging issue that causes performance impacts when you hit it. That may incentivize developers to roll their own memory management so they don't touch virtual memory at all and keep things running solid.

If we're going to be on the topic of blazing-fast SSD speeds and moving data on and off the SSD, we need to have a more serious discussion about paging, and memory management is the next step of that discussion, unless we just want to keep debating the bandwidth figures of the drives.

And developers are going to have to decide when to page or not to page, or whether to page at all times, etc. IMO, MS is looking to solve paging performance issues, hence their 100GB of accessible memory (and even then it's limited to 100GB and not something larger).

If you don't use paging, then it's up to the developer to decide how to handle everything. That's fine, and I guess I'm curious whether developers normally did this themselves, or whether they relied on the system handling the paging for them. I only posted the source to show that consoles do support paging if developers want to leverage the feature.
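To make that concrete, here's a rough sketch of what "rolling your own" could look like (purely illustrative, not tied to any console SDK): the game reserves a fixed pool up front and makes the residency and eviction decisions itself, so asset access never falls through to OS-managed demand paging.

```cpp
#include <cstddef>
#include <cstdint>
#include <list>
#include <unordered_map>
#include <vector>

// Illustrative only: a game-managed streaming pool of fixed-size slots.
// The game decides what is resident and what gets evicted, so asset
// access never falls through to OS demand paging.
class StreamingPool {
public:
    StreamingPool(size_t slotBytes, size_t slotCount)
        : slotBytes_(slotBytes), storage_(slotBytes * slotCount) {
        for (size_t i = 0; i < slotCount; ++i) freeSlots_.push_back(i);
    }

    // Returns a pointer to the asset's slot, loading (stubbed) and evicting as needed.
    void* Acquire(uint64_t assetId) {
        auto it = resident_.find(assetId);
        if (it != resident_.end()) {            // already resident: just bump LRU
            lru_.remove(assetId);
            lru_.push_front(assetId);
            return SlotPtr(it->second);
        }
        if (freeSlots_.empty()) {               // pool full: evict the least-recently-used asset
            uint64_t victim = lru_.back();
            lru_.pop_back();
            freeSlots_.push_back(resident_[victim]);
            resident_.erase(victim);
        }
        size_t slot = freeSlots_.back();
        freeSlots_.pop_back();
        // Real code would issue an async read from the game package here.
        resident_[assetId] = slot;
        lru_.push_front(assetId);
        return SlotPtr(slot);
    }

private:
    void* SlotPtr(size_t slot) { return storage_.data() + slot * slotBytes_; }

    size_t slotBytes_;
    std::vector<uint8_t> storage_;              // one up-front allocation the game owns
    std::vector<size_t> freeSlots_;
    std::unordered_map<uint64_t, size_t> resident_;
    std::list<uint64_t> lru_;
};
```

The point isn't the LRU policy; it's that the title owns the residency decisions instead of eating a page fault whenever the OS guesses wrong.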
 
Wasn't 100GB (at least on XSX?) going to be allocated as virtual RAM? Sounds like a massive amount out of the already small-ish internal drive space, and then the OS and resume features have to allocate space on top of that. 100GB plus beastly compression when used, and games are going to be massive worlds with detail no one could imagine before. Star Citizen as the baseline?
 

Who says it's from the SSD? :runaway:
 
And developers are going to have to decide when to page or not to page, or whether to page at all times, etc. IMO, MS is looking to solve paging performance issues, hence their 100GB of accessible memory (and even then it's limited to 100GB and not something larger).

If you don't use paging, then it's up to the developer to decide how to handle everything. That's fine, and I guess I'm curious whether developers normally did this themselves, or whether they relied on the system handling the paging for them. I only posted the source to show that consoles do support paging if developers want to leverage the feature.

Address it via DirectStorage (any details on that?) :?:

Custom engines probably do what the devs need them to do, but it probably gets a little wonky the further abstracted an engine gets, where it has to Just (sorta) Work™ for anyone using it. Not sure how various studios handle garbage collection and such.
 

From DF:
The form factor is cute, the 2.4GB/s of guaranteed throughput is impressive, but it's the software APIs and custom hardware built into the SoC that deliver what Microsoft believes to be a revolution - a new way of using storage to augment memory (an area where no platform holder will be able to deliver a more traditional generational leap). The idea, in basic terms at least, is pretty straightforward - the game package that sits on storage essentially becomes extended memory, allowing 100GB of game assets stored on the SSD to be instantly accessible by the developer. It's a system that Microsoft calls the Velocity Architecture and the SSD itself is just one part of the system.
It's interesting that MS thinks they have a unique hold in this area over the other two.

Reading further.
As textures have ballooned in size to match 4K displays, efficiency in memory utilisation has got progressively worse - something Microsoft was able to confirm by building in special monitoring hardware into Xbox One X's Scorpio Engine SoC. "From this, we found a game typically accessed at best only one-half to one-third of their allocated pages over long windows of time," says Goossen. "So if a game never had to load pages that are ultimately never actually used, that means a 2-3x multiplier on the effective amount of physical memory, and a 2-3x multiplier on our effective IO performance."
So definitely a lot of paging is happening here, which is to be expected, I think. How much paging hits performance I don't know, nor do I know whether they solved the performance impacts. So I guess their goal is to use SFS to improve effective capacity and I/O performance here.
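A back-of-the-envelope way to picture Goossen's numbers: if feedback says only a third to a half of a texture's tiles were ever sampled, you only keep those tiles resident. Very rough sketch of the idea (nothing to do with the real D3D12 sampler feedback API):

```cpp
#include <cstdint>
#include <unordered_set>
#include <vector>

// Rough sketch of feedback-driven residency: the GPU records which tiles of a
// texture were actually sampled, and only those tiles are kept in physical
// memory. Not the real sampler feedback API, just the concept.
struct TextureResidency {
    uint32_t tileCount = 0;
    std::unordered_set<uint32_t> sampledTiles;    // filled from GPU feedback (stubbed here)
    std::unordered_set<uint32_t> residentTiles;

    // Evict tiles that were never sampled, load tiles that were.
    // Returns the fraction of the texture left resident.
    float UpdateResidency() {
        std::vector<uint32_t> toEvict;
        for (uint32_t t : residentTiles)
            if (!sampledTiles.count(t)) toEvict.push_back(t);
        for (uint32_t t : toEvict) residentTiles.erase(t);        // free unused physical pages
        for (uint32_t t : sampledTiles) residentTiles.insert(t);  // stream in what's actually needed
        return tileCount ? float(residentTiles.size()) / float(tileCount) : 0.0f;
    }
};
```

If that fraction comes back at 0.33-0.5, you get the 2-3x effective memory and I/O multiplier he's describing.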

The Velocity Architecture also facilitates another feature that sounds impressive on paper but is even more remarkable when you actually see it play out on the actual console. Quick Resume effectively allows users to cycle between saved game states, with just a few seconds' loading - you can see it in action in the video above. When you leave a game, system RAM is cached off to SSD and when you access another title, its cache is then restored. From the perspective of the game itself, it has no real idea what is happening in the background - it simply thinks that the user has pressed the guide button and the game can resume as per normal.
Once again, paging is the big story here, at least in MS's marketing of it.

So outside of throughput, I'm curious to see how Xbox is handling all of this. It doesn't seem like they are allocating a large space for virtual memory; they are just using the game files themselves as _virtual memory_. There needs to be at least 16GB of 'virtual memory' per game, however. XSX can handle maybe 5-6 games in total before needing to drop one off? So that's approximately 100GB; perhaps it's not so coincidental how that number came about.
 
The way they describe it, plus the performance of their box, I really can't see where the competition has any real competitive advantage.
 

6 virtual consoles? I'm not sure I understand the usage scenario since that'd be 1/6th performance, unless I'm misunderstanding how virtualized instances work (probably :p ).

I figured the 100GB correlated to the size of a quad-layer blu-ray disc. xD (totally shooting in the dark)
 
Address it via DirectStorage (any details on that?) :?:

Custom engines probably do what the devs need them to do, but it probably gets a little wonky the further abstracted an engine gets, where it has to Just (sorta) Work™ for anyone using it. Not sure how various studios handle garbage collection and such.

Hmm, probably not. Quote below:
The final component in the triumvirate is an extension to DirectX - DirectStorage - a necessary upgrade bearing in mind that existing file I/O protocols are knocking on for 30 years old, and in their current form would require two Zen CPU cores simply to cover the overhead, which DirectStorage reduces to just one tenth of single core.

"Plus it has other benefits," enthuses Andrew Goossen. "It's less latent and it saves a ton of CPU. With the best competitive solution, we found doing decompression software to match the SSD rate would have consumed three Zen 2 CPU cores. When you add in the IO CPU overhead, that's another two cores. So the resulting workload would have completely consumed five Zen 2 CPU cores when now it only takes a tenth of a CPU core. So in other words, to equal the performance of a Series X at its full IO rate, you would need to build a PC with 13 Zen 2 cores. That's seven cores dedicated for the game: one for Windows and shell and five for the IO and decompression overhead."
It addresses CPU overhead, nice, but it doesn't address memory request latency. In a basic paging setup without a hardware solution, your first memory access hits the page table to find what you want, and then a second access actually fetches the data.
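Roughly what I mean, as a toy model (my own illustration, not how the actual hardware or API does it): without help, every translated access is two lookups, and the usual fix is a small translation cache in front of the table, which is exactly the job a hardware TLB does.

```cpp
#include <cstdint>
#include <unordered_map>

// Toy model of the double hop: translate through a page table first, then
// fetch the data. A tiny translation cache (the TLB's job in hardware)
// turns the common case back into a single lookup.
class ToyAddressSpace {
public:
    static constexpr uint64_t kPageSize = 64 * 1024;

    // Returns the physical address for a virtual address.
    uint64_t Translate(uint64_t virtualAddr) {
        uint64_t page   = virtualAddr / kPageSize;
        uint64_t offset = virtualAddr % kPageSize;

        auto cached = tlb_.find(page);          // fast path: one lookup
        if (cached != tlb_.end())
            return cached->second + offset;

        // Slow path: the first "memory access" walks the page table...
        uint64_t physBase = pageTable_.at(page);   // throws if the page isn't mapped
        tlb_[page] = physBase;
        // ...and only now can the second access actually fetch the data.
        return physBase + offset;
    }

    std::unordered_map<uint64_t, uint64_t> pageTable_;  // virtual page -> physical base

private:
    std::unordered_map<uint64_t, uint64_t> tlb_;        // small translation cache
};
```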
 
ALL memory access on PS4 is via a virtual address space provided by the kernel. Every game works under the assumption of having 5GB on PS4 and 5.5GB on PS4 Pro. The memory is divided into what is called direct memory, which is still accessed via a virtual address but whose physical location is known and can be accessed by the GPU and CPU directly without going through a cache memory, and flexible memory, whose physical location is not known and is entirely managed by the OS but is guaranteed at all times. The size of the flexible memory is determined by the game developer by including a macro with the specific size. If a developer does not specify a size, then when a player launches the game a flexible memory allocation is assigned automatically, which was the 512MB that was talked about. This memory is static until the game is closed; it cannot be taken away by the OS under any circumstance. There are no paging issues accessing the flexible memory.
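Purely as an illustration of that split (hypothetical names, not the actual PS4 SDK): the title declares its flexible budget up front, direct memory comes from a range whose physical placement the game can rely on, and flexible memory is OS-backed but still guaranteed for the life of the process.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>

// Hypothetical illustration of the direct/flexible split described above.
// All names are made up; this is not the PS4 SDK, and the "physical address"
// is faked so the sketch stays self-contained.

// The title declares its flexible budget up front (the "macro" mentioned above);
// if it didn't, the system would fall back to the 512MB default.
#define GAME_FLEXIBLE_MEMORY_SIZE (512ull * 1024 * 1024)

struct DirectAllocation {
    void*    virtualAddr;   // what game code uses
    uint64_t physicalAddr;  // known and stable, so the GPU can be pointed at it directly
};

// Direct memory: virtual mapping over a physical range the game knows about.
DirectAllocation AllocateDirect(size_t bytes) {
    void* p = std::malloc(bytes);                  // stand-in for a real carve-out
    return { p, reinterpret_cast<uint64_t>(p) };   // pretend virtual == physical for the sketch
}

// Flexible memory: the OS picks and manages the physical pages, but the
// allocation is guaranteed to the game until the process exits.
void* AllocateFlexible(size_t bytes) {
    return std::malloc(bytes);                     // stand-in for OS-managed pages
}

int main() {
    // GPU-visible resources want direct memory so their placement is predictable.
    DirectAllocation renderTargets = AllocateDirect(64ull * 1024 * 1024);

    // Lower-priority, CPU-side data can live inside the flexible budget.
    void* scratch = AllocateFlexible(GAME_FLEXIBLE_MEMORY_SIZE);

    std::free(scratch);
    std::free(renderTargets.virtualAddr);
    return 0;
}
```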
 
6 virtual consoles? I'm not sure I understand the usage scenario since that'd be 1/6th performance, unless I'm misunderstanding how virtualized instances work (probably :p ).

I figured the 100GB correlated to the size of a quad-layer blu-ray disc. xD (totally shooting in the dark)
Hmm, Richard writes:
We saw Xbox Series X hardware cycling between Forza Motorsport 7 running in 4K60 Xbox One X mode, State of Decay 2, Hellblade and The Cave (an Xbox 360 title). Switching between Xbox One X games running on Series X, there was around 6.5 seconds delay switching from game to game - which is pretty impressive. Microsoft wasn't sharing the actual size of the SSD cache used for Quick Resume, but saying that the feature supports a minimum of three Series X games. Bearing in mind the 13.5GB available to titles, that's a notional maximum of around 40GB of SSD space, but assuming that the Velocity Architecture has hardware compression features as well as decompression, the actual footprint may be smaller. Regardless, titles that use less memory - like the games we saw demonstrated - should have a lower footprint, allowing more to be cached.
So that's four games they showcased. He also drops a hint that it's possible XSX has hardware compression as well, to compress data back onto the drive and keep the footprint smaller. Sort of makes sense, I guess, if you want to fit as much as possible.
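Napkin math on that (the 13.5GB and the three-title minimum are from the article; the compression ratio is my own assumption, nothing official):

```cpp
#include <cstdio>

// Napkin math for the Quick Resume cache footprint. The 13.5GB figure and the
// three-title minimum are from the DF article; the compression ratio is assumed.
int main() {
    const double gameRamGB    = 13.5;  // memory available to a title (per DF)
    const int    minTitles    = 3;     // minimum number of Series X games supported
    const double assumedRatio = 1.5;   // hypothetical compression ratio on the cached image

    double rawFootprint        = gameRamGB * minTitles;            // ~40.5 GB uncompressed
    double compressedPerTitle  = gameRamGB / assumedRatio;         // ~9 GB per suspended title
    double compressedFootprint = compressedPerTitle * minTitles;   // ~27 GB for three titles

    std::printf("raw cache for %d titles: %.1f GB\n", minTitles, rawFootprint);
    std::printf("with %.1fx compression:  %.1f GB (%.1f GB per title)\n",
                assumedRatio, compressedFootprint, compressedPerTitle);
    return 0;
}
```

Even at a modest ratio the real cache would sit well under the 40GB notional maximum Richard mentions, which is how more than three smaller titles could fit.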
 
Let me just ask the obvious question then: are developers optimizing their engines today so that they aren't hitting virtual memory, and just managing direct memory themselves? Or do they use the paging system and just know when to start buffering things in?

If I assume the latter, then there must be an upper limit to how much virtual memory is available. Do developers regularly hit this limit, and are they then still forced to do memory management even within the virtual space?

That leads me to my next question: I can only suspect that virtual memory on today's consoles is placed on the outer edge of the built-in hard drive platter for higher bandwidth, so loading from virtual memory is going to be faster than loading directly from the game, i.e. it's significantly more detrimental to load from the game files than from virtual memory?
 
@Nesh
Yes, they have a different approach but achieve the same thing in the end; WC was hinting at that.

Edit: it would be handy if we could also quote in an edit sometimes :)
 
Let me just ask the obvious question then: are developers optimizing their engines today so that they aren't hitting virtual memory, and just managing direct memory themselves? Or do they use the paging system and just know when to start buffering things in?
Direct memory management is always preferable in my opinion, and it's really based on specific developer needs.

If I assume the latter, then there must be an upper limit to how much virtual memory is available. Do developers regularly hit this limit, and are they then still forced to do memory management even within the virtual space?
There is an upper limit (I can only speak for PS4).

That leads me to my next question: I can only suspect that virtual memory on today's consoles is placed on the outer edge of the built-in hard drive platter for higher bandwidth, so loading from virtual memory is going to be faster than loading directly from the game, i.e. it's significantly more detrimental to load from the game files than from virtual memory?
Memory on consoles is micromanaged to an absurd degree (for the game's benefit, of course); that is why the game patch process on PS4 is ridiculously long.
 
I know it's a little bit off the current topic, but I just wanted to point out that for multiplats that are also going to be on PC, you need to be able to handle a 180-degree turn of the viewing frustum in something like 1/10th of a second.

Maybe even less for some of these turbo nerd superhuman PC gamers....
 