The need for sustained high throughput loading of data in games? *spawn*

pjbliverpool:
The other aspect that doesn't get mentioned much in these conversations is that 16GB is actually a hell of a lot of VRAM (even if only 13.5GB is available to the GPU). We've fixated a little on it being "only double the last gen" compared with how much other metrics have increased. But when you consider that the actual game install size (uncompressed) probably isn't more than double last gen's either, it's still a lot of memory. I expect 13.5GB to be 10% or more of most next-gen games' total size, which means that at any given time you can have 10% or more of your entire game resident in VRAM. Aside from R&C-type scenarios, where the game suddenly bounces around a large number of its environments in a very short period of time, are you really going to need instant access to more than 10% of your game's assets in any given environment?

For that reason I'm a bit dubious about the fast IO specifically allowing for more detailed environments, because you're limited by content size. I expect the reduction of loading times (including in-game loading like portals and fast travel, à la R&C) to be a much bigger use for it.

So, along the same lines as London-boy's point above about artistic talent and budget being the limiting factors this generation, I think content (install) size will be the other major limiting factor. New compression solutions, plus engines that make the process of content production easier, are therefore likely to become more and more important in the future. And lo and behold, UE5 and the next-gen consoles seem to go to great efforts on both of those fronts.
 
pjbliverpool said:
> The other aspect that doesn't get mentioned much in these conversations is that 16GB is actually a hell of a lot of VRAM […] are you really going to need instant access to more than 10% of your game's assets in any given environment?

And that speed is a peak figure. I doubt anyone will need 9 GB/s or more in an average frame. It's probably only useful for specific moments: a big destruction scene, for example, or a portal, or whatever else the devs will invent.
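A quick back-of-envelope on that (a sketch only; the 9 GB/s peak, 60 fps target, and 13.5GB pool are assumptions taken from this discussion, not measurements):

```python
# Rough per-frame streaming arithmetic with assumed numbers: a peak rate
# only matters if a single frame actually needs that much *new* data.

PEAK_RATE_GBPS = 9.0   # assumed effective (post-decompression) peak rate
FPS = 60               # assumed target frame rate
POOL_GB = 13.5         # assumed memory available to the game

per_frame_mb = PEAK_RATE_GBPS * 1000 / FPS
print(f"Peak budget per frame: {per_frame_mb:.0f} MB")  # ~150 MB

# Sustaining that would replace the entire game-visible pool very quickly:
print(f"Seconds to cycle the whole pool: {POOL_GB / PEAK_RATE_GBPS:.1f}")  # ~1.5 s
```

Cycling 150 MB of genuinely new data every single frame would be extraordinary, which is why a peak like that only makes sense for bursts.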
 
pjbliverpool said:
> The other aspect that doesn't get mentioned much in these conversations is that 16GB is actually a hell of a lot of VRAM […] are you really going to need instant access to more than 10% of your game's assets in any given environment?

I'm wondering if we'll start to see procedurally generated texture data?

Edit: some examples:

http://www.ctrl-alt-test.fr/2018/texturing-in-a-64kb-intro/

The attached image is a procedurally generated forest floor texture.
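For anyone curious, here's a minimal sketch of the classic technique behind such textures (value noise summed over octaves, as used in 64k intros); the function names and constants are illustrative, not taken from the linked article:

```python
# Minimal fractal value noise: the usual starting point for procedural
# wood / rock / forest-floor style textures.
import math, random

def value_noise_2d(x, y, seed=0):
    """Smoothly interpolated random values on an integer lattice."""
    def lattice(ix, iy):
        random.seed((ix * 73856093) ^ (iy * 19349663) ^ seed)
        return random.random()
    ix, iy = int(math.floor(x)), int(math.floor(y))
    fx, fy = x - ix, y - iy
    sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)  # smoothstep
    top = lattice(ix, iy) * (1 - sx) + lattice(ix + 1, iy) * sx
    bot = lattice(ix, iy + 1) * (1 - sx) + lattice(ix + 1, iy + 1) * sx
    return top * (1 - sy) + bot * sy

def fbm(x, y, octaves=5):
    """Sum octaves of noise at doubling frequency and halving amplitude."""
    total, amp, freq, norm = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amp * value_noise_2d(x * freq, y * freq)
        norm += amp
        amp *= 0.5
        freq *= 2.0
    return total / norm  # normalized to [0, 1]

# Build a tiny grayscale texture; a 64k intro would evaluate this at load
# time (usually on the GPU) instead of shipping the bitmap.
SIZE = 64
texture = [[fbm(8 * i / SIZE, 8 * j / SIZE) for j in range(SIZE)] for i in range(SIZE)]
```

The appeal for the install-size problem discussed above is obvious: a few bytes of parameters expand into megabytes of texture at load time.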
 

Attachment: daniel-thiger-forestscene-a-sphere.jpg
> Twice the RAM to store the next 1-2 seconds of game, instead of 30s.

But that's my point: you won't be using all that RAM to store just 2 seconds of game. Otherwise you'd progress through the entire content of your game in 20 seconds, assuming a 160GB uncompressed game. So in reality you're probably still going to be storing the next 30 seconds of gameplay in VRAM, if that's what you'd have done on the previous generation of consoles, because what else are you going to do with it? You have twice as much VRAM and twice as much game asset data (roughly speaking).

So it's not going to wildly improve the graphics of a game as if you had hundreds of GB of VRAM, because game content sizes won't support that (unless the game uses the same assets over and over in different configurations with very little variation). What it will do is give you the option to load data from anywhere on disk in an unpredictable manner (initial loads and fast travel being the obvious cases) without needing a loading screen. But I don't see it changing the way games are rendered.
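Rough numbers make the point clear. As a sketch (the install size, playthrough length, and pool size below are all assumptions chosen for illustration):

```python
# How much *unique* content the VRAM pool represents depends on how fast
# the player traverses new assets, not on SSD speed.

GAME_SIZE_GB = 160        # assumed uncompressed install size
VRAM_POOL_GB = 13.5       # assumed memory available to the game
PLAYTHROUGH_HOURS = 15    # assumed time to see most content once

# Average rate at which genuinely new data is consumed across a playthrough:
unique_rate_mbps = GAME_SIZE_GB * 1024 / (PLAYTHROUGH_HOURS * 3600)
print(f"Average unique-content rate: {unique_rate_mbps:.1f} MB/s")  # ~3 MB/s

# How much "look-ahead" the pool holds at that average rate:
lookahead_min = VRAM_POOL_GB * 1024 / unique_rate_mbps / 60
print(f"Gameplay window resident in VRAM: {lookahead_min:.0f} minutes")  # ~76 min
```

On averages, the pool holds far more than a couple of seconds of game; the SSD's job is the unpredictable bursts, not a constant torrent.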
 
Launch games are already showing a huge jump in graphical fidelity and LOD, so you don't have to worry. :yes:

I remember an old quote from a dev in the PS1 era, saying of the console: "what's the point of having the power to render one million flat-shaded polygons a second if your memory can only contain 150,000 of them?"
 
> But that's my point: you won't be using all that RAM to store just 2 seconds of game. […] But I don't see it changing the way games are rendered.

Cerny's 1s-of-gameplay paradigm, while helpful in explaining the massive gain in RAM efficiency, has unfortunately seeded the idea that games will be constantly cycling GBs of data through RAM by design, which is absolutely nonsensical.
 
> Cerny's 1s-of-gameplay paradigm, while helpful in explaining the massive gain in RAM efficiency, has unfortunately seeded the idea that games will be constantly cycling GBs of data through RAM by design, which is absolutely nonsensical.

Why is it nonsensical?
 
Because it's not illustrative of what's actually happening.

What is not happening? What is nonsensical? I must be missing out on something.

To me, SSD + streaming seems like the way to go about it. Fast streaming will, in the long term, remove all kinds of hand-tuning and load times, and allow arbitrarily sized source assets as long as LODs are available. I wouldn't be surprised to see machine learning put into streaming systems: use something like a GAN to play the game and teach the streaming algorithm what is important and what is not. An ML algorithm can get far better (and cheaper) results than a bunch of human monkeys trying to implement hard-coded heuristics. Just think what AlphaZero did to chess/go/... Streaming is a very nice place to use DNNs, as it's super clear what inputs to give and how to decide whether the actions taken were optimal, good, or even worst possible.
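To make that concrete, here is a hypothetical sketch of where a learned policy could slot in; a hand-written heuristic stands in for the trained network, and every name and feature below is illustrative rather than from any real engine:

```python
# Sketch: score candidate assets for prefetch from simple gameplay features.
# A DNN trained on recorded playthroughs (label: "was this asset needed
# shortly after this state?") would replace prefetch_score().
from dataclasses import dataclass

@dataclass
class AssetCandidate:
    distance_m: float        # distance from the player
    in_view_cone: bool       # roughly ahead of the camera
    player_speed_mps: float  # fast movement -> fetch further ahead
    size_mb: float

def prefetch_score(a: AssetCandidate) -> float:
    """Hand-coded stand-in for a learned model."""
    horizon = 10.0 + 2.0 * a.player_speed_mps   # look further when moving fast
    proximity = max(0.0, 1.0 - a.distance_m / horizon)
    view_bonus = 1.5 if a.in_view_cone else 1.0
    return proximity * view_bonus / max(a.size_mb, 1.0)

def fill_budget(candidates, budget_mb):
    """Greedily spend this frame's streaming budget on the best-scoring assets."""
    chosen, used = [], 0.0
    for a in sorted(candidates, key=prefetch_score, reverse=True):
        if used + a.size_mb <= budget_mb:
            chosen.append(a)
            used += a.size_mb
    return chosen
```

The attraction of learning the scorer is exactly the point above: the reward signal (did we stall, did we fetch dead weight?) is cheap and unambiguous to collect.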
 
> What is not happening? What is nonsensical? I must be missing out on something.
> To me, SSD + streaming seems like the way to go about it. […]
The reality is that it's fast enough that, when we need to bring textures in, it can be done on very short notice. That's not to be taken as GBs of textures constantly being loaded.

People are conflating what is possible with the reality of how games are played.

If you don't move, the screen doesn't move, and there is no loading. Only if your SSD were somehow part of the renderer would you be bound to its speed every frame.
 
> What are you going to be streaming at 8GB/s on a constant basis?

Take 8GB/s as the worst case and make it work on a rainy day; not every day needs to be rainy, though. The player starts the game and we need to fill the full memory, i.e. a super-short load time. Another case could be a zoom-in while playing FIFA/NHL, or the dimension jumps in Ratchet & Clank. Another example could be a car game using similar technology to Unreal 5: ridiculously high asset quality, unique content everywhere, and mm-level accuracy on the road surface. And there is the whole open-world aspect, where fast streaming would let you use linear "pipe-run"-quality assets in an open world. Or at least, in an open world, you could go into buildings without having to limit the level of detail in interiors to fit RAM or wait on a load. VR especially lends itself to high-quality assets, as it's typical to pick things up, manipulate them, and look at them very closely.
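The memory-fill case is easy to put numbers on. A sketch, using the publicly quoted PS5 figures as assumptions:

```python
# Illustrative load-time arithmetic for the "fill full memory" case.

POOL_GB = 13.5               # memory available to the game
RAW_RATE_GBPS = 5.5          # PS5's quoted raw SSD rate
COMPRESSED_RATE_GBPS = 8.0   # a typical quoted effective rate with Kraken

for label, rate in [("raw", RAW_RATE_GBPS), ("compressed", COMPRESSED_RATE_GBPS)]:
    print(f"Full memory fill at {label} rate: {POOL_GB / rate:.1f} s")
# -> ~2.5 s raw, ~1.7 s compressed: the "super-short load time" above.
```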

Perhaps you sometimes stream nothing, and sometimes you stream at 8GB/s. Just build the hardware so that disk storage is not an issue, and let developers worry about the game rather than having to concentrate on caching, load times, and asset optimization for specific memory sizes.

Often console manufacturers overcompensate when going into a new generation, fixing the previous gen's issues. It will be tremendous to see whether the current level of SSD is overcompensation, or whether in 5 years we end up wishing the PS6/Xbox foo had an even faster SSD, or even Optane.
 
> It will be tremendous to see whether the current level of SSD is overcompensation, or whether in 5 years we end up wishing the PS6/Xbox foo had an even faster SSD, or even Optane.
Interestingly, Cerny said 5GB/s raw was their target in the design phase, and that the amount of bandwidth required to allow loading on demand as the player turns around "seems about right for next gen" at that target. So in this case he calculated what he thought was required for that specific next-gen goal, and he established these targets based on years of discussions with developers. Same for the Kraken decompressor; these things take a long time to design.
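That sizing exercise is easy to imitate. A sketch with purely assumed numbers (Cerny's actual working-set figures aren't given here):

```python
# How much burst bandwidth does "load as the player turns around" need?

TURN_TIME_S = 0.5    # assumed time for the camera to swing 180 degrees
BEHIND_SET_GB = 2.0  # assumed unique assets visible only behind the player

required_gbps = BEHIND_SET_GB / TURN_TIME_S
print(f"Required burst rate: {required_gbps:.1f} GB/s")  # 4 GB/s with these inputs
# A 5 GB/s raw target leaves headroom over a burst like this, which is one
# way to arrive at a figure that "seems about right for next gen".
```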

Horizon 2 has flying mounts, so dropping from high altitude anywhere down to the ground will need a truckload of assets extremely quickly. The same goes for Spider-Man's movement: across detailed rooftops, dropping down to street level, then throwing a line to land back on top of a tall building, or even going through a window into detailed offices, with no transition in or out.

It's not "constantly" but it can be needed very often in multiple small bursts as the player moves. How high the bursts can go will impact the game design. It's not reaching peaks only with teleports and dimensional jumps from R&C, that would be in direct contradiction with his presentation.
 
I find this back and forth very black and white. A simple question: do the current PS4 base HDD load speeds ever limit game designers' and artists' intent, and if so, how often?
 