Next-Generation NVMe SSD and I/O Technology [PC, PS5, XBSX|S]

 
C’mon Scrooge McDuck. 16 GB SSDs can be had for the cost of a Happy Meal. Plus what is this setup for? GIFs, tweets and TikTok videos? JJ (I couldn’t help myself) LOL
You need more than 16 GB of base storage to have 16 GB in any sort of RAID redundancy setup, so you're actually looking at needing more like 20 GB+ of raw capacity (rough numbers in the sketch at the end of this post).

The vast majority of the space is taken up by Blu-ray Disc rips (movies and TV shows), which sit on the NAS and get served to the TV via Kodi running on a Mac Mini (the original discs rarely get pulled); the rest is backups of individual devices (Time Machine) and the family digital photo album. Nothing exciting I'm afraid! :nope:
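For what it's worth, here's a rough usable-capacity check for the common RAID levels. A minimal sketch, assuming equal-sized disks and the standard parity formulas; the figures are purely illustrative:

```python
# Rough usable-capacity estimates for common RAID levels.
# Assumes equal-sized disks; filesystem overhead and hot spares are ignored.

def usable_capacity(disk_size_gb: float, num_disks: int, level: str) -> float:
    if level == "RAID1":   # mirror: one disk's worth of usable space
        return disk_size_gb
    if level == "RAID5":   # one disk's worth of capacity lost to parity
        return disk_size_gb * (num_disks - 1)
    if level == "RAID6":   # two disks' worth of capacity lost to parity
        return disk_size_gb * (num_disks - 2)
    raise ValueError(f"unknown RAID level: {level}")

# To end up with ~16 GB usable you need noticeably more raw capacity:
print(usable_capacity(16, 2, "RAID1"))  # -> 16, from 32 GB of raw disk
print(usable_capacity(8, 3, "RAID5"))   # -> 16, from 24 GB of raw disk
```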
 
Believe me when I say once cross gen is over, the leap will be bigger than PS3 to PS4. Maybe even PS2 to PS3.

Not only because the graphics will improve by a ton (Matrix Demo continues to wow people in that regard, so even in terms of pure graphics the statement that there won't be big leaps is false), but also because the worlds will feel MUCH more alive. Basically, gameplay and physics are still on PS3 level with the current games.

With the new Ryzen CPUs and RDNA2 being capable of fast ML in real time, the worlds will have realistic physics, NPCs, interactivity and completely new gameplay mechanics.

Think of Zelda: Breath of the Wild. Why is this game so beloved by many? Because they implemented a solid physics system people can use to do all sorts of crazy stuff the developers never intended and thus create their own stories. Next gen gaming is going to be that, just turned up to 100. You will have a living, breathing world simulated on your hardware.

NPCs will be aware of what you do in their game world and react accordingly. You will be able to speak with them off-script, independently of what the game developers hard-coded, using natural language processing, so they will feel like real friends or foes.

Kick a chair and it will fall properly (and break if it's very unstable), instead of gliding along the floor like toothpaste in space.

Water will flow like real water and you will be able to manipulate the game world with it. A house will actually flood realistically if you forget to turn off the water when the sink drain is blocked, and the water can reach wall outlets and kill you.

Weather will have a significant effect on the game world and gameplay. A hot summer in the game world for example will limit supply which in turn will increase the price of stuff you can buy in the game.

You will be able to interact with EVERYTHING. And I mean everything. Thanks to real-time ray tracing, baked light maps are a thing of the past, so you will get high-quality lighting with 100% interactivity.

RTR allows for light-based puzzles and gameplay elements that were simply not possible before.

And a lot more!

There has perhaps never been a generation of consoles in the 3D era after the N64 that was so bottlenecked by the previous generation. I really can't wait for cross gen to be finally over.

Very promising, all that you say; I hope this dream comes true ;) It won't be coming from the HW increases this gen but rather from the software side, then. Rift Apart was developed just for the PS5, no cross-gen involved there. Fast ML and the other things are probably not happening this gen, since RDNA2 isn't there for that.

Edit: I only saw Shifty's post after replying to this one, as I can't copy/paste a quote and go to the next page anymore since the forum update, for some reason.
 
Nice enthusiasm, but much of what you say could have been done already but isn't for other reasons. It doesn't take meaty processors to make a physics playground - you yourself cite a game running on a Tegra X1 and 4 GB of RAM.

However, this thread is about IO. If people want to talk about what Fast IO could enable, focus should remain there. Speculation that we could see large improvements in games needs to focus on how Fast IO will enable that; if the improvements come from the CPU or GPU or ML or whatever, that doesn't matter in this thread.

So reminder, this is a thread about SSD Fast IO for games, nothing else.
I think the potential fast IO brings is pretty clear. The cost of developing the assets for such large worlds at such high fidelity seems to be a bigger bottleneck than the technologies themselves.
 
I think it's a curious situation. As we've become aware, storage capacity ends up being the limit, which then makes one wonder if fast IO is solving a problem that didn't need solving. Sure, if money's no object you could have infinite variety and complexity streaming in GBs of data per frame. But in real terms, streaming data from very finite datasets is not particularly demanding on the IO. The long discussions about streaming assets rooted in virtual texturing, examples like megatexturing in Rage and Trials with their real-world successes and limitations, leading into Nanite running on the existing Windows IO stack - all the maths and experience show that optimised asset selection and rendering is not demanding on storage or RAM if you can optimise your rendering. But then a lot of games still operate conventionally for whatever reasons.

So the question boils down to immediacy, rather than bandwidth, which is limited by storage capacity and content costs anyway. But that itself is really a trade-off between RAM and storage - you can get away with 16 GB of RAM instead of 32 GB if the next 16 GB of data is quickly enough at hand on the SSD. Are there situations where short bursts of read and write can be revolutionary? Or will the future of super-fast data be dependent on super-massive capacity too?
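To put some rough numbers on the 'finite dataset' point, here's a virtual-texturing style back-of-envelope. Every figure below is an assumption chosen only to show the order of magnitude, not a measurement from any shipping engine:

```python
# Back-of-envelope for virtual-texturing style streaming (assumed figures).
# Rough upper bound: about one texel per output pixel needs to be resident.

screen_pixels       = 3840 * 2160   # 4K output
bytes_per_texel     = 1.0           # block-compressed texture data, roughly
overdraw_and_mips   = 3.0           # fudge factor for mip chains, tile borders, overdraw
resident_texture_mb = screen_pixels * bytes_per_texel * overdraw_and_mips / 1e6
print(f"visible texture working set: ~{resident_texture_mb:.0f} MB")   # ~25 MB

# Only a fraction of the visible tiles change from one frame to the next.
changed_fraction = 0.05             # assume 5% of visible tiles change per frame
fps              = 60
stream_mb_per_s  = resident_texture_mb * changed_fraction * fps
print(f"steady-state streaming: ~{stream_mb_per_s:.0f} MB/s")          # ~75 MB/s
```

Even with generous fudge factors that lands well below what a plain SATA SSD delivers, which is essentially the Rage/Trials/Nanite argument above.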
 
I think Mark Cerny was pretty clear in the Road to PS5 presentation he gave. His biggest reason for the SSD and custom I/O was first and foremost to relieve developers of the burden of designing games around slow speeds. Basically a quality of life improvement for the developers being one less thing for them to have to focus on and design around. If developers can get away with less optimization because there's enough bandwidth there that it just works regardless... that's a massive win.

The other reason is actually a problem that does need solving. The cost of large amounts of RAM being prohibitive for a ~$400 console. They're basically doing the best they can.. with what they can. Of course developers CAN work around these bottlenecks... they have been since the dawn of computing... but it's the costs involved in requiring developers to do all that busy work instead of focusing on making the games themselves better, which is what's creating the necessity, IMO.

I think there's plenty of scenarios where short bursts of read and write will be revolutionary. Solving the problem of getting data into memory super-fast has more immediate benefits to game development and to the end user. I'd rather developers have to design games around the constraint of capacity rather than the constraint of bandwidth.
 

The other big one is also the most obvious but often overlooked, probably because it's too obvious - and that's load times. Load times were already borderline untenable last gen. Doubling them for the current gen was basically a non-starter. On that basis alone they pretty much had to have SSDs. And once you've made that commitment you may as well go the whole hog with NVMe and aim for transformative ~1 s load times.
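The arithmetic really is that stark. A minimal sketch with round numbers; the effective throughputs below are assumptions, and decompression and seek behaviour are ignored:

```python
# Rough level-load times for a hypothetical 8 GB load set
# at assumed effective read speeds (CPU decompression cost ignored).

level_size_gb = 8.0
drives_gb_per_s = {
    "last-gen HDD":   0.08,   # ~80 MB/s effective once seeks are factored in
    "SATA SSD":       0.4,    # ~400 MB/s effective
    "PS5-class NVMe": 5.5,    # raw rate, before hardware decompression gains
}

for name, speed in drives_gb_per_s.items():
    print(f"{name:>15}: {level_size_gb / speed:6.1f} s")
# -> ~100 s on the HDD, ~20 s on the SATA SSD, ~1.5 s on the NVMe drive
```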
 
Yep. I think the demands of extremely fast I/O will reveal themselves soon enough.

Spider-Man Remastered (2020): [screenshot: 44.png]

Spider-Man 2 (2022 trailer): [screenshot: unknown.png]


If that Spider-Man 2 trailer is indicative of the game's visuals... and there are obvious signs that it is (you can see aliasing and signs of reconstruction), the jump in material and texture quality/variety is pretty massive. Having that density and detail stream in will undoubtedly require far more I/O... especially considering the things they were already doing with Spider-Man back in 2018. Imagine what kinds of crazy scenarios they have in mind for this one!

I'm pretty confident that Spider-Man 2 will be far and away the most technically impressive game around when it releases in 2023, putting everything else to shame.
 
$100 says Spider-Man 2's eventual PC port will run fine on much worse IO stacks than the PS5's, with comparable IO requirements to Spider-Man 1's PC port. What looks so hard to stream in that shot?

I see more detailed (tiling) building geometry and some more ground textures. It looks like a huge step up in GPU power and in the processes of Insomniac's already immensely skilled art team, but I don't see anything that would break the bank for storage. Worst case we're maybe looking at 20-40% more data than Spider-Man 1 had on screen. Thank the graphics memory and bandwidth increases for the added layers of ground detail.

If we see the combination of much more distinct neighborhoods in terms of architectural style and texture sets, buildings with much less tiling detail, and significantly increased traversal speed, I might change my position, but what we can expect so far looks more than possible on other SSDs.
 
Well, that's just one shot. Having that amount of detail in a vast city where Spider-Man traverses at super high speeds will be an interesting challenge.
We will see if the final game maintains that amount of detail, and more, while you are moving around Manhattan.
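A crude way to frame that challenge: the streaming rate is basically traversal speed times unique data per block. Every number in this sketch is a guess, purely to show how it scales:

```python
# How fast does unique data need to come in while swinging across the city?
# All inputs are guesses; the point is that rate = speed x data density.

swing_speed_m_s     = 50.0   # assumed top traversal speed
block_length_m      = 80.0   # assumed Manhattan-style block
unique_mb_per_block = 40.0   # assumed per-block unique textures/meshes/lightmaps

blocks_per_second = swing_speed_m_s / block_length_m
required_mb_per_s = blocks_per_second * unique_mb_per_block
print(f"~{required_mb_per_s:.0f} MB/s of fresh data")   # ~25 MB/s with these guesses

# Double the traversal speed or the per-block uniqueness and this only scales
# linearly -- that's the number the streaming system has to keep up with.
```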
 
That is a jump in memory you are seeing in a screenshot, not necessarily a direct result of IO. Of course to go anywhere quickly with this level of fidelity fast IO is a requirement. But if you are looking purely at rendering quality of the image, memory is the requirement here.
 
Well, that's just one shot. Having that amount of detail in a vast city where Spider-Man traverses at super high speeds will be an interesting challenge.
We will see if the final game maintains that amount of detail, and more, while you are moving around Manhattan.

Sure -- my assumption here is that not that much is actually going to be streaming in and out as you move. Most of the data in any frame -- the various layers of tiling ground and building textures, the assets for the window interior shader, most or all of the assets for the crowds and cars, the skybox, many of the most common tiling mesh building and window pieces, etc. -- will be in memory for long periods of time, as it's used across numerous blocks of the city, if not the entire city (like it was in Spider-Man 1). It's only the lightmaps, the little bits of unique per-area or per-building detail, and the data used to instance the re-used pieces that's streaming in and out all the time.

In the screenshot we see the neon signs, the special sidewalk tiles right outside the building, maybe even the modeled brickwork on the nearby building, and I think those pieces might be unique to that one street corner... but is that level of variety really all that impossible to load in quickly on a normal SSD? I don't think it is.
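To make that resident-vs-streamed split concrete, here's a toy model. The asset categories follow the breakdown above, but every size and the per-block figure are made-up numbers:

```python
# Toy model: most assets are shared across many city blocks and stay resident;
# only the per-block unique data churns as you move. All sizes are made up.

shared_resident_mb = {
    "tiling building/ground texture sets": 1500,
    "window-interior shader assets":        100,
    "crowd and vehicle assets":             600,
    "skybox + common mesh/window pieces":   300,
}
unique_per_block_mb = 35   # signs, special sidewalk tiles, lightmaps, instance data

resident_total = sum(shared_resident_mb.values())
print(f"long-lived resident set: ~{resident_total} MB")                # ~2500 MB, loaded once
print(f"streamed when crossing one block: ~{unique_per_block_mb} MB")  # small, easily within SSD reach
```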
 
The SSD hype has died down almost everywhere else lol. There's no concern needed for PC ports; they will do just fine and probably even better, seeing as there's already more capable NVMe hardware on PC.
 
Well, there is, but not everyone buys those. There are low and high specs out there. Sony's business targets will determine the design of the games.
If PC ports are now a primary business strategy, games will be designed around general PC portability. So they will take advantage of the NVMe for sure, but only to the extent that it doesn't cause any significant compatibility issues.
 
That is a jump in memory you are seeing in a screenshot, not necessarily a direct result of IO. Of course to go anywhere quickly with this level of fidelity fast IO is a requirement. But if you are looking purely at rendering quality of the image, memory is the requirement here.
Except the new consoles don't have a lot more memory at all. Definitely nowhere near a 'generational leap' here. It's a simple doubling, instead of the normal 8x or even 16x that's usually a requirement in order to push a generational leap in developer ambitions.

The thing that all this extra I/O enables is that you do not need to have all the rest of the potential scenery loaded in memory, ready to use at a moment's notice if you throw the camera around or go somewhere else in your immediate vicinity. So you can dedicate a lot more memory to what's only being viewed at any given time than before. It's a massive efficiency improvement and is incredibly important for allowing these consoles to actually provide a generational improvement.

Basically, we can't just decouple memory and storage I/O as two totally different factors.
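That coupling is easy to put into a formula: the RAM you need is roughly the visible working set plus whatever 'just in case' data your IO speed forces you to keep prefetched. A minimal sketch, with all of the numbers assumed for illustration:

```python
# RAM needed ~= visible working set + prefetch margin, where the margin is
# the nearby data the drive cannot deliver within the player's reaction window.
# All figures below are assumptions for illustration.

visible_working_set_gb = 6.0   # what the current view actually uses

def ram_needed(io_gb_per_s: float, reaction_window_s: float, nearby_data_gb: float) -> float:
    # Whatever the drive can fetch inside the reaction window can stay on disk;
    # the rest of the potentially-needed surroundings has to sit in RAM.
    fetchable = io_gb_per_s * reaction_window_s
    return visible_working_set_gb + max(0.0, nearby_data_gb - fetchable)

# e.g. 8 GB of potentially-needed surroundings, ~2 s before it must be on screen
print(ram_needed(0.08, 2.0, 8.0))  # HDD:  ~13.8 GB -> keep nearly everything resident
print(ram_needed(5.5,  2.0, 8.0))  # NVMe:  6.0 GB -> stream almost all of it on demand
```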
 
Except the new consoles don't have a lot more memory at all.

They do, at 2.5 to 2.7 times the RAM footprint of the prior generation.

Your argument all revolves around vague definitions of "a lot more", "generational leap" and so on. Once you get to a reasonable base you will stop seeing the 8x and 16x memory increases of generations gone by. PCs haven't had 8x to 16x memory increases in decades. You're not going to get that anymore.
 
There's a large difference between needing to have assets loaded that are nowhere near being required to render, and needing the assets present in order to render what we are actually looking at. The IO will help with bringing assets in just as we need them. But if you look at the breakdown of VRAM, buffers still occupy a large portion of memory, and if we continue to push more into post-processing, we will require more and more buffers. Fast IO will relieve the pain of not having assets in time to render and will keep up with traversal speeds and level loading. But that is a separate discussion from the graphical quality of what can be rendered.

What the GPU can actually render is dependent on the size of available memory. The IO does not have the 560 GB/s of bandwidth required to render from, so to generate even more intricate scenes you need to increase VRAM to hold more assets and draw more buffers to increase scene complexity.

I.e. you could double the PS5's IO speed, but if it sat with 8 GB of VRAM it would still be graphically limited.
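Some rough numbers behind that point. The render-target layout is an assumption (a typical-ish deferred setup), purely to show the scale gap between VRAM and even a very fast SSD:

```python
# Why you render out of VRAM rather than the SSD: the per-frame bandwidth gap.
# The buffer layout is assumed, just to show orders of magnitude.

pixels_4k        = 3840 * 2160
gbuffer_targets  = 5     # assumed deferred-shading layout
bytes_per_target = 8     # e.g. an RGBA16F-class target
buffers_mb = pixels_4k * gbuffer_targets * bytes_per_target / 1e6
print(f"G-buffer alone: ~{buffers_mb:.0f} MB of VRAM")   # ~332 MB, before shadows/history buffers

vram_bw_gb_per_s = 560.0
ssd_bw_gb_per_s  = 5.5
fps              = 60
print(f"VRAM touched per frame: up to ~{vram_bw_gb_per_s / fps:.1f} GB")           # ~9.3 GB
print(f"SSD delivered per frame: at most ~{ssd_bw_gb_per_s / fps * 1000:.0f} MB")  # ~92 MB
```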
 
That is a jump in memory you are seeing in a screenshot, not necessarily a direct result of IO. Of course to go anywhere quickly with this level of fidelity fast IO is a requirement. But if you are looking purely at rendering quality of the image, memory is the requirement here.
Well I'm talking about a game where you move, not a screenshot 😄
 
Runs 'fine' or runs 'the same'........big difference.
Depending on what level you are measuring at. In real terms you're only looking for 'runs fine', where the differences aren't anything you care about. The big question will be whether the advanced IO stack of the PS5 comes with significant tangible benefits, or if a simple fast-enough (~3 GB/s) SSD on Windows is all that's really needed to achieve the same.

Thinking about it, I suppose at its root Sony (and MS) identified an obvious need for SSDs, and so thought about optimising this to get the most from it where SSDs on existing IO stacks were obviously hampered. They also couldn't rely on users upgrading, so needed to get future-proof speeds with the tech available at launch. If you're going to create a new IO system, it makes sense to design it properly and optimise the heck out of it so that it's a solved problem going forwards and won't end up a legacy bottleneck in mid-gen refreshes or next-gen systems. So even if it ends up with plenty of untapped potential over the generation (not saying that will or won't happen), it's not over-engineered - it's correctly engineered! ;)

The only legitimate concern is what the cost was, and whether less money spent on the IO stack and storage could have been put elsewhere for better effect, which doesn't look to be the case as the system looks very balanced. Especially when flash costs can be expected to drop.
 