Next-Generation NVMe SSD and I/O Technology [PC, PS5, XBSX|S]

It depends on what level you're measuring at. In real terms you're only looking for 'runs fine', where the differences aren't anything you care about. The big question will be whether the advanced I/O stack of PS5 comes with significant tangible benefits, or if a simply fast-enough (~3 GB/s) SSD on Windows is all that's really needed to achieve the same.

Benefits will vary from person to person, I know people who are just happy the game installs are at times half the size on PS5 which means more games on the SSD.

That will always be a benefit on PS5. Even in multiplats made with 2.4 GB/s drives in mind, they can just turn the compression up on PS5 until its effective transfer rate matches, and benefit from a smaller install size without sacrificing load speeds.
 
I don't think you can just turn the compression ratio up to scale the required throughput down. The compression ratio you get depends on the algorithm used and on the data, and both consoles have a fixed set of compression algorithms they can work with, because the decompression blocks are hardware-based.
 
From my understanding of this, under the 'Kraken creates optimized streams for the decoder' section, they could do this on PS5?
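Whether or not the encoder can actually hit an arbitrary ratio, the arithmetic behind the 'turn the compression up' claim is easy to sketch. All the figures below (a 4.8 GB/s uncompressed asset stream, 2:1 and 3:1 ratios, a 100 GB uncompressed install) are hypothetical, purely for illustration:

```python
# Hypothetical figures for illustration; real ratios depend on the data
# and on what the hardware Kraken/BCPack decoders can actually achieve.

def required_raw_rate(asset_rate_gbps: float, ratio: float) -> float:
    """Raw drive throughput needed to deliver a given uncompressed
    asset stream when the data is stored at ratio:1 compression."""
    return asset_rate_gbps / ratio

def install_size_gb(uncompressed_gb: float, ratio: float) -> float:
    """On-disk footprint of the same assets at ratio:1 compression."""
    return uncompressed_gb / ratio

# A multiplat authored for a 2.4 GB/s drive at 2:1 compression
# delivers 4.8 GB/s of uncompressed assets:
print(required_raw_rate(4.8, 2.0))   # 2.4 GB/s raw needed
print(install_size_gb(100.0, 2.0))   # 50.0 GB on disk

# Recompress the same assets at a (hypothetical) 3:1 setting: the same
# 4.8 GB/s stream now needs less raw bandwidth and less disk space.
print(required_raw_rate(4.8, 3.0))   # 1.6 GB/s raw needed
print(install_size_gb(100.0, 3.0))   # ~33.3 GB on disk
```

The objection above still stands, though: whether the fixed hardware decoders actually let a developer choose a meaningfully higher ratio, rather than the ratio simply falling out of the algorithm and the data.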
 
There’s a large difference between needing assets loaded that are nowhere near being required to render, and needing the assets present in order to render what we are actually looking at. The I/O will help bring in assets just as we need them. But if you look at the breakdown of VRAM, buffers still occupy a large portion of memory, and if we continue to push more into post-processing, we will require more and more buffers. Fast I/O will relieve the pain of not having assets in time to render, and keep up with traversal speeds and level loading. But that is a separate discussion from the graphical quality of what can be rendered.

What the GPU can actually render is dependent on the size of available memory. The I/O does not have the 560 GB/s of bandwidth required to render from, so to generate even more intricate scenes you need to increase VRAM to hold more assets and draw more buffers to increase scene complexity.

I.e. you could double PS5's I/O speed, but if it sat with 8 GB of VRAM it's going to be graphically limited.
The point is that in the past, what was needed at any one time was a much less efficient situation. Perhaps you had 5 GB of VRAM to work with, but the direct view in front of you only required, like, 1.8 GB, because the rest of the data sitting in memory was relevant but not immediately needed. The increased I/O now means that the memory capacity that does exist can be used far more efficiently. So if you now have, say, 10 GB of available VRAM, you might be able to use more like 9 GB of it at any one point, representing a more than 4x increase in practical terms, rather than the mere 2x you'd think just looking at raw capacity. Thanks to the faster I/O.

A reminder that the claim I responded to in the first place is this:

"That is a jump in memory you are seeing in a screenshot, not necessarily a direct result of IO."

It absolutely will be a property of BOTH aspects, not just the jump in memory, which itself is not that big. We needed the I/O increase to make much better use of that mere 2x increase, otherwise these systems would be massively hobbled.
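The arithmetic in that 10 GB example can be made explicit. These are the post's own illustrative figures, not measurements from any real game:

```python
# Figures from the post above: illustrative, not measured.
old_vram, old_useful = 5.0, 1.8   # GB: capacity vs. immediately-needed data
new_vram, new_useful = 10.0, 9.0  # GB: fast I/O keeps most of VRAM 'hot'

capacity_increase = new_vram / old_vram       # the 'mere 2x' on paper
practical_increase = new_useful / old_useful  # what the view actually gets

print(f"raw capacity: {capacity_increase:.1f}x")   # 2.0x
print(f"practical:    {practical_increase:.1f}x")  # 5.0x
```

So on these numbers the 'more than 4x' claim works out to 5x, entirely a function of how much of the resident memory is assumed to be immediately useful.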
 
$100 says Spider-Man 2's eventual PC port will run fine on much worse I/O stacks than PS5's, with comparable I/O requirements to Spider-Man 1's PC port. What looks so hard to stream in that shot?

Of course it will. Not everything - probably not anything yet - is pushing PS5's I/O to the max. Insomniac will probably talk about it at some future GDC. They do amazingly transparent presentations covering challenges and technical solutions. Many gaming PCs will have more available RAM than PS5 and RAM can solve a lot of I/O issues.
 
I think even with standard NVMe PCIe 3/4 speeds there shouldn't be any problems. With DirectStorage/GPU decompression there are certainly no problems or concerns. I think it's vice versa: the PC can probably load/stream and write things much faster. We're up at 7 GB/s raw speeds, and with GPU decompression at least 14 GB/s. The I/O stack is greatly improved with DirectStorage and W11 as well. Then we have DDR5/PCIe 5, direct data paths etc. as well, already today. Things are still moving fast.

So if you now have, say, 10 GB of available VRAM, you might be able to use more like 9 GB of it at any one point, representing a more than 4x increase in practical terms, rather than the mere 2x you'd think just looking at raw capacity. Thanks to the faster I/O.

Nah, not really. I'd rather have 560 GB/s (or 448 GB/s) for VRAM, or basically anything else, than a slow 5 or even 9 GB/s NVMe with much higher latencies etc. We can't count the SSD as RAM, and that's what you would be doing by calling it a 4x increase, which it obviously isn't.
Rift Apart, DS etc. are native PS5 games, touted by their devs as pushing the PS5's I/O.
 
They do, at 2.5 to 2.7 times the RAM footprint of the prior generation.

Your discussion all revolves around vague definitions of "a lot more", "generational leap" etc. Once you get to a reasonable base you will stop seeing the bygone days' 8x and 16x memory increases. PCs haven't had 8x to 16x memory increases in decades. You're not going to get that anymore.
I know it would be unrealistic to expect another 8x or 16x increase, though not because it wouldn't be highly desirable or useful; it's only because improvements in the cost of memory per GB have slowed to glacial levels.

There is nothing 'vague' about this, though. Huge improvements in memory capacity have always been a major factor that provides the headroom for developers to push their ambitions to much greater heights and enables that 'next generation' leap. Without it, you might be able to go a lot faster, but not necessarily do a lot more.

A simple 2x (or a bit more) increase in RAM is historically a tiny increase, especially for a generation that lasted seven years. This alone would have been woefully insufficient for next-gen machines. This was undoubtedly a massive factor that necessitated the use of SSDs in the new consoles. And there's a reason MS have talked plenty about effective 'memory multipliers' using technology outside raw RAM increases. Developers are absolutely going to need this stuff.
 
It's not just the memory; in raw power the GPUs didn't increase so much either, compared to previous generations. DF called it before: the CPUs are the biggest leap this generation. But that had a lot to do with the gimped Jaguar tablet CPUs, which were at Core 2 Quad Q6600 level (2006/PS3 generation). More capable than the Cell/Xenon for sure, but they weren't really that impressive in 2013.
 
Nah, not really. I'd rather have 560 GB/s (or 448 GB/s) for VRAM, or basically anything else, than a slow 5 or even 9 GB/s NVMe with much higher latencies etc. We can't count the SSD as RAM, and that's what you would be doing by calling it a 4x increase, which it obviously isn't.
Rift Apart, DS etc. are native PS5 games, touted by their devs as pushing the PS5's I/O.
I have absolutely no clue what you're talking about. You sound all over the place here.

You can have both the ~500 GB/s memory bandwidth and a fast 3 GB/s+ storage bandwidth with super low latency. That's what the consoles have. :/ This isn't an either/or situation.

"We can't count the SSD as RAM" - I really don't get what you're even trying to say here. Of course it isn't RAM. You really didn't understand my whole paragraph if you thought that's what I was suggesting. I was saying that the fast SSDs here enable devs to get far more from the RAM that does exist at any given point in time than they could before, which will work as an effective 'memory multiplier'.

This is an absolutely critical piece of the puzzle that will enable these machines to provide a 'next gen' experience, because the simple RAM doubling alone would be highly insufficient for that.

So when we're looking at the target visuals for Spider-Man 2 there, it's definitely not just a matter of the RAM increase alone. The huge increase in I/O will have a large effect on this as well.
 
I think even with standard NVMe PCIe 3/4 speeds there shouldn't be any problems. With DirectStorage/GPU decompression there are certainly no problems or concerns. I think it's vice versa: the PC can probably load/stream and write things much faster. We're up at 7 GB/s raw speeds, and with GPU decompression at least 14 GB/s. The I/O stack is greatly improved with DirectStorage and W11 as well. Then we have DDR5/PCIe 5, direct data paths etc. as well, already today. Things are still moving fast.

But a few points:

1. Games have to use DirectStorage in the first place, and on PC it can take a long time for that to become standard in games and game engines.
2. GPU decompression at 14 GB/s will still be lower than PS5's maximum.
3. PS5's I/O is proven in the real world, with a handful of games already loading in under 2 seconds; DirectStorage is not proven in the real world.
4. What if Sony release a PS5 Pro with even faster speeds? PC will be behind again.
5. Nvidia's numbers for RTX IO make no sense.
 
It's not just the memory; in raw power the GPUs didn't increase so much either, compared to previous generations. DF called it before: the CPUs are the biggest leap this generation. But that had a lot to do with the gimped Jaguar tablet CPUs, which were at Core 2 Quad Q6600 level (2006/PS3 generation). More capable than the Cell/Xenon for sure, but they weren't really that impressive in 2013.
The GPUs definitely made a pretty sizeable increase, though. :/

We've gone from 1.3 and 1.8 TF machines to 10 and 12 TF machines, and that's with these newer flops 'going further' than they used to in actual usage, along with a range of new features. It's not a gargantuan improvement, but it's a minimum 5x increase and entirely respectable.
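For what it's worth, the ratios from the commonly quoted nominal peak figures pencil out like this, ignoring the per-flop architectural gains entirely:

```python
# Nominal peak compute in TFLOPS, as commonly quoted; ignores
# architectural efficiency gains, which push the real gap further.
last_gen = {"PS4": 1.84, "Xbox One": 1.31}
this_gen = {"PS5": 10.28, "Series X": 12.15}

for old, new in (("PS4", "PS5"), ("Xbox One", "Series X")):
    ratio = this_gen[new] / last_gen[old]
    print(f"{old} -> {new}: {ratio:.1f}x")
# PS4 -> PS5: 5.6x
# Xbox One -> Series X: 9.3x
```

So "minimum 5x" holds on paper even before counting the per-flop improvements.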
 
I have absolutely no clue what you're talking about. You sound all over the place here.

You can have both the ~500 GB/s memory bandwidth and a fast 3 GB/s+ storage bandwidth with super low latency. That's what the consoles have. :/ This isn't an either/or situation.

"We can't count the SSD as RAM" - I really don't get what you're even trying to say here. Of course it isn't RAM. You really didn't understand my whole paragraph if you thought that's what I was suggesting. I was saying that the fast SSDs here enable devs to get far more from the RAM that does exist at any given point in time than they could before, which will work as an effective 'memory multiplier'.

This is an absolutely critical piece of the puzzle that will enable these machines to provide a 'next gen' experience, because the simple RAM doubling alone would be highly insufficient for that.

So when we're looking at the target visuals for Spider-Man 2 there, it's definitely not just a matter of the RAM increase alone. The huge increase in I/O will have a large effect on this as well.

Of course the machines do benefit from the faster read speeds from storage, and it does help out the low amount of total RAM. But it doesn't equal a 4x memory increase (32 GB GDDR6) either.
All you have is a screenshot of a Spider-Man game, which tells us next to nothing about what the game is doing. I think looking at Rift Apart gives us a good idea of what to expect. DF said it many times for this generation: 'keep expectations in check'. Improvements will be made, of course, but it's not like these new x86/off-the-shelf hardware machines are so hard to program for that we need to wait an entire gen to see what they can do. That wasn't even the case with the PS4. It was with the PS2/PS3.

1. Games have to use DirectStorage in the first place, and on PC it can take a long time for that to become standard in games and game engines.
2. GPU decompression at 14 GB/s will still be lower than PS5's maximum.
3. PS5's I/O is proven in the real world, with a handful of games already loading in under 2 seconds; DirectStorage is not proven in the real world.
4. What if Sony release a PS5 Pro with even faster speeds? PC will be behind again.
5. Nvidia's numbers for RTX IO make no sense.

1. Maybe; same for the consoles, it seems. The amount of games using this new hardware is kinda dire.
2. The 14 GB/s was sustained; PS5 won't even be close. With GPU decompression, burst speeds could be over 30 GB/s. On raw speeds you're looking at 7 GB/s already today. PCIe 5 will be substantially faster.
3. Spider-Man loads around 2 seconds slower on the PC, and that's before GPU decompression.
4. PC was never behind, not even at the launch of the PS5.
5. But Cerny's numbers do, right.

The GPUs definitely made a pretty sizeable increase, though. :/

We've gone from 1.3 and 1.8 TF machines to 10 and 12 TF machines, and that's with these newer flops 'going further' than they used to in actual usage, along with a range of new features. It's not a gargantuan improvement, but it's a minimum 5x increase and entirely respectable.

It's not about whether they made a sizeable increase or not; many think they didn't, many others think they did. 'Modern/newer flops' was true for every generation, not just this one. From G70 to GCN was probably a much larger increase per flop than from GCN to RDNA.
PS3 to PS4 GPU-wise was a much, much larger increase, on pure GF/TF, and even more so considering the arch improvements made back then. The generational leaps are just smaller, even on PC to an extent.
 
1. Maybe; same for the consoles, it seems. The amount of games using this new hardware is kinda dire.
2. The 14 GB/s was sustained; PS5 won't even be close. With GPU decompression, burst speeds could be over 30 GB/s. On raw speeds you're looking at 7 GB/s already today. PCIe 5 will be substantially faster.
3. Spider-Man loads around 2 seconds slower on the PC, and that's before GPU decompression.
4. PC was never behind, not even at the launch of the PS5.
5. But Cerny's numbers do, right.
1. True, but PC has had these kinds of speeds for how many years now? Consoles have had them for all of 5 minutes in comparison and are already doing more with the tech.

2. Got proof that PS5 won't get close? Got proof of over 30 GB/s burst speeds? There's more to I/O performance than just raw drive speeds, and this is where PC lags massively behind.

3. Spider-Man loads fast even on an HDD, so that's not a good basis for your point.

4. PC was behind, and it still is.

5. Cerny's numbers have been backed up by RAD Game Tools and by games that have already been released. Nvidia's, on the other hand, make no sense, and I would be more than happy for you to explain them to me if you are willing.
 
Of course the machines do benefit from the faster read speeds from storage, and it does help out the low amount of total RAM. But it doesn't equal a 4x memory increase (32 GB GDDR6) either.
Why not?
All you have is a screenshot of a Spider-Man game, which tells us next to nothing about what the game is doing.
Well it's technically a screengrab from the actual trailer of the game.

You can argue that this tells us 'next to nothing' about the actual game's technical makeup, but I'm not interested in going hugely into that. I'd just say I disagree, and that if you think Ratchet and Clank represents anywhere near the peak of what can be done, you'll be proven wrong soon enough.
It's not about whether they made a sizeable increase or not; many think they didn't, many others think they did. 'Modern/newer flops' was true for every generation, not just this one. From G70 to GCN was probably a much larger increase per flop than from GCN to RDNA.
PS3 to PS4 GPU-wise was a much, much larger increase, on pure GF/TF, and even more so considering the arch improvements made back then. The generational leaps are just smaller, even on PC to an extent.
I mean, this is really going off on a tangent here, and not really relevant to what I was trying to discuss in the first place. I'll just say it's still a big enough leap to produce a large improvement in graphics.
 
1. True, but PC has had these kinds of speeds for how many years now? Consoles have had them for all of 5 minutes in comparison and are already doing more with the tech.

2. Got proof that PS5 won't get close? Got proof of over 30 GB/s burst speeds? There's more to I/O performance than just raw drive speeds, and this is where PC lags massively behind.

3. Spider-Man loads fast even on an HDD, so that's not a good basis for your point.

4. PC was behind, and it still is.

5. Cerny's numbers have been backed up by RAD Game Tools and by games that have already been released. Nvidia's, on the other hand, make no sense, and I would be more than happy for you to explain them to me if you are willing.

1. Which basically means the PC was held back for an entire generation.
2. Proof that it does?
3. It certainly loads slower on an HDD, especially in-game. You're going to be limited a lot by the mechanical drive.
4. With 7 GB/s NVMe available before the PS5's release date? Nah.
5. I'd never put much stake in Cerny, but hey, that's up to you if you do.


Consoles have 16 GB of GDDR6; the SSD doesn't make it 32. Neither does it increase rendering (teraflop) capabilities in the manner you are thinking it does.

Well it's technically a screengrab from the actual trailer of the game.

You can argue that this tells us 'next to nothing' about the actual game's technical makeup, but I'm not interested in going hugely into that. I'd just say I disagree, and that if you think Ratchet and Clank represents anywhere near the peak of what can be done, you'll be proven wrong soon enough.

It's just very hard to say technically, from a single screenshot from a trailer of a game far from release, what it is doing specifically with hardware components.
These consoles since the PS4 are easier to develop for; with the PS2, the difference between the first and last games was quite something. We're not seeing such differences anymore, not even with the PS4.

I mean, this is really going off on a tangent here, and not really relevant to what I was trying to discuss in the first place. I'll just say it's still a big enough leap to produce a large improvement in graphics.

Yeah, that's your assessment and that's fine. DF and other channels have mentioned many times that this generational leap is the smallest one so far. It's the same on the enemy side (PC), so it doesn't really matter in that context. It's not that the leap is too small for a new gen; it's just that diminishing returns are a thing now, hence the importance of AI/ML, DLSS, ray tracing, loading speeds/streaming etc.
 
Consoles have 16 GB of GDDR6; the SSD doesn't make it 32. Neither does it increase rendering (teraflop) capabilities in the manner you are thinking it does.
I still think you didn't understand what I was saying above. No idea why you're mixing up GPU teraflops in this, or suggesting that I think the I/O increases GPU capabilities or something. :/ That's very much way off what I was explaining.

But otherwise, yes, the fast I/O from these NVMe SSDs will work as an effective memory multiplier. I explained above already how this works. It is an absolutely essential part of these new consoles, because the simple doubling of memory is otherwise woeful and would hobble the systems painfully. They need the increased I/O to be a major factor in allowing devs to get much more from the memory that does exist.

It's just very hard to say technically, from a single screenshot from a trailer of a game far from release, what it is doing specifically with hardware components.
Nobody is making any detailed statements on this, but of course the I/O aspects are going to be utilized in an impactful way for a 1st party PS5 title from one of Sony's most technically competent developers who already have more than a foot in the door in working with the PS5.

The visuals at any given point are not going to be limited by memory alone, with the I/O having no impact on things. Insomniac are far too good to waste the potential of the hardware like that. Zero chance.
These consoles since the PS4 are easier to develop for; with the PS2, the difference between the first and last games was quite something. We're not seeing such differences anymore, not even with the PS4.
We've barely seen anything yet. This generation hasn't even really started.
Yeah, that's your assessment and that's fine. DF and other channels have mentioned many times that this generational leap is the smallest one so far. It's the same on the enemy side (PC), so it doesn't really matter in that context. It's not that the leap is too small for a new gen; it's just that diminishing returns are a thing now, hence the importance of AI/ML, DLSS, ray tracing, loading speeds/streaming etc.
Digital Foundry is NOT in disagreement with me on this like you're making it sound. They've been perfectly complimentary of the GPU upgrades in these new consoles.

And no, diminishing returns are not a thing *now*. The nature of diminishing returns is that they always exist. Diminishing returns also don't mean that very large improvements aren't still possible.
 
A simple 2x (or a bit more) increase in RAM is historically a tiny increase, especially for a generation that lasted seven years. This alone would have been woefully insufficient for next-gen machines. This was undoubtedly a massive factor that necessitated the use of SSDs in the new consoles. And there's a reason MS have talked plenty about effective 'memory multipliers' using technology outside raw RAM increases. Developers are absolutely going to need this stuff.

Yes, advances like SFS act like memory multipliers. So the 2.7x memory increase may now be closer to 6.75x with SFS, and the low-latency I/O means less needs to be buffered.

The need for increased memory quantity is somewhat tied to rendering resolution. Even on a cost-no-object next-gen console, I don't see games getting 96-128 GB (a loose 8x generational increase) having any benefit over getting only 24-32 GB. They would need to target 8K or higher for that difference to be noticeable, and that would have a huge knock-on impact on game development pipelines, in addition to game distribution needing higher-capacity storage too.
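The 6.75x figure above is just the raw capacity increase multiplied by an assumed SFS efficiency factor. Microsoft has quoted roughly 2x-3x effective texture memory savings for SFS; 2.5 is the midpoint used here, so treat the result as a rough upper bound rather than a guarantee:

```python
# SFS multiplies *texture* memory efficiency, not all of RAM, so this
# is an optimistic back-of-the-envelope number, not a guarantee.
raw_capacity_increase = 2.7   # ~2.5-2.7x RAM footprint vs. last gen
sfs_factor = 2.5              # midpoint of the 2x-3x range quoted for SFS

effective = raw_capacity_increase * sfs_factor
print(f"effective multiplier: {effective:.2f}x")  # 6.75x
```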
 
But otherwise, yes, the fast I/O from these NVMe SSDs will work as an effective memory multiplier. I explained above already how this works. It is an absolutely essential part of these new consoles, because the simple doubling of memory is otherwise woeful and would hobble the systems painfully. They need the increased I/O to be a major factor in allowing devs to get much more from the memory that does exist.

That 4x increase in memory never happened; it's a 2x increase. The SSD helps out, but it doesn't equal 32 GB of GDDR6 RAM.

Nobody is making any detailed statements on this, but of course the I/O aspects are going to be utilized in an impactful way for a 1st party PS5 title from one of Sony's most technically competent developers who already have more than a foot in the door in working with the PS5.

The visuals at any given point are not going to be limited by memory alone, with the I/O having no impact on things. Insomniac are far too good to waste the potential of the hardware like that. Zero chance.

I'm sure the studio behind Rift Apart wasn't lying when they said they maxed out the PS5's capabilities. DF had an interview (via stream) with the developers of this game.
Yes, they can do more, but we're probably looking at PS4-like improvements (Shadow Fall/HZD to Ghost of Tsushima, The Last of Us Part II etc.).

We've barely seen anything yet. This generation hasn't even really started.

Rift Apart, DS etc. should be good indications, same for the UE5 technology demonstrator. Around that class of experience is what we can expect.


Digital Foundry is NOT in disagreement with me on this like you're making it sound. They've been perfectly complimentary of the GPU upgrades in these new consoles.

And no, diminishing returns are not a thing *now*. The nature of diminishing returns is that they always exist. Diminishing returns also don't mean that very large improvements aren't still possible.

DF mentioned multiple times that this generation is the smallest leap so far. It's still a large enough improvement, but it isn't the kind of hardware leap we have seen before.
 