Next-Generation NVMe SSD and I/O Technology [PC, PS5, XBSX|S]

Some extreme use cases for the PS5's I/O system would be designing something like Doctor Strange's portal rooms in the Sanctum, where each door leads to a completely different biome that is visible through the door/portal, and you can step into each one instantly, maybe even go back and forth.


Another interesting concept is the Tesseract scene in Interstellar, where you can quickly scroll through the space and time of a single region.


However, all of these things are perfectly doable with enough RAM, or with a combination of RAM and a fast SSD, or some clever data planning and management.
The point of the SSD in the PS5 wasn't something revolutionary that could never have been achieved before. The primary point of the technology was to give the PS5 more headroom than its RAM alone would have allowed.
There was no hyperbole. Parallels were drawn with the streaming solutions of past games such as Crash Bandicoot, Soul Reaver and Jak and Daxter. Only this time, the SSD made this an exponentially more efficient next-gen iteration, allowing devs to pump and access more detail and a greater volume of assets, faster and more easily than past solutions. The SSD serves as a (slower, thus partial) extension of RAM. If you have enough RAM, you can achieve the same result.
Crash Bandicoot on PS1 serves as an interesting example of that potential, since that game pumped out more detail than any other PS1 game thanks to the devs' innovative streaming solution. Other devs thought Sony had provided ND with unique tools because they couldn't figure out how the PS1 could feed that much detail through its tiny RAM.
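
To make the "slower, thus partial extension of RAM" idea concrete, here's a minimal sketch of a streaming residency pool: a fixed RAM budget holds hot assets, and anything that misses has to come off the SSD. The class, the names and the LRU policy are all mine for illustration, not any real engine's design.

```cpp
// A fixed RAM budget holds hot assets; anything evicted has to be re-read
// from the SSD on the next request. Hypothetical names, illustrative only.
#include <cstdint>
#include <list>
#include <string>
#include <unordered_map>

struct Asset { std::string id; size_t bytes; };

class StreamingPool {
    size_t budget_, used_ = 0;                 // RAM budget in bytes
    std::list<Asset> lru_;                     // front = most recently used
    std::unordered_map<std::string, std::list<Asset>::iterator> index_;
public:
    explicit StreamingPool(size_t budgetBytes) : budget_(budgetBytes) {}

    // Returns true on a RAM hit; false means the asset had to be "streamed"
    // in from SSD (the path a faster drive makes cheaper).
    bool request(const std::string& id, size_t bytes) {
        if (auto it = index_.find(id); it != index_.end()) {
            lru_.splice(lru_.begin(), lru_, it->second);   // refresh recency
            return true;
        }
        while (used_ + bytes > budget_ && !lru_.empty()) { // evict cold data
            used_ -= lru_.back().bytes;
            index_.erase(lru_.back().id);
            lru_.pop_back();
        }
        lru_.push_front({id, bytes});          // the SSD read lands here
        index_[id] = lru_.begin();
        used_ += bytes;
        return false;
    }
};

int main() {
    StreamingPool pool(256u << 20);                    // pretend 256 MiB of headroom
    bool hit1 = pool.request("biome_A", 200u << 20);   // miss: streamed from SSD
    bool hit2 = pool.request("biome_A", 200u << 20);   // hit: already in RAM
    return (hit1 == false && hit2 == true) ? 0 : 1;
}
```

The useful intuition: a faster SSD shrinks the cost of the miss path, while more RAM shrinks how often you take it. Both get you to the same place.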
 
Ratchet itself shows that you need a lot more RAM/bandwidth and CPU power in a game without the SSD and its attendant subsystems. This proves that the PS5 and the SSD are working as intended. That doesn't mean the PS5 can work miracles: not only is the SSD still limited by its own speed, it's limited by the RAM/bandwidth and CPU just like usual.

We won't get to what Cerny was talking about for a number of years, if that is where the industry is heading. But it's a foot in the door. I'd certainly expect an SSD with a similar subsystem to be in every console after this generation, possibly including MS's next machine. And Nintendo will eventually get there too.
 
Microsoft already have a similar subsystem in the Xbox. They literally have a hardware decompression block and a texture compression format designed for it. They even have a texture filtering method built in for use with differing mip levels across tiles of the same texture - that's the level of granularity they were/are anticipating.
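
As a toy illustration of that granularity (conceptual only, not the actual DirectX Sampler Feedback API): with a tiled texture, each tile can have a different mip level resident, so a sample has to clamp to whatever is actually loaded for its tile.

```cpp
// Toy model of per-tile mip residency: neighbouring tiles of one texture can
// sit at different mip levels, so a sample clamps to the coarsest data that
// streaming has actually made resident for its tile. Not a real API.
#include <algorithm>
#include <cstdio>
#include <vector>

struct TiledTexture {
    int tilesX, tilesY;
    std::vector<int> residentMip;  // per tile: finest mip loaded (0 = full res)

    int sampleMip(float u, float v, int desiredMip) const {
        int tx = std::min(int(u * tilesX), tilesX - 1);
        int ty = std::min(int(v * tilesY), tilesY - 1);
        // Can't sample finer than what is resident for this tile.
        return std::max(desiredMip, residentMip[ty * tilesX + tx]);
    }
};

int main() {
    TiledTexture t{4, 4, std::vector<int>(16, 3)};  // whole texture coarse (mip 3)
    t.residentMip[5] = 0;                           // one hot tile fully streamed in
    std::printf("tile 5 samples mip %d, elsewhere mip %d\n",
                t.sampleMip(0.3f, 0.3f, 0), t.sampleMip(0.9f, 0.9f, 0));
    return 0;
}
```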

The way that Sony have so thoroughly dominated mindspace on the issue of asset streaming is bizarre. It's like the whole "PS5 has a Primitive Shader zomg RDNA 4 Cerny Tech!!" stuff that resonated across the gaming space for 3 years.
 
The way that Sony have so thoroughly dominated mindspace on the issue of asset streaming is bizarre. It's like the whole "PS5 has a Primitive Shader zomg RDNA 4 Cerny Tech!!" stuff that resonated across the gaming space for 3 years.
I think you are exaggerating how big it was.
 
I think you are exaggerating how big it was.
The recent posts above were pretty normal, actually, 3 years ago. I think a lot of 'streaming' and REYES and everything was attributed to what Sony did with the PS5 I/O stack. We've had many discussions, and many people pointed at things like cache scrubbers, latency and raw bandwidth, decompressors etc. when there was no way to test any of it.

Oddly, at the same time, MS arrived at the exact same conclusions as Sony and even launched within a month of the PS5, but invested a little less in the solution and was pretty much deemed a follower.

I'm going to have to agree with Function here on that statement. We heavily over-attribute to Sony what was largely the whole industry heading in that direction anyway.
 
Microsoft already have a similar subsystem in the Xbox. They literally have a hardware decompression block and a texture compression format designed for it. They even have a texture filtering method built in for use with differing mip levels across tiles of the same texture - that's the level of granularity they were/are anticipating.

The way that Sony have so thoroughly dominated mindspace on the issue of asset streaming is bizarre. It's like the whole "PS5 has a Primitive Shader zomg RDNA 4 Cerny Tech!!" stuff that resonated across the gaming space for 3 years.

It wasn't gamers hyping it up though, it was actual 3rd party developers. If we're being honest about the past, console fans were initially upset with the PS5 spec reveal because the teraflop metric wasn't impressive to them relative to the Xbox specs, and they didn't understand the importance of the SSD and I/O. It wasn't until actual developers started discussing how revolutionary it was beyond both the Xbox and PC solutions that opinions shifted. Developers explained that the PS5 I/O had a distinct advantage over other platforms. I don't think it's out of line to draw comparisons with Nvidia vs AMD RT. Yes, both GPUs support hardware RT in some way, but Nvidia is clearly far above AMD.

I'd bet the farm that GTA 6 will break the mold of other multiplatform games and have plenty of the game's performance and features scale with I/O, just as certain features scale with GPU power. I imagine the RAGE engine and the PS5's I/O are a match made in heaven.
 
Microsoft already have a similar subsystem in the Xbox. They literally have a hardware decompression block and a texture compression format designed for it. They even have a texture filtering method built in for use with differing mip levels across tiles of the same texture - that's the level of granularity they were/are anticipating.

The way that Sony have so thoroughly dominated mindspace on the issue of asset streaming is bizarre. It's like the whole "PS5 has a Primitive Shader zomg RDNA 4 Cerny Tech!!" stuff that resonated across the gaming space for 3 years.
I'm not trying to start a console war here. I'm just saying the systems in these machines are good for future-proofing. So Sony invested more in it initially; good for them. The technology will be utilized well by developers, and I don't think either the PS5 or the X has any "wasted ideas that were just marketing fluff", even if they are obviously still limited systems, being fixed boxes at a relatively bargain price point.

The fact that people are so touchy about who did what and when, and what the "narrative" is, is what's most annoying to me, because one can't even talk in generalities without being taken out of context on either side.

Everyone just calm down
 
The recent posts above were pretty normal, actually, 3 years ago. I think a lot of 'streaming' and REYES and everything was attributed to what Sony did with the PS5 I/O stack. We've had many discussions, and many people pointed at things like cache scrubbers, latency and raw bandwidth, decompressors etc. when there was no way to test any of it.

Oddly, at the same time, MS arrived at the exact same conclusions as Sony and even launched within a month of the PS5, but invested a little less in the solution and was pretty much deemed a follower.

I'm going to have to agree with Function here on that statement. We heavily over-attribute to Sony what was largely the whole industry heading in that direction anyway.


There is nothing odd about it. Everyone recognised that the Xbox also had a similar solution. The only difference is that the PS5 got more attention for showing, at least on paper, an even more advanced and significantly more efficient hardware implementation.

Everything else is in the imagination of fanboy snowflaking.
 
Ratchet itself shows that you need a lot more RAM/bandwidth and CPU power in a game without the SSD and its attendant subsystems. This proves that the PS5 and the SSD are working as intended.

Ratchet doesn't say anything about needing a lot more bandwidth or CPU power on the PC to match the PS5's solution. We see that with GPU decompression disabled on the PC side, even CPUs in the PS5's ballpark like the 3600X can handle the decompression requirements with no impact on gameplay.

That's not to say the decompression unit has no use. It's clearly a great thing if you want to max out your drive's throughput, particularly if you're already CPU-limited in a gameplay scenario. But it turns out R&C does neither of those things, at least on fairly modest PCs. TLOU is probably a better example of the decompression unit offloading the CPU to a gameplay-impacting degree, even on higher-end systems. Although, given that the game's streaming requirements should be pretty modest given its lineage, and the other issues the game has seen, that may be more down to a poor implementation than a real hardware problem.
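
For anyone who wants to sanity-check the CPU-side decompression point, a rough single-threaded harness along these lines gives a per-core MiB/s figure. I'm using zlib purely for convenience; the consoles use Kraken and BCPack, so treat anything this produces as indicative only.

```cpp
// Rough harness: how fast can one CPU thread inflate data? Useful for judging
// whether a game's streaming rate actually needs a hardware decompressor.
// Build: g++ -O2 inflate_bench.cpp -lz
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>
#include <zlib.h>

int main() {
    const std::size_t N = 64u << 20;                   // 64 MiB of test data
    std::vector<unsigned char> src(N);
    for (std::size_t i = 0; i < N; ++i) src[i] = (unsigned char)(i * 31 % 251);

    uLongf compLen = compressBound(N);
    std::vector<unsigned char> comp(compLen);
    compress(comp.data(), &compLen, src.data(), N);    // prepare a deflate stream

    std::vector<unsigned char> out(N);
    auto t0 = std::chrono::steady_clock::now();
    uLongf outLen = N;
    uncompress(out.data(), &outLen, comp.data(), compLen);  // the timed inflate
    double s = std::chrono::duration<double>(
                   std::chrono::steady_clock::now() - t0).count();
    std::printf("inflated %.0f MiB in %.3f s -> %.1f MiB/s per core\n",
                N / 1048576.0, s, N / 1048576.0 / s);
    return 0;
}
```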

In terms of bandwidth, I'm not really sure what you mean. What bandwidth? Agreed, a PC needs more RAM to achieve a similar result to the PS5 if it's not using an HDD. That's been something we've been saying here for years.



There is nothing odd about it. Everyone recognised that the Xbox also had a similar solution. The only difference is that the PS5 got more attention for showing, at least on paper, an even more advanced and significantly more efficient hardware implementation.

Everything else is in the imagination of fanboy snowflaking.

More efficient? It has a faster SSD, but that's largely it. In terms of implementation the two systems are pretty similar, in that they both use NVMe SSDs to transfer data directly into shared memory via a hardware decompression unit that can decompress the full data stream in real time. They both use custom APIs and firmware to drive the hardware, and despite Sony's promotion of special-sauce hardware like "Dedicated DMA Controllers, I/O Co-Processors, On-Chip RAM", these elements are really just the standard components you find on any decent SSD controller, and they likely feature in the Xbox as well. The only genuinely unique hardware element in the PS5 is the cache scrubbers, which are designed more as a CPU/GPU-side efficiency improvement (of debatable impact) to avoid excessive cache flushes during very heavy streaming.
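
Conceptually, both consoles' pipelines reduce to the same overlapped stages - NVMe read, hardware decompress, land in shared memory. A made-up two-thread sketch of that shape, with the queue standing in for the hardware handoff:

```cpp
// Illustrative only: the reader thread stands in for the NVMe + DMA stage,
// the consumer for the hardware decode block. Everything here is invented
// to show the overlapped-pipeline shape, not either console's real design.
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Chunk { int id; std::vector<char> data; };

std::queue<Chunk> q;
std::mutex m;
std::condition_variable cv;
bool done = false;

void reader(int chunks) {                          // "NVMe read" stage
    for (int i = 0; i < chunks; ++i) {
        Chunk c{i, std::vector<char>(1 << 20, 'x')};  // pretend 1 MiB read
        { std::lock_guard<std::mutex> lk(m); q.push(std::move(c)); }
        cv.notify_one();
    }
    { std::lock_guard<std::mutex> lk(m); done = true; }
    cv.notify_one();
}

void decompressor() {                              // "HW decode" stage
    for (;;) {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [] { return !q.empty() || done; });
        if (q.empty()) return;                     // drained and producer done
        Chunk c = std::move(q.front()); q.pop();
        lk.unlock();
        std::printf("decoded chunk %d into shared memory\n", c.id);
    }
}

int main() {
    std::thread r(reader, 4), d(decompressor);
    r.join(); d.join();
    return 0;
}
```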
 
The main thing Sony had was the fastest consumer-level SSD of its time. They came out of the gate with storage faster than the PC had (in a single SSD). This meant that in talk of console storage, the PS5 had a real flag-waver for attention. The PS5's SSD was also the focus of UE5's next-gen software reveal, solidifying it in core gamer mindshare. MS's solution worked but was neither the fastest nor the most controversial. So, like an Olympic race, we remember the winners, especially when they break records, and the guys caught using performance-enhancing drugs or cheating. We don't recall the also-rans, even if they also broke records; they are overshadowed by the number one.

It also strikes me as similar to the XBO talk around the Move Engines, which were largely rebranded GPU DMA units. If one console company holds a technical breakdown, calls out some tech in a slide and names it with Capital Letters, it's as if they alone have it and the other console misses out and is inferior. Thankfully we're smarter here and can consolidate our understanding over time to know what the machines are under the hood regardless of the public presentation, and have informed conversations about the different platforms regardless of how the rest of the internet might skew things. In particular, we can identify the differences and compare their results, as that's basically what this entire forum is about!
 
Ratchet doesn't say anything about needing a lot more bandwidth or CPU power on the PC to match the PS5's solution. We see that with GPU decompression disabled on the PC side, even CPUs in the PS5's ballpark like the 3600X can handle the decompression requirements with no impact on gameplay.

That's not to say the decompression unit has no use. It's clearly a great thing if you want to max out your drive's throughput, particularly if you're already CPU-limited in a gameplay scenario. But it turns out R&C does neither of those things, at least on fairly modest PCs. TLOU is probably a better example of the decompression unit offloading the CPU to a gameplay-impacting degree, even on higher-end systems. Although, given that the game's streaming requirements should be pretty modest given its lineage, and the other issues the game has seen, that may be more down to a poor implementation than a real hardware problem.

In terms of bandwidth, I'm not really sure what you mean. What bandwidth? Agreed, a PC needs more RAM to achieve a similar result to the PS5 if it's not using an HDD. That's been something we've been saying here for years.
I wasn't talking about anything in comparison to PCs, but about how Ratchet was legitimately showing off next-gen technology compared to last gen, like the PS4.

I was referencing DF's video where they compared lower-end PCs that were crapping out due to not having enough RAM, being tested on a slow SSD, or not having a strong enough CPU, or any combination of the three. And according to Rich, these were actually a best-case scenario compared to the PS4 and Xbox One, since the components tested, even at the low end, were notably more powerful than the base last-gen machines.

There are things possible now, due to current tech, that would have been either impossible or just not feasible without tons of systems dedicated solely to making them work in-game. And that's a good thing for the advancement of technology.
 
The main thing Sony had was the fastest consumer-level SSD of its time.

I'm not sure that it was. It was certainly the fastest when it was announced, which goes a long way towards building the hype - I still recall how the Xbox 360 was hyped as being faster than 2x 6800 Ultras in SLI (the fastest GPU on the planet) when it was announced - but by the time the PS5 actually launched I think there were comparable PCIe 4.0 drives available on the PC market as well.

 


There is nothing odd about it. Everyone recognised that the Xbox also had a similar solution. The only difference is that the PS5 got more attention for showing, at least on paper, an even more advanced and significantly more efficient hardware implementation.

Everything else is in the imagination of fanboy snowflaking.
I would say that Sony marketed well. There’s no reason to say that people are being imaginative or that there is any sort of snowflaking.

No one is hurt by this. Sony just markets well. To deny that it’s a factor seems awkward to me. You literally linked the main story that got everyone onto the narrative that only the PS5 was capable of this.

What’s interesting is going back in time to re-read some of these threads. As usual we had supporters and skeptics, and the narrative went back and forth. It was a healthy discussion.
 
Ratchet doesn't say anything about needing a lot more bandwidth or CPU power on the PC to match the PS5's solution. We see that with GPU decompression disabled on the PC side, even CPUs in the PS5's ballpark like the 3600X can handle the decompression requirements with no impact on gameplay.

That's not to say the decompression unit has no use. It's clearly a great thing if you want to max out your drive's throughput, particularly if you're already CPU-limited in a gameplay scenario. But it turns out R&C does neither of those things, at least on fairly modest PCs. TLOU is probably a better example of the decompression unit offloading the CPU to a gameplay-impacting degree, even on higher-end systems. Although, given that the game's streaming requirements should be pretty modest given its lineage, and the other issues the game has seen, that may be more down to a poor implementation than a real hardware problem.

In terms of bandwidth, I'm not really sure what you mean. What bandwidth? Agreed, a PC needs more RAM to achieve a similar result to the PS5 if it's not using an HDD. That's been something we've been saying here for years.



More efficient? It has a faster SSD, but that's largely it. In terms of implementation the two systems are pretty similar, in that they both use NVMe SSDs to transfer data directly into shared memory via a hardware decompression unit that can decompress the full data stream in real time. They both use custom APIs and firmware to drive the hardware, and despite Sony's promotion of special-sauce hardware like "Dedicated DMA Controllers, I/O Co-Processors, On-Chip RAM", these elements are really just the standard components you find on any decent SSD controller, and they likely feature in the Xbox as well. The only genuinely unique hardware element in the PS5 is the cache scrubbers, which are designed more as a CPU/GPU-side efficiency improvement (of debatable impact) to avoid excessive cache flushes during very heavy streaming.
I guess the problem is often that Sony's way is much easier to understand, and it also works the brute-force way, while the MS solution needs much more work to get to the same point. Yes, the Xbox also provides a lot of bandwidth etc., but to reach the same "peak", extra work is required to load only the parts that are needed, in a much more granular way.
Or we can simply call it way more complicated :)
But this should be the way of the future, as the current approach of ever-growing asset sizes can't be compensated for by the rest of the system in the long run. We currently see that in the GPU market, where even an RTX 4090 can be too slow because the code, assets etc. have just grown without optimization. Brute force no longer works so well at these levels of "data crunching".
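
A quick back-of-envelope of why the granular route matters, with invented but plausible numbers for a single large texture:

```cpp
// "Granular vs brute force": streaming a whole texture versus only the tiles
// a frame actually touches. The figures are invented purely to show the
// shape of the saving (an 8K BC7 texture is roughly 64 MiB; 64 KiB tiles).
#include <cstdio>

int main() {
    const double texSizeMiB   = 64.0;    // one 8K BC7 texture, roughly
    const double tilesTotal   = 1024;    // 64 MiB / 64 KiB tiles
    const double tilesVisible = 90;      // tiles a typical view actually samples

    double bruteForce = texSizeMiB;                            // load everything
    double granular   = texSizeMiB * (tilesVisible / tilesTotal);
    std::printf("brute force: %.1f MiB, granular: %.1f MiB (%.0f%% saved)\n",
                bruteForce, granular, 100.0 * (1.0 - granular / bruteForce));
    return 0;
}
```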
 
I'm not trying to start a console war here. I'm just saying the systems in these machines are good for future-proofing. So Sony invested more in it initially; good for them. The technology will be utilized well by developers, and I don't think either the PS5 or the X has any "wasted ideas that were just marketing fluff", even if they are obviously still limited systems, being fixed boxes at a relatively bargain price point.

The fact that people are so touchy about who did what and when, and what the "narrative" is, is what's most annoying to me, because one can't even talk in generalities without being taken out of context on either side.

Everyone just calm down


I'm calm as a cucumber, but I do think the Xbox Series has at least three hardware features that have failed to go beyond marketing hype so far.

1. Tier 2 VRS
2. Mesh Shaders
3. Extra bits for HW ML support

For all the talk from them in the beginning, we haven't seen a single game release take advantage of these. The responsibility is on Xbox studios to make games that leverage these so-called HW gamechangers. Sony only bragged about their I/O system, and I think that was warranted considering Demon's Souls, Ratchet, even Spider-Man to a lesser extent. Even with ML, Sony didn't mention anything about it during the marketing runs, but it has been implemented in Spider-Man: Miles Morales for muscle deformation and God of War Ragnarok for texture upsampling.
 
It is always hard to get visions onto the street, especially when they are hard to integrate (or at least mean extra work for small benefits). MS talked about the TF advantage because it was easy to understand (at least the message was easy). Sony talked a lot about absurdly high I/O bandwidth, because it was easy to understand (higher = better). It is more or less the new "64 bits" of the old console generations, which just hides that the "truth" is a bit more complex and that there is more than one way to reach the same goal. But I guess we can all agree that MS sometimes makes things a bit more complicated than they need to be :)
 
It wasn't gamers hyping it up though, it was actual 3rd party developers. If we're being honest about the past, console fans were initially upset with the PS5 spec reveal because the teraflop metric wasn't impressive to them relative to the Xbox specs, and they didn't understand the importance of the SSD and I/O. It wasn't until actual developers started discussing how revolutionary it was beyond both the Xbox and PC solutions that opinions shifted. Developers explained that the PS5 I/O had a distinct advantage over other platforms. I don't think it's out of line to draw comparisons with Nvidia vs AMD RT. Yes, both GPUs support hardware RT in some way, but Nvidia is clearly far above AMD.

I'd bet the farm that GTA 6 will break the mold of other multiplatform games and have plenty of the game's performance and features scale with I/O, just as certain features scale with GPU power. I imagine the RAGE engine and the PS5's I/O are a match made in heaven.

PS5's SSD/IO stuff is very good - outstanding even - for a 2020 system. And I remember developers being very pleased about it. I don't remember any developers talking about how it was revolutionary beyond the Xbox system though. Better perhaps, but not "revolutionary beyond".

So far, even Sony's first-party studios haven't delivered anything a reasonably modest PC can't match, or that wouldn't be very easily within reach of the current Xbox consoles. At "full level loads" the PS5 would in practice have a bit of an advantage if you could mostly eliminate CPU or other constraints, but in terms of traversal there are limits to how much data you need to change per second or per frame for a given data density across the game world - and the top end of that is heavily affected by resolution and platform memory.
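
The traversal point is easy to put into numbers. With invented figures for data density and movement speed (not measurements from any game):

```cpp
// Required streaming rate during traversal is roughly data density times how
// fast you cross the world. All figures are invented to show the relationship.
#include <cstdio>

int main() {
    const double mibPerMetre = 0.5;   // unique asset data per metre travelled
    const double walkSpeed   = 5.0;   // m/s on foot
    const double flySpeed    = 60.0;  // m/s in a fast vehicle

    std::printf("walking: %.1f MiB/s, flying: %.1f MiB/s\n",
                mibPerMetre * walkSpeed, mibPerMetre * flySpeed);
    return 0;
}
```

Even the vehicle case lands far below a ~5 GB/s raw NVMe link, which is part of why RAM capacity and CPU cost tend to bite before the SSD does.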

Rockstar are very disciplined in how geometry and material data are spread across maps and scenes - predictable and relatively stable streaming performance is core to how they work. Whatever they do will work excellently in all or almost all cases from a relatively modest NVMe drive. They have excellent artists creating great assets, with the memory budget always front and centre during creation.

I'm calm as a cucumber, but I do think the Xbox Series has at least three hardware features that have failed to go beyond marketing hype so far.

1. Tier 2 VRS
2. Mesh Shaders
3. Extra bits for HW ML support

For all the talk from them in the beginning, we haven't seen a single game release take advantage of these. The responsibility is on Xbox studios to make games that leverage these so-called HW gamechangers. Sony only bragged about their I/O system, and I think that was warranted considering Demon's Souls, Ratchet, even Spider-Man to a lesser extent. Even with ML, Sony didn't mention anything about it during the marketing runs, but it has been implemented in Spider-Man: Miles Morales for muscle deformation and God of War Ragnarok for texture upsampling.

IIRC Doom Eternal used Tier 2 VRS back in 2021.

It is a pity that it was Intel, and not MS, who used DP4a mixed precision to create an FSR-beating upscaler. The Xbox could easily run such an upscaler, and its absence is disappointing.
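
For reference, DP4a is just a four-wide int8 dot product accumulated into int32 - the same class of operation as the int8/int4 dot paths behind the Series consoles' "extra bits for HW ML". A scalar C++ model of the semantics (the real thing is a single GPU instruction):

```cpp
// Scalar model of DP4a semantics: dot product of four packed signed bytes,
// accumulated into a 32-bit integer. Matches e.g. CUDA's __dp4a behaviour.
#include <cstdint>
#include <cstdio>

int32_t dp4a(uint32_t a, uint32_t b, int32_t acc) {
    for (int i = 0; i < 4; ++i) {
        int8_t ai = int8_t((a >> (8 * i)) & 0xFF);  // extract signed byte lane
        int8_t bi = int8_t((b >> (8 * i)) & 0xFF);
        acc += int32_t(ai) * int32_t(bi);
    }
    return acc;
}

int main() {
    // lanes a = {1, 2, 3, 4}, b = {5, 6, 7, 8} -> 1*5 + 2*6 + 3*7 + 4*8 = 70
    uint32_t a = 0x04030201, b = 0x08070605;
    std::printf("dp4a = %d\n", dp4a(a, b, 0));
    return 0;
}
```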

Mesh Shaders are the future of geometry in the 3D pipeline - they're like AMD's RDNA1 primitive shaders (as seen in the PS5) but better. But better doesn't count for much until someone uses them - and if a game needs to get by with vertex shaders for compatibility reasons, it's probably not a priority to bother with Mesh Shaders. Fortunately, UE5 has incorporated Mesh Shaders, but I expect it will still take time for them to find adoption and have their use fully refined.
 