Next-Generation NVMe SSD and I/O Technology [PC, PS5, XBSX|S]

So Ratchet and Clank was ~5GB/s after decompression at certain points, so assuming a ~2:1 compression ratio, you'd need a drive capable of ~3GB/s. So basically a Gen3 drive.

Some people were acting like it was saturating the entire 5GB/s raw bandwidth of the PS5 SSD.

It sounds like he couldn't tell from the profiler whether that was raw bandwidth or decompressed bandwidth. Since the game works perfectly fine on a 3.5GB/s drive, we can fairly safely assume it was decompressed bandwidth.

So if they're running into CPU bottlenecks with 2.5-3GB/s raw bandwidth... was the PS5 I/O overengineered?
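
For reference, the back-of-envelope math in that post works out like this (a minimal sketch; the 2:1 ratio is just the assumption stated above):

```python
# Back-of-envelope check of the quoted estimate.
# Assumption from the post above: ~5 GB/s needed *after* decompression,
# and a ~2:1 average compression ratio on disk.

decompressed_rate_gbps = 5.0   # GB/s the game wants post-decompression
compression_ratio = 2.0        # assumed average ratio (compressed -> decompressed)

raw_rate_needed = decompressed_rate_gbps / compression_ratio
print(f"Raw read rate needed from the drive: ~{raw_rate_needed:.1f} GB/s")
# ~2.5 GB/s raw, i.e. comfortably within what a decent PCIe Gen3 NVMe drive
# (~3-3.5 GB/s sequential) can sustain, which matches the point in the quote.
```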
 
He's still pushing the "supercharged SSD" in the PS5 over two years after launch. The determination of some is truly out of this world. That SSD hype died a long, long time ago.
All his efforts are for nothing; the PS5 SSD leaves me at "it's fast, but it was outpaced a long time ago".

Where did I push this? Like I said, PS5 SSD speed is not important. My first comment was more about Returnal, a game tailored around the SSD, being able to run from an HDD.

And as for Albuquerque, he basically called an Insomniac dev a liar when the dev said they load the full level after each portal because they can, they don't need to use any trickery, and it is less work. Again, this is possible on PC too; there are already faster SSDs and a fast storage API with Direct Storage 1.1.

It was debunked by, I think, a @Karamazov video where he traverses a portal during an on-rails section, pauses the game, and moves the camera inside the level (photo mode?), and everything is already loaded.
 
So Ratchet and Clank was ~5GB/s after decompression at certain points, so assuming a ~2:1 compression ratio, you'd need a drive capable of ~3GB/s. So basically a Gen3 drive.

Some people were acting like it was saturating the entire 5GB/s raw bandwidth of the PS5 SSD.

It sounds like he couldn't tell from the profiler whether that was raw bandwidth or decompressed bandwidth. Since the game works perfectly fine on a 3.5GB/s drive, we can fairly safely assume it was decompressed bandwidth.

So if they're running into CPU bottlenecks with 2.5-3GB/s raw bandwidth... was the PS5 I/O overengineered?

They don't run into CPU bottlenecks because of the bandwidth; again, they said themselves the game engine architecture is not good enough. They optimized as much as they could, but they need to refactor the game engine. It was enough to make Spider-Man.

[image: jzCHabM.png]




During my search I came across the first page of this thread. In the end, what the dev was saying is true. And I will repeat: this is possible on PC too, probably faster with a better CPU and faster SSD speeds with Direct Storage 1.1.
 
They don't run into CPU bottlenecks because of the bandwidth; again, they said themselves the game engine architecture is not good enough. They optimized as much as they could, but they need to refactor the game engine. It was enough to make Spider-Man.
I know that. Regardless, you have a game which very specifically only uses that bandwidth at select moments when shifting dimensions. A game like Spider-man doesn't even come close to putting that much stress on the I/O while streaming in data during gameplay. Now, unless a future Spider-man game has you shifting between all of the Spider-verses at the touch of a button (that would be sick), then even if they refactor the engine, trying to stream that much through during gameplay they're still going to run into other CPU bottlenecks holding them back, I believe.

But it's better to have the engine holding the hardware back than the hardware holding the engine back, I'd think.
 
I know that. Regardless, you have a game which very specifically only uses that bandwidth at select moments when shifting dimensions. A game like Spider-man doesn't even come close to putting that much stress on the I/O while streaming in data during gameplay. Now, unless a future Spider-man game has you shifting between all of the Spider-verses at the touch of a button (that would be sick), then even if they refactor the engine, trying to stream that much through during gameplay they're still going to run into other CPU bottlenecks holding them back, I believe.

But it's better to have the engine holding the hardware back than the hardware holding the engine back, I'd think.

They are not so far from reaching the ceiling; they were not using Oodle Texture. That means the ceiling is more around 8-9 GB/s, and 5 GB/s is not so far from that.
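
To illustrate where those ceiling numbers come from (a rough sketch; the PS5 raw rate and the "8-9 GB/s typical, up to ~22 GB/s peak" range are the publicly quoted figures, but the ratios below are assumed example values, not measurements):

```python
# Rough illustration of how compression ratio moves the effective ceiling.
# The PS5 drive's raw rate (5.5 GB/s) is public; the ratios are ASSUMED
# example values for illustration.

raw_rate_gbps = 5.5  # PS5 SSD raw read rate

scenarios = {
    "Kraken only (typical, assumed ~1.5:1)":       1.5,
    "Kraken + Oodle Texture (assumed ~1.8:1)":     1.8,
    "Highly compressible data (peak, ~4:1)":       4.0,
}

for name, ratio in scenarios.items():
    print(f"{name}: ~{raw_rate_gbps * ratio:.1f} GB/s effective")
# Typical Kraken output lands around the 8-9 GB/s figure Sony quoted,
# so hitting ~5 GB/s decompressed is indeed not that far below the ceiling.
```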
 

Just a reminder: in R&C it's not just about the portals; even during gameplay they unload what's behind the player's viewpoint, and they are just beginning to use the tech. It's a great advancement over last gen and will benefit all platforms now that it's widely available.
And I don't think it's just a matter of the SSD or "just add more RAM", as MS and NVIDIA are developing their own tech such as SFS and RTX IO.
Great things and games to come. We should all rejoice instead of nitpicking.
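
On the "unload what's behind the player's viewpoint" point, here's a minimal sketch of the general idea (illustrative only, not Insomniac's actual streaming code; all names and thresholds are made up):

```python
# Toy eviction pass: mark streamed assets behind the camera (and far enough
# away) as candidates for unloading. Real engines use far more sophisticated
# visibility and priority heuristics.
from dataclasses import dataclass
import math

@dataclass
class StreamedAsset:
    name: str
    position: tuple  # (x, y, z) world position

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def assets_to_evict(assets, cam_pos, cam_forward, keep_radius=30.0):
    """Return assets that are behind the camera and outside a keep radius."""
    fwd = normalize(cam_forward)
    evict = []
    for a in assets:
        to_asset = tuple(p - c for p, c in zip(a.position, cam_pos))
        dist = math.sqrt(sum(c * c for c in to_asset))
        if dist < keep_radius:
            continue  # keep everything nearby regardless of direction
        facing = sum(t * f for t, f in zip(normalize(to_asset), fwd))
        if facing < 0.0:  # behind the view direction
            evict.append(a)
    return evict

assets = [StreamedAsset("city_block_A", (0, 0, 100)),
          StreamedAsset("city_block_B", (0, 0, -200))]
print([a.name for a in assets_to_evict(assets, (0, 0, 0), (0, 0, 1))])
# -> ['city_block_B']  (far behind the camera, so a candidate to unload)
```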
 
Right. The current-gen hardware upgrades across the board for all platforms will benefit more than a singular game feature in an extreme scenario like portals. People think too simplistically about game design, I think.
 
So if they're running into CPU bottlenecks with 2.5-3GB/s raw bandwidth... was the PS5 I/O overengineered?

The CPU limits were because of how their engine was set up, not because of the CPU hardware itself.

I'm sure some of their guys have stated somewhere that they'll address it for future titles.
 
I had heard before that Insomniac's engine was not really optimized for multithreading, which is surprising considering what they were able to do with the PS4.
 
I had heard before that Insomniac's engine was not really optimized for multithreading, which is surprising considering what they were able to do with the PS4.

If you want to understand, read this about multithreading in game engines:



[image: jzCHabM.png]

And with this image you can see which side of the multithreading spectrum they are on. They want to improve it, but no idea when they will find the time to do it. I hope it will be for Spiderman 2.
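
For anyone wanting a concrete picture of what "job-based multithreading" means versus a mostly serial main loop, here's a very stripped-down sketch (Python stand-in for illustration only; a real engine would use native worker threads and a dependency-aware job graph, and Python's GIL means this shows the structure rather than real CPU parallelism):

```python
# Contrast: one serial "do everything on the main thread" frame vs. fanning
# independent systems out as jobs and joining before render. Illustrative
# sketch only; all names are made up.
from concurrent.futures import ThreadPoolExecutor

def animate(entities):   return [f"anim({e})" for e in entities]
def simulate(entities):  return [f"phys({e})" for e in entities]
def cull(entities):      return [f"cull({e})" for e in entities]

entities = ["player", "npc_0", "npc_1", "vehicle_3"]

# Serial frame: each system runs back-to-back on one thread.
def frame_serial():
    return animate(entities), simulate(entities), cull(entities)

# Job-based frame: independent systems are submitted as jobs to a worker
# pool and the main thread joins on them before kicking off rendering.
def frame_jobs(pool):
    jobs = [pool.submit(fn, entities) for fn in (animate, simulate, cull)]
    return tuple(j.result() for j in jobs)

with ThreadPoolExecutor(max_workers=3) as pool:
    assert frame_serial() == frame_jobs(pool)  # same work, different scheduling
```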
 
I seem to recall someone arguing a while back in this thread (I don't recall who) that GPUs wouldn't be able to keep up with PS5's decompression rate. I may be misrepresenting that argument, but in any case, as a reference point here's an example of a 3080Ti sustaining over 18GB/s uncompressed at around 6.8GB/s input using Direct Storage:

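Quick arithmetic on those numbers (just restating the figures quoted above as a ratio; the PS5 comparison figures are the publicly quoted ones):

```python
# Effective compression ratio implied by the demo numbers quoted above,
# plus the publicly quoted PS5 figures for comparison.

ds_input_gbps  = 6.8   # compressed data read from the SSD (from the video)
ds_output_gbps = 18.0  # decompressed output on the 3080 Ti (from the video)

ratio = ds_output_gbps / ds_input_gbps
print(f"Implied compression ratio: ~{ratio:.2f}:1")   # ~2.65:1

ps5_raw_gbps = 5.5                # PS5 SSD raw read rate
ps5_typical_gbps = (8.0, 9.0)     # Sony's quoted typical Kraken output
print(f"PS5 typical decompressed range: {ps5_typical_gbps[0]}-{ps5_typical_gbps[1]} GB/s "
      f"from {ps5_raw_gbps} GB/s raw")
# So the GPU path here is already outputting roughly double the PS5's
# typical decompressed rate, and from a higher input rate too.
```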
 
I seem to recall someone arguing a while back in this thread (I don't recall who) that GPUs wouldn't be able to keep up with PS5's decompression rate. I may be misrepresenting that argument, but in any case, as a reference point here's an example of a 3080Ti sustaining over 18GB/s uncompressed at around 6.8GB/s input using Direct Storage:


What I find most impressive in the video is the fact that SSD makers are beginning to ship firmware optimized for Direct Storage. This is important and will help deliver better performance.
 
I seem to recall someone arguing a while back in this thread (I don't recall who) that GPUs wouldn't be able to keep up with PS5's decompression rate. I may be misrepresenting that argument, but in any case, as a reference point here's an example of a 3080Ti sustaining over 18GB/s uncompressed at around 6.8GB/s input using Direct Storage:


There are FPS differences in that video, which makes sense as the output is higher, but considering the somewhat simple test scene I wouldn't have expected any drop.

110fps vs 104fps.
 
There are FPS differences in that video, which makes sense as the output is higher, but considering the somewhat simple test scene I wouldn't have expected any drop.

110fps vs 104fps.

That's explained in the video as being because the faster streaming allows higher-res textures to be used, thus resulting in higher GPU load.
 
There are FPS differences in that video, which makes sense as the output is higher, but considering the somewhat simple test scene I wouldn't have expected any drop.

110fps vs 104fps.

They explain it as the demands of higher-quality textures, but I would definitely be interested to see (if it's possible) the GPU load for the decompression stage segregated out from the general rendering load.
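
For a rough sense of why the texture-quality explanation is at least plausible, here's a quick footprint calculation for block-compressed textures at different resolutions (BC7 at 1 byte per texel is an assumed example format; the video doesn't state which was used, and this says nothing about the decompression cost itself, which is the part it would be nice to isolate):

```python
# Back-of-envelope VRAM/bandwidth footprint of a BC7 texture at different
# resolutions. BC7 stores 16 bytes per 4x4 block, i.e. 1 byte per texel.
# (Assumed format for illustration only.)

def bc7_size_mb(width, height, with_mips=True):
    top_mip = width * height * 1                    # 1 byte per texel
    total = top_mip * (4 / 3 if with_mips else 1)   # full mip chain ~ +33%
    return total / (1024 ** 2)

for res in (2048, 4096, 8192):
    print(f"{res}x{res}: ~{bc7_size_mb(res, res):.0f} MB with mips")
# 2048: ~5 MB, 4096: ~21 MB, 8192: ~85 MB -- roughly quadrupling per step,
# so streaming in the higher-quality set means more data to decompress and
# more memory traffic, not just more VRAM residency.
```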
 
I seem to recall someone arguing a while back in this thread (I don't recall who) that GPU's wouldn't be able to keep up with PS5's decompression rate. I may be misrepresenting that argument, but in any case, as a reference point here's an example of a 3080Ti sustaining over 18GB/s uncompressed at around 6.8GB/sec input using Direct Storage:


Crap, that's fast. Almost DDR3/4 speeds.
 
That's explained in the video as being because the faster streaming allows higher-res textures to be used, thus resulting in higher GPU load.

And you believe that? I can't remember the last time increasing texture detail/resolution in a game affected frame rate (as long as there's enough VRAM), so I find it hard to believe.

Higher-quality textures are basically "free" on PC (again, as long as you have enough VRAM) and have been for years.
 