Sony PS5 NVME Expansion Options?

I mean, if you need something faster than 6 GB/s you're just going to pay an extreme cutting-edge tax. A 1TB PCIe 4.0 NVMe drive hitting 5 GB/s is $250 on its own.

It's pretty amazing how fast an SSD Sony was able to put into the console considering all that, isn't it? Respect to Sony for the SSD awesomeness. Not an easy thing to do. The console lifecycle is long; eventually SSDs will become cheaper.
 
If you're not jumping around a dozen games at a time, use an external, cheap HDD for archiving and wait until SSD prices for expansions come down. ¯\_(ツ)_/¯
 
This picture answers it better than a hundred words. There is good stuff in the I/O chip Sony developed. That same I/O chip is used to handle both the internal and external SSD.

[Image: Understanding the PS5's SSD — deep dive into next-gen storage tech]

Right but will a 3rd party NVMe drive use the internal 'custom flash controller' or its own?...

Edit: I guess it comes down to how much of the "magic" is done in the flash controller vs I/O chip...
 
Right but will a 3rd party NVMe drive use the internal 'custom flash controller' or its own?...

It will use its own controller. The overhead/issues with the controller lead to the NVMe SSD needing to be a bit faster than the PS5's internal drive. Sony will test drives and release a list of models that fit inside the PS5 and pass the performance bar.

All the real magic (DMA, decompression, cache scrubbing, the six priority queues, ...) will be taken care of by the I/O chip, which is used for both the internal SSD and the expansion SSD.
 
I can tell you from my setup that using an external SSD is much faster than an HDD.
On a PC, yes; on a PS5 running PS4 games? I'm waiting for solid information. Nothing we've heard about the PS5 suggests this should be a problem; indeed, the entire architecture of the PS5 is designed to leverage maximum I/O performance. But I'd like to know how much faster PS4 games will be on an external SSD connected via USB compared to an HDD. I want to know this before I drop roughly the probable cost of the PS5 itself on an SSD, primarily for those PS4 games and some other archiving needs.
 
I'm waiting for solid information. Nothing we've heard about the PS5 suggests this should be a problem; indeed, the entire architecture of the PS5 is designed to leverage maximum I/O performance. But I'd like to know how much faster PS4 games will be on an external SSD connected via USB compared to an HDD. I want to know this before I drop roughly the probable cost of the PS5 itself on an SSD, primarily for those PS4 games and some other archiving needs.

I'm in exactly the same situation. Though I already have a few external units I'll be migrating over that are both close to 85% full, and I've uninstalled completed games that I won't be revisiting soon. I have a 2TB SSD and a 4TB HDD. I don't know how big the difference between those two will be for next gen. The current gen is already limited by other aspects, so the difference isn't as noticeable as you would think. Loading Borderlands 3 takes way too many dancing Claptraps.

It should be more economical to pick up an 8TB HDD, but copying entire game collections to or from an HDD takes so long that juggling games onto mechanical spinners may be annoying enough that I pick up another external SSD.
 
I’m sure this is a stupid question, but...

What would happen if, theoretically, we put an NVMe SSD in there that doesn’t quite run at the minimum required speed? Would it not work? Would it run things slowly? Surely that would break games like, say, R&C, which might be requesting gigs of data for those weird dimensional warp thingies?
 
Patching this with software priority queues can be problematic, because the drive needs hundreds of entries in its hardware queue to reach peak performance (i.e. at least 0.05 ms worth of operations). So if the hardware queue is already filled with a lot of requests, the software priority queues would be less effective, since the drive would still serve the existing requests first. They will need to keep the two queues relatively short to manage the priorities outside of the NAND controller, and that will decrease the overall throughput from the peak figure. But anything can be done in software if the hardware queues are kept relatively short. I have used this trick on a file server to implement software-assisted QoS.
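The software-assisted QoS trick described above can be sketched like this (a toy simulation, not real NVMe or PS5 code; the class name, queue depth, and request labels are made up for illustration):

```python
import heapq
from collections import deque
from itertools import count

class PrioritizedDispatcher:
    """Toy model of software-assisted QoS: requests wait in a software
    priority heap, and only a few at a time are released to the
    (simulated) hardware queue, so a late high-priority request only
    has to wait behind a handful of in-flight operations."""

    def __init__(self, max_hw_depth=2):
        self.max_hw_depth = max_hw_depth  # keep the hardware queue shallow
        self.in_flight = deque()          # requests handed to the drive
        self.pending = []                 # software priority heap
        self._seq = count()               # FIFO tie-break within a priority

    def submit(self, priority, request):  # lower number = higher priority
        heapq.heappush(self.pending, (priority, next(self._seq), request))
        self._fill()

    def complete_one(self):               # the drive finished one request
        done = self.in_flight.popleft()
        self._fill()
        return done

    def _fill(self):
        # Release pending requests, highest priority first, but never
        # let more than max_hw_depth sit in the hardware queue.
        while self.pending and len(self.in_flight) < self.max_hw_depth:
            _, _, request = heapq.heappop(self.pending)
            self.in_flight.append(request)

d = PrioritizedDispatcher(max_hw_depth=2)
for i in range(4):
    d.submit(5, f"bulk-{i}")   # low-priority background loads
d.submit(0, "urgent")          # high-priority request arrives late
order = [d.complete_one() for _ in range(5)]
print(order)  # ['bulk-0', 'bulk-1', 'urgent', 'bulk-2', 'bulk-3']
```

With a depth of hundreds, "urgent" would have landed behind every bulk request already in flight; keeping the hardware queue shallow is exactly the throughput-for-latency trade described above.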

Some people have said NVMe has more than two priorities, but those are weighted round-robin within a single arbitration scheme, not absolute priority queues, and even worse, they are optional in the spec. That's probably why Cerny said NVMe only has "two true priority levels".
 
I've not seen this discussed here yet:

https://www.anandtech.com/show/15848/storage-matters-xbox-ps5-new-era-of-gaming/4

Some really in-depth info about both consoles' storage solutions here, including around the additional speed requirement for expandable drives in the PS5. They seem to think 6.5 GB/s would be more than enough.

Anandtech said:
Sony says the lack of six priority levels on off the shelf NVMe drives means they'll need slightly higher raw performance to match the same real-world performance of Sony's drive because Sony will have to emulate the 6 priority levels on the host side, using some combination of CPU and IO coprocessor work. Based on our observations of enterprise SSDs (which are designed with more of a focus on QoS than consumer SSDs), holding 15-20% of performance in reserve typically keeps latency plenty low (about 2x the latency of an idle SSD) without any other prioritization mechanism, so we project that drives capable of 6.5GB/s or more should have no trouble at all.
 
So an extra ~18% on the top end to account for fewer priority levels?
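Roughly, yes. Taking the internal drive's 5.5 GB/s raw (uncompressed) rating against Anandtech's suggested 6.5 GB/s floor:

```python
internal_raw = 5.5        # GB/s, PS5 internal drive, raw/uncompressed
suggested_min = 6.5       # GB/s, Anandtech's suggested expansion target
headroom = suggested_min / internal_raw - 1
print(f"{headroom:.1%}")  # → 18.2%
```

That lines up with the 15-20% performance reserve Anandtech mentions for keeping latency low.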
 
Some really in-depth info about both consoles' storage solutions here, including around the additional speed requirement for expandable drives in the PS5. They seem to think 6.5 GB/s would be more than enough.
I read this today; it's a good read, but Anandtech concedes it involves a lot of guesswork.
 
I’m sure this is a stupid question, but...

What would happen if, theoretically, we put an NVMe SSD in there that doesn’t quite run at the minimum required speed? Would it not work? Would it run things slowly? Surely that would break games like, say, R&C, which might be requesting gigs of data for those weird dimensional warp thingies?

I would assume (because it's wise to ;-)) the OS will benchmark a new drive. The 360 did some tests for USB media, if I recall.
 
But how long will it have to benchmark third-party devices to make sure that constant use doesn't trigger throttling? Or does it present that as a warning to the user, so they're aware it might be an issue?
 
It will use its own controller. The overhead/issues with the controller lead to the NVMe SSD needing to be a bit faster than the PS5's internal drive. Sony will test drives and release a list of models that fit inside the PS5 and pass the performance bar.

All the real magic (DMA, decompression, cache scrubbing, the six priority queues, ...) will be taken care of by the I/O chip, which is used for both the internal SSD and the expansion SSD.

Cache scrubbing?
 
But how long will it have to benchmark third-party devices to make sure that constant use doesn't trigger throttling? Or does it present that as a warning to the user, so they're aware it might be an issue?

The test has to be comprehensive; I can see this taking a few minutes at least. But it's something that only needs to be done once per drive, and it will surely be part of the drive's initialisation/formatting.
 
Cache scrubbing?

When data in RAM is replaced with new data from disk, the CPU and GPU caches containing the old data must be invalidated or updated. If the caches are not updated, the CPU/GPU sees incorrect data and wreaks havoc. Considering the potential amount of data replaced in RAM per second, cache scrubbing is a fairly heavy operation. Using the I/O controller to do the cache maintenance saves CPU time and makes developers' lives easier (it just works).
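As a rough illustration of what a scrubber has to do (a simplified simulation; the 64-byte line size and the set-based cache model are assumptions for the sketch, not how the actual coherency hardware works):

```python
CACHE_LINE = 64  # bytes; typical line size, assumed for illustration

def lines_covering(addr, size):
    """Indices of the cache lines touched by a buffer [addr, addr+size)."""
    first = addr // CACHE_LINE
    last = (addr + size - 1) // CACHE_LINE
    return set(range(first, last + 1))

def scrub(cached_lines, dma_addr, dma_size):
    """Invalidate every cached line overlapping a freshly DMA'd range,
    so the CPU/GPU can never read the stale pre-DMA contents."""
    return cached_lines - lines_covering(dma_addr, dma_size)

cached = {0, 1, 2, 100, 101}    # line indices currently held in a cache
cached = scrub(cached, dma_addr=64, dma_size=128)
# Lines 1 and 2 were overwritten by the DMA and get invalidated;
# lines 0, 100 and 101 survive untouched.
```

Doing this walk for every SSD transfer, at gigabytes per second, is why offloading it to dedicated scrubbers in the I/O unit matters.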
 
When data in RAM is replaced with new data from disk, the CPU and GPU caches containing the old data must be invalidated or updated. If the caches are not updated, the CPU/GPU sees incorrect data and wreaks havoc. Considering the potential amount of data replaced in RAM per second, cache scrubbing is a fairly heavy operation. Using the I/O controller to do the cache maintenance saves CPU time and makes developers' lives easier (it just works).

I know what it does, but where did you get the info that this is handled by the I/O controller?
 