Velocity Architecture - Limited only by asset install sizes

Dunno if this is exactly true all the time, because usually max speeds just refer to the rated bandwidths of the NAND modules, how they're configured in parallel, and the bandwidth with which the flash memory controller interfaces to the system (i.e. how many PCIe lanes and which generation). At least, I think that's the case.

Also, the Phison bandwidth mentioned there, I figure that also includes overhead? If you just knock some off for encoding it would be closer to 3.9 GB/s, which lines up with what Brit posted here:



So maybe the bandwidth you're quoting is for the regular version of that controller, but MS & WD redesigned it to allow a higher peak range.

I'm not sure I follow what you're saying, but I thought Quadbitnomial explained it eloquently. The change to a custom ASIC was simply to enable the drive to work with a 2x PCIe 4 interface rather than a 4x PCIe 3 interface. There's no change in bandwidth there; it's most likely simply a future-proofing exercise, because the console, and more importantly its external peripherals, will still need to be manufactured more than half a decade from now, and PCIe 3 is already an old standard.

The max speed of the drive is determined by the number of NAND channels supported by the controller along with the speed of the NAND used. The Phison E19T supports 4 channels and NAND speeds up to 1200 MT/s to give a maximum possible throughput of 3.75GB/s. However, that doesn't mean the SSD has to use such high speed (expensive) memory.
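
To put rough numbers to that (just a sketch; the 8-bit bus width per channel and treating 3.75GB/s as the controller's own rated cap are my assumptions, not confirmed specs):

```python
# Rough back-of-the-envelope numbers (assumed: 8-bit bus per NAND channel,
# one byte per transfer, and Phison's 3.75 GB/s being the controller's rated cap).
channels = 4
nand_rate_mt_s = 1200            # mega-transfers per second, per channel
bytes_per_transfer = 1           # 8-bit ONFI/Toggle bus assumed

raw_nand_mb_s = channels * nand_rate_mt_s * bytes_per_transfer   # 4800 MB/s raw
controller_cap_mb_s = 3750                                       # E19T rated throughput

print(f"Raw NAND interface: {raw_nand_mb_s} MB/s")
print(f"Controller rating:  {controller_cap_mb_s} MB/s")
print(f"Drive ceiling:      {min(raw_nand_mb_s, controller_cap_mb_s)} MB/s")
```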

As mentioned above we know the XBSX uses a Western Digital SN530, but the only customisation to that drive that has been mentioned is to the PCIe interface, which results in no change to overall potential bandwidth. So given that the standard version of that drive is rated at 2.4GB/s, and given that we've been told of no changes that would impact that peak throughput, and given that Microsoft advertise the drive as being capable of 2.4GB/s, I think Occam's Razor applies.

Also, the sustained speed thing is likely a red herring. Any drive can sustain its max throughput under ideal circumstances as long as it doesn't throttle. Presumably Microsoft are simply very confident in their cooling solution - something that would obviously be helped greatly by using a drive with much slower memory than the maximum supported.
 
The SSD in the Series X is a cache-less Western Digital SN530 SSD with a custom ASIC for PCIe Gen 4 support. The PC version of that SSD is rated at 2.4GB/s on a PCIe Gen 3 x4 interface whereas the Series X version is on Gen 4 x2, but both interfaces max out at 3.938GB/s.
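
Those two link figures check out with standard PCIe math (nothing console-specific here):

```python
# Sanity check of the 3.938 GB/s link figure using standard PCIe numbers
# (8 GT/s per lane for Gen 3, 16 GT/s for Gen 4, both 128b/130b encoded).
def pcie_gb_s(gt_per_s: float, lanes: int) -> float:
    return gt_per_s * (128 / 130) / 8 * lanes   # GT/s -> GB/s after encoding overhead

print(f"PCIe 3.0 x4: {pcie_gb_s(8, 4):.3f} GB/s")    # ~3.938 GB/s
print(f"PCIe 4.0 x2: {pcie_gb_s(16, 2):.3f} GB/s")   # ~3.938 GB/s
```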



The max throughput of the controller as rated by Phison is 3.75GB/s. The max speed of an SSD is determined by the sum of its parts and not by the max rated capability of the controller or the PCIe link alone. You have to take into account the rated speed of the NAND as well.



They cannot guarantee a minimum of 2.4GB/s as that is entirely dependent on the type and size of file you are reading or writing. They can market their sustained speed, as every storage manufacturer does, hence 2.4GB/s raw and 4.8GB/s compressed. A game written for last gen consoles, for example, will not be able to take advantage of that same SSD the way a game written for current gen can. Here is an experiment you can try if you have a Series X and the accompanying expansion card: copy a game from the internal SSD to the external SSD and time how long the copy takes (a 50GB game should take ~20 sec at 2.4GB/s). You will find you would not even hit close to 1GB/s copy speed, owing to the drive being cache-less, which incurs a huge penalty on SSD speed. That is just the OS doing a read and write operation, so it is a straight stress test of the SSD without the clever decompression and sampler feedback file management multipliers games will use to gain higher effective speed.
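
As a quick sketch of the timings that experiment implies (numbers purely illustrative; the ~0.65 GB/s figure is just an example of an "under 1 GB/s" observed rate):

```python
# Rough timings for the copy experiment above (illustrative figures only).
game_size_gb = 50
for label, rate_gb_s in [("rated sequential, 2.4 GB/s", 2.4),
                         ("observed copy rate, ~0.65 GB/s", 0.65)]:
    print(f"{label}: ~{game_size_gb / rate_gb_s:.0f} s to move {game_size_gb} GB")
# ~21 s at the rated speed vs ~77 s at the observed rate.
```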

I didn't want to go into this because in fact the speeds can go above and below 2.4GB/s; my claim was that for gaming workloads, devs can bank on pushing 2.4GB/s at a minimum. I agree with you that the actual performance of transferring data from internal to external is below the rated figures; for example, copying 65GB of Dirt 5 to the internal SSD runs at a rate of 650MB/s, ~4 times below the 2.4GB/s. In some other tests it's been measured at read speeds of about 730MB/s, still well below the rated 2.4GB/s. I think this comes down to the OS and the firmware being in early stages. It's going to be interesting to reconcile how an SSD at 730MB/s is providing similar performance to Sony's 5.5GB/s rated SSD. The only thing I can think of now is that this variability is due to the OS and controller firmware, which will improve with time. We just have to wait and see or seek clarification from MSFT tbh.
 
As mentioned above we know the XBSX uses a Western Digital SN530, but the only customisation to that drive that has been mentioned is to the PCIe interface

But that does not eliminate the possibility of other customizations. The drive is software driven and Microsoft has a lot of technical papers about improvements in that area.
 
I've actually been thinking about this while drumming up some spec speculation for 10th-gen systems, and there is one thing I've considered faster decompression bandwidths (ones that greatly outstrip the capacity of main memory) being potentially useful for: rapid streaming of unique data assets into a region of VRAM acting as a framebuffer, consumed by the GPU at the bandwidth of the VRAM. But in order to fully take advantage of that, you would need a LOT of unique assets to stream in, easily hitting hundreds of gigabytes if not more, and at that point storage capacity becomes the bottleneck, because it's not like you can leave all that unique data sitting on the Blu-ray; otherwise BR drive access would become the bottleneck instead (or better to say, an additional bottleneck).

I think for 10th gen systems they need to focus on shipping with 2TB SSDs and an expandable slot similar to what's in the PS5. That would alleviate some of the issues you've mentioned. With a target of about 200-300GB for game sizes this wouldn't be hard; they could simply ship two Blu-ray discs. Again, it's not clear if companies like MSFT will even ship another console after this tbh.

The reason I've been thinking about that so much has to do with the argument of diminishing returns; I don't think we've hit that point yet, actually. Yes, overall fidelity and image quality have increased gen-over-gen, but one of the biggest advantages a lot of high-quality CG films (or CG-heavy films) have over games is simply having a crapton of unique, high-quality, large assets that can stream in and be processed on farms of systems with lots of RAM. If there is any hope of partially recreating that in a gaming environment, it will come from an even larger expansion of raw storage bandwidths, pushing decompression rates to many multiples of the VRAM capacity so that the VRAM can act as a framebuffer for a steady stream of new data for the GPU to calculate a scene from. But that will also mean a need for greater storage capacities, more powerful decompression hardware, more efficient compression/decompression algorithms and, most importantly, some shift in how game assets are created that leverages heavy use of GPT-style AI programming and asset-generation models (but in an ethical way, so entire human workforces aren't being replaced by AI).

I think with SSDs you solve a lot of the issues you've mentioned. The Series X and PS5 will be capable of fully utilizing the small amount of RAM they have thanks to the SSDs. You can simply have the game install on the SSD act as part of virtual RAM. So roughly speaking, if they doubled the RAM on 10th gen to 32GB, a game with a 200GB install size could possibly see 232GB of virtual RAM. All they'd need is an SSD with 12GB/s throughput and a decompression ratio of about 2.5:1, and the game could instantly page in anything it needs from the 200GB install. That's how the Series X and PS5 games are going to work as well: much better memory paging. We're definitely not at the point of diminishing returns if there's anything to learn from Apple's IC gains with the M1. The 10th gen will definitely have GPUs much more powerful and efficient than an RTX 3090. That's 6-7 years from 2020. So much better hw acceleration for RT, better geometry processing engines, higher bandwidth RAM, etc. So finally achieving photorealistic games is possible on the 10th gen.
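
Putting those hypothetical 10th-gen numbers into a quick sketch (the flat 2.5:1 ratio and the fully mappable install are assumptions on my part, not known targets):

```python
# Sketch of the numbers above (assumed: flat 2.5:1 compression ratio and the
# whole 200GB install being mappable; both are speculative).
physical_ram_gb = 32
install_size_gb = 200
ssd_raw_gb_s = 12
compression_ratio = 2.5

virtual_pool_gb = physical_ram_gb + install_size_gb   # 232 GB of "virtual RAM"
effective_gb_s = ssd_raw_gb_s * compression_ratio     # ~30 GB/s after decompression
ram_turnover_s = physical_ram_gb / effective_gb_s     # time to refresh all of RAM

print(f"Addressable pool:    {virtual_pool_gb} GB")
print(f"Effective streaming: {effective_gb_s:.0f} GB/s")
print(f"Full RAM turnover:   ~{ram_turnover_s:.1f} s")
```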



I think once that becomes a reality is when we'll truly start hitting the point of diminishing returns in terms of graphical fidelity for gaming, which is something I think the 10th-gen systems will be able to accomplish. What we're going to see from the 9th-gen systems barely scratches the surface there IMHO, but it's a start. And that brings it back to your point in a sense: games this upcoming gen won't be able to do the sort of stuff I was just talking about, so there won't really be a design paradigm shift in that way by the industry at large. Therefore, as strong as PS5's SSD I/O design is, I don't see any game design concepts that would be possible there that suddenly become impossible on the Series X or Series S. I'd still like to know exactly how the reserved 100 GB block MS referred to before works in practice; a few of you guys like function and iroboto had some pretty good ideas there (and also the idea that part of the system's reserved 2.5 GB of GDDR6 for the OS is maybe being used as a temp cache and mapping space for SSD data), because you'd think that alone would be a giveaway that Sony aren't the only ones who have designed an SSD solution with more than just game loading times in mind.

Yes I agree, the 9th gen is only scratching the surface. It's a huge leap over the last gen but it's the perfect foundation for 10th gen. For the 100GB MS referred to, I'm 100% certain it's just having the game install as part of virtual memory. Because the SSD is so fast, you can have a larger portion of data on disk as part of virtual memory. That's what it is: much more efficient memory paging. The game thinks it has access to 116GB of RAM yet in reality it only has 13.5GB of RAM (it also thinks it has the 2.5GB for the OS) but can demand page in any part of the 100GB on the SSD. SFS determines what textures to demand page in from the available virtual RAM, and some data is cached into static memory. So much, much better utilization of RAM. Once devs start fully utilizing these systems it's going to be amazing.
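
Conceptually it behaves something like this toy sketch (hypothetical names, naive eviction, and obviously nothing like the real Xbox memory manager or SFS implementation):

```python
# Toy illustration of the demand-paging idea above -- NOT the actual OS/SFS,
# just the general shape of "install on SSD as extended virtual memory".
class VirtualTexturePool:
    def __init__(self, ram_budget_gb, install_assets):
        self.ram_budget_gb = ram_budget_gb    # e.g. the ~13.5 GB a game can use
        self.install_assets = install_assets  # asset -> size, the mappable install
        self.resident = {}                    # assets currently paged into RAM

    def request(self, asset_id):
        """Return an asset, paging it in from the SSD-backed install on a miss."""
        if asset_id in self.resident:
            return f"{asset_id}: RAM hit"
        size = self.install_assets[asset_id]
        # Naive eviction when over budget; SFS feedback would pick smarter victims.
        while self.resident and sum(self.resident.values()) + size > self.ram_budget_gb:
            self.resident.popitem()
        self.resident[asset_id] = size
        return f"{asset_id}: paged in {size} GB from SSD"

pool = VirtualTexturePool(13.5, {"rock_4k": 0.25, "city_block": 1.5, "hero_mesh": 0.5})
print(pool.request("city_block"))   # miss -> paged in from the install
print(pool.request("city_block"))   # second request hits RAM
```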
 
There being no diminishing returns is one of the reasons one could see a different future. It's not where the hardware is going but where the market is going.

Being stuck for 7+ years with the same hw/plastic box isn't all that exciting. Think MS talked about this before.
 
There being no diminishing returns is one of the reasons one could see a different future. It's not where the hardware is going but where the market is going.

Being stuck for 7+ years with the same hw/plastic box isn't all that exciting. Think MS talked about this before.

Personally I don't see a need for a mid-gen refresh this time. What would it be for? Higher frame rates at 4K? For example hitting native 4K 60 in sports and fighting games is already going to be very possible. Shooters, RPGs, third person AAA games could all achieve 4K 30 at a minimum and with AI upscaling 4K 60 is very possible.

They could sell the consoles in smaller, more attractive boxes in 2-3 years and with higher storage sizes (discless Series X coming out next year?). Otherwise the hw in the Series X and PS5 can handle 6 years of gaming.
 
Totally agree. CPUs have been absolute dog shit in the last few generations of consoles. Zen 2 (even at the lower clock speeds) is eons better than jag. Resolutions will probably lower over time but game code and storage I/O should be fine.
 
I'm not sure I follow what you're saying, but I thought Quadbitnomial explained it eloquently. The change to a custom ASIC was simply to enable the drive to work with a 2x PCIe 4 interface rather than a 4x PCIe 3 interface. There's no change in bandwidth there; it's most likely simply a future-proofing exercise, because the console, and more importantly its external peripherals, will still need to be manufactured more than half a decade from now, and PCIe 3 is already an old standard.

The max speed of the drive is determined by the number of NAND channels supported by the controller along with the speed of the NAND used. The Phison E19T supports 4 channels and NAND speeds up to 1200 MT/s to give a maximum possible throughput of 3.75GB/s. However, that doesn't mean the SSD has to use such high speed (expensive) memory.

As mentioned above we know the XBSX uses a Western Digital SN530, but the only customisation to that drive that has been mentioned is to the PCIe interface, which results in no change to overall potential bandwidth. So given that the standard version of that drive is rated at 2.4GB/s, and given that we've been told of no changes that would impact that peak throughput, and given that Microsoft advertise the drive as being capable of 2.4GB/s, I think Occam's Razor applies.

Also, the sustained speed thing is likely a red herring. Any drive can sustain its max throughput under ideal circumstances as long as it doesn't throttle. Presumably Microsoft are simply very confident in their cooling solution - something that would obviously be helped greatly by using a drive with much slower memory than the maximum supported.

I guess there would be some truth to this, though I don't know if calling their mention of sustained speeds a red herring is necessary. To me (as one interpretation, admittedly) "red herring" would suggest the speeds aren't sustainable, but you already make a case for why they very much would be. I don't do a ton of looking into SSD performance metrics on PC, but what I have noticed is that certain drives tend to hit their peaks sporadically, then have noticeable drops, basically bobbing and weaving their bandwidth rates up and down dramatically over the course of a transaction. It can be especially noticeable with certain tasks like 4K video file writes; again, though, I haven't kept up too much with benchmarks on very recent drives in the PC space.

So if I'm understanding you here, the flash memory controller's been changed from 3.0 to 4.0 more so to keep the interconnect standard relevant, but functionally the bandwidth could very well still be 3.75 GB/s. However, I also still agree with Brit that they may've made some changes to the controller, even if they are minor. 4.0 has some small adjustments to the PHY layer and link-level management compared to 3.0, so the flash controller would need to be modified to accommodate at least those changes and be 4.0-compliant, I think.

But that does not eliminate the possibility of other customizations. The drive is software driven and Microsoft has a lot of technical papers about improvements in that area.

Yeah, at the very least they'd need to change the controller's PHY and link-level management to be in line with 4.0. But there are also the Flashmap papers, and they clearly have done a lot of R&D into flash memory optimization systems for bandwidth, latency, redundancy etc. They also have designed their own custom SSDs like the 100 GB/s ASIC drive they made; I forgot its name though.

My guess is that even if the flash memory controller is more or less the stock one minus some changes to accommodate the 4.0 specification, the decompression block will have most of the customizations insofar as MS's I/O solution goes. The flash memory controller is only one part of the whole I/O subsystem (which goes for both Microsoft's and Sony's systems).

I think for 10th gen systems they need to focus on shipping with 2TB SSDs and an expandable slot similar to what's in the PS5. That would alleviate some of the issues you've mentioned. With a target of about 200-300GB for game sizes this wouldn't be hard; they could simply ship two Blu-ray discs. Again, it's not clear if companies like MSFT will even ship another console after this tbh.

If developers industry-wide can get better at compressing their data assets, maybe 2 TB will be doable for 10th-gen, but we have to assume the worst and therefore I think 4 TB minimum might be more likely :p . Though, I have thought about the possibility of them making an even *more* genuine move back to cartridges and dropping Blu-ray altogether for cheap USB 4.0 Gen 2-based flash carts. 128 GB - 192 GB capacities should be readily doable by 10th-gen with 2.4 GB/s bandwidths, and shouldn't cost more than $5 - $8 at those capacities provided NAND prices continue to trend downward over time (which they should).

That way your cold storage medium is fast enough to send potentially double digits' worth of compressed data to a 2 TB SSD, saving on costs for SSD capacity, and game installs would take literal seconds. I think Microsoft might have something like that planned in the future when you look at the specs for the expansion cards as well as their form factor. They're perfectly sized; their capacity just needs to come way down, but NAND prices also would need to fall more too and they're a few years out on that front. But I won't be too surprised if, say, around 2024 some Series games start shipping on small 64 GB expansion cards rebranded as flash cartridges, and they release a Series system with no internal storage but the same spec'd decompression hardware as the S and X. They could maybe even try this with a Series S refresh around 2023/2024; it just depends on how NAND pricing works out, among a few other things.

I think with SSDs you solve a lot of the issues you've mentioned. The Series X and PS5 will be capable of fully utilizing the small amount of RAM they have thanks to the SSDs. You can simply have the game install on the SSD act as part of virtual RAM. So roughly speaking, if they doubled the RAM on 10th gen to 32GB, a game with a 200GB install size could possibly see 232GB of virtual RAM. All they'd need is an SSD with 12GB/s throughput and a decompression ratio of about 2.5:1, and the game could instantly page in anything it needs from the 200GB install. That's how the Series X and PS5 games are going to work as well: much better memory paging. We're definitely not at the point of diminishing returns if there's anything to learn from Apple's IC gains with the M1. The 10th gen will definitely have GPUs much more powerful and efficient than an RTX 3090. That's 6-7 years from 2020. So much better hw acceleration for RT, better geometry processing engines, higher bandwidth RAM, etc. So finally achieving photorealistic games is possible on the 10th gen.

This sounds like it'd work very well insofar as game loading, but I'm thinking more along the lines of rapid asset streaming. Which is why I still think for 10th-gen they may want drives with faster raw bandwidths and larger decompression ratios. To make up for that maybe they do indeed go with 2 TB standard instead of 4 TB or something like that, gotta save costs somewhere. Plus if they can do something like the aforementioned "flash cartridges" to replace Blu-Ray while not costing too much more than Blu-Ray ($5 - $8, depending on capacity and also hopefully NAND pricing being cheap enough by then), they could do a lot more with the 2 TB of storage than what Series X can do with 1 TB or PS5 with 825 GB.

And since storage space is already one of the biggest things people are taking issue with regarding 9th-gen out of the gate, that sounds like a part of the systems MS and Sony will want to resolve for 10th-gen. "Flash cartridges" in lieu of Blu-ray discs would save the cost of a Blu-ray drive and also offer MUCH better transfer speeds to the internal SSD. Plus, game-specific updates and save data (at least some of it) could optionally be written back to the flash cartridge if the user wants, freeing up more SSD space for dynamic data (dynamic in the sense of both virtual RAM and frequent reads/writes of dynamic game data; this does kind of require NAND with better P/E cycles at the cheaper tiers, however, or as a last resort some NVRAM reserved simply for heavily dynamic game data).
 
Though, I have thought about the possibility of them making an even *more* genuine move back to cartridges and dropping Blu-ray altogether for cheap USB 4.0 Gen 2-based flash carts. 128 GB - 192 GB capacities should be readily doable by 10th-gen with 2.4 GB/s bandwidths, and shouldn't cost more than $5 - $8 at those capacities provided NAND prices continue to trend downward over time (which they should).

Compared to the cost of a disc, that is positively enormous.
The sharks have tasted the blood (digital content) in the water, and they like it.

I think only some kind of consumer revolt (no idea what form that would take) could preserve physical media of any kind.
 
Compared to the cost of a disc, that is positively enormous.
The sharks have tasted the blood (digital content) in the water, and they like it.

I think only some kind of consumer revolt (no idea what form that would take) could preserve physical media of any kind.

They could go discless and just up the storage space as well, I presume, but the only reason I'm entertaining flash cartridges is because, if you look right now, you can find USB 3.0 drives with 300-500 MB/s read speeds (already multiples more than UHD Blu-ray) at 64 GB - 128 GB capacities for around $10, and that's taking into account markup from sellers and markup from the manufacturer to turn a healthy profit. Production costs on those drives should be much less than that.

So in 4-5 years, as USB 4.0 becomes more standard and NAND prices continue to fall, small cartridges with a few GB/s of bandwidth at 64 GB - 128 GB capacities should be doable at a similar $10 price, which'd include packaging. With game publishers placing orders in the millions, economies of scale would kick in on top of the lower NAND prices, and Microsoft/Sony wouldn't feel as much of a need to profit directly off the sale of such carts to publishers since they are going to get their 30% from digital sales and (I believe) physical sales too.

Within five years' time, such a cart shouldn't cost much more to manufacture than a UHD Blu-ray with packaging does today, and it'd save on the cost of a Blu-ray drive, plus offer much better bandwidth for data off the cart to/from the system. Even with a digital-only system I think there's more upside than downside; at the very least they can just make a 2nd SKU with more internal storage that lacks compatibility with this hypothetical flash cartridge. Just keep everything priced accordingly ;)
 
If developers industry-wide can get better at compressing their data assets, maybe 2 TB will be doable for 10th-gen, but we have to assume the worst and therefore I think 4 TB minimum might be more likely :p . Though, I have thought about the possibility of them making an even *more* genuine move back to cartridges and dropping Blu-ray altogether for cheap USB 4.0 Gen 2-based flash carts. 128 GB - 192 GB capacities should be readily doable by 10th-gen with 2.4 GB/s bandwidths, and shouldn't cost more than $5 - $8 at those capacities provided NAND prices continue to trend downward over time (which they should).

We should wait and see. The cost of a 256GB NVMe PCIe 4.0 drive is $44 right now (Sabrent 256GB Rocket NVMe PCIe M.2 2280). It will definitely be much cheaper to get a 256GB NVMe card in 7 years, but the cost of a whole physical video game (distribution, packaging, Blu-ray disc) is about $4, so it's hard to beat that. So I can see your perspective. If in 7 years' time there exist dirt cheap $4 256GB NVMe cards (they don't have to be PCIe 4.0) then what you're suggesting is very possible.

That way your cold storage medium is fast enough to send potentially double digits' worth of compressed data to a 2 TB SSD, saving on costs for SSD capacity, and game installs would take literal seconds. I think Microsoft might have something like that planned in the future when you look at the specs for the expansion cards as well as their form factor. They're perfectly sized; their capacity just needs to come way down, but NAND prices also would need to fall more too and they're a few years out on that front.

The issue is the 2280 form factor is more likely to get cheaper per GB as NAND prices go lower, not what MSFT is doing with their CFexpress-card-like storage. That's way too expensive. A 980 Pro costs the same but is about three times as fast, and by the end of next year it will be much cheaper as well. But CFexpress cards and cards similar to the Series X expansion card still cost like $300-$800 for slow SSDs. For example, a 64GB CFexpress card costs $199!

This sounds like it'd work very well insofar as game loading, but I'm thinking more along the lines of rapid asset streaming. Which is why I still think for 10th-gen they may want drives with faster raw bandwidths and larger decompression ratios. To make up for that maybe they do indeed go with 2 TB standard instead of 4 TB or something like that, gotta save costs somewhere. Plus if they can do something like the aforementioned "flash cartridges" to replace Blu-Ray while not costing too much more than Blu-Ray ($5 - $8, depending on capacity and also hopefully NAND pricing being cheap enough by then), they could do a lot more with the 2 TB of storage than what Series X can do with 1 TB or PS5 with 825 GB.

Actually it works perfectly for rapid asset streaming. It's what they're doing on 9th gen and most likely will do for 10th gen (unless I've misunderstood). For example, when loading textures during a scene, you don't need to have all your high quality textures resident in RAM. The virtual address space is the 100GB of the game install plus 16GB of RAM. You can get any texture when you need it and when it's most relevant for a scene. What you don't need can stay resident on the SSD and be instantly available in RAM when needed! This is already possible on the Series X and to some extent on the PS5 (we don't know much about their texture streaming hw/sw, but I would assume the virtual address space includes a larger part of the game install). Look at it this way: if you had 32 GB of RAM and your raw sustained SSD speed was 40GB/s, you'd be wasting at least 6GB/s of disk I/O bandwidth. The whole system would be bottlenecked by the size of the RAM. So on 10th gen, if they decide to double the size of RAM to 32 GB, they would only need to hit around 12GB/s SSD speeds. The decompression block would be able to convert that into 24-30GB of data as soon as it's in RAM. Anything higher and it would be wasted since there isn't enough RAM. And RAM is a cache; if you have to constantly refresh the whole cache, as Cerny showed in the Road to PS5 video, then you're doing something wrong. I think he just intended to show an example of what was possible.
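
To illustrate that diminishing-returns argument with a rough sketch (the flat 2.5:1 decompression ratio is an assumption, and tying the "useful" ceiling to RAM size is the argument above, not a known design target):

```python
# Toy illustration of the diminishing-returns point (assumed ~2.5:1 ratio).
ram_gb = 32
ratio = 2.5

for raw_gb_s in [2.4, 5.5, 12.0, 40.0]:
    effective = raw_gb_s * ratio
    turnover = ram_gb / effective
    print(f"{raw_gb_s:>4.1f} GB/s raw -> {effective:>5.1f} GB/s effective, "
          f"full {ram_gb} GB RAM turnover in {turnover:.2f} s")
# Past a certain point the RAM pool itself becomes the limiter.
```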
 
We should wait and see. The cost of a 256GB NVMe PCIe 4.0 drive is $44 right now (Sabrent 256GB Rocket NVMe PCIe M.2 2280). It will definitely be much cheaper to get a 256GB NVMe card in 7 years, but the cost of a whole physical video game (distribution, packaging, Blu-ray disc) is about $4, so it's hard to beat that. So I can see your perspective. If in 7 years' time there exist dirt cheap $4 256GB NVMe cards (they don't have to be PCIe 4.0) then what you're suggesting is very possible.

Ah, I should've clarified a couple of things beforehand: I'm not thinking in terms of NVMe drives, but read-mostly USB thumb-style drives, in a form factor that's more suitable for a console (don't want them sticking out of the system), maybe with eMMC-style NAND over USB 4.0 Gen 2 style ports. For 128 GB flash drives, if you know where to look, even from the more well-known brands you can find 300 MB/s - 450 MB/s ones going for around $10 or so, and that's with seller's markup and manufacturer profit margin factored in.

I don't think it would be cheaper overall than the $4 distribution cost of a physical Blu-ray game as you've mentioned, but if it ends up being only about 2x that amount for this hypothetical, and if game prices increase (again) or physical distribution is just saved for pricier versions of the game, then maybe they can make it work out.

The issue is the 2280 form factor is more likely to get cheaper per GB as NAND prices go lower, not what MSFT is doing with their CFexpress-card-like storage. That's way too expensive. A 980 Pro costs the same but is about three times as fast, and by the end of next year it will be much cheaper as well. But CFexpress cards and cards similar to the Series X expansion card still cost like $300-$800 for slow SSDs. For example, a 64GB CFexpress card costs $199!

Oh, interesting observation then xD. Maybe it's better to think of it along the lines of NVMe drives in terms of form factor, then. In that case though it's probably not worth doing unless they want to sell a model without storage, so if people just want a game or two, they buy the game with its 128 GB drive, put it in the system and play. There'd still need to be separate NAND in the system for OS files though, and some means of inserting that type of drive that's a lot more like a cartridge or what MS is doing with their expansion cards. PS5's way of expanding storage would be too cumbersome.

Actually it works perfectly for rapid asset streaming. It's what they're doing on 9th gen and most likely will do for 10th gen (unless I've misunderstood). For example, when loading textures during a scene, you don't need to have all your high quality textures resident in RAM. The virtual address space is the 100GB of the game install plus 16GB of RAM. You can get any texture when you need it and when it's most relevant for a scene. What you don't need can stay resident on the SSD and be instantly available in RAM when needed! This is already possible on the Series X and to some extent on the PS5 (we don't know much about their texture streaming hw/sw, but I would assume the virtual address space includes a larger part of the game install). Look at it this way: if you had 32 GB of RAM and your raw sustained SSD speed was 40GB/s, you'd be wasting at least 6GB/s of disk I/O bandwidth. The whole system would be bottlenecked by the size of the RAM. So on 10th gen, if they decide to double the size of RAM to 32 GB, they would only need to hit around 12GB/s SSD speeds. The decompression block would be able to convert that into 24-30GB of data as soon as it's in RAM. Anything higher and it would be wasted since there isn't enough RAM. And RAM is a cache; if you have to constantly refresh the whole cache, as Cerny showed in the Road to PS5 video, then you're doing something wrong. I think he just intended to show an example of what was possible.

No, that's a great explanation actually, much appreciated. I can see your illustrative point, but I want to clarify the example I had in mind. When I brought up the idea of storage bandwidth paired with decompression hardware fast enough to stream in multiples of the physical RAM repeatedly into a framebuffer, I meant that more in terms of accessing that much unique data in that time frame.

Actually, now thinking about it, if I look back at some of my PS6 speculation for SSD I/O, a game'd need a really large set of data for that required amount of decompression bandwidth, wouldn't it? I think maybe I should re-think that in terms of cloud-based asset streaming instead, as it might be unrealistic expecting 1 TB game sizes for 10th gen, even if the data's compressed. FS2020 does great things with some texture streaming over the cloud; as game sizes get larger, at some point the cloud will absolutely factor into things more, even for single-player content.

So I think I can see your point WRT required SSD I/O bandwidth and decompression speeds, actually. It's going to push my 10th-gen thinking more towards cloud streaming of large assets, though. If we're talking film-quality assets and all, the only issue would be ISP service restrictions in the network. And maybe what I've been thinking the way larger decompression limits (over 100 GB/s) for the SSDs would be useful for can be better served by built-in hardware acceleration for real-time transformation of data already resident in memory, fed by just-in-time streaming of new data from the SSDs. Something like new hardware accelerators in, or tightly coupled to, the GPU, specialized for specific texture and geometry transformations, which could save devs from writing some of that code out explicitly in software.

Welp, back to revamping some 10th-gen system specs
 
So I think I can see your point WRT required SSD I/O bandwidth and decompression speeds, actually. It's going to push my 10th-gen thinking more towards cloud streaming of large assets, though. If we're talking film-quality assets and all, the only issue would be ISP service restrictions in the network. And maybe what I've been thinking the way larger decompression limits (over 100 GB/s) for the SSDs would be useful for can be better served by built-in hardware acceleration for real-time transformation of data already resident in memory, fed by just-in-time streaming of new data from the SSDs. Something like new hardware accelerators in, or tightly coupled to, the GPU, specialized for specific texture and geometry transformations, which could save devs from writing some of that code out explicitly in software.

Welp, back to revamping some 10th-gen system specs

For the cloud you can simply use HBM memory in your servers, and SSDs obviously. So the biggest challenge is networking: you need high-bandwidth, low-latency internet access.
 
Personally I don't see a need for a mid-gen refresh this time. What would it be for? Higher frame rates at 4K? For example hitting native 4K 60 in sports and fighting games is already going to be very possible. Shooters, RPGs, third person AAA games could all achieve 4K 30 at a minimum and with AI upscaling 4K 60 is very possible.

They could sell the consoles in smaller, more attractive boxes in 2-3 years and with higher storage sizes (discless Series X coming out next year?). Otherwise the hw in the Series X and PS5 can handle 6 years of gaming.
OT, but as well as assured 4K60, a more enhanced RT implementation would see me part with my cash.
 
Umm...no. The faster you can move data to RAM from the SSD, the faster you can move that data to the processor.

Is this to say that if your decompressed I/O rate from SSD storage is many times the physical RAM capacity, then the higher that rate the better? Would that essentially mirror the relationship between RAM capacity and RAM bandwidth in serving the CPU and GPU?

Because that's one of the things I thought about with some earlier speculation, but wasn't sure about committing to more recently. There's a little research I'm going to do before spinning back up some of my own 10th-gen speculation but I did put some pretty high decompression bandwidth rates earlier, maybe that wasn't too much of a stretch in hindsight.
 
Umm...no. The faster you can move data to RAM from the SSD, the faster you can move that data to the processor.
You wouldn't have the RAM to fill up with that data. That was the point I was making. That's why Sony and MSFT were content with 9GB/s and 4.8GB/s after decompression given there's 16GB of RAM. If they'd put in SSDs with 7GB/s throughput before decompression it would have been diminishing returns compared to the 2.4GB/s and 5.5GB/s they ended up with. I think I remember hearing rumors Sony was aiming for 7GB/s at one point. If true, they chose 5.5GB/s because it is sufficient for the size of RAM and the needs of developers, without wasting effective throughput.
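
A quick sketch of what those figures mean in terms of filling RAM (using the quoted effective rates as sustained numbers, not guaranteed minimums):

```python
# The same RAM-vs-throughput trade-off with the current consoles' effective figures.
ram_gb = 16
for name, effective_gb_s in [("Series X, 4.8 GB/s effective", 4.8),
                             ("PS5, 9 GB/s effective", 9.0)]:
    print(f"{name}: ~{ram_gb / effective_gb_s:.1f} s to turn over all {ram_gb} GB of RAM")
# ~3.3 s vs ~1.8 s -- both comfortably within a scene transition.
```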
 
OT, but as well as assured 4K60, a more enhanced RT implementation would see me part with my cash.

If a guaranteed 4K 60 across the board simply needs a better CPU but the same GPU then it's possible as a midgen refresh, as long as it's a cheap IC. Otherwise 4K 60 using some upscaling sounds very possible once the devs figure out how the RDNA 2.0 GPUs work. I don't think people would be able to tell the difference. The Dirt 5 dev mentioned, IIRC, that they weren't able to max the Series X CPU (>90%) despite having an engine that can scale the game to any number of CPU cores. This was partly because of failure to fully utilize the RDNA 2.0 GPU.

Then for better RT acceleration we have to wait and see. What's clear is that having a separate die area dedicated to RT acceleration is much more efficient, something like what NVIDIA is doing. The RT hardware inside the GPU shader compute units in the RDNA 2.0 architecture is not as good. So it would likely be expensive to implement something like that for a midgen refresh.

The most important thing is for devs to fully utilize the current hardware, imho. We haven't seen anything of what these systems are capable of yet. The midgen refresh made a lot more sense last time since the hw was pretty weak. You don't want midgen hw that costs money, arrives only a few years away from a new generation, and that no dev is fully utilizing.
 
You wouldn't have the RAM to fill up with that data. That was the point I was making. That's why Sony and MSFT were content with 9GB/s and 4.8GB/s after decompression given there's 16GB of RAM. If they'd put in SSDs with 7GB/s throughput before decompression it would have been diminishing returns compared to the 2.4GB/s and 5.5GB/s they ended up with. I think I remember hearing rumors Sony was aiming for 7GB/s at one point. If true, they chose 5.5GB/s because it is sufficient for the size of RAM and the needs of developers, without wasting effective throughput.

If you could fill the RAM pool in 0.5 seconds instead of 1.0 seconds it would be better. If you could fill the RAM in 0.1 seconds instead of 1 second that would be even better still. The faster it is, the more developers can depend on having assets streamed into the GPU/RAM in time for what they want to do.

The SSD speed chosen wasn't limited in any way by the available RAM pools in the consoles.

The SSD speeds chosen were a compromise between the cost and tech available.

Regards,
SB
 