Velocity Architecture - Limited only by asset install sizes

They might just be coded differently, which is what I’m thinking is happening here. It may come down to older engine history, where someone decided for PS4 or Xbox that they would load it this way, and maybe the path is more modern or older on the other version and no one cared because loading didn’t matter.
Yeah, they might. But you have to consider that Microsoft have been supporting older-generation games for a decade; Sony have not. Microsoft will have learned a lot in that time and will have been able to tune all parts of the b/c pipeline.

The new patch for The Last of Us also proves that. Faster loading times were always possible; they were just never a design target.

And Until Dawn, which hasn't been patched since 2015 and which now has almost no load times since firmware 8.0 was released, demonstrates that games can see drastically improved loading performance from OS-level changes alone, without patches.
 
They might just be coded differently, which is what I’m thinking is happening here. It may come down to older engine history, where someone decided for PS4 or Xbox that they would load it this way, and maybe the path is more modern or older on the other version and no one cared because loading didn’t matter.


I can't see any amount of CPU or SSD speed making up a 3x difference like this. To me it's a code issue.

But the thing is, the PS4 was the lead platform for the vast majority of 3P games this gen, right? Which means I'd assume that however the I/O stack was programmed in multiplat games, it would have invariably benefited the PS4 in most games, since it was the lead platform.

What you're saying is a part of it, and Allandor expanded on that well. At the same time, though, I wouldn't completely count out the CPU's involvement in the differences. After all, Series X can enable non-SMT mode for BC games, which pushes the CPU clock even higher, and these are all games that were likely CPU-bound in how they handled I/O. That presents something like a 300 MHz delta on the CPU side for BC games whose I/O stacks rely on the CPU, which could plausibly account for part of the gap.
 
Games on today's consoles and PCs generally have their data stored in 'packs' (.pak files), which are often just .zip archives, with or without actual compression. There can be many of these, and they're often organised so their contents hold all the data needed for any given level or area. This is done to improve loading/streaming times by reducing seek times, and it's why there is so much data duplication: the trees found in the pack file for level 1 will probably also be packed into the files for other levels with trees.

On next-gen consoles I'd expect organisation to be based on data type, i.e. here is a pack file with all the foliage geometry, here is a pack file with all the foliage textures, etc. Because there are no seek times and virtually no overhead for file access, you can pull 60 trees from these two packs, and the same will be true of most other assets. Textures compressed with zlib / Kraken / Oodle / BCPack will decompress on the fly during load, and this approach will virtually eliminate CPU-bound check-in.
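To make the contrast concrete, here's a minimal sketch of the two packaging strategies, assuming .pak files are plain zip archives (they often are); every file and asset name here is invented for illustration:

[CODE=python]
import zipfile

# Last-gen style: one pack per level; shared assets (e.g. trees) are duplicated
# into every level pack so a level loads as one mostly-sequential read.
for level in ("level1", "level2"):
    with zipfile.ZipFile(f"{level}.pak", "w", zipfile.ZIP_DEFLATED) as pak:
        pak.writestr("foliage/tree_oak.mesh", b"...geometry...")  # same bytes in both packs
        pak.writestr(f"{level}/layout.bin", b"...level-specific data...")

# Next-gen style: one pack per data type, no duplication; with near-zero seek
# cost you can cherry-pick 60 trees straight out of the type packs.
with zipfile.ZipFile("foliage_geometry.pak", "w", zipfile.ZIP_DEFLATED) as geo, \
     zipfile.ZipFile("foliage_textures.pak", "w", zipfile.ZIP_DEFLATED) as tex:
    for i in range(60):
        geo.writestr(f"tree_{i:02}.mesh", b"...geometry...")
        tex.writestr(f"tree_{i:02}.tex", b"...texture...")

with zipfile.ZipFile("foliage_geometry.pak") as geo, \
     zipfile.ZipFile("foliage_textures.pak") as tex:
    trees = [(geo.read(f"tree_{i:02}.mesh"), tex.read(f"tree_{i:02}.tex"))
             for i in range(60)]
print(f"pulled {len(trees)} trees from the two type-organised packs")
[/CODE]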

Ideally you want to be storing game data in the format that is immediately usable by the game engine, even if it's a little larger. It'll likely compress well anyway, and it saves a CPU pass during check-in to manipulate the data into the format required. It probably won't eliminate all CPU check-in, but it would alleviate a ton of it. You can see a practical demonstration of this with Spider-Man: Miles Morales on PS5, where you go from game menu to city in 2 seconds of game load. But I bet PS5 running the PS4 Spider-Man, with smaller assets lumped together, takes much, much longer.
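As a toy illustration of 'immediately usable' (file names and formats invented here): a text format has to be parsed and converted at check-in, while a raw dump of the final in-memory layout is a straight read:

[CODE=python]
import array, json, struct

verts = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]

# Format A: handy to author, but check-in must parse text and convert types
# on the CPU before the engine can touch it.
with open("mesh.json", "w") as f:
    json.dump(verts, f)
with open("mesh.json") as f:
    parsed = array.array("f", json.load(f))    # CPU-side conversion work

# Format B: a raw dump of the final in-memory layout; a little larger on disk
# (pre-compression), but check-in is a single read with no parsing.
with open("mesh.vbuf", "wb") as f:
    f.write(struct.pack(f"<{len(verts)}f", *verts))
ready = array.array("f")
with open("mesh.vbuf", "rb") as f:
    ready.frombytes(f.read())                  # immediately usable

assert list(parsed) == list(ready)
[/CODE]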

Hopefully, with increases in bandwidth, lower latencies from better NAND technology, and more mature APIs and algorithm implementations, we'll see a move away from the older ways of packaging data and a push toward finer and finer granularity of access.
 
Yeah, I think AMD are onto something with IC. If it works out well (even if it has issues in RDNA 2, they can refine it for RDNA 3), it could draw others like Nvidia to develop equivalents. They can refine it to work even smarter at smaller capacities, or get results in later generations with "slower" caches at the L3$ comparable to what they got in the first generations. That might open up the chance of seeing it in future gaming consoles as well.

Speaking of Infinity Cache, I thought you might be interested to see this retrospective from Anandtech on the i7 5775c. (Bear with me!)

It's a 5xxx Intel 'Core' CPU that was mainly for mobile, but was special in that it had a 128 MB on-package pseudo-L4 eDRAM cache (you may remember eDRAM from such classics as the PS2 and the Xbox 360). At the time the eDRAM was thought to mostly just benefit the iGPU, and the slightly older i7-4790K was benchmarked as beating it when both were equipped with a dedicated GPU, thanks to its much higher clocks. Fast-forward five years, and with games pushing multiple cores harder, that phat 128 MB (pseudo) L4 cache is really kicking some ass ... even against some much more modern processors!

https://www.anandtech.com/show/1619...ective-review-in-2020-is-edram-still-worth-it

One take I'm not agreeing with, though, that some are trying to go for, is that variable frequency is affecting SSD performance on PS5's side, and that somehow is making for the (generally) longer load times for BC games there vs. Series X. That take doesn't make a lot of sense to me; it's already been well described that variable frequency is a CPU/GPU thing, with no relation to the SSD or, in PS5's case, the SSD I/O hardware block. AMD's own version of variable frequency, which they showed off at the RDNA 2 event, is also CPU/GPU-related.

Granted, BC games aren't stressing the SSDs in either PS5 or Series X, but it's just weird to see people rationalizing BC load times down to the SSD being affected by variable frequency and SmartShift, or to variable frequency as-is affecting load times of BC games on PS5, because I don't think any of these unoptimized BC titles are pushing the GPU to its limits, if much at all. Half the GPU is disabled anyway for BC on PS5 (IIRC), so there's virtually no way the GPU would have workloads stressing it enough to pull power from the CPU's budget (and thereby lower CPU performance, which actually would affect BC).

Like you, I don't think variable frequency is involved. I doubt BC games are pushing PS5 remotely close to needing to throttle anything. And while Sony haven't stated that the PS5 SSD won't throttle, I think it would be uncommon at worst, and that it surely won't be the case for BC games.

That's also accounting for the fact that PS5's CPU doesn't have a non-SMT mode (not that it'd be needed for BC; its clock is still much faster than the PS4's or Pro's CPUs, though maybe the additional clock headroom for Series X running BC games in non-SMT mode does help some with loading times there for non-optimized BC games?).

Yeah, I think CPU is always going to be a factor coming from last gen. I'm totally with DSoup on this.

Unpacking and/or decompressing last gen was mostly on the CPU. Even if you only go up from 30 MB/s to, say, 600 MB/s, that's still a 20-fold increase, well beyond the respective CPU gains. And if the mid-gen CPUs were showing some moderate loading gains, I think it's fair enough that a slight advantage in XSX CPU clocks might typically translate to a slight gain in loading times.

Anyway, next gen is here now. Time to speculate about how often cross gen Xbox games are sticking with legacy SSD mode and ignoring SFS, because that's easiest. ;)
 
Speaking of Infinity Cache, I thought you might be interested to see this retrospective from Anandtech on the i7 5775c. (Bear with me!)

It's a 5xxx Intel 'Core' CPU that was mainly for mobile, but was special in that it had a 128 MB on-package pseudo-L4 eDRAM cache (you may remember eDRAM from such classics as the PS2 and the Xbox 360). At the time the eDRAM was thought to mostly just benefit the iGPU, and the slightly older i7-4790K was benchmarked as beating it when both were equipped with a dedicated GPU, thanks to its much higher clocks. Fast-forward five years, and with games pushing multiple cores harder, that phat 128 MB (pseudo) L4 cache is really kicking some ass ... even against some much more modern processors!

https://www.anandtech.com/show/1619...ective-review-in-2020-is-edram-still-worth-it

There's life in Intel yet xD!

All jokes aside, it was a good idea then and it's clearly a good idea now. I don't think people give Intel enough credit. Yeah, the Spectre/Meltdown stuff did a lot of damage and AMD are beating them ATM, but hopefully they come back stronger than ever. Healthy competition is always a good thing ;)

Like you, I don't think variable frequency is involved. I doubt BC games are pushing PS5 remotely close to needing to throttle anything. And while Sony haven't stated that the PS5 SSD won't throttle, I think it would be uncommon at worst, and that it surely won't be the case for BC games.

Exactly. Maybe "fluctuate" would have been a better term than "throttling". I don't really see it happening though, except in instances where the game itself doesn't need maximum I/O bandwidth.

Yeah, I think CPU is always going to be a factor coming from last gen. I'm totally with DSoup on this.

Unpacking and/or decompressing last gen was mostly on the CPU. Even if you only go up from 30 MB/s to, say, 600 MB/s, that's still a 20-fold increase, well beyond the respective CPU gains. And if the mid-gen CPUs were showing some moderate loading gains, I think it's fair enough that a slight advantage in XSX CPU clocks might typically translate to a slight gain in loading times.

Anyway, next gen is here now. Time to speculate about how often cross gen Xbox games are sticking with legacy SSD mode and ignoring SFS, because that's easiest. ;)

Actually, something interesting happened with Until Dawn on PS4; there was a firmware update recently and now the game has near-instant load times... on a PS4! That surprised me, because I'd always been of the opinion that PS4/XBO were simply incapable of this, even with SSDs installed. But a firmware update does the trick.

That has me thinking: even though we know PS5's SSD is 2x faster than the Series systems', for 1st-party games in particular it's not completely infeasible that MS studios could hit near-instant load times or data-streaming figures at least essentially comparable with whatever Sony's 1st parties do, through full use of XvA features and smart coding. Going by the Until Dawn stuff, software optimizations (including firmware optimizations) really can make massive differences if done correctly.

The drawback, though, is that if it's 1st-party related on MS's side, it'd be on a game-by-game basis, not something that could easily be packaged into software tools 3P devs could apply to their own titles. So later in the generation, when the I/O in both systems is being fully leveraged (in 3P terms), at worst we see differences in line with what the paper specs say. But that raises the question: how long will average load times be for games further into the generation? I doubt devs will ever want to go back to the 1-minute-plus times we saw this generation; they'll do everything they can to avoid that. Otherwise users will start to feel these consoles failed to maintain one of their biggest advantages, and that softens people's expectations for similar innovations in future systems.

And that worst-case scenario also depends on how XvA actually performs, which I'm particularly bullish on "punching above its weight" (a cheesy phrase by now, but eh). How much so is still up for debate, but if the worst-case load time for a typical PS5 game later in the gen is, say, 15 seconds, from what we're seeing so far that puts Series X load times for the same title at maybe 25 seconds. Which isn't actually all that bad.

One other thing I'll quickly say: neither system is showing any games (in terms of cold boot) with load times matching what you'd assume from the raw speeds (or even the compressed bandwidths) relative to physical memory capacity once you subtract the OS reserves (13.5 GB usable on Series X, probably 14 GB on PS5). Going off the Miles Morales (PS5) and AC Valhalla (Series X) cold-boot times, if the 7-second and 10-second figures are correct, then at least in those cases the Series X is doing a bit more with less in comparison to PS5.

Because, again, taking the paper specs into account, PS5 should be able to fill its memory in about 2.9 seconds and Series X in about 6.7 seconds (I didn't take out the OS reserves for this :S). But a 7-second load is roughly 2.4x the former figure, while a 10-second load is only 1.5x the latter, and that's also factoring in that Miles Morales is probably more optimized for PS5 than AC Valhalla will be on either PS5 or Series X. Again, it's just two cases, but I thought that was interesting to see given they're both cross-gen games with explicitly next-gen versions available (and TBF, it's also not accounting for actual design differences between the two games from a coding POV).
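For reference, a quick sketch of the back-of-envelope maths behind those figures (raw uncompressed paper speeds; OS reserves deliberately not subtracted, as noted):

[CODE=python]
# Raw paper-spec fill times vs the observed cold boots mentioned above.
ram_gb = 16.0
ps5_raw_gbps, xsx_raw_gbps = 5.5, 2.4   # GB/s, uncompressed
ps5_fill = ram_gb / ps5_raw_gbps        # ~2.9 s
xsx_fill = ram_gb / xsx_raw_gbps        # ~6.7 s
print(f"PS5: ideal {ps5_fill:.1f}s vs observed 7s  -> {7 / ps5_fill:.1f}x ideal")
print(f"XSX: ideal {xsx_fill:.1f}s vs observed 10s -> {10 / xsx_fill:.1f}x ideal")
[/CODE]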

I also think Quick Resume is going to be something of a secret weapon for the Series systems when it comes to shaving load times relative to PS5 games. Unless a game explicitly features both autosaves and load points from those autosaves, a person on PS5 has to launch the game, select a save, and then possibly travel back to where they actually were (if they'd reached a point they wanted but not far enough to make a save). With Series you essentially get your own save-state file created for you, and you pick right back up where you left off.

I know you technically had something kinda like this with Suspend/Resume on PS4, but that was just for a single game, and if the power cut out, so did the state you were at :S. I'd even say it might (maybe?) be possible for some innovative game-design features (particularly in episodic games) to leverage Quick Resume, but that depends on how much control the OS grants games over such things (some of it might be held back for security reasons).
 
This is almost certainly attributable to the CPU. Most game-data check-in is CPU-driven: once a file has actually been loaded into RAM, the data is often a mishmash of compressed and uncompressed textures, shaders, audio, geometry and other stuff, and separating and processing that data so it's usable by the game engine happens on the CPU, aside from zlib decompression, which is handled in hardware on both consoles. Series X's CPU clock is a shade faster than PS5's, and most games exhibit a shade faster loading on Series X.

Coincidence? :nope: Conspiracy? :nope: Results fitting the facts? :yep2:
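To put a rough shape on that check-in work, here's a minimal sketch with an entirely invented blob layout (asset types, header format and payloads are all made up for illustration); the zlib call stands in for the one step the consoles offload to hardware:

[CODE=python]
import io, struct, zlib

# Each entry: 1-byte asset type, 1-byte "is compressed" flag, 4-byte payload
# length, then the payload itself.
TYPES = {0: "texture", 1: "audio", 2: "geometry"}

def check_in(blob: bytes) -> dict:
    assets = {name: [] for name in TYPES.values()}
    stream = io.BytesIO(blob)
    while header := stream.read(6):
        kind, compressed, length = struct.unpack("<BBI", header)
        payload = stream.read(length)
        if compressed:
            payload = zlib.decompress(payload)   # the step hardware offloads
        assets[TYPES[kind]].append(payload)      # sorting/fix-up stays on the CPU
    return assets

# Build a tiny two-asset blob and check it in.
tex = zlib.compress(b"texels" * 100)
blob = (struct.pack("<BBI", 0, 1, len(tex)) + tex
        + struct.pack("<BBI", 1, 0, 4) + b"beep")
print({k: len(v) for k, v in check_in(blob).items()})
[/CODE]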

I can understand how this comes into play for BC games, but with NBA 2K21, for example, the Series X simply loads into gameplay faster than the PS5. You would think the I/O coprocessors in the PS5 are handling this, along with the custom DMAC, to ensure optimal throughput. I was expecting, say, 2 seconds for the PS5 vs 4 for the Series X, but surprisingly the Series X is able to fill up RAM for games faster. It could be better DMA engines in the Series X, alongside the faster CPU clock, plus lower latency from all the instructions being handled on the CPU and then offloaded to the DMA engines; on the PS5 you have to first offload them to the I/O coprocessors and then to the DMAC. If I were to guess, as we move forward with next gen and loading times significantly reduce, the limited amount of RAM will mean we won't see a huge difference in loading times.
 
I can understand how this comes into play for BC games, but with NBA 2K21, for example, the Series X simply loads into gameplay faster than the PS5. You would think the I/O coprocessors in the PS5 are handling this, along with the custom DMAC, to ensure optimal throughput. I was expecting, say, 2 seconds for the PS5 vs 4 for the Series X, but surprisingly the Series X is able to fill up RAM for games faster.
For the PS5 to be twice as fast, storage, CPU and RAM all need to be twice as fast. Load times are not just limited to the speed of storage; using Spider-Man as an example, I explained in this post the type of work the game needs to do once the data has been loaded in, before it's ready to play. The way data needed by the game is stored on the drive is nothing like how it exists in memory. There is a lot to do. :yes:
 
I can understand how this comes into play for BC games, but with NBA 2K21, for example, the Series X simply loads into gameplay faster than the PS5. You would think the I/O coprocessors in the PS5 are handling this, along with the custom DMAC, to ensure optimal throughput. I was expecting, say, 2 seconds for the PS5 vs 4 for the Series X, but surprisingly the Series X is able to fill up RAM for games faster. It could be better DMA engines in the Series X, alongside the faster CPU clock, plus lower latency from all the instructions being handled on the CPU and then offloaded to the DMA engines; on the PS5 you have to first offload them to the I/O coprocessors and then to the DMAC. If I were to guess, as we move forward with next gen and loading times significantly reduce, the limited amount of RAM will mean we won't see a huge difference in loading times.

It's like what's been thought to be the case for a while now: yes, Sony has a very good I/O solution, but MS aren't idiots. They knew what they were doing too, and their approach is both more scalable (we see both Nvidia and AMD going DirectStorage-based routes) and at least as performant as Sony's in virtually all instances. We simply aren't seeing anywhere near the 2.25:1 gap in storage I/O the paper specs were suggesting would be the case, and I don't see that ratio ever being approached, because neither system's I/O is being fully taxed yet and, as you and DSoup are saying, there's a lot more involved WRT actually getting the data into a usable state in memory.

I'm actually a bit more curious now about what MS's customizations to the CPU were. They've vaguely said it's "server class", but it's hard to gauge what that would mean specifically. Also, as the gen goes on, games should be able to further optimize their access patterns for data in memory (especially 1P titles), which should help keep load times short & sweet (if not virtually nonexistent) while game complexity grows in scope.
 
We simply aren't seeing anywhere near the 2.25:1 gap in storage I/O the paper specs were suggesting would be the case

People are focusing way, way too much on the game-loading drag races, which are not the point of the PS5's storage I/O. The point was to remove bottlenecks for in-game scenarios where data needs to be streamed in rapidly without taxing the APU.
 
People are focusing way, way too much on the game-loading drag races, which are not the point of the PS5's storage I/O. The point was to remove bottlenecks for in-game scenarios where data needs to be streamed in rapidly without taxing the APU.

I understand that, but as Johnny Awesome was also saying (I think), MS have also focused on removing those bottlenecks. And I actually somewhat question how much of a bottleneck those things were in the first place, with the recent firmware update on PS4 dramatically cutting load times for games like TLOU and Until Dawn.

Those are not open-world games, to be fair, and they aren't pushing asset streaming, but the update has produced load times once thought impossible for that type of system, with its SATA II interface. I think the bigger bottleneck might have been the storage medium itself; accessing data from a platter drive is just very different from accessing an array of NAND flash modules. Then there's also the need to consider how the game itself codes its I/O stack for accessing and managing data once it's loaded in.

So yes, I understand very well that Sony's bigger focus was on facilitating low-latency asset streaming, but both PS5 and the Series systems will be able to do that in a way that's "next-gen" enough. I expect Sony's to have a slight advantage in that regard, just probably nothing that'll create a large disparity, particularly when devs on Series can leverage SFS and the mip-blending hardware to aid texture streaming (although that is going to be harder to do versus Sony's solution, so it could be more hit-and-miss).
 
Yes, yes, I have heard it all: "This feature, that feature, bum SDKs, RDNA 22.2⅓, VRS, SFS, and a partridge in a pear tree are going to make all the difference, you just wait and see"... It's all nebulous, man.

I'm going from what I have read and seen from the developers, the system architects and actual running software. For the here and now, the proof is in the pudding.
 
Yes, yes, I have heard it all: "This feature, that feature, bum SDKs, RDNA 22.2⅓, VRS, SFS, and a partridge in a pear tree are going to make all the difference, you just wait and see"... It's all nebulous, man.

I'm going from what I have read and seen from the developers, the system architects and actual running software. For the here and now, the proof is in the pudding.
But, going by your argument, wouldn't that mean the proof is in the pudding and the Series X SSD/I/O is punching way above its weight vs. the more hyped PS5 solution?

As far as GPUs go, it was always going to be a small difference due to not only close specs, but also devkit availability, maturity and, in the end, resolution scaling, which is very hard to notice and can easily bring a 10-15% performance increase if done right. Loading and streaming, though, should have been a slam dunk for PS5. I happen to think MS's solution is extremely elegant and well thought out (like Sony's SoC is, for example).
 
People are focusing way, way too much on the game-loading drag races, which are not the point of the PS5's storage I/O. The point was to remove bottlenecks for in-game scenarios where data needs to be streamed in rapidly without taxing the APU.

I think it made more sense for Sony to go the route it's competent at, which is hardware. That's why they have large I/O coprocessors. MSFT, on the other hand, simply rewrote their algorithms for file I/O and most likely upgraded the DMA engines to handle SSD workloads. And thus far that has proven to be the more efficient route: despite having half the effective throughput, it has performed extremely well so far.
 
For the PS5 to be twice as fast, storage, CPU and RAM all need to be twice as fast. Load times are not just limited to the speed of storage; using Spider-Man as an example, I explained in this post the type of work the game needs to do once the data has been loaded in, before it's ready to play. The way data needed by the game is stored on the drive is nothing like how it exists in memory. There is a lot to do. :yes:

Yes, that's why the decompression block in the PS5 is equivalent to a higher number of Zen 2 cores than the Series X's (I think 13 compared to like 5?). But beyond that, it's down to the OS optimization for SSD workloads, the DMA controller and the SSD controller. That's where the Series X seems to be more efficient thus far, genuinely punching above its weight. The DMA controllers in both systems are built to handle SSD workloads, and the CPU cost of requesting data should be negligible in both (10% of a Zen 2 core vs I/O coprocessors). I only see a small latency disadvantage on the PS5 from having an extra IC (the I/O coprocessors) processing those instructions before informing the DMA controller.

We know the PS5 has a custom controller for its SSD, which apparently Sony built from the ground up. That's impressive, because it means they can actually sustain the 5.5 GB/s; most likely they did this because it was cheaper and better than either customizing the Phison E-16 or waiting for the late-arriving E-18. But we now know the Series X's Phison E-19 controller is custom as well, since it's rated up to 3.9 GB/s and not the 3.75 GB/s you'll find in off-the-shelf SSDs. So you can bet the Series X SSD will almost always be at 2.4 GB/s or above (rarely below). And just like Sony's, the firmware for the SSD controller is completely unique to the platform; you won't find it anywhere else. It was actually rewritten by Andrew Goossen's team at MSFT.

So basically all I'm wondering is, in NBA 2K21 for example, could the Series X be loading faster because of lower latency? It can't be CPU overhead, since both systems basically eliminated that; if anything, the Series X would be the one suffering there, since it's using 10% of a Zen 2 core. But alas, it's loading some next-gen third-party games faster!
 
But, going by your argument that would mean proof is in the pudding and Series X SSD/IO is punching way above its weight vs more hyped PS5 solution?

You're going to see only minor differences between the consoles when running games not optimised for the new storage solutions; this includes everything running in backwards-compatible modes, but also games released with next-gen versions, like Valhalla, Watch Dogs and NBA.

We've only seen two games where the assets have been packaged in a way that leverages the new storage: Astrobot (3-second load) and Spider-Man (6-second load). It could be a while before an optimised cross-platform game is released.
 
No matter what, the CPU is going to be involved in loading, because even though you can take data off the SSD and put it straight into RAM, the game engine still has to instantiate that data into game objects, entities, etc. I'm not saying that explains the loading differences between the two consoles, but the CPU is not out of the picture entirely.
 
Yes, that's why the decompression block in the PS5 is equivalent to a higher number of Zen 2 cores than the Series X's (I think 13 compared to like 5?). But beyond that, it's down to the OS optimization for SSD workloads, the DMA controller and the SSD controller. That's where the Series X seems to be more efficient thus far, genuinely punching above its weight. The DMA controllers in both systems are built to handle SSD workloads, and the CPU cost of requesting data should be negligible in both (10% of a Zen 2 core vs I/O coprocessors). I only see a small latency disadvantage on the PS5 from having an extra IC (the I/O coprocessors) processing those instructions before informing the DMA controller.

We know the PS5 has a custom controller for its SSD, which apparently Sony built from the ground up. That's impressive, because it means they can actually sustain the 5.5 GB/s; most likely they did this because it was cheaper and better than either customizing the Phison E-16 or waiting for the late-arriving E-18. But we now know the Series X's Phison E-19 controller is custom as well, since it's rated up to 3.9 GB/s and not the 3.75 GB/s you'll find in off-the-shelf SSDs. So you can bet the Series X SSD will almost always be at 2.4 GB/s or above (rarely below). And just like Sony's, the firmware for the SSD controller is completely unique to the platform; you won't find it anywhere else. It was actually rewritten by Andrew Goossen's team at MSFT.

So basically all I'm wondering is, in NBA 2K21 for example, could the Series X be loading faster because of lower latency? It can't be CPU overhead, since both systems basically eliminated that; if anything, the Series X would be the one suffering there, since it's using 10% of a Zen 2 core. But alas, it's loading some next-gen third-party games faster!

That's interesting; I knew MS's controller wasn't exactly off-the-shelf, but I didn't know it was also capable of higher bandwidth than the standard part. I'm curious what other customizations were done on the controller.

About the latency stuff due to having more ICs involved in the process: I've seen that brought up once before, and it could be something worth looking out for. Especially considering that what sits in the I/O block (technically applicable to both systems, though likely less of an issue on the Series systems, since some slice of the I/O stack processing is still done on the CPU) is "equivalent" to so many Zen 2 cores; it's not like there are literally 13 Zen 2 cores in there. I figure all the talk of being comparable to such-and-such many Zen 2 cores is similar to the way both Sony and MS have described their audio solutions as analogous to prior systems' CPUs: more for illustrative purposes, to give a picture of rough peak performance capability, but not much beyond that.

Another interesting thing is that the Series X is indeed performing a lot closer to PS5's SSD I/O than most probably expected, and yet games are just scratching the surface of XvA. Granted, that could probably be said for PS5's SSD as well, but I'm honestly not expecting the delta between them to grow any larger than it already is in this regard. If anything, it will probably shrink even more, especially with 1P games. When you're averaging load-time differences of a literal second or two and have equally performant latency/file I/O for asset streaming, it all basically becomes a moot point.

You're going to see only minor differences between the consoles when running games not optimised for the new storage solutions; this includes everything running in backwards-compatible modes, but also games released with next-gen versions, like Valhalla, Watch Dogs and NBA.

We've only seen two games where the assets have been packaged in a way that leverages the new storage: Astrobot (3-second load) and Spider-Man (6-second load). It could be a while before an optimised cross-platform game is released.

This is true, although I think the relative performance we're currently seeing between the two in load times will generally hold once the optimization process begins. It's ironic, because it was actually the recent PS4 firmware update that convinced me you can technically do a LOT more with less, considering that system's interface standard and general I/O design; yet games like TLOU and Until Dawn are pulling load times there comparable with BC titles on PS5 and Series X. That says a lot, IMHO.

However, if once that optimization process starts we do see the delta between the two start to grow somewhat (particularly with 3P titles), then I think it'll come down more to Sony's I/O solution being the "easier" of the two to leverage in a shorter span of time, since a lot of that hardware is there to automate much of the process. MS's approach seems a bit more flexible, but parts of it have a higher learning curve, like parts of SFS (to my knowledge), and there could be cases where getting even lower load times requires adjusting parts of the game code to accommodate it. Not all 3P titles will have the means to dedicate that kind of resource, but MS's API tools being readily available (and their technical support for 3P devs; they seem to be very good at this) can likely resolve a good deal of that.
 
No matter what, the CPU is going to be involved in loading, because even though you can take data off the SSD and put it straight into RAM, the game engine still has to instantiate that data into game objects, entities, etc. I'm not saying that explains the loading differences between the two consoles, but the CPU is not out of the picture entirely.
Yup. I think devs will spend a lot of time rethinking not just how data is organised in storage, but how it's packaged and what format it's in. It may be advantageous for performance (loading and running) if data is stored in formats which take a little more space but are quicker to use. I can also envisage entirely new next-gen check-in, e.g. as you're turning your car, maybe you don't want to load in a street's worth of data before you start generating the world geometry; instead you do it in smaller chunks, with world generation racing to keep up with the I/O as data is loaded. Much of the work I did on servers was akin to this.
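Something like this shape, as a minimal sketch (the chunk names, timings and queue depth are all invented; real code would be pulling from the SSD and building geometry):

[CODE=python]
import queue, threading, time

chunks: queue.Queue = queue.Queue(maxsize=4)    # small buffer: gen races the I/O

def io_thread() -> None:
    for i in range(16):
        time.sleep(0.01)                        # stand-in for an SSD read
        chunks.put(f"street-chunk-{i}".encode())
    chunks.put(None)                            # end-of-stream sentinel

def world_gen() -> None:
    while (chunk := chunks.get()) is not None:
        # build geometry for just this chunk instead of a whole street's worth
        print("generated world piece from", chunk.decode())

threading.Thread(target=io_thread, daemon=True).start()
world_gen()
[/CODE]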

On PS5, Spider-Man is a 6-second load and Astrobot is a 3-second load. I think in 2-3 years, devs will have shaved a good fraction off such times.
 