What wild claims am I pushing? You're using the term superior, but I'd say PC and console solutions for data are just fundamentally different at the moment. It is the combination of more performant I/O architecture and unified memory that has allowed the consoles to be more nimble and outclass the PC platform with these newer games that pack in much larger assets and require efficient memory solutions to keep performance at a high level.
You've answered your own question in the bolded statements above. "More performant" and "outclass" are the same as "superior", at least in this context.
There is nothing fundamentally more performant about console IO architecture compared to the current state of the art on PC. It is, in fact, less performant in every respect when set against the best available PC solutions (lower transfer speeds, lower decompression throughput, potentially much higher latency depending on how the split memory pools are utilised). It will obviously be more performant than the vast majority of PCs out there, and, importantly, it requires less CPU performance to attain the same level of performance - no one is arguing that fact.
If you had used the term "more efficient" (in terms of compute resources required for the same result), then I could support that, but "more performant" is simply wrong. The problem with your analysis is that you're basing it on issues experienced in several recent games, and here's why that's problematic:
- You're mentioning "newer games" without being specific about which issues in those games you can actually lay at the feet of a "less performant" IO architecture. Many of the stuttering and performance issues experienced in some recent games (Gotham Knights, Forspoken, The Callisto Protocol, Sackboy, Dead Space, Hogwarts Legacy, etc...) were patched out post-launch, and were often down to shader compilation stutter, which is completely unrelated to the IO system. This points at a software/development issue rather than an IO architecture one.
- None of these "newer games" that you're referencing are using the modern storage API on PC that is specifically designed to handle the larger assets and higher streaming demands you cite as causing problems for PC. The exception is Forspoken, which, guess what - loads faster on the PC, albeit with higher CPU requirements, in no small part because it's still doing its decompression on the CPU.
- Even with the older and, one could argue, unfit-for-purpose storage API in use (win32 - see the sketch of that legacy path after this list), all of these games - including TLOU - can offer consistently higher performance than the PS5 on sufficiently powerful PC hardware. I'm not a fan of brute forcing a solution, but if you're arguing that the PC as a platform, i.e. regardless of the hardware used, has a "less performant IO architecture" which allows the consoles to "outclass" said platform in newer games, then why do all said newer games still perform consistently better on that platform with certain hardware configurations? PC IO can be less efficient, particularly when using suboptimal APIs and decompression schemes, sure. But not less performant. And certainly not less performant if the optimal APIs and decompression schemes were being used (along with just general optimisation for the platform), as Forspoken has shown us.
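To make the contrast concrete, here's a rough, illustrative sketch of that legacy win32-style streaming path (my own simplified example, not any particular engine's code; DecompressOnCpu and UploadToGpuViaStaging are hypothetical stand-ins for whatever codec and D3D12 upload path a given game uses): read the compressed blob into system RAM with ReadFile, inflate it on a CPU worker, then copy it to VRAM via a staging buffer. Every byte makes a trip through CPU-visible memory, which is exactly where the extra CPU cost people see on PC comes from.

```cpp
// Illustrative sketch only: the traditional win32 streaming path these games
// are effectively on. file -> system RAM -> CPU decompression -> staging -> VRAM.
#include <windows.h>
#include <cstdint>
#include <vector>

// Hypothetical stand-in: zlib / Oodle / LZ4 inflate, typically on a worker thread.
static std::vector<uint8_t> DecompressOnCpu(const std::vector<uint8_t>& compressed,
                                            size_t uncompressedSize)
{
    std::vector<uint8_t> out(uncompressedSize);
    // ... CPU codec work goes here - this is the step that eats cores ...
    return out;
}

// Hypothetical stand-in: memcpy into a mapped D3D12 upload heap, then a
// copy-queue CopyBufferRegion into the final VRAM resource.
static void UploadToGpuViaStaging(const std::vector<uint8_t>& data)
{
    (void)data;
}

bool StreamAssetLegacy(const wchar_t* path, size_t compressedSize, size_t uncompressedSize)
{
    HANDLE file = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, nullptr,
                              OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, nullptr);
    if (file == INVALID_HANDLE_VALUE)
        return false;

    // 1. Read the compressed blob into CPU-visible system RAM.
    std::vector<uint8_t> compressed(compressedSize);
    DWORD bytesRead = 0;
    BOOL ok = ReadFile(file, compressed.data(),
                       static_cast<DWORD>(compressedSize), &bytesRead, nullptr);
    CloseHandle(file);
    if (!ok || bytesRead != compressedSize)
        return false;

    // 2. Decompress on the CPU - the PS5 offloads this step to a hardware block.
    std::vector<uint8_t> asset = DecompressOnCpu(compressed, uncompressedSize);

    // 3. Another pass over the data just to get it into VRAM.
    UploadToGpuViaStaging(asset);
    return true;
}
```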
You don't have to accept my words. You can hear it straight from a developer. If you subsequently downplay the message of developers who are creating the games and working on all of these platforms then you're clearly blinded by arrogance, ego, and perhaps an obsession with the idea of PC dominance in all areas.
That's not "the message of developers". That's the message of one art director who, as mentioned in other posts, isn't ideally placed to talk definitively about the low-level technical details being discussed here.
For example, at one point he talks about really bad experiences he's had in recent PC games, with "stuttering and hitching", puts that at the feet of data transfers, and then later notes he doesn't know whether the stuttering he experienced in another game was down to shader compilation or something else. It's also worth noting that none of the games he's talking about use the modern storage API that is now available on PC, so drawing conclusions about the general capability of PC IO from games that are literally not leveraging that capability properly is a faulty premise.
In fact, his description of how data is decompressed and then passed "back and forth" between memory pools doesn't tally at all with how modern PC technologies like DirectStorage, Resizable BAR, and GPU decompression work, which leads me to believe he's basing his statements on a fairly high-level understanding of how things have been done in the past, rather than how they can be done in the present. A rough sketch of that modern path is below.
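For reference, this is roughly what the modern path looks like when it's actually used. A hedged sketch only, based on the publicly documented DirectStorage 1.1 API (dstorage.h), with device, buffer, and fence creation plus all error handling omitted: a single request names a compressed region of the file as the source and a GPU buffer as the destination, marks the payload as GDeflate, and the runtime handles the NVMe read plus GPU decompression, with no CPU-side inflate and no game-managed shuffling of asset data "back and forth" between pools.

```cpp
// Hedged sketch of asset streaming via DirectStorage with GPU decompression
// (DirectStorage 1.1+, dstorage.h). Assumes 'device' is a valid ID3D12Device
// and 'destBuffer' is a default-heap resource big enough for the uncompressed asset.
#include <cstdint>
#include <d3d12.h>
#include <dstorage.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void StreamAssetDirectStorage(ID3D12Device* device, ID3D12Resource* destBuffer,
                              const wchar_t* path, uint32_t compressedSize,
                              uint32_t uncompressedSize, uint64_t fileOffset,
                              ID3D12Fence* fence, uint64_t fenceValue)
{
    ComPtr<IDStorageFactory> factory;
    DStorageGetFactory(IID_PPV_ARGS(&factory));

    // A queue that sources requests from files and writes into GPU resources.
    DSTORAGE_QUEUE_DESC queueDesc{};
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.Device     = device;

    ComPtr<IDStorageQueue> queue;
    factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));

    ComPtr<IDStorageFile> file;
    factory->OpenFile(path, IID_PPV_ARGS(&file));

    // One request: read the compressed region straight towards the GPU buffer,
    // with GDeflate decompression done on the GPU - no CPU-side inflate,
    // no bounce through game-managed system RAM.
    DSTORAGE_REQUEST request{};
    request.Options.SourceType          = DSTORAGE_REQUEST_SOURCE_FILE;
    request.Options.DestinationType     = DSTORAGE_REQUEST_DESTINATION_BUFFER;
    request.Options.CompressionFormat   = DSTORAGE_COMPRESSION_FORMAT_GDEFLATE;
    request.Source.File.Source          = file.Get();
    request.Source.File.Offset          = fileOffset;
    request.Source.File.Size            = compressedSize;
    request.UncompressedSize            = uncompressedSize;
    request.Destination.Buffer.Resource = destBuffer;
    request.Destination.Buffer.Offset   = 0;
    request.Destination.Buffer.Size     = uncompressedSize;

    queue->EnqueueRequest(&request);
    queue->EnqueueSignal(fence, fenceValue); // fence tells the renderer the data is resident
    queue->Submit();
}
```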
I'm not saying all games will use the potential of the PC's IO capabilities well. I think it's clear that many do not, and many will continue not to. But that holds true for consoles too: what percentage of PS5 games have sub-2-second load times? The point here is that there is nothing fundamentally less performant about the PC's IO architecture that, if utilised efficiently, would prevent very similar results to those we see on the PS5 on similar levels of hardware (again, a bit more CPU will always be required). Forspoken is a great example of what can be done when it's done properly. TLOU is a great example of what happens when it's not.
Don't you think we should wait for the implementation of DirectStorage in an actual game instead of comparing realities vs hypotheticals? We've heard so much promise about this API with no real-world applications yet. Running benchmark tests is neat, but they're not games. We can only go off what we're seeing in the present. Once a game with in-game streaming using DirectStorage goes commercial, I would be more than happy to compare and contrast. Until then, let's just stick with hard facts and realities.
Again, Forspoken is the real-world example which proves it's possible on the PC. I don't place that all at the feet of DirectStorage; I expect the lion's share of that game's impressive loading is down to it simply being well optimised in that respect, both for PC and PS5. We've seen other non-DS games which perform very well in that respect on PC as well.
The point here, though, is that you are also talking in theoreticals in saying that 'as a platform, the PC has a less performant IO architecture than the PS5', and you're making vague references to games that "prove" that while ignoring the existence of games which show the opposite. So, putting aside individual implementations which can of course be suboptimal (which TLOU, for example, very obviously is), I am explaining at a hardware and API level why that's not actually the case. Whether the theoretical capability is put to good use in more than a handful of games moving forwards, though, is an entirely different argument. Consoles will certainly always have the advantage of receiving more targeted optimisation than PCs, and so the advantage sits with them from that perspective.