Digital Foundry Article Technical Discussion [2023]

This is great and all, but if you follow the flow of the discussion, you'll see it was not a PC vs console debate. It was a discussion about properly constructed arguments and straw men.

Which he acknowledged, so it seems like he was following the discussion just fine.

BTW, I get why you are doing it in order to refute that other person; however, it could lead people to the wrong conclusions.
 
I agree it's not the same as hUMA because the dev still has to manage which datasets reside on the CPU side and which reside on the GPU side in order to maximise performance, which means more work for the developer and more chance of making the wrong decision and getting non-optimal performance. All that said, as long as it's implemented well, it should greatly mitigate the biggest complaint the recently linked source has about the PC, which is the copying of data back and forth between memory pools.

With that potentially significantly mitigated, the relative balance of benefits vs costs between split and unified memory architectures shifts significantly, particularly in raw performance terms.
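To make that concrete, here's a rough sketch of what placing a CPU-updated buffer directly in CPU-visible VRAM looks like via the new GPU upload heap type in D3D12, based on my reading of the Agility SDK docs. The helper name, the initial-state flag, and the surrounding setup are illustrative assumptions, not gospel:

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <cstring>

using Microsoft::WRL::ComPtr;

// Sketch: a buffer the CPU updates every frame, placed directly in
// CPU-visible VRAM (D3D12_HEAP_TYPE_GPU_UPLOAD, Agility SDK), so no
// separate staging copy is needed. `device`, `cpuData` and `bufferSize`
// are assumed to exist in the calling code.
ComPtr<ID3D12Resource> CreateGpuUploadBuffer(ID3D12Device* device,
                                             const void* cpuData,
                                             UINT64 bufferSize)
{
    D3D12_HEAP_PROPERTIES heapProps = {};
    heapProps.Type = D3D12_HEAP_TYPE_GPU_UPLOAD; // ReBAR-backed VRAM

    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width            = bufferSize;
    desc.Height           = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels        = 1;
    desc.Format           = DXGI_FORMAT_UNKNOWN;
    desc.SampleDesc.Count = 1;
    desc.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    ComPtr<ID3D12Resource> buffer;
    device->CreateCommittedResource(
        &heapProps, D3D12_HEAP_FLAG_NONE, &desc,
        D3D12_RESOURCE_STATE_GENERIC_READ, // mirroring the classic
                                           // upload-heap convention (assumption)
        nullptr, IID_PPV_ARGS(&buffer));

    // CPU writes land straight in VRAM; the GPU reads the same allocation.
    void* mapped = nullptr;
    buffer->Map(0, nullptr, &mapped);
    std::memcpy(mapped, cpuData, bufferSize);
    buffer->Unmap(0, nullptr);
    return buffer;
}
```

The catch, as above, is that the developer still has to decide which data actually belongs there: CPU reads back out of VRAM across PCIe are still slow, so read-back-heavy data is better left in system memory.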
This is one of those 'believe it when I see it' kind of things. Yes, it's possible in theory, but is it probable in the near future? I don't think so. Like I said in an earlier post, if GPU Upload Heaps are utilized in any meaningful way in the next decade, I'll be pleasantly surprised.
What is your evidence that this is a limitation, and why? All the evidence that I've seen (i.e. actual benchmarks) suggests that there is no speed-up when moving from the current to a newly released iteration of the fastest PCIe interface.

But hey, if you can show me benchmarks from when PCIe4 was first launched that demonstrate games at that time saw a sudden performance boost going from PCIe3 to PCIe4, then I'd be interested to see them.

Similarly, PCIe5 already exists on motherboards. Why has neither Nvidia nor AMD chosen to use it in their latest, just-released GPU lines if this is bottlenecking the system? Surely that would be a relatively cheap way to gain a competitive advantage if that were the case?

And even if PCIe were a bottleneck (and again, I'm curious to understand your reasoning for thinking this), have you considered how GPU-based decompression will significantly reduce the load on PCIe?
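To put rough numbers on that decompression point (purely illustrative figures, not measurements):

```cpp
// Illustrative arithmetic only: if assets cross the PCIe bus compressed
// and are only decompressed on the GPU, the effective asset bandwidth is
// the link bandwidth multiplied by the compression ratio.
constexpr double pcie4x16GBps     = 32.0; // ~32 GB/s theoretical for PCIe 4.0 x16
constexpr double compressionRatio = 2.0;  // ballpark GDeflate-style ratio (assumed)

constexpr double effectiveAssetGBps = pcie4x16GBps * compressionRatio;
// => ~64 GB/s of *uncompressed* asset data delivered per second, without
//    touching the physical link speed at all.
```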

One final point to consider: if PCIe bus bandwidth is a bottleneck in PCs, and, as you claim, increasing VRAM bandwidth does not bypass that bottleneck, then why, when we increase VRAM bandwidth (in line with GPU compute resources), do we see performance go up? Surely if PCIe were a true bottleneck there, then performance should not increase at all. And yet when we swap out an already very big GPU (let's say a 4080) tethered to the end of this apparently bottlenecking PCIe interface for an even bigger GPU (let's say a 4090), we see a big performance gain.
The comment was not made in reference to games, but in reference to architecture. Again, if you just examine the flow of data from storage -> RAM <-> VRAM, it's clear to see why the ~64GB/s of DDR5 is a problem.
 
Well, we have to consider that these very talented people are focusing on the console, and the studio isn't about to take them off their projects to work on a PC port of an old game. So they may very well have the knowledge to make an absolutely incredible PC version, but either aren't on the team, or wouldn't have been given the time to do the job properly.

Making games that look great, run great, and scale well across a huge range of hardware configurations is a matter of both having developers skilled and current at those particular things, and having a set of technologies that have been developed and maintained at some cost.

ND are clearly very talented developers, but that doesn't mean they will automatically be ideally placed to plop their game onto a PC and have it be a technically robust PC game. You tend to be best at what you do, and ND have spent a long time working only on PS games and presumably developing technology only to that end. No doubt if given the support from Sony they would become incredibly competent at bringing games to PC.

I've always felt that really good multi-platform developers are underappreciated, as platform advocates have more to gain emotionally by elevating their first-party exclusive developers to a level of technical omnipotence. Developers like id, Playground Games, 343i, and the incredible Rockstar Games have an area of ability that can't be meaningfully compared to great developers who only ever work on one fixed platform.
 
PC is clearly a more complicated space for development. You have a huge landscape of hardware, a heavy operating system, more abstraction in APIs, and more pitfalls in terms of I/O between devices and memory. Then you have unique challenges like shader compilation. Things like DirectStorage are a cool improvement, but it's going to be a long time before they can be targeted exclusively. Same with things like Resizable BAR. It's just a reality that PC development is harder: you have to support all of the PCs that don't have those features.

I think everyone has been pretty consistent in agreeing that PC development is harder, that's pretty much a given. The main thrust of much of the discussion has been about whether the PC architecture (seemingly focussed on memory and IO systems) is fundamentally inferior to specifically the PS5 which (apparently) allows the PS5 to do things that the (any) PC isn't capable of matching. And by PC architecture, I at least take that to be what is possible on the PC today if a developer chooses to dedicate the time and resources to getting the most out of it.

In the context of that, I think it's important to note that things like DirectStorage and Resizable BAR / GPU Upload Heaps do not need to be targeted exclusively, or as a baseline, for the systems which fully support them to be able to take advantage of them. So this isn't really a case of "they're nice but it'll be a decade before they have any real impact". They could have a real impact today, while also having an elegant fallback path for systems that don't support them.

To give more detail on that, DirectStorage is supported on any system running Windows 10 upwards. It supports NVMe drives, SATA drives, and even HDDs. You won't get much, if any, benefit from the last two, but the game will still work just fine on your system. The GPU decompression aspect will also work on any GPU all the way back to Maxwell. And even if you can't support that, it'll simply fall back to CPU decompression.
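For anyone curious what that path actually looks like, here's a minimal sketch of a DirectStorage request as I understand it from the public dstorage.h header. The file name, sizes, and error handling are placeholders:

```cpp
#include <dstorage.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Sketch: stream a GDeflate-compressed asset straight into a GPU buffer.
// `device`, `destBuffer`, `compressedSize` and `uncompressedSize` are
// assumed; "assets.pak" is a placeholder file name.
void StreamAsset(ID3D12Device* device, ID3D12Resource* destBuffer,
                 UINT32 compressedSize, UINT32 uncompressedSize)
{
    ComPtr<IDStorageFactory> factory;
    DStorageGetFactory(IID_PPV_ARGS(&factory));

    DSTORAGE_QUEUE_DESC queueDesc = {};
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.Device     = device;

    ComPtr<IDStorageQueue> queue;
    factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));

    ComPtr<IDStorageFile> file;
    factory->OpenFile(L"assets.pak", IID_PPV_ARGS(&file));

    DSTORAGE_REQUEST request = {};
    request.Options.SourceType        = DSTORAGE_REQUEST_SOURCE_FILE;
    request.Options.DestinationType   = DSTORAGE_REQUEST_DESTINATION_BUFFER;
    // GDeflate is decompressed on the GPU where supported; the runtime
    // falls back to CPU decompression on older hardware by itself.
    request.Options.CompressionFormat = DSTORAGE_COMPRESSION_FORMAT_GDEFLATE;
    request.Source.File.Source          = file.Get();
    request.Source.File.Offset          = 0;
    request.Source.File.Size            = compressedSize;
    request.Destination.Buffer.Resource = destBuffer;
    request.Destination.Buffer.Offset   = 0;
    request.Destination.Buffer.Size     = uncompressedSize;
    request.UncompressedSize            = uncompressedSize;

    queue->EnqueueRequest(&request);
    queue->Submit(); // completion is signalled via an ID3D12Fence (omitted)
}
```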

Similarly with ReBAR / GPU Upload Heaps: my basic understanding, at least, is that if the system doesn't support them, then the memory space which would otherwise have been in VRAM, providing read/write access to both the CPU and GPU, would fall back to system memory. There it can still be read by the GPU, but it would need to be copied to VRAM for any GPU changes to be made, and then back to CPU memory for CPU changes, and so on. Much less efficient (basically what happens now), but still a totally viable fallback.
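In code terms, my understanding is that the choice boils down to a single capability check, something like this (field names per the Agility SDK; all the fallback plumbing omitted):

```cpp
#include <d3d12.h>

// Sketch: pick the memory strategy based on whether GPU upload heaps are
// exposed (Agility SDK feature check). `device` is assumed to exist.
D3D12_FEATURE_DATA_D3D12_OPTIONS16 options16 = {};
bool gpuUploadHeaps =
    SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS16,
                                          &options16, sizeof(options16)))
    && options16.GPUUploadHeapSupported;

// Supported:   one CPU-writable allocation in VRAM, no staging copy.
// Unsupported: classic path -- CPU writes into an UPLOAD heap in system
//              RAM, then the GPU copies it into a DEFAULT heap in VRAM
//              (and back again for CPU read-back), exactly the shuffling
//              described above.
D3D12_HEAP_TYPE heapType = gpuUploadHeaps ? D3D12_HEAP_TYPE_GPU_UPLOAD
                                          : D3D12_HEAP_TYPE_UPLOAD;
```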
 
This is one of those 'believe it when I see it' kind of things. Yes, it's possible in theory, but is it probable in the near future? I don't think so. Like I said in an earlier post, if GPU Upload Heaps are utilized in any meaningful way in the next decade, I'll be pleasantly surprised.

I don't really see what the issue would be. The advantages to be gained from ReBAR are already implemented at the driver level, which is why some games see improvements with it on. I don't know whether it would be possible to take advantage of GPU Upload Heaps in a similar fashion, but even if not, given that this should make developers' lives a fair bit easier and increase performance, and (as best I understand it) has a fallback path to the existing way of memory management, I would assume it will start to be used fairly quickly.

The comment was not made in reference to games, but in reference to architecture. Again, if you just examine the flow of data from storage -> RAM <-> VRAM, it's clear to see why the ~64GB/s of DDR5 is a problem.

You were talking about PCIe being a bottleneck earlier. Now you're talking about DDR. Which is it? They're completely different things. Also, the above statement makes no sense. If we swap in the bandwidth of the different segments as follows: 7GB/s -> 64GB/s -> 500GB/s, how do you conclude that the RAM (the middle link, nearly ten times faster than the storage feeding it) is the bottleneck?
 
Don't we have another thread for 'those' videos? The latter point was shown quite clearly in the DF coverage; it's almost 80% faster than a 2070 Super in spots. It's insane.

Edit: Hell yes the 2700x is back baby
 
Don't we have another thread for 'those' videos? The latter point was shown quite clearly in the DF coverage; it's almost 80% faster than a 2070 Super in spots. It's insane.

Edit: Hell yes the 2700x is back baby
In a way, I can't help but be impressed.

Combined with these devs' inexperience with the PC platform, the PS5's deeper systems are a far cry from the PS4, which was relatively simplistic in how it did things. That simplicity paradoxically forced devs to work harder and get far more creative to eke more cycles out of the machine, whereas the PS5 probably allows them to do things relatively fast and, in a lot of ways, without worrying much about optimization.

Even though Horizon had early PC issues, I doubt they were anything close to what TLOU Part 1 is going through now.
 
There is one game using DirectStorage, Forspoken, and even without using GPU decompression it loads as fast as the PS5. We just need to wait for a game using DirectStorage 1.1.

I really couldn't care less about boot-up times. That is by far the most boring benefit of fast I/O. I am specifically talking about on-the-fly decompression during gameplay.

It will probably arrive soon.

Right... for now let's keep hypotheticals to a minimum. Some of you here are hellbent on comparing what consoles are actually doing in the present to a proposed solution for the PC platform. It's really a pointless discussion.

There is nothing fundamentally more performant about console IO architecture than the current state of the art in PC.

Ok pal.
 
@pjbliverpool is totally correct. If you look at a high-end PC and the PS5, there is no comparison between what the PC can do and what the PS5 can do when both are optimized properly to their fullest; the PC has a lot more to give, and in the future we will inevitably see that brought to bear in gaming.

Of course, the actual situation we are talking about is not about the highest-end PC components, or the newest technologies like DirectStorage, but the PS5 compared to how PCs are traditionally coded for, and lower-end PC hardware relative to the PS5 that cannot simply brute-force its way past what the PS5 is doing.

In this way the PS5 does inherently have the advantage based on its architecture right now, in that the PC has to be properly reconfigured to do what was once a much simpler port job.

As someone else said, it's going to take a whole lot more work and optimization for Naughty Dog to get this running well on PC, if it's even possible with how they went about doing this. But they need to be trying, because this kind of launch state is very bad amid a trend of very bad launches this generation.
 
@pjbliverpool is totally correct. If you look at a high-end PC and the PS5, there is no comparison between what the PC can do and what the PS5 can do when both are optimized properly to their fullest; the PC has a lot more to give, and in the future we will inevitably see that brought to bear in gaming.

Our dear friend PJ is a long way from being "totally correct". As you perfectly explained below, the conversation was never about which platform was the most powerful or the like. It was strictly centered around systems integration and data management. And he has since doubled and tripled down on the belief that consoles do not have a leg up on PC in this area as of the present day. As it relates to the topic at hand, he is totally wrong. You don't get cookies for giving answers to a question that was never asked. You get straws.

Of course, the actual situation we are talking about is not about the highest-end PC components, or the newest technologies like DirectStorage, but the PS5 compared to how PCs are traditionally coded for, and lower-end PC hardware relative to the PS5 that cannot simply brute-force its way past what the PS5 is doing.
 
Ultimately, at the moment, this I/O architecture discussion is pretty much based on the severe under-performance of one title: TLOU. The vast majority of other titles do not show anywhere near this discrepancy, and by far the most common and significant performance bottleneck in recent PC titles is games not pre-compiling their shaders, something which neither DirectStorage nor UMA can do anything about.
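For reference, the standard mitigation for that particular problem has nothing to do with I/O hardware: you compile and cache pipeline state up front, e.g. via D3D12's pipeline library mechanism. A rough sketch, with the names and blob handling simplified:

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Sketch: fetch a PSO from an on-disk pipeline library, compiling (and
// caching) it only on a miss. `device1` is an ID3D12Device1*, `cachedBlob`
// is the library file read from disk (may be empty on first run), and
// `psoDesc` is a filled-in D3D12_GRAPHICS_PIPELINE_STATE_DESC.
ComPtr<ID3D12PipelineState> GetOrCompilePso(
    ID3D12Device1* device1,
    const std::vector<char>& cachedBlob,
    const D3D12_GRAPHICS_PIPELINE_STATE_DESC& psoDesc)
{
    ComPtr<ID3D12PipelineLibrary> library;
    device1->CreatePipelineLibrary(cachedBlob.data(), cachedBlob.size(),
                                   IID_PPV_ARGS(&library));

    ComPtr<ID3D12PipelineState> pso;
    if (FAILED(library->LoadGraphicsPipeline(L"gbuffer_opaque", &psoDesc,
                                             IID_PPV_ARGS(&pso))))
    {
        // Cache miss: compile now -- ideally on a loading screen or a
        // background thread, never mid-gameplay.
        device1->CreateGraphicsPipelineState(&psoDesc, IID_PPV_ARGS(&pso));
        library->StorePipeline(L"gbuffer_opaque", pso.Get());
        // Serialize() the library back to disk so later runs skip this.
    }
    return pso;
}
```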

So yes, in terms of the 'present day', it is true that it has not been shown explicitly that the PC has an insurmountable architectural flaw that cannot be reasonably worked around. If you want to argue that, while being just one title, TLOU is actually the canary in the coal mine, fine: it may very well be! But that's speculation based on future titles, just as DirectStorage being the solution to this is speculation based on future releases too.

What we have is what we've always had: PC games need more CPU/GPU grunt than the exact equivalent console hardware to get roughly the same rasterization job done. This has always been the case, between the API overhead and having to make your games scalable across a far wider range of hardware. It's magnified more now due to the cost of PC GPUs and the fact that PC gamers are probably holding onto older cards, whereas in past gens, two years in, you could get a console-beater for $300.

But the point is that TLOU is by far an outlier in just how egregious the buffer you need on the PC is. Maybe it's a harbinger, DS 1.1 flops, and we'll all need 16GB+ of VRAM and Zen 5 going forward to compete. But neither you nor I know either way, and I'm not going to base that conclusion on one release that has yet to see its first significant patch.
 
According to IGN's analysis, TLOU1 is indeed running High settings on PS5. The game is always CPU-limited on PC, but when extremely GPU-limited, the PS5 is often faster than an RTX 2070 by a considerable margin.


Fantastic coverage by NX Gamer. He does a great job of separating the faults of Naughty Dog in releasing a very messy port from the architectural differences and capabilities that explain why many PC configurations are performing much worse than the console (independent of developer faults). Objective and thorough analysis is the greatest.
 
This entire debate seems premature. We have no worthwhile data to draw any conclusions from. On the console side there is only one game that has done anything new with the current IO tech and it hasn't yet been attempted on PC. We have no idea if the current PC environment prevents this from being achievable. On the PC side it's also way too soon to be touting future DX and DS features as a fix for potential shortcomings.
 
Nah, you've gone and taken the other extreme of this position, which is also ridiculous. TLOU Pt1 is decent-looking, but it's not even actually next-gen, visually. It's a small step up from TLOU2 and little more.

Again, something like Plague Tale Requiem looks pretty clearly better.

To my subjective eyes, the texture quality in TLOU is clearly better. I'm in the market area at the start of the game in APT, where NPCs are repeated and the dedicated VRAM usage is still 9GB at 4K with DLSS Quality.
 