Digital Foundry Article Technical Discussion [2024]

The realities of development didn't allow either to push the tech much beyond PS4 games. Well-made games, but I expect big improvements over both by generation's end.
I think Rift Apart was one of the clearest-cut of the sparing few true 'next-gen' titles we had earlier in the generation. Even compared to the still great-looking PS4 Ratchet and Clank, it's a really big step up in almost every way, all while performing really well, too.

And I find it continually hilarious how much people just can't get to grips with the idea that dedicated first-party titles, built for their own fixed-spec hardware with a super-low-level API, might well be a fair bit more optimized than some generalized DX12 PC release that comes later down the line.
 
Right, but what difference in architecture? This isn't Cell; it's two desktop-level architectures (Zen and RDNA), so those optimizations should carry over to PC, at least somewhat (and certainly relative to Nvidia!)
hUMA is the largest difference, for starters.
A PC has to load data into system memory and then copy it over to VRAM across the PCIe bus.

This means that anything the CPU touches and anything the GPU touches are largely separate. If there is a large amount of parallel processing that needs to be done on vertex data for animations or the like, the CPU is fully responsible for it. If we asked the GPU to do it instead, we would have to ensure the GPU had that data, do all the processing, and then copy the results back to the CPU, and that round trip is much too long.

On hUMA consoles the CPU and GPU share the same address space. So while the CPU may be comparatively weak at large tasks, because the GPU is right there, it can be asked to process the data in place: the GPU writes the results and the CPU picks up and continues.

A great number of cycles are saved by not having to move data around. In fact, one could argue this is one of the primary benefits of consoles: raw computational prowess can be orders of magnitude lower, but data retrieval can be much faster in some respects, because they aren't spending hundreds of cycles copying data about.
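To put a rough shape on the round-trip argument above, here's a toy cost model (all numbers invented for illustration, not benchmarks) contrasting a discrete-GPU dispatch, which pays PCIe copies in both directions, with a unified-memory dispatch that doesn't:

```python
# Toy cost model (illustrative numbers only) contrasting a discrete-GPU
# round trip with a unified-memory (hUMA-style) dispatch.

def discrete_gpu_time(bytes_moved, kernel_ms, pcie_gbps=16.0):
    """Host->device copy, kernel, then device->host copy over PCIe."""
    copy_ms = (bytes_moved / (pcie_gbps * 1e9)) * 1e3  # one direction
    return copy_ms + kernel_ms + copy_ms               # full round trip

def unified_memory_time(bytes_moved, kernel_ms):
    """CPU and GPU share one address space: no staging copies at all."""
    return kernel_ms

# Hypothetical example: 256 MB of vertex data, a 2 ms skinning kernel.
data = 256 * 1024 * 1024
print(f"discrete: {discrete_gpu_time(data, 2.0):.2f} ms")  # ~35.55 ms
print(f"unified : {unified_memory_time(data, 2.0):.2f} ms")  # 2.00 ms
```

With these made-up numbers the copies dominate the actual work by more than an order of magnitude, which is the round trip the post is describing.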

Thus we see greater adoption of async compute on consoles than on PCs, because while there are moments the GPU is not using its CUs, the CPU can sneak jobs in there while it's processing the frame ahead.

That's the first big one; the second comes down to the API. You can just do things on consoles that you can't do on PC. A good example is work graphs, which we're only now introducing on PC in DX12; that type of design pattern has existed on consoles for a very long time, as far back as the XBO. GPU-driven rendering is further ahead on console than on PC for this reason.
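For readers unfamiliar with the pattern, the core of the work-graph idea is that "nodes" emit records that feed other nodes directly, with no CPU round trip between stages. A minimal sketch (node names and payloads invented; a real D3D12 work graph runs entirely on the GPU, this just mimics the dataflow):

```python
# Toy executor mimicking the work-graph dataflow: one node's output
# records become another node's input, with no trip back to the "CPU".

from collections import deque

def run_work_graph(nodes, initial):
    """nodes: {name: fn(payload) -> list of (next_node, payload)}.
    A next_node of None marks a terminal output (e.g. a draw)."""
    queue = deque(initial)
    results = []
    while queue:
        node, payload = queue.popleft()
        for nxt, out in nodes[node](payload):
            if nxt is None:
                results.append(out)
            else:
                queue.append((nxt, out))
    return results

nodes = {
    # "cull" decides per-object visibility and spawns setup work
    "cull": lambda obj: [("setup", obj)] if obj["visible"] else [],
    # "setup" expands a visible object into per-mesh draw records
    "setup": lambda obj: [(None, f"draw:{m}") for m in obj["meshes"]],
}
scene = [("cull", {"visible": True, "meshes": ["a", "b"]}),
         ("cull", {"visible": False, "meshes": ["c"]})]
print(run_work_graph(nodes, scene))  # ['draw:a', 'draw:b']
```

The point is the amplification and routing of work happens inside the graph itself, which is what GPU-driven rendering wants.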

Thirdly, data is moved directly into the shared memory pool on consoles, so developers can lean on streaming technologies, relying on the NVMe drive to bring data in just in time.
PCs don't quite have a way to drop data from NVMe drives directly into VRAM, so they need a different solution that works across two pools of memory.

While DirectStorage certainly speeds things up, and PC drives can be faster, it's not quite the same in terms of how things work; porting can thus be more complex for solutions that rely heavily on streaming directly from the NVMe drive.
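A back-of-envelope sketch of the two-hop point (all bandwidth figures are invented placeholders, not real measurements): even a faster PC drive can lose the end-to-end race when the chunk has to traverse system RAM and then PCIe, rather than landing once in a shared pool.

```python
# Illustrative-only model: PC path is NVMe -> system RAM -> VRAM (two
# hops, separate pools); console path lands once in the shared pool.
# Bandwidth numbers below are hypothetical.

def pc_stream_ms(mb, nvme_gbps=7.0, pcie_gbps=16.0):
    b = mb * 1e6
    return (b / (nvme_gbps * 1e9) + b / (pcie_gbps * 1e9)) * 1e3

def console_stream_ms(mb, nvme_gbps=5.5):
    b = mb * 1e6
    return (b / (nvme_gbps * 1e9)) * 1e3

# A 128 MB chunk: the PC drive is faster per hop, but pays twice.
print(f"PC     : {pc_stream_ms(128):.2f} ms")
print(f"console: {console_stream_ms(128):.2f} ms")
```

Real DirectStorage overlaps and pipelines these transfers, so this is a worst-case framing; the shape of the problem is what matters.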
 
I would imagine the unified memory usage is as much (or more) an ease-of-development thing as a performance thing. Having unified memory is absolutely easier from a developer standpoint, so devs designing a game as a console exclusive would make full use of it.

Directly porting that over to PC is then going to have performance issues if communication over PCIe is excessive. It may be that the game could be radically redesigned to avoid most of that PCIe traffic, at the cost of additional developer headaches, while achieving equal or better performance on PC hardware equivalent to the consoles. But the development and porting effort involved would likely be far too high for Sony's taste. After all, they have a vested interest in their ports overperforming on their own hardware versus PC, so why spend excessive amounts bringing the PC-side performance up?
I think it's a cost-savings thing more than anything, but as I understand it, there can be benefits in terms of performance. It isn't always a performance drag, as @Cappuccino asked.

I think Rift Apart was one of the clearest-cut of the sparing few true 'next-gen' titles we had earlier in the generation. Even compared to the still great-looking PS4 Ratchet and Clank, it's a really big step up in almost every way, all while performing really well, too.

And I find it continually hilarious how much people just can't get to grips with the idea that dedicated first-party titles, built for their own fixed-spec hardware with a super-low-level API, might well be a fair bit more optimized than some generalized DX12 PC release that comes later down the line.
It is still one of the best-looking titles, but the rendering tech seems like a small improvement, with the art doing the heavy lifting. If not for the streaming choking a PS4 and its HDD, I think a PS4 port would hold up very well. RT excluded, of course.
 
There are always going to be advantages to being able to code to a fixed target with an API designed around that. In addition to some of the ones others have mentioned, a few others that come to mind:

1) The console APIs do have significant advantages specifically in terms of lower CPU overhead. They don't need an additional translation layer in between, and there's no need to abstract things like state that "might" need to be dynamic versus baked on a given piece of hardware. Far more can be baked offline in advance than on PC.

2) Similarly, more internal hardware formats and features can be exposed on consoles. One that comes to mind is direct access to compression metadata, and things like the HZB (hierarchical Z-buffer), which can be used to accelerate certain passes. Despite many years of discussion, few of these sorts of things have made their way to PC in a portable way.

3) The shader compilers on consoles are different. They are owned by the platform vendors while on PC they are written by the IHVs. I would pretty confidently guess that this is a bigger reason for differences on the GPU side than anything to do with PCIe traffic and UMA.

4) Again, since shaders can be compiled offline and inspected, they can be tweaked to produce good output, which is very important for things like GPU occupancy. On PC they may be good on the day you wrote and tested the shader, but small changes in the drivers over time frequently cause regressions. On console, once you've compiled and shipped a shader, it's not going to change.

5) Async compute effectively requires manual scheduling (aside: it's a pretty bad programming model); on console this is tedious but at least reasonably possible to do. On PC it's basically impossible to use well across a range of hardware, even before you layer the driver uncertainty on top of it.
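Since the HZB comes up in point 2, here's a toy illustration of the idea (pure-Python sketch, not a real GPU pass): build coarser mips storing the max depth of each 2x2 footprint, then reject an object with one coarse comparison instead of testing every pixel it covers.

```python
# Toy hierarchical Z-buffer: each coarser mip stores the MAX depth of
# its 2x2 footprint, so one coarse texel conservatively bounds
# everything underneath it. Depth values here are invented.

def build_hzb(depth):
    """depth: square 2^n x 2^n list of lists. Returns mips, finest first."""
    mips = [depth]
    while len(mips[-1]) > 1:
        prev = mips[-1]
        n = len(prev) // 2
        mips.append([[max(prev[2*y][2*x], prev[2*y][2*x+1],
                          prev[2*y+1][2*x], prev[2*y+1][2*x+1])
                      for x in range(n)] for y in range(n)])
    return mips

def occluded(mips, nearest_z):
    """Occluded if even the farthest stored depth is nearer than the
    object's nearest point (coarsest 1x1 mip, whole-screen test)."""
    return nearest_z > mips[-1][0][0]

depth = [[0.2, 0.3, 0.9, 0.9],
         [0.1, 0.2, 0.9, 0.9],
         [0.3, 0.3, 0.8, 0.7],
         [0.2, 0.1, 0.6, 0.5]]
mips = build_hzb(depth)
print(occluded(mips, 0.95))  # behind everything -> True
print(occluded(mips, 0.5))   # nearer than some pixels -> False
```

A real culling pass would pick a mip level matched to the object's screen-space extent rather than always using the 1x1 tip, but the conservative max-reduction is the core of it.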

Anyway, as folks here know, I'm very much a PC gamer, but it shouldn't surprise anyone that there are advantages to fixed architectures, and not just in terms of "spending optimization time". Abstraction always has a price; even if we had identical hardware on PC, you should assume it would run somewhat worse head-to-head, while of course buying you all the other advantages of PCs.
 
3) The shader compilers on consoles are different. They are owned by the platform vendors while on PC they are written by the IHVs. I would pretty confidently guess that this is a bigger reason for differences on the GPU side than anything to do with PCIe traffic and UMA.

This is an interesting one but I'm curious why a shader compiler owned by Sony or Microsoft is likely to produce better results than one owned by AMD or Nvidia. Wouldn't the latter know the hardware better and thus be able to extract more performance from it (all other things being equal)?

4) Again, since shaders can be compiled offline and inspected, they can be tweaked to produce good output, which is very important for things like GPU occupancy. On PC they may be good on the day you wrote and tested the shader, but small changes in the drivers over time frequently cause regressions. On console, once you've compiled and shipped a shader, it's not going to change.

Which, I guess, goes a long way towards explaining the performance regression of PC hardware relative to consoles over time.
 
I think it's a cost-savings thing more than anything, but as I understand it, there can be benefits in terms of performance. It isn't always a performance drag, as @Cappuccino asked.


It is still one of the best-looking titles, but the rendering tech seems like a small improvement, with the art doing the heavy lifting. If not for the streaming choking a PS4 and its HDD, I think a PS4 port would hold up very well. RT excluded, of course.
Minus the one-million-polygon tail, too.
 