Digital Foundry Article Technical Discussion [2024]

The realities of development didn't allow for either to have tech pushed much beyond PS4 games. Well-made games, but I expect big improvements over both by the generation's end.
I think Rift Apart was one of the most clear-cut of the sparing few true 'next gen' titles we had earlier in the generation. Even compared to the still great-looking PS4 Ratchet and Clank, it's a really big step up in almost every way, all while performing really well, too.

And I find it continually hilarious how much people just can't get to grips with the idea that dedicated first-party titles built towards their own fixed-spec hardware with a super low-level API might well be a fair bit more optimized than some generalized DX12 PC release that comes later down the line.
 
Right, but what difference in architecture? This isn't Cell; it's two desktop-level architectures (Zen and RDNA). Those optimizations should carry over to PC, at least somewhat (and certainly relative to Nvidia!)
hUMA is the largest difference, for starters.
A PC has to load data into system memory and then copy it over to VRAM across the PCIe bus.

This means that anything the CPU touches and anything the GPU touches are largely separate. If there is a large amount of parallel processing that needs to be done on vertex data for animations or whatnot, the CPU is fully responsible for it. If we asked the GPU to do it instead, we would have to ensure the GPU had the data, do all the processing, and then copy the results back to the CPU, and that round trip is much too long.
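
To make that round trip concrete, here's a minimal CUDA sketch of the discrete-GPU path on PC. CUDA is used purely as an analogy for the copy traffic (console APIs work differently), and names like skin_vertices are illustrative, not any real engine code.

```cuda
// Discrete-GPU round trip on PC: upload, compute, download.
#include <cuda_runtime.h>
#include <vector>

__global__ void skin_vertices(float* positions, int count) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count) positions[i] *= 1.01f;   // stand-in for real skinning math
}

int main() {
    const int count = 1 << 20;
    std::vector<float> host_positions(count, 1.0f);  // CPU-side animation data

    float* device_positions = nullptr;
    cudaMalloc(&device_positions, count * sizeof(float));

    // 1) Upload over PCIe, 2) compute, 3) download over PCIe again.
    cudaMemcpy(device_positions, host_positions.data(),
               count * sizeof(float), cudaMemcpyHostToDevice);
    skin_vertices<<<(count + 255) / 256, 256>>>(device_positions, count);
    cudaMemcpy(host_positions.data(), device_positions,
               count * sizeof(float), cudaMemcpyDeviceToHost);

    cudaFree(device_positions);
    return 0;
}
```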

In hUMA consoles, the CPU and GPU share the same address space. So while the CPU may be comparatively weak at large parallel tasks, the GPU is right there: it can be asked to process the data and write the results in place, and the CPU can pick up from there and continue.

A great number of cycles are saved by not having to move data around. In fact, one could argue this is one of the primary benefits of consoles: raw computational prowess can be orders of magnitude lower, but data retrieval can be much faster in some respects, because they aren't spending hundreds of cycles copying data about.
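
For contrast, here's a rough analogy to the console's shared address space using CUDA managed memory, where one allocation is visible to both CPU and GPU and the application issues no staging copies. On a discrete PC GPU the runtime still migrates pages behind the scenes, whereas on a hUMA console it really is one physical pool; the kernel is the same illustrative stand-in as above.

```cuda
// One allocation, one address, visible to both CPU and GPU.
#include <cuda_runtime.h>

__global__ void skin_vertices(float* positions, int count) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count) positions[i] *= 1.01f;   // stand-in for real skinning math
}

int main() {
    const int count = 1 << 20;
    float* positions = nullptr;
    cudaMallocManaged(&positions, count * sizeof(float));  // shared CPU/GPU pointer

    for (int i = 0; i < count; ++i) positions[i] = 1.0f;   // CPU writes directly

    skin_vertices<<<(count + 255) / 256, 256>>>(positions, count);
    cudaDeviceSynchronize();                               // GPU results now visible to CPU

    float first = positions[0];                            // CPU reads the result in place
    (void)first;
    cudaFree(positions);
    return 0;
}
```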

Thus we see greater adoption of async compute on consoles than on PC: in the moments where the GPU isn't using all of its CUs, compute jobs can be snuck in there while the CPU works ahead on the next frame.
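
As a loose analogy (CUDA streams rather than console graphics/compute queues, with made-up kernel names), here are two independent workloads submitted on separate streams so the hardware can overlap them when execution units would otherwise sit idle:

```cuda
// Two independent workloads on separate streams; the driver may run them
// concurrently if the first leaves execution units idle.
#include <cuda_runtime.h>

__global__ void graphics_like_work(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 0.5f + 1.0f;
}

__global__ void compute_like_work(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * data[i];
}

int main() {
    const int n = 1 << 22;
    float *a = nullptr, *b = nullptr;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));

    cudaStream_t heavy, filler;
    cudaStreamCreate(&heavy);
    cudaStreamCreate(&filler);

    graphics_like_work<<<(n + 255) / 256, 256, 0, heavy>>>(a, n);
    compute_like_work<<<(n + 255) / 256, 256, 0, filler>>>(b, n);

    cudaStreamSynchronize(heavy);
    cudaStreamSynchronize(filler);

    cudaStreamDestroy(heavy);
    cudaStreamDestroy(filler);
    cudaFree(a);
    cudaFree(b);
    return 0;
}
```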

That's the first big one; the second comes down to the API. You can simply do things on consoles that you can't do on PC. A good example is how we're only now introducing work graphs on PC in DX12, while that type of design pattern has existed on consoles for a very long time, as far back as the Xbox One. GPU-driven rendering is further ahead on console than on PC for this reason.
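
A very loose sketch of the idea behind GPU-driven work generation, approximated here with CUDA dynamic parallelism rather than D3D12 work graphs or any console API; the kernel names and counts are invented, and the point is only that the GPU decides how much follow-up work to launch without a CPU round trip.

```cuda
// GPU-generated work via dynamic parallelism.
// Compile with: nvcc -rdc=true -arch=sm_70 ...
#include <cuda_runtime.h>
#include <cstdio>

__global__ void process_chunk(int chunk_id) {
    // Stand-in for per-chunk GPU work (culling, binning, etc.).
}

__global__ void schedule_work(const int* visible_chunk_count) {
    // The GPU reads a GPU-resident counter and launches exactly that much
    // follow-up work: no CPU readback, no CPU dispatch.
    int chunks = *visible_chunk_count;
    for (int c = 0; c < chunks; ++c) {
        process_chunk<<<1, 64>>>(c);   // device-side launch
    }
}

int main() {
    int* count = nullptr;
    cudaMalloc(&count, sizeof(int));
    int host_count = 8;                // pretend the GPU wrote this during culling
    cudaMemcpy(count, &host_count, sizeof(int), cudaMemcpyHostToDevice);

    schedule_work<<<1, 1>>>(count);
    cudaDeviceSynchronize();

    cudaFree(count);
    printf("done\n");
    return 0;
}
```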

Thirdly, on consoles data is moved directly into the shared memory pool, so developers take advantage of streaming technologies and rely on the NVMe drive to bring data in just in time. PCs don't really have a method to drop data from NVMe drives directly into VRAM, so they need a different solution that works across two pools of memory.

While DirectStorage certainly speeds things up, and PC drives can be faster, it's not quite the same in terms of how things work, so porting can be more complex for solutions that rely heavily on streaming directly from the NVMe drive.
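
Here's a hypothetical sketch of the two-pool path a plain PC port takes: read from disk into a host staging buffer, then copy across PCIe into VRAM. The file path and chunk size are made up, and this deliberately ignores DirectStorage.

```cuda
// Two hops on PC: NVMe -> system RAM -> VRAM.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const size_t chunk_bytes = 4 * 1024 * 1024;           // 4 MiB streaming chunk

    // Pinned host memory is typical for fast PCIe uploads, but it is still an
    // extra hop that a console streaming straight into its shared pool skips.
    void* staging = nullptr;
    cudaMallocHost(&staging, chunk_bytes);

    void* vram_chunk = nullptr;
    cudaMalloc(&vram_chunk, chunk_bytes);

    FILE* f = fopen("assets/level0.bin", "rb");            // illustrative path
    if (f) {
        size_t read = fread(staging, 1, chunk_bytes, f);   // hop 1: NVMe -> system RAM
        fclose(f);
        cudaMemcpy(vram_chunk, staging, read,
                   cudaMemcpyHostToDevice);                // hop 2: system RAM -> VRAM
    }

    cudaFree(vram_chunk);
    cudaFreeHost(staging);
    return 0;
}
```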
 
I would imagine the unified memory usage is as much (or more) an ease-of-development thing as a performance thing. Having unified memory is absolutely easier from a developer standpoint, so devs designing a game as a console exclusive would make full use of it.

Directly porting that over to PC is then going to have performance issues if communication over PCIe is excessive. It may be that the game could be radically redesigned to avoid most of that PCIe traffic, at the cost of additional developer headaches, while achieving equal or better performance on PC hardware equivalent to the consoles. But the development/porting effort involved would likely be far too high for Sony's taste. After all, they have a vested interest in their own hardware outperforming PC in these ports, so why spend excessive amounts to bring the PC-side performance up?
I think it's a cost-savings thing more than anything, but as I understand it, there can be benefits in terms of performance too. It isn't always a performance drag, as @Cappuccino asked.

I think Rift Apart was one of the most clear-cut of the sparing few true 'next gen' titles we had earlier in the generation. Even compared to the still great-looking PS4 Ratchet and Clank, it's a really big step up in almost every way, all while performing really well, too.

And I find it continually hilarious how much people just can't get to grips with the idea that dedicated first-party titles built towards their own fixed-spec hardware with a super low-level API might well be a fair bit more optimized than some generalized DX12 PC release that comes later down the line.
It is still one of the best-looking titles, but the rendering tech seems like a small improvement, with the art doing the heavy lifting. If not for the streaming choking up a PS4 and its HDD, I think a PS4 port would hold up very well. RT excluded, of course.
 
There are always going to be advantages to being able to code to a fixed target with an API designed around that. In addition to some of the ones others have mentioned, here are a few that come to mind:

1) The console APIs do have significant advantages specifically in terms of lower CPU overhead. They don't need an additional translation layer in between, and there's no need to abstract things like state that "might" need to be dynamic vs. baked on a given piece of hardware. Far more can be baked offline in advance than on PC.

2) Similarly, more internal hardware formats and features can be exposed on consoles - one that comes to mind is direct access to compression metadata and structures like the HZB (hierarchical Z-buffer), which can be used to accelerate certain passes. Despite many years of discussion, few of these sorts of things have been able to make their way to PC in a portable way.

3) The shader compilers on consoles are different. They are owned by the platform vendors while on PC they are written by the IHVs. I would pretty confidently guess that this is a bigger reason for differences on the GPU side than anything to do with PCIe traffic and UMA.

4) Again, since shaders can be compiled offline and inspected, they can be tweaked to produce good output, which is very important for things like GPU occupancy (see the sketch after this list). On PC they may be good on the day you wrote and tested the shader, but small changes in the drivers over time frequently cause regressions. On console, once you've compiled and shipped a shader, it's not going to change.

5) Async compute effectively requires manual scheduling (aside: it's a pretty bad programming model); on console this is tedious but at least reasonably possible to do. On PC it's basically not possible to use it well across a range of hardware, even before you layer the driver uncertainty on top of it.
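
As a concrete analogue of point 4, here's the kind of occupancy-oriented tweak that stays valid on a fixed platform but can be invalidated by a PC driver update. This uses CUDA's __launch_bounds__ rather than any console shader toolchain, and the kernel is invented for illustration.

```cuda
// __launch_bounds__ caps the per-thread register budget for this kernel so
// the compiler targets a desired residency (256 threads/block, aim for 4
// resident blocks per SM). On a fixed console compiler a tweak like this
// stays valid forever; on PC a new driver compiler can change the outcome.
#include <cuda_runtime.h>

__global__ void __launch_bounds__(256, 4)
lighting_pass(const float* __restrict__ input, float* __restrict__ output, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // Stand-in for shading work; keeping live state small helps the
        // compiler respect the register cap without spilling.
        float v = input[i];
        output[i] = v * v * 0.75f + 0.25f;
    }
}

int main() {
    const int n = 1 << 20;
    float *in = nullptr, *out = nullptr;
    cudaMalloc(&in, n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));
    cudaMemset(in, 0, n * sizeof(float));
    lighting_pass<<<(n + 255) / 256, 256>>>(in, out, n);
    cudaDeviceSynchronize();
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```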

Anyway, as folks here know, I'm very much a PC gamer, but I don't think it should be surprising to anyone that there are advantages to fixed architectures, and not just in terms of "spending optimization time". Abstraction always has a price; even if we had exactly the same hardware on PC, you should assume it would run somewhat worse head-to-head, while of course buying you all the other advantages of PC.
 
3) The shader compilers on consoles are different. They are owned by the platform vendors while on PC they are written by the IHVs. I would pretty confidently guess that this is a bigger reason for differences on the GPU side than anything to do with PCIe traffic and UMA.

This is an interesting one, but I'm curious why a shader compiler owned by Sony or Microsoft is likely to produce better results than one owned by AMD or Nvidia. Wouldn't the latter know the hardware better and thus be able to extract more performance from it (all other things being equal)?

4) Again, since shaders can be compiled offline and inspected, they can be tweaked to produce good output, which is very important for things like GPU occupancy. On PC they may be good on the day you wrote and tested the shader, but small changes in the drivers over time frequently cause regressions. On console, once you've compiled and shipped a shader, it's not going to change.

Which I guess goes a long way towards explaining the performance regression of PC hardware vs. consoles over time.
 
I think it's a cost-savings thing more than anything, but as I understand it, there can be benefits in terms of performance too. It isn't always a performance drag, as @Cappuccino asked.


It is still one of the best-looking titles, but the rendering tech seems like a small improvement, with the art doing the heavy lifting. If not for the streaming choking up a PS4 and its HDD, I think a PS4 port would hold up very well. RT excluded, of course.
Minus the 1-million-polygon tail, also.
 
Anyway, as folks here know, I'm very much a PC gamer, but I don't think it should be surprising to anyone that there are advantages to fixed architectures, and not just in terms of "spending optimization time". Abstraction always has a price; even if we had exactly the same hardware on PC, you should assume it would run somewhat worse head-to-head, while of course buying you all the other advantages of PC.

I guess it's always been the case, though; what about 20 or 25 years ago, or even earlier? Presumably things have only improved over the years, one would hope.
 
This is an interesting one, but I'm curious why a shader compiler owned by Sony or Microsoft is likely to produce better results than one owned by AMD or Nvidia. Wouldn't the latter know the hardware better and thus be able to extract more performance from it (all other things being equal)?

I think it's more that when you have a fixed compiler (or you can control which compiler you're using), you can make tweaks against that specific compiler. It's probably not common to dig into the very low-level compiled results (I doubt many people write shaders directly in GPU assembly), but there are still many arrangements you can make to get better performance out of a specific compiler.

However, on a platform where you can't control which compiler you're using, such tweaks are no longer useful. A tweak might get better performance on a specific version of a compiler from a specific vendor, but on a compiler from another vendor, or even another version of the same compiler (or worse, different hardware from the same vendor), it will probably give worse performance. This makes such tweaks impractical, so basically the only thing you can do is optimize algorithms and data structures and leave everything else to the IHV's compiler.
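
A small illustration of the kind of arrangement-level tweak being described, sketched in CUDA with an invented blur kernel: an unroll hint you might validate against one specific compiler, which another compiler or a later driver's compiler may ignore or handle worse.

```cuda
// A compiler-specific arrangement tweak: forcing the tap loop to unroll.
// Validated against one compiler version, it may regress elsewhere.
#include <cuda_runtime.h>

#define TAPS 9  // illustrative filter width

__global__ void blur_row(const float* __restrict__ src, float* __restrict__ dst, int width) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    if (x < width - TAPS) {
        float sum = 0.0f;
        #pragma unroll  // the tweak: fully unroll the tap loop for this compiler
        for (int t = 0; t < TAPS; ++t) {
            sum += src[x + t];
        }
        dst[x] = sum / TAPS;
    }
}

int main() {
    const int width = 1 << 20;
    float *src = nullptr, *dst = nullptr;
    cudaMalloc(&src, width * sizeof(float));
    cudaMalloc(&dst, width * sizeof(float));
    cudaMemset(src, 0, width * sizeof(float));
    blur_row<<<(width + 255) / 256, 256>>>(src, dst, width);
    cudaDeviceSynchronize();
    cudaFree(src);
    cudaFree(dst);
    return 0;
}
```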

Obviously this problem is not GPU-specific. It's the same for CPU programs, but people have more options on a CPU (for example, some still write code using intrinsics, which is very close to assembly). A CPU's performance characteristics are also more predictable, because CPUs are much more dynamic and so the performance variations caused by tiny tweaks are smaller (imagine the difference between an in-order CPU and an out-of-order CPU).

In a way, today's "optimization" drivers are doing something like this in some cases (such as shader replacement), but a fixed platform still has quite an advantage here.
 
So Nvidia and Intel continue to be the only companies who know how to make a worthwhile AI upscaler. Sony, AMD, and Microsoft have a long way to go.
 
PSSR seems just fine if implemented correctly. It's definitely tiers above FSR, shoddy implementations like AW2 aside… which has happened with DLSS as well in certain games. They aren't perfect.
 
PSSR seems just fine if implemented correctly. It's definitely tiers above FSR, shoddy implementations like AW2 aside… which has happened with DLSS as well in certain games. They aren't perfect.
To be fair, it might just be that the resolution is too low for PSSR to really work properly. In most other titles, the internal resolution is much higher. It could simply be that PSSR doesn't hold up at 864p.
 
I haven't watched the video yet, but people need to be fair. PSSR, much like DLSS, will have good implementations and bad ones. PSSR is new, and we'll need a lot more data before coming to any conclusions about the tech overall.
 
To be fair, it might just be that the resolution is too low for PSSR to really work properly. In most other titles, the internal resolution is much higher. It could simply be that PSSR doesn't hold up at 864p.
Not debating that point… only what the guy above me said about it not being a good AI upscaler. None of these AI upscalers has a perfect track record across games. Hopefully Remedy figures it out with the Pro version.
 
The most striking thing is just how far the PS5 Pro is behind the 4070; it even loses to the 3070 with RT on, despite the PS5 Pro using lower-than-low RT settings.

Yikes.
 
It's interesting that nearly all PS5 ports on PC run noticeably worse on equivalent GPUs at nearly equivalent settings. I think it shows Sony's games were tailored specifically to the PS5's UMA architecture and fast hardware-accelerated decompression.
That should be obvious to anyone. Unfortunately it's not, and those PC ports are often labelled "bad" just because the PS5 runs them very well, when it's really that "close to the metal" is still a thing on PlayStation hardware thanks to more efficient APIs.

And BTW, I hope people won't use one port to judge the PS5 Pro hardware. Alan Wake 2 was already running terribly on PS5 hardware... It's just not a game tailored to consoles with limited bandwidth.
 