Digital Foundry Article Technical Discussion [2023]

Status
Not open for further replies.
I'd be surprised if a 3600x wasn't enough to match the PS5 in R&C. It seems very light on the CPU in my (admittedly limited) testing.

With RT maxed on every setting I'm basically never exceeding 50% CPU usage on my 5900X3D with the spread across cores being pretty even.

Oh, and that's with DirectStorage disabled.

I turned DirectStorage off yesterday and was around 60-65% load on my R5 7600, but I had stutters and hitching.

I'm not sure if it's related to the game itself or the fact I'm using it on RAID 0.
 
Was this ever actually confirmed? I saw mixed reports of this.

Yeah, the die shots show physically reduced FPUs on the PS5's CPU cores with a halved register file, and micro-benchmarking showed only half the number of FPU ports. For whatever reason (I'd guess something related to power draw or heat), Sony greatly cut back the PS5's FPUs.



Can't see it being a problem for the PS5 though as it's the market leader.
 
Windows constantly uses cores for background tasks and security; you would have to go through and stop every process to keep it from using CPU power beyond what you're describing. And that's before counting additional software like Outlook.
What is different about Windows vs. what we see in the console space is that those CPU resources aren't reserved in the same way. On consoles, that 1.5-core reservation is simply not available to games. On Windows, yes, there would be some contention for resources, but a properly threaded game would still benefit even while Windows was using a core for something, as long as that core isn't at 100%. This is part of the reason why, in many games, a well-tuned system with a 6-core CPU and a modest boost clock advantage keeps pace with PS5/Series.
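The "benefits if properly threaded" point can be sketched as: split the work into one chunk per logical core and let the scheduler interleave your workers with background tasks. A minimal, hypothetical Python illustration of the chunking pattern (CPython's GIL means this shows the structure, not a real speedup measurement):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(n):
    """Split [0, n) into one chunk per logical core and sum the chunks.

    Nothing reserves these cores for us on Windows: the scheduler
    interleaves our workers with background tasks, but a well-threaded
    job still finishes (slightly later) rather than being locked out
    of 1.5 cores outright, as a game is on console.
    """
    workers = os.cpu_count() or 4
    step = max(1, -(-n // workers))              # ceil(n / workers)
    chunks = [range(i, min(i + step, n)) for i in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum, chunks))

print(parallel_sum(1_000))                       # same answer as sum(range(1_000))
```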
 
I think part of the equation is that a lot of games still lean on one or two dominant threads; I often see my 12700K and 13700K PCs disproportionately loaded on a couple of cores. Whilst I understand the reasons for this, I'm disappointed that investing time and effort in multithreading solutions to gaming-centric problems hasn't made better progress, as it would benefit all platforms.

When you're in that situation, the consoles offer some tangible benefits. On Windows, as you say, you have no real control over which applications use which processors. A game can nominate particular physical cores for certain threads, but it cannot prevent Windows processes or other applications from using those same cores. The Windows kernel does have provision for 'receive side scaling', which allows applications to be assigned to certain processors, but that's functionality tied to the Windows administrator role, not something games can just do.
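To make the "nominate cores but can't reserve them" distinction concrete, here is a minimal sketch using Python's `os.sched_setaffinity` (a Linux-only API; the Windows analogues are `SetProcessAffinityMask` / `SetThreadAffinityMask`). The key caveat is in the docstring:

```python
import os

def pin_process(cores):
    """Restrict this process's threads to the given set of logical cores.

    This only constrains where *our* work is scheduled: the OS and every
    other process can still run on those same cores, which is exactly
    the contention described above.  Returns the affinity set actually
    granted, or None where the API is unavailable or the request fails.
    """
    if not hasattr(os, "sched_setaffinity"):   # Linux-only API
        return None                            # e.g. Windows / macOS
    try:
        os.sched_setaffinity(0, cores)         # 0 = calling process
    except OSError:                            # cores outside our limits
        return None
    return os.sched_getaffinity(0)             # what we actually got
```

On Windows, a game would call `SetThreadAffinityMask` per thread instead, with the same caveat: affinity steers the game's own threads, it does not reserve the core.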

On a console, you can really hone algorithms that rely on tight datasets, throw a couple of threads on one physical core, and predict, with absolute certainty, the L1/L2 cache usage, because you know how big the cache is and you know only your threads are running on that core. You can't really hone for this on PC at all because the variety of CPU hardware and cache size and configuration is massive, and even if you do optimise, Steam's interface may just randomly shove some data into the cache a time-critical algorithm is using and force you to fall back to data in L2 or L3.. or even.. shock horror.. main memory! :runaway:
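The "tight dataset" tuning described here is essentially loop tiling: traverse data in blocks sized so each block's working set fits the cache level you're targeting. A hypothetical Python sketch of the access pattern (Python itself won't show the cache win; the block-by-block traversal order is the point):

```python
def tiled_sum(grid, block=64):
    """Visit a square 2-D array in block x block tiles.

    On a console, `block` can be chosen so one tile's working set fits
    the known L1/L2 with certainty; on PC, cache sizes vary and other
    processes share the cache, so the same tuning is a best guess.
    """
    n = len(grid)
    total = 0
    for bi in range(0, n, block):              # tile row
        for bj in range(0, n, block):          # tile column
            for i in range(bi, min(bi + block, n)):
                for j in range(bj, min(bj + block, n)):
                    total += grid[i][j]
    return total

grid = [[i * 17 + j for j in range(128)] for i in range(128)]
assert tiled_sum(grid) == sum(map(sum, grid))  # same result, tiled order
```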

As for how much of a difference this makes: like the variety of PC CPUs themselves, it's probably highly variable.
 
Even if they have a new channel, I'm quite disappointed they don't have a video on the upcoming RTX 4090 Ti.

 
Plus: are GPU teraflop metrics now essentially useless?
It may become a larger trend in the future if we continue to move towards accelerators. But in this context, it's an appropriate time to call it out. They've had three years of data points now; what could have been a bunch of anomalies is now just a baseline. Every explanation it could have been has had sufficient time to be ironed out as the tools matured.

The deficiencies within the XSX architecture (whatever they are: the split memory pool, the ROPs, the fixed clock speed, the imbalanced CU-to-SE ratio, or some combination) have put it in a position where it is unable to separate itself from the PS5. It's a bit of a shame; they didn't need much more to fix it. Remove the split pool and have a full complement of ROPs. Allow for variable clocks. I'm not sure what that would have done to the price point; the split mobo probably didn't help, imo.
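For context on the headline numbers being compared, both consoles' teraflop figures come from the same peak-throughput formula (shader lanes x 2 FMA ops per cycle x clock). A quick sketch using the publicly stated specs (36 CUs at 2.23 GHz for PS5, 52 CUs at 1.825 GHz for XSX):

```python
def gpu_tflops(cus, clock_ghz, lanes_per_cu=64):
    """Peak FP32 teraflops: shader lanes * 2 ops/cycle (FMA) * clock."""
    return cus * lanes_per_cu * 2 * clock_ghz / 1000.0

ps5 = gpu_tflops(36, 2.23)     # ~10.28 TF (variable clock, peak)
xsx = gpu_tflops(52, 1.825)    # ~12.15 TF (fixed clock)
print(round(ps5, 2), round(xsx, 2))
```

The roughly 18% paper advantage is exactly the gap that, per the discussion above, rarely shows up in shipped games: peak ALU throughput says nothing about ROP rate, memory bandwidth, or sustained clocks.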

I’m sure in most circumstances it helps to be the lead console, but they can’t rely on that considering their market position.
 