Digital Foundry Article Technical Discussion [2021]

It's the suggestion that you can look at something on screen and attribute it to software or hardware. It could be bad code, or it could be that the hardware is simply not fast enough to do what the software is demanding. There's no way you can tell without deeper analysis with code and performance tools. Something not executing quickly enough to hit 16ms, and being predictable/reproducible, can equally be bad code or hardware choking. ¯\_(ツ)_/¯
To be clear, I said typically it's not a hardware issue; looking at historical precedent, we see tons of high-powered rigs suffer from similar hitching, none of which should be attributable to the hardware.
The video shows a brief halting of frame rate, going from 60 and diving to 30; visually you can see the game pause briefly. By contrast, something not executing quickly enough to hit 16ms would instead land at 17ms or 18ms, which is roughly 59 and 56 fps.
Typically, GPUs that are running behind schedule fall behind gradually; sharp dips are often indicative of a complete stall. You shouldn't expect the GPU to run 16+ms over budget just to put up text.
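To spell out the frame-time arithmetic behind that (a quick sketch of fps = 1000 / frame time; the 17/18/33ms values are just illustrative):

```python
# Frame-time arithmetic: at a 60 fps target the budget is ~16.7 ms per frame.
# Slightly missing the budget lands you in the high 50s; a dive to 30 fps
# means the frame took roughly double the budget, which looks more like a
# stall than a gradual overrun.

def fps(frame_time_ms: float) -> float:
    """Convert a frame time in milliseconds to frames per second."""
    return 1000.0 / frame_time_ms

for ms in (16.7, 17.0, 18.0, 33.3):
    print(f"{ms:5.1f} ms/frame -> {fps(ms):4.1f} fps")

# 16.7 ms -> ~60 fps
# 17.0 ms -> ~59 fps
# 18.0 ms -> ~56 fps
# 33.3 ms -> ~30 fps
```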

We've had these types of hitching issues forever, and they haven't favoured higher clocks over wider GPUs. I believe the whole fast/narrow versus wide/slow framing is a complete misnomer. There's virtually no such thing: having fewer compute units has no direct correlation with how fast you can run your chip. You can have the widest chip also run insanely fast if you're willing to pay the premium for that kind of yield and cooling. This is like saying the 5700 is fast/narrow versus the 5700 XT being wide/slow, when the 5700 XT is both faster and wider.
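For reference, the paper math on that example (a rough sketch using the official boost clocks and the standard CUs x 64 shaders x 2 FLOPs-per-clock formula):

```python
# Paper-spec comparison of the RX 5700 vs RX 5700 XT using the usual
# RDNA throughput formula: CUs * 64 shaders * 2 FLOPs/clock * clock.
# Clocks here are the official boost clocks; sustained clocks vary in practice.

def tflops(cus: int, clock_ghz: float) -> float:
    """Theoretical single-precision throughput in teraflops."""
    return cus * 64 * 2 * clock_ghz / 1000.0

print(f"RX 5700   : {tflops(36, 1.725):.2f} TF (36 CUs @ 1725 MHz boost)")
print(f"RX 5700 XT: {tflops(40, 1.905):.2f} TF (40 CUs @ 1905 MHz boost)")
# -> ~7.95 TF vs ~9.75 TF: the XT is both wider (more CUs) and higher clocked.
```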

I believe this narrative needs tuning. There is, imo, extreme overemphasis on what clock speed is capable of. The argument that the XSX is unable to make full use of its CUs, due to the imbalance of compute to fixed-function units, is much more probable than clock speed alone being the determining factor in why PS5 can outperform XSX in the titles reviewed so far. In that reading, the problem isn't that PS5 is fast and narrow and XSX is wide and slow; it's that XSX isn't wide enough, because it widened only one part of its hardware rather than everything uniformly.

When it comes to pure compute workloads, in which all work is heavily parallelized, clock speed is very unlikely to achieve anything significantly beyond its contribution to computational throughput. Cache speed etc. are unlikely to play a relevant factor between the two here.

The 24% gap in compute, fed by 24% more memory bandwidth, will hold in these scenarios.

edit: thanks Alex! That settles this discussion; nothing to add, really. Interesting to see the bottlenecks all over, though; curious to see where the bottleneck sits in the hardware in those situations.
 
DF Article @ https://www.eurogamer.net/articles/...trol-ultimate-edition-photo-mode-rt-benchmark

Control Ultimate Edition's photo mode doubles up as a ray tracing benchmark
PS5 and Series X put through their paces.

Just when we thought that our coverage of Remedy's Control for next generation consoles was complete, we received a curve ball. Twitter user Another LED pointed out that the game's photo mode also serves to unlock the frame-rate, eliminating the 30 frames per second cap of the graphics mode and opening the door to direct comparisons of ray tracing performance between Xbox Series X and PlayStation 5. The results are intriguing, though perhaps somewhat academic.

To put it simply, dipping into Control's photo mode freezes the current game scene and allows you to navigate around with a free camera, letting you pick your best shot at your leisure. No changes are made to the game's rendering settings in the transition from gameplay and those settings are also the same between PlayStation 5 and Xbox Series X. So, basically, unlocking the frame-rate and ensuring that neither of them hits 60 frames per second (which effectively caps performance) opens the door to a benchmark of sorts - a like-for-like, no holds barred look at how Sony and Microsoft's new consoles power through some exceptionally demanding workloads, rendered by one of the most forward-looking and technically impressive engines on the market.

So, what do the results show? On the face of it, the engine is well balanced to deliver a consistent 30 frames per second in the graphics mode on both systems. We can see this by looking at our now infamous PC benchmark sequence: the Corridor of Doom. We're not quite sure why this one is so demanding on system resources, but it's definitely problematic on PC and its challenging characteristics transfer over to the consoles. Series X renders it at 33 frames per second, PlayStation 5 at 32. This fits the pattern we've seen from the majority of cross-platform titles to date - that the two machines are very closely matched. More specifically, in the case of Control, if there's still overhead beyond 30fps in this most challenging of areas, we should comfortably coast through the vast majority of the game's content locked at the target frame-rate - likely the effect that Remedy intended.

...
 
So a 14% gap here on average; nice to see that little overclocked GPU doing good work.
But could photo mode trigger a downclock of the GPU here?
 
So a 14% gap here on average; nice to see that little overclocked GPU doing good work.
But could photo mode trigger a downclock of the GPU here?
16% over 21 test samples!

In this case, I am not sure why photomode would underclock the GPU. On PC, when you turn on photomode, the game's CPU-side framerate skyrockets as the simulation threads stop entirely. Here I actually think we are looking at a thermal situation on PS5, by means of inference, where the CPU is doing very little and the GPU has all the power it wants for full clocks.
 
PS5 keeping that gap closer than it is on paper.

It's a shame more modes aren't offered; as suggested, a 1080p60 mode with RT as an option would have been nice...
 
So a 14% gap here on average; nice to see that little overclocked GPU doing good work.
But could photo mode trigger a downclock of the GPU here?
I don't think so, but photomode is probably a good representation of the pure GPU difference, while the real game is additionally more CPU/API limited.
 
PS5 keeping that gap closer than it is on paper.

It's a shame more modes aren't offered; as suggested, a 1080p60 mode with RT as an option would have been nice...
For a 1080p TV/monitor, sure, but to be honest, on my 55" LG CX, Control at 1440p is not especially sharp.
 
PS5 keeping that gap closer than it is on paper.

It's a shame more modes aren't offered; as suggested, a 1080p60 mode with RT as an option would have been nice...
It turns out that, in general, it doesn't work that way, and I was naïve earlier in 2019/2020 to claim that the paper advantage would hold.
I've tested my GPU locked to 24% less clockspeed (1632 MHz), which in turn also tossed away 24% of the compute power, fill rate, geometry throughput, etc. Even then, the fully boosted configuration (a 2000 MHz boost held nearly 99.9% of the time) only managed 13.5% higher frame rates. Comparing across families of GPUs, we see the same trend. It's simply not linear, because games touch all aspects of the pipeline. If you manage to make a game that is 100% compute-shader based, the paper difference may hold, as it should become a bottleneck of largely just compute and bandwidth.

my results here
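One simple way to read that result (a rough Amdahl-style sketch, not a claim about what any particular game is doing): assume only a fraction of each frame's time scales inversely with GPU clock, while the rest (CPU, sync, fixed-function or bandwidth-bound work) stays roughly constant. Plugging in the clocks above:

```python
# Rough Amdahl-style model of clock scaling: only `scaled_fraction` of the
# frame time shrinks with a higher GPU clock; the remainder is fixed cost.
# The 1632 MHz / 2000 MHz figures are the ones from the test described above.

def fps_gain(clock_ratio: float, scaled_fraction: float) -> float:
    """Expected frame-rate gain when only part of the frame time scales with clock."""
    return 1.0 / (scaled_fraction / clock_ratio + (1.0 - scaled_fraction)) - 1.0

clock_ratio = 2000 / 1632  # ~22.5% higher clock
for f in (1.0, 0.8, 0.65, 0.5):
    print(f"{f:.0%} of frame time clock-bound -> {fps_gain(clock_ratio, f):.1%} faster")

# 100% clock-bound -> ~22.5% faster (the 'paper' expectation)
# ~65% clock-bound -> ~13.6% faster (close to the measured 13.5% above)
```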
 
It turns out that, in general, it doesn't work that way, and I was naïve earlier in 2019/2020 to claim that the paper advantage would hold.
I've tested my GPU locked to 24% less clockspeed (1632 MHz), which in turn also tossed away 24% of the compute power, fill rate, geometry throughput, etc. Even then, the fully boosted configuration (a 2000 MHz boost held nearly 99.9% of the time) only managed 13.5% higher frame rates. Comparing across families of GPUs, we see the same trend. It's simply not linear, because games touch all aspects of the pipeline. If you manage to make a game that is 100% compute-shader based, the paper difference may hold, as it should become a bottleneck of largely just compute and bandwidth.

my results here
Part of the frame time is purely CPU computation, so that's one of the reasons. edit: I don't know what type of benchmarking you used, so if it only measured GPU calculation time then my remark doesn't apply.
 
It turns out that, in general, it doesn't work that way, and I was naïve earlier in 2019/2020 to claim that the paper advantage would hold.
I've tested my GPU locked to 24% less clockspeed (1632 MHz), which in turn also tossed away 24% of the compute power, fill rate, geometry throughput, etc. Even then, the fully boosted configuration (a 2000 MHz boost held nearly 99.9% of the time) only managed 13.5% higher frame rates. Comparing across families of GPUs, we see the same trend. It's simply not linear, because games touch all aspects of the pipeline. If you manage to make a game that is 100% compute-shader based, the paper difference may hold, as it should become a bottleneck of largely just compute and bandwidth.
Well yes, this was how I expected it to play out... by the time there's any type of consistent XSX advantage, it'll be too late and the die will be cast.
 
To be clear, I said typically it's not a hardware issue; looking at historical precedent, we see tons of high-powered rigs suffer from similar hitching, none of which should be attributable to the hardware.
What's the source of data for the cause of the problem on "high-powered rigs", and why are these PCs, running full desktop operating systems with fat API layers and drivers, in any way a good baseline for performance from the cheapest AMD silicon Microsoft and Sony can squeeze into a plastic case for $500? :|
 
"Here I actually think we are looking at a thermal situation on PS5, by means of inference, where the CPU is doing very little and the GPU has all the power it wants for full clocks."

Do I understand this correctly: right now we see an approximately 16% difference between XSX and PS5 in a situation where all the power goes to the GPU on the PS5 side, but if we added a CPU-intensive task on top of that, would we still be looking at 16%? Or would the CPU draw power away from the GPU and make the gap larger? Or am I completely confused and it doesn't work that way?
 
"...but if we added on top of that CPU intensive task would we be still looking at 16%? Or CPU will draw away power from GPU and gap could be larger? Or i am completely confused and it dosen't work that way?
We have benchmarks of this situation in the previous DF video, and PS5 is winning against XSX (read: Control gameplay) ;) though some suggest it will end up a draw and that the stutter on XSX is bug-related.
 
Interesting benchmark. I can't recall which sports game, but the PS5 photomode (maybe it was camera-angle related?) didn't exhibit stuttering or frame drops like the XBSX did. Definitely workload- and scene-dependent.
 
Interesting benchmark. I can't recall which sports game, but the PS5 photomode (maybe it was camera-angle related?) didn't exhibit stuttering or frame drops like the XBSX did. Definitely workload- and scene-dependent.
Not sure if you're referring to NBA 2K21

If so, it's not photo mode, however; it's a particular camera angle during play, combined with the backboard transparency, that causes the frame dips.
 
16% over 21 test samples!

In this case, I am not sure why photomode would underclock the GPU. On PC, when you turn on photomode, the game's CPU-side framerate skyrockets as the simulation threads stop entirely. Here I actually think we are looking at a thermal situation on PS5, by means of inference, where the CPU is doing very little and the GPU has all the power it wants for full clocks.
I think you are wrong about this: first in thinking there will be noticeable downclocking of the GPU (enough to see a noteworthy difference in framerate), and second that the XSX could perform better in more CPU-limited scenes (if that's what you are implying here).

I think we already have scenes that are demanding on both CPU and GPU: gameplay scenes. Don't you agree that those would be more demanding on the CPU? We already have plenty of data about those scenes, and when the settings are the same on both machines they usually show a performance advantage for PS5: AC Valhalla, Destiny 2, NBA 2K21, or even the latest COD.

Also, Cerny stated that, except in the case of specific CPU instructions being used, busier scenes (with more polygons) should actually be less demanding on the GPU. I have actually thought for a long time that XSX should have the advantage when GPU/bandwidth limited (like in photomode, cutscenes, or bandwidth-heavy scenarios such as RT or native 4K), and that because of its design PS5 should have the advantage during gameplay, when the load is split more evenly between CPU and GPU and is less demanding on bandwidth (for instance at <=1440p). And this is roughly what the data has shown us so far.
 
What's the source of data for the cause of the problem on "high-powered rigs", and why are these PCs, running full desktop operating systems with fat API layers and drivers, in any way a good baseline for performance from the cheapest AMD silicon Microsoft and Sony can squeeze into a plastic case for $500? :|
I'm not sure what we're discussing here; I must have lost your point. I was just saying that these types of hard-lock freezes are typically not attributable to the GPU being unable to keep up with workload demands, and that hard freezing is often a sign of something else. We've never been told why the hitching occurs; PC players often just wait for a patch to clear the issues up, and the issues don't affect everyone equally.
 
We have benchmarks of this situation in the previous DF video, and PS5 is winning against XSX (read: Control gameplay) ;) though some suggest it will end up a draw and that the stutter on XSX is bug-related.
The photomode benchmark reminds me of that superhero from Mystery Men who can turn invisible, but only when nobody is looking.
 