Digital Foundry Article Technical Discussion [2023]

It completely depends on what type of calculations it's doing.
I am baffled how the game could run at a solid 30fps on Series S but not at a solid 60fps on a modern (12xxx/13xxx) i5, i7 or i9: faster single-core clocks, larger caches and generally much lower latency DDR4/DDR5 memory. That's assuming Starfield is CPU limited in the first place.

Except where it was the fundamental engine or scripts with the issues, i.e. Fallout 3, Oblivion, New Vegas and launch Skyrim - but Fallout 4, Fallout 76 and the Skyrim Special and Anniversary Editions have fared better on your average gaming PC spec than on console.
 
For testing purposes, I actually locked Star Citizen at 30fps, just to see how space exploration felt at such a framerate. Needless to say, I just couldn't do it. That being said, maybe some 30fps gaming a day or so before the game's launch would help Series owners with the transition.

What's the input latency in SC at that framerate?
 
What's the input latency in SC at that framerate?

Without any official testing method, I can only say, it feels like shit, IMHO. It's a smooth 30fps experience as far as locked framerate is concerned, but dog fights and first-person battles have that delayed feel/response when coming from 60-120fps gaming. I simply can't do it.

Edit: Also, I noticed a lot of high frequency details look less sharp or refined at 30fps. Could be my old eyes though, additional ghosting. :ROFLMAO::cry:
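
For a rough sense of why it feels that way: assuming a typical 2-3 frame sample/simulate/render/present pipeline plus some display lag (both figures are assumptions, not measurements of Star Citizen), button-to-photon latency scales directly with frame time:

```c
/* Back-of-envelope input-to-photon latency model.
 * Assumes latency ~= pipeline_depth * frame_time + display_lag; the 3-frame
 * pipeline and 10 ms display figure are assumptions, not measurements. */
#include <stdio.h>

int main(void) {
    const double pipeline_frames = 3.0;   /* input sample -> sim -> render -> present (assumed) */
    const double display_ms      = 10.0;  /* assumed panel processing + scanout */
    const int fps[] = { 30, 60, 120 };

    for (int i = 0; i < 3; i++) {
        double frame_ms = 1000.0 / fps[i];
        printf("%3d fps: ~%.0f ms button-to-photon\n",
               fps[i], pipeline_frames * frame_ms + display_ms);
    }
    return 0;
}
/* ~110 ms at 30 fps vs ~60 ms at 60 fps vs ~35 ms at 120 fps with these assumptions */
```

So even with the same pipeline depth, capping at 30fps roughly doubles the lag you feel compared to 60fps, which matches that "delayed" feeling in dog fights.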
 
It might not even be because of that; there's not a single CPU on PC that offers a 2x increase in single-threaded performance over what the Series X has (a highly tuned 13900K will get close, but that's it).

So if the game can drop to 30fps due to a CPU bottleneck on the Series X, you won't lock it to 60fps on any PC.

It completely depends on what type of calculations it's doing.

Yeah, this really depends. Zen 3/4 3D chips and Alder Lake/Raptor Lake CPUs can, for instance, achieve >2x the performance in Fallout 4 compared to the Zen 2 CPU/APUs in the challenging scenes.

If past BGS games are anything to go by then performance is going to be highly scene dependent. For example, FO4's official recommended CPU is a 4790 on the Intel side (the AMD recommendation is even worse). Was it a 60 fps CPU? Well, kind of: in the earlier and especially outdoor areas of the game, yes, pretty easily. In the challenging sections, especially moving into the denser areas of Boston and places like Diamond City, or once you build up large settlements? It was more of a 40-something fps CPU.

Also, which scenes end up being more demanding might not fit with expectations. I remember with Skyrim, for example, that the intro sequence outside, with more actors and scripted events and the dragon blasting everything, was fine on my setup at the time; what actually tanked performance (by almost half) was when you first went indoors, in some sections of the cave with the bear.
 
Regarding Starfield's 30fps cap on console and Bethesda's previous titles...

I've mentioned it before, but dug up some older Fallout 4 performance charts for discussion - even the older title is incredibly CPU and memory bandwidth/latency bound, to the point where if you have a choice between an i3-4360 with high end enthusiast-tier RAM and an i7-4770k with basic RAM at JEDEC standard timings, you want the i3, even when looking at minimum framerates!


And the thing is, the PS5/Xbox Series X have a lot of memory bandwidth on paper, but they're competing with the GPU and everything else in the system for that same memory bandwidth, whereas a desktop PC has a completely dedicated pool just for the CPU.

In addition, they are unique in using GDDR6 for the unified memory, which has a significant latency penalty versus the DDR3/4/5 on the PC side.

And then the last thing that the consoles lack is the one thing that would reduce the memory bandwidth/latency pressure on the CPU side - a large L3 cache.
4MiB L3 per CCX is not exactly generous compared to 16MiB per CCX on the desktop Zen2 SKUs.
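
To put rough numbers on how the small L3 and the GDDR6 latency compound, here's a toy average-memory-access-time model; the hit times, miss rates and clock conversion are illustrative assumptions, not measured console figures:

```c
/* Toy AMAT model: hit_time + miss_rate * miss_penalty, in CPU cycles.
 * All figures below are illustrative assumptions, not measured console numbers. */
#include <stdio.h>

static double amat_cycles(double l3_hit, double l3_miss_rate, double dram_cycles) {
    return l3_hit + l3_miss_rate * dram_cycles;
}

int main(void) {
    /* at ~3.6 GHz: 145 ns GDDR6 ~= 520 cycles, 75 ns DDR4 ~= 270 cycles */
    const double l3_hit = 40.0;

    printf("big L3 (low miss rate) + DDR4   : %.0f cycles/access\n",
           amat_cycles(l3_hit, 0.05, 270.0));
    printf("small L3 (high miss rate) + GDDR6: %.0f cycles/access\n",
           amat_cycles(l3_hit, 0.20, 520.0));
    return 0;
}
/* roughly 54 vs 144 cycles per access that reaches L3 with these made-up numbers --
 * the kind of gap that lets a well-fed desktop CPU pull far ahead in pointer-heavy code */
```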

A Bethesda-style open world game is basically the absolute worst case scenario for performance on this generation of consoles, IMO, which is why 30fps doesn't shock me at all.
 
And the thing is, the PS5/Xbox Series X have a lot of memory bandwidth on paper, but they're competing with the GPU and everything else in the system for that same memory bandwidth, whereas a desktop PC has a completely dedicated pool just for the CPU.

In addition, they are unique in using GDDR6 for the unified memory, which has a significant latency penalty versus the DDR3/4/5 on the PC side.

As I recall from a hardware type here, GDDR itself has similar latency to DDR, but the memory controllers it's paired with prioritise bandwidth over latency. Not that it changes your point - CPU performance may well be impacted by the memory setup.

PS4 at least prioritised CPU over GPU, but the downside of this was that CPU access cost the GPU about double the bandwidth the CPU was actually consuming. I'd guess it jumped to the front of the queue, maybe aborted some accesses that were nicely scheduled, and caused some bubbles (probably not the proper term) on the bus that resulted in lost opportunities for access.
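
If you want to see what that "costs about double" does to the budget, here's a crude model; the 2x penalty factor is just the PS4-era figure above applied blindly, not a measurement of any current console:

```c
/* Toy unified-memory contention model: every GB/s the CPU pulls is assumed to
 * remove ~2 GB/s of GPU-usable bandwidth (read/write turnaround, page misses,
 * pre-emption bubbles). The 2x factor and 176 GB/s peak echo the old PS4
 * figures; treat them as assumptions, not measurements. */
#include <stdio.h>

int main(void) {
    const double total_bw    = 176.0;  /* PS4 GDDR5 peak, GB/s */
    const double cpu_penalty = 2.0;    /* GPU bandwidth lost per GB/s of CPU traffic */

    for (double cpu_bw = 0.0; cpu_bw <= 20.0; cpu_bw += 5.0) {
        double gpu_bw = total_bw - cpu_penalty * cpu_bw;
        printf("CPU uses %4.1f GB/s -> GPU left with ~%5.1f GB/s\n", cpu_bw, gpu_bw);
    }
    return 0;
}
/* 20 GB/s of CPU traffic knocks ~40 GB/s off the GPU's budget in this model */
```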
 
As I recall from a hardware type here, GDDR itself has similar latency to DDR, but the memory controllers it's paired with prioritise bandwidth over latency. Not that it changes your point - CPU performance may well be impacted by the memory setup.

PS4 at least prioritised CPU over GPU, but the downside of this was that CPU access cost the GPU about double the bandwidth the CPU was actually consuming. I'd guess it jumped to the front of the queue, maybe aborted some accesses that were nicely scheduled, and caused some bubbles (probably not the proper term) on the bus that resulted in lost opportunities for access.

Great point, one other thing I forgot to mention is we actually have data for this, thanks to the harvested semi-defective console APUs being sold on the secondary market:

That's where my 'roughly double' comment came from regarding memory latency.
It's also important to note that the 4700S has the iGPU completely fused off, so the result is the best possible scenario with the CPU getting all of the memory to itself.

Now I kind of want to find one just to benchmark Fallout4 / Fallout4 VR on it.

AIDA 64 Latency Measurements | AMD 4700S (GDDR6) | Ryzen 7 4750G (DDR4)
Memory Latency               | 145 ns            | 74 ns
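
For anyone who wants to reproduce that kind of number without AIDA64, the usual trick is a dependent pointer chase over a buffer much bigger than any L3. A rough single-threaded sketch (no TLB or prefetcher tricks, so expect the same ballpark rather than identical numbers):

```c
/* Rough pointer-chase latency sketch: walks a randomly permuted cycle that is
 * much larger than any L3, so nearly every hop is a DRAM round trip.
 * AIDA64 is more careful about access pattern and prefetch control. Build with -O2. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N    (64 * 1024 * 1024 / sizeof(size_t))  /* 64 MiB of pointer slots */
#define HOPS (1L << 25)

int main(void) {
    size_t *next = malloc(N * sizeof(size_t));
    if (!next) return 1;
    for (size_t i = 0; i < N; i++) next[i] = i;

    /* Sattolo's algorithm: builds one single random cycle, so the chase never
     * falls into a small loop that would fit in cache. */
    srand(1);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t p = 0;
    for (long h = 0; h < HOPS; h++) p = next[p];  /* each load depends on the previous one */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("~%.1f ns per dependent load (checksum %zu)\n", ns / HOPS, p);
    free(next);
    return 0;
}
```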
 
As I recall from a hardware type here, GDDR itself has similar latency to DDR, but the memory controllers it's paired with prioritise bandwidth over latency. Not that it changes your point - CPU performance may well be impacted by the memory setup.

PS4 at least prioritised CPU over GPU, but the downside of this was that CPU access cost the GPU about double the bandwidth the CPU was actually consuming. I'd guess it jumped to the front of the queue, maybe aborted some accesses that were nicely scheduled, and caused some bubbles (probably not the proper term) on the bus that resulted in lost opportunities for access.
And we know the XSX has even more memory contention problems due to its split memory architecture. This could be a problem if the game is demanding on CPU bandwidth.
 
And we know the XSX has even more memory contention problems due to its split memory architecture. This could be a problem if the game is demanding on CPU bandwidth.

Well that's not been determined by folks like us yet.

Series X has a 320-bit bus, but the GPU-optimal and non-GPU-optimal areas have different memory striping setups. Accesses in the 6GB area will be spread across fewer channels. How the balance works out we don't know; all we know is that MS did a whole lot of profiling and decided that the GPU-optimal 320-bit plus 'other' 192-bit arrangement was a definite win for them.

And, of course, either processor can access either area.
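
The actual address mapping hasn't been published, so purely as an illustration of "same-sized access, fewer channels", here's a toy interleaver with an assumed 256-byte stripe and the 10-vs-6 channel split:

```c
/* Toy address-to-channel interleaver for a Series X-like layout: the 'GPU
 * optimal' region stripes across all 10 x 32-bit channels, the other region
 * across only 6. Stripe size and mapping are assumptions, not the real hardware. */
#include <stdio.h>
#include <stdint.h>

#define STRIPE 256u  /* bytes per channel before moving to the next one (assumed) */

static unsigned channel_for(uint64_t addr, unsigned num_channels) {
    return (unsigned)((addr / STRIPE) % num_channels);
}

int main(void) {
    /* a 2 KiB burst spread over the two regions */
    for (uint64_t off = 0; off < 2048; off += STRIPE) {
        printf("offset %4llu: GPU-optimal region -> ch %u, other region -> ch %u\n",
               (unsigned long long)off,
               channel_for(off, 10),   /* 10 channels = full 320-bit bus */
               channel_for(off, 6));   /* 6 channels  = 192 bits' worth of bus */
    }
    return 0;
}
/* The same 2 KiB access touches 8 different channels in the 10-channel region but
 * wraps around after 6 in the other one -- fewer channels working in parallel. */
```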
 
One additional thing to think about WRT Starfield on PC is that users will be able to tweak settings that are unchangeable on console. That can greatly increase or decrease CPU or GPU load.

Not everyone needs or wants to run every setting at max.

Edit: Also, I noticed a lot of high frequency details look less sharp or refined at 30fps. Could be my old eyes though, additional ghosting. :ROFLMAO::cry:

Nope, not your imagination or old eyes. Motion resolution (which affects high frequency details and everything else) takes a nosedive going from 60 FPS down to 30 FPS. Screenshots at 30 FPS look great; gameplay at 30 FPS always looks worse than gameplay at 60 FPS due to lower motion resolution. Granted, not everyone might notice it. I could certainly see some people being unable to resolve details well when things are in motion versus when they aren't, regardless of framerate (i.e. like real life, where the "framerate" is effectively infinite).

In other words, some people may just be unable to resolve things in motion as well as they can resolve things that aren't, so they won't notice the drop in detail in a 30 FPS game once you move the camera or there's motion on screen.
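
To put a rough number on the motion resolution hit: on a sample-and-hold display, an object you're tracking smears across everything it covers during the frame's hold time, so perceived smear is roughly speed times frame time. The pan speed below is just an assumed example:

```c
/* Sample-and-hold motion blur estimate: a tracked object is held on screen for a
 * whole frame, so perceived smear ~= pan speed * frame time. The 1800 px/s pan
 * speed (a moderate camera turn at 1080p-ish) is an assumption. */
#include <stdio.h>

int main(void) {
    const double pan_speed_px_per_s = 1800.0;
    const int fps[] = { 30, 60, 120 };

    for (int i = 0; i < 3; i++) {
        double smear_px = pan_speed_px_per_s / fps[i];  /* pixels crossed per held frame */
        printf("%3d fps: ~%2.0f px of smear on a tracked object\n", fps[i], smear_px);
    }
    return 0;
}
/* ~60 px at 30 fps vs ~30 px at 60 fps with this pan speed -- fine texture detail
 * that spans fewer pixels than the smear simply isn't resolvable in motion. */
```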

Regards,
SB
 
Surprising how subtle a difference Overdrive mode makes
It's very scene-dependent.

Cyberpunk_2077_Screenshot_2022.09.24_-_21.10.16.55.png

Cyberpunk_2077_Screenshot_2022.09.24_-_21.10.59.23.png


That's just regular RT vs Ultra settings with no RT by the way. Not even RT overdrive.
 
Surprising how subtle a difference Overdrive mode makes
They might look similar at first glance and in general views, but the more you play with the Overdrive mode, the more often you will notice a difference.

I would separate a few areas where Overdrive RT does make a dramatic difference.

First is the shadowing and lighting from the indirect light sources. Let's take this pair of screens as an example.

Here you can see how direct light hits a darkened corner of a street:
Cyberpunk-2077-Screenshot-2023-06-16-10-55-50-09
https://ibb.co/6t01znJ
With probes, there are no additional bounces in this environment.

Cyberpunk-2077-Screenshot-2023-06-16-10-55-22-51
https://ibb.co/MRQhbrt
With per-pixel 2-bounce GI, you can see how the brightly lit corner lights up the environment around it, creating a directional shadowing and lighting setup.
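
Very roughly, the difference comes down to where the bounce lighting is evaluated. As a concept-only sketch (a made-up 1-D scene, nothing to do with CDPR's actual renderer), probe GI averages the bright patch away across the whole probe cell, while a per-pixel bounce keeps it local:

```c
/* Toy 1-D illustration of why probe GI misses the bright-corner bounce that
 * per-pixel GI picks up. Scene, probe spacing and albedo are made-up numbers. */
#include <stdio.h>

#define SCENE 16
/* direct light hitting each surface patch: one brightly lit corner, rest dark */
static const double direct[SCENE] =
    { 0,0,0,0, 0,0,0,0, 0,0,10,0, 0,0,0,0 };
static const double albedo = 0.5;

/* Probe GI: indirect light at x is what the sparse probe cell (every 8 patches)
 * sees on average -- the single bright patch gets diluted over the whole cell. */
static double indirect_probe(int x) {
    int probe = (x / 8) * 8;          /* nearest probe "cell" */
    double sum = 0;
    for (int i = probe; i < probe + 8; i++) sum += direct[i];
    return albedo * sum / 8.0;
}

/* Per-pixel bounce: indirect light at x comes from the neighbouring patches it
 * actually "sees", so sitting right next to the bright corner matters a lot. */
static double indirect_per_pixel(int x) {
    double sum = 0;
    int n = 0;
    for (int i = x - 1; i <= x + 1; i++)
        if (i >= 0 && i < SCENE && i != x) { sum += direct[i]; n++; }
    return albedo * sum / n;
}

int main(void) {
    for (int x = 8; x < 13; x++)
        printf("patch %2d: probe GI %.2f  per-pixel GI %.2f\n",
               x, indirect_probe(x), indirect_per_pixel(x));
    return 0;
}
/* Probe GI gives every patch in the cell the same ~0.6, while per-pixel GI puts
 * ~2.5 of bounce light only on the patches right next to the bright corner. */
```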

Here is another example where the difference is obvious:
Cyberpunk-2077-Screenshot-2023-05-15-18-01-50-05
https://ibb.co/sCFB7Ff - probes
Cyberpunk-2077-Screenshot-2023-05-15-18-01-23-06
https://ibb.co/0tfpZ38 - per pixel GI

And a few more examples:
https://ibb.co/ykfSNR4 - probes
https://ibb.co/KGtRjKR - per pixel GI
https://ibb.co/3YjJXmb - probes
https://ibb.co/SR1qYqh - per pixel GI

Another significant area of improvement involves fixed normals and lighting; it seems there are plenty of issues with them in Cyberpunk.
https://ibb.co/bLp8k3T - no RT
https://ibb.co/cbN0fGJ - Overdrive

Yet another major area of improvement stems from specular (reflections) and lighting (GI) occlusion, where light does not leak through objects.
https://ibb.co/fCtG3qt
https://ibb.co/DrNVqkR

Shadows from all light sources in scenes also help with light leaking, thanks to RTXDI.
https://ibb.co/xfPbqsM
https://ibb.co/kH66pW6

And the diffuse lighting from area lights makes a difference too.
https://ibb.co/0GBVMpS
https://ibb.co/09xhVw4

Offscreen reflections, of course, represent another key area of improvement, and they help prevent light leakage as well.
And the last one just for fun)
https://ibb.co/19qtB8r
https://ibb.co/6Bvj9fN
It's interesting how some random character models in the game can be insanely detailed, while others, often including main characters, aren't.
 