Digital Foundry Article Technical Discussion [2023]

He’s pointing out that loading assets isn’t necessarily the bottleneck for loading times.

In this case it is likely CPU-bound level loading. For instance, in Starfield a good percentage of the objects in the world can be picked up. They made the environment design heavily interactable: where most games would have had things like playing cards and tweezers as static props built into the level, Starfield has them as objects with their own physics, weight and size, and each of them can be affected separately by the dynamic systems.

Because of this, levels that are densely packed with objects take longer to load, because the engine is doing significantly more than just loading textures.
See, this is actually useful information. That there is more to loading than asset decompression goes without saying. Although I'm still puzzled as to why there's no mention of DirectStorage, which can only help.
 
I’m sure it does help, especially if it’s the bottleneck. For most games, I think it should be.
 
It just needs to be as powerful as the Steam Deck and have DLSS; if it had that, it would be firmly next gen.
Also, there have been no updates to The Matrix Awakens demo on consoles since it was released, and there have been multiple iterations of UE5 since then. I suspect it would run better on all systems now if they were to update it, especially after the latest Fortnite release.
As for the camera, after Pokémon Go, I think there is room for Nintendo to do some interesting AR things.
 
Thank you for the insight!
It’s so easy to get caught up looking for blood: everyone is convinced there is no way it could run so poorly relative to its graphical output, so everyone is looking for that gotcha.

The public has a general lack of trust in corporations and developers, and people think they understand the performance of their hardware better than the people doing the actual work.
To be clear, I'm not claiming that big issues don't make it through testing... as we all know by this point the launch period is often chaotic with changes coming in late and (sometimes major) regressions slipping through. I also have no problem with people doing technical work to help root cause and work around issues. The problem comes when - often a 3rd party - grabs random technical info they don't really understand and posts it to a broader audience of angry folks who also don't understand it and instead just jump onboard because it seems like technical language that supports their position. If we could just inject a giant dose of humility and shift the attitude from angry mob to helpful then this process might actually end up being useful. Sadly I think these situations often reveal the real motivations of folks, namely to complain and score internet points rather than actually help improve anything.

In this case in particular I'm still guessing there's a strong possibility that the game mainly needs some driver tweaks on NVIDIA, as most big AAA games do.

If this is a trend that continues, then it's not that developers are poor at their jobs; it's that baked tech is just that effective at producing that visual/performance trade-off, and we've been used to it for so long that most people won't accept less.
I don't think people realize the magnitude of the difference to be honest. Baked lighting is like... a texture lookup. It's effectively free at this point. Contrast that to maintaining multiple big scene data structures (BVH, probes, surface caches, SDFs, etc. etc.) and doing expensive updates, sampling and denoising of them. Hell a lot of folks are even surprised how expensive just dynamic shadows are before they even get to dynamic GI. Obviously some recent games could probably have used baked lighting still, but I assume Starfield has a full time of day system, making that effectively impossible.
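To give a rough sense of that gap, here is a deliberately crude sketch (the lightmap/probe sizes and the update step are made-up placeholders, not a real GI implementation): the baked path is essentially one fetch from precomputed data, while the dynamic path has to keep a live structure current every frame before it can even be sampled, and a real engine would also be tracing, filtering and denoising on top of that.

```cpp
// Crude illustration only: contrasts "read a precomputed value" with
// "maintain and resample a live structure every frame". The probe update
// below is a stand-in, not a real GI algorithm.
#include <array>
#include <cstdio>

constexpr int kLightmapTexels = 256 * 256;
constexpr int kProbes = 32 * 32 * 8;

std::array<float, kLightmapTexels> gBakedLightmap{};   // authored offline, loaded as-is
std::array<float, kProbes>         gProbes{};          // live data the engine must keep current

// Baked path: shading a point is essentially one fetch from precomputed data.
float shadeBaked(int texel) { return gBakedLightmap[texel % kLightmapTexels]; }

// Dynamic path: before anything can be sampled, the probe field has to be
// updated every frame as the lighting changes (real engines also trace,
// filter and denoise; this trivial relaxation stands in for that recurring cost).
void updateProbesForFrame(float timeOfDay)
{
    for (float& p : gProbes)
        p = 0.9f * p + 0.1f * timeOfDay;
}

float shadeDynamic(int probe) { return gProbes[probe % kProbes]; }

int main()
{
    // Baked: zero per-frame maintenance.
    std::printf("baked sample:   %f\n", shadeBaked(1234));

    // Dynamic: pay the maintenance cost every frame, then sample.
    updateProbesForFrame(0.5f);
    std::printf("dynamic sample: %f (after updating %d probes this frame)\n",
                shadeDynamic(1234), kProbes);
}
```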
 
Not only is this post rude, it's also utterly pointless and offers no insight whatsoever. There is no mention of DirectStorage anywhere, whether it's on console or PC. Can you actually provide reasoning, or would you rather just do sarcastic drive-by posts?
I'm sorry if you found that rude. It wasn't my intention. I've been posting for some time about the fact that IO isn't always the limiting factor in loading. What I intended with that post wasn't to sound sarcastic or to attack you in any way, but to nudge you into finding the answer to your question. You pointed out in your post that the PC version loads faster despite the Velocity Architecture (which includes DirectStorage) on Xbox. I was attempting to point out that not only is IO speed not always the limiting factor; with faster drives and better IO APIs, it's less often the limiting factor. The idea was to have you come to the conclusion of what might be happening yourself, rather than just shouting it at you.

DirectStorage, the Velocity Architecture, and PS5's custom IO solution are all efforts to remove IO as the bottleneck in loading and to offer some relief on CPU usage by way of hardware decompression, done either on the GPU on PC or on specialized hardware on console. But IO bandwidth isn't always the limiting factor, and the CPU is often doing things other than just decompression; with these advancements, which are of course important, IO bandwidth is now even less often the limit. As an example, think about some of the more meme-y Starfield videos of people spawning thousands of potatoes or watermelons. If you are loading in a thousand of the same item, you would in theory only be loading the texture and model once from storage, which should take nearly no time on modern hardware. But all of those videos show a large performance drop, or even a stall measurable in seconds. This is because each of these items has to be processed in some way when it appears in the world. That could be something like defining the physics state of the object, calculating its position in game space, setting up its collision, or even updating NPC behavior to take a change in the world into account. It really depends on the game.
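To make the "thousand potatoes" point concrete, here is a minimal toy sketch (not Bethesda's actual code; the Asset/Instance types and the numbers are invented for illustration): the asset is read from storage once, but the per-instance CPU work of placing the object and registering its physics/collision still runs a thousand times.

```cpp
// Toy illustration only -- not engine code. It shows why duplicating one
// asset is cheap on IO but still costs CPU time per spawned instance.
#include <chrono>
#include <cstdio>
#include <string>
#include <vector>

struct Asset    { std::string mesh; std::string texture; };           // loaded once
struct Instance { float pos[3]; float velocity[3]; bool collisionRegistered; };

Asset loadAsset()                              // one read + decompress from storage
{
    return Asset{ "potato.mesh", "potato_albedo.tex" };
}

Instance setupInstance(const Asset&, int i)    // per-object CPU work
{
    Instance inst{};
    inst.pos[0] = static_cast<float>(i % 32);                 // place it in the world
    inst.pos[1] = 1.0f + static_cast<float>(i) * 0.01f;
    inst.pos[2] = static_cast<float>(i / 32);
    inst.collisionRegistered = true;           // stand-in for physics/collision setup
    return inst;
}

int main()
{
    const Asset potato = loadAsset();          // IO happens once
    std::vector<Instance> world;
    world.reserve(1000);

    const auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < 1000; ++i)             // CPU work happens 1000 times
        world.push_back(setupInstance(potato, i));
    const auto t1 = std::chrono::steady_clock::now();

    std::printf("Spawned %zu instances from one asset in %lld us\n",
                world.size(),
                static_cast<long long>(
                    std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count()));
}
```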

This has always been true. If you play Half-Life on period-correct hardware, a slow HDD and a CPU like a 266 MHz Pentium or a K6, you get noticeable loading times when you walk between sections of the game, just as it was at the time. Play the game off the same storage device on a system with a Pentium 4 or an Athlon and the load screens are reduced to more of a stutter.

As for why there is no DirectStorage mentioned: I'm not sure any console game mentions it. DirectStorage is considered part of the Velocity Architecture, so I don't know that they would ever put a DS badge on the box. On PC, it probably doesn't use it; very few games do.
 
Thank you. This is very insightful.

DirectStorage was actually mentioned as being used for Scorn, and the developer was making quite a big deal about it. Later on it was dropped from the PC version and only used on Xbox. Starfield almost assuredly doesn't use it on PC, and it's doubtful it uses it on Xbox. It doesn't seem to be that commonly used in Xbox games either. In games like Forspoken, you can instantly see the benefits. I know it's a game-by-game thing, but you'd think the Xbox would be able to load a lot faster than this given the emphasis Sony and Microsoft put on fast storage and faster load times.
 
Having no (or short) loading times is a design choice most of the time. One of Microsoft's shortcomings has always been getting developers to adopt its solutions, and that includes its own studios. This is in contrast to what Sony and Nintendo have been able to do. Think about the Wii Remote, and how with a simple collection of games (Wii Sports) Nintendo perfectly demonstrated how enjoyable games can be with motion controls. Even when adoption isn't universal, we have examples like HD rumble in 1-2-Switch, and the features of the DualSense in Astrobot on PS5. Now try to think of a single game that effectively used the rumble triggers Microsoft introduced with the Xbox One. Maybe you can think of one or two. Now try the same game on PC and it's a crapshoot whether it supports them there.

This might be academic, but I also think every Xbox Series game uses DirectStorage. Unless I'm understanding things wrong, and that's possible, the IO system of the Xbox Series consoles is the Velocity Architecture, a superset of DirectStorage. I don't believe there is any fallback that allows you to not use DS. But using DS doesn't mean that you are actually leveraging it.
 
Good thing the Starfield team had AMD on hand to help with the FSR2 integration; along with the wrong LOD bias, this is how it handles motion blur and small specular detail.

Once again, my two pleas to devs when implementing reconstruction:

1) Pay attention to LOD bias (a quick rule-of-thumb sketch follows after this list).
2) Test all the post-process effects!
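For reference, the usual rule of thumb when rendering at a lower internal resolution and upscaling is a negative mip bias derived from the render-to-display ratio; both vendors' integration guides recommend something along these lines, though the exact extra offset varies, so treat the -1.0 below as an assumption rather than gospel.

```cpp
// Rule-of-thumb sketch: bias texture mip selection so textures are sampled
// as if the game were running at display resolution. The extra offset is
// vendor/guide specific (often an additional -1.0 is suggested).
#include <cmath>
#include <cstdio>

float upscalerMipBias(float renderWidth, float displayWidth, float extraOffset = 0.0f)
{
    // log2(render/display) is negative when render < display, i.e. sharper mips.
    return std::log2(renderWidth / displayWidth) + extraOffset;
}

int main()
{
    // e.g. 1440p internal -> 4K output ("Quality"-style scaling)
    std::printf("2560 -> 3840: base bias %.3f, with -1.0 offset %.3f\n",
                upscalerMipBias(2560.0f, 3840.0f),
                upscalerMipBias(2560.0f, 3840.0f, -1.0f));

    // e.g. 1080p internal -> 4K output ("Performance"-style scaling)
    std::printf("1920 -> 3840: base bias %.3f, with -1.0 offset %.3f\n",
                upscalerMipBias(1920.0f, 3840.0f),
                upscalerMipBias(1920.0f, 3840.0f, -1.0f));
}
```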


 
I don't think people realize the magnitude of the difference to be honest. Baked lighting is like... a texture lookup. It's effectively free at this point. Contrast that to maintaining multiple big scene data structures (BVH, probes, surface caches, SDFs, etc. etc.) and doing expensive updates, sampling and denoising of them. Hell a lot of folks are even surprised how expensive just dynamic shadows are before they even get to dynamic GI. Obviously some recent games could probably have used baked lighting still, but I assume Starfield has a full time of day system, making that effectively impossible.
Also, coarse hacks like light probes delivered reasonable results for relatively little cost. The outcome is a far steeper step into diminishing returns, where elevating lighting from what looked okay last gen to something that looks significantly better requires orders of magnitude more power. That's the bit people are missing. Just as brightness perception is non-linear, with large increases in light energy producing only modest increases in perceived brightness, graphical improvements that are perceived as marginally better are being driven by exponentially higher costs in hardware power and software engineering. Trying to measure by 'what you see' is never going to be insightful.

This is why this conversation - the state of modern game dev relative to prior generations - needs clearer A/B comparisons rather than a generalised "last gen didn't look much worse and ran far better". That, and real data points rather than unreliable recollection. I recall Uncharted 4 looking amazing, and also looking incredibly pants at times when the lighting broke completely. We can do that with a lot of games that people remember as being all-round impressive while skirting over their forgotten shortcomings.
 
It just needs to be as powerful as the Steam Deck and have DLSS; if it had that, it would be firmly next gen.
Also, there have been no updates to The Matrix Awakens demo on consoles since it was released, and there have been multiple iterations of UE5 since then. I suspect it would run better on all systems now if they were to update it, especially after the latest Fortnite release.
As for the camera, after Pokémon Go, I think there is room for Nintendo to do some interesting AR things.
Talking of console successors, I don't know if the DF staff have heard about this rumour going around regarding the PS5 Pro, which is allegedly a 23 teraflop monster with double the performance in RT. It seems AMD is actually becoming good at RT. 😑 Fanboys are overexcited.

 
Agreed. Motion blur is an artificial, unnatural thing. It doesn't exist in real life. If you turn your head fast you still see clearly, and if something is moving fast in front of you there is no blur, more like an "astigmatism" effect.

First off...what?! That is absolutely not the case.

Secondly, even though that's not the case, how exactly things are perceived in 'real life' is of questionable relevance to a game. Most people are gaming on 60-120 Hz displays, and the vast majority certainly cannot reach the latter with their hardware all that often. The purpose of motion blur is at least partly to increase that perceived fluidity (outside of games that use filmic effects as a matter of style); we have to work around the display technology we have at the time, which does not have the response time and clarity of the human eye. Maybe once we get displays that are like looking out a window, then we can talk about nixing added motion blur for good. In the meantime though, as an option, no.

I think camera motion blur, especially on sample-and-hold LCD displays that add their own blur by default, is of questionable value, sure. Object motion blur, however, decidedly is not. I notice a big difference with it on versus off, at least on a 60 Hz display.


Regardless, reconstruction can work with these effects just fine. This is just an oversight. The 'solution' to this is to fix it, not to say 'well, don't hold your arm like that'. These graphical effects exist as an option for a reason; it's annoying when they are effectively 'disabled' because they depend on another graphical feature to work properly.

Talking of console successors, I don't know if the DF staff have heard about this rumour going around regarding the PS5 Pro, which is allegedly a 23 teraflop monster with double the performance in RT. It seems AMD is actually becoming good at RT. 😑 Fanboys are overexcited.

Note those rumoured specs from Zuby_Tech are from May of this year; they just keep getting recirculated as another outlet discovers them and promotes them as the 'newest leak'. Possible in late 2024? I'm extremely skeptical of a 320-bit bus on an advanced process node at reasonable cost, to say the least.
 

To say nothing of the dodgy AI accelerator, and the fact that RDNA 3 would be pushing way more TFLOPS than that with 72 CUs at that clock speed.
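For anyone wanting to sanity-check that, here is the back-of-the-envelope FP32 maths (the ~2.5 GHz clock is inferred from "23 TF with 72 CUs" rather than read off the leak, so treat it as an assumption): classic CU counting lands right on the rumoured figure, while RDNA 3's dual-issue marketing maths would roughly double it.

```cpp
// Back-of-envelope FP32 maths for the rumoured specs. The ~2.5 GHz clock is
// inferred from "23 TF with 72 CUs", not a confirmed figure.
#include <cstdio>

double tflops(int cus, double ghz, bool dualIssue)
{
    const double lanes = 64.0;                    // stream processors per CU
    const double flopsPerFma = 2.0;               // multiply + add
    const double issue = dualIssue ? 2.0 : 1.0;   // RDNA 3 dual-issue FP32
    return cus * lanes * flopsPerFma * issue * ghz / 1000.0;
}

int main()
{
    // 72 CUs at ~2.5 GHz, counted the classic (RDNA 2-style) way: ~23 TF.
    std::printf("classic counting:    %.1f TF\n", tflops(72, 2.5, false));
    // The same chip counted with RDNA 3 dual-issue: roughly double on paper.
    std::printf("dual-issue counting: %.1f TF\n", tflops(72, 2.5, true));
}
```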
 
Really good discussion in the segment on the 7700 XT/7800 XT in the latest DF Direct, spurred by a viewer observation about how many reviewers basically equate DLSS and FSR because they both fall into a 'reconstruction' checkbox, and how this is a problem for the traditional way PC hardware is reviewed.


Timestamped here:


Also, lol at John's anecdote about how an AMD rep asked the DF crew months ago what they thought a reasonable price would be for the 7800 XT, and John replied "$499 would be a good price". AMD apparently sarcastically responded with "well, wouldn't $299 be a great price", basically insinuating John's suggestion was far too low, and welp, we see the actual retail price. Once again AMD had to be dragged kicking and screaming into actually delivering a competitive product that will sell. :)
 
That’s been a problem for a long time in GPU reviews. They aren’t product reviews in any real sense. Benchmarking is just one part of a proper review but we mostly just get benchmarks.
 
The older I get, the more I like the idea of reviewing the product the way I think HardOCP did back in the day. I might be misremembering the site, but someone stopped doing frames-per-second shootouts and started reviewing cards by determining what settings you needed to achieve playable framerates. So a new card would come out, they would play half a dozen games on it, and they would report which settings were playable, which caused problems, and perhaps add a bit of speculation about the future. Like: Unreal Engine games run better on this card than Quake engine games, so it should be good for future games using that engine.

At the time, I thought they were nuts.
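That "highest playable settings" approach is easy to sketch in code; roughly something like the following, where the presets, fps figures and "playable" thresholds are all placeholders rather than real benchmark data.

```cpp
// Sketch of a "highest playable settings" style test. Presets, fps numbers
// and the 60 fps target are placeholders, not real benchmark results.
#include <cstdio>
#include <string>
#include <vector>

struct PresetResult { std::string preset; double avgFps; double onePercentLowFps; };

// In a real review these numbers would come from actual benchmark runs.
std::vector<PresetResult> benchmarkAllPresets()
{
    return {
        { "Ultra",  48.0, 34.0 },
        { "High",   63.0, 47.0 },
        { "Medium", 82.0, 61.0 },
        { "Low",   110.0, 84.0 },
    };
}

int main()
{
    const double targetAvg = 60.0, targetLow = 45.0;      // "playable" thresholds (arbitrary)

    for (const PresetResult& r : benchmarkAllPresets())   // ordered highest to lowest quality
    {
        if (r.avgFps >= targetAvg && r.onePercentLowFps >= targetLow)
        {
            std::printf("Highest playable preset: %s (%.0f fps avg, %.0f fps 1%% low)\n",
                        r.preset.c_str(), r.avgFps, r.onePercentLowFps);
            return 0;
        }
    }
    std::printf("No preset met the playability target.\n");
    return 0;
}
```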
 