Digital Foundry Article Technical Discussion [2023]

So AMD was apparently helping devs get the most out of FSR with Starfield and they...forgot to account for the mip bias issue.
It's important to remember that applying a universal negative LoD bias isn't a panacea for upscaling. For some sets of textures, a negative LoD bias can cause temporal aliasing, so without careful manual tuning for each of those textures, severe shimmering artifacts can manifest in the in-game graphics ...
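For reference, the usual guidance from the upscaler docs (both FSR 2 and DLSS describe it roughly this way, as far as I'm aware) is to bias sampling by about log2(renderResolution / displayResolution), sometimes with a small extra offset on top. A minimal sketch of that calculation (function name is mine, not from any SDK):

```cpp
#include <cmath>
#include <cstdio>

// Commonly documented negative LoD (mip) bias when rendering at a lower
// internal resolution and upscaling to the display resolution. Individual
// titles may still need to tune it per texture to avoid shimmering.
float UpscalingMipBias(float renderWidth, float displayWidth)
{
    return std::log2(renderWidth / displayWidth); // negative when renderWidth < displayWidth
}

int main()
{
    // Example: a "Quality" upscaling mode rendering 1440p internally for a 4K output.
    std::printf("Suggested bias: %.3f\n", UpscalingMipBias(2560.0f, 3840.0f)); // ~ -0.585
}
```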
 
HardOCP reviews were hated by most back then IIRC.

The reality is that the primary engagement with hardware reviews has never come (well, at least not for arguably decades now; it's not just a new tech-tuber phenomenon) from people looking for purchase guidance from a gaming-experience standpoint, but from hardware enthusiasts who want to examine the merits of said hardware and, more importantly, how the parts compare against each other.

This is why the HardOCP style of reviews, or anything similar that looks beyond a pure numbers competition, is problematic for the target audience (and why things like reconstruction, and even the ray tracing/raster split, are too). It's basically a game unto itself that people use to debate the hardware in question.
 
HardOCP reviews were hated by most back then IIRC.
I mean, I hated them back then. It's funny, because I remember reading them and thinking, "What am I looking at? All of these bars are basically the same," because they normalized things by framerate and adjusted the settings to get there. Now I understand what they were trying to achieve: if you bought this card, you would have to play at these settings to get this playable framerate. That's a data set that helps you make a decision.

Even today, you see a fair amount of reviews testing with upscaling off to "keep a level playing field", or disabling GPU PhysX or Nvidia GameWorks features because AMD cards can't use them and having them all on tanks performance even on Nvidia cards. But, as Starfield's recent upscaling issues have shown, people with Nvidia cards really want DLSS. And personally, I'll turn on some of the GameWorks features if the performance is high enough for me to enjoy it. If DLSS is available without image quality issues, why wouldn't someone use it? If tessellation tanks the performance of the Arkham games on an AMD graphics card, why would someone turn it on?
 
To be clear, I'm not claiming that big issues don't make it through testing... as we all know by this point the launch period is often chaotic with changes coming in late and (sometimes major) regressions slipping through. I also have no problem with people doing technical work to help root cause and work around issues. The problem comes when - often a 3rd party - grabs random technical info they don't really understand and posts it to a broader audience of angry folks who also don't understand it and instead just jump onboard because it seems like technical language that supports their position. If we could just inject a giant dose of humility and shift the attitude from angry mob to helpful then this process might actually end up being useful. Sadly I think these situations often reveal the real motivations of folks, namely to complain and score internet points rather than actually help improve anything.

In this case in particular I'm still guessing there's a strong possibility that the game mainly needs some driver tweaks on NVIDIA, as most big AAA games often do.


I don't think people realize the magnitude of the difference, to be honest. Baked lighting is like... a texture lookup. It's effectively free at this point. Contrast that with maintaining multiple big scene data structures (BVH, probes, surface caches, SDFs, etc.) and doing expensive updates, sampling and denoising of them. Hell, a lot of folks are even surprised by how expensive just dynamic shadows are before they even get to dynamic GI. Obviously some recent games could probably have still used baked lighting, but I assume Starfield has a full time-of-day system, making that effectively impossible.
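To make the "texture lookup" point concrete, here's a tiny CPU-side sketch (purely illustrative, nothing to do with Starfield's actual data) of what shading from a baked lightmap amounts to at runtime: all the expensive work happened at build time, and what's left is a filtered fetch.

```cpp
#include <algorithm>
#include <vector>

// Illustrative only: a baked lightmap is just data written offline.
// Shading with it is a single (bilinearly filtered) lookup per sample.
struct Lightmap
{
    int width = 0, height = 0;
    std::vector<float> texels; // one irradiance value per texel, for simplicity

    float Sample(float u, float v) const
    {
        float x = std::clamp(u, 0.0f, 1.0f) * (width - 1);
        float y = std::clamp(v, 0.0f, 1.0f) * (height - 1);
        int x0 = static_cast<int>(x), y0 = static_cast<int>(y);
        int x1 = std::min(x0 + 1, width - 1), y1 = std::min(y0 + 1, height - 1);
        float fx = x - x0, fy = y - y0;
        auto at = [&](int xi, int yi) { return texels[yi * width + xi]; };
        float top = at(x0, y0) * (1 - fx) + at(x1, y0) * fx;
        float bot = at(x0, y1) * (1 - fx) + at(x1, y1) * fx;
        return top * (1 - fy) + bot * fy;
    }
};

// A dynamic GI system instead has to keep scene-sized structures
// (BVH, probe grids, surface caches, SDFs, ...) updated every frame,
// then sample and denoise them -- vastly more work per frame.

int main()
{
    Lightmap lm{2, 2, {0.0f, 1.0f, 1.0f, 2.0f}};
    return lm.Sample(0.5f, 0.5f) == 1.0f ? 0 : 1; // centre of a 2x2 map interpolates to 1.0
}
```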

Thanks for the insightful reply.
Would you be able to offer any explanations for the non-aligned memory allocations that are also mentioned in that post?
I'm more of a CPU person, and anything high-perf for me, and especially anything that will be thrown at SIMD, will always be allocated at an aligned address.
Is it different for GPUs? Is memory alignment less important?
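(Roughly this sort of thing, I mean; an illustrative sketch, not from any real codebase:)

```cpp
#include <cstdio>
#include <cstdlib>
#include <immintrin.h> // assumes an x86-64 compiler with AVX enabled

int main()
{
    // Over-aligning hot data is routine on the CPU: 32-byte alignment here
    // so the AVX loads below can use the aligned form.
    constexpr std::size_t count = 1024; // 4096 bytes, a multiple of the alignment
    float* data = static_cast<float*>(std::aligned_alloc(32, count * sizeof(float)));

    for (std::size_t i = 0; i < count; ++i)
        data[i] = 1.0f;

    __m256 sum = _mm256_setzero_ps();
    for (std::size_t i = 0; i < count; i += 8)
        sum = _mm256_add_ps(sum, _mm256_load_ps(data + i)); // faults if data isn't 32-byte aligned

    float lanes[8];
    _mm256_storeu_ps(lanes, sum);
    std::printf("%f\n", lanes[0] + lanes[1] + lanes[2] + lanes[3] +
                        lanes[4] + lanes[5] + lanes[6] + lanes[7]); // 1024.0

    std::free(data);
}
```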

Honestly, I sort of assumed that the GPU driver wouldn't let users/the API allocate non-aligned memory.
Although I do recall that I've had to do some special stuff to get CUDA allocations with the correct alignment?
(although that was more for writing an RDMA driver / functionality)


Cheers,
 
Would you be able to offer any explanations for the non-aligned memory allocations that are also mentioned in that post?
I couldn't really find enough info as to what the issue is there, and the Reddit reposts don't make a lot of sense. Presumably it's based on the line about Starfield workarounds from the release notes here (https://github.com/HansKristian-Work/vkd3d-proton/commit/88e4f300cc0b5b6f0880c1233d562cf506b546fb), but there's not really enough info there for me to comment on either. Possibly it's an issue specific to the Proton layer, where the way it gets mapped to Vulkan causes additional problems.

[edit] Dug a little more just out of curiosity and I'm guessing it refers to this bit in the placed resource allocation stuff: https://github.com/HansKristian-Wor...233d562cf506b546fb/libs/vkd3d/resource.c#L903
Without digging too far into this code it seems like the issue is that Vulkan drivers are allowed to report required alignments that exceed D3D12's default alignments, so the Proton layer tries to add padding internally. Apparently this causes some sort of issue with the way Starfield allocates some of its resources so there is a workaround there to avoid doing this for Starfield.
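In other words (my reading of that code, not a verbatim excerpt): D3D12 promises fixed placement alignments, 64 KiB for ordinary resources, while a Vulkan driver is free to report a larger VkMemoryRequirements::alignment, so a translation layer ends up rounding placed-resource offsets up itself, roughly like this:

```cpp
#include <cstdint>

// D3D12's default placed-resource alignment (D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT).
constexpr uint64_t kD3D12DefaultAlignment = 64 * 1024;

// Round an offset up to a power-of-two alignment.
constexpr uint64_t AlignUp(uint64_t offset, uint64_t alignment)
{
    return (offset + alignment - 1) & ~(alignment - 1);
}

// Sketch (not the actual vkd3d-proton code): if the Vulkan driver demands a
// stricter alignment than D3D12 guarantees, the layer pads the offset the game
// asked for. An app that assumes its offsets are preserved exactly can then
// misbehave, hence a per-game workaround that skips the padding.
uint64_t PlaceResource(uint64_t requestedOffset, uint64_t vulkanRequiredAlignment)
{
    if (vulkanRequiredAlignment > kD3D12DefaultAlignment)
        return AlignUp(requestedOffset, vulkanRequiredAlignment);
    return requestedOffset;
}
```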

Seems like a pretty specific detail in any case... I imagine this kind of thing is pretty normal in these sorts of API translation layers. From the extensive comments in the blocks there you can get an idea of how much wiggle room there is in the specs and implementations of graphics APIs and drivers :)

Honestly, I sort of assumed that the GPU driver wouldn't let users/the API allocate non-aligned memory.
Generally it is indeed more of a requirement than a performance consideration on the GPU side, with the APIs often requiring least-common-denominator type alignments that must work across all implementations, but then sometimes having query-able support for smaller alignments in practice. In a lot of cases the exact alignment of a resource is not even really controlled by the application developer and is instead managed by the OS and graphics driver.
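For a concrete example of the "query-able" part: with an explicit API like Vulkan, the application asks the driver what alignment a resource actually needs and then respects that when sub-allocating. A minimal sketch (error handling omitted; assumes the device and buffer were created elsewhere):

```cpp
#include <vulkan/vulkan.h>

// The driver, not the application, decides the required alignment; the app
// just queries it and positions the resource accordingly.
VkDeviceSize AlignedOffsetFor(VkDevice device, VkBuffer buffer, VkDeviceSize desiredOffset)
{
    VkMemoryRequirements reqs{};
    vkGetBufferMemoryRequirements(device, buffer, &reqs);

    // reqs.alignment is guaranteed to be a power of two, so round up with a mask.
    return (desiredOffset + reqs.alignment - 1) & ~(reqs.alignment - 1);
}
```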
 

Great analysis from Oliver!

Also a great example of how your average gamer would not notice whether the game had RT or not as in most cases it's very subtle and you wouldn't see it if you didn't actively look for it.

I could also see how some people might prefer the greater contrast of the non-RT shots to the more realistic lighting of the RT version. Looking at Starfield you can see there's mods to increase contrast as the artist's vision of realistic lighting conditions wasn't contrasty enough for some players.

Removing RT from the performance mode was definitely the way to go. Basically the same thing I would do for solid performance on PC if I owned the game. Turn off RT.

Regards,
SB
 
Removing RT from the performance mode was definitely the way to go. Basically the same thing I would do for solid performance on PC if I owned the game. Turn off RT.
Either that, or you can use a mod like this, which is what I am doing. My 3700X can't cope with default RT in the game, it barely goes past 33 fps, but with this mod it runs quite a bit better.

 

Ampere and Turing are also tested despite the title. Turing especially suffers from a high compute cost during what appear to be texture operations, which might partially explain the poor resolution scaling I saw earlier.

It's worth a read.
 
Finally fixed guys!

Nah, just kidding. It's still shit.


Great video. I'm always amazed at the level of detail Alex is able to pick out of these games.

And great to highlight that this game is still a mess too. I agree with John that we should have a top 10 worst ports of the year/Hall of shame at the end of the year. And possibly a smaller (say top 3) most improved games.
 
God, that's brutal. Even managing to lock it to 30 fps(!!!) can't cure the animation stuttering. Between Dead Space and this, I think we finally have to let FromSoftware off the hook as the worst PC porting studio.

BTW, on a more positive note, at least I'm reading good things about the Lies of P release. The demo had some obnoxious shader stuttering; the release has a shader compilation stage at the start. Some studios do listen!
 

I like that they only replace something if the replacement looks as good as or better than what it replaces and runs acceptably. That's a much better way to approach things than forcing a new technique just to have it because it's "theoretically" better. If it can't run well enough to end up looking as good or better at the required performance, then it simply isn't used in place of the older technique.

Basically, for shadows for example: if the RT version, once it's constrained enough to run the game at 30/60 FPS, ends up looking worse than the baked version, then they keep the baked version.

So, it took them a long time before they started to replace some baked items, because the replacements just didn't look as good as the baked stuff at the performance they needed.

Hence, at least on console, only AO and reflections get the RT treatment in gameplay on the XBS-X.

It'll be interesting to see if this carries over to PC (keep it the same as consoles) or if they end up having toggleable settings for more RT.

Also interesting that during development the initial releases of DLSS weren't considered good enough to use, but it's now at a point where they might consider using it.

Regards,
SB
 
Also interesting that during development the initial releases of DLSS weren't considered good enough to use, but it's now at a point where they might consider using it.
That is not what was said in the interview - Chris Tector was talking about bog-standard TAA there, not FSR 2 or DLSS or anything (I was in the interview too).
From what I know from the interview, there was an internal Microsoft contribution from the Halo Infinite team regarding TAA that they ended up finding suitable for their game.
 
I agree with John that we should have a top 10 worst ports of the year/Hall of shame at the end of the year.
That would probably do well for them, but personally I think we have enough negativity surrounding this stuff as is. Dedicated videos from DF highlighting issues seem enough; any more than that and I feel it starts to get into outrage-for-views and bashing territory. Digital Foundry seem to like to think their analysis videos provide useful, constructive criticism and are not meant to shame developers or be in any way mean-spirited. I'd say such a video would step more towards the latter than the former.
 