Digital Foundry Article Technical Discussion [2023]

So explain why we have games on PC that don't have any stutter?
Because they're less demanding games with lower hardware requirements?

This isn't complicated. This problem genuinely only presents itself as soon as a developer wants to use high quality shader materials or any large amount of them.

And we literally have consoles as a comparison point to prove that fixed hardware can fix the problem, making it a PC-specific issue.
 
UE5 has a non-parallelised (single-threaded) render thread. Talking to Unreal people behind the scenes, that may be the core issue witnessed here (pun not intended): the one core that sits at 65% while most of the rest are at around 30%.

Game thread, Render thread, then a bunch of jobs, right?

That is not great. Is that a legacy problem, or is the renderer built that way because of technical issues surrounding Nanite or Lumen? Is there something about Nanite and Lumen on the CPU side that makes it hard to parallelize?


The renderer is definitely one thread.

Edit: Long story short: Whenever I play UE5 games I'm going to end up turning most of the visuals down to have the kind of experience I want (ideally 120 fps minimum).

Edit2: https://docs.unrealengine.com/5.2/en-US/parallel-rendering-overview-for-unreal-engine/

The single Render Thread is a frontend which generates commands into a command list, which can then be processed by the native back-end for the platform (Vulkan, D3D, console) in an appropriately parallel way for that platform. So maybe generating the command lists is the bottleneck, even though the back-end is parallel.
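To picture what that frontend/backend split looks like, here's a loose conceptual sketch in C++ (my own toy code, not Unreal source): one thread records an abstract command list, and the backend fans the translation work out across workers.

```cpp
// Conceptual sketch only -- not Unreal source code. One frontend thread records
// a platform-agnostic command list; the platform backend can translate it into
// native command buffers on several worker threads.
#include <algorithm>
#include <future>
#include <vector>

struct DrawCommand { int meshId; int materialId; };   // stand-in for real state
using CommandList = std::vector<DrawCommand>;

// Frontend: runs on the single render thread and just records intent.
CommandList RecordFrame(const std::vector<DrawCommand>& visibleDraws) {
    return CommandList(visibleDraws.begin(), visibleDraws.end());  // serial by design
}

// Backend: D3D12/Vulkan allow command buffers to be built in parallel, so the
// recorded list is split into chunks and each chunk is encoded concurrently.
void TranslateChunk(const DrawCommand* first, size_t count) {
    for (size_t i = 0; i < count; ++i) {
        const DrawCommand& cmd = first[i];
        (void)cmd;   // ... encode cmd into a native command buffer here ...
    }
}

void SubmitFrame(const CommandList& cmds, unsigned workerCount) {
    const unsigned workers = std::max(1u, workerCount);
    std::vector<std::future<void>> jobs;
    const size_t chunk = (cmds.size() + workers - 1) / workers;
    for (size_t start = 0; start < cmds.size(); start += chunk) {
        const size_t n = std::min(chunk, cmds.size() - start);
        jobs.push_back(std::async(std::launch::async, TranslateChunk,
                                  cmds.data() + start, n));
    }
    for (auto& j : jobs) j.wait();   // all native lists encoded -> submit to GPU
}
```

If that single recording pass is what's pegging one core at ~65%, adding more backend workers won't move the needle much, which would line up with the utilization pattern in the video.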
 
Good video, but you can't really just compile a project in UE5.2 and expect there to be no compilation stuttering. What 5.2 allows is for developers to better manage when shaders will be compiled, and which ones, as well as providing a better fallback solution for when things might not be quite ready yet. It's another tool in the box, essentially. It was never going to be "set it and forget it" like we had hoped.. but as said, it's a combination of knowing which shaders will need to be compiled ahead of time, and doing a better job of not needlessly compiling shaders which will never be used.. and it could go a long way in helping to optimize pre-compilation times.

What I'd like them to work on is making the actual method of collecting and building the PSO caches as streamlined as possible. There's currently a lot of steps you have to do just to ensure your project will collect the PSO data, generate the appropriate files, and then move those files around to be able to re-cook that data back into the project. I think a lot of that stuff could be automated so that developers wouldn't have to worry about it. That's what I originally thought this "Automatic PSO gathering" stuff was going to be.. something that just happens automatically without devs having to do a bunch of tedious shit. I always expected they'd have to play the games to generate the PSO caches.. but just that it would do it all seamlessly without having to mess around.

I'm content though. There's been a massive amount of attention brought to the issue from the public and figures in the industry lately, and things are continually happening. The next order of business would be to work on asset streaming to reduce traversal stuttering. Things are moving in the right direction, I think.
 
PSO is indeed hard to manage. The static ones are fine: you can loop through all the materials and enumerate all of the combinations. But then there are the features that get dynamically enabled at runtime, which are difficult to collect beforehand. And you couldn't just include all of them, cuz that's way too many variants.
Of course, assuming infinite development time, devs can manually recognize all the dynamic variants and add them to the pre-compilation list, but this is hard to plan for ahead of time. Though at the very least, even precompiling some of the shaders is better than nothing.
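To make the combinatorics concrete, here's a toy sketch (names and structure invented for illustration, nothing to do with UE's actual shader map code) of why the static list is easy and the exhaustive dynamic list blows up:

```cpp
// Toy illustration only, not engine code: static material permutations are
// easy to enumerate; exhaustive dynamic feature combinations are not.
#include <cstdio>
#include <string>
#include <vector>

struct Permutation { std::string material; unsigned featureBits; };

int main() {
    const std::vector<std::string> materials = {"Rock", "Water", "Skin"};  // known statically
    const std::vector<std::string> dynamicFeatures =                       // toggled at runtime
        {"Fog", "Decals", "DitherLOD", "VelocityPass", "HitFlash"};

    // Static pass: one entry per material -- trivial to precompile.
    std::vector<Permutation> staticList;
    for (const auto& m : materials)
        staticList.push_back({m, 0});

    // Exhaustive dynamic pass: every on/off combination of runtime features.
    // 3 materials * 2^5 combos = 96 here; with dozens of toggles per real
    // material the count explodes, which is why you can't just ship them all
    // and devs end up curating the pre-compile list by hand instead.
    std::vector<Permutation> exhaustiveList;
    const unsigned combos = 1u << dynamicFeatures.size();
    for (const auto& m : materials)
        for (unsigned bits = 0; bits < combos; ++bits)
            exhaustiveList.push_back({m, bits});

    std::printf("static: %zu variants, exhaustive: %zu variants\n",
                staticList.size(), exhaustiveList.size());
}
```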
 
PSO is indeed hard to manage. The static ones are fine: you can loop through all the materials and enumerate all of the combinations. But then there are the features that get dynamically enabled at runtime, which are difficult to collect beforehand. And you couldn't just include all of them, cuz that's way too many variants.
Of course, assuming infinite development time, devs can manually recognize all the dynamic variants and add them to the pre-compilation list, but this is hard to plan for ahead of time. Though at the very least, even precompiling some of the shaders is better than nothing.
Challenging for sure to do the right thing about it, but it is also a *necessary* challenge, I would say, to actually release a complete product on PC. Not doing the hard work usually means you are releasing a pretty bad product for all PC users.
 
Sounds like overall, the PC platform as a whole is just not equipped to deal with the increased complexity and quantity of shaders in games in any kind of ideal way.

Chalk this up as another win for consoles this generation, and to how fixed hardware has some pretty significant advantages.

Not to say PC gaming is bad in comparison, it also still has plenty of advantages, but it's not going to be this 'consoles but inherently better' situation that it's been for the past decade or so.

This seems like an overly negative take. The new async compilation and skip draw between them seem to do a pretty good job at cleaning up the worst of the stutter already. The Valley of the Ancient stutter, for example, which was similar to what we saw in Callisto Protocol at launch, went from unplayable to merely annoying IMO, which is a huge win. And as Flappy said, this is before any developer pre-compilation step is implemented. It's hard to imagine that with a decent pre-comp step (which usually cleans up most if not all shader comp stutter on its own already, and is itself made more effective and easier to implement in UE5.2), plus these new features, PC games would be discernibly different from their console counterparts in this regard. Sure, developers can always not implement those features, but shitty ports exist on all platforms and aren't a fundamental platform problem - although granted, the additional complexity of the PC makes it more vulnerable to them.

The proof will of course be in the pudding but it looks to me like the tools to 'solve' this problem in the PC space may now exist in pretty good shape in UE5.2.


Quick summary: Still issues with core utilization, and while the asynchronous shader cache + skip draw helps with shader stutter, it is not a solution by itself - it can still produce significant stutters on a cold cache, so games will likely have to combine a precompile step with asynchronous compilation - hope most will!
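For anyone curious what "asynchronous cache + skip draw" means mechanically, here's a rough illustrative pattern (my own pseudostructure, not Unreal's API): on a cache miss the draw is skipped for a frame or two while the PSO compiles on a worker thread, instead of the render thread blocking on the compile.

```cpp
// Illustrative pattern only (not Unreal's API): async PSO compilation with a
// "skip draw" fallback so a cold cache produces dropped draws, not long hitches.
#include <chrono>
#include <cstdint>
#include <future>
#include <unordered_map>

struct Pso { /* compiled pipeline state */ };

static Pso CompilePso(uint64_t /*key*/) {        // expensive: can take milliseconds+
    return Pso{};
}

class PsoCache {
public:
    // Returns the PSO if ready; otherwise starts (or polls) an async compile
    // and returns nullptr so the caller can skip this draw call for now.
    const Pso* TryGet(uint64_t key) {
        if (auto it = ready_.find(key); it != ready_.end())
            return &it->second;

        auto pending = pending_.find(key);
        if (pending == pending_.end()) {
            pending_.emplace(key, std::async(std::launch::async, CompilePso, key));
            return nullptr;                       // first miss: kick off compile, skip draw
        }
        if (pending->second.wait_for(std::chrono::seconds(0)) ==
            std::future_status::ready) {
            ready_.emplace(key, pending->second.get());
            pending_.erase(pending);
            return &ready_.find(key)->second;     // finished since last frame
        }
        return nullptr;                           // still compiling, skip again
    }

private:
    std::unordered_map<uint64_t, Pso> ready_;
    std::unordered_map<uint64_t, std::future<Pso>> pending_;
};
```

A real engine would also persist the compiled entries to disk so the miss only ever happens once per driver/patch combination, which is exactly the cold-cache case being stressed in the video.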

There is also still traversal stutter. Overall, improved but...not exactly encouraging.

Outside of the stutter aspects I also found the hardware Lumen performance to be very encouraging - both better quality and faster than software Lumen when GPU limited - although presumably still scene dependent.

Regarding the traversal stutter, I wonder how much, if any, of this can be solved by GPU decompression, and how well UE5.2 supports DirectStorage 1.1 at present? Alex noted that framerates dropped when traversing certain points in the game and then slowly increased back to normal over time. That looks just like the behavior we saw in TLOU, which was attributed to the CPU decompression requirements. GPU decompression could potentially lessen that issue or even remove it entirely if CPU limited.
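For reference, this is roughly what a GPU-decompressed read looks like with DirectStorage 1.1+, going by my reading of the public dstorage.h SDK header; treat the exact field names as approximate, and note that error handling and fence signalling are omitted. The point is that the compressed bytes go straight to the GPU, so the CPU cost seen in TLOU largely disappears.

```cpp
// Sketch of a DirectStorage (1.1+) read with GPU decompression, based on my
// reading of the public dstorage.h header -- approximate, not a drop-in
// implementation. Error handling and fence signalling omitted.
#include <dstorage.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdint>
using Microsoft::WRL::ComPtr;

void LoadCompressedAsset(ID3D12Device* device, ID3D12Resource* destBuffer,
                         const wchar_t* path, uint64_t fileOffset,
                         uint32_t compressedSize, uint32_t uncompressedSize)
{
    ComPtr<IDStorageFactory> factory;
    DStorageGetFactory(IID_PPV_ARGS(&factory));

    ComPtr<IDStorageFile> file;
    factory->OpenFile(path, IID_PPV_ARGS(&file));

    DSTORAGE_QUEUE_DESC queueDesc{};
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.Device     = device;
    ComPtr<IDStorageQueue> queue;
    factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));

    DSTORAGE_REQUEST request{};
    request.Options.SourceType        = DSTORAGE_REQUEST_SOURCE_FILE;
    request.Options.DestinationType   = DSTORAGE_REQUEST_DESTINATION_BUFFER;
    request.Options.CompressionFormat = DSTORAGE_COMPRESSION_FORMAT_GDEFLATE; // GPU path
    request.Source.File.Source = file.Get();
    request.Source.File.Offset = fileOffset;
    request.Source.File.Size   = compressedSize;          // bytes on disk
    request.Destination.Buffer.Resource = destBuffer;
    request.Destination.Buffer.Offset   = 0;
    request.Destination.Buffer.Size     = uncompressedSize;
    request.UncompressedSize = uncompressedSize;           // size after GPU decompression

    queue->EnqueueRequest(&request);
    // Normally an EnqueueSignal() on a fence would follow so the engine knows
    // when the decompressed data is resident on the GPU.
    queue->Submit();
}
```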
 
Well even Alex there made it clear that these new tools in 5.2 are really only bandaid solutions, not a proper fix that will get rid of the problem. And we're still gonna have to have these long compilations on startup which is absolutely a downside.

I think a couple of you are getting very defensive, but it's ok to admit PC isn't the 'perfect' platform. I still love it, but it seems foolish to ignore when there are actual downsides.
 
Well even Alex there made it clear that these new tools in 5.2 are really only bandaid solutions, not a proper fix that will get rid of the problem. And we're still gonna have to have these long compilations on startup which is absolutely a downside.

The improvements in UE5 may not be a perfect fix insofar as they don't automatically remove all shader compilation stutter without any developer intervention, but they do provide tools to developers who want to remove all shader compilation stutter to effectively do so. Or at least it appears that way on the face of it.

Regarding the compilation times at start up, I really don't see an issue there. We wait for the game to download, we wait for the game to install, who cares about a few more minutes for the shaders to compile as well? If that were hidden within the download/install process no-one would be complaining.

I think a couple of you are getting very defensive, but it's ok to admit PC isn't the 'perfect' platform. I still love it, but it seems foolish to ignore when there are actual downsides.

I'm not sure anyone is trying to argue that. I'm simply disputing your earlier statement that the "platform as a whole is just not equipped to deal with the increased complexity and quantity of shaders in games in any kind of ideal way". Maybe you didn't mean for it to sound so negative, but that to me reads as a serious fundamental flaw with the PC as a gaming platform to which there is no solution. Whereas the reality is that at a platform level, the tools exist to provide an experience that is basically identical to the console experience in terms of shader comp stutter, with the price from a user perspective (in some, not all cases) being a shader pre-compilation stage for a few minutes on the first run of the game. I just don't see that as equating to some sort of significant disadvantage; I'd classify it as a minor inconvenience at most.

The real disadvantage is that because they aren't automatic, these tools won't always be properly used by developers which can result in games that are significantly worse from a stuttering perspective than their console counterparts. But that's certainly quite different to the 'platform as a whole not being equipped to deal with the increased complexity and quantity of shaders in games'.
 
The improvements in UE5 may not be a perfect fix insofar as they don't automatically remove all shader compilation stutter without any developer intervention, but they do provide tools to developers who want to remove all shader compilation stutter to effectively do so. Or at least it appears that way on the face of it.

Regarding the compilation times at start up, I really don't see an issue there. We wait for the game to download, we wait for the game to install, who cares about a few more minutes for the shaders to compile as well? If that were hidden within the download/install process no-one would be complaining.

This is commonly suggested, but the problem is that it's not a one-time thing - shaders have to be recompiled after every driver update and often after game patches. You saw people complaining constantly on the Steam forums for TLOU1 after every patch because their cache was invalidated with each one. With ~8 patches, for some people with midrange CPUs, that meant they were probably looking at several hours of shader compilation if they started up the game after every patch, looking to see if things had improved. This high CPU usage is also a mark against including it as part of the download process; downloading large games is something I do then go about using my PC - people would likely be wondering wtf if their PC's CPU suddenly spiked to 100% for 3-20 mins at the tail end of the game's installation.

However, it is of course preferable to in-game stuttering, and as Naughty Dog showed, there's also plenty of room for optimization - the shader cache for The Last of Us is now a fraction of the size it was at launch, and it compiles far quicker. Shader compiling is also very multi-core friendly to boot; it's one aspect of gaming that will scale very well with future processors, unlike potential issues with traversal stuttering.
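On the multi-core point, a pre-compilation pass is close to embarrassingly parallel; here's a minimal sketch (illustrative only, not engine code) of farming independent PSO jobs out to every hardware thread:

```cpp
// Minimal illustration (not engine code) of why a pre-compilation pass scales
// with core count: every PSO compiles independently, so a plain work queue
// keeps all hardware threads busy with nothing shared to fight over.
#include <algorithm>
#include <atomic>
#include <cstdint>
#include <thread>
#include <vector>

struct CompiledShader { uint64_t key = 0; };

static CompiledShader CompileOne(uint64_t key) {   // CPU-heavy, independent job
    return CompiledShader{key};
}

std::vector<CompiledShader> PrecompileAll(const std::vector<uint64_t>& keys) {
    std::vector<CompiledShader> results(keys.size());
    std::atomic<size_t> next{0};
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());

    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&] {
            // Each worker pulls the next unclaimed index; each slot is written
            // by exactly one thread, so no locking is needed.
            for (size_t i = next.fetch_add(1); i < keys.size(); i = next.fetch_add(1))
                results[i] = CompileOne(keys[i]);
        });
    }
    for (auto& t : pool)
        t.join();
    return results;
}
```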

Well even Alex there made it clear that these new tools in 5.2 are really only bandaid solutions, not a proper fix that will get rid of the problem.

Until we perhaps get more fundamental changes to how the API handles these, like what Vulkan is doing, these will have to be 'band-aid' solutions in the interim, but this misunderstands what DF's video was showing. Alex was specifically testing a feature of UE5 to handle shaders that aren't pre-compiled; it's meant to be an additive feature, not a silver bullet, but he's isolating it to demonstrate its effectiveness when strained to the limit. Even with this used in a way that is likely not the intended method, the stuttering was massively reduced as compared to a similar situation.

It's good to draw attention to it at this point as not a complete fix, because it isn't - that's why UE5 also includes improvements to the number of materials it can now gather for precompilation. What it means is that there is no single method that will get 99.5% of potential shader stutters, which is not exactly new - Horizon Zero Dawn does this, as does Uncharted; combining pre-compilation with an asynchronous method is quite common. UE5 just makes the fall-off from not bothering to perform any pre-compilation less steep. The tools are just vastly improved now.

And we're still gonna have to have these long compilations on startup which is absolutely a downside.

TLOU1, and perhaps Detroit: Become Human (though it's likely a few minutes on modern CPUs now), are really the outliers here though. With most games that get a shader pre-compilation step added, it usually occurs during the opening loading screen.

I think a couple of you are getting very defensive, but it's ok to admit PC isn't the 'perfect' platform. I still love it, but it seems foolish to ignore when there are actual downsides.

You're getting pushback for deciding this video, showing significant improvement, is now the canary in the coalmine for the platform wrt shader issues, when really it's the opposite. UE4 and the rise of DX12 was the worst-case scenario, an API suddenly pushing the responsibility of shader compiling wholly on the developer, and an engine that produces massive amounts of material shaders with an opaque - and flawed - method to capture them beforehand. Combine those with the constrained development environment in the covid era and you get the disaster that some releases exhibited on this front. This video is proof that focus on this issue, by developers, DF, gamers and other outlets, is having tangible results.

It's not perfect as it shows, I don't know if anything will ever be when you have JIT compilation as a necessary step, but it's odd to get the reaction the platform is fundamentally flawed in this one aspect when we're now seeing more attention and shipping approaches to actually solving this than ever before. I see some concerning things revealed in that video, but frankly shader compiling seems to be the least of them.
 
You're getting pushback for deciding this video, showing significant improvement, is now the canary in the coalmine for the platform wrt shader issues, when really it's the opposite. UE4 and the rise of DX12 was the worst-case scenario, an API suddenly pushing the responsibility of shader compiling wholly on the developer, and an engine that produces massive amounts of material shaders with an opaque - and flawed - method to capture them beforehand. Combine those with the constrained development environment in the covid era and you get the disaster that some releases exhibited on this front. This video is proof that focus on this issue, by developers, DF, gamers and other outlets, is having tangible results.

It's not perfect as it shows, I don't know if anything will ever be when you have JIT compilation as a necessary step, but it's odd to get the reaction the platform is fundamentally flawed in this one aspect when we're now seeing more attention and shipping approaches to actually solving this than ever before. I see some concerning things revealed in that video, but frankly shader compiling seems to be the least of them.
All I said was that it proves PC has some downsides, as there are clearly fundamental problems that won't be fixed, only improved.

You yourself admit it's not perfect, and that's all my real point was. We went from a generation where the PC basically *was* perfect and could do everything consoles can do and better, but it's clear this generation will be a bit different and consoles will have a couple distinct technical advantages.

If this upsets you, then yes, you are simply getting defensive over some matter-of-fact statements. I love PC gaming, but I can also appreciate when consoles are doing things PCs can't, or at least not as well. I'm a hardware and gaming enthusiast at the end of the day, not a platform warrior.

Also, in terms of compilation times, they will never be quick enough to do 'during the opening loading screen', because load times are also meant to be decreased dramatically, so either way, we're talking about missing out on advantages and having to wait much longer than consoles will have to.
 
You not liking what I said is different from it being 'stupid', but I'm aware you struggle with understanding that kind of thing.

There are more than enough games around that push the technical limit and completely trash any weight and truth you think your comment has.

Sounds like overall, the PC platform as a whole is just not equipped to deal with the increased complexity and quantity of shaders in games in any kind of ideal way.

Let's apply that poor take to the consoles, shall we...

After seeing various games release with extremely poor performance and low internal resolutions... "Seems like overall, the console platforms as a whole are just not equipped to deal with the increased complexity and quantity of shaders in games in any kind of ideal way."
 
This is commonly suggested, but the problem is that it's not a one-time thing - shaders have to be recompiled after every driver update and often after game patches. You saw people complaining constantly on the Steam forums for TLOU1 after every patch because their cache was invalidated with each one. With ~8 patches, for some people with midrange CPUs, that meant they were probably looking at several hours of shader compilation if they started up the game after every patch, looking to see if things had improved. This high CPU usage is also a mark against including it as part of the download process; downloading large games is something I do then go about using my PC - people would likely be wondering wtf if their PC's CPU suddenly spiked to 100% for 3-20 mins at the tail end of the game's installation.

Yes that's fair enough, I did consider mentioning the "re-compilation" requirement after updates but didn't want to stray too far from my central point in that post. To me that's pretty much a non-issue because I encounter it so rarely. TLOU is a pretty big exception there as I've deliberately jumped into it straight after every update to re-test the performance. Most of the time I buy games long after the initial patch flurry has completed and I also tend to upgrade my drivers only when starting a new game that benefits from it (so not mid game). I can see how it could be an annoyance for people who buy games early in their lifecycle though, or for multiplayer games where they are updated regularly over their life and are played for a long period of time.

You yourself admit it's not perfect, and that's all my real point was. We went from a generation where the PC basically *was* perfect and could do everything consoles can do and better, but it's clear this generation will be a bit different and consoles will have a couple distinct technical advantages.

If this upsets you, then yes, you are simply getting defensive over some matter-of-fact statements. I love PC gaming, but I can also appreciate when consoles are doing things PCs can't, or at least not as well. I'm a hardware and gaming enthusiast at the end of the day, not a platform warrior.

I still don't agree with this framing. Consoles obviously can do things that PCs can't, e.g. quick resume, better/more uniformly implemented haptics, slicker patching and update processes, unfragmented game stores etc... but stuttering in games, or the lack thereof, is absolutely not something that, as a platform, the PC is unable to do as well as consoles. It simply requires more developer effort on PC to get it to the same level as consoles. Effort that unfortunately isn't always applied. So I can absolutely agree that the PC as a platform is at a disadvantage in terms of general game quality because it requires more effort than consoles to achieve the same level of quality, and shader comp stutter is one way that additional developer effort requirement can negatively manifest. But in the context of the Digital Foundry video, which is demonstrating the new tools available to make it easier for developers to reach the same quality level as consoles with regard to shader comp stutter, I don't think it makes sense to claim that as a platform, the PC is unable to attain that same level of quality. Plenty of games have already proven it is, and this video is just showing how attaining that is getting easier (much easier in the case of Unreal Engine, which is the main culprit to date for shader comp stutter in the first place).
 
This is commonly suggested, but the problem is that it's not a one-time thing - shaders have to be recompiled after every driver update and often after game patches. You saw people complaining constantly on the Steam forums for TLOU1 after every patch because their cache was invalidated with each one. With ~8 patches, for some people with midrange CPUs, that meant they were probably looking at several hours of shader compilation if they started up the game after every patch, looking to see if things had improved. This high CPU usage is also a mark against including it as part of the download process; downloading large games is something I do then go about using my PC - people would likely be wondering wtf if their PC's CPU suddenly spiked to 100% for 3-20 mins at the tail end of the game's installation.

However, it is of course preferable to in-game stuttering, and as Naughty Dog showed, there's also plenty of room for optimization - the shader cache for The Last of Us is now a fraction of the size it was at launch, and it compiles far quicker. Shader compiling is also very multi-core friendly to boot; it's one aspect of gaming that will scale very well with future processors, unlike potential issues with traversal stuttering.



Until we perhaps get more fundamental changes to how the API handles these, like what Vulkan is doing, these will have to be 'band-aid' solutions in the interim, but this misunderstands what DF's video was showing. Alex was specifically testing a feature of UE5 to handle shaders that aren't pre-compiled; it's meant to be an additive feature, not a silver bullet, but he's isolating it to demonstrate its effectiveness when strained to the limit. Even with this used in a way that is likely not the intended method, the stuttering was massively reduced as compared to a similar situation.

It's good to draw attention to it at this point as not a complete fix, because it isn't - that's why UE5 also includes improvements to the number of materials it can now gather for precompilation. What it means is that there is no single method that will get 99.5% of potential shader stutters, which is not exactly new - Horizon Zero Dawn does this, as does Uncharted; combining pre-compilation with an asynchronous method is quite common. UE5 just makes the fall-off from not bothering to perform any pre-compilation less steep. The tools are just vastly improved now.



TLOU1, and perhaps Detroit: Become Human (though it's likely a few minutes on modern CPUs now), are really the outliers here though. With most games that get a shader pre-compilation step added, it usually occurs during the opening loading screen.



You're getting pushback for deciding this video, showing significant improvement, is now the canary in the coalmine for the platform wrt shader issues, when really it's the opposite. UE4 and the rise of DX12 was the worst-case scenario, an API suddenly pushing the responsibility of shader compiling wholly on the developer, and an engine that produces massive amounts of material shaders with an opaque - and flawed - method to capture them beforehand. Combine those with the constrained development environment in the covid era and you get the disaster that some releases exhibited on this front. This video is proof that focus on this issue, by developers, DF, gamers and other outlets, is having tangible results.

It's not perfect as it shows, I don't know if anything will ever be when you have JIT compilation as a necessary step, but it's odd to get the reaction the platform is fundamentally flawed in this one aspect when we're now seeing more attention and shipping approaches to actually solving this than ever before. I see some concerning things revealed in that video, but frankly shader compiling seems to be the least of them.

Is this a PC issue or a poor port issue? The Last of Us was a shit show at best, to be kind.

For the shader cache, what needs to be taken into account in terms of the PC specs? Is it just based on the graphics card? If so, why wouldn't the solution be to look at the most popular cards on Steam, take the top 20-30, and just have the developer compile those shaders and send them out in the patch, conditional on what graphics card you have?
 
Great video - one note is that this looks like the classic case where there's a bottleneck somewhere that isn't CPU / GPU compute resources, especially seeing the loads spread across all the cores nice and evenly, but utilization extremely low, in some cases <50% on all cores.

If you're memory-subsystem bound, you can throw as many CPU cores and as much CPU frequency at the problem as you want, but you won't get much faster; you'll just have different things stalling while waiting for data.

I'd be really curious to re-run the same tests with either:

1) Raptor Lake CPU, with fast/slow DDR4 and fast/slow DDR5
2) AMD platform, two nearly identical SKUs with and without V-Cache (or a 7950x and test each CCD separately)

My hunch is that it's going to scale with memory latency/bandwidth more than it will with cores.
Waiting on memory isn't going to manifest as low CPU utilization; a process where every memory access misses the cache will still show up as using 100% CPU.
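A tiny illustration of that point (standalone toy benchmark, nothing engine-specific): a pointer-chasing loop is almost entirely bound by DRAM latency, yet the OS reports the core as ~100% busy the whole time. Low reported utilization therefore points at blocking and serialization (locks, waits, one hot thread), not at memory stalls.

```cpp
// Toy demonstration: memory-latency bound, yet the core reads as fully busy.
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

int main() {
    const size_t n = size_t{1} << 24;            // 16M entries (~128 MB), far bigger than any cache
    std::vector<size_t> next(n);
    std::iota(next.begin(), next.end(), size_t{0});

    // Sattolo's algorithm: builds a single random cycle so the chase below
    // touches the whole array in cache-hostile order.
    std::mt19937_64 rng{42};
    for (size_t i = n - 1; i > 0; --i) {
        std::uniform_int_distribution<size_t> pick(0, i - 1);
        std::swap(next[i], next[pick(rng)]);
    }

    size_t at = 0;
    const auto t0 = std::chrono::steady_clock::now();
    for (size_t step = 0; step < 50'000'000; ++step)
        at = next[at];                           // serial dependency: roughly one cache miss per step
    const auto t1 = std::chrono::steady_clock::now();

    const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
    // Task Manager / perf counters will show ~100% on this core for the whole run.
    std::printf("finished at index %zu after %lld ms\n", at, static_cast<long long>(ms));
}
```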
 
Also, in terms of compilation times, they will never be quick enough to do 'during the opening loading screen', because load times are also meant to be decreased dramatically, so either way, we're talking about missing out on advantages and having to wait much longer than consoles will have to.

They often literally do do it during the splash screens though; many times, even on console games (or rather, more often on console), they're not skippable (Insomniac games are good at this and allow you to skip them if you've booted the game up recently).

Regardless, the point was that games like TLOU1 are extreme outliers in this respect, at least so far. The vast majority of games that do have a precompilation step are far quicker, and some, like Horizon and Uncharted, also allow you to get into the game immediately like TLOU does; however, they don't have the same impact on gameplay from background compiling. A delay of a few minutes, if that, after every driver update is hardly a significant detriment to the platform. Sure, if ~20 minute precompilation stages become the norm, that's another matter - but we have no indication that really is going to be the case.

Of course there are advantages and deficiencies to every platform; you're not presenting some bold hard truths here that people are just unwilling to accept (well, most) - it's how much of a deficiency you're saying exists, and when you're doing it. To imply the PC is irreparably screwed because of shader compilation in response to the investigation of an engine's significant improvements in this respect is very odd timing, especially considering the video was specifically designed to stress test just one aspect of an engine's attempt to deal with this problem. It is not the only approach in the engine itself, and it's not the only approach the industry as a whole is trying in order to tackle it.

Saying "some platforms have differences" is very different than saying "Sounds like overall, the PC platform as a whole is just not equipped to deal with the increased complexity and quantity of shaders in games in any kind of ideal way" and "This problem genuinely only presents itself as soon as a developer wants to use high quality shader materials or any large amount of them" (which also is not accurate - there's a reason even games made a decade + ago were compiling shaders before gameplay and during loading screens). The implication is quite clear - PC gamers will have to accept stutter or exceedingly long compile times for any modern game with material complexity. There is just no basis for this kind of hyperbole at present, and certainly not based on a quick look at what improvements and engine has made vs. it doing jack-shit before. After all, this is why we're even discussing this now and why #stutterstruggle was a thing - it's largely due to UE4 games not doing anything to deal with this, less than most games did when shaders were a new concept. Even just adding in a ~1 minute compile step, even with UE4's limited utility in this respect, was enough to change games from launch disasters such as Sackboy, into stellar ports.

Like I said, I think the PC as a platform does have some unique challenges, but I'm more concerned with traversal stutter, single-threaded bottlenecks and seeing real-world uses of DirectStorage than I am about shader compilation. I think it's probably the one area that's getting some major traction atm.

Yes that's fair enough, I did consider mentioning the "re-compilation" requirement after updates but didn't want to stray too far from my central point in that post. To me that's pretty much a non-issue because I encounter it so rarely. TLOU is a pretty big exception there as I've deliberately jumped into it straight after every update to re-test the performance. Most of the time I buy games long after the initial patch flurry has completed and I also tend to upgrade my drivers only when starting a new game that benefits from it (so not mid game). I can see how it could be an annoyance for people who buy games early in their lifecycle though, or for multiplayer games where they are updated regularly over their life and are played for a long period of time.

This is exactly where this async + draw skip approach would come in very handy though - for those times when you don't want to wait. You avoid the huge constant stutter-fest, but you may get the occasional smaller hitch. If you want the smoothest possible experience though, you can wait. Ideally, if you choose to jump right in, the game will also be churning away at the cache in the background on low-priority threads.
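The "churning away on low-priority threads" part is basically this pattern (Windows-specific sketch, purely illustrative): the background worker drains the PSO queue but is scheduled below the game and render threads, so it soaks up idle cycles rather than competing for them.

```cpp
// Windows-specific sketch, purely illustrative: drain PSO compile jobs on a
// thread that runs below normal priority so foreground threads win contention.
#include <windows.h>
#include <cstdint>
#include <queue>
#include <thread>

static void CompilePsoBlocking(uint64_t /*key*/) {
    // stand-in for a real, CPU-heavy PSO compile
}

static void BackgroundCompile(std::queue<uint64_t> jobs) {
    // Lower this worker's scheduling priority; the game/render threads keep
    // their full time slices and the compiles mop up whatever is left over.
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_LOWEST);
    while (!jobs.empty()) {
        CompilePsoBlocking(jobs.front());
        jobs.pop();
    }
}

int main() {
    std::queue<uint64_t> jobs;
    for (uint64_t key = 0; key < 1000; ++key)
        jobs.push(key);

    std::thread worker(BackgroundCompile, std::move(jobs));
    // ... gameplay continues; any PSO still missing falls back to skip-draw ...
    worker.join();
}
```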
 

To ensure a smooth dimension-hopping experience, our team implemented DirectStorage 1.2 in Ratchet & Clank: Rift Apart on PC, including GPU decompression.

Richard van der Laan, Senior Lead Programmer at Nixxes Software, explains:
To enable quick loading and instant transition between dimensions, the game needs to be able to load assets quickly. DirectStorage ensures quick loading times and GPU decompression is used at high graphics settings to stream assets in the background while playing. Traditionally, this decompression is handled by the CPU, but at a certain point there is an advantage to letting the GPU handle this, as this enables a higher bandwidth for streaming assets from storage to the graphics card. We use this to quickly load high-quality textures and environments with a high level of detail.

Principal Programmer Alex Bartholomeus:
For Ratchet & Clank: Rift Apart on PC we added adaptive streaming based on live measurement of the available hardware bandwidth. This allows us to tailor the texture streaming strategy for the best possible texture streaming on any configuration. With DirectStorage, the use of a fast NVMe SSD and GPU decompression, this results in very responsive texture streaming even at the highest settings.

DirectStorage is developed to fully utilize the speed of fast PCIe NVMe SSDs, but the technology is also compatible with SATA SSDs and even traditional hard disk drives. This means Ratchet & Clank: Rift Apart on PC can use the same technology for loading data, regardless of the storage device in your system.
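The "adaptive streaming based on live measurement" idea could look something like this in spirit (purely illustrative, all names invented, not Nixxes' code): track recent effective read throughput and derive the streaming budget from it, so NVMe, SATA and HDD configurations each settle on an appropriate tier.

```cpp
// Purely illustrative, names invented: pick a streaming budget from measured
// read throughput rather than from a fixed assumption about the drive.
#include <algorithm>
#include <chrono>
#include <cstdint>

class BandwidthGovernor {
public:
    void OnReadCompleted(uint64_t bytes, std::chrono::steady_clock::duration took) {
        const double seconds = std::chrono::duration<double>(took).count();
        const double mbps = (bytes / (1024.0 * 1024.0)) / std::max(seconds, 1e-6);
        avgMBps_ = 0.9 * avgMBps_ + 0.1 * mbps;   // smooth out bursty reads
    }

    // Faster storage -> bigger per-frame streaming budget; a SATA SSD or HDD
    // naturally settles on the lower tiers instead of hitching.
    uint64_t StreamingBudgetBytesPerFrame() const {
        if (avgMBps_ > 3000.0) return 64ull * 1024 * 1024;   // fast NVMe class
        if (avgMBps_ > 400.0)  return 16ull * 1024 * 1024;   // SATA SSD class
        return 4ull * 1024 * 1024;                           // HDD class
    }

private:
    double avgMBps_ = 400.0;   // conservative starting guess until measured
};
```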
 
Hopefully you can toggle DirectStorage on/off and GPU texture decompression on/off. Would love to see some comparisons on CPU utilization.
 
The Last Hope on Nintendo Switch gained notoriety for a trailer that revealed a sub-standard iterative take on The Last of Us... but what if we told you that none of the trailer content was actually in the game? What if we told you it was a whole lot worse? In fact, this is a cynical disaster area of a game with no redeeming features whatsoever - not even at its $0.99 sale price. John Linneman reports.
:ROFLMAO:
 
Hopefully you can toggle DirectStorage on/off and GPU texture decompression on/off. Would love to see some comparisons on CPU utilization.

The fact that they specifically mention it's part of the 'high' graphics settings indicates so. That it's classified as a 'high' graphics setting implies there's a potential GPU hit, which of course there would be, but it's going to be interesting to see how much, as that's been a concern raised about it (based on demo scenarios that are not necessarily representative of an in-game workload, mind you). Having it be toggleable between the CPU/GPU is great though, and afaik that's how DS 1.2 is meant to be presented in games.
 