Digital Foundry Article Technical Discussion [2023]

Finally finished DF's weekly video, and regarding their talk about FSR2:

I'm a little surprised that we're 2.5 years into this generation and we haven't seen any AI-based upscaling on the Series consoles, after the fuss that was made over their ML performance.

I've always been of the opinion that their ML performance in reality isn't high enough for upscaling, and I still feel that may be the reason.
 
Finally finished DF's weekly video, and regarding their talk about FSR2:

I'm a little surprised that we're 2.5 years into this generation and we haven't seen any AI-based upscaling on the Series consoles, after the fuss that was made over their ML performance.

I've always been of the opinion that their ML performance in reality isn't high enough for upscaling, and I still feel that may be the reason.

AMD haven't produced an ML solution for their cards yet. Performance aside, if they're not going to do it, who else is going to bother with that research and development?
 
Finally finished DF's weekly video, and regarding their talk about FSR2:

I'm a little surprised that we're 2.5 years into this generation and we haven't seen any AI-based upscaling on the Series consoles, after the fuss that was made over their ML performance.

I've always been of the opinion that their ML performance in reality isn't high enough for upscaling, and I still feel that may be the reason.
It's a $$$ thing, and a developer thing, not a hardware thing. You gotta train a model that both works well on a console and is performant, which costs $$$$. Then it's got to be implemented across all hardware configurations on PC.

To never make back any profit on it would be bad. I suppose you could lock it to DirectX, but honestly I can't see MS wanting to go this route; it is an avenue to compete, it just ain't cheap.

XeSS is already out there, why not run that?
 
Oblivion was considered a graphical showpiece when it came out.
Yep I bought a launch X360 for that game. And the game was (still is) great too.
It's a $$$ thing, and a developer thing, not a hardware thing. You gotta train a model that both works well on a console and is performant, which costs $$$$. Then it's got to be implemented across all hardware configurations on PC.

To never make back any profit on it would be bad. I suppose you could lock it to DirectX, but honestly I can't see MS wanting to go this route; it is an avenue to compete, it just ain't cheap.

XeSS is already out there, why not run that?
Exactly. Currently, machine learning inference is actually used more by Sony devs on PS5 (Santa Monica Studio on Ragnarok and Insomniac on the Spider-Man games), while AFAIK no MS developers have used it in their own games yet. This is paradoxical, because a couple of years ago the company touting the ML card was actually Microsoft, not Sony.
 
AMD not producing one wouldn't stop developers from making one for Series consoles.

We generally don't see developers investing in their own AA solutions, never mind ML ones. You only have to look at FSR 2.0 adoption: for whatever criticisms you can level at it, even well-funded developers still implement that over rolling their own, possibly ML-based, tech. My AMD comment was reflecting the reality that a solution coming from elsewhere is unlikely.

Curious as to what id will do on the AA front. They did roll their own AA solution last gen and seem like a prime combination of talented performance whores* and Xbox first party.

*There's probably a better way to describe them! 🙂
 
Didn't Rich say that Microsoft showed DF an ML-based upscaling solution on Gears when they visited them before the console launched?

IIRC, John mentioned that VRS allowed The Coalition to save something like 12% of rendering time per frame, thus allowing Gears 5 to run at a higher average resolution, keeping closer to 4K imagery.
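A back-of-the-envelope sketch of what that saving buys (the 16.7 ms budget and the linear pixel-cost assumption are mine; only the ~12% figure comes from the post above):

#include <cmath>
#include <cstdio>

int main() {
    const double frame_budget_ms = 16.7;  // assumed 60 fps target
    const double vrs_saving = 0.12;       // ~12% of render time saved, per the figure quoted above

    // Time freed per frame.
    const double freed_ms = frame_budget_ms * vrs_saving;

    // If GPU cost scales roughly linearly with pixel count, ~12% more budget
    // allows ~12% more pixels, i.e. about a 6% bump per axis.
    const double per_axis_scale = std::sqrt(1.0 + vrs_saving);

    std::printf("Freed ~%.1f ms per 16.7 ms frame; per-axis resolution headroom ~%.1f%%\n",
                freed_ms, (per_axis_scale - 1.0) * 100.0);
    return 0;
}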
 
It's a $$$ thing, and a developer thing, not a hardware thing. You gotta train a model that both works well on a console and is performant, which costs $$$$. Then it's got to be implemented across all hardware configurations on PC.

To never make back any profit on it would be bad. I suppose you could lock it to DirectX, but honestly I can't see MS wanting to go this route; it is an avenue to compete, it just ain't cheap.

XeSS is already out there, why not run that?

I wonder, however, in regard to that: MS has a lot of Series X hardware sitting there waiting for cloud instances to be spun up by users. Shouldn't they be able to use that capacity to continuously train those models on the hardware, and even rent it out or lend it out to other developers?
 
I wonder, however, in regard to that: MS has a lot of Series X hardware sitting there waiting for cloud instances to be spun up by users. Shouldn't they be able to use that capacity to continuously train those models on the hardware, and even rent it out or lend it out to other developers?
It's not just the hardware requirement (and training is significantly more work than running the model); it's the labour requirement.

MS would be more interested in licensing or buying out a company that does this; I can’t see an internal team taking on this one role. What happens when you finish? What do you work on next?

Seems like an investment that can be made by middleware companies, like we see with the one in Spider-Man, or developers can attempt to do this themselves like they did with Gears of War.
 

To provide some info on the content:

  • Competent port overall, but uninspiring. So, you know, Capcom.
  • Has an interesting approach to shader compilation. For weaker CPUs, like the R5 3600, the option to 'Compile Shaders At Startup' is enabled automatically, and you will get a ~12 minute shader compilation process. However, if the game detects a more powerful CPU, like the 12900K, this option is disabled automatically. The presumption being - if this worked as it should - that the game would be using the extra cores/threads to compile the shaders in the background while the game is running (like Horizon Zero Dawn does), but this doesn't seem to be how it operates - it appears it instead uses a just-in-time system, where compilation only happens when the effect is called, so you will get stutters on any system with this disabled (a rough sketch of both behaviours is included below). A shame, but the shader caching step takes ~3 minutes on such a CPU, so just keep it enabled.
  • The same awful screen-space reflections as in all RE Engine games, with no option for RT to fix them. Alex just recommends turning them off.
  • The settings menu has no preview window and no estimation of performance penalty for enabling certain features, so this is a regression compared to the RE games.
  • The PC's Highest texture setting will consume just over 8GB, and seems to have higher texture quality than the PS5 version - but Alex also notes the PS5's textures, and sometimes on the PC as well, don't seem to be streaming in properly. So there might be texture streaming bugs.
  • In terms of graphics scalability, there's not much - most settings should be left maxed out, with the exception of shadows, which can provide a benefit on lower-end cards without impacting detail that much. That and, of course, turn off SSR, which looks like crap anyway. The most scalability is found in the max # of players viewed in the pre-arena area, which relates more to the CPU side; setting that from 60 to 40 can provide big benefits on lower-end CPUs.
  • Might as well keep the low-latency option enabled; it has no performance penalty.
No mention in the video I think but from my time with the demo, this also doesn't support DLSS. Not sure if that changed in the release. Again...Capcom.

Alex's basic summary - "I can't get too mad at this, as it's not a disaster". :) It works, it can scale 'well enough' to lower end systems, but doesn't do much for higher end ones. Much like RE4, some annoying quirks and lack of embracing the PC's strengths, but it runs well.
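On the shader compilation point above, a minimal sketch of the two behaviours being described (the class and function names are hypothetical, not Capcom's actual implementation): precompiling every known pipeline up front versus compiling lazily on first use, which is where the in-game hitches come from.

#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical stand-in for a compiled GPU pipeline/effect.
struct Pipeline {};

// Stub for the expensive step; a real engine would call the graphics API here.
Pipeline CompilePipeline(const std::string& /*key*/) { return Pipeline{}; }

class PipelineCache {
public:
    // "Compile Shaders At Startup" behaviour: pay the whole cost once, before gameplay.
    void PrecompileAll(const std::vector<std::string>& known_keys) {
        for (const auto& key : known_keys)
            cache_.emplace(key, CompilePipeline(key));
    }

    // Just-in-time behaviour: first use of an effect compiles on demand,
    // which shows up as a stutter if it happens mid-gameplay.
    const Pipeline& Get(const std::string& key) {
        auto it = cache_.find(key);
        if (it == cache_.end())
            it = cache_.emplace(key, CompilePipeline(key)).first;  // hitch happens here
        return it->second;
    }

private:
    std::unordered_map<std::string, Pipeline> cache_;
};

int main() {
    PipelineCache cache;
    cache.PrecompileAll({"opaque", "shadow", "particles"});  // up-front route
    (void)cache.Get("post_fx");                              // JIT route: compiled on first use
    return 0;
}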
 
We were talking about ML AA solutions, rather than ML in general.

In that case, don't expect a solution from AMD. AMD worked, and is still working, fairly diligently on FSR. They made a point of developing a tech that didn't need ML, so why provide one now? The visual gap between DLSS and FSR is far too small to warrant an additional solution from AMD.

MS would have to build out a solution mostly targeted at their 1st-party devs. But again, we have FSR; there's no need for additional investment for minimal gain.

MS talked about upscaling textures with ML, so maybe MS can use the ML hardware to upscale older games that weren't developed with any super resolution support.
 
<thor_is_it_though.jpg>

With XeSS, DLSS and FSR available, how many devs do you think would be interested in supporting a 4th solution?

XeSS/DLSS when warranted but FSR for everything else, versus DLSS for Nvidia RTX, XeSS for Intel Arc, AMDLSS for its high-end cards/the Series consoles, and FSR for the PS5, Intel/AMD IGPs and miscellaneous cards missing the necessary hardware for ML solutions.
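To make that split concrete, a hypothetical selection sketch - the enum values and the 'AMDLSS' placeholder are illustrative, not real APIs:

#include <cstdio>

enum class Gpu { NvidiaRtx, IntelArc, AmdWithMlHardware, Other };
enum class Upscaler { Dlss, Xess, HypotheticalAmdMl, Fsr2 };

// Hypothetical dispatch mirroring the split described above: each vendor's ML
// solution where the hardware supports it, FSR 2 as the universal fallback.
Upscaler PickUpscaler(Gpu gpu) {
    switch (gpu) {
        case Gpu::NvidiaRtx:         return Upscaler::Dlss;
        case Gpu::IntelArc:          return Upscaler::Xess;
        case Gpu::AmdWithMlHardware: return Upscaler::HypotheticalAmdMl;
        default:                     return Upscaler::Fsr2;  // PS5, IGPs, older cards
    }
}

int main() {
    std::printf("Picked upscaler id: %d\n", static_cast<int>(PickUpscaler(Gpu::Other)));
    return 0;
}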
 
With XeSS, DLSS and FSR available, how many devs do you think would be interested in supporting a 4th solution?

I took this as originating from what improvements they could make for consoles, though, so that's what I'm referring to. If you're talking just about the PC, it would be an uphill battle to get a fourth reconstruction solution to widespread support, yeah.

For consoles, though, especially with this supposed technology being part and parcel of their SDKs, and also likely required to provide a meaningful performance lift with any potential mid-gen refresh, I can see an opportunity.

Whether it can actually happen is another matter, of course; it's not like just picking an available ML upscaling algorithm off the shelf, this stuff isn't easy! Sony/MS have some expertise here, but so does AMD, and they haven't given any indication yet that they can do this.

My main contention, though, is with the insinuation that FSR is "close enough" - I don't think it is, especially when it comes to the Performance modes, where you really want to compare ML reconstruction technologies, as that's the advantage they bring over purely temporal solutions or techniques like checkerboarding. While I certainly don't think DLSS is flawless in its Performance modes either, the gap becomes pretty significant when you move away from the Quality modes, IME.
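For context on why the Performance modes are the stress test: DLSS and FSR 2 share the same standard per-axis preset scales (1/1.5 for Quality, 1/2 for Performance), so from a 4K output the internal resolutions work out as in this small sketch:

#include <cmath>
#include <cstdio>

int main() {
    const int out_w = 3840, out_h = 2160;     // 4K output

    // Standard per-axis scale factors shared by the DLSS and FSR 2 presets.
    const double quality_scale = 1.0 / 1.5;   // ~66.7% per axis
    const double performance_scale = 0.5;     //  50.0% per axis

    std::printf("Quality:     %ldx%ld internal\n",
                std::lround(out_w * quality_scale), std::lround(out_h * quality_scale));
    std::printf("Performance: %ldx%ld internal\n",
                std::lround(out_w * performance_scale), std::lround(out_h * performance_scale));

    // Performance mode reconstructs the image from only a quarter of the output
    // pixels, which is where the ML vs. purely temporal gap tends to widen.
    return 0;
}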
 