Digital Foundry Article Technical Discussion [2024]

Counter-Strike 2 is unshackled from console limitations.

Factorio is unshackled from console limitations.

Escape from Tarkov is unshackled from console limitations.

Total War is unshackled from console limitations.
 
Sounds like simplifications (staggered updates, simpler materials, lesser geo) will continue to be used in HWRT until hardware evolves, then, if that is the case. It's not like the SWRT in Lumen/any other game is going to attempt to maintain material/geometric parity with the primary view once these things go online for AAA games.
Indeed, was going to say the same thing. Your point is certainly true @Lurkmass generally, but it's more a reply to folks who posit that we're going to raytrace primary rays in the near future. I actually have no doubt that in the long run primary rays will be in the noise of the performance equation, to the point that you might as well just trace them, but for the near future there are still some question marks, both with high-detail BVHs and volumetrics as you note (although RT is obviously a powerful tool for the latter as well). The scene representation in the RT structure is probably going to lag the features in the primary/shadow rays by a little bit for a while, but I have no reason to believe we won't continue to add similar features to that representation over time.

But definitely neither front is standing still. We want high quality geometry with tessellation, displacement and (ultimately) deformation for primary rays, but compromises can be made for many types of secondary rays. Shadows are definitely in an awkward middle position where the near future is probably messy. Big area lights clearly need to be RT, but we also need the full geometry detail for contact shadows, and there's no clean division between the two in reality, of course, as it depends on the spatial relationships. We'll probably see a mix of both for now, with local lights being moved to raytracing more quickly (they tend to cast softer shadows and can amortize the costs of BVH building nicely, while shadow-map-per-light eventually runs into a wall), but directional lights will likely be more difficult, as you want those high-frequency shadows and you can't cheat as much with objects in the distance.
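As a sketch of that mix, here is what a per-light technique choice could look like. Everything below is invented for illustration (the `Light` fields, names and the policy are not from any shipping engine): local and area lights go to ray tracing first, while the directional light keeps shadow maps for its high-frequency detail.

```cpp
#include <cassert>

// Hypothetical light description; fields are illustrative only.
struct Light {
    bool directional;    // sun-style light covering the whole scene
    float sourceRadius;  // area-light size in world units; 0 = punctual
};

enum class ShadowTech { ShadowMap, RayTraced };

// Minimal sketch of the policy described above: local/area lights move to
// ray tracing first (soft penumbras hide BVH/geometry simplifications and
// the BVH build cost is amortized across them), while the directional light
// keeps shadow maps for crisp contact detail at distance.
ShadowTech chooseShadowTech(const Light& l) {
    if (l.directional)
        return ShadowTech::ShadowMap;  // high-frequency shadows wanted
    if (l.sourceRadius > 0.0f)
        return ShadowTech::RayTraced;  // big area light: RT penumbra
    return ShadowTech::RayTraced;      // local punctual: amortize BVH cost
}
```

In a real engine the decision would of course also depend on distance, screen coverage and platform, which is exactly why the near future looks messy.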
 
Counter-Strike 2 is unshackled from console limitations.

Factorio is unshackled from console limitations.

Escape from Tarkov is unshackled from console limitations.

Total War is unshackled from console limitations.
And coincidentally these are PC exclusives and exceptions.

Curiously, how well did these games do, besides Counter-Strike 2, which has a huge cult following?
 
Factorio's developers said that as of Christmas 2022 they had sold 3.5 million copies. I think that's quite good for a game of this kind.
I can't find concrete numbers for Escape from Tarkov, but judging from its popularity I think it's safe to assume at least a few million copies sold.
The first Total War game probably didn't sell extremely well, but it did kickstart a successful series.
 
We want high quality geometry with tessellation, displacement and (ultimately) deformation for primary rays, but compromises can be made for many types of secondary rays. Shadows are definitely in an awkward middle position where the near future is probably messy. Big area lights clearly need to be RT, but we also need the full geometry detail for contact shadows, and there's no clean division between the two in reality, of course, as it depends on the spatial relationships. We'll probably see a mix of both for now, with local lights being moved to raytracing more quickly (they tend to cast softer shadows and can amortize the costs of BVH building nicely, while shadow-map-per-light eventually runs into a wall), but directional lights will likely be more difficult, as you want those high-frequency shadows and you can't cheat as much with objects in the distance.

Yeah and that’s kinda why virtual geometry isn’t a threat to RT. There are some fundamental things you can’t do without shooting rays so the end game is still dense geo + RT working together in harmony. I’m guessing Nanite tessellation & displacement is done on the fly once per cluster just prior to rasterization and there’s no need to cache the result. With RT you would want to push the result back into the BVH. That could have serious implications for memory usage. Really interesting problem.
 
I'm looking forward to the video on FSR3.1 and where it will fall in the reconstruction hierarchy Alex has laid out. :)

And then I'm looking forward to how Nvidia responds ;)
Very curious about that one, and how it improves over FSR 2; there is only so much a hand-made algorithm can do. My guess is that there will be important improvements, and there will also be instances where apparent improvements lead to little flaws that didn't exist in FSR 2. We shall see.

Alex nailed it again, although that year-and-a-half-old video was like my first love for these new AI techs (XeSS), along with the Wolfenstein: Youngblood one (DLSS).

What I couldn't replicate is using a more advanced version of XeSS in Shadow of the Tomb Raider. I tried a year or so ago by copying the XeSS 1.2 file into the game's main folder, but the game ended up not detecting XeSS for some reason.

But now I am going to try with version 1.3, mimicking Alex, and see if it works.
 

DP4a XeSS continues to improve. Yet more evidence that the Series consoles have more than enough 'TOPS' to run an AI-powered upscaler significantly better than FSR2 - especially when it comes to things that move. (Things that move being important in video games.)

MS continue to be AFK.
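For anyone unfamiliar with what 'DP4a' actually is: it's a 4-way int8 dot-product-with-accumulate instruction, the throughput workhorse of XeSS's non-XMX fallback path. A scalar C++ emulation of its semantics, purely for illustration (this is the instruction's behaviour, not Intel's implementation):

```cpp
#include <cassert>
#include <cstdint>

// Scalar emulation of the DP4a instruction: treat each 32-bit input as four
// packed signed 8-bit lanes, multiply lane-wise, and add the four products
// into a 32-bit accumulator. Hardware does this in a single operation, which
// is what makes int8 inference cheap on GPUs without dedicated matrix units.
int32_t dp4a(uint32_t a, uint32_t b, int32_t acc) {
    for (int i = 0; i < 4; ++i) {
        int8_t ai = static_cast<int8_t>((a >> (8 * i)) & 0xFF);
        int8_t bi = static_cast<int8_t>((b >> (8 * i)) & 0xFF);
        acc += static_cast<int32_t>(ai) * static_cast<int32_t>(bi);
    }
    return acc;
}
```

For example, packing (1,2,3,4) and (5,6,7,8) with an accumulator of 10 yields 1*5 + 2*6 + 3*7 + 4*8 + 10 = 80.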
 
DP4a XeSS continues to improve. Yet more evidence that the Series consoles have more than enough 'TOPS' to run an AI-powered upscaler significantly better than FSR2 - especially when it comes to things that move. (Things that move being important in video games.)

MS continue to be AFK.
I doubt MS would commit to XeSS. They are such a small player at this point in time.
 
I doubt MS would commit to XeSS. They are such a small player at this point in time.
I think it's not so much that MS should adopt XeSS so much as it's just embarrassing that MS haven't come up with their own superior solution to FSR2. They clearly had plans for AI reconstruction in public slides for XSX processor design and we know they had DirectML and all that. Microsoft also have massive AI resources in general and software is their wheelhouse. You'd think this would have been an easy area for them to have developed an advantage, yet instead, it looks like they're gonna get left behind by everybody. It's bewildering. Heck, even Nintendo will likely be using a better solution soon...
 
Perhaps a prior undisclosed agreement with AMD/Nvidia/Intel has, so far, prevented MS from developing their own AI reconstruction tech.
I don't think that's very likely, but there has to be some explanation for it.
 
I feel like something went wrong on the Microsoft side, where they had plans for things to take advantage of their little AI tweaks (INT4, INT8?) on the Xbox and they didn't work out. Either that, or they just made tiny enhancements without any real plans because machine learning and artificial intelligence are the big buzzwords. It has been really strange to see Microsoft largely absent from any kind of machine-learning-based improvements for gaming, and DirectML seems to be a graveyard.

Edit: XeSS looking pretty nice now. Hopefully the new FSR is a big bump.
 
Phil Spencer just strongly believes in the image as the developer intended and native resolutions.

In all seriousness, I don't feel it's as simple as it's being made out to be.

In hindsight (thanks to Nvidia, I guess) these ML scaling techniques are now (well, kind of) showing worth as differentiators and marketable features, but that sentiment developed over time. The opening of this post certainly wasn't an uncommon reaction to DLSS, and the sentiment is still prevalent for reconstruction in general. Also with hindsight, we now know games are going to struggle in some cases to even match last-gen resolutions, and upscaling is needed.

At the moment we have no idea what the cost is to actually develop these ML scaling techniques. You're looking at both dev costs and training costs for the model that need to be amortized over what you sell somehow. A lot of discussions of this (and of software add-on features) by the general public just seem to assume the costs are trivial (and as a byproduct hate any sort of hardware feature locking). Sony can amortize costs through the PS5 Pro; there is no such avenue for Microsoft.
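The amortization point is simple arithmetic. With purely made-up numbers (every figure below is an assumption for illustration, not a real cost):

```cpp
#include <cassert>

// Illustrative-only amortization arithmetic: every number is invented.
constexpr double devCostUSD      = 20'000'000.0; // engineering effort
constexpr double trainingCostUSD = 5'000'000.0;  // GPU hours for the model
constexpr double unitsSold       = 10'000'000.0; // consoles that benefit

// Cost per unit the platform holder has to recoup somehow.
constexpr double costPerUnitUSD = (devCostUSD + trainingCostUSD) / unitsSold;
// = $2.50 per console at these assumptions.
```

The point being that the per-unit cost only looks trivial at high volume, and a mid-gen refresh like the PS5 Pro gives you a hardware SKU to hang it on.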

Any gatekeeping of a solution by Microsoft might also be problematic. People will likely accept Sony gatekeeping PSSR to the PS5 Pro, but what about Microsoft in this case? One of the Xboxes? Another Xbox? Xbox only and not Windows? Controversy all around here.

When ML hardware was added to the Xbox, my guess was it was a catch-all, as the direction of where this was going wasn't established yet. I don't think ML being used directly for graphics was in most people's idea of where the most prevalent and impactful use case in games would be. The thought was likely more in terms of actual gameplay-impacting uses, determined by the developers. From what I remember, the prevalent concepts at the time were ML being used for things like game AI, or adapting the game world to the player (not to the extent of the current LLM integrations).

This was a MS Devblog post with respect to machine learning and gaming back in 2018 prior to Turing/DLSS - https://devblogs.microsoft.com/directx/gaming-with-windows-ml/

And remember, even with Turing and DLSS (1), at the time Nvidia wasn't exactly sure where to take this technology either. At the onset it was per-game training, for instance, as opposed to a more universal model.

As an aside we might also want to see PSSR in practice first. I don't know if it's the best idea to just assume it's easy to match DLSS and XeSS.
 
Next step for DLSS will be something that works like foveated rendering.

Native quality in the middle of the screen, then quality mode outside of that and then balanced or performance mode on the very outside 👀
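Purely as a sketch of that ring idea (the mode names echo DLSS presets, but the thresholds and the function itself are invented here, and shipping foveated rendering would need eye tracking rather than a fixed screen centre):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical foveated-upscaling sketch: pick a quality mode per pixel from
// its distance to the screen centre. Ring thresholds are arbitrary.
enum class Mode { Native, Quality, Balanced, Performance };

Mode modeForPixel(float x, float y, float w, float h) {
    float dx = (x - w * 0.5f) / (w * 0.5f);  // normalised offset in [-1, 1]
    float dy = (y - h * 0.5f) / (h * 0.5f);
    float r = std::sqrt(dx * dx + dy * dy);  // 0 at centre, ~1.41 in corners
    if (r < 0.3f)  return Mode::Native;      // fovea: full detail
    if (r < 0.6f)  return Mode::Quality;
    if (r < 0.85f) return Mode::Balanced;
    return Mode::Performance;                // periphery: cheapest
}
```

The skepticism voiced below about hard steps applies directly: these discrete rings would likely show visible seams unless the transition were made continuous.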
 
I'd be skeptical of such an approach unless it can be fine-grained rather than stepped, and works without eye tracking.

Achieving especially the former, in terms of complexity relative to the gains at current resolutions, may also not be a very balanced trade-off.
 
I'd be skeptical of such an approach unless it can be fine-grained rather than stepped, and works without eye tracking.

Achieving especially the former, in terms of complexity relative to the gains at current resolutions, may also not be a very balanced trade-off.

I'm thinking more of for VR as that's an area they can target on PC.
 
This was a MS Devblog post with respect to machine learning and gaming back in 2018 prior to Turing/DLSS - https://devblogs.microsoft.com/directx/gaming-with-windows-ml/

And remember, even with Turing and DLSS (1), at the time Nvidia wasn't exactly sure where to take this technology either. At the onset it was per-game training, for instance, as opposed to a more universal model.
NVIDIA was already looking for a variety of opportunities to apply deep learning to real-time graphics even before announcing Volta.
Just letting you know. Link
 