Starfield to use FSR2 only, exclude DLSS2/3 and XeSS: concerns and implications *spawn*

I'm a little uncomfortable with this kind of militant, uncompromising support for an industry near-monopolist's closed box, proprietary technology.

Just weeks ago, PC reviewers including DF were commenting on how Nvidia were using DLSS to justify small, underpowered, VRAM-starved GPUs being sold at premium prices. Now we've suddenly got to a point where Nvidia's play has seemingly been endorsed by a large proportion of gamers.

In the short term I can absolutely understand people who've bought cards for DLSS (amongst other things) being frustrated that Nvidia's proprietary hardware and middleware isn't being used. In the longer term, I see the entrenchment of Nvidia's near-monopoly as harmful for PC gaming.
The PC gaming market is one of the few in which second-rate technology can survive. FSR in Jedi: Survivor and F1 23 (no clue what Codemasters did there) is absolutely useless.
It would only take one person at Respawn to implement DLSS upscaling and Frame Generation and make this game much better. Normally every company wants happy customers.
 
Just weeks ago, PC reviewers including DF were commenting on how Nvidia were using DLSS to justify small, underpowered, VRAM-starved GPUs being sold at premium prices. Now we've suddenly got to a point where Nvidia's play has seemingly been endorsed by a large proportion of gamers.
Consumers rarely see the big picture, and what you are observing is people who are personally impacted by some of these business decisions. In the grand scheme of everything going wrong in the world, a videogame not running as well as it might is fairly minor. Nonetheless, this is hitting people where it hurts: their game's resolution and framerates. It's tantamount to a human rights crime to some. :runaway:
 
The reaction is huge this time because AMD doesn't really have the market share to force something like this; when you are at 15% market share and you upset the majority of customers (the remaining 85%), you can't expect to get away with it unscathed.

The only difference between these games is AMD.
One defining factor of these AMD-sponsored games is that they never get DLSS/XeSS support after launch, as many other games do. Most NVIDIA-sponsored titles get FSR either at launch or post-launch (even after many months), but AMD-sponsored titles never do.
 
The reaction is huge this time because AMD doesn't really have the market share to force something like this; when you are at 15% market share and you upset the majority of customers, you can't expect to get away with it unscathed.
That, and new Bethesda Game Studios releases increasingly feel like once-in-a-decade events. Starfield has been hyped to hell and back, and as much as people complain about Bethesda games' visual presentation, their engines are often doing much more than your average game. Because of that, they require more hardware to do visually less, so eking every possible performance advantage out of your hardware to deliver Bethesda's visually ambitious next-generation title is highly desirable.

There is no 'ideal' game for this situation to apply to, but this is possibly the worst title that AMD could have chosen for a marketing exclusive given the possible performance impact. I just hope there is enough popcorn to see me through until the Early Access week! :runaway:
 
"The Fabled Woods" - small Indie game, one person or so - has been updated to UE5.1. And this guy has archived the impossible: Support of DLSS 3.

No excuses, no sugarcoating anymore: developers and publishers not supporting DLSS should be called out.
Does it include XeSS and FSR as well?
 
No, just DLSS. But you can ask him to implement it: https://steamcommunity.com/app/1299480/discussions/

Consumers rarely see the big picture, and what you are observing is people who are personally impacted by some of these business decisions. In the grand scheme of everything going wrong in the world, a videogame not running as well as it might is fairly minor. Nonetheless, this is hitting people where it hurts: their game's resolution and framerates. It's tantamount to a human rights crime to some. :runaway:
I think it has more to do with the fact that one person can implement DLSS within a few days without having access to the source code. We are not talking about ray tracing or the like. Think about shader compilation: most games fixed that problem within a few days, too...
 
That, and new Bethesda Game Studios releases increasingly feel like once-in-a-decade events. Starfield has been hyped to hell and back, and as much as people complain about Bethesda games' visual presentation, their engines are often doing much more than your average game. Because of that, they require more hardware to do visually less, so eking every possible performance advantage out of your hardware to deliver Bethesda's visually ambitious next-generation title is highly desirable.

There is no 'ideal' game for this situation to apply to, but this is possibly the worst title that AMD could have chosen for a marketing exclusive given the possible performance impact. I just hope there is enough popcorn to see me through until the Early Access week! :runaway:

Yeah it's so bizarre, it's such an obvious flub that should have been so easy to predict. This restriction only flies under the radar for smaller games, but those don't give you the PR win you want. But...if you get that huge title and an expected feature that could benefit a large swath of the market is not there, it's just going to result in exactly what has happened - a huge spotlight, and a lot of questions. Ultimately, this restrictive approach is just unworkable for a minority player in the market.
 
I know I'm late to the party here.... But.

You're totally correct.
NV should open source their algorithm.
The true genius in NV hardware is the tensor cores.

Instead of making it an argument/discussion about software,
open source the software and let people see how much better DLSS runs on NV hardware vs. AMD hardware.

This is the approach Intel has taken with XeSS, where they provide an implementation optimized for their own hardware, AND one that works on most modern GPUs too.
This is probably the ideal solution!
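Purely as an illustrative sketch of that idea (the names are mine, not Intel's actual API), the dispatch side of "two code paths behind one front end" could look something like:

Code:
// One upscaler API, two backends: a vendor-optimized matrix-engine path and a
// generic DP4a path that runs on most modern GPUs.
enum class Backend { MatrixEngine, Dp4aFallback, Unsupported };

Backend pick_backend(bool has_matrix_engine, bool has_dp4a)
{
    if (has_matrix_engine) return Backend::MatrixEngine;  // e.g. XMX on Arc
    if (has_dp4a)          return Backend::Dp4aFallback;  // cross-vendor path
    return Backend::Unsupported;                          // caller falls back to something else
}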

Ideally, yeah. One open standard that latches onto whatever custom hardware your GPU has to enhance its quality. That's ultimately the end goal.

The problem with using XeSS as the model approach, though, is that outside of Arc GPUs, it... sucks? There is no reason to choose it in a game over FSR, or heck, sometimes even over plain bilinear upscaling, from my experience. As it stands now, XeSS basically advertises against itself if you're not using it on an Arc GPU.

The question then becomes: is that vast quality difference between the DP4a/XMX versions due to Intel just not bothering to devote enough time to the DP4a code path, or is it inherently a problem of using one API that tries to adapt to significantly different hardware? It's possible DLSS could have a non-tensor-core, machine-learning version that looks as good as, or better than, FSR does. But if that can only come about by literally having another version, in that it's another API that devs either have to code against or alter their DLSS code to accommodate, then you're basically right back to where you are now - having to code against a different API to target the rest of the market. That's just replicating FSR.

I really have no idea, I'm just saying XeSS isn't exactly the best advertisement for this approach atm. If the Radeon 8000 series comes out with AI cores that significantly enhance FSR, it requires no additional work from developers to take advantage of, and FSR2 performance/quality as it stands now isn't degraded, then you've got some pressure. If people actually see an open standard delivering the goods, then they will start to ask why the vendor with the proprietary approach isn't adopting it.
 
Ideally, yeah. One open standard that latches onto whatever custom hardware your GPU has to enhance its quality. That's ultimately the end goal.

But doesn't this devalue the software side? This circles back to a debate that was had when DLSS was first introduced (as well as a broader issue of how to value software features on given hardware). Without rehashing it too much, let's just assume for the moment that significant resources went into the development and ongoing maintenance of DLSS (including training), resulting in it being the "best" solution. Without any sort of hardware tie-in, offering the software completely free to all parties seems to place a direct value of zero on the development/maintenance that made it the "best" solution.
 
But doesn't this devalue the software side? This circles back to a debate that was had when DLSS was first introduced (as well as a broader issue of how to value software features on given hardware). Without rehashing it too much, let's just assume for the moment that significant resources went into the development and ongoing maintenance of DLSS (including training), resulting in it being the "best" solution. Without any sort of hardware tie-in, offering the software completely free to all parties seems to place a direct value of zero on the development/maintenance that made it the "best" solution.
I also agree with this comment.

There is significant value in the result that DLSS gives, and I can totally understand NV not wanting to share the internals.
However, in an ideal world (yes, I know I'm in dreamland now) we would have a version of DLSS that implements a CPU-based "software" version of the algorithm,
and then GPU vendors would be able to (perhaps for a price) implement support for the algorithm in a hardware-accelerated manner on their GPUs,
OR - perhaps even better - engine developers could implement support within the engines themselves.

But if DLSS takes 2 ms on a 3080 and 20 ms on a 6800 XT due to more performant and appropriate HW in the 3080, then it's all a bit of a moot point.
The thing is, right now we just don't know.
Based on very rough calculations, it looks like DLSS is doing somewhere between 3-10x as much math as FSR2.

If that's the case, it might even be in NV's favor to release a CPU version for analysis; it would only make them look good, and their hardware look even better.
 
Are there any reviews / investigations into XeSS on Intel vs AMD/Nvidia?
I know the underlying code path is different, but I always thought it should produce the same result?

I guess since it's open source, anyone could in theory make a CPU-based "reference" implementation; that would at least provide a set of reference images after scaling?
 
Nvidia is not going to release a software version that’s freely available for other competitors to use.

One, the software version on regular shaders would be slow in comparison to the tensor version. Two, nothing would stop competitors from using the software to engineer their own hardware version that mirrors DLSS.
 
The question then becomes: is that vast quality difference between the DP4a/XMX versions due to Intel just not bothering to devote enough time to the DP4a code path, or is it inherently a problem of using one API that tries to adapt to significantly different hardware?
DP4a is not really a sufficient interface to tensor cores (I don't actually know that those operations necessarily even go to the tensor cores at all, to be honest). Arguably there is no public API right now that someone could use to implement one of these algorithms on another piece of hardware, even if the hardware was identical or reasonably compatible (which is almost certainly the case with NVIDIA and Intel right now at least). If NVIDIA were to expose such an interface to their hardware, I imagine Intel might actually give us a real XeSS implementation on NVIDIA. The opposite would almost certainly not happen unless the industry takes a stand on this.

Honestly I'm not sure why Intel did the DP4a version of XeSS and made it produce different results. I'm guessing that implementing the "real" version without tensor cores would have been extremely slow (I think I recall DP4a is 4x slower than using tensor cores), so they went for something that might actually be fast enough to use. I agree in practice that path just confuses the issue and given that most current users are going to be exposed to it via running the shittier version of XeSS slowly on NVIDIA hardware they are going to come away with the impression that XeSS itself sucks.
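For what it's worth, the granularity gap between the two paths is easy to see in code. As a tiny illustration in CUDA terms (this is not XeSS code, just the equivalent intrinsic): DP4a gives each lane one 4-wide int8 dot product per instruction, whereas the matrix/tensor paths consume whole tiles per instruction.

Code:
// __dp4a: per-lane dot product of four packed signed 8-bit values, plus an
// accumulator -- i.e. acc + a0*b0 + a1*b1 + a2*b2 + a3*b3 in one instruction.
// Available on sm_61+ hardware; nothing XeSS-specific about it.
__device__ int dot4_int8(int packed_a, int packed_b, int acc)
{
    return __dp4a(packed_a, packed_b, acc);
}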

If people actually see an open standard delivering the goods, then they will start to ask why the vendor with the proprietary approach isn't adopting it.
The issue is that for this approach to even be possible, it requires that pressure from users/press/Microsoft/etc. The IHVs have no real incentive to make any of this compatible with each other unless games stop supporting the vendor-specific paths, as happened with graphics APIs (Glide, etc.) and other things in the past.

Again, I think it's fine to have a transition period where things remain proprietary before there are standardized interfaces (i.e. enough of an API that someone could implement an equivalent to DLSS in user mode, not using private interfaces), but it's not a fine end state, and thus we need to be seeing progress, which frankly I don't think we have been. I'd love to be proven wrong and have someone confirm that you could implement DLSS/XeSS efficiently on top of DirectML or something, but I imagine if that were true we would have seen it at least from Intel, who typically like standards a lot.

Aside: what the hardware does is not really a big secret... it's mostly exposed in CUDA (https://developer.nvidia.com/blog/programming-tensor-cores-cuda-9/ for instance). Intel's stuff is also public and very similar hardware. Unfortunately neither CUDA interop nor Intel's SYCL stuff is sufficient for interacting with the rendering pipeline efficiently.
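For anyone curious, here's a minimal sketch of what that CUDA-level exposure looks like, along the lines of the linked blog post (the kernel name and layouts are just for illustration): one warp computing a 16x16 half-precision tile of D = A*B + C on the tensor cores via the WMMA API.

Code:
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

// One warp cooperatively performs a single 16x16x16 matrix multiply-accumulate
// on the tensor cores. Requires sm_70+ (compile with e.g. -arch=sm_70).
__global__ void wmma_tile(const half* a, const half* b, float* d)
{
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;

    wmma::fill_fragment(acc_frag, 0.0f);                  // C = 0
    wmma::load_matrix_sync(a_frag, a, 16);                // leading dimension 16
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);   // D = A*B + C
    wmma::store_matrix_sync(d, acc_frag, 16, wmma::mem_row_major);
}

None of that is callable from D3D12 or Vulkan shaders today, which is exactly the gap described above.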
 
Update June 30, 2023: We've had a back and forth with representatives from AMD and so far AMD has chosen not to comment on whether Bethesda is completely free to add in support for other upscalers alongside FSR2 in light of its AMD partnership. And we have not had any response at all from Bethesda to a similar request for comment on the situation.

 
I don't think open-sourcing the implementation of DLSS2 is really that important. Its basic theory is fully explained, and Intel has quickly developed their own version clearly inspired by DLSS2. At least I feel this is more of a hardware issue (performance) for AMD than one of the actual implementation itself.

As for an industry open standard: it always sounds good, but we all know the reality -- remember how DirectX took over from OpenGL? For an industry that relies heavily on innovation, waiting for agreement from third parties is a huge discouragement. I don't see any upside for Nvidia in doing this either. You simply can't "wish" a business into giving up its advantages, especially when those advantages come not from improper competition but from hard work.
 
I don't think open-sourcing the implementation of DLSS2 is really that important. Its basic theory is fully explained, and Intel has quickly developed their own version clearly inspired by DLSS2. At least I feel this is more of a hardware issue (performance) for AMD than one of the actual implementation itself.

As for an industry open standard: it always sounds good, but we all know the reality -- remember how DirectX took over from OpenGL? For an industry that relies heavily on innovation, waiting for agreement from third parties is a huge discouragement. I don't see any upside for Nvidia in doing this either. You simply can't "wish" a business into giving up its advantages, especially when those advantages come not from improper competition but from hard work.
I'd like to see a happy medium.

IMO, DirectX, OpenGL, Vulkan, should all just standardize the inputs and outputs of the upscalers at the API level and let them be black boxes, but standard-compliant and interchangeable black boxes.
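Purely as a hypothetical sketch (no such standard exists today, and the struct and function names are mine), the "standardized inputs and outputs" part is not a big ask, because DLSS, FSR2 and XeSS already consume essentially the same data:

Code:
// Hypothetical vendor-agnostic upscaler entry point. Resource handles are left
// as opaque pointers since the real thing would use D3D12/Vulkan objects.
struct UpscaleInputs {
    void* color;              // jittered, aliased color at render resolution
    void* depth;              // matching depth buffer
    void* motion_vectors;     // per-pixel motion vectors
    float jitter_x, jitter_y; // subpixel camera jitter for this frame
    float exposure;           // scalar exposure (or an exposure texture)
    int   render_width, render_height;
    int   output_width, output_height;
    bool  reset_history;      // set on camera cuts / teleports
};

// Each vendor ships its own black-box implementation behind this signature.
void upscale(const UpscaleInputs& inputs, void* output_color);

Everything behind that signature -- the history handling, the network, whatever hardware it runs on -- stays the vendor's black box, which is the point.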

There's been some talk, if not in this thread, then others about needing and wanting to see behind the curtain of the black box, and somehow be assured that the output pixels of the upscaler were deterministic and identical between vendors, but that seems needlessly limiting to me.

Heck, we didn't even get identical output pixels for something like anisotropic filtering between vendors until relatively recently. I'll have to find some comparisons that escape me at the moment, but if memory serves even some very recent (within the last 5 years) GPU designs still had some angle dependency and different outputs between vendors. Nobody complained that ATI/AMD and NVidia didn't publish the exact mathematical algorithm used in their AF filtering kernels.
 
Update June 30, 2023: We've had a back and forth with representatives from AMD and so far AMD has chosen not to comment on whether Bethesda is completely free to add in support for other upscalers alongside FSR2 in light of its AMD partnership. And we have not had any response at all from Bethesda to a similar request for comment on the situation.
It's very bad. I think AMD is losing a lot of trust at the moment, also because of not delivering a frame generation competitor as promised.
So I guess they have two good options: delivering a DLSS3 competitor on time so that the good news overshadows the doubts (won't happen, since they already announced FSR2 and nothing more),
or allowing DLSS in Starfield (I guess that won't happen either).

The bad option is to keep blocking DLSS without confirmation or explanation.
But I don't see any winner here. Usually David gets some sympathy for being smaller and weaker than Goliath. If David does not play fair, however, what we get is increasing distrust of the HW industry as a whole.

Though I can also see how AMD got into this bad situation: GPUs are hardware, and drivers to enable API support were all a chipmaker had to provide. Things like TAA and other image processing (including upscaling) were the responsibility of game devs.
NV changed this with the introduction of DLSS. Intel is strong in research and could seemingly compete easily, but AMD obviously has a hard time.
That's not really fair either, I think, at least if we consider that development of AI solutions is pretty much restricted to an elite of megacorps.
But to my consolation: the singularity is near, and megacorps will lose control over the upcoming AI master race anyway. :D

IMO, DirectX, OpenGL, Vulkan, should all just standardize the inputs and outputs of the upscalers at the API level and let them be black boxes, but standard-compliant and interchangeable black boxes.
Yeah, this would be nice.
Also because some smaller indie devs with custom engines could add support more easily. And some of those guys still exist.
 
AMD put themselves in a weird position for no reason. They have open sourced pretty much all of their technologies and even published their ISA so devs can optimize shaders for their architecture. My understanding is the Nvidia ISA is not available for developers so they cannot optimize shader output as well as you can for AMD. So AMD is basically sitting in a position where devs are optimizing for their platform anyway, because it's open. There's essentially no reason to make deals that block the use of other technologies. I can fully understand making a deal that provides an optimal experience on AMD. So do that and let the performance speak for itself. Seems weird to block DLSS etc when you have an opportunity to make sure the base level performance is top notch on your own hardware.
 
There's essentially no reason to make deals that block the use of other technologies.

The biggest mystery in all this is what AMD is hoping to accomplish. FSR2 isn't a selling point for their hardware because, ironically, they've made it run on anything. And they have no market share to protect. Add this to a growing list of examples where, instead of competing head-on, AMD is choosing to run interference and hope one day they'll catch up.
 