Trying to spin my argument as not based on reality is disingenuous at best and malicious at worst. As GPUs have gotten more powerful, we have almost always moved away from crutch rendering techniques created to bridge the inadequate power of the hardware. We move to techniques with less compromise than before. We moved from bilinear/trilinear filtering to anisotropic filtering, for example, and in the same vein, the industry will move away from DLSS when the time is right.
I don't think this is true, which is largely what Function is trying to point out here. The industry moves away from things that no longer have a place in rendering; we don't move away from them simply because we have more power. If that were the case, SSAA would have dominated, but it hasn't. The reason we don't use SSAA, despite how old it is, is that the power could be better spent elsewhere instead of on supersampling down. In the same way, we are hitting a crossover point where the cost of increasing graphical fidelity using traditional T&L methods is likely higher than going the RT route, which is why RT accelerators are now emerging. The cost of the compromise got too high, so we are instead moving to incorporate ray tracing.
From that perspective, running DLSS, CBR, or any other upsampling technique should not cost more than rendering natively, so these techniques by default should not go away unless a superior upsampling technique replaces them. Even if you have enough power to run 4K native, upsampling can instead be used to reach 8K or 16K. As long as we have a drive to increase screen resolution while having a physical cap on power output, upsampling techniques are unlikely to go away; if anything, they are likely to become more abundant.
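As a rough back-of-envelope illustration of why the economics favour upsampling over supersampling: the numbers below are just pixel counts, not a claim about any specific GPU, and real frame cost is not strictly proportional to pixels shaded (the upscaler itself adds some overhead that varies by technique).

```python
# Back-of-envelope pixel counts (illustrative only; actual GPU cost is not
# strictly proportional to pixel count, and upscaler overhead varies).

def pixels(width, height):
    return width * height

native_4k    = pixels(3840, 2160)          # ~8.3M pixels shaded per frame
internal_qhd = pixels(2560, 1440)          # ~3.7M pixels, a typical internal resolution for a 4K upscale
ssaa_4x_4k   = pixels(3840 * 2, 2160 * 2)  # 4x SSAA at 4K shades ~33M pixels, then downsamples

print(f"4K native:           {native_4k:>10,} px")
print(f"1440p -> 4K upscale: {internal_qhd:>10,} px ({internal_qhd / native_4k:.0%} of native shading work)")
print(f"4x SSAA at 4K:       {ssaa_4x_4k:>10,} px ({ssaa_4x_4k / native_4k:.0%} of native shading work)")
```

Supersampling multiplies the shading work, while upscaling cuts it to well under half; that asymmetry is why one faded and the other keeps spreading.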
Technology advances in leaps, followed by periods of stagnation, and then the process repeats itself. In 100 years, DLSS and many of the rendering techniques used today will be non-existent thanks to technological progress. We always move to techniques with less compromise as technology advances.
With regards to the prevalence of DLSS, as long as it's not open source, its period of relevance is drastically limited. It'll eventually be replaced by an open-source equivalent, and we're already seeing evidence of that with Intel's proposed solution. I don't think DLSS is useless. It's quite useful, but it has very evident flaws. I guess I take strong offence to people parading around spewing the marketing speak of their favourite hardware manufacturer. I'm not saying you're doing that, but certain people here are quite guilty of it.
Perhaps, given a long enough timeframe, this may be true, especially with a framework like DirectML available for it. But the effort is not so easily replicated. Nvidia can continually improve the performance of DLSS, as they have been, much faster than developers will be able to create newer non-ML upsampling techniques. And if other companies compete in ML-based upsampling and anti-aliasing, there will be a variety of competing models anyway. Consider how long we've been iterating on TAA, TAAU, and MSAA, compared to how quickly DLSS has iterated in such a short time. MSAA is still around because some games still use forward rendering, for instance, where it delivers anti-aliasing with minimal blur.
The power of DLSS is not in the hardware; on the contrary, the power is in the software behind DLSS itself. We are unlikely to see Nvidia let DLSS go, as that product and other ML-based graphical solutions are likely to be worth more than the silicon they produce as time goes on.
Each technique will find its place. Calling it a crutch is a rather crude description of what it is; it's a tool much more than it is a crutch. Besides, if upsampling really isn't your thing, Nvidia still offers DLAA for those seeking a different anti-aliasing approach.