Re: Options
It's good to have options, yes! That's why we put up with the hassles of the PC as a gaming platform - many of them brought about by all these options.
Things that have inherent compromises in bringing about a potentially better experience are always best as an option. People being 'against the technology' is a silly strawman that's been erected in this thread. Once again, it's odd to pretend this technology exists in some vacuum, untied to a commercial product (one that's been marketed as the prime selling feature of enthusiast-class cards the majority of gamers will not have access to, especially in this economic climate), and that we therefore shouldn't employ the requisite skepticism. This is especially so when the company producing the technology has made a considerable effort to obscure the generational uplift you receive without employing this 'choice'.
I think what some read as being 'against the choice' of DLSS3 is rather skepticism towards the notion that these cards - the 4080 16GB/12GB in particular - are significant jumps over their predecessors because of DLSS3, and that their asking prices are therefore warranted. It's not either/or: you can be interested in, and even quite positive about, the development of motion interpolation in games, and still balk somewhat at being excited about this "choice" you've been given, because it's currently tied to what is seen as a considerable price hike over previous new generations. When DLSS3 is more 'democratized' (in a sense) by being available on midrange cards at some point in 2023, then the argument of "well, just ignore it if you don't want to use it", and the comparisons to DLSS as a cost-saving measure for your res/fps target, will be more applicable. Currently, though, this technology is joined at the hip with the most expensive debut of any new generation of GPUs.
Although not necessarily on this forum, the other argument I've seen is that DLSS3 is actually reducing choice due to the silicon budget devoted to it. Basically, the argument is that Nvidia could have made considerable advancements in non-reconstructed rendering at the same, or lower, price points if they just weren't so wedded to their 'obsession' with AI and reconstruction tech in general. This is hardly compelling to me either, as it seems we are facing hard limits in available bandwidth, and the choice is either to devote an assload of die space to very fast cache, or...? I, like others, have arguments against Nvidia's product segmentation choices with Ada, but this theory will be put to the test in just over a month - it basically requires you to believe AMD has had a breakthrough with their Infinity Cache/chiplet architecture that will completely blindside Nvidia and Intel from a price/performance perspective.
So we'll see. But yeah, I'd say skepticism on that front is, uh, more than warranted too.