> I don't see DLSS as being any more expensive or less cost effective than other solutions. You build the game as you see fit, you let the AI company do the work. You take their model, integrate it back into your own engine, and add it to the tail end of your pipeline. Effort on behalf of the developer is quite minimal.

Disagree. A while back someone here pointed out that the cost of DLSS at 4K (upscaled from 1440p, IIRC) was about 6 ms per frame. That's really a lot, though I don't know whether that number is similar for other games.
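To put that 6 ms in perspective, some back-of-the-envelope arithmetic (my own numbers, not measurements from any particular game; the fps targets are just examples):

```python
# Share of the frame budget eaten by a fixed 6 ms upscaling pass.
# The 6 ms figure is the reported DLSS cost mentioned above; fps targets are examples.
UPSCALE_MS = 6.0

for fps in (60, 120, 144):
    frame_budget_ms = 1000.0 / fps
    share = UPSCALE_MS / frame_budget_ms
    print(f"{fps:>3} fps: {frame_budget_ms:5.2f} ms budget, "
          f"upscaler takes {share:.0%} of it")
```

So at 60 fps that's over a third of the frame, and at 144 fps it's nearly the entire budget.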
What would be the cost to do a bicubic upscale followed by something like CAS? My guess is 2 ms or less. Would it look much worse, or worse at all? Likely not. How much time does it take to develop? A day, if you take AMD's code, or a bit more if you add fancy temporal reconstruction to increase quality. And you save the weeks of waiting on results from the AI company.
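For illustration, here is a rough Python/NumPy sketch of that alternative pipeline. It is not AMD's actual FidelityFX CAS shader (and in a real engine both steps would be GPU passes), just a loose CPU approximation to show how little machinery is involved; the function names and the `strength` parameter are mine.

```python
# Sketch: cubic upscale followed by a contrast-adaptive sharpen, loosely in the
# spirit of AMD's FidelityFX CAS. Not the real shader -- a CPU approximation.
import numpy as np
from scipy.ndimage import zoom


def bicubic_upscale(img, scale):
    """Upscale an HxWxC float image in [0, 1] with cubic-spline filtering."""
    return np.clip(zoom(img, (scale, scale, 1), order=3), 0.0, 1.0)


def cas_like_sharpen(img, strength=0.5):
    """Sharpen more in low-contrast areas, less near strong edges (CAS-like)."""
    # Plus-shaped neighbourhood (up/down/left/right) via edge-padded shifts.
    p = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    up, down = p[:-2, 1:-1], p[2:, 1:-1]
    left, right = p[1:-1, :-2], p[1:-1, 2:]
    neighbours = np.stack([up, down, left, right])

    lo = np.minimum(img, neighbours.min(axis=0))
    hi = np.maximum(img, neighbours.max(axis=0))

    # Adaptive amount: large in flat regions, small where local contrast is
    # already high, so edges don't ring.
    amp = np.sqrt(np.clip(np.minimum(lo, 1.0 - hi) / (hi + 1e-5), 0.0, 1.0))
    peak = -1.0 / (8.0 - 3.0 * strength)  # strength in [0, 1]
    w = amp * peak  # negative weight on the neighbours = unsharp-mask style

    out = (img + w * neighbours.sum(axis=0)) / (1.0 + 4.0 * w)
    return np.clip(out, 0.0, 1.0)


if __name__ == "__main__":
    # Small stand-in frame; a real 1440p -> 4K pass would use scale = 1.5 too.
    rng = np.random.default_rng(0)
    frame = rng.random((180, 320, 3), dtype=np.float32)
    result = cas_like_sharpen(bicubic_upscale(frame, 1.5), strength=0.5)
    print(result.shape)  # (270, 480, 3)
```

The adaptive weight is the whole trick: it's a standard unsharp mask whose amount is scaled down near high-contrast edges.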
With upscaling still being the only application of tensor cores, their existence remains questionable. Upscaling is too simple to require neural networks; human-written code can do it more efficiently.
I would prefer a smaller and cheaper chip, or more general-purpose performance (which could handle the tiny bit of ML we need in games as well).
Either this or finally a real application that sells me those cores.