I don't think it's of any value to talk about technologies so far out there that they may never see existence in the consumer marketplace. It seems like quite a sideline discussion to the realities of today's problems.

I think the disagreements we're encountering are based entirely on the scope at which we view the discussion. Your arguments are predicated on the limitations and understanding of technology today. I'm not thinking about computer graphics on a 10 or 20 year horizon; I'm thinking about how it will evolve over the next few hundred years. The TV of today is not the same TV that was invented in 1927, nor is the phone the same as the first telephone. The same can be said for computers, computer chips, and rendering techniques.
Computing and computer graphics are fields in their infancy compared to other professional fields of study. I expect drastic changes as the field continues to mature. We won't rely on silicon forever; we're already exploring alternatives like graphene nanoribbons. Even the way we make chips will change: photonic processors are an area of heavy research.
If we're talking about the future of computation, the reality is that machine learning is much more the future than brute force is.
AlphaGo is an AI that can decimate every other AI and every human at Go, a game in which the number of possible board positions exceeds the number of atoms in our known universe (http://norvig.com/atoms.html). In other words, it is computationally infeasible for an AI to solve Go through brute force, and that is exactly why, before machine learning, humans routinely beat Go AIs. Yet AlphaGo swiftly annihilates any existing AI and any person on the planet. Go is one of the best applications of machine learning for precisely this reason: it cannot be solved by exhaustive calculation.
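To make that scale concrete, here's a quick back-of-the-envelope check (my own sketch; it uses the 19x19 board and the rough 10^80 atom estimate from the linked article, and counts raw board configurations, which is an upper bound since the number of *legal* positions is smaller but still around 10^170):

```python
# Back-of-the-envelope: each of Go's 19x19 = 361 points is empty, black,
# or white, so 3^361 is an upper bound on raw board configurations.
# Compare that to the commonly cited ~10^80 atoms in the observable universe.
import math

points = 19 * 19                    # intersections on a Go board
board_configs = 3 ** points         # upper bound; legal positions are fewer
print(f"3^{points} ~ 10^{math.log10(board_configs):.0f}")              # ~10^172
print(f"configurations / atoms ~ 10^{math.log10(board_configs) - 80:.0f}")
```

Even this crude upper bound sits roughly 10^92 times beyond the atom count, which is why no amount of brute-force hardware closes the gap.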
This really shows the power of approximation: if you're talking about the future of computing power, approximation can go a remarkably long way toward modelling something without calculating it exactly. For this reason, while you see DLSS going away in favour of brute force, I see machine learning taking an even greater role in it. The more complexity you want to model, the harder brute-force calculation becomes; you will need approximation. I don't think it's worthwhile to debate the idea that these algorithms will be useless 10-20 years from now because we've somehow managed to unlock personal quantum computing for home use.
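As a toy illustration of that trade-off (my own sketch, nothing to do with how DLSS actually works; the "expensive" function, sample count, and network size are arbitrary stand-ins), a small learned model can approximate a costly computation up front and then answer every later query at a fixed, cheap cost:

```python
# Toy sketch: learn a cheap approximation of an "expensive" computation.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive(x):
    # Stand-in for a costly brute-force computation (e.g. a physics integral).
    return np.sin(3 * x) * np.exp(-0.5 * x ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(5000, 1))   # sample the input space once
y = expensive(X).ravel()                 # pay the expensive cost up front

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X, y)                          # training is the heavy step

# Afterwards, every query is one fixed-size forward pass,
# no matter how expensive the original computation was.
x_test = np.array([[0.5], [1.7], [-2.2]])
print(model.predict(x_test))             # approximation
print(expensive(x_test).ravel())         # ground truth
```

The answers are approximate, but for tasks like upscaling a rendered frame, "close enough, vastly cheaper" is exactly the point.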
Machine learning is computationally heavy; the more computational power we give it, the better our models become in a given time frame. That means that for the many tasks that don't demand extreme precision and accuracy but do have a very large, open-ended space of possibilities, machine learning will always scale better than brute force.
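To put rough numbers on that scaling argument (ballpark figures of my own choosing, not measurements): exhaustive look-ahead grows exponentially with depth, while a trained network's per-query cost stays fixed.

```python
# Rough scaling comparison: brute-force search grows exponentially,
# a trained model's per-query cost does not. Figures are illustrative.
BRANCHING = 250            # commonly cited average branching factor for Go
FORWARD_PASS_FLOPS = 1e10  # illustrative fixed cost of one network evaluation

for depth in (2, 4, 8, 16):
    nodes = BRANCHING ** depth      # positions an exhaustive search visits
    print(f"depth {depth:2d}: ~{nodes:.1e} positions vs "
          f"{FORWARD_PASS_FLOPS:.0e} FLOPs per model query")
```

Extra hardware buys brute force only a move or two of extra depth, while the same hardware buys a learned model a better approximation across the whole space.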