AMD FSR antialiasing discussion

That is like claiming handwriting is better than letterpress because it would be unfair to the authors to do their work for them. Most developers don't care about upscaling. They want an all-round solution, so Epic has every incentive to provide best-in-class upscaling.
Only a paid publisher or developer will optimize outdated tech for their product. It's the same reason raytracing exists - it democratizes best-in-class rendering and graphics.

And you just made the same point that I made. Epic provides a good generalized solution for developers to use. They do not and cannot provide the best upscaler, because that depends on optimizing the solution for each given project. As you noted, that's not something all developers are interested in or are even capable of doing. They provide the best hardware-agnostic, generalized upscaler that they can within the budget they've allocated to it, geared towards ease of integration into any given UE project (another thing that hinders absolute best-in-class upscaling results).

You should also note that handwriting is the generalized, low-cost solution. Anyone and everyone can do it with virtually no effort required other than picking up a pen and finding a piece of paper. Letterpress requires hand-placing each and every letter. If you want different fonts, it requires choosing each letter in the different font. It requires manually spacing each letter as well if you want the text both left- and right-justified. It is better because it requires massively more effort and time than handwriting generally does.

Perhaps what you meant was handwriting versus, say, a word processor. But then you get into all sorts of things like what printer is being used. So then yes, a good word processor with a good printer can in some cases produce a better result, but also at a higher cost than just handwriting it. But then someone with good handwriting can produce a document that is just as pleasing, if not more pleasing, to look at than a document generated in a word processor. And when it comes to calligraphy, machines don't come close to what a good calligrapher can do, as machines (at least thus far) can't put in the small nuances and flourishes that a good calligrapher uses to make the result more visually pleasing to the human eye. Perhaps in the future an AI-trained machine calligrapher could achieve similar results, but since it doesn't yet exist, we'll just have to wait and see.

Regards,
SB
 
How so? Using ML is inherently worse than hand-tuned algorithms, as long as you have the time and engineering power to come up with the algorithm(s) producing identical results

Where is this coming from? Actually, using ML is inherently better than hand-tuned algorithms. It's the same as for speech recognition and other fields: ML solutions show better results than hand-made ones. That's the reason TV upscalers switched from hand-made upscaling to AI. It's not that they don't try.
 
Where is this coming from? Actually, using ML is inherently better than hand-tuned algorithms. It's the same as for speech recognition and other fields: ML solutions show better results than hand-made ones. That's the reason TV upscalers switched from hand-made upscaling to AI. It's not that they don't try.
Because AI won't end up with the optimal result; it's the new brute force. But for most cases it's good enough, and throwing the engineering power and time at beating it isn't worth it.
 
AI is the opposite of "brute force". A similar "static" algorithm is what would likely be a "brute force" to reach the same result.
I said "new brute force", not that it is the same as brute force but as in its "rough" solution to get the work done. Algorithm could either hit same quality with less work or even better results, but it requires disproportionate amount of time and talent to achieve
 
I said "new brute force", not that it is the same as brute force but as in its "rough" solution to get the work done. Algorithm could either hit same quality with less work or even better results, but it requires disproportionate amount of time and talent to achieve
This is a chart showing how image classification accuracy on the ImageNet benchmark evolved over time.

[Chart: ImageNet image-classification accuracy milestones over time]


You can see that human-written algorithms had effectively saturated, but DL models took off rapidly. The traditional CV models (SIFT, SURF and family) were the best that humans across academia and industry could come up with. There is no existence proof, or even a freaking trajectory showing potentially better results on the human-written side even with disproportionate time and effort. We humans are intuitively pretty good at these recognition tasks, but we failed miserably at translating those intuitions into code. There's no evidence we could have done much better.

The same is true of natural language understanding. This is one of the oldest challenges in computer science, and Transformer models are in a completely different league from anything that humanity was able to achieve before them. There's nothing rough or brute-forcey here. It's the best tool we have for the job, bar none.

Note that there is a decent amount of human effort in developing an AI-based solution for a real-world problem. But that effort goes into careful decomposition of the problem into learned-vs-hand-authored kernels, tuning models and hyperparameters for the learned sections, and many, many rounds of iteration. The total human effort hasn't necessarily been reduced; it has simply been redirected into problems where humans can more effectively translate their intuition into code.
 
I said "new brute force", not that it is the same as brute force but as in its "rough" solution to get the work done. Algorithm could either hit same quality with less work or even better results, but it requires disproportionate amount of time and talent to achieve
I presume when you said "algorithm could..." you're talking about a human hand-crafted algorithm? Because that's 100% false.

Machine learning is precisely about having a machine iterate through hundreds, thousands, millions, even billions of candidate algorithms until it finds precisely the correct answer -- it's literally doing what you described humans needing a "disproportionate amount of time" for, and collapsing that time into minutes or hours instead of decades.
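To make that concrete, here's a deliberately tiny Python sketch of the idea (nothing from any real upscaler; the target value and step size are made up): the machine walks a parameter toward whatever minimizes the error, with no human hand-tuning in the loop.

```python
# Toy gradient descent: the machine iterates toward the parameter value that
# minimizes the error, instead of a human guessing values by hand.
target = 3.7        # the "right answer" we pretend not to know
theta = 0.0         # initial guess
for _ in range(2000):
    grad = 2 * (theta - target)   # derivative of the error (theta - target)**2
    theta -= 0.01 * grad          # small step against the gradient
print(theta)        # ends up at ~3.7 without anyone hand-picking it
```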

DLSS is the epitome of this. We know precisely what any rastered scene in any game would look like at 120Mpx, because we can literally raster every scene at that resolution. Thus, the machine-learned upscaling system has reference frames for everything -- both low and high rez -- and can iteratively work through various upscaling algorithms until it finds the one which most closely matches. And it can do it in an infinitesimally small fraction of the time versus some random shmuck ditzing around with various tunable parameters.

This is the entire reason ML exists in the first place, and it's damned good at its job. Scary good, actually...
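A rough sketch of what that reference-based training could look like (PyTorch-style Python; the tiny network, tensor names, and loss are purely illustrative assumptions, not NVIDIA's actual DLSS pipeline):

```python
# Hypothetical supervised upscaler training: fit the network so its output
# matches high-resolution reference frames rendered from the same scenes.
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    """Toy 2x upscaler; a real model would be far larger and use motion vectors."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * 4, 3, padding=1),
            nn.PixelShuffle(2),  # rearrange channels into 2x spatial resolution
        )

    def forward(self, x):
        return self.net(x)

model = TinyUpscaler()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

def train_step(low_res, high_res_reference):
    """low_res / high_res_reference: frame pairs rendered from the same scene."""
    opt.zero_grad()
    loss = loss_fn(model(low_res), high_res_reference)  # distance from ground truth
    loss.backward()
    opt.step()
    return loss.item()
```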
 
I said "new brute force", not that it is the same as brute force but as in its "rough" solution to get the work done. Algorithm could either hit same quality with less work or even better results, but it requires disproportionate amount of time and talent to achieve

You should really have a look into the research before stating baseless assumptions. AI is showing better results than 30 years of algorithm research in many fields. The brute force approach would be to try to get the best picture with algorithms, which needs much more computational power. AI isn't the holy grail in every field, but it has already shown that it can be much better than anything humans can do by algorithm, as neckthrough also stated.
 
AI is the opposite of "brute force". A similar "static" algorithm is what would likely be a "brute force" to reach the same result.

Well ML training is certainly brute force. It’s literally running a gazillion guesses and reverse engineering the right answer. ML inference though is likely much more efficient and accurate than any handwritten algorithm could hope to achieve. Humans simply don’t have the ability to decompose problems with large numbers of variables.
 
AMD FidelityFX Super Resolution 2.0 to launch on May 12 with Deathloop, 10 more games planned - VideoCardz.com

Today AMD confirmed that the following games will receive an FSR 2.0 update:

  1. DEATHLOOP
  2. Asterigos, Delysium
  3. EVE Online
  4. Farming Simulator 22
  5. Forspoken
  6. Grounded
  7. Microsoft Flight Simulator
  8. NiShuiHan
  9. Perfect World Remake
  10. Swordsman Remake
  11. Unknown 9: Awakening
AMD has said that FSR 2.0 will take 3 days or less to implement for games that already support NVIDIA's DLSS technology. The competing upscaling technique is supported by more than 150 titles.
 
And it can do it in an infinitesimally small fraction of the time versus some random shmuck ditzing around with various tunable parameters.

I write compressors, which are parameterized algorithms. For around 30 years I haven't tuned the parameters based on intuition, but based on datasets: I write a metric to minimize, write the evaluation code, get a dataset, and then let the machine crunch through a couple of computation-years. This makes the algorithm generic over the traits of the training set, like photos or English text and so on. I guess that makes me an ML researcher of 30 years.
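A stripped-down sketch of that workflow (Python; zlib and the corpus path are stand-ins for my actual compressors and datasets, which are obviously more involved):

```python
# Hypothetical "fitting" of a parameterized compressor: try each parameter
# setting on a representative corpus and keep the one that minimizes the metric.
import zlib
from pathlib import Path

def metric(level, corpus):
    """Total compressed size at this setting; lower is better."""
    return sum(len(zlib.compress(blob, level)) for blob in corpus)

# A folder of sample files representative of the target domain
# (photos, English text, ...). The path is illustrative.
corpus = [p.read_bytes() for p in Path("training_corpus").iterdir() if p.is_file()]

best_level = min(range(1, 10), key=lambda lvl: metric(lvl, corpus))
print("best zlib level for this corpus:", best_level)
```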
I don't want to ridicule the achievements, I just want to say that the principle of what's declared "ML" isn't new; it's not some new, innovative paradigm. I think it's clear to everyone who programs that for stuff of a certain complexity, "fitting" starts to be the only practical way. It's the magnitude of the fitting that's changed, and I would argue that's simply because of computational power and memory. If you have a computational constraint, then you can't conduct ML in the required manner. None of the terabyte-sized NNs could have been run on a '50s supercomputer. These algorithms work in their comfort space, and don't outside of it. Outside, a human has to design algorithms.
In a way, that is what "new brute force" resonates with for me. Today you have a massive program, with massive state, that runs, let's say, many hundreds of orders of magnitude longer than earlier, simpler approaches to "fitting". Sometimes you can extract the working essence of that result into a static (offline) program that then obviously can't "fit" anything anymore, but can apply the lessons. So now we have a "fitted" parameterization of our algorithm ... but the algorithm was made by a human, so I don't see any difference in principle between prior "fitting" and modern "fitting", besides it being *massive* when it's called "ML"/"AI" nowadays.
If you have an online "fitting" requirement, that condensation is not possible; the applying machine actually needs this massive computational time and program size and memory to function, all the time.

What, to me, traditionally meant "AI" was a computer or program that invents algorithms and their structures unsupervised. And I can't really see that in reality. While, for example, Google's Alpha is an algorithm that can learn the / some goals and rules of the "game" unsupervised, it itself isn't. On the contrary, it's a meticulously hand-crafted (and tuned) algorithm that is not universal, as it cannot learn every goal and every rule and understand every "game", and can't change / mutate itself. You might argue that's something a human can't do either (we can't change our brain's DNA / program on the fly and mutate to access otherwise unavailable problem-solving skills, etc.), but I would hope we're a somewhat universal solution able to deal with stuff we've never seen before - which is what I guess we call intelligence.
The point being, a human designs a *specific* NN for, say, super-resolution, and it's useless for English text. You'd have to radically change the structure of the setup, because the machine can't do that by itself.
We're basically talking about the second derivative here: the human is too slow to do a task, so he creates a machine and programs an algorithm to help him. If I write a least-squares solver, that's the first derivative; if I write a least-squares solver generator, that's the second derivative. Philosophically, you can say programs are parts of people's thinking, externalized - not in the native language of people's thinking but in the machine's language. That program or thought is not intelligent; the human is (well, at least we call it that :°) ).

I agree that hidden-variable detection and correlation detection and so on through NN structures is a really important innovation, but it's not a tool that is small or fast or free of redundancies or efficient (in a rate / distortion sense).
 

Interesting. According to them, in terms of quality and performance at the Quality preset, DLSS and FSR 2.0 are roughly equal. Some things DLSS does slightly better, some things FSR 2.0 does slightly better. DLSS has slightly more distracting artifacts, while FSR has slightly more ghosting. So basically a toss-up.

Performance mode seems to be better on DLSS. But personally I wouldn't touch performance mode on either solution, so it's a non-factor for me, though it might be something to consider for someone willing to take the IQ hit of performance mode.

Of course, this is only one game. So it'll be interesting to see how they compare as FSR 2.0 gets implemented in more games.

FSR 2.0 does have one huge advantage, however. You can use it on just about any performant GPU. So, if I wanted to, I could use it on my GTX 1070 while DLSS is certainly not an option there.

They specifically mention that FSR 2.0 supports dynamic resolution scaling and that it's pretty seamless. While it's unlikely that I'd use FSR 2.0 (same goes for DLSS), if a game forces DRS then it might be something I'd consider if it makes dynamic resolution scaling less noticeable.

Regards,
SB
 
Well ML training is certainly brute force. It’s literally running a gazillion guesses and reverse engineering the right answer. ML inference though is likely much more efficient and accurate than any handwritten algorithm could hope to achieve. Humans simply don’t have the ability to decompose problems with large numbers of variables.

Well this post aged poorly. Turns out non-experts don’t know a whole lot about ML.
 
So, completely ignoring the text of the article, to my eyes DLSS has a fair bit better texture quality (fairly evident looking at the concrete ground in the first screenshot), while in motion (skip to 2 mins into the YouTube vid, as the rest is stills), FSR still has some moire patterns going on in the tank(?) tracks that DLSS suppresses.

It's definitely much better than FSR 1, but it still has a fair bit of catching up to do.
 
So, completely ignoring the text of the article, to my eyes DLSS has a fair bit better texture quality (fairly evident looking at the concrete ground in the first screenshot), while in motion (skip to 2 mins into the YouTube vid, as the rest is stills), FSR still has some moire patterns going on in the tank(?) tracks that DLSS suppresses.

It's definitely much better than FSR 1, but it still has a fair bit of catching up to do.
Can't watch the video properly on night shift with my phone, but does Deathloop offer separate sharpening settings for DLSS? If it doesn't, it might apply some sharpening, while with FSR you need to enable it manually. That could be the culprit behind the texture quality difference.

Edit: their screenshots suggest no sharpening controls for DLSS, which suggests it's always sharpening; see how it looks when you compare FSR + sharpening vs DLSS.
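For what it's worth, even a basic unsharp mask changes perceived texture detail quite a bit, which is why an always-on sharpener could skew a screenshot comparison. A minimal Python/NumPy sketch (not the actual sharpening pass either upscaler uses, just the general idea):

```python
# Simple unsharp mask: boost the difference between the image and a blurred
# copy of itself. Always-on sharpening like this can make textures look
# "crisper" in screenshots regardless of the underlying upscaling quality.
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=1.0, amount=0.5):
    """image: 2-D float array (grayscale) with values in [0, 1]; returns a sharpened copy."""
    blurred = gaussian_filter(image, sigma=sigma)
    return np.clip(image + amount * (image - blurred), 0.0, 1.0)
```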
 
I think it's good enough for the masses, which is all it needs to be. If most people deem it good enough to use and they use it, then Nvidia's got a problem on their hands. It'll be just like G-Sync vs FreeSync: G-Sync was better, but most people purchased FreeSync monitors because they were good enough and much cheaper.
 