AMD RDNA4 potential product value

TPU has updated its test suite, removing older games and adding newer ones, and has retested existing cards with the latest drivers. The 9070 XT has a bit of a hill to climb.


This is a great system and suite of games. People should reference TPU for upcoming GPU launches.
 
Maybe that 10% is partly because AMD currently lacks ML upscaling.
Yes, but nothing has been holding AMD back from making a proper FSR 2/3 with ML upscaling. It was clearly a dishonest move to take the spotlight away from Nvidia, because AMD's hardware was and is (up to RDNA 3) outdated.
Base PS5 and Xbox users suffer the same fate.

I'm really interested in how AMD would explain why FSR 4 cannot run on RDNA 3. RDNA 3 has "AI accelerators", and an "AI accelerated" workload should run just fine on those GPUs.
 
AMD does have a bit of marketing maneuvering ahead of them to explain why ML suddenly matters and why it doesn't work on anything but the newest GPUs, after explaining the opposite for about five years.

But I don't think it's going to be very hard, because honestly Nvidia had them beaten on that anyway, and essentially admitting it will likely be accepted just fine even by their own "fanbase" at this point.
 
AMD does have a bit of marketing maneuvering ahead of them to explain why ML suddenly matters and why it doesn't work on anything but the newest GPUs, after explaining the opposite for about five years.

But I don't think it's going to be very hard, because honestly Nvidia had them beaten on that anyway, and essentially admitting it will likely be accepted just fine even by their own "fanbase" at this point.

Yeah Nvidia, Intel and Sony have already set the stage. It’s not going to be a big deal at all.
 
Do you not think it would be a big deal for the existing customer base? You bought an RDNA 3 card and now find out that you don't have access to ML upscaling, which is possible on a six-year-old RTX 2060?
 
Do you not think it would be a big deal for the existing customer base? You bought an RDNA 3 card and now find out that you don't have access to ML upscaling, which is possible on a six-year-old RTX 2060?

Those same people have been living without ML upscaling and made a conscious decision to buy AMD in the past. Why would it matter to them now? All that’s happening is they’re getting an upgrade.
 
For upscaling via machine learning you need matrix cores, and these exist in RDNA 3. They're the equivalent of Nvidia's Tensor Cores.
Nvidia's TCs are dedicated units, while AMD's are merged into the general compute cores.
The AMD AI cores were backported from CDNA 2 and are advertised as AI acceleration in APUs with RDNA 3.5.

What happens is that FSR currently makes no use of these matrix cores, so they sit underutilized. However, FSR 4 is likely to work well on RDNA 3, and even better on RDNA 4. Previous generations would be able to use it, but the performance loss may not be worth it.
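To make the hardware point concrete: an ML upscaler's inference pass is dominated by small matrix multiplies, which is exactly what matrix units (AMD's WMMA-capable AI accelerators, Nvidia's Tensor Cores) speed up. A minimal NumPy sketch of one such layer; the shapes, names, and FP16 choice are illustrative assumptions, not FSR 4's actual network:

```python
import numpy as np

def ml_upscaler_layer(features, weights, bias):
    """One fully connected layer of a per-pixel inference network.

    This matmul-plus-activation pattern is what matrix units accelerate
    in hardware; without them it falls back to the general shader ALUs,
    which is the performance cost older GPU generations would pay.
    """
    return np.maximum(features @ weights + bias, 0.0)  # ReLU activation

# Toy example: 4 pixels, 16 input features each, 8 output features,
# in FP16 as a matrix unit would typically consume them.
rng = np.random.default_rng(0)
features = rng.standard_normal((4, 16)).astype(np.float16)
weights = rng.standard_normal((16, 8)).astype(np.float16)
bias = np.zeros(8, dtype=np.float16)
print(ml_upscaler_layer(features, weights, bias).shape)  # (4, 8)
```

Whether that matmul lands on dedicated units or on the general ALUs is a driver/compiler decision, which is why the same network can still run, just slower, on hardware without matrix cores.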
 
Those same people have been living without ML upscaling and made a conscious decision to buy AMD in the past. Why would it matter to them now? All that’s happening is they’re getting an upgrade.
Why introduce an ML upscaling feature at all? AMD sold hardware based on their general-purpose solution. Why advertise "new AI accelerators" with RDNA 3 when they can't run "AI accelerated" workloads?
 
Why introduce an ML upscaling feature at all? AMD sold hardware based on their general-purpose solution. Why advertise "new AI accelerators" with RDNA 3 when they can't run "AI accelerated" workloads?

I’m not sure what you’re trying to say. AMD shipped hardware without ML acceleration in the past so they’re forever banned from doing so in the future?

It’s not like they ran some massive anti-ML marketing campaign. It’s been quite the opposite for their datacenter ambitions.
 
Yes, they actually did this:

No machine learning?

Machine Learning (ML) is not a prerequisite to achieving good quality image upscaling. Often, ML-based real-time temporal upscalers use the model learned solely to decide how to combine previous history samples to generate the upscaled image: there is typically no actual generation of new features from recognizing shapes or objects in the scene. AMD engineers leveraged their world-class expertise to research, develop and optimize a set of advanced hand-coded algorithms that map such relationships from the source and its historical data to upscaled resolution.

The FidelityFX Super Resolution 2 analytical approach can provide advantages compared to ML solutions, such as more control to cater to a range of different scenarios, and a better ability to optimize. Above all, not requiring dedicated ML hardware means that more platforms can benefit, and more gamers will be able to experience FSR 2.
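That quoted description can be made concrete: both the hand-coded and the ML approach reduce to a per-pixel blend of the accumulated history with the current frame, and they differ only in how the blend weight is chosen. A rough NumPy sketch, where the heuristic is an illustrative stand-in, not AMD's actual algorithm:

```python
import numpy as np

def temporal_blend(history, current, weight):
    """Accumulate the current frame into the upscaled history buffer.
    FSR 2-style analytical and ML-based temporal upscalers both reduce
    to this blend; they differ in how `weight` is chosen per pixel."""
    return weight * current + (1.0 - weight) * history

def heuristic_weight(history, neighborhood_min, neighborhood_max):
    """Hand-coded heuristic (illustrative only): trust history less when
    it falls outside the current frame's local color neighborhood, which
    usually signals disocclusion and would otherwise cause ghosting."""
    clamped = np.clip(history, neighborhood_min, neighborhood_max)
    deviation = np.abs(history - clamped)
    # Larger deviation -> lean harder on the current frame.
    return np.clip(0.1 + deviation, 0.1, 1.0)

# A pixel whose history (0.9) disagrees with the local range [0.4, 0.6]
# gets a raised weight, so the stale sample is partially rejected:
print(heuristic_weight(np.array([0.9]), 0.4, 0.6))
```

An ML upscaler replaces `heuristic_weight` with a small learned network that predicts the weight from the same inputs, which is the "combine previous history samples" role the quote describes.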

FSR 2 was a waste of time for everyone. They should have made a proper investment in a forward-looking solution.
 
Do you not think it would be a big deal for the existing customer base? You bought an RDNA 3 card and now find out that you don't have access to ML upscaling, which is possible on a six-year-old RTX 2060?
Those people bought RDNA 3 knowing it can't use ML upscaling. AMD never advertised ML upscaling for RDNA 3.

AMD does have a bit of marketing maneuvering ahead of them to explain why ML suddenly matters and why it doesn't work on anything but the newest GPUs, after explaining the opposite for about five years.

But I don't think it's going to be very hard, because honestly Nvidia had them beaten on that anyway, and essentially admitting it will likely be accepted just fine even by their own "fanbase" at this point.
If I had to guess, FSR 2 is almost certainly going to continue to be maintained (at the very least for consoles), so the move here is that every AMD card released in the past 6-7 years can easily use the old stuff, while new cards can use the new stuff. The market eventually accepted that DLSS 3 is Ada-exclusive, especially since there was a technical reason for it (a similar situation to this one).

Even if there are those who dislike RDNA 2 or the like not supporting FSR 4, they will be vastly outnumbered by the potential market-share gain from creating a really good ML upscaler that can compete with Nvidia.
 
FSR 2 was a waste of time for everyone. They should have made a proper investment in a forward-looking solution.
FSR2 was fine, even good, though it was obviously outclassed by DLSS. It still almost always looks better than straight upscaling, and current gen consoles were never going to get performant AI upscaling.

Ultimately their marketing was somewhat correct, particularly at the time: when FSR 2 released, Ampere and Turing could do DLSS, but the former was basically impossible to buy without botting, and the latter was hard to find second-hand and was expensive at release compared to Pascal. FSR 2 filled the niche of the 1060/1080 Ti owner (or even the 5700 XT owner, lol) who wasn't about to drop $1,000 on a 3080 Ti but nonetheless wanted a decent upscaler. Consoles as well: we aren't getting a new console for a while, and until then FSR 2 will fill the gaps (and with a TV six feet away, FSR 2 honestly doesn't look nearly as bad).
 
FSR 2 was a waste of time for everyone. They should have made a proper investment in a forward-looking solution.

This. AMD is now five years behind Nvidia, and even two years behind Intel. When you look bad compared to Intel you know you missed the turn in Albuquerque.
 
Those people bought RDNA 3 knowing it can't use ML upscaling. AMD never advertised ML upscaling for RDNA 3.
But AMD advertises "new AI accelerators", so I would think that any "ML whatever" workload would be supported by at least RDNA 3.
FSR2 was fine, even good, though it was obviously outclassed by DLSS. It still almost always looks better than straight upscaling, and current gen consoles were never going to get performant AI upscaling.

Ultimately their marketing was somewhat correct, particularly at the time: when FSR 2 released, Ampere and Turing could do DLSS, but the former was basically impossible to buy without botting, and the latter was hard to find second-hand and was expensive at release compared to Pascal. FSR 2 filled the niche of the 1060/1080 Ti owner (or even the 5700 XT owner, lol) who wasn't about to drop $1,000 on a 3080 Ti but nonetheless wanted a decent upscaler. Consoles as well: we aren't getting a new console for a while, and until then FSR 2 will fill the gaps (and with a TV six feet away, FSR 2 honestly doesn't look nearly as bad).
FSR 2 was never "fine" or "even good". It was released two and a half years after DLSS 2 and provided worse image quality. At the end of its lifetime, FSR 2 is basically a meme.
And what has changed between 2022 and 2025 so that AMD's marketing isn't correct anymore? ML is better now than two and a half years ago, even though DLSS 2 was already superior then? And why doesn't it matter anymore that your new software isn't supported by any graphics card outside the newly announced RDNA 4 ones?!

I really can't see AMD gatekeeping FSR 4 to RDNA 4. It would be a marketing nightmare.
 
But AMD advertises "new AI accelerators", so I would think that any "ML whatever" workload would be supported by at least RDNA 3.
AMD never advertised the ML accelerators in RDNA 3 as being able to run any sort of ML upscaling. This is like getting upset that Volta GPUs can't run DLSS (or at least I don't think they can, not officially, since they have an earlier version of the tensor cores, not Turing's).

FSR 2 was never "fine" or "even good". It was released two and a half years after DLSS 2 and provided worse image quality. At the end of its lifetime, FSR 2 is basically a meme.
I disagree. Of course it provided worse image quality; you trade quality for broad compatibility. The comparison isn't DLSS vs. FSR 2, it's bilinear upscaling vs. FSR 2, since those are the options you have on anything that isn't Turing, Ampere, or now Ada. That includes all three of the existing major consoles. There is room for more than one way of doing things, and just because something is inferior doesn't mean it's worthless (indeed, most things on console are lower quality than on a high-end PC).

And what has changed between 2022 and 2025 so that AMD's marketing isn't correct anymore? ML is better now than two and a half years ago, even though DLSS 2 was already superior then?
What has changed between 2022 and 2025 is three years, lol. I'm not really sure what you guys are asking for at this point: FSR supposedly sucks, so AMD tries to pivot to how Nvidia does it, but somehow that's wrong too? Do you want a broad but inferior solution à la FSR 2, or an exclusive but superior solution like DLSS?

It goes without saying that yes, DLSS and other ML-based upscalers have improved drastically since 2022, while FSR2 has essentially plateaued.

And why doesn't it matter anymore that your new software isn't supported by any graphics card outside the newly announced RDNA 4 ones?!
If you want an ML-based solution that's fast and high quality, it's not going to run on cards without capable ML hardware.

I really can't see AMD gatekeeping FSR 4 to RDNA 4. It would be a marketing nightmare.
Seems to work fine for Nvidia, which has come out with a new hardware-exclusive feature almost every generation since Turing. Ideally, yes, they'd figure out how to get FSR 4 working on every GPU ever manufactured, but it's a trade-off between quality, performance, and compatibility. You can pick two: DLSS picks quality and performance, FSR 2/3 picks performance and compatibility, and XeSS (DP4a) picks quality and compatibility. FSR 4 will likely have to pick quality and performance; otherwise, if they prioritize compatibility like they did with FSR 2, they will end up in the same spot, with an inferior solution nobody wants to use when they have a choice.
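The pick-two framing can be written out explicitly. A small sketch; the classifications below just restate the argument above, they are not official vendor specifications:

```python
# Each upscaler picks two of the three properties; the third is traded away.
# These labels restate the forum argument, not any vendor's official spec.
PROPERTIES = {"quality", "performance", "compatibility"}

UPSCALERS = {
    "DLSS": {"quality", "performance"},
    "FSR 2/3": {"performance", "compatibility"},
    "XeSS (DP4a)": {"quality", "compatibility"},
    "FSR 4 (expected)": {"quality", "performance"},
}

for name, picks in UPSCALERS.items():
    traded_away = (PROPERTIES - picks).pop()
    print(f"{name}: trades away {traded_away}")
```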
 
AMD never advertised the ML accelerators in RDNA 3 as being able to run any sort of ML upscaling. This is like getting upset that Volta GPUs can't run DLSS (or at least I don't think they can, not officially, since they have an earlier version of the tensor cores, not Turing's).
Hm, now the question is whether "ML upscaling" counts as one of the "most popular AI applications":

There is even a dedicated webpage for the AI "capabilities" of RDNA 3:

The Next Frontier of AI


With AMD Radeon™, users can harness the power of on-device AI processing to unlock new experiences and gain access to personalized and fast AI performance.

I disagree. Of course it provided worse image quality; you trade quality for broad compatibility. The comparison isn't DLSS vs. FSR 2, it's bilinear upscaling vs. FSR 2, since those are the options you have on anything that isn't Turing, Ampere, or now Ada. That includes all three of the existing major consoles. There is room for more than one way of doing things, and just because something is inferior doesn't mean it's worthless (indeed, most things on console are lower quality than on a high-end PC).
"Broad compatibility" which AMD may be killing today. That brings us back to the point: FSR 2 exists because AMD sold outdated GPUs into different markets. Now, with proper hardware, all those marketing points don't matter anymore.

Instead of making FSR 2 a non-ML upscaler, they should have gone the right way from the start and communicated that ML upscaling is the future, even if it might not be as fast as on competing hardware.
 
"Broad compatibility" which AMD may be killing today. That brings us back to the point: FSR 2 exists because AMD sold outdated GPUs into different markets. Now, with proper hardware, all those marketing points don't matter anymore.

Instead of making FSR 2 a non-ML upscaler, they should have gone the right way from the start and communicated that ML upscaling is the future, even if it might not be as fast as on competing hardware.
If one FSR API can support both methods, then I don't see the issue.
 
Hm, now the question is whether "ML upscaling" counts as one of the "most popular AI applications":
Nowhere in the marketing for RDNA 3 did it say that FSR 4, or any ML upscaler, would be supported.

"Broad compatibility" which AMD may be killing today.
Yes… because people are demanding they come out with a true DLSS competitor. What do you want them to do? You didn't like their generic upscaler, and now their hardware-specific one isn't generic enough?
 