AMD RDNA4 potential product value

These rumors are like Nigerian email scams. They literally say "Nvidia production line" six times in two paragraphs. Last time I checked, Nvidia doesn't have any factories.
Someone asked why Ada pricing and availability have gone to hell. You can choose not to believe production has stopped, but then what do you think the cause of the low supply is?
 
Someone asked why Ada pricing and availability have gone to hell. You can choose not to believe production has stopped, but then what do you think the cause of the low supply is?

My first assumption would be Christmas. But I have as much hard data to back up that assumption as the guy talking about “Nvidia production lines”, so you certainly shouldn’t take my word for it.
 
The fact that you were able to restate my point but with prices instead of naming means you understand exactly what I’m talking about: you used to get a lot more uplift for less money compared to now.

Yes, perf/$ has gotten worse recently, which is the opposite of what we should expect. Your original question, though, was whether we can imagine a future 70-class card besting the prior flagship. Those two things are unrelated: we can have terrible perf/$ and still have a 70 card that beats the last 90 card, because the positioning of the 70 card in any generation is arbitrary.

AMD's naming scheme has been schizophrenic at best (see the 9070XT lmao), so yeah, I would never go by what they do; the segmentation changes every generation. This wasn't the case on the NV side for quite a while, though, with the only big change being the rebrand of the Titan series to the 90 series.

Nvidia is just as bad.

The 4090 is 90% faster than the 4070, but the 3090 is only 47% faster than the 3070. You can't identify trends based on names alone.
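
To spell out the arithmetic: "X% faster" is just the ratio of average performance minus one. A quick sketch, using placeholder index values chosen only to reproduce the quoted gaps, not real benchmark data:

```cpp
#include <cstdio>

// Placeholder performance indices, scaled so each x70 card = 100.
// These are illustrative numbers, not actual review results; only
// the ratios matter: "X% faster" = (a / b - 1) * 100.
int main() {
    const double rtx4090 = 190.0, rtx4070 = 100.0;  // -> 90% gap
    const double rtx3090 = 147.0, rtx3070 = 100.0;  // -> 47% gap
    std::printf("4090 vs 4070: %.0f%% faster\n", (rtx4090 / rtx4070 - 1.0) * 100.0);
    std::printf("3090 vs 3070: %.0f%% faster\n", (rtx3090 / rtx3070 - 1.0) * 100.0);
}
```

Same tier names, wildly different gaps between generations, which is the point.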
 
Yes, perf/$ has gotten worse recently, which is the opposite of what we should expect. Your original question, though, was whether we can imagine a future 70-class card besting the prior flagship. Those two things are unrelated: we can have terrible perf/$ and still have a 70 card that beats the last 90 card, because the positioning of the 70 card in any generation is arbitrary.
In general, 70-tier cards have occupied a specific price category and carried performance expectations relative to the previous generation (i.e., we used to be able to expect a 70-tier card to beat the prior flagship, like you said). With the 40 series (and onwards, it seems) this is no longer the case, as 70-tier cards are now significantly more expensive than they used to be and less powerful relative to the previous generation.

Nvidia is just as bad.

The 4090 is 90% faster than the 4070, but the 3090 is only 47% faster than the 3070. You can't identify trends based on names alone.
Well, my point is that the 40 series deviated from how things used to be, so yeah, the fact that the 4090 was so much more powerful than basically everything else in the stack is kinda my point.

That said, Nvidia is not just as bad here lol. AMD sticks with a naming scheme for three generations at most, then abandons it for some reason or another (HD 7000 -> 2X0 -> 3X0 -> 4X0 -> 5X0 (refresh) -> 5X00 (??) -> 6X00 -> 7X00 -> 90X0 (???)). Nvidia has kept the same scheme for over a decade at this point.
 
Yes, perf/$ has gotten worse recently, which is the opposite of what we should expect. Your original question, though, was whether we can imagine a future 70-class card besting the prior flagship. Those two things are unrelated: we can have terrible perf/$ and still have a 70 card that beats the last 90 card, because the positioning of the 70 card in any generation is arbitrary.



Nvidia is just as bad.

The 4090 is 90% faster than the 4070, but the 3090 is only 47% faster than the 3070. You can't identify trends based on names alone.
The GTX 680 also throws a wrench in his naming scheme assumptions.
 
They already had 8x00 and 9x00 cards back in the early days and want to avoid duplicate numbers.
Considering how often AMD duplicates numbers between Ryzen and Radeon, this seems suspect. Plus, I don't think anyone is confusing these products with 15-year-old cards.


The GTX 680 also throws a wrench in his naming scheme assumptions.
If you have to go back to the 680 to find outliers, then that kinda proves my point lol. People generally know what a 70-class card is because Nvidia usually slots it into a specific price and performance category.
 
Considering how often AMD duplicates numbers between Ryzen and Radeon, this seems suspect. Plus, I don't think anyone is confusing these products with 15-year-old cards.



If you have to go back to the 680 to find outliers, then that kinda proves my point lol. People generally know what a 70-class card is because Nvidia usually slots it into a specific price and performance category.
Agreed -- at this point it just seems like those arguing against this are doing it purely for semantics, pedantry, and the sake of arguing. It's getting boring.

Your points essentially capture what Gamers Nexus covered a few years ago [video], and they followed up with a focus on Nvidia [video].

But yes: Can we get back to discussing the upcoming "RX 9070"?
 
In general, 70-tier cards have occupied a specific price category and carried performance expectations relative to the previous generation (i.e., we used to be able to expect a 70-tier card to beat the prior flagship, like you said). With the 40 series (and onwards, it seems) this is no longer the case, as 70-tier cards are now significantly more expensive than they used to be and less powerful relative to the previous generation.

The 670, 770, 970 and 2070 didn’t beat the prior flagship at launch. The 3070 matches the 2080 Ti. Is this expectation based just on Pascal?

I’m not disagreeing with your general point that perf/$ has regressed but your assertions about 70 series performance expectations seem way off.
 
The 670, 770, 970 and 2070 didn’t beat the prior flagship at launch. The 3070 matches the 2080 Ti. Is this expectation based just on Pascal?

I’m not disagreeing with your general point that perf/$ has regressed but your assertions about 70 series performance expectations seem way off.
I've no stake in this argument, but the 670 beat the 580, the 770 slightly beat the 680 (ignore the 690 since it's whack), and the 970 beat the 780, although it's hard to tell if that was always the case in launch titles with launch drivers. Ultimately, all these x70-class cards were superior to the previous x80 cards.
 
I've no stake in this argument, but the 670 beat the 580, the 770 slightly beat the 680 (ignore the 690 since it's whack), and the 970 beat the 780, although it's hard to tell if that was always the case in launch titles with launch drivers. Ultimately, all these x70-class cards were superior to the previous x80 cards.

The 580 and 780 were not the flagships of their generations. The complaint was that the 5070 won’t beat the 4090.
 
The 580 and 780 were not the flagships of their generations. The complaint was that the 5070 won’t beat the 4090.
Close enough, not counting the SLI abominations. Certainly the 970 was closer to the 780 Ti or Titan than the 5070 will be to the 4090 :yep2:
 
The 3070 matches the 2080 Ti
Matches in raster, but I think it beats it in RT in some cases. Either way, I'd be fine with it matching; if we could get a 5070 that matches or exceeds the 4090, that would be excellent in my eyes. However, I think it matching the 4080S is probably more likely.

We also had the 3060 Ti matching the 2080, which I remember being a pretty good deal at MSRP; you couldn't actually find it at those prices, however.
 
Final words on the 9070XT: slightly faster than the 7900XT; FSR4 will be exclusive to RDNA4 GPUs. The final price will be revealed at CES.
It would be the most anti-consumer move to gatekeep FSR4 behind RDNA4 since Nvidia's GTX 970 scam. Over the last three years AMD has been praised for bringing upscaling to GPUs without tensor cores. And now they would abandon their customer base after it has supported AMD for the last six years? AMD has even been advertising those RDNA 3 "AI Cores".

It would be a typical "jump the shark" moment for AMD. I don't know why any AMD customer would trust and support them anymore.
 
It would be the most anti-consumer move to gatekeep FSR4 behind RDNA4
It's the only logical move: FSR4 relies on machine learning to increase performance, and it needs ML cores to do its thing; running on shader cores would make it slow, negating its advantage. It's no more anti-consumer than NVIDIA needing Ada to do DLSS Frame Generation, or Intel needing Arc to do their XeSS Frame Generation.
 
I'm more interested in what that would mean for previous AMD GPUs. Will FSR4 have a fallback, or will developers need to implement both FSR3 and FSR4 now?
 
It's the only logical move: FSR4 relies on machine learning to increase performance, and it needs ML cores to do its thing; running on shader cores would make it slow, negating its advantage. It's no more anti-consumer than NVIDIA needing Ada to do DLSS Frame Generation, or Intel needing Arc to do their XeSS Frame Generation.
Maybe from a technological standpoint, but from a business perspective it isn't. AMD built a customer base around the non-ML approach, with upscaling and frame generation running on the compute units. These customers are used to proper support from AMD. And AMD has advertised matrix acceleration units in RDNA3:
With the AMD RDNA™ 3 architecture featured on the AMD Radeon RX 7000 Series graphics cards, experience the next-generation advancements in GPU design with new AI accelerators on the world’s first chiplet GPU design for gamers.

Nvidia made a clean cut with Turing. And because that happened six years ago, everyone who has upgraded to a new GPU since has access to DLSS upscaling.
 
Will FSR4 have a fallback, or will developers need to implement both FSR3 and FSR4 now?
Yes, the FSR SDK will be upgraded to FSR4, with a fallback to FSR3.1 on unsupported hardware.
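
If that's how it ships, a minimal sketch of what it could look like from the game's side, assuming a single SDK integration with a capability check; all of these identifiers are hypothetical, not the real FSR SDK API:

```cpp
// Hypothetical sketch of the FSR4-with-FSR3.1-fallback model described
// above. None of these names are the actual FSR SDK API; they only
// illustrate the idea: one integration, two backends.

enum class UpscalerBackend { Fsr4Ml, Fsr31Compute };

struct GpuCaps {
    bool hasMlAccelerators;  // assumed true on RDNA4, false on RDNA1/2/3
};

// Pick the ML path where the hardware supports it; otherwise fall
// back to the shader-based FSR 3.1 path on older GPUs.
UpscalerBackend selectBackend(const GpuCaps& caps) {
    return caps.hasMlAccelerators ? UpscalerBackend::Fsr4Ml
                                  : UpscalerBackend::Fsr31Compute;
}
```

That would be the same one-integration, two-backends pattern XeSS uses with its XMX and DP4a paths.
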
but from a business perspective it isn't. AMD built a customer base around the non-ML approach, with upscaling and frame generation running on the compute units. These customers are used to proper support from AMD
From a business perspective, AMD has a 10% market share and the number of FSR-supported games is significantly smaller than for DLSS, so they are not losing much. Besides, all of those RDNA1/2/3 GPUs will still have access to compute upscaling; nothing has changed, except that ML upscaling will be exclusive to RDNA4. It's the same business model as XeSS DP4a and XeSS XMX.

If we're asking who is going to support FSR4 on RDNA4 with its initially low market share, the answer is that Intel managed to get XeSS into dozens of games with essentially zero market share. Similarly, AMD will have no trouble supporting a model of legacy FSR3 plus modern FSR4 across dozens of games.
Nvidia made a clean cut with Turing. And because that happened six years ago, everyone who has upgraded to a new GPU since has access to DLSS upscaling.
I guess this is AMD's clean-cut moment. They had to do it eventually, and better sooner than later, before the burden gets too big.
 
And you think those 10% will switch over because of FSR4? They would have done that already, with a proper Nvidia card. Those 10% are here because they support AMD's generic approach to these problems.
 
And you think those 10% will switch over because of FSR4? They would have done that already, with a proper Nvidia card. Those 10% are here because they support AMD's generic approach to these problems.

Maybe it’s 10% partly because AMD currently lacks ML upscaling.
 