AMD: RDNA 3 Speculation, Rumours and Discussion

What hype? I followed and read every article and forum thread about Big Navi since the name was invented.


When the first RDNA2 benchmark leak appeared, IIRC it showed performance 15~20% above a heavily overclocked 2080 Ti, which at the time was the fastest GPU. The results were immediately and unanimously dismissed as fake. The sentiment was that Big Navi would barely match the 2080 Ti, the then-current performance king. Buildzoid even made a video arguing that an 80 CU GPU from AMD would not be possible under 500 W.
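To put rough numbers on where that skepticism came from, here is the kind of back-of-envelope math being thrown around at the time. This is just my own sketch using Navi 10's public specs, not Buildzoid's actual figures:

[code]
# Rough sketch of the "80 CUs can't fit under 500 W" sentiment, assuming RDNA1
# efficiency and naive linear power scaling with CU count (illustrative numbers only).
navi10_cus = 40
navi10_board_power_w = 225          # RX 5700 XT typical board power
target_cus = 80

naive_power = navi10_board_power_w * target_cus / navi10_cus
print(f"~{naive_power:.0f} W before a wider memory bus or higher clocks")  # ~450 W
# Add memory and clock overhead on top of that, and "not under 500 W" doesn't
# sound crazy at RDNA1 efficiency.
[/code]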

After that, Ampere was released with an absurd performance increase. No one believed with a straight face that Big Navi would compete with Ampere. This was the "hype" for RDNA2. Anyone saying otherwise is lying.


People only started believing performance would be good very close to release time. Even then, the reviews caught everyone by surprise.
I think you should be careful about who you're talking about. There were no 'unanimous' beliefs on this stuff, and plenty of us just remained cautious on all the really early rumors rather than taking firm stances on anything.

There were also a whole lot of rumors/leaks to pick through.

Like, even early on, I had no issue believing a larger RDNA2 GPU could do +20% over a 2080 Ti. Frankly, it seemed somewhat inconceivable that they couldn't, even if it might take 300 W or so. Those who thought RDNA2 couldn't even match a 2080 Ti mostly seemed to be in the camp that Navi 10 was basically the best AMD could do, rather than realizing it was just a midrange part and that AMD hadn't even attempted a high-end RDNA product yet. Basically, people who were fairly ignorant of what was going on.

I also was not convinced that Ampere would be some unreachable benchmark. AMD's claim that they were achieving the same 50% increase in performance-per-watt as they had going from GCN5 to RDNA1 seemed quite bullish, and while I was cautious about it at first, the console spec announcements made me believe they weren't BS'ing us; those GPU specs just wouldn't have been possible without major efficiency improvements. Combined with the rumors (and subsequent confirmation) that Nvidia was only on Samsung 8nm rather than TSMC 7nm, it definitely helped me believe RDNA2 wouldn't embarrass itself as a high-end competitor.
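For illustration, here is the rough arithmetic that made the claim click for me; the 50% figure is AMD's own marketing number and the Navi 10 baseline is just its public spec, so treat this as a sketch rather than anything rigorous:

[code]
# Hypothetical sketch: what a +50% perf/W uplift does to a naive 80 CU power estimate.
naive_80cu_rdna1_w = 450.0    # linear 2x scaling of Navi 10's 225 W board power (assumed)
perf_per_watt_gain = 0.5      # AMD's claimed generational uplift

# The same performance at 1.5x the efficiency needs only 1/1.5 of the power...
iso_perf_power = naive_80cu_rdna1_w / (1.0 + perf_per_watt_gain)
print(f"~{iso_perf_power:.0f} W for the same naive performance target")  # ~300 W

# ...or, at a fixed power budget, you simply get 1.5x the performance.
print(f"{1.0 + perf_per_watt_gain:.1f}x performance at the same power")
[/code]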

You can accuse me of lying, but I have a long enough post history detailing all this stuff on Reddit if anybody wanted to waste their time confirming it.

And getting back to RDNA3 rumors, I'm definitely listening to some of them. I'll take them seriously from those with some kind of track record, though I'll always stay cautious until more of the picture comes in to support them. Just stay away from the Youtubers.
 
Yes there was a lot of skepticism over how much AMD could improve on the 5700xt in one go. There was also an immense amount of hype among the YouTube crowd on performance and availability. Particularly after Ampere dropped with less than stellar performance and availability and Infinity Cache was confirmed.

To answer the original question, reality is probably somewhere south of the rosy predictions. It's been the case for nearly every release. See Ampere and its "30 teraflops".

Some websites' and YouTubers' view counts exploded with each RDNA2 video while the hype train was rolling. After that, the views dipped. Some channels want them back and are already hyping up RDNA3 even without new information. :/

I'm sure AMD likes the attention, but I'm not sure they like the hype while the GPU is nowhere close to release yet...
 
Yes there was a lot of skepticism over how much AMD could improve on the 5700xt in one go. There was also an immense amount of hype among the YouTube crowd on performance and availability. Particularly after Ampere dropped with less than stellar performance and availability and Infinity Cache was confirmed.

To answer the original question, reality is probably somewhere south of the rosy predictions. It's been the case for nearly every release. See Ampere and its "30 teraflops".

I still think you are misrepresenting the sentiment back then.

Do you recall the first rumors of the PS5's "RDNA2" clocking above 2 GHz being utterly shot down? Or how any rumor saying Big Navi could clock above 2.4 GHz suffered the same treatment? Even Infinity Cache was instantly branded a "secret sauce" meme to make fun of the AMD crowd. The barometer of the sentiment back then was how the Internet was set on fire by surprise when Mark Cerny revealed the PS5's 2.23 GHz clock speed. 2.23 GHz is a measly clock compared to the 2.7 GHz monsters we see regularly.

RDNA2 was so under-hyped it's almost bizarre how much stronger the final product was. It took a while for people to get their heads around what happened.
 
Ehhh, RMB/PHX will start inching there pretty heavily I'd say.
Yeah, personally I think the GPU might be good enough for what is needed.
But state of the art requires higher specs, and they surely could increase the CU count easily. Power consumption does not need to be 15 W for gaming.
 
You can accuse me of lying, but I have a long enough post history detailing all this stuff on Reddit if anybody wanted to waste their time confirming it.

And getting back to RDNA3 rumors, I'm definitely listening to some of them. I'll take them seriously from those with some kind of track record, though I'll always stay cautious until more of the picture comes in to support them. Just stay away from the Youtubers.
No one is accusing people of lying. I am accusing people of mixing up their time frames when recalling the hype levels for RDNA2. It was not over-hyped; it was severely under-hyped almost until the release date. It's just how it was. See my post above.

People are projecting how they see AMD today into the past.
 
I still think you are misrepresenting the sentiment back then.

Do you recall the first rumors of the PS5's "RDNA2" clocking above 2 GHz being utterly shot down? Or how any rumor saying Big Navi could clock above 2.4 GHz suffered the same treatment? Even Infinity Cache was instantly branded a "secret sauce" meme to make fun of the AMD crowd. The barometer of the sentiment back then was how the Internet was set on fire by surprise when Mark Cerny revealed the PS5's 2.23 GHz clock speed. 2.23 GHz is a measly clock compared to the 2.7 GHz monsters we see regularly.

RDNA2 was so under-hyped it's almost bizarre how much stronger the final product was. It took a while for people to get their heads around what happened.

I’m not sure why you’re only remembering the skeptics. There were just as many true believers.

I think people will be generally less skeptical of AMD's engineering prowess this time around. They have multiple generations of chiplet architecture under their belt, so we "know" they can build a chiplet-based GPU. The question is how well it will scale.
 
I think AMD learned the hard way that making only Polaris-range SKUs is not enough to generate customer interest unless there is a halo SKU, never mind the fact that most buyers will buy the lower-end SKUs anyway.
This time they seem geared up to handle that: some halo multi-die N5P unobtainium SKU, while the rest are smaller and cheaper SKUs fabbed on N6.




https://seekingalpha.com/article/44...esents-jpmorgan-49th-annual-global-technology
Finally. The dGPU in my XBO will at last be engaged. It's been sitting dormant all these years waiting for an official announcement.
 
Would you kindly keep this thread about RDNA3 speculation and drop all the non-relevant side discussions?

It's perfectly fine for a thread to not have new posts when there is no new information or speculation related to it. There is no need to force a thread along by dragging in other discussions.
 
How legit were the RDNA2 rumors? Infinity Cache was the real deal, but everything else didn't really live up to the hype.
They weren't so bad.
a) feature set parity
b) non-DXR performance is fairly competitive
c) RT performance is in the same ballpark as first-gen RT cores

From my viewpoint, the RDNA 2 series is performing quite well.

IMO, the overhype comes from whether people expect RDNA 3 to be as big a jump as GCN -> RDNA was.
There were 4 GCN versions, but each one was just a small evolution of the last.

RDNA 2 is an evolution of 1, with better performance per watt and feature set improvements.

Is the expectation for RDNA 3 too much? This part I'm not sure about. If it's a chiplet / HBM strategy, does that become a new architecture entirely? If so, is it still really RDNA3?

If it's just RDNA 3, then my expectations, just going by the GCN numbering, are much lower. I think we'll just see more evolution.
 
IMO, the overhype comes from whether people expect RDNA 3 to be as big a jump as GCN -> RDNA was.
Mooooooooore, way more.
If so, is it still really RDNA3?
Any gfx11 part is that.
MCP ones are just extra mean on top of that.
If it's just RDNA 3, then my expectations, just going by the GCN numbering, are much lower
This whole ordeal is led by AMD CPU people you see.
Every odd number is a fresh new swanky core redesign.
 
We can't just ignore the fact that xtor cost scaling is kinda fucking dead.
Which hurts GPUs like a lot.
We're all so used to the dGPU duo cranking out >450 mm^2 parts like it's no one's business, but that time is steadily coming to a close.
Yes, MCP GPUs will make costs sane per tile, but the whole idea is throwing moar silicon at the problem, thus we're back to square one.
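Quick toy math on the "sane per tile" bit; the wafer price, defect density, and the simple Poisson yield model below are all made-up-for-illustration assumptions, not real foundry numbers:

[code]
import math

# Toy die-cost model; every constant is an illustrative assumption.
WAFER_COST = 17000.0                  # assumed leading-edge 300 mm wafer price, USD
WAFER_AREA = math.pi * 150.0 ** 2     # mm^2, ignoring edge loss and scribe lines
DEFECT_DENSITY = 0.002                # defects per mm^2 (0.2 per cm^2), assumed

def cost_per_good_die(die_area_mm2):
    """Wafer cost divided by yielded dies, using a simple Poisson yield model."""
    dies_per_wafer = WAFER_AREA / die_area_mm2
    die_yield = math.exp(-DEFECT_DENSITY * die_area_mm2)
    return WAFER_COST / (dies_per_wafer * die_yield)

monolithic = cost_per_good_die(500)        # one big 500 mm^2 die
chiplets   = 2 * cost_per_good_die(250)    # same silicon split into two 250 mm^2 tiles
print(round(monolithic), round(chiplets))  # roughly ~330 vs ~200 with these assumptions
# Cheaper per good mm^2, but throw more total silicon at the problem plus the
# packaging/substrate cost and the savings get eaten -- the square-one point above.
[/code]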

Well... I'd buy this for a GPU having to cost $900, but it's not an excuse I'll accept for >$2000, which is what you suggested before.
I'm not worried that the next 7900XT goes over $2000 for a "Titan-class" unobtainable halo product with both GPCs fully enabled. I'm worried that the next 7800XT goes over $1000 and the next 7800 goes to >$750.

I.e. I'm "worried" that we - the consumers - are getting progressively less performance upgrade per dollar on every GPU family iteration.
Yes, making high-end GPUs is getting super expensive, yes the packaging must be super expensive, and the memory, and the mm^2 on TSMC's top-end nodes. But I doubt this is making these graphics cards >$600 to produce.
I wouldn't take those excuses for Nvidia, and I won't take them for AMD either.

Prices are climbing because AMD is on a roll and they're probably promising QoQ and YoY record results that aren't obtainable unless they start milking their customers dry. Everyone needs to be an Apple nowadays, because infinite growth is something investors have gradually learned to expect from tech companies.
That's fine and all, and AMD exists to make as much money as possible. My point here is that they might be killing the cow this way. We're already seeing AMD officials acknowledge that the inability to buy dGPUs at MSRP for a long period of time is driving people away from PC gaming altogether (TBH, I'm one of them). How does that get better if they decide to hike prices for their 2022 releases?


Sure, they won the PlayStation and Xbox designs, but obviously the margins they get from those are much lower than what they get from dGPUs.
If anything, winning the console designs should have been a catalyst to drive more customers towards their dGPUs, thanks to having more devs optimizing for their architecture, not the other way around.


That said, if RDNA3 isn't bringing more performance-per-dollar than RDNA2 at MSRP, then how do they become successful at all? The planet doesn't have an infinite number of whales.
It seems Nvidia learned that lesson during the costly Pascal -> Turing transition, but if what you're saying is true (and I understood it right), I think AMD wasn't paying attention.



No forgetti the almighty substrates.
They're very much gold now.
Yes, another reason why Intel probably has their hands tied when it comes to mounting significant competition against RDNA3 (and Lovelace?) cards. They're all bottlenecked by the same factors, and apparently will be throughout most of 2022.



What I want is a nice cheap SoC for PC, not monster GPUs. I wonder if some contract with the console makers prevents AMD from making one. >:/
IIRC this was mostly due to memory bandwidth limitations. Socketed APUs only use standard DIMMs, and 128-bit DDRx was never comparable to GDDRx on the wider buses we see on graphics cards (rough numbers sketched below).
Intel tried to get around that limitation with Crystalwell (after Apple pretty much demanded it, I believe), but without great results.
At some point AMD even put a couple of GDDR5 PHYs in Kaveri, but IIRC they were meant for use with non-standard GDDR5M chips that were to be produced by Infineon (and that would probably have needed to be soldered onto specific motherboards), but that whole plan got discarded when Infineon went under.
Had we seen GDDR5M motherboards on the market, we might have gone a completely different way in terms of what to expect from a desktop APU.
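To put rough numbers on that bandwidth gap, here is a quick sketch; the peak-bandwidth formula is standard, and the specific memory configurations are just typical examples from that era:

[code]
def peak_bandwidth_gb_s(bus_width_bits, transfer_rate_gtps):
    """Peak bandwidth = bus width in bytes x transfer rate in GT/s."""
    return bus_width_bits / 8 * transfer_rate_gtps

# Typical socketed APU: dual-channel (128-bit) DDR4-3200
print(peak_bandwidth_gb_s(128, 3.2))    # ~51 GB/s

# Typical mid-range dGPU of the same era: 256-bit GDDR6 at 14 Gbps
print(peak_bandwidth_gb_s(256, 14.0))   # 448 GB/s -- nearly an order of magnitude more
[/code]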



Oh well, you know, it's public now!
So this is Shortcake, which you say is also being used as a pipe cleaner for the stacked chiplet designs we're going to see in Navi 31?
 
I.e. I'm "worried" that we - the consumers - are getting progressively less performance upgrade per dollar on every GPU family iteration.
Welcome to Moore's Law being dead.
Yes, aggressive chiplet usage keeps the costs sorta sane in places, but we all want more while essentially fucking a corpse.
they're probably promising QoQ and YoY record results that aren't obtainable unless they start milking their customers dry
OH HELL YEAH THEY ARE.
AMD is so hilariously capacity-limited atm you wouldn't even believe me.
That said, if RDNA3 isn't bringing more performance-per-dollar than RDNA2 at MSRP
Oh but it will.
Just not the funny MCP parts made out of expensive and wonky.
So this is Shortcake, which you say is also being used as a pipe cleaner for the stacked chiplet designs we're going to see in Navi 31?
Yes.
Cores sitting on top of a stacked pile of SRAM for the LLC.
 
Is that reversed from what we've normally seen in the past, which is memory stacked on top of cores?

If you're talking about smartphone SoCs, those are 100 mW stacks of LPDDRx on top of <5 W application processors. And the Vita with Wide I/O.

These are completely different beasts, and it's SRAM, not DRAM.
 