DegustatoR
> Those are real for say
It's not about how "real" they are, it's about people setting themselves up for unrealistic expectations going off some single metric like FP32 FLOPs or the number of CUs.
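As an aside, the arithmetic behind that headline metric is trivial, which is exactly why it misleads: peak FP32 FLOPs is just CUs × lanes × ops-per-clock × clock. A minimal sketch, assuming RDNA-style CUs (64 FP32 lanes per CU, 2 FLOPs per clock via FMA); the example configurations are illustrative, not leaked specs:

```python
# Peak FP32 throughput from headline specs alone; illustrative numbers,
# assuming RDNA-style CUs: 64 FP32 lanes per CU, 2 FLOPs/clock per lane (FMA).

def peak_fp32_tflops(cus, clock_ghz, lanes_per_cu=64, flops_per_lane=2):
    """TFLOPs = CUs * lanes * FLOPs-per-clock * clock (GHz) / 1000."""
    return cus * lanes_per_cu * flops_per_lane * clock_ghz / 1000.0

print(peak_fp32_tflops(80, 2.25))  # 80 CU @ 2.25 GHz -> 23.04 TFLOPs
print(peak_fp32_tflops(40, 1.90))  # 40 CU @ 1.90 GHz ->  9.73 TFLOPs
```

Everything that actually decides game performance (bandwidth, caches, front-end throughput, drivers) is absent from that formula.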
I think you should be careful who you're talking about. There were no 'unanimous' beliefs on this stuff. And plenty of us just remained cautious on all the really early rumors rather than taking any firm stances on anything.

What hype? I followed and read every article and forum about big navi since the name was invented.
When the first RDNA2 benchmark leak appeared, IIRC it showed performance 15~20% above a very overclocked 2080 Ti, which at the time was the fastest GPU. Immediately, the results were unanimously thought to be fake. The sentiment was that Big Navi would barely match a 2080 Ti, the current performance king. Buildzoid even made a video demonstrating how an 80 CU GPU from AMD would not be possible under 500 watts.
After that, Ampere was released with an absurd performance increase. No one believed with a straight face that Big Navi would fight with Ampere. This was the "hype" for RDNA2. Anyone saying otherwise is lying.
People only started believing performance would be good very close to release time. Even then, the reviews caught everyone by surprise.
Yes, there was a lot of skepticism over how much AMD could improve on the 5700 XT in one go. There was also an immense amount of hype among the YouTube crowd about performance and availability, particularly after Ampere dropped with less-than-stellar performance and availability and Infinity Cache was confirmed.

To answer the original question, reality is probably somewhere south of the rosy predictions. It's been the case for nearly every release. See Ampere and its "30 teraflops".
> I think you should be careful who you're talking about.
The average (say, plebbit) opinion was: it sucks, TU102 perf, blah blah.
"Hurr durr a picosecond boost!".Mark Cerny revealed the 2.23ghz clock speed of the PS5.
> Ehhh, RMB/PHX will start inching there pretty heavily I'd say.
Yeah, personally I think the GPU might be good enough for what is needed.
> Power consumption does not need to be 15W for gaming.
Good thing fresher AMD IP scales pretty nicely with watts pumped into it.
> You can accuse me of lying, but I have a long enough post history detailing all this stuff on Reddit if anybody wanted to waste their time confirming it.
No one is accusing people of lying. I am accusing people of mixing up their time frames when recalling the hype levels of RDNA2. It was not over-hyped; it was severely under-hyped almost until the release date. It's just how it was. See my post above.
And getting back to RDNA3 rumors, I'm definitely listening to some of them. I'll take them seriously from those with some kind of track record, though I'll always be cautious until more of the picture comes in to support them. Just stay away from the YouTubers.
I still think you are misrepresenting the sentiment back then.
Do you recall the first rumors of the PS5's "RDNA2" clocking above 2 GHz being utterly shot down? Or how any rumor saying Big Navi could clock above 2.4 GHz suffered the same treatment? Even Infinity Cache was instantly branded a "secret sauce" meme to make fun of the AMD crowd. The barometer of the sentiment back then was how the Internet was set on fire by surprise when Mark Cerny revealed the 2.23 GHz clock speed of the PS5. 2.23 GHz is a measly clock speed compared to the 2.7 GHz monsters we see regularly.
RDNA2 was so under-hyped it's almost bizarre how much stronger the final product was. It took a while for people to get their heads around what happened.
Finally. My dGPU on my XBO will at last be engaged. It's been sitting dormant all these years waiting for the official announcement.

I think AMD learnt the hard way that making only Polaris-range SKUs is not good enough to evoke customer interest unless there is a halo SKU, never mind the fact that most buyers will buy the lower-end SKU anyway.
This time they seem geared up to handle that: some halo multi-die N5P unobtainium SKU, while the rest are smaller and cheaper SKUs fabbed on N6.
https://seekingalpha.com/article/44...esents-jpmorgan-49th-annual-global-technology
> How legit were the RDNA2 rumors? Infinity cache was the real deal but everything else didn't really live up to the hype.
They weren't so bad.
> imo, the overhype comes from whether people expect RDNA 3 to be as big of a jump as GCN -> RDNA was.
Mooooooooore, way more.
> If so is it still really RDNA3?
Any gfx11 part is that.
> If it's just RDNA 3 then my expectations, just looking at GCN numbering, are much lower
This whole ordeal is led by AMD CPU people, you see.
We can't just ignore the fact that xtor cost scaling is kinda fucking dead.
Which hurts GPUs like a lot.
We're all so used to the dGPU duo cranking out >450mm² parts like it's no one's business, but that time is steadily coming to a close.
Yes. MCP GPUs will make costs sane per tile, but the whole idea is throwing moar silicon at the problem, thus we're back to square one.
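To see both halves of that argument, here's a toy die-cost sketch using the common Poisson yield approximation; the wafer price and defect density are made-up assumptions, not foundry data:

```python
import math

# Toy die-cost model; wafer price and defect density are made-up assumptions,
# not real foundry numbers. Yield uses the simple Poisson approximation.
WAFER_COST = 17000                 # $ per wafer (assumed)
WAFER_AREA = math.pi * 150 ** 2    # mm^2 of a 300 mm wafer (ignores edge loss)
D0 = 0.001                         # defects per mm^2 (assumed)

def cost_per_good_die(area_mm2):
    dies = WAFER_AREA / area_mm2            # candidate dies per wafer
    good = dies * math.exp(-area_mm2 * D0)  # Poisson yield: exp(-A * D0)
    return WAFER_COST / good

mono = cost_per_good_die(450)       # one big monolithic die
tiled = 4 * cost_per_good_die(120)  # roughly the same compute split into 4 tiles

print(f"450 mm^2 monolithic: ~${mono:.0f} per good die")
print(f"4 x 120 mm^2 tiles:  ~${tiled:.0f} in silicon, before packaging")
```

Per-tile yield improves a lot, but the total silicon bill barely shrinks, and advanced packaging adds cost on top; if the design answer is simply "more silicon", the savings evaporate.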
Yes, another reason why Intel probably has their hands tied in coming up with significant competition to RDNA3 (and Lovelace?) cards. They're all bottlenecked by the same factors and apparently will be throughout most of 2022.

No forgetti the allmighty substrates.
They're very much gold now.
> What I want is a nice cheap SoC for PC, not monster GPUs. Wonder if some contract with console makers prevents AMD from making one. >:/
IIRC this was mostly due to memory bandwidth limitations. Socketed APUs only use standard DIMMs, and 128-bit DDRx was never comparable to GDDRx on the wider buses we see on graphics cards.
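Quick numbers on that gap; peak bandwidth is bus width / 8 × data rate, and the configurations below are generic examples rather than any specific SKU:

```python
# Peak DRAM bandwidth in GB/s = bus width (bits) / 8 * data rate (GT/s).
# Example configurations only, not any specific SKU.

def bandwidth_gbs(bus_bits, data_rate_gtps):
    return bus_bits / 8 * data_rate_gtps

print(bandwidth_gbs(128, 3.2))   # dual-channel DDR4-3200: 51.2 GB/s
print(bandwidth_gbs(256, 14.0))  # 256-bit GDDR6 @ 14 GT/s: 448.0 GB/s
```

Nearly a 9× difference, which is why a beefy GPU behind a DIMM socket starves without something like a large on-package cache.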
> Oh well you know, now public!
So this is Shortcake, which you say is also being used as a pipe cleaner for the stacked chiplet designs we're going to see in Navi 31?
> I.e. I'm "worried" that we, the consumers, are getting progressively less of a performance upgrade per dollar with every GPU family iteration.
Welcome to Moore's Law being dead.
> They're probably promising QoQ and YoY record results that aren't obtainable unless they start milking their customers dry
OH HELL YEAH THEY ARE.
> That said, if RDNA3 isn't bringing more performance-per-dollar than RDNA2 at MSRP
Oh but it will.
> So this is Shortcake, which you say is also being used as a pipe cleaner for the stacked chiplet designs we're going to see in Navi 31?
Yes.
> Yes. Cores sitting on top of a stacked pile of SRAM for LLC.
Is that reversed from what we normally have seen in the past? Which is memory stacked on cores?
> Is that reversed from what we normally have seen in the past?
Had we?
> Which is memory stacked on cores?
Was test chips only iirc.
> Had we?
Right, I may have gotten that confused with the way HBM is set up.
SRAM vaults in anything mass market-ish are a first for now.
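For the curious, a first-order model of why a big stacked LLC is worth the packaging trouble: hits are served at SRAM speed and only misses pay for DRAM. All figures below are assumptions for illustration:

```python
# Toy model of why a big stacked SRAM LLC matters: requests that hit the
# cache are served at SRAM bandwidth, the rest fall through to DRAM.
# All numbers are illustrative assumptions.

def effective_bandwidth(dram_gbs, sram_gbs, hit_rate):
    """First-order average bandwidth seen by the cores."""
    return hit_rate * sram_gbs + (1.0 - hit_rate) * dram_gbs

DRAM = 512.0    # GB/s, e.g. a GDDR6 setup (assumed)
SRAM = 2000.0   # GB/s from on-package stacked SRAM (assumed)

for hit in (0.0, 0.5, 0.75):
    bw = effective_bandwidth(DRAM, SRAM, hit)
    print(f"LLC hit rate {hit:.0%}: ~{bw:.0f} GB/s effective")
```

The same logic read the other way is what makes narrow external buses viable: a 75% hit rate means only a quarter of the traffic ever touches DRAM.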