We can't just ignore the fact that xtor cost scaling is basically dead.
Which hurts GPUs a lot.
We're all so used to the dGPU duopoly cranking out >450mm^2 parts like it's no one's business, but that time is steadily coming to a close.
Yes. MCP GPUs will make per-tile costs sane, but the whole idea is throwing moar silicon at the problem, so we're back to square one.
Well... I'd buy this as a reason for a GPU having to cost $900, but it's an excuse I won't accept for >$2000, which is what you suggested before.
I'm not worried that the next 7900XT goes over $2000 for a "Titan-class" unobtainable halo product with both GPCs fully enabled. I'm worried that the next 7800XT goes over $1000 and the next 7800 goes to >$750.
I.e. I'm "worried" that we - the consumers - are getting progressively less performance upgrade per dollar on every GPU family iteration.
Yes, making high-end GPUs is getting super expensive; yes, the packaging must be super expensive, and the memory, and the mm^2 on TSMC's top-end nodes. But I doubt all of that makes these graphics cards cost >$600 to produce.
I wouldn't take those excuses for Nvidia, and I won't take them for AMD either.
Prices are climbing because AMD is on a roll, and they're probably promising QoQ and YoY record results that aren't obtainable unless they start milking their customers dry. Everyone needs to be an Apple nowadays, because infinite growth is something investors have gradually learned to expect from tech companies.
That's fine and all, and AMD exists to make as much money as possible. My point here is that they might be killing the golden goose this way. We're already seeing AMD officials acknowledging that the inability to buy dGPUs at MSRP for a long period of time is driving people away from PC gaming altogether (TBH, I'm one of those). How does that get better if they decide to hike the prices on their 2022 releases?
Sure, they won the PlayStation and Xbox designs, but obviously the margins they get from those are much lower than what they get from dGPUs.
If anything, winning the console designs should have been a catalyst to drive more customers towards their dGPUs due to having more devs optimizing on their architecture, not the other way around.
That said, if RDNA3 isn't bringing more performance per dollar than RDNA2 at MSRP, then how do they stay successful at all? The planet doesn't have an infinite number of whales.
It seems Nvidia learned that lesson during the costly Pascal -> Turing transition, but if what you're saying is true (and I understood it right), I think AMD wasn't paying attention.
No forgetti the almighty substrates.
They're practically gold now.
Yes, another reason why Intel probably has their hands tied for coming up with significant competition to RDNA3 (and Lovelace?) cards. They're all bottlenecked by the same factors and apparently will be throughout most of 2022.
What I want is a nice cheap SoC for the PC, not monster GPUs. I wonder if some contract with the console makers prevents AMD from making one. >:/
IIRC this was mostly due to memory bandwidth limitations. Socketed APUs only use standard DIMMs, and 128-bit DDRx was never comparable to GDDRx on the wider buses we see on graphics cards.
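For rough context, that bandwidth gap is easy to ballpark from transfer rate times bus width. The specific parts below are just illustrative picks (dual-channel DDR4-3200 vs. a typical 14 Gbps GDDR6 card with a 256-bit bus), not anything from a particular APU:

```python
def peak_bandwidth_gbs(transfer_rate_gtps: float, bus_width_bits: int) -> float:
    """Peak theoretical bandwidth in GB/s: transfers per second * bytes per transfer."""
    return transfer_rate_gtps * bus_width_bits / 8

# Dual-channel DDR4-3200: 3.2 GT/s on a 128-bit bus
ddr4 = peak_bandwidth_gbs(3.2, 128)    # 51.2 GB/s

# Mid-range graphics card: 14 Gbps GDDR6 on a 256-bit bus
gddr6 = peak_bandwidth_gbs(14.0, 256)  # 448.0 GB/s

print(f"{ddr4} GB/s vs {gddr6} GB/s -> {gddr6 / ddr4:.2f}x gap")
```

Nearly an order of magnitude, which is why an iGPU fed from standard DIMMs starves long before die area becomes the limit.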
Intel tried to go around that limitation with Crystalwell (after Apple pretty much demanded it, I believe) but without great results.
At some point AMD even put a couple of GDDR5 PHYs in Kaveri, but IIRC they were meant to be used with non-standard GDDR5M chips that were to be produced by Infineon (and that would probably have needed to be soldered onto specific motherboards), and that whole plan got discarded when Infineon went under.
Had we seen GDDR5M motherboards on the market, we could have gone a completely different way in terms of what to expect from a desktop APU.
Oh well, you know, now it's public!
So this is Shortcake, which you say is also being used as a pipe cleaner for the stacked chiplet designs we're going to see in Navi 31?