DegustatoR
Legend
They would never have gone with Samsung if the yields were bad to a point where the product couldn't even be produced. The internet is a hell of a fake news factory.
I'm not sure I subscribe to the "yields are bad!" theory.
The one they allegedly solved by downclocking/undervolting via driver?
Your logic fails there. By that logic the past 3 gens of Radeon cards have had yield issues.
Radeons haven't been crashing left and right at their default voltages and clocks, and AMD hasn't downclocked or undervolted them later on. What users do is their business.
GeForce RTX 30s, on the other hand, were crashing left and right when hitting around 2040-2050 MHz (which many did, at least on launch voltage/clock profiles), which led to NVIDIA adjusting their voltage and clock speed curves.
Firstly, how many cards were crashing? What's the percentage? If there are only 200 cards out in the wild, do you think the crashing percentage is the same as with 10,000 cards in the wild? Secondly, are you going on the record that 2 GHz was the default clock rate? Because nowhere was it ever stated that that was a normal frequency. That happened because of poor QA and a lack of communication between NVIDIA and AIBs:
https://www.guru3d.com/news-story/g...ely-due-to-poscap-and-mlcc-configuration.html
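The sample-size point boils down to rates versus raw counts. A quick back-of-the-envelope sketch, with purely hypothetical figures since no real report counts or install-base numbers were given in the thread:

```python
# All numbers are hypothetical; the thread gives no real figures.
reports = 50  # assumed number of crash reports

for cards_in_the_wild in (200, 10_000):
    rate = reports / cards_in_the_wild * 100
    print(f"{reports} reports out of {cards_in_the_wild} cards = {rate:.1f}% crashing")

# 50 / 200    -> 25.0%
# 50 / 10 000 -> 0.5%
```

The same number of complaints means a very different failure rate depending on how many cards had actually shipped at that point.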
Enough cards for NVIDIA to adjust their voltage/clock curves.
It's not that 2 GHz would be some sort of "default clock rate"; the only official clocks are base and Boost, which are nowhere near 2 GHz, but NVIDIA allows the cards to boost however high they can go given the limitations (voltage, heat, power).
With original voltage/clock curves, cards started crashing around 2040 or 2050 MHz, which many could reach with stock settings.
The caps could have been part of the fault, but the difference between what's supposedly the worst and the best cap configuration is a couple of tens of MHz or so (der8auer tested this by switching caps on the same card).
Also, none of the cards use POSCAPs; they're SP-Caps. The "POSCAP" term just stuck for whatever reason after the first reports used the wrong term.
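To make the base/Boost versus opportunistic-boost distinction concrete, here is a minimal sketch of the idea, assuming a toy voltage/frequency curve and a crude power budget. The points, the limits, and the "trim the top bins" fix are illustrative assumptions, not NVIDIA's actual curve data or boost algorithm:

```python
# Minimal sketch of opportunistic boosting along a voltage/frequency curve.
# The points, limits and the "trim the top bins" fix are illustrative
# assumptions, not real GA102 data and not NVIDIA's actual boost algorithm.

# (voltage in volts, frequency in MHz), sorted from lowest to highest bin.
vf_curve = [
    (0.80, 1700),
    (0.85, 1800),
    (0.90, 1900),
    (0.95, 1965),
    (1.00, 2010),
    (1.05, 2055),  # the region where launch cards reportedly crashed
]

def boost_clock(curve, power_headroom_w, watts_per_step=10):
    """Pick the highest curve point the (crude) power budget allows."""
    steps = min(len(curve) - 1, power_headroom_w // watts_per_step)
    return curve[steps]

# Official base/Boost clocks sit low on the curve, but a cool card with
# plenty of headroom keeps climbing well past 2 GHz:
print("launch curve:", boost_clock(vf_curve, power_headroom_w=60))

# The driver-side fix described above amounts to reshaping/trimming the top
# of the curve so cards no longer land in the unstable bins:
trimmed_curve = vf_curve[:-1]
print("adjusted curve:", boost_clock(trimmed_curve, power_headroom_w=60))
```

In this toy model the fix costs only the topmost bin, which also matches the observation that the cap difference was worth a couple of tens of MHz on a roughly 2 GHz clock, i.e. around one percent.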
It's speculation at this point, but if the 3080 20GB is a real thing, do you think that NVIDIA can EOL the 3090 24GB? If there is a $500-600 difference between the two, I wonder how the 3090 can still sell.
No-one is claiming Ampere yields are bad because users are undervolting the cards.
Good, so we agree that 2 GHz was not an official boost rate but rather something that happened due to insufficient testing because of a lack of time. That has nothing to do with yields, and it doesn't even have anything to do with what I wrote. You set up a strawman and refuted it.
The crashing is fixed now, but people still undervolt Ampere because the stock frequency/voltage is still past the best-efficiency point of the curve. The same thing happened with Polaris and Vega, but nobody claimed yields were bad at TSMC/GloFo because of that.
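The efficiency argument follows from the usual CMOS dynamic-power approximation, P ≈ C·V²·f: a small drop in voltage cuts power quadratically while costing only the frequency you give up. A quick worked example with made-up operating points (not measured Ampere figures):

```python
# Why undervolting near the top of the V/F curve pays off, using the standard
# dynamic-power approximation P ~ C * V^2 * f. The operating points are made
# up for illustration, not measured Ampere data.

def relative_power(voltage, freq_mhz, c=1.0):
    return c * voltage ** 2 * freq_mhz

stock       = (1.05, 2000)  # hypothetical stock operating point
undervolted = (0.90, 1900)  # hypothetical undervolted point

p_stock = relative_power(*stock)
p_uv = relative_power(*undervolted)

print(f"frequency loss: {(1 - undervolted[1] / stock[1]) * 100:.1f}%")
print(f"power saving:   {(1 - p_uv / p_stock) * 100:.1f}%")
# Roughly 5% lower clock for roughly 30% lower dynamic power in this toy case.
```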
NVIDIA had to adjust clock/voltage curves to prevent cards from crashing post-launch (crashing occurred even on Founders Editions, so the blame can't be shifted to AIBs), which could indicate a yield issue with high-enough-grade chips, forcing them to use worse-binned chips than they normally would.
I'm beginning to believe that by the time RTX 30 is readily available, it might have lost much of its initial appeal.
And why is that?