All cards were affected to some degree, whether they used MLCC capacitors or not. They came up with a zero-cost fix in software without making any noticeable adjustments to the voltage vs. frequency curve. The issue was not seen in their Linux drivers. It looks like the Windows driver was pushing the boosting behaviour a little too far. You design the software around the hardware, not the other way around.
It seems people who purchased the card (day one) also had very good samples. They pushed the card with no problems (using the Quadro driver). The main issue is the silicon. Nvidia had some very good samples for testing; that's why they thought they could push the card to this limit. Now in production they have silicon which clearly causes issues with the voltage curve. That's why you have good silicon on bad capacitors which runs without any failure, and good caps where the card is still crashing. It's all about the silicon lottery, and that Nvidia this time went to the limit of the silicon.
That means nothing. A crash to desktop usually means that the Windows kernel complained about a triggered watchdog and decided to kill the driver. Call it a lesson learned back from the times of Windows Vista, and a certain vendor's drivers being responsible for the majority of crashes blamed on Windows. A crashing card would not crash in Linux.
Serious note though: we are increasingly moving away from this because of how inefficient it is. See UE5; once this is mainstream, the fixed-function hardware is going to be largely unused compared to how much it was used this generation. But if you're benchmarking older titles, this matters a great deal.

Well, in some console forums, these two apparently don't matter anymore.
j/k
But what is the performance of the Quadro driver? We don't know! That's the issue!

It seems people who purchased the card (day one) also had very good samples. They pushed the card with no problems (using the Quadro driver).
@trinibwoy From what you hear from the experts, everybody is saying that we are heavily shader bound. That's why I was surprised that the real-world scaling wasn't as good as the data looked on paper.
Quick pixel counting says about 455-460mm^2
Oh, it's mentioned there? Must have missed it; pixel counting from an angled shot (even after correcting the angle somewhat) can easily be off.

Nah, you are overcounting. There's no way it's anything but GA104, and that's 392.5mm^2 according to the appendix in the GA102 whitepaper. 17.4 billion transistors.
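For anyone wondering how the pixel-counting estimate works: you measure the die and some feature of known physical size in the same (perspective-corrected) photo, then scale. A minimal sketch — every pixel value below is a made-up illustrative number, not a real measurement from the die shot:

```python
# Rough die-size estimate from a top-down die shot, given a reference
# feature of known physical length visible in the same image.
def die_area_mm2(die_px_w, die_px_h, ref_px, ref_mm):
    """Convert pixel measurements to mm^2 using a known reference length."""
    mm_per_px = ref_mm / ref_px
    return (die_px_w * mm_per_px) * (die_px_h * mm_per_px)

# Hypothetical numbers: die measures 620x610 px, and a reference
# feature of 31.0 mm spans 1000 px in the same shot.
area = die_area_mm2(620, 610, 1000, 31.0)
print(f"{area:.1f} mm^2")  # ~363 mm^2 for these made-up numbers
```

Any error in the reference length or in the perspective correction scales the result quadratically, which is why angled shots are so easy to over- or undercount.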
Hmmm the obvious answer is that Nvidia felt it had no choice but to push the envelope to fend off AMD. Clearly this decision was made a while ago given the new (and effective) cooler design that can handle a lot more watts.
Given that a 68-SM 3080 with 10GB GDDR6X pulls ~250W at 1850MHz, it'll be interesting to see how high the 220W 3070 boosts, considering its more svelte 46 SMs and 8GB GDDR6.
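One crude way to frame that question: assume core power scales roughly with active SM count, and with V^2·f (with V tracking f along the curve, so P ~ f^3). These are very rough assumptions, not a real power model, and the numbers are just the ones from this thread:

```python
# Back-of-envelope boost estimate under two loose assumptions:
# (1) power scales with SM count at the same clock,
# (2) P ~ f^3 once voltage rises along with frequency.
sm_3080, p_3080, f_3080 = 68, 250.0, 1850.0   # 3080 figures from the thread
sm_3070, p_budget = 46, 220.0                  # 3070 SM count and power limit

# Naive same-clock estimate: power proportional to SM count.
p_3070_same_clock = p_3080 * sm_3070 / sm_3080
print(f"~{p_3070_same_clock:.0f} W at 1850 MHz")  # ~169 W

# Spend the leftover budget on clocks, assuming P ~ f^3.
f_est = f_3080 * (p_budget / p_3070_same_clock) ** (1 / 3)
print(f"~{f_est:.0f} MHz sustainable boost, very roughly")
```

This ignores memory power (GDDR6 vs. GDDR6X draw differently), silicon variation, and where the card actually sits on its V/F curve, so treat it as a ballpark at best.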
We don't know what Nvidia knows. They might have sacrificed efficiency for higher margins, betting it all on the performance leap over the previous gen.
To me it looks more like a really bad silicon lottery and Nvidia playing it safe. We saw that from AMD in the past, though the range of voltages on Ampere seems way higher than what I recall for, e.g., Fiji.
Are we seeing large variances in stock voltages on retail cards?
Sorry, I didn't mean that. Some outlets are seeing around a 10% performance decrease when undervolting to 0.8V, like for example the hardwareluxx one above (still pretty good results), while others are seeing less than a 1% decrease, and there are even users reporting higher performance when undervolting mildly. Just the fact that you can undervolt by more than 20% and still keep it in the same performance range is crazy. Like I said, I don't remember such a massive range for such a small impact from any previous cards, including the ones that were notorious for it, like Fury.
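The outsized power savings from a mild undervolt make sense if you assume dynamic power scales roughly with V^2 at a fixed clock. A quick sketch (the 1.0 V stock value is just an illustrative number, not what these cards actually run):

```python
# Dynamic power is roughly proportional to V^2 at a fixed frequency,
# so a ~20% undervolt cuts dynamic power by far more than 20%.
v_stock, v_uv = 1.00, 0.80   # volts; stock value is a hypothetical example
savings = 1 - (v_uv / v_stock) ** 2
print(f"~{savings:.0%} dynamic power reduction")  # ~36%
```

That reduced power draw in turn leaves more thermal/power headroom for boosting, which is one plausible reason some users see performance go *up* with a mild undervolt.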
EDIT: And of course I don't believe Nvidia engineers are just idiots who set the voltage 10-20% higher than needed just for the sake of it. We might not be seeing it, but there are likely many dies which take a much bigger performance hit when undervolted.