Carry on then, my memory was wrong!

Short-term peaks were at 430-460 watts (depending on the card) with the 2080 Ti as well; given its lower TDP rating, relatively speaking, it did not peak substantially lower.
Igor's Lab testing showed spikes of over 500 W at <1 ms. Could it be that the firmware or drivers are missing some hard limiter that would stop the card from going that high? If memory serves, Turing and Pascal wouldn't peak that high relative to their averages.
Update: after further investigation of the components, what are referred to as POSCAPs everywhere are, in fact, SP-CAPs (a subtly different component responsible for the same functionality, often named and treated as if the two were the same). None of the RTX 3000 cards actually use POSCAPs. Igor Wallossek made a nice diagram, have a peek:
https://www.guru3d.com/news-story/g...ely-due-to-poscap-and-mlcc-configuration.html

The MLCCs (in green) are better at filtering high frequencies, so video cards with more MLCCs experience fewer problems than cards with SP-CAPs (red). That is also why the small, more-difficult-to-solder MLCCs are a lot more expensive. Some manufacturers have opted to use fewer or no MLCCs at all, and therein lies the problem. Manufacturers can choose this themselves; Nvidia's own Founders Edition uses four SP-CAPs. Currently the AIB partners are keeping very quiet, but if all this information turns out to be the correct assumption, then AIBs will have to revise their designs and release boards with a fix in place. For the current boards out there, a quick solution would be to lower the boost frequency by perhaps 50 MHz, averting the issue.
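To see why the MLCC count matters for high-frequency filtering, here's a rough sketch comparing a single polymer cap against a parallel MLCC bank, using the usual series R-L-C model of a real capacitor. The ESR/ESL numbers are illustrative assumptions, not datasheet values for any actual board:

```python
import math

def cap_impedance(f, c, esr, esl):
    # |Z| of a series R-L-C model of a real capacitor at frequency f
    w = 2 * math.pi * f
    reactance = w * esl - 1.0 / (w * c)
    return math.sqrt(esr ** 2 + reactance ** 2)

f = 10e6  # 10 MHz transient content, well above the VRM switching frequency

# Assumed values: one 470 uF SP-CAP vs ten 47 uF MLCCs in parallel
# (same total capacitance; paralleling divides ESR and ESL by the count).
z_spcap = cap_impedance(f, 470e-6, 6e-3, 3e-9)
z_mlcc_bank = cap_impedance(f, 10 * 47e-6, 2e-3 / 10, 0.5e-9 / 10)

print(f"SP-CAP: {z_spcap * 1e3:.1f} mOhm, MLCC bank: {z_mlcc_bank * 1e3:.2f} mOhm")
```

Under these assumed values the MLCC bank shows well over an order of magnitude lower impedance at 10 MHz, which is the intuition behind "more MLCCs filter high frequencies better"; the dominant term at these frequencies is the parasitic inductance (ESL), not the capacitance itself.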
...
In short: specific implementations with a POSCAP/SP-CAP design are suspected of creating instability specifically at particularly high boost clocks. That manifests itself as in-game driver crashes and the dreaded CTD (crash to desktop). The fix: reconfigure the POSCAPs/SP-CAPs and add MLCCs.
...
I have yet to experience even one crash on any of my samples at hand, and that is the honest truth. Currently we're also seeing reports of ASUS cards (using 100% MLCCs) and Founders Edition cards with similar CTD behavior, though that could be a placebo effect.
So I suppose this means Asus models can't be ruled out either, and instability reports for them are real.
Fully stable after switching to single-rail, or did you still need to lower clocks / mess with P-states?

I have a 3080 FE I have been experimenting with. The card was very unstable until I switched my power supply (Corsair HX1000) from multi-rail 12V to single-rail 12V, so the very high short-term power draw seems to be an issue. Max boost at stock looks like it is 2100 MHz (as reported by nvidia-smi), but I have never seen over 2055 reported. Capping max boost below 2000 MHz seems to be stable. Also, locking the card at the max P-state seems to be stable (using nvidia-smi -lgc 2100). Idle power draw (TBP) goes from about 28 watts to 59 watts when locking the max P-state.
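For anyone wanting to try the same clock experiments, a hedged sketch of the commands involved (the 2100 value is the poster's; the 1950/210 range is my own assumption for "cap below 2000 MHz" and should be checked against your card's supported clock range; both commands need admin/root rights):

```shell
# Lock the graphics clock; a single value pins min and max (what the poster did)
nvidia-smi -lgc 2100

# Alternatively, cap max boost below 2000 MHz while still allowing idle clocks
# (min,max form; the 210 MHz idle floor is an assumed value)
nvidia-smi -lgc 210,1950

# Restore default clock behavior when done experimenting
nvidia-smi -rgc
```

Note the trade-off the poster reported: pinning the clock with a single `-lgc` value prevents the card from dropping to low-power states, which is why idle TBP roughly doubled.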
Not just the issue of recalls, which may not happen, but also the risk of a driver-side patch slashing performance by an unknown amount, which would apply even to those units that are running (mostly) stable.

At this point, it seems like the option with the least headaches is to return your RTX 30x0 card while it's still within the return window? Otherwise you're at the mercy of whatever the manufacturers decide to do later on with recalls?
Could the RTX 3080, based upon GA102, have been a placeholder for GA103?
How many amps does the 12V rail deliver?
Right now it's not 100% clear what is causing the instability, so don't change your PSU just yet, imo...
Have you tried GPU-Z to measure the power draw?

Unfortunately, I don't have the equipment to measure the peak power draw, but the evidence leads me to believe it is higher on the 3080 than on any previous GPU I have run on this PSU. Igor's Lab has some peak power draw measurements in their 3080 review, but it looks like they only measured under a sustained load (peak was 486 W at <1 ms).
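A quick back-of-the-envelope check of why the multi-rail to single-rail switch could matter, using the 486 W figure above and an assumed per-rail over-current-protection limit (the real HX1000 per-rail limit may differ, so treat the threshold as a placeholder):

```python
# Transient current implied by the <1 ms peak, vs an assumed per-rail
# over-current-protection (OCP) threshold on a multi-rail PSU.
spike_w = 486.0        # peak power from the review quoted above
rail_v = 12.0
ocp_limit_a = 40.0     # assumed per-rail OCP limit; check your PSU's spec

spike_a = spike_w / rail_v          # current drawn during the spike
trips_ocp = spike_a > ocp_limit_a   # would this exceed the assumed limit?
print(f"{spike_a:.1f} A, trips OCP: {trips_ocp}")
```

Under these assumptions a 486 W spike pulls about 40.5 A, just over a 40 A per-rail limit, while a single-rail configuration pools the full capacity of the supply on one 12V rail, which would explain why switching modes stopped the shutdowns.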
Let me guess, it'll require a 50 gigawatt power supply?