PCIe 12VHPWR and 12V-2x6 power connector issues

Unfortunately not. That text refers to the picture of a Molex-style connector from a linked article.
Yeah, I think you're right, actually. I have to say Igor's writing is often confusing to me. So he posts that picture and then writes 'So geht 12VHPWR richtig!' ('This is how to do 12VHPWR right!') under it, while it isn't actually related to that picture at all. Heh.

I don't think the picture is Cablemod's, either. It's just another shot of the further-disassembled Nvidia adapter, specifically his poor sample with the 150V-rated cables, not the 300V ones seen elsewhere, like in Gamers Nexus' response video at the time.
 
Nice catch indeed. However, that will probably be misunderstood as "if my terminal isn't recessed, then the cable is good".

A recessed terminal is only one way it can have failed; it can also:
  • Lose contact pressure, because the female part is effectively supposed to be a spring, and that spring can wear out
  • Wear down the protruding, designated contact area (bumps or a lid) on the inside of the female terminal
  • Develop unnoticeable corrosion inside the female terminal
  • Develop unnoticeable corrosion on the male pin
  • Have mismatching geometry between the male and female terminal for any reason (thickness, aspect ratio, rotation, tempering, ovality, etc.)
And all of those need to be in better condition than the Molex specification requires in order to work...

A recessed terminal like that, by contrast, is only an issue when it's bad enough that the connector doesn't go on at all.
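All of these degradation modes end up as the same symptom: higher contact resistance, and the resulting heating grows with the square of the current. A minimal sketch in Python - the resistance values are illustrative assumptions, not measured figures:

```python
def contact_power_w(current_a: float, contact_resistance_ohm: float) -> float:
    """I^2 * R dissipation at a single terminal contact."""
    return current_a ** 2 * contact_resistance_ohm

# Illustrative values: a healthy crimp contact might sit in the low
# single-digit milliohm range; a worn or corroded one can be many times that.
healthy = contact_power_w(9.5, 0.002)   # ~0.18 W at the 9.5 A pin rating
degraded = contact_power_w(9.5, 0.020)  # ~1.8 W, concentrated in the housing

print(f"healthy:  {healthy:.2f} W")
print(f"degraded: {degraded:.2f} W")
```

A tenfold rise in contact resistance means a tenfold rise in heat at that one contact, which is why a single bad terminal can char while the rest of the connector looks fine.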

Also, he kept going on about the GND terminal - and the answer is: no, it doesn't really matter, even though it really should. But that's only because Nvidia's GPUs completely mess up how the return current is distributed across all the different available connections to ground. A lot of the current flowing back via GND goes through the PCIe slot (significantly more than the expected 6.25A! PCI-SIG has only regulated the 12V draw, though, not the return on GND...) and through the case via the bracket.
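That split of the return current follows directly from the current-divider rule: parallel ground paths carry current in proportion to their conductance, so a relatively low-resistance slot or bracket path can end up above its rating. A hedged sketch with made-up path resistances:

```python
def divide_current(total_a: float, path_resistances_ohm: list[float]) -> list[float]:
    """Split a return current across parallel paths in proportion to conductance."""
    conductances = [1.0 / r for r in path_resistances_ohm]
    g_total = sum(conductances)
    return [total_a * g / g_total for g in conductances]

# Hypothetical path resistances (not measured): connector GND bundle,
# PCIe slot ground fingers, bracket-to-case contact.
currents = divide_current(40.0, [0.004, 0.010, 0.030])
print([f"{i:.1f} A" for i in currents])
```

With these assumed values the slot path carries roughly 10A - well above the 6.25A it is specified for - purely because of its relative resistance, with no "decision" made by the GPU at all.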

Also: one bad pin isn't anywhere near enough to fail. He's not really at immediate risk yet unless he loses at least two more 12V pins.

His PMD2 was correcting the issue before it got to the GPU. That's a good catch.
There's a good chance that in his case the PMD2 is partially to blame... If the geometry of the pins on a socket isn't up to spec, those pins are very likely to damage plugs. It's enough for a pin to be just slightly bent or twisted, not correctly beveled, of the wrong aspect ratio, or ever so slightly too wide or too long, and it will damage the plug on the insertion attempt. Pushing the nylon case into the socket takes almost more force than it takes to pop a terminal out of the case...

That's not to say Corsair necessarily met the specifications regarding the forces the female terminal needs to withstand, either...

But what he did next - pairing an already used plug with another socket - is something you should also really avoid in the current situation.
 

He's not too happy with NVIDIA lowering the safety margin and removing the safety features of the Ampere generation. They created the perfect recipe for "user error".
 

Wish he'd posted anything new about the unbalanced wire he had demonstrated. This video was more about responding to people not being able to replicate his findings and touting his mission statement. Not much in the way of useful content.
 
I was hoping that's what this video would be about. We know how this is happening, but no conclusive evidence as to why the current is becoming unbalanced. There are many possible reasons. Which is why this should be accounted for in the product design, but that doesn't help us now.

BTW is there any reason why this couldn't be dealt with on the PSU side? It would complicate things for sure but damn it would be a selling point. I guess if the conductors are bridged inside the connectors it might not help much, but it's unclear to me if that's normal for a 12VHPWR or 12V-2x6 cable.
 
It'd be real tough for the PSU to actively fix the issue - it could certainly sense the current draw down all 12 wires and add resistance in series to keep everything balanced (burning off the excess as heat in the process).
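To make that trade-off concrete, here is a sketch of the "balance by added series resistance" idea - the per-wire resistances are hypothetical, and the point is how much heat the padding costs:

```python
def balance_by_padding(wire_res_ohm: list[float], total_a: float):
    """Pad every wire up to the worst wire's resistance so the current splits
    evenly. Returns (added resistance per wire, total extra heat in watts)."""
    r_max = max(wire_res_ohm)
    added = [r_max - r for r in wire_res_ohm]
    per_wire_a = total_a / len(wire_res_ohm)
    extra_heat_w = sum(per_wire_a ** 2 * r for r in added)
    return added, extra_heat_w

# Hypothetical wire+contact resistances for the six 12 V conductors of a
# connector carrying 50 A (~600 W); one wire is assumed noticeably worse.
added, heat = balance_by_padding([0.008, 0.009, 0.008, 0.015, 0.008, 0.009], 50.0)
print(added, f"{heat:.2f} W wasted")
```

Even in this mild example a couple of watts get burned inside the PSU just to keep the split even, and the padding would have to track the cable's condition as it ages.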

The problem comes with cards like the 3090 Ti that are already doing active balancing on the GPU side, and what happens if the two start to fight each other. Or a lower-powered card built with an old-school static power-partitioning scheme, where, say, four 12V pins feed the VCore VRM and the other two feed everything else - or, even worse, a low-power card that doesn't draw power from all the pins to begin with, in a hypothetical future where the 6- and 8-pin PCIe connectors disappear entirely.

It would be relatively straightforward, though, for a PSU - especially a high-end one with management software, like Corsair's high-end units - to sense out-of-spec current draw on each of the pins and sound an alarm, power the PC off, etc. Exactly like Asus' Astral 5090 does, just on the other end of the cable, and that might be the best solution. At least it would eliminate cable, GPU, and PSU damage and the risk of fire, although as time goes on, I suspect that alarm would get tripped an awful lot as all the parts age. I can already envision all the posts from less technically sophisticated people screaming into the void about how their fancy power supply keeps beeping and turning off while their friend's $75 bargain-bin special works 'just fine' with the same components. It's the right thing to do, but I can easily see how difficult that would be for that PSU manufacturer's support and warranty department, as well as their brand identity and reputation online.
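The monitor-and-shutdown approach itself is simple enough to sketch. Assuming per-pin current sensing and the commonly cited 9.5A terminal rating (the warning threshold below is a made-up number):

```python
PIN_LIMIT_A = 9.5     # per-terminal rating commonly cited for 12V-2x6
WARN_FRACTION = 0.9   # hypothetical: warn before tripping

def check_pins(currents_a: list[float]) -> str:
    """Classify one sample of the six 12 V pin currents."""
    worst = max(currents_a)
    if worst > PIN_LIMIT_A:
        return "trip"   # cut the output, like an alarm-then-shutdown PSU would
    if worst > WARN_FRACTION * PIN_LIMIT_A:
        return "warn"   # sound the alarm, log the event
    return "ok"

print(check_pins([6.2, 6.3, 6.1, 6.4, 6.2, 6.3]))   # balanced draw
print(check_pins([2.1, 2.0, 2.2, 11.8, 2.1, 2.0]))  # one pin hogging the load
```

Note that this only detects the imbalance - it can't correct it, which is exactly why it shouldn't conflict with whatever balancing the GPU does.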
 
Any active load balancing on the PSU side seems a poor idea for the reasons you mention, but the latter solution you described shouldn't interfere with any protections or balancing on the GPU side. The PSU could sense if a single pin is drawing well over its rated capacity and shut itself off. There could be an alarm that sounds before the shutdown to let the user know what is happening. I'm just repeating you but it seems a good enough idea to repeat :)
 
Load balance is ok in my case. All wires are far from 9.5A.

GPU: Palit RTX 4090 GameRock
Test load: FurMark, factory clocks
PSU: Seasonic SS-1000XP
12VHPWR cable: CableMod Basics B-Series 12VHPWR StealthSense PCI-e Cable for be quiet! (Black, 16-pin to Dual 12-pin, 60cm)
Clamp meter: UNI-T UT210D
In use: Since December, 2022
Condition: Functional, no burn marks, no issues.

[Images: FurMark screenshot; clamp meter reading of the 12VHPWR cable's 12V total current; clamp meter reading of the 12VHPWR cable's GND total current]
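Measurements like these can be sanity-checked in a few lines: the per-wire currents should add up to the clamp's total, and none should come anywhere near the 9.5A rating. The per-wire values below are illustrative assumptions, not the readings from this post:

```python
# Illustrative per-wire readings for a roughly 450 W load, not measured data.
wire_currents_a = [6.1, 6.3, 6.0, 6.4, 6.2, 6.1]

total_a = sum(wire_currents_a)
ideal_a = total_a / len(wire_currents_a)
worst_dev = max(abs(i - ideal_a) for i in wire_currents_a)

print(f"total: {total_a:.1f} A, ideal per wire: {ideal_a:.2f} A")
print(f"worst deviation: {worst_dev:.2f} A, "
      f"headroom to 9.5 A: {9.5 - max(wire_currents_a):.2f} A")
```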
 
In a lengthy discussion, Igor from Igor's Lab also pointed out that the 4090 and 5090 are dumping a lot of high-frequency and, above all, high-amplitude noise directly into the 12V-2x6 connector. There is a filter on the GPU that eliminates noise >= 40kHz (so they do avoid the skin effect, which would cause a horrible rise in resistance), but everything below that goes completely unfiltered into the wire. The effects of the remaining ripple currents appear to account for as much as 15% (a 4-5K difference out of the 30K of expected heating under full load) of the total losses in the connector and wire, eating yet further into the non-existent safety margins on that connector.

Effectively, that means simply measuring DC current will already lead you to underestimate the actual load on the connector. This also appears to be the most plausible explanation for why the connectors fail almost exclusively on the GPU side, despite the safety margins on the other end not being much different.

It's kind of dubious whether that explanation and those measurements are sound, though.
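Whether or not the measurement holds up, the underlying physics claim is just that conductor heating follows the RMS current, not the DC average: I_rms^2 = I_dc^2 + I_ripple^2. A quick check of what ripple magnitude the claimed 15% extra conductor loss would imply (the 8A wire current is an assumed example):

```python
import math

def extra_loss_fraction(i_dc_a: float, i_ripple_rms_a: float) -> float:
    """Fractional increase in I^2*R loss from ripple riding on a DC current."""
    return (i_dc_a ** 2 + i_ripple_rms_a ** 2) / i_dc_a ** 2 - 1.0

# Back out the ripple that a 15% extra conductor loss would imply on an
# assumed 8 A per-wire DC current.
ripple = math.sqrt(0.15) * 8.0
print(f"{ripple:.2f} A RMS ripple -> "
      f"{extra_loss_fraction(8.0, ripple) * 100:.0f}% extra loss")
```

So the 15% figure would require roughly 3A RMS of ripple per wire - a large but not absurd number, which is why verifying the clamp measurement matters so much.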
 
This also appears to be the most plausible explanation for why the connectors fail almost exclusively on the GPU side, despite the safety margins on the other end not being much different.

Any noise at one end of a short wire is going to propagate instantly to the other end. That can’t explain why one end would fail more than the other.

Unless Igor has his MSEE he should stop where his skill set ends. High frequency noise in a 12v DC circuit in no way has enough amplitude to contribute 80 watts of power dissipation.
 
High frequency noise in a 12v DC circuit in no way has enough amplitude to contribute 80 watts of power dissipation.
Not 15% of the total dissipation - only of the losses up to the connector... But yes, I don't trust the measurement either. He claimed to have measured the high-frequency noise by putting the current clamp as close to the GPU as possible, so he may have picked up a lot of noise that wasn't even on the wire. He also acknowledged that he didn't have any other suitable equipment at hand for verifying the measurements taken with the current clamp. And his experiment of inserting an additional low-pass filter with a much lower cutoff frequency between the cable and the GPU may have reduced the temperature of the socket on the GPU, as well as the measured noise, for any number of other reasons - be it just by shifting the location of the voltage drop to another segment, or by reducing the contact resistance thanks to a better-fitting pairing.

Edit: double-checked. His current clamp has an almost linear output in the relevant frequency range. But it's still easily affected by exterior magnetic fields.
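For reference, the corner frequency of the kind of added low-pass stage he experimented with follows the standard first-order formula f_c = 1/(2*pi*R*C); the component values below are purely illustrative, not those of his filter:

```python
import math

def rc_cutoff_hz(r_ohm: float, c_farad: float) -> float:
    """-3 dB corner frequency of a first-order RC low-pass."""
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

# Illustrative values only: a few milliohms of series resistance combined with
# a large bulk capacitance puts the corner well below the GPU's 40 kHz filter.
print(f"{rc_cutoff_hz(0.005, 0.01):.0f} Hz")  # 5 mOhm with 10 mF
```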
 