> And as one usually can't do a negative proof (that's a general thing), the only way to show something actually will cause problems is to demonstrate them and try to narrow down the circumstances under which this will occur.

Well, there is such a thing as physics, and the properties of passing current through various metals are generally well understood. But yes, this is the precise reason why specifications exist and why they are very often extremely conservative.
Well, the specs were put together by the very companies that make and sell the products and systems; they know what is best to keep their business stable from a warranty standpoint. It's all about the money in the end.
You are assuming that no one bends the rules, which is exactly what AMD is doing in this case.
There have been plenty of cases where recalls happened because products were buggy. The Pentium FDIV bug and Nvidia's underfill problem both led to products being recalled or replaced.
I expect motherboard makers will not indemnify AMD and may in fact state that the warranty is void if a non-compliant PCI-e card is installed.
On average we saw total system power consumption of 437 watts when gaming. This was up from the 235 watts of the single card, and 240 watts of the GTX 1070. A lot of gamers couldn’t give two stuffs about how much power their rig uses when gaming, but I think it’s an important factor to consider that the cards will use 85% more power than the 1070 system.
Now a lot of you will be interested in how these cards fared against the GTX 1070 in particular. With two RX 480s, performance was on average 14% faster than our single-card results with the 1070, whereas the single 480 was 35% slower. Out of interest, if you take away the games where performance was jittery, this average didn’t change. So I suppose for around the same money you can get slightly better performance from this pair of AMD cards; however, keep in mind there are some major titles where you’ll be stuck with single-card performance, and some others where the gameplay is far from smooth.
Some of the titles mentioned, where CrossFire is flat-out broken or jittery, are very popular games, and if it came down to choosing between the slightly faster overall dual 480s and the reliable and consistent 1070, then I’d honestly be leaning towards the single-GPU option, as I always have. I just like to be able to KNOW that my expensive hardware is going to work well with virtually every game.
> Wasn't Tom's review saying something along the lines of 'it's so bad we're afraid our high-end mobo may blow up, so we won't risk it'? Well that's great, get a 30 EUR mobo and test it. If it blows up you just uncovered something equivalent to VW emission specs cheating in GPU space. Surely a massive recall would follow. Might even destroy AMD.

Or just take their thermal imaging camera, point it at the mobo and see if any of the traces are glowing. We are after all arguing the merits of moving enough current to liquify copper or something nearby.
> How does the distribution of load between different power inputs work? Do they have, say, 2 VRMs on one and 2 on the other?

Hard to know without a proper diagram for the card. There could be a common plane biased with resistors and diodes, VRMs tied to the different rails, or a combination where one or more is shared. In theory someone with a card could test the input voltage of each VRM to likely figure it out. VRMs are basically switching power supplies.
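For illustration, here is a rough sketch of the arithmetic, assuming a hypothetical six-phase VRM with three phases fed from the slot's 12V and three from the 6-pin connector, and an assumed ~90% conversion efficiency (none of these numbers are confirmed RX 480 figures):

```python
# Hypothetical example: estimate per-rail 12V draw for a given VRM phase split.
# The phase counts, 150 W load and 90% efficiency below are assumptions, not measured data.

def rail_draw(gpu_power_w, phases_on_slot, phases_on_6pin, vrm_efficiency=0.9):
    """Split the GPU's 12V input power across the two sources in proportion to phase count."""
    input_power = gpu_power_w / vrm_efficiency          # power drawn at the 12V input side
    total_phases = phases_on_slot + phases_on_6pin
    slot_w = input_power * phases_on_slot / total_phases
    six_pin_w = input_power * phases_on_6pin / total_phases
    return slot_w, six_pin_w

slot_w, six_pin_w = rail_draw(gpu_power_w=150, phases_on_slot=3, phases_on_6pin=3)
print(f"slot 12V: {slot_w:.0f} W (spec limit 66 W), 6-pin: {six_pin_w:.0f} W (spec limit 75 W)")
# With an even 3/3 split and ~150 W of GPU power, the slot lands around 83 W, which is
# in the same ballpark as the ~82 W the reviews measured at the slot.
```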
> Actually, the statement that it can lead to blowing up mainboards is very far from an established fact. It's pure conjecture at this point. If someone wants to show it is a real danger, he has to test it, preferably on the cheapest, lowest-quality mainboard he can find.

If anything it will burn out one of the traces or melt some plastic. They would need a high-voltage line run to their computer to really blow it up.
> This is important not because it would happen but because it can happen. Same thing as why you can't use your cellphone in a plane.

Like taking an old analog phone on an aircraft to mess up cockpit communication or analog landing guidance systems? Maybe in some third world nations.
> Just asking... but is AMD's 14nm process 15% better than Nvidia's 16nm process, as the numbers suggest?

Different, not necessarily better. 14nm is the low-power variant of what is basically the same thing.
> The over-spec PCIE power use seems pretty substantiated, but most people are focusing on anticipated dramatic consequences like blown motherboards. More relevant is understanding why this situation occurred. Some possibilities:
> 1. AMD realized it was over spec, hid it from PCIE qualification, and decided not to fix it
> 2. AMD did not realize it was over spec, PCIE qualification missed it, and only reviewers discovered it
> 3. After manufacturing began, AMD realized it was over spec, is working on a fix, but shipped the first batch of out-of-spec stock anyway
> 4. AMD was in spec, but a last-minute BIOS change to increase clocks/voltages pushed its power use over spec and nobody caught the PCIE consequences
> Each possibility has its own interpretation and consequences. AMD and its engineers are skilled professionals, so I would place my bet on #4 instead of an engineering failure like #1-3.

Not sure #2 is something you can't miss deliberately. #3 is possible, though it might also be a shoddy component. #4, while possible, seems unlikely, as the board should have been equal or biased towards the 6-pin.
> What does the spec actually say? Have any mobo makers said anything? You'd think they'd be the most worried, since they are the ones who have to service the RMA if a board does fail.

There is a pretty old spec for the electromechanical design where that 75W (as the minimum requirement?) is stated. But the PCIe spec actually includes a slot capabilities register which should reflect, well, the capabilities of each slot in the system (and is probably/hopefully set in a platform-specific way by the BIOS). Apparently this capabilities register includes a value for the "slot power limit", with a range up to 240W in 1W steps and then 250W, 275W, 300W, and even some reserved values for above 300W. It would be interesting to check how this is configured on usual mainboards. As I understand the spec, a card that wants to use more than the form factor spec (75W for PEG) has to limit its consumption to the programmed value, i.e. it is allowed to use max[form_factor_spec, slot_power_limit]. I would guess the very high values are used for the MXM-like modules for the Tesla cards (where 250+W are supplied over the [non-standard] slots).
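As a sketch of how one could check that, here is a minimal decoder for the slot power limit, assuming the Slot Capabilities layout from the PCIe 3.0 base spec (Slot Power Limit Value in bits 14:7, Slot Power Limit Scale in bits 16:15, with the alternative 250/275/300W encodings for values of 0xF0 and up at the 1.0x scale). On Linux, `lspci -vv` reports the same field in the `SltCap` line of each root port, which is probably the quickest way to see what mainboards actually program. The raw register value below is made up for illustration.

```python
# Decode the Slot Power Limit from a raw 32-bit PCIe Slot Capabilities value.
# Layout assumed per the PCIe 3.0 base spec: value in bits 14:7, scale in bits 16:15.

def slot_power_limit_watts(slot_cap: int) -> float:
    value = (slot_cap >> 7) & 0xFF      # Slot Power Limit Value
    scale = (slot_cap >> 15) & 0x3      # Slot Power Limit Scale: 0=1.0x, 1=0.1x, 2=0.01x, 3=0.001x
    if scale == 0 and value >= 0xF0:
        # Alternative encodings for high-power slots; reserved (>300 W) values fall through as NaN.
        return {0xF0: 250.0, 0xF1: 275.0, 0xF2: 300.0}.get(value, float("nan"))
    return value * (1.0, 0.1, 0.01, 0.001)[scale]

raw = 75 << 7                           # made-up example: value = 75, scale = 1.0x
print(slot_power_limit_watts(raw))      # -> 75.0
```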
> What does the spec actually say?

The official PCIe specification says that an x16 graphics card can consume a maximum of 9.9 watts from the 3.3V slot supply, a maximum of 66 watts from the slot 12V supply, and a maximum of 75W from both combined. Tom's Hardware measurements showed a 1-minute in-game average of 82 watts from the 12V supply, with frequent transient peaks of over 100 watts. The 3.3V draw stayed in spec.
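Converted into current, which is what the slot's 12V pins and the board's traces actually have to carry, those figures look like this (a trivial calculation, shown only to put the overshoot into perspective):

```python
# Current through the slot's 12V pins for the power figures quoted above (nominal 12V).
rail_v = 12.0
for label, watts in [("spec limit", 66), ("Tom's 1-minute gaming average", 82), ("transient peaks", 100)]:
    print(f"{label}: {watts} W / {rail_v:.0f} V = {watts / rail_v:.1f} A")
# spec limit: 5.5 A, average: ~6.8 A, peaks: 8.3 A and above --
# roughly 25-50% more current than the slot's 12V contacts are specified for.
```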
PCI-E only specifies a top ceiling of 75W at boot time, after which a series of negotiations between the card and the mobo determines how much power that specific slot will use (up to 300 watts I think) for the rest of the session. Motherboards will not burn.
Who's right?

"A standard height x16 add-in card intended for server I/O applications must limit its power dissipation to 25 W. A standard height x16 add-in card intended for graphics applications must, at initial power-up, not exceed 25 W of power dissipation, until configured as a high power device, at which time it must not exceed 75 W of power dissipation."
"The 75 W maximum can be drawn via the combination of +12V and +3.3V rails, but each rail draw is limited as defined in Table 4-1, and the sum of the draw on the two rails cannot exceed 75 W."
> Of course. But stepping a bit outside of some specs doesn't automatically mean that there will be trouble. And the question was, will there be some problems?

The problem with ratings is that one has to make assumptions, because there are several design options and implementations a manufacturer can go with, anywhere from 6A up to high-current solutions around 13A.
We all know the 6-pin PCIe plugs are good for only 75W according to spec. 8-pin plugs are good for 150W, even though the number of 12V conductors is exactly the same. Forgetting the PCIe or ATX spec for a moment and looking just at the ratings of the actual plugs, one learns it shouldn't be much of a problem to supply (way) more than 200W through a 6-pin plug (if the power supply is built to deliver that much). So can we expect problems of burnt 6-pin plugs if a graphics card draws more than 75W through that plug? Very likely not. And the same can very well be true for the delivery through the PCIe slot. I agree maybe one shouldn't try to build a 3- or 4-way CrossFire system (and overclock the GPUs on top of it) and expect no problems, as it may strain the power delivery on the board. But as someone has explained already, it may even work without damaging the board (as the card starts to draw more through the 6-pin plugs). That is also just conjecture, though. We just don't know at this point.
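As a rough sanity check on the plug-rating argument, here is the arithmetic, assuming three 12V contacts in a 6-pin plug and the per-contact ratings mentioned a couple of posts up (6A for a basic implementation, about 13A for high-current terminals); the real limit depends on the actual terminals, wire gauge and allowed temperature rise:

```python
# Back-of-the-envelope capacity of a 6-pin PCIe power plug, ignoring the 75 W spec label.
# The 6 A / 13 A contact ratings are the figures from the discussion above, not a datasheet lookup.
rail_v = 12.0
contacts_12v = 3
for label, amps_per_contact in [("basic 6 A contacts", 6.0), ("high-current 13 A contacts", 13.0)]:
    watts = rail_v * contacts_12v * amps_per_contact
    print(f"{label}: {contacts_12v} x {amps_per_contact:.0f} A x {rail_v:.0f} V = {watts:.0f} W")
# Even the conservative case lands above 200 W, which is why a card pulling somewhat
# more than 75 W through the plug is unlikely to burn it.
```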
> PCI-E only specifies a top ceiling of 75W at boot time, after which a series of negotiations between the card and the mobo determines how much power that specific slot will use (up to 300 watts I think) for the rest of the session.

Either this is a misinterpretation, or it has literally NEVER come up before ever, in any discussion here on B3D that I've seen, or in any hardware website article or GPU review.