AMD: Speculation, Rumors, and Discussion (Archive)

Discussion in 'Architecture and Products' started by iMacmatician, Mar 30, 2015.

Thread Status:
Not open for further replies.
  1. SimBy

    Regular Newcomer

    Joined:
    Jun 21, 2008
    Messages:
    502
    Likes Received:
    135
    Did anyone touch on the ComputerBase claim that P10 is in fact 40 CUs?
     
  2. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,699
    Likes Received:
    117
    Well, there is such a thing as physics, and the properties of passing current through various metals are generally well understood. But yes, this is the precise reason why specifications exist and why they are very often extremely conservative.

    The problem (in this specific case) isn't really with the physics though, it is the breaking of a promise. If a power supply promises to provide at least 75w to a device and it in fact provides at least 80w but is confronted with a sustained 85w load (from a device which promised not to pull more than 75w) and shuts down to protect itself, who is at fault? The one breaking the promise. Always. You might say that is a pretty sad power supply, and I might agree. But at the end of the day it fulfilled its obligation and is not liable for the result. The device on the other hand is, because it made a promise it did not keep. No sane company selling thousands/millions of products wants to expose itself to any more liability than is absolutely necessary.
     
  3. A1xLLcqAgt0qc2RyMz0y

    Regular

    Joined:
    Feb 6, 2010
    Messages:
    985
    Likes Received:
    277
    You are assuming that no one bends the rules, which in this case AMD is doing.

    There have been lots of cases where recalls have happened when products were buggy: the Pentium divide bug and Nvidia's underfill problem both caused products to be recalled or replaced.

    I expect motherboard makers will not indemnify AMD and may in fact state the warranty is void if a non-compliant PCIe card is installed.
     
    Razor1 likes this.
  4. gongo

    Regular

    Joined:
    Jan 26, 2008
    Messages:
    582
    Likes Received:
    12
    Just asking... but is AMD's 14nm process 15% better than Nvidia's 16nm process, as the numbers suggest?
    If yes, could that be why AMD has sacrificed short-term leakage for a long-term advantage...?
    I cannot believe AMD will allow the RX 480 to run so loosely in its power use....
     
  5. dskneo

    Regular

    Joined:
    Jul 25, 2005
    Messages:
    517
    Likes Received:
    20
    PCI-E only specifies a ceiling of 75W at boot time, after which a series of negotiations between the card and the mobo determines how much power that specific slot will use (up to 300 watts, I think) for the rest of the session. Motherboards will not burn.

    This info is also on reddit.
     
    Orion likes this.
  6. spworley

    Newcomer

    Joined:
    Apr 19, 2013
    Messages:
    146
    Likes Received:
    190
    The over-spec PCIE power use seems pretty substantiated, but most people are focusing on anticipated dramatic consequences like blown motherboards.
    More relevant is understanding why this situation occurred. Some possibilities:
    1. AMD realized it was over spec, hid it from PCIE qualification, and decided not to fix it
    2. AMD did not realize it was over spec, PCIE qualification missed it, and only reviewers discovered it
    3. After manufacturing began, AMD realized it was over spec, is working on a fix, but shipped the first batch of out-of-spec stock anyway
    4. AMD was in spec, but a last-minute BIOS change to increase clocks/voltages pushed its power use over spec and nobody caught the PCIE consequences
    Each possibility has its own interpretation and consequences. AMD and its engineers are skilled professionals, so I would place my bet on #4 instead of an engineering failure like #1-3.
     
    Otto Dafe, Anarchist4000 and Razor1 like this.
  7. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY

    Well, that is the consequence AMD has to take if they don't fix this problem (after verifying the problem, of course).
     
  8. Esrever

    Regular Newcomer

    Joined:
    Feb 6, 2013
    Messages:
    594
    Likes Received:
    298
    What does the spec actually say? Have any mobo makers said anything? You'd think they'd be the most worried, since they are the ones who have to service the RMA if a board does fail.
     
  9. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,910
    Likes Received:
    1,607
    RX 480 Crossfire review ...
    http://www.hardwareunboxed.com/rx-480-crossfire-performance-gtx-1070-killer/
     
    A1xLLcqAgt0qc2RyMz0y likes this.
  10. Anarchist4000

    Veteran Regular

    Joined:
    May 8, 2004
    Messages:
    1,439
    Likes Received:
    359
    Or just take their thermal imaging camera, point it at the mobo and see if any of the traces are glowing. We are, after all, arguing the merits of moving enough current to liquefy copper or something nearby.

    Hard to know without a proper diagram for the card. There could be a common plane biased with resistors and diodes, VRMs tied to the different rails, or a combination where one or more is shared. In theory someone with a card could test the input voltage of each VRM to likely figure it out. VRMs are basically switching power supplies.

    If anything it will burn out one of the traces or melt some plastic. They would need a high-voltage line run to their computer to really blow it up.

    Like taking an old analog phone on an aircraft to mess up cockpit communication or analog landing guidance systems? Maybe in some third world nations.

    Different, not necessarily better. 14nm is the low power variant of what is basically the same thing.

    Not sure #2 is something qualification could miss except deliberately. #3 is possible; it might also be a shoddy component. #4, while possible, seems unlikely, as the board should have been equal or biased towards the 6-pin.

    Not sure why people think an 8-pin would make any difference here. More in-spec power delivery is about the only reason, and that still wouldn't fix the issue.
     
  11. Gipsel

    Veteran

    Joined:
    Jan 4, 2010
    Messages:
    1,620
    Likes Received:
    264
    Location:
    Hamburg, Germany
    There is a pretty old spec for the electromechanical design where that 75W (as the minimum requirement?) is stated. But the PCIe spec actually includes a slot capabilities register which should reflect, well, the capabilities of each slot in the system (and is probably/hopefully set in a platform-specific way by the BIOS). And apparently this capabilities register includes a value for the "slot power limit" with a range up to 240W in 1W steps, and then 250W, 275W, 300W, and even some reserved values for above 300W. It would be interesting to check how this is configured on usual mainboards (the spec stipulates the card has to limit its consumption to the programmed value whenever it wants to use more than the form factor spec allows [75W for PEG]; in other words, it is allowed to use max(form_factor_spec, slot_power_limit), as I understand the spec). I would guess the very high values are used for those MXM-like modules for the Tesla cards (where 250+W are supplied over the [non-standard] slots).
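
    For the curious, here's a minimal sketch of how that field decodes, assuming the commonly published layout of the Slot Capabilities register (Slot Power Limit Value in bits 14:7, Slot Power Limit Scale in bits 16:15). The bit positions and encodings are my reading of the public spec, so treat this as illustrative and check it against your own copy:

    Code:
    # Sketch: decode the Slot Power Limit from a 32-bit PCIe Slot Capabilities
    # register value, per the encoding described above (assumed bit layout).
    def decode_slot_power_limit(slot_caps: int) -> float:
        value = (slot_caps >> 7) & 0xFF   # Slot Power Limit Value, bits 14:7
        scale = (slot_caps >> 15) & 0x3   # Slot Power Limit Scale, bits 16:15
        if scale == 0b00:
            # 1.0x scale: 0x00-0xEF encode 0-239 W in 1 W steps, then the
            # special encodings for 250/275/300 W; anything higher is reserved.
            special = {0xF0: 250.0, 0xF1: 275.0, 0xF2: 300.0}
            if value in special:
                return special[value]
            if value > 0xF2:
                raise ValueError("reserved encoding (above 300 W)")
            return float(value)
        # 0.1x / 0.01x / 0.001x scales exist for low-power slots.
        return value * {0b01: 0.1, 0b10: 0.01, 0b11: 0.001}[scale]

    # A slot programmed for a 75 W limit (value = 75, scale = 1.0x):
    print(decode_slot_power_limit(75 << 7))   # -> 75.0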

    edit:
    PCIe 3.0 base spec, section 6.9:
    [image: Slot Power Limit excerpt from the PCIe 3.0 base spec, section 6.9]
    But no idea how relevant this really is, as one can also read it as limiting the complete consumption of the card, not just the amount supplied by the slot. Earlier versions (like 1.0, which also lack the 250W, 275W, 300W, and reserved above-300W encodings) appear to more clearly specify just the supply through the slot, though.

    [image: corresponding excerpt from an earlier (1.0) revision of the spec]
     
    #3691 Gipsel, Jun 30, 2016
    Last edited: Jun 30, 2016
    Lightman, Razor1, BRiT and 1 other person like this.
  12. Mat3

    Newcomer

    Joined:
    Nov 15, 2005
    Messages:
    163
    Likes Received:
    8
    If it came down to it, couldn't AMD just provide a software or BIOS update that would, let's say, tell the GPU to stay much closer to its base clock more of the time, or something like that?
     
  13. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,699
    Likes Received:
    117
    "A standard height x16 add-in card intended for server I/O applications must limit its power dissipation to 25 W. A standard height x16 add-in card intended for graphics applications must, at initial power-up, not exceed 25 W of power dissipation, until configured as a high power device, at which time it must not 30 exceed 75 W of power dissipation."

    "The 75 W maximum can be drawn via the combination of +12V and +3.3V rails, but each rail draw is limited as defined in Table 4-1, and the sum of the draw on the two rails cannot exceed 75 W."

    About as clear as it gets...

    From an old revision though...
     
  14. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,853
    Likes Received:
    4,463
    Wow, that was a rather nasty little group of AMD-hater circlejerkers back in pages 171-172 of this thread.
    Thank you Rys for stopping it through sheer sanity. The card is getting glowing reviews and winning performance/price ratios pretty much everywhere, while at the same time offering very good power consumption, yet we're seeing the usual suspects trying to turn it into humanity's greatest failure.

    Regarding the competition part, I can only guess these same people really enjoy how nVidia's full ~300mm^2 GPU debuted at $230 in their first 40nm line, then at $500 in their first 28nm line, and now we can't seem to find the 16FF one at less than ~$900 in Europe (thanks to a completely fake MSRP that no one is following because of that FE marketing ploy).
    They're either more invested in nVidia's stock price than they are in GPU prices, or this is a totally new form of masochism.


    The PCIe armchair concerns seem a bit ridiculous from an electrical engineer's POV. The motherboard's 12V feed comes from the PSU (the ATX24 spec includes a number of 12V pins), so if there is more current coming from the slot than from the dedicated 6-pin connector, it's simply because that is the path of least resistance in that specific case. Multiple GPUs in one motherboard will probably end up relying more on the 6-pin connector for the 12V than on the PCIe slot.
    At 12V, pushing 75W means a current of 6.25A. At 83W it goes towards 6.92A. Thinking the motherboard "might blow up" because it's pushing 0.67A more on that specific power pathway is a bit ridiculous IMO.
    Regardless, a standard is a standard, and although standards are almost always over-engineered, they should be followed. Perhaps AMD can issue a driver update that will just tell the BIOS to force a different power distribution. Though if they don't, I think the consequences will be... none at all.
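
    For completeness, the arithmetic above as a throwaway snippet (75W being the form-factor limit, 83W the reported slot draw):

    Code:
    # I = P / V on the 12 V rail, for the figures quoted above.
    for watts in (75.0, 83.0):
        print(f"{watts:.0f} W over 12 V -> {watts / 12.0:.2f} A")
    # 75 W -> 6.25 A, 83 W -> 6.92 A: roughly 0.67 A over the slot's nominal share.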
     
  15. spworley

    Newcomer

    Joined:
    Apr 19, 2013
    Messages:
    146
    Likes Received:
    190
    The official PCIE specification says that an x16 graphics card can consume a maximum of 9.9 watts from the slot's 3.3V supply, a maximum of 66 watts from the slot's 12V supply, and a maximum of 75W from both combined. Tom's Hardware's measurements showed a 1-minute in-game average of 82 watts from the 12V supply, with frequent transient peaks of over 100 watts. The 3.3V draw stayed in spec.
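
    A minimal sketch of that per-rail check, using the limits above; the 82W average is the figure from Tom's Hardware, while the 3.3V number below is an assumed placeholder, since only "in spec" was reported:

    Code:
    # Check measured slot draw against the CEM limits for an x16 graphics card:
    # 9.9 W on the 3.3V rail, 66 W on the 12V rail, 75 W combined.
    LIMITS_W = {"3.3V": 9.9, "12V": 66.0, "combined": 75.0}

    def check_slot_draw(w_3v3, w_12v):
        """Return the list of limits the measured rail draws violate."""
        violations = []
        if w_3v3 > LIMITS_W["3.3V"]:
            violations.append("3.3V rail over 9.9 W")
        if w_12v > LIMITS_W["12V"]:
            violations.append("12V rail over 66 W")
        if w_3v3 + w_12v > LIMITS_W["combined"]:
            violations.append("combined slot draw over 75 W")
        return violations

    # ~82 W measured on the slot's 12V rail; 3.3V assumed at 5 W for illustration.
    print(check_slot_draw(w_3v3=5.0, w_12v=82.0))
    # -> ['12V rail over 66 W', 'combined slot draw over 75 W']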
     
    pharma likes this.
  16. Infinisearch

    Veteran Regular

    Joined:
    Jul 22, 2004
    Messages:
    739
    Likes Received:
    139
    Location:
    USA
    Who's right?
     
  17. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,699
    Likes Received:
    117
    As I understand it, the larger numbers are the max an individual card in that slot can draw from all sources (e.g. 8-pin + 6-pin for a 300 watt card): 150W from the 8-pin + 75W from the 6-pin + 75W from the board = 300W total.

    So you can certainly have a 300W (max) card in a slot, but it should not draw more than 75W max from the board.
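
    As a two-line sanity check, with the per-source maxima as quoted in this thread:

    Code:
    # Per-source maxima quoted in this thread; the slot's share stays capped
    # at 75 W regardless of how large the card's total budget is.
    SOURCE_MAX_W = {"slot": 75, "6-pin": 75, "8-pin": 150}

    def card_budget(*sources):
        """Total allowed draw for a card fed by the named sources."""
        return sum(SOURCE_MAX_W[s] for s in sources)

    print(card_budget("slot", "6-pin", "8-pin"))  # 300 W class card
    print(card_budget("slot", "6-pin"))           # 150 W class (RX 480 layout)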
     
  18. CSI PC

    Veteran Newcomer

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    The problem with ratings is that one has to make assumptions, because there are several design options and implementations a manufacturer can go with, anywhere from 6A up to high-current solutions around 13A.
    Same with cabling: the commonly supplied gauge is 18AWG in this context (especially at the more budget end), whereas with high current you would want 16AWG cable, and there is nothing stopping a manufacturer reducing this to 20AWG, which is also spec'd/rated by Molex.
    If you go with the standard rating spec, that would give you roughly 144W max from the mainboard, but this has to be shared with all devices using the PCI Express slots and some other devices; on top of this you need to consider the riser slot, which may have its own current limitations (I know some are rated to only 5A).
    For auxiliary PEG, the 6-pin gives you 192W max, and the 8-pin 288W max.
    But then you need to accommodate de-rating (meaning you do not want to be too near the max), it needs to be 18AWG minimum for all 12V wires, and a reasonable PSU if OCing or, importantly, going 2x480.
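
    As a sketch, here is one combination of pin counts and per-pin currents that reproduces the figures above (my assumptions, chosen to match the quoted numbers, not authoritative connector ratings):

    Code:
    # watts = number of 12V pins * amps per pin * 12 V; the pin/current pairs
    # below are assumptions chosen to reproduce the quoted figures.
    def max_watts(power_pins, amps_per_pin, volts=12.0):
        return power_pins * amps_per_pin * volts

    print(max_watts(2, 6.0))   # ATX24: two 12V pins at 6 A       -> 144 W (shared)
    print(max_watts(2, 8.0))   # 6-pin PEG: two hot pins at 8 A   -> 192 W
    print(max_watts(3, 8.0))   # 8-pin PEG: three 12V pins at 8 A -> 288 W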

    That said, it is clear Tom's Hardware was also taking this into consideration and not just the PCIe spec, because they did not get overly concerned about the power distribution in the context of a single card without overclocking, even though, without OCing, they had measurements that went beyond not just the PCIe spec but also the ratings based upon standard components (not HCS).
    They measured peaks at 155W and commented that, thankfully, these were brief bursts.
    Where they were not happy was when they OCed and it went to an average of 100W and a peak of 200W, and they were also dubious about a 2x480 setup, which would again go above the ratings for standard components; here I bet it needs a good motherboard/PSU that is not budget-mainstream.
    However, as I mentioned, you do not want to be too near the max, as modern GPUs run at pretty high temps, which also influences derating for maximum current (it would not be significant, but it would reduce that max a bit).

    While I doubt a motherboard/PSU will 'blow' (though there may be more impact for budget products), the OC and 2x480 power demand/distribution could cause problems that initially would only be noticed if measured with a scope, but with long-term failure possibilities or unwanted behaviour from a power perspective.
    But the caveat is whether this applies to just a few cards or is a more general power behaviour.

    Cheers
     
    xEx likes this.
  19. Grall

    Grall Invisible Member
    Legend

    Joined:
    Apr 14, 2002
    Messages:
    10,801
    Likes Received:
    2,172
    Location:
    La-la land
    Either this is a misinterpretation, or it has literally NEVER come up before, ever, in any discussion here on B3D that I've seen, or in any hardware website article or GPU review.

    I'm dubious as to the veracity of this information, as it would mean a high-end GPU might not need ANY auxiliary power connectors, and that's - as I mentioned - something that has never been brought up for discussion that I have seen. Also, PCIe socket pins are an incredibly thin gauge; I wouldn't want to pull 300W through them. I'd be wary of welding the pins to the card edge connector... :lol:
     
    homerdog likes this.