AMD: Speculation, Rumors, and Discussion (Archive)

Status
Not open for further replies.
Is the power distribution on a GPU card programmable from software? (For any vendor/generation, not just the RX 480.) Certainly the output VRM voltage is dynamically programmable, but can the actual current draw from the card edge versus the external PCIE plug be selected by software too? If so, then a fix for the RX 480 is easy, of course. But it seems possible, even likely, that a simpler design would use a fixed, literally hard-wired strategy of allocating, say, three VRM phases drawing from the card edge and three drawing from the PCIE plug.

I could imagine programmable balance could be useful in other situations. Perhaps a GPU installed in a supercomputer node could realize there are 8 GPUs on the same PCIE bus, and deliberately choose to draw essentially all of its power from the robust per-card plugs. Or even dynamically change allocation, by noticing voltage sag from one input and rebalancing itself to lighten the current draw from that (struggling) source.
 
I have seen third-party power management solutions, designed for GPUs specifically, that route power distribution between the 12V supplies (more than a basic IC), but I am not sure they appear on most GPUs historically, especially consumer ones.
The worst-case scenario is that the partitioning splits the GPU power phases equally between both supplies and would require a slight modification of the board to change (easy for an AIB to do).
If that is the case, then any short-term fix will require the power target and voltage envelope to drop enough to remove enough of the current draw from the PCI-Express slot - the downside is that this reduction must be large enough given that the power demand is split equally over both power inputs. But that IMO is the worst-case scenario.
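To put rough numbers on that worst case, a back-of-envelope sketch (the 164W total load is a hypothetical figure for illustration, not a measurement):

```python
# Back-of-envelope check of the "worst case" fix, assuming a fixed
# 50/50 split between the PCIe slot and the 6-pin connector.
# The total board power here is illustrative, not a measured value.

SLOT_LIMIT_W = 75.0          # PCIe spec limit for slot power delivery
measured_total_w = 164.0     # hypothetical total board power under load

slot_draw_w = measured_total_w / 2           # 50/50 split -> 82 W from the slot
max_total_w = 2 * SLOT_LIMIT_W               # 150 W total keeps the slot at 75 W
required_cut = 1 - max_total_w / measured_total_w

print(f"slot draw now: {slot_draw_w:.0f} W")
print(f"power target must drop by at least {required_cut:.1%}")
```

In other words, with a hard 50/50 split the whole card has to stay under 150W, so every watt over that costs half a watt of slot overdraw.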

But I am keeping an open mind for now: maybe they are able to do something more, and there is a slight chance that the 50-50 distribution characteristic is down to a more configurable power management system. Fingers crossed.
Cheers
 
The 6 phases of the VRM for the GPU are very likely split between the PEG slot and the 6pin connector without a possibility to change the assignment (it may not be the best idea to connect the 12V from the slot and the 6pin plug before the VRM when the power supply has multiple 12V rails [slightly different voltages]). Every phase can draw from only one source, either the slot or the 6pin plug.
But the VRM controller on the RX480 supports a programmable balance between the phases. That means a driver update could indeed shift the usage between the 6pin plug and the slot by instructing the VRM controller that the current balance should be biased towards the phases connected to the 6pin. Another alternative depends on the construction of the VRM. If the phases with a higher number (it really depends on which PWM output of the VRM controller the driver MOSFET of that phase is connected to) use the 12V from the slot, one can also simply switch off a phase or two. The controller can even do it automatically and on the fly (under load, the GPU doesn't even notice) at lower/normal loads (at programmable thresholds). As the VRM for the GPU is pretty overdesigned (it can provide up to 600A or so to the GPU, at 120°C still ~400A, if I remember the derating correctly), one would probably even gain efficiency (at low loads a lower number of phases is more efficient; that's the reason this functionality is integrated).
If for instance phases 4, 5, and 6 are connected to the slot and the first three phases to the 6pin, one could either bias the current balance, let's say +30%, towards phases 1, 2, and 3 (the card then draws proportionally more from the 6pin plug than from the slot) and/or set the thresholds for the use of phases 5 and 6 so high that in normal operation they are never used (they are only activated with heavy overclocking).
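A quick sketch of the arithmetic behind that bias (the six-phase source assignment and the weight values are assumptions for illustration, not the controller's actual register semantics):

```python
# Sketch: how a per-phase current bias shifts load between the two sources.
# Assumes, as in the post, that phases 1-3 feed from the 6-pin plug and
# phases 4-6 from the slot. Weights are hypothetical controller settings.

def source_split(total_a, weights, sources):
    """Distribute total current over phases by weight, then sum per source."""
    per_phase = [total_a * w / sum(weights) for w in weights]
    draw = {}
    for amps, src in zip(per_phase, sources):
        draw[src] = draw.get(src, 0.0) + amps
    return draw

sources = ["6pin", "6pin", "6pin", "slot", "slot", "slot"]

even = source_split(120.0, [1, 1, 1, 1, 1, 1], sources)          # default 50/50
biased = source_split(120.0, [1.3, 1.3, 1.3, 1, 1, 1], sources)  # +30% on 6-pin phases

print(even)    # 60 A from each source
print(biased)  # 6-pin carries ~56.5% of the current, slot ~43.5%
```

So a +30% bias alone moves roughly 6-7% of the total current off the slot; combined with phase shedding the shift could be larger.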
 
There has been no stock of the 480 the whole day... I think AMD is either asking the stores to hold the cards until the hotfix is out, or recalling the cards to be modified. The latter seems unlikely.
 
Must be a regional thing. In Germany availability is pretty good, with plenty of shops having the card in stock.
 
So... the RX 480 uses an out-of-spec 6-pin that could draw much more power than 75W [150W, like an 8-pin], but AMD decided not to use that to power the GPU core. They have split the GPU's power load between the 75W 6-pin and the PCIE slot! The VRAM takes its power from the 6-pin.

twitch.tv/buildzoid/v/75850933?t=53m39s

On the other hand, the hard-to-find reference 4GB model is there just for PR purposes; it really has 8 gigs on board and a different VBIOS [for now].
http://www.neogaf.com/forum/showpost.php?p=209038156&postcount=1612
 
Interesting from 00:53:15 through end...
https://www.twitch.tv/buildzoid/v/75850933
As I said, the phases are split between slot and plug.
Btw., AMD doesn't need the sense function at the 6pin plug because the VRM controller detects if a phase doesn't have a supply and can signal that. That basically works as sense as well, and one can use all three pins for the ground connection (as the plug is wired anyway). As there is no physical connection between the 12V from the slot and from the plug (which wouldn't be the best idea to have in the first place, as it could create large currents in a random direction through the slot if one uses a multi-rail power supply), there is no way the supply of the additional phases can be "replaced" from the slot. In principle it is a good design; just the split of the load between slot and plug is somewhat off (which wouldn't be a problem if the card consumed 40W less in total).
 

Would the inductors/capacitors on each phase be rated for that, though, especially on a tight-margin mainstream product?
On a side note, I just did a quick check: looking at some other modern designs, the VRM roughly supports anywhere between 25A and 40A per phase (depending on the model - still a lot). No idea about the 480, which, it should be kept in mind, is not an enthusiast/performance-tier design.
Is there a good high-resolution photo of the 480's board components on a site somewhere?

Thanks
 
Obviously, someone saw the need to provide the RX480 with VRMs rated way higher than on the 1080, for instance. Not only does it have more phases, but each phase can also provide much higher current. The lowside MOSFETs on each phase of the RX480 provide up to 100A (yes, each phase can do 100A). Reducing the number of phases should actually increase the efficiency of the VRM slightly. No idea why they felt this to be necessary. The VRM is basically on FuryX level.
 
Wow,
kinda excessive.
edit:
Do you know which PWM controller-driver they use on the 480?
Just curious.
Even the 980ti was only 60A per phase (using an integrated IR device), which works out to around 40-45A derated at 80°C - sort of puts it into perspective.
Thanks
 
Fewer VRM phases, a better cooler, and better power delivery, and AMD would have had a trump card in their hands... Sad how a few decisions screwed up what could be a really awesome card.
 
Is there a good high-resolution photo of the 480's board components on a site somewhere?
There are some. Detail images (and also detailed power measurements) can be found in this German review (there may be an English version, don't know right now).
The VRM controller is an IR3567B (6+2 phases; an extensive datasheet [detailing what can be programmed] is publicly available for the IR3565B, the 4+2-phase version), the lowside MOSFETs (basically carrying the load) are MDU1511 (30V, 100A, 2.4mΩ), the highside MOSFETs are MDU1514 (30V, 66.3A, 6mΩ), and the drivers are CHiL CHL8510.
Let's see if deeplinking works.

VRM controller:
[Image: PWM-Chip_w_300.jpg]


MOSFETs + driver:
[Image: Mosfets_w_640.jpg]


Edit:
And that is the derating chart for the low side MOSFET:
[Image: rx480_ls_mosfet_deratx8jjk.png]


So even running these at 125°C still gives you 66A each. At more realistic temperatures (<=100°C) with at least somewhat decent cooling (reference should be fine), they're basically unimpeded. And you have six of these in there.
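Putting the quoted figures together as a rough sanity check (linear interpolation between the two derating points is my assumption, not the actual curve from the chart):

```python
# Rough total VRM capability from the low-side MOSFET figures quoted
# above: 100 A nominal, ~66 A at 125 °C. Linear derating between the
# two points is an assumption for illustration, not the datasheet curve.

def derated_per_phase(temp_c, rated_a=100.0, hot_a=66.0, t0=25.0, t1=125.0):
    """Per-phase current limit at a given temperature (assumed linear)."""
    if temp_c <= t0:
        return rated_a
    frac = min((temp_c - t0) / (t1 - t0), 1.0)
    return rated_a + (hot_a - rated_a) * frac

PHASES = 6  # low-side MOSFET count on the RX 480's GPU VRM

for t in (25, 100, 125):
    per = derated_per_phase(t)
    print(f"{t:>3} °C: {per:.0f} A/phase, {PHASES * per:.0f} A total")
```

Even the hottest point leaves nearly 400A of headroom across the six phases, which is why reducing active phases at normal loads costs nothing in capability.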
 
I always thought AMD (and, in the past, ATi) over-engineered VRMs on reference designs compared to nVIDIA. If you're mass-manufacturing these PCBAs (the reference RX480, for example), having a reliable 3-4 phase VRM that handles just over the maximum rated board power would have been enough and would have saved them a lot of money through fewer/cheaper components. To me that's the good engineering solution with good trade-offs all round, not something that is overkill... plus AIB partners can do that with their over-the-top >10 phase setups. It's why most companies avoid fancy linear-technology parts like the plague. No need to spend so much money on fancy ICs and MOSFETs etc. unless it's mission critical.

Maybe those rumors of Polaris not hitting the right clocks were true. Obviously the GPU didn't perform as expected, hence having to readjust the final clocks/volts higher than planned. Peripherals like the VRM design were probably frozen, with boards in production and ready to go, only to be set back by this.
 