Only 75W through PCIe 2.0 slot?

Slyne

I've seen it pointed out again today (by Dave B, so I believe it) that a GPU cannot draw more than 75W from the PCIe slot, even if both card and board use the PCIe 2.0 spec. But I'm sure I've read before that PCIe 2.0 supported channeling more power through the motherboard slot (some have said 150W, but nothing I've read pointed to a fixed value).

So, was the spec changed? Was the article I read wrong? Is it more expensive than simply connecting directly to the PSU? I'd love it if someone could shed some light on that topic.

Thanks much
 

It is 150W through a PCIe 2.0 slot; it's just a problem of getting motherboard manufacturers to actually do it. I remember when AMD's Spider platform first came out and someone was playing with it, and it had A LOT of wattage options in there, for both K10 and an HD 3800.
 
PCI-SIG F.A.Q. said:
Q11: I’ve heard mention that PCI-SIG is working on a new graphics spec – what is it? How is it different from the existing PCIe x16 Graphics 150watt-ATX 1.0 spec?
A11: PCI-SIG is developing a new specification to deliver increased power to the graphics card in the system. This new specification is an effort to extend the existing 150watt power supply for high-end graphics devices to 225/300watts. The PCI-SIG is developing some boundary conditions (e.g. chassis thermal, acoustics, air flow, mechanical, etc.) as requirements to address the delivery of additional power to high-end graphics cards through a modified connector. A new 2x4 pin connector supplies additional power in the 225/300w specification. These changes will deliver the additional power needed by high-end GPUs. The PCI-SIG expects the new specification to be complete in 2007.

The wording is a little confusing, as it sounds like they're upping the power provided through the slot. The increase, however, was coming from the 6-pin -> 8-pin connector change, not the slot.
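To make that concrete, here's a rough sketch of how the headline figures break down under the usual assumption of 75W per 6-pin and 150W per 8-pin connector (my reading of the spec, not a quote from it):

```python
# How the PCI-SIG graphics power tiers are usually built up
# (slot + auxiliary connectors; per-connector budgets are assumed CEM figures).
SLOT_W = 75        # x16 graphics slot, unchanged between PCIe 1.x and 2.0
SIX_PIN_W = 75     # 6-pin auxiliary connector
EIGHT_PIN_W = 150  # 8-pin (2x4) connector introduced with the 225/300W spec

tiers = {
    "150W": SLOT_W + SIX_PIN_W,
    "225W": SLOT_W + EIGHT_PIN_W,
    "300W": SLOT_W + SIX_PIN_W + EIGHT_PIN_W,
}
for name, total in tiers.items():
    print(f"{name} tier -> {total} W (slot contribution stays at {SLOT_W} W)")
```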

I'm not positive, but I'd assume the power pins for PCIe were spec'd to provide close to the maximum current the traces could support. To go any higher they'd have to double the voltage supplied by the pins, and that would also necessitate a change in the slot so people don't kill things.
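For a feel for why ~75W is the ceiling, the per-rail arithmetic looks roughly like this; the current limits are what I recall of the x16 CEM figures, so treat them as assumptions:

```python
# Rough per-rail budget for an x16 graphics slot (current limits assumed from
# memory; the exact figures live in the official CEM document).
rails = [
    ("+12V", 12.0, 5.5),   # ~66 W
    ("+3.3V", 3.3, 3.0),   # ~9.9 W
]
total = sum(volts * amps for _, volts, amps in rails)
print(f"Slot total: {total:.1f} W")  # ~75.9 W, i.e. the familiar 75 W figure

# With pin/trace current fixed, more slot power means more volts, e.g. a
# hypothetical 24 V rail at the same 5.5 A:
print(f"Hypothetical 24 V rail: {24.0 * 5.5 + 3.3 * 3.0:.1f} W")  # ~142 W
```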
 
TBH, thinking about it now, even suggesting the slot would supply more watts was plain ridiculous, at least once they confirmed full compatibility both ways. Why would you bother delivering more watts through the slot when 2.0 cards still have to have the extra plugs for PCIe 1.x motherboards anyway?
 
I seem to recall an Asus 790FX board offering the option of delivering 150W through the (primary) PEG 2.0 slot as a user-selectable BIOS option.
 
Thanks all for your answers, and sorry for missing the previous thread on this topic (I had found a few threads on PCIe specs, but not the one linked by Sinistar). Even the articles I've read on the subject are either confused or avoid the details, so I'm happy to finally have an answer, setting aside the one-offs mentioned by LordEC911 and ShaidarHaran.
 
It is 150W through a PCIe 2.0 slot; it's just a problem of getting motherboard manufacturers to actually do it.

Since motherboard manufacturers won't do it, it's just like the AGP Pro slot: you couldn't simply buy an AGP Pro video card - it was made for professional use ONLY.
 
I was always curious about this too, but it seems there are people who do it.

Oddly Theo talks about it in relation to the rv670/rv770 and 9800gt cards. http://www.tgdaily.com/html_tmp/content-view-37881-135.html

He seems to imply that adding the extra juice from the slot does help, although I'm unsure whether Theo actually measured anything or is just making an uninformed statement based on what the OP and I had also come to believe.

I wouldn't be surprised if a 150W budget limited the 4850's core clock to just below the 4870's, maybe 725MHz? That's assuming linear scaling from the 110W max TDP of the 4850 to the 160W TDP of the 4870 and the 125MHz difference in core clock: 125/50 = 2.5MHz/W, and 2.5 x 40 = 100MHz, giving 725MHz @ 150W, not accounting for the small difference in power usage between GDDR3 and GDDR5. 700-725?

If you follow that logic, 225W could bring you another 2.5MHz/W x 75W = 187.5MHz.

725MHz @ 150W + 187.5MHz @ 75W = 912.5MHz @ 225W.
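Just to put that back-of-envelope math in one place (same linear-scaling assumption, and all the TDP/clock inputs are the speculative numbers above, not measurements):

```python
# Naive linear clock-per-watt extrapolation from the rumoured HD 4850/4870 figures.
# All inputs are the speculative numbers from this post, not measured data.
clock_4850, tdp_4850 = 625, 110   # MHz, W
clock_4870, tdp_4870 = 750, 160   # MHz, W

mhz_per_watt = (clock_4870 - clock_4850) / (tdp_4870 - tdp_4850)   # 125 / 50 = 2.5

def extrapolate(power_w):
    """Linear estimate of core clock at a given power budget."""
    return clock_4850 + mhz_per_watt * (power_w - tdp_4850)

for budget in (150, 225, 300):
    print(f"{budget:>3} W -> ~{extrapolate(budget):.1f} MHz")
# 150 W -> ~725 MHz, 225 W -> ~912.5 MHz, 300 W -> ~1100 MHz (if it were linear)
```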

Of course scaling is rarely linear, and I'm not going to pretend diminishing returns and heat don't play a factor, but one could argue if 300W were accessible and usable, 1ghz could and should be possible. :devilish:

Well, perhaps at least 980MHz, as attested to in this thread. If you figure he's talking about 4870 cards, that'd mean the double-slot cooler could handle ~250W at full bore... if it were linear. Seems feasible enough.

Of course, that being said... with the 8800GT/RV770 TDPs being almost identical at similar clocks, and with full-blown 128SP G92s able to hit about 825-850 on the core with 225W available, I don't think it's insane to expect the 9800GT to hit 850+, especially if cooled well. If G92b turns out decent, it might very well have a sporting chance in price/performance versus the 4850... or at least usurp the market below it currently man-handled by the 9600GT.

This sure could be interesting... and it makes you wonder if more board vendors will add this option, especially anyone using non-AMD chipsets/CPUs.

Please excuse me while I go daydream about pumping 225W through a 4850 or 300W through a 9800GT...Now THAT would be a good deal for ~$200. :devilish:

By the same token... I want to see someone put 375W through a GTX 280... and then volt-mod it... and then buy a power plant to overclock it.
 
turtle - if only it were that simple.

Clockspeed scaling with power consumption (voltage required to run the desired clockspeed is actually the correct metric here) only ever approaches linearity across a small range of clocks/voltages.

TDP as a percentage of max. power is no measure of a GPU's ability to scale clockspeed, unfortunately.
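A toy model makes the point: using the usual dynamic-power approximation P ≈ C·V²·f, with voltage/frequency pairs invented purely for illustration, the watts needed per extra MHz climb quickly once voltage has to rise:

```python
# Why linear MHz-per-watt extrapolation breaks down: dynamic power goes roughly
# as C * V^2 * f, and V itself has to rise to sustain higher f.
# The voltage/frequency pairs below are made up purely for illustration.
def dynamic_power(freq_mhz, volts, base_freq=625, base_volts=1.10, base_power=110):
    """Scale a baseline power figure by (V/V0)^2 * (f/f0)."""
    return base_power * (volts / base_volts) ** 2 * (freq_mhz / base_freq)

for f, v in [(625, 1.10), (725, 1.15), (850, 1.25), (1000, 1.40)]:
    print(f"{f:>4} MHz @ {v:.2f} V -> ~{dynamic_power(f, v):.0f} W")
# ~110 W, ~139 W, ~193 W, ~285 W: the cost per extra MHz climbs steeply once
# voltage has to go up, which is why TDP headroom doesn't predict clock headroom.
```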
 