NVIDIA GF100 & Friends speculation

And that's even after removing World of Warcraft, where the 6870 got negative scaling, from the list of benchmarked games... still, TPU's test suite consists of old stuff that often gets CPU-bound or doesn't scale at all. Their reviews aren't a good source for average numbers until they fix their choice of games.
 
It's still not AMD's problem if a case manufacturer makes an ATX case but doesn't correctly implement the ATX specification. In other words, blame the case manufacturers for cutting corners and ignoring the specs.

I don't think it's a matter of blaming anyone, but of potential sales opportunities. If I had a very small chassis, chances are I bought it for a reason (or I am f***ed with Acer/HP microscopic cases and fitting PSUs anyway). So chances are that I won't upgrade this particular PC with a card that's unlikely to fit inside it. Sales opportunity lost.

I might, however, build another PC with sufficient space, but not all users will (be able to?) do that.
 
Whoa 2x 8 pin connectors. A ridonkulous cooling solution with 3 fans. If that's what Nvidia is planning, it certainly looks like they are getting ready to give PCIe certification a big middle finger. :D

Regards,
SB
 
Silent_Buddha said:
Whoa 2x 8 pin connectors. A ridonkulous cooling solution with 3 fans. If that's what Nvidia is planning, it certainly looks like they are getting ready to give PCIe certification a big middle finger. :D
I can't figure out why people seem to think it's important for a specialty product like this to stay within some power number that's written in a technical spec, especially if the connectors make it abundantly clear that more than the usual power is required.

There is a PCIe spec and committee and all that, but it's not as if they have any say about which products can or cannot be released. There are tons of legacy PCI products that violated the spec one way or another, but nobody cares as long as they work in commonly used configurations.

In the worst case, excess power requirements result in the inability to put an official PCIe logo on the box, which, I'm sure, will highly concern some corporate enterprise IT manager.

If anything, dual 8-pin connectors not being an official configuration is a clear indication that the PCIe spec is in dire need of an update. Nobody can claim they didn't see the need for more power coming.
 
What should the next wattage limit be?

A quad-card setup of 300W cards with the attendant system (a few hundred W to the CPU and misc) is starting to nudge into the uncomfortable (out of code?) range for 15 amp house circuits.

The SIG may not have an interest in defining the electrician's certification needed for next greatest GPU slot specification.
 
What should the next wattage limit be?

A quad-card setup of 300W cards with the attendant system (a few hundred W to the CPU and misc) is starting to nudge into the uncomfortable (out of code?) range for 15 amp house circuits.

The SIG may not have an interest in defining the electrician's certification needed for next greatest GPU slot specification.

I figure this is probably tongue in cheek, but since it made me curious let me share what I found.

First, code in the US at least specifies that a standard 110 volt outlet gives you 15 amps, so 1650 watts of power available (remember, this assumes the nominal 110 volt figure for the outlet).

Assuming you could run 4 dual-GPU cards, then at 300 watts each they would consume 1200W, meaning the rest of your system would have to consume over 450W to be pushing spec. Keep in mind that while the individual circuits are rated at 15 amps, a lot of the newer wiring done in houses is rated for 20 amps, and the difference is a fuse in your fuse box. That pushes the limit up to 2200W, which is comfortable even given that setup. I think Europe is even better off, as it runs at 220 volts with wire rated at 15 amps, for a total draw of 3300 watts.
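If anyone wants to plug in their own numbers, here's a quick back-of-the-envelope sketch in Python using the figures above (the voltages, breaker ratings, and the 450W system estimate are all just the assumptions from this post, not authoritative values):

```python
# Rough wall-circuit headroom estimate using the figures discussed above.
# All inputs are illustrative assumptions, not electrical-code values.

def circuit_capacity_watts(volts: float, amps: float) -> float:
    """Nominal capacity of a household circuit: P = V * I."""
    return volts * amps

gpu_draw = 4 * 300       # four dual-GPU cards at 300 W each
rest_of_system = 450     # CPU, drives, fans, PSU losses (a guess)

for label, volts, amps in [("US 15 A", 110, 15),
                           ("US 20 A", 110, 20),
                           ("EU 15 A", 220, 15)]:
    capacity = circuit_capacity_watts(volts, amps)
    headroom = capacity - gpu_draw - rest_of_system
    print(f"{label}: {capacity:.0f} W capacity, {headroom:.0f} W headroom")
```

On the US 15 amp circuit the headroom comes out to essentially zero with that setup, which is the "pushing spec" point above.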

I think people who are really planning on running 4 dual-GPU cards (which I would guess cost in the $500-600 range each) either have enough money to pay an electrician to change a fuse or enough technical know-how to do it themselves. If they are really concerned (and the people who do this type of project often are), they could exploit the fact that many newer houses in the US have 2 circuits per room and just run 2 power supplies on separate circuits. I'm pretty sure they will need 2 power supplies anyway to feed that many cards.

I think the whole situation is a bit of a red herring though. This is probably a PR card. AMD has held the "fastest single card" title for a long time on the back of their dual-GPU offerings, and I have a feeling this is a chance for NVidia to challenge that title. I seriously doubt they plan on producing these in mass quantities or on requiring all future cards to have similar power draws. I think they are just looking to put the "AMD has the fastest card!" argument to rest. I have a feeling they figure the PR gain from that outweighs any PR loss from "Look at how many watts this thing pulls!".
 
I figure this is probably tongue in cheek, but since it made me curious let me share what I found.

First, code in the US at least specifies that a standard 110 volt outlet gives you 15 amps, so 1650 watts of power available (remember, this assumes the nominal 110 volt figure for the outlet).
There is usually a safety margin built into the regulatory limit. I've seen 20% bandied about, but that isn't something I can state authoritatively. That would cut significantly into the headroom calculated above.

I think people who are really planning on running 4 dual-GPU cards (which I would guess cost in the $500-600 range each) either have enough money to pay an electrician to change a fuse or enough technical know-how to do it themselves.
Such a requirement is beyond what the SIG concerns itself with. There's no real interest in researching and ratifying a standard that requires home modification and contract work.
 
There is usually a safety margin built into the regulatory limit. I've seen 20% bandied about, but that isn't something I can state authoritatively. That would cut significantly into the headroom calculated above.

The safety margin only applies if you plan for 100% load, i.e., the video cards would have to be pushing their full 300 W 24/7, 365. Otherwise it is fine to go up to max load as long as you don't run it more than 80% of the time.

Unless someone is planning on running a year-long Futuremark test, I don't see that as being a concern. As long as they don't exceed the maximum at any point in time they are fine.
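For what it's worth, here's the derating arithmetic in the same back-of-the-envelope style (the 80% figure is the one being debated in this thread, so treat it as an assumption, not code gospel):

```python
# Continuous-load derating sketch. The 80% factor is the figure quoted
# in this thread, not a verified electrical-code requirement.
nominal_capacity = 110 * 15                 # 1650 W on a US 15 amp circuit
continuous_limit = nominal_capacity * 0.80  # 1320 W for a sustained load

quad_gpu_system = 4 * 300 + 450             # cards plus the rest of the machine
print(quad_gpu_system > continuous_limit)   # True -> a full-tilt quad setup,
                                            # run continuously, would exceed
                                            # the derated limit
```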

Such a requirement is beyond what the SIG concerns itself with. There's no real interest in researching and ratifying a standard that requires home modification and contract work.

The SIG doesn't care how many cards the computer has. It currently allows 75W from the slot, one 8-pin connector (150W), and one 6-pin connector (75W), for a total of 300W. That is independent of whether you're running 1 card in a personal machine or 3,000 in a GPU farm.

I really doubt they base the calculations for card wattage on a massively nonstandard setup (a machine with 4 PCIe slots instead of 2, running 4 dual-GPU cards). I think it borders on absurd to suggest they would.

Put another way, take a look at a board like this one:
ASUSTeK. Now, combine that with a set of risers like this one:
riser.

That is only PCIe 2.0, and it would still pull around 900W with 6 cards. Yet PCIe 3.0 still went to 300W per card, for a mind-blowing 1800W just from the video cards in that configuration. I don't think the SIG lost a minute's sleep over it.
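Just to make that arithmetic explicit, here's a minimal sketch of the totals (the per-card wattages are the figures quoted in this thread, not numbers pulled from the spec text itself):

```python
# Per-card PCIe power budget as described earlier in the thread
# (my paraphrase, not a quote from the spec document).
SLOT_W = 75        # delivered through the PCIe slot itself
EIGHT_PIN_W = 150  # one 8-pin auxiliary connector
SIX_PIN_W = 75     # one 6-pin auxiliary connector

per_card_budget = SLOT_W + EIGHT_PIN_W + SIX_PIN_W  # 300 W

# The 6-slot riser scenario above, at the two per-card figures quoted.
cards = 6
print(cards * 150)              # ~900 W at the older ~150 W per-card figure
print(cards * per_card_budget)  # 1800 W if every card uses the full budget
```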

Like I said - red herring. This card is about getting the fastest single card on the market. I doubt NVidia cares what the PCI SIG thinks about their card, and I doubt that the SIG will bother trying to rework a specification based on one niche-market card setup that might pull too much power if Joe Random sets one up and starts a whole suite of benchmarks running nonstop all day for a year.
 
The safety margin only applies if you plan for 100% load, i.e., the video cards would have to be pushing their full 300 W 24/7, 365. Otherwise it is fine to go up to max load as long as you don't run it more than 80% of the time.
I don't think the time window needs to be that long.
It doesn't seem realistic to permit transient peaks past 80% of max rating, and then define transient as over 60,000 continuous hours. That seems pretty constant.

The SIG doesn't care how many cards the computer has. It currently allows 75W from the slot, one 8-pin connector (150W), and one 6-pin connector (75W), for a total of 300W. That is independent of whether you're running 1 card in a personal machine or 3,000 in a GPU farm.
The SIG cares enough to define a high-end specification that caters to GPU cards. I am not aware of other product lines that commonly approach the 300W limit, so they would be aware of the design bullet points for these top-end cards.

I really doubt they base the calculations for card wattage on a massively nonstandard setup (a machine with 4 PCIe slots instead of 2, running 4 dual-GPU cards). I think it borders on absurd to suggest they would.
There's probably an evaluation of many factors, but I would imagine that the expected deployments of a product on commercially available, in-spec platforms is a worthwhile thing to consider.

Put another way, take a look at a board like this one:
ASUSTeK. Now, combine that with a set of risers like this one:
riser.
That board seems to put the limit at 3-way graphics.
Pushing the card count higher would be taking the product beyond specification, so the PCIe spec would not be the first problem.
The reasons for the limit on the motherboard itself may be interesting.

Like I said - red herring. This card is about getting the fastest single card on the market. I doubt NVidia cares what the PCI SIG thinks about their card, and I doubt that the SIG will bother trying to rework a specification based on one niche-market card setup that might pull too much power if Joe Random sets one up and starts a whole suite of benchmarks running nonstop all day for a year.
Then the current 300W specification is not out of date and does not need increasing.
The typical way these cards are sold is to ship them clocked down to meet the PCIe spec, so that the product can be plugged into a PCIe slot without immediately breaking warranties. The user can then decide to flip the switch, which absolves the manufacturer.
 
I'm waiting for the version with the external power cord.

Voodoo volts? :)

http://en.wikipedia.org/wiki/File:Voodoo_5_6000.jpg

http://en.wikipedia.org/wiki/Voodoo_5

Because the card used more power than the AGP specification allowed for, a special power supply called Voodoo Volts was to have been included with it. This would have been an external device that would connect to an AC outlet. Most of the prototype cards utilized a standard internal power supply drive power connector.
 