NVIDIA GF100 & Friends speculation

Looks similar to enet dual-GF110 PCB: http://www.google.de/images?q=enet dual gf110

But on the EVGA card they used GF114/GF104: the rectangular arrangement of components on the back side of the GPU, and only 4 memory chips visible.

Perhaps, but why dual 8-pin PCI-e connectors? 375W for dual GF114 seems rather overzealous.

According to TechReport they are 1Gbit memory chips, so only 1GiB per GPU.

Correct me if I'm wrong, but isn't the total amount of VRAM also determined by the memory controller interface width? I.e., with a memory interface wider than 4×64-bit (whether 5×64-bit or 6×64-bit) you could have 1280MB or 1536MB of VRAM. I mean, it's not as though there are "1.5Gbit" GDDR5 chips floating around out there...
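For the sake of argument, here's the quick math, assuming standard 32-bit-wide GDDR5 chips and the common 1Gbit/2Gbit densities (a rough sketch, nothing confirmed about this particular card):

```python
# Back-of-the-envelope VRAM math, assuming standard 32-bit-wide GDDR5 chips.
# Bus widths and chip densities are the commonly used values, not anything
# confirmed for this card.

def vram_per_gpu(bus_width_bits, chip_density_gbit):
    chips = bus_width_bits // 32           # one 32-bit GDDR5 chip per 32 bits of bus
    total_gbit = chips * chip_density_gbit
    return chips, total_gbit / 8           # GiB = Gbit / 8

for bus in (256, 320, 384):
    for density in (1, 2):
        chips, gib = vram_per_gpu(bus, density)
        print(f"{bus}-bit bus, {density}Gbit chips: {chips} chips, {gib:.2f} GiB per GPU")

# 256-bit + 1Gbit chips -> 8 chips, 1.00 GiB per GPU;
# the 1280MB / 1536MB options indeed require a 320-bit or 384-bit bus.
```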
 
Perhaps, but why dual 8-pin PCI-e connectors? 375W for dual GF114 seems rather overzealous.
2× GTX 560 ≈ 360W. They'll probably ship it with a 300W power limit but give the user the ability to unlock its full potential.

Correct me if I'm wrong, but isn't the total amount of VRAM also determined by the memory controller interface width? I.e., with a memory interface wider than 4×64-bit (whether 5×64-bit or 6×64-bit) you could have 1280MB or 1536MB of VRAM. I mean, it's not as though there are "1.5Gbit" GDDR5 chips floating around out there...
But on the other side of the PCB there are only another 4 chips, 8 in total = 1GiB with 1Gbit chips.
The dual-GF110 in my Google link had 6 memory chips on each side, i.e. 384-bit.

Hopefully the final version will go with 2Gbit chips, offering 2GiB per GPU and 2GiB of effective VRAM for quad-SLI.
 
2× GTX 560 ≈ 360W. They'll probably ship it with a 300W power limit but give the user the ability to unlock its full potential.

It doesn't work that way, as has been explained numerous times in this very forum. When you design a dual GPU single card or multi-PCB array that appears as one physical device to the system, you use less power-hungry chips and drive them at lower clocks and volts so the end result is significantly reduced power consumption. Also, when using a single PCB there are many shared components which further reduces power consumption.

But on the other side of the PCB there are only another 4 chips, 8 in total = 1GiB with 1Gbit chips.
The dual-GF110 in my Google link had 6 memory chips on each side, i.e. 384-bit.

Hopefully the final version will go with 2Gbit chips, offering 2GiB per GPU and 2GiB of effective VRAM for quad-SLI.

Since we don't know what's on the other side of the PCB of this card though, we can't determine what memory configuration it uses, other than the obvious choices of 256-bit, 320-bit, and 384-bit (per GPU).
 
It doesn't work that way, as has been explained numerous times in this very forum.
But even with those savings, a dual GTX 560 should still be a step over 300W.
Nvidia's latest TDPs have been more average consumption than worst case.

The 375W option should be a nice enthusiast feature, like AMD's PowerTune or dual BIOS.


Since we don't know what's on the other side of the PCB of this card though, we can't determine what memory configuration it uses, other than the obvious choices of 256-bit, 320-bit, and 384-bit (per GPU).

An unbalanced number of memory chips on each side is very unlikely. And the rectangular arrangement of components on the GPU back sides looks like GF104/GF114.

GF110 dual looked like this: http://www.guruht.com/2010/11/gtx-590-dual-gf110-soon-specifications.html
 
But even with those savings, a dual GTX 560 should still be a step over 300W.

Unless you have the precise specifications of such a card, you simply can't know this value. Even then there's always a few watts worth of fudging room.

Nvidia's latest TDPs have been more average consumption than worst case.

The 375W option should be a nice enthusiast feature, like AMD's PowerTune or dual BIOS.

I'm all for it.

An unbalanced number of memory chips on each side is very unlikely. And the rectangular arrangement of components on the GPU back sides looks like GF104/GF114.

GF110 dual looked like this: http://www.guruht.com/2010/11/gtx-590-dual-gf110-soon-specifications.html

I'm not saying this can't be a dual-GF114 SKU, just that 375W is way more than such a card can need unless it is heavily overclocked. Perhaps that is what they're going for, though.
 
It doesn't work that way, as has been explained numerous times in this very forum. When you design a dual GPU single card or multi-PCB array that appears as one physical device to the system, you use less power-hungry chips and drive them at lower clocks and volts so the end result is significantly reduced power consumption.
Well, the lower clocks and volts are strictly there to meet a certain TDP target. So if your target is 375W anyway, there's no need to do it.
Also, when using a single PCB there are many shared components which further reduces power consumption.
I don't think there are any significant savings there. It might even use a bit more power, due to the PCIe bridge chip.

I am not saying it can't be dual GF110, but another question is what would be faster for the same TDP (aside from the fact that dual GF104 should be way easier to design and much cheaper to manufacture). Dual GF110 would need serious cuts in units and/or clocks; dual GF104 would not.
Just for comparison, mobile GF104 tramples all over mobile GF100 (granted, that's not a good chip) - GTX 485M vs. GTX 480M at the same 100W TDP.
It might make sense to go for 375W (even though, looking at the mobile part, you'd think a 200W TDP dual GF104, full chip even, is doable): since Nvidia can't enforce the power limit as dynamically as AMD, there's basically no way a dual Nvidia card can compete with Antilles at the same 300W limit. A dual GF104 card could use the full (8 SM) GF104, with even slightly higher clocks than the reference GTX 460, and stay within the 375W limit.
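To put rough numbers on that reasoning (the 50W of board overhead for VRMs, bridge chip, fan and memory is purely my own guess, nothing official):

```python
# Rough per-GPU budget estimate for a hypothetical dual card.
# The 50W board overhead is an assumption for illustration, not a measured figure.

def per_gpu_budget(card_limit_w, board_overhead_w=50):
    return (card_limit_w - board_overhead_w) / 2

for limit in (300, 375):
    print(f"{limit}W card limit -> ~{per_gpu_budget(limit):.0f}W per GPU")

# 300W -> ~125W per GPU: tight for a full GF104/GF114 at GTX 460 clocks.
# 375W -> ~162W per GPU: room for a full 8-SM chip at or above reference clocks.
```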
 
Unless you have the precise specifications of such a card, you simply can't know this value. Even then there's always a few watts worth of fudging room.

I'm not saying this can't be a dual-GF114 SKU, just that 375W is way more than such a card can need unless it is heavily overclocked. Perhaps that is what they're going for, though.

I don't think you can really claim that either without knowing the precise specifications of the card, as you point out yourself.

If the GTX 560 is a 200W card, then 375W doesn't sound unrealistic for a dual-GPU model at the same clocks, especially if it features 2×2GB, as it should, even though there's no indication that it does at this point.
 
Well, the lower clocks and volts are strictly there to meet a certain TDP target. So if your target is 375W anyway, there's no need to do it.

You can't just increase clocks forever though simply because the power budget is there. If GF114 clocks @ 800MHz+ with a < 225W power budget, throwing 375W @ two of them doesn't necessarily yield 1GHz clocks.

I don't think there are any significant savings there. It might even use a bit more power, due to the PCIe bridge chip.

Sure there is, and it's been discussed on these forums. Many of the power components are shared, display controllers are not (usually) duplicated, and I'm sure there are other things as well.

I am not saying it can't be dual GF110, but another question is what would be faster for the same TDP (aside from the fact that dual GF104 should be way easier to design and much cheaper to manufacture). Dual GF110 would need serious cuts in units and/or clocks; dual GF104 would not.
Just for comparison, mobile GF104 tramples all over mobile GF100 (granted, that's not a good chip) - GTX 485M vs. GTX 480M at the same 100W TDP.
It might make sense to go for 375W (even though, looking at the mobile part, you'd think a 200W TDP dual GF104, full chip even, is doable): since Nvidia can't enforce the power limit as dynamically as AMD, there's basically no way a dual Nvidia card can compete with Antilles at the same 300W limit. A dual GF104 card could use the full (8 SM) GF104, with even slightly higher clocks than the reference GTX 460, and stay within the 375W limit.

GF110 seems to be a good deal more power efficient than GF100, or at least it is more well-behaved at the limits. I certainly wouldn't expect to see a dual-GF100 SKU, but GF110 isn't out of the question. Also, dual GF114 may not be enough to beat Antilles, and I don't think that's a scenario Nvidia is comfortable with.

I don't think you can really claim that either without knowing the precise specifications of the card, as you point out yourself.

I never claimed any specific figures for the dual card though. All I said was that 375W seems like a lot for a mere dual GF114 SKU unless it's heavily overclocked.

If the GTX 560 is a 200W card, then 375W doesn't sound unrealistic for a dual-GPU model at the same clocks, especially if it features 2×2GB, as it should, even though there's no indication that it does at this point.

Look at the HD 5870 vs. the HD 5970. By this logic the 5970 should be close to 2× the 5870's TDP, yet it is not: the 5870 is 225W and the 5970 is 300W.
 
You can't just increase clocks forever though simply because the power budget is there. If GF114 clocks @ 800MHz+ with a < 225W power budget, throwing 375W @ two of them doesn't necessarily yield 1GHz clocks.
No, certainly not. I would only expect something like 750MHz.
But the point is that the opposite is true too at a certain point: you might not be able to lower the voltage any further (or only by a tiny amount) if you reduce clocks more, which means your perf/power might not improve at all if you lower clocks.
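Roughly, with the usual first-order dynamic-power model (P_dyn proportional to f·V², baseline numbers purely illustrative, not measured GF114 values):

```python
# First-order CMOS power model: P ~ P_dyn * f_ratio * v_ratio^2 + P_static.
# The 130W dynamic / 30W static split at stock clock and voltage is hypothetical.

def scaled_power(p_dyn_base, p_static, f_ratio, v_ratio):
    return p_dyn_base * f_ratio * v_ratio**2 + p_static

base_dyn, base_static = 130.0, 30.0

# Lower clock by 20% with a matching 10% voltage drop: big savings.
print(scaled_power(base_dyn, base_static, 0.8, 0.9))   # ~114W

# Lower clock by 20% but voltage already at its floor: savings shrink a lot.
print(scaled_power(base_dyn, base_static, 0.8, 1.0))   # ~134W
```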

Sure there is, and it's been discussed on these forums. Many of the power components are shared, display controllers are not (usually) duplicated, and I'm sure there are other things as well.
Yes, but without any estimate of how much power these shared components might save. My assumption: very little.

GF110 seems to be a good deal more power efficient than GF100, or at least it is more well-behaved at the limits.
Not really a whole lot: less leakage, yes, but most of the rest seems to come from the better cooling.

Also, dual GF114 may not be enough to beat Antilles, and I don't think that's a scenario Nvidia is comfortable with.
I'm just not sure GF110 is really more power efficient than GF114 (if their clocks/voltages are set so they both draw about 150W). GF104 isn't quite as good on the desktop nowadays, but that seems to be mostly because its voltages are set a bit too high (higher than the GF110 on the GTX 570, despite the lower clock). Of course that allows stellar overclocks, but for power consumption it's a disaster.
Of course, if you go for a 250W TDP target per chip, then it's a no-brainer that GF110 is better: you could increase voltages on GF104, but clock (and hence performance) wouldn't really go up much more, and perf/power would sink way below GF110's level.

I never claimed any specific figures for the dual card though. All I said was that 375W seems like a lot for a mere dual GF114 SKU unless it's heavily overclocked.
I guess for a 375W card two GF110 GPUs probably really would be more power efficient and hence faster. But it is possible the target TDP is 300W and the connectors are just there for overclocking headroom. And at 300W there might be very little performance difference between these two hypothetical solutions.

Look at the HD 5870 vs. the HD 5970. By this logic the 5970 should be close to 2× the 5870's TDP, yet it is not: the 5870 is 225W and the 5970 is 300W.
The HD 5970 is more of a 2× HD 5850 and it acts accordingly: twice the power draw, and very similar performance to HD 5850 CF. Not overvolting the memory (as pretty much all other AMD desktop GPUs do) also helps a bit, as certainly does using HD 5850 chip voltages.
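Quick sanity check using the commonly quoted board-power figures (approximate, from memory):

```python
# Commonly quoted board-power figures; treat them as approximate.
hd5850_w = 151   # HD 5850
hd5870_w = 188   # HD 5870
hd5970_w = 294   # HD 5970

print(2 * hd5850_w, "vs", hd5970_w)   # 302 vs 294: basically a doubled 5850
print(2 * hd5870_w, "vs", hd5970_w)   # 376 vs 294: well beyond a doubled 5870
```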
 
That's because the assumption is the same clock speeds. Voltage isn't going to drop significantly.

Why wouldn't it? The chips will be cherry-picked, so even running at the same speeds as their single-GPU brethren (which is quite the assumption on your part, given that no such reference-design dual-GPU SKU has ever existed) they will have a lower voltage.

And the 5870 is a 188W card, not 225W.

188×2 (376) is still > 300.
 
Why wouldn't it? The chips will be cherry-picked, so even running at the same speeds as their single-GPU brethren (which is quite the assumption on your part, given that no such reference-design dual-GPU SKU has ever existed) they will have a lower voltage.



188×2 (376) is still > 300.

Again, clocks. The HD 5970 is really more of a dual HD 5850 with a few more SPs enabled, which don't draw much power. And it has nearly exactly double the power draw of the 5850.

If you want to argue that NVIDIA can make a dual-GF114 within 300W while significantly reducing clocks, then I fully agree. Otherwise, unless the GTX 560's power is < 180W, that seems unlikely.
 
Why wouldn't it? The chips will be cherry-picked, so even running at the same speeds as their single-GPU brethren (which is quite the assumption on your part, given that no such reference-design dual-GPU SKU has ever existed) they will have a lower voltage.

I don't expect a dual 560 part at all; that's not my assumption.

You can get to a lower target wattage if you're willing to sacrifice performance; I don't think anyone is arguing that. The problem becomes: if you sacrifice that much performance, where does this dual-GPU card land? Competing against their current high-end part? That doesn't make much sense.
 
I don't expect a dual 560 part at all; that's not my assumption.

You can get to a lower target wattage if you're willing to sacrifice performance; I don't think anyone is arguing that. The problem becomes: if you sacrifice that much performance, where does this dual-GPU card land? Competing against their current high-end part? That doesn't make much sense.

Me neither.

More like a dual 570 at 480 clocks (or lower) is what I'd expect.

In any case, my question is this: according to hardware.fr's power consumption measurements in their 6970/6950 review, the 6970 consumes as much as a GTX 570 in 3DMark06. Actually, the 570 was a tad lower.

[hardware.fr power consumption chart]


It seems very natural for everyone to expect a 6990 with two Cayman XTs, but somehow impossible for a GTX 595 to feature two GF110s. Why?
 