NVIDIA GF100 & Friends speculation

That would mean there are five SMs in the GPC (in contrast to every other GF1xx GPU). Seems unlikely to me.
Well the GPC can obviously handle uneven numbers (see: redundancy) so the question is simply whether the GPC can handle that many inputs. Consider also that no matter whether GF106 has 4 or 5 SMs, it'd still have an even more extreme SM-ROP ratio than any other GF10x. I think it's quite likely they beefed up the SM output to 4 pixels/clock, which means the GPC would need revision to be able to handle that many pixels anyway.
 
I think that's an oversimplification. Nvidia's architecture and transistor spend is also targeting a broader set of workloads. Any "perf/mm²" metric should be qualified. Also, AMD seems to be doing better on the density front and it would be interesting to know whether that's due to specific architectural traits or just them being better at overall semiconductor design.
AMD's RV770 was about 26% denser than G92b. However, if you think the 331mm² number is correct for GF104, that advantage has shrunk to only about 10% for Cypress vs. GF104 - OTOH if the 367mm² figure is correct, the ratio has stayed almost the same.
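For reference, the density gap can be sanity-checked with the commonly cited transistor counts (a quick sketch; the ~2.15B-transistor/334mm² Cypress and ~1.95B-transistor GF104 figures are the widely reported numbers, and both GF104 die-size estimates from this thread are included):

```python
# Transistor-density comparison for the die sizes discussed above.
# Transistor counts are the commonly cited figures; the two GF104
# die-size estimates (331 vs 367 mm^2) are the ones from this thread.

def density(transistors_m, area_mm2):
    """Millions of transistors per mm^2."""
    return transistors_m / area_mm2

cypress = density(2154, 334)       # HD5870: ~2.15B transistors, ~334 mm^2
gf104_small = density(1950, 331)   # GF104, smaller die estimate
gf104_large = density(1950, 367)   # GF104, larger die estimate

print(f"Cypress density:     {cypress:.2f} Mtrans/mm^2")
print(f"GF104 @ 331 mm^2:    {gf104_small:.2f} Mtrans/mm^2")
print(f"GF104 @ 367 mm^2:    {gf104_large:.2f} Mtrans/mm^2")
print(f"AMD advantage @ 331: {(cypress / gf104_small - 1) * 100:.0f}%")
print(f"AMD advantage @ 367: {(cypress / gf104_large - 1) * 100:.0f}%")
```

With the 331mm² estimate the Cypress density advantage comes out to roughly 10%, matching the post; with 367mm² it is around 21%, close to the old 26% gap.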
 
The 367mm² figure is pretty much certain at this point, but it doesn't take into account the fact that a large portion of GF104 runs at 1.35GHz, and actually doesn't seem to mind clocks up to 1.6GHz, or even more.
 
The 367mm² figure is pretty much certain at this point, but it doesn't take into account the fact that a large portion of GF104 runs at 1.35GHz, and actually doesn't seem to mind clocks up to 1.6GHz, or even more.
That's relevant for unit count and transistor count, but not overall performance which already takes frequency into account. There's certainly a point to be made that a full GF104 at good clocks wouldn't have a massive performance gap with HD5870, so the performance efficiency gap for gaming compared to the 55nm generation actually narrowed a bit I suppose.
 
That's quite an unfair comparison. An HD5870 at higher clocks would perform better, too. The same applies to the GTX285. I'd compare only actual products, not hypothetical models... It's quite unlikely that a full-fledged GF104 will stand against Cypress. I think it will compete with Bart...
 
That's quite an unfair comparison. An HD5870 at higher clocks would perform better, too. The same applies to the GTX285. I'd compare only actual products, not hypothetical models... It's quite unlikely that a full-fledged GF104 will stand against Cypress. I think it will compete with Bart...

GF104's clock headroom is much higher than Cypress' when expressed as a percent.
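Expressed numerically (a quick sketch: the stock clocks are the launch values, while the 900MHz GTX 460 ceiling is one mentioned in this thread and the 950MHz HD5870 figure is just an illustrative assumption):

```python
# Percent clock-headroom sketch. Stock clocks are the launch values;
# the "max" clocks are illustrative: 900 MHz for GTX 460 is mentioned
# in this thread, while 950 MHz for HD5870 is merely an assumption.

def headroom_pct(stock_mhz, max_mhz):
    """Headroom above stock clock, expressed as a percentage."""
    return (max_mhz / stock_mhz - 1) * 100

print(f"GTX 460: {headroom_pct(675, 900):.0f}%")  # 675 MHz stock core clock
print(f"HD5870:  {headroom_pct(850, 950):.0f}%")  # 850 MHz stock core clock
```

Under those assumptions the GTX 460 shows roughly 33% headroom against roughly 12% for the HD5870, which is the shape of the claim above.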
 
Well the GPC can obviously handle uneven numbers (see: redundancy)
There aren't any additional SMs for redundancy in GF100, so I doubt there are on GF104 and especially GF106.

Also redundancy wouldn't affect logic, because they always need to drive only four SMs.
 
That's relevant for unit count and transistor count, but not overall performance which already takes frequency into account. There's certainly a point to be made that a full GF104 at good clocks wouldn't have a massive performance gap with HD5870, so the performance efficiency gap for gaming compared to the 55nm generation actually narrowed a bit I suppose.

Yeah, my point was merely about transistors/mm².

And yes, you could argue that the performance/mm² gap narrowed a little, but at 55nm, I believe performance/watt was actually on NVIDIA's side.

Even with GF104, that's far from being the case for this generation, let alone when the HD 6000s are released, which you could argue is really what GF104 will spend most of its life competing against.
 
GF104's clock headroom is much higher than Cypress' when expressed as a percent.
But you shouldn't ignore power consumption and the related headroom for possible voltage adjustment. A GTX460 at 700MHz consumes as much power as an HD5870 at 850MHz. I think GF104 would hit the 225W limit before reaching the performance level of a mildly overclocked HD5870.

Will nVidia go beyond 225W with a single-chip GF104? They'd have to be quite desperate to do that...
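The 225W concern can be illustrated with a first-order dynamic-power estimate (P ∝ f·V²). The 150W baseline and the voltages below are hypothetical round numbers for the sake of the sketch, not measured GTX 460 values:

```python
# First-order CMOS dynamic-power scaling, P ~ f * V^2, to sketch why
# an overvolted overclock can blow past 225 W. The 150 W baseline and
# both voltages are hypothetical round numbers, not measured values.

def scaled_power(p0_w, f0_mhz, f1_mhz, v0, v1):
    """Estimate board power after a frequency/voltage change."""
    return p0_w * (f1_mhz / f0_mhz) * (v1 / v0) ** 2

# e.g. a ~150 W board pushed from 700 MHz @ 1.00 V to 900 MHz @ 1.10 V
print(f"{scaled_power(150, 700, 900, 1.00, 1.10):.0f} W")  # -> 233 W
```

Even with these charitable round numbers, a ~29% overclock plus a modest voltage bump lands past the 225W mark.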
 
But you shouldn't ignore power consumption and the related headroom for possible voltage adjustment. A GTX460 at 700MHz consumes as much power as an HD5870 at 850MHz. I think GF104 would hit the 225W limit before reaching the performance level of a mildly overclocked HD5870.

Will nVidia go beyond 225W with a single-chip GF104? They'd have to be quite desperate to do that...

What 225W limit? Many single GPU cards already go beyond 225W, including both GTX 480 and even 470. PCI-e graphics cards are limited to 300W, should the manufacturer wish to obtain PCI-e certification.
 
225W limit of two 6-pin PCIe power connectors.
ShaidarHaran said:
Many single GPU cards already go beyond 225W, including both GTX 480 and even 470.
That's exactly why many users consider GF100's power consumption unacceptable. Neither retail customers nor OEMs would be happy to see a mainstream product consuming over 225W...
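For reference, the budgets being argued about follow from the PCIe spec's per-source limits, which can be tallied up directly:

```python
# Board-power budget implied by the PCIe spec's per-source limits:
# 75 W from the slot, 75 W per 6-pin connector, 150 W per 8-pin.

def board_power_limit(six_pin=0, eight_pin=0, slot_w=75):
    """Maximum in-spec board power for a given connector configuration."""
    return slot_w + 75 * six_pin + 150 * eight_pin

print(board_power_limit(six_pin=2))               # 225 W (GTX 460/470 layout)
print(board_power_limit(six_pin=1, eight_pin=1))  # 300 W (GTX 480 layout)
```

Two 6-pins cap an in-spec board at 225W; swapping one for an 8-pin raises the ceiling to the 300W certification limit mentioned above.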
 
225W limit of two 6-pin PCIe power connectors.

8-pin connectors exist for a reason.

That's exactly why many users consider GF100's power consumption unacceptable. Neither retail customers nor OEMs would be happy to see a mainstream product consuming over 225W...

Do you have market research that demonstrates "many users consider GF100's power consumption unacceptable"? And since when are $500 graphics cards "mainstream products"?
 
And since when are $500 graphics cards "mainstream products"?
Do you really expect that a full-speed GF104 board will cost $500? Who would buy it, given that...

Do you have market research that demonstrates "many users consider GF100's power consumption unacceptable"?
...sales of the GTX470 are still poor here even though its price is significantly below the HD5870's ASP.

GF104 will become a mainstream product pretty soon. Even if the Bart GPU is slower, its power characteristics and price/performance will likely be very attractive to customers, hardly allowing GF104-based products to sell for $500 :)
 
Do you have market research that demonstrates "many users consider GF100's power consumption unacceptable"? And since when are $500 graphics cards "mainstream products"?
no-X is obviously thinking of a potential GTX 475, which would retail nowhere near $500. But his point is irrelevant; many cards from both vendors with TDPs well below 225W have two power connectors. Case in point: the HD5850.

Personally I'd expect 384SP/825MHz/1GB 4.5GHz GDDR5 with the same 215W TDP as the GTX 470, although maybe that's too optimistic overall.
 
GF104's clock headroom is much higher than Cypress' when expressed as a percent.

Which is entirely immaterial if other considerations/issues prevent them from releasing parts at those frequencies. It's similar to the argument that, under liquid helium, AMD chips are faster... Sure, fine; they are still slower in any situation that really matters in the market. Same thing here: theoretical frequency headroom matters very little when you are power-constrained. And no, just throwing more power at the problem doesn't work either. And yes, users care about power.
 
If GF104 were perfectly capable of running at 900MHz 24/7 with its normal fan and without any fear of an increased return/RMA rate, we would see 900MHz cards everywhere.
AIB partners don't like high return rates, so not everyone is shipping their cards at those clocks.
 
If GF104 were perfectly capable of running at 900MHz 24/7 with its normal fan and without any fear of an increased return/RMA rate, we would see 900MHz cards everywhere.
AIB partners don't like high return rates, so not everyone is shipping their cards at those clocks.
I assume you mean 800MHz? I'm not aware of anyone shipping it beyond that, and overclocking GTX 460 to 900MHz is only feasible with a voltage increase. And obviously I do expect a slightly higher voltage to run at 825MHz, and I do expect a beefier fan. I might still be too optimistic though, but the problem is if it's not fast enough then it makes no sense to name it GTX 475 as it'd often be slower than a 470. Then again that kind of thing has never stopped NV before...
 
I assume you mean 800MHz? I'm not aware of anyone shipping it beyond that, and overclocking GTX 460 to 900MHz is only feasible with a voltage increase. And obviously I do expect a slightly higher voltage to run at 825MHz, and I do expect a beefier fan. I might still be too optimistic though, but the problem is if it's not fast enough then it makes no sense to name it GTX 475 as it'd often be slower than a 470. Then again that kind of thing has never stopped NV before...

Well, Colorful has already released a GTX 460 overclocked to 900MHz
 
I assume you mean 800MHz? I'm not aware of anyone shipping it beyond that, and overclocking GTX 460 to 900MHz is only feasible with a voltage increase. And obviously I do expect a slightly higher voltage to run at 825MHz, and I do expect a beefier fan. I might still be too optimistic though, but the problem is if it's not fast enough then it makes no sense to name it GTX 475 as it'd often be slower than a 470. Then again that kind of thing has never stopped NV before...

Basically any clock chosen here to "beat opponent X" will increase the RMA rate. So besides component costs you're also looking at your RMA costs, and you might end up with a proposition that's not altogether that interesting, because the cost of everything just increased.

I'm looking at some N460GTX Hawk reviews right now and it needs to go up to 900MHz to get close to the GTX470.
But once you start checking the higher resolutions, an OC'd 460 doesn't have better perf/Watt than a 470/480, although it blows them away in the perf/$ department. I'm just afraid that the latter part would be nullified once they call it a GTX475.
http://www.techpowerup.com/reviews/MSI/GTX_460_HAWK/29.html

For reference: the 460 Hawk sells for €256, the cheapest 470 for €242!
HD5870s start at €340 here in Holland; guess who's trying to blow up the market this time.


Well, Colorful has already released a GTX 460 overclocked to 900MHz
On the Asian market, where the warranty is worth how much exactly?
 
Basically any clock chosen here to "beat opponent X" will increase the RMA rate. So besides component costs you're also looking at your RMA costs, and you might end up with a proposition that's not altogether that interesting, because the cost of everything just increased.
Oh I obviously agree completely, I was responding mostly because: 1) I didn't know about that 900MHz GPU and I never said 900MHz myself, 2) I wanted to make it clear that GTX 475 would obviously have a beefier fan, so your point isn't as strong as it seems.

I'm looking at some N460GTX Hawk reviews right now and it needs to go up to 900Mhz to get close to the GTX470.
Well, it's 8 vs 7 TPCs. And do keep in mind it's bandwidth-constrained even below 900MHz... I did say I was expecting/hoping for 4.5GHz GDDR5, not the same 3.6GHz.

I'm just afraid that the latter part would be nullified once they call it a GTX475.
Probably, since they can't kill GTX 470 sales completely. Presumably their GF100 production is down quite a lot compared to a few months ago though... and with the more mature 40nm process and the fairly weak Tesla/Quadro SKUs, they don't really need GeForce volume to have enough professional chips. So it's probably not as bad as it looks.
 