What's the benefit of 112 scalar processors? Does it make the GTS faster?
According to FUDzilla, the G92 launch has been brought forward to October 29th:
http://www.fudzilla.com/index.php?option=com_content&task=view&id=3403&Itemid=1
Also (supposedly) the version of G92 launching this year will be called 8800GT, with all that this implies in terms of performance:
http://www.fudzilla.com/index.php?option=com_content&task=view&id=3404&Itemid=1
fellix said: Maybe they should just slap more heat-pipes in there.
G92-high-end has heat problems?
fellix said: Maybe they should just slap more heat-pipes in there.
Or make the fan rotate a bit faster.
Why will the G92 have heat problems when VR-Zone told us that GeForce 8800 GT overclocking is good?
Because they are not talking about the G92 on the 8800 GT?
The Inq is saying that was 'a few weeks ago' though. At the same time, several websites are now claiming the launch will be earlier than expected, in late October. So wouldn't that imply the problem has been fixed already? Or am I missing something?
It could be launched earlier because they wanna sell as many GF8800GT cards as they can before RV670 launches... And ever wondered why the GF8800GT is clocked at only a measly 600MHz (according to rumors)? This could also imply that they were having some problems with heat/power and the single-slot form factor they were aiming for.
That's an interesting point of view, although if it's bandwidth-limited anyway, why bother? Reducing the core clock, and thus power and heat, can also result in a lower BOM because the PCB and cooling can be less expensive. Unlike in the enthusiast segment, increasing performance by X% isn't worth it if it also increases costs by 2X%!
I said it and I'll say it again: I think the 8800 GT has 6 clusters activated and runs at 600/1800, while the top G92 SKU will have 8 clusters and run at 800/2800 or even higher. It wouldn't exactly be very hard to hide a SKU like that if they wanted to. And unlike for the 8800 GT, there is most likely no good reason for AIBs to ever make their own PCBs or cooling solutions here... At least IMO.
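To put rough numbers on that speculation, here's a quick Python sketch of the raw shader rates it implies. It assumes 16 SPs per cluster as on G80; the cluster counts and shader clocks are just the rumored figures above, nothing confirmed.

```python
# Back-of-the-envelope shader throughput for the two speculated G92 SKUs.
# Assumption: 16 scalar processors per cluster (G80-style layout).
# The cluster counts and shader clocks below are rumored figures only.

SPS_PER_CLUSTER = 16

def sp_throughput(clusters, shader_mhz):
    """Relative ALU rate in SP-GHz: SP count times shader clock."""
    return clusters * SPS_PER_CLUSTER * shader_mhz / 1000.0

gt_8800 = sp_throughput(6, 1800)   # rumored 8800 GT: 96 SPs @ 1.8 GHz
top_g92 = sp_throughput(8, 2800)   # rumored top SKU: 128 SPs @ 2.8 GHz

print(f"8800 GT : {gt_8800:.1f} SP-GHz")
print(f"Top G92 : {top_g92:.1f} SP-GHz")
print(f"Ratio   : {top_g92 / gt_8800:.2f}x")  # roughly 2.07x the raw shader rate
```

If those numbers are anywhere near right, the top SKU would have roughly twice the raw shader rate of the 8800 GT.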
This was a typo... right?
So, it seems we're accepting these rumours as implying that NVidia will be releasing a 289mm2 chip to compete against AMD's 194mm2 chip?
Not so different when you remember that Nvidia's is 65nm and AMD's is 55.
So, it seems we're accepting these rumours as implying that NVidia will be releasing a 289mm2 chip to compete against AMD's 194mm2 chip?
Well, what I'm saying is that they're pretty much using 3/4ths of a G92 to compete against a full RV670. If I'm right, and this is a big if, then clearly the fastest single-chip desktop G92 will be faster than the fastest single-chip desktop RV670 by a fair bit.
Not so different when you remember that Nvidia's is 65nm and AMD's is 55.
I thought it might give some people pause for thought on the recurring "yields" question, since people love to fawn over tiny dies.
289 * (55 * 55) / (65 * 65) = 207.
207 vs 194 - not so very different. Maybe the half-node gamble will pay off this time. (Surely it has to some day?)
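For reference, here is the same shrink arithmetic as a small Python sketch. It just applies the ideal linear-shrink assumption used above (area scales with the square of the process ratio); real shrinks rarely scale this cleanly, and the die sizes themselves are rumored.

```python
# Normalise the rumored 65nm G92 die area to 55nm and compare with RV670.
# Ideal scaling assumption: area shrinks with the square of the linear ratio.

g92_area_65nm = 289.0   # mm^2, rumored G92 die size at 65nm
rv670_area    = 194.0   # mm^2, rumored RV670 die size at 55nm

shrink = 55.0 / 65.0                      # linear scaling factor
g92_scaled = g92_area_65nm * shrink ** 2  # ideal area if G92 were built at 55nm

print(f"G92 scaled to 55nm: {g92_scaled:.0f} mm^2")  # ~207 mm^2
print(f"RV670             : {rv670_area:.0f} mm^2")
```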