The G92 Architecture Rumours & Speculation Thread

According to FUDzilla, the G92 launch has been brought forward to October 29th:

http://www.fudzilla.com/index.php?option=com_content&task=view&id=3403&Itemid=1


Also (supposedly) the version of G92 launching this year will be called 8800GT, with all that this implies in terms of performance:

http://www.fudzilla.com/index.php?option=com_content&task=view&id=3404&Itemid=1

Any news on Nvidia lowering the price of the 8800 GTX?

Oh, and is the new 8800GT going to be better than the GTX?

THX ! :D
 
The Inq is saying that was 'a few weeks ago' though. At the same time, several websites are now claiming the launch will be earlier than expected, in late October. So wouldn't that imply the problem has been fixed already? Or am I missing something?
 
It could be launched earlier because they wanna sell as many GF8800GT cards as they can before RV670 launches... ;) And ever wondered why the GF8800GT is clocked at only a measly 600MHz (according to rumors)? This could also imply that they were having some problems with heat/power and the single-slot form factor they were aiming for.
 
The Inq is saying that was 'a few weeks ago' though. At the same time, several websites are now claiming the launch will be earlier than expected, in late October. So wouldn't that imply the problem has been fixed already? Or am I missing something?

I don't see how both can be true. I suspect FUD at work here. The question is: which is FUD and which is reality?
 
It could be launched earlier because they wanna sell as many GF8800GT cards as they can before RV670 launches... ;) And ever wondered why the GF8800GT is clocked at only a measly 600MHz (according to rumors)? This could also imply that they were having some problems with heat/power and the single-slot form factor they were aiming for.
That's an interesting point of view, although if it's bandwidth-limited anyway, why bother with a higher core clock? Reducing the core clock, and thus power and heat, can also result in a lower BOM because the PCB and cooling can be less expensive. Unlike in the enthusiast segment, increasing performance by X% isn't worth it if it also increases costs by 2X%!

Also, may I point out that if we assume the 8800 GT to be the 'cut-down' SKU, the clock disparity would be similar to that between the 7900 GT/GS and the 7900 GTX if the high-end G92 SKU is clocked at 825MHz+? Indeed, it can easily be seen that 600*(650/450) = 866.67MHz, which is much nearer the previously rumoured G92 core clock... ;) And variation on 65nm is presumably worse than on 90nm, not better.
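
Quick sanity check on that ratio, using only the clocks quoted in this thread (450/650MHz for the 7900 GT/GTX, 600MHz for the rumoured 8800 GT):

Code:
# Scale the rumoured 8800 GT core clock by the same ratio as the
# 7900 GT -> 7900 GTX step. All figures are the rumours quoted above.
gt_7900, gtx_7900 = 450.0, 650.0   # MHz, last generation's cut-down vs full SKU
gt_8800 = 600.0                    # MHz, rumoured 8800 GT core clock
implied_top_g92 = gt_8800 * (gtx_7900 / gt_7900)
print(round(implied_top_g92, 2))   # ~866.67 MHz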

I've said it before and I'll say it again: I think the 8800 GT has 6 clusters activated and runs at 600/1800, while the top G92 SKU will have 8 clusters and run at 800/2800 or even higher. It wouldn't exactly be very hard to hide a SKU like that if they wanted to. And unlike for the 8800 GT, there is no good reason for AIBs to ever make their own PCBs or cooling solutions here, most likely... At least IMO.
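
Purely as a back-of-the-envelope illustration of what that gap would mean, assuming G80-style clusters of 16 SPs each and counting the MAD as 2 FLOPs per clock (my assumptions, not confirmed for G92):

Code:
# Speculative shader throughput for the two rumoured G92 SKUs.
def mad_gflops(clusters, shader_ghz, sps_per_cluster=16):
    return clusters * sps_per_cluster * shader_ghz * 2  # MAD = 2 FLOPs/clock per SP

gt  = mad_gflops(6, 1.8)   # speculated 8800 GT: 6 clusters, 1800MHz shader clock
top = mad_gflops(8, 2.8)   # speculated top SKU: 8 clusters, 2800MHz shader clock
print(gt, top, top / gt)   # ~345.6 vs ~716.8 GFLOPS, a bit over 2x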
 
I've said it before and I'll say it again: I think the 8800 GT has 6 clusters activated and runs at 600/1800, while the top G92 SKU will have 8 clusters and run at 800/2800 or even higher. It wouldn't exactly be very hard to hide a SKU like that if they wanted to. And unlike for the 8800 GT, there is no good reason for AIBs to ever make their own PCBs or cooling solutions here, most likely... At least IMO.

This was a typo... right? ;)
I know there's GDDR4 memory running at these speeds, but the cost must be stratospheric right now.
 
1.4GHz GDDR4 is nothing out of the ordinary. We're talking 1.2GHz for the fastest RV670 (aka Gladiator), for example. The cost of memory has always been very significant on high-end parts anyway, since it's one of the key value differentiators!

You could argue that it might have been a better idea to just go for a wider bus in that case, but that'd also have made the PCB more expensive and increased the number of chips required... So it's not so obvious to me it'd be much cheaper. This solution can also be more power-efficient (the voltage on 1.4GHz GDDR4 remains very reasonable).

Now, of course, if you were talking about 1.6GHz GDDR4 for example, the prices might become much more ridiculous and you might have a point... However, even that might make sense if it's for a very low-volume SKU.
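
For the sake of argument, here's roughly how the narrow-bus-plus-fast-GDDR4 option compares against a wider bus with slower memory; the 384-bit/900MHz combination is just the 8800 GTX configuration used as a reference point:

Code:
# Peak bandwidth: narrow bus + fast GDDR4 vs wide bus + slower GDDR3.
def bandwidth_gbs(bus_bits, mem_clock_ghz):
    return bus_bits / 8 * mem_clock_ghz * 2  # DDR: 2 bits per pin per clock

print(bandwidth_gbs(256, 1.4))   # ~89.6 GB/s: 256-bit with 1.4GHz GDDR4
print(bandwidth_gbs(384, 0.9))   # ~86.4 GB/s: 384-bit with 900MHz GDDR3 (8800 GTX)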
 
So, it seems we're accepting these rumours as implying that NVidia will be releasing a 289mm2 chip to compete against AMD's 194mm2 chip?

Jawed
 
So, it seems we're accepting these rumours as implying that NVidia will be releasing a 289mm2 chip to compete against AMD's 194mm2 chip?
Not so different when you remember that Nvidia's is 65nm and AMD's is 55.

289 * (55 * 55) / (65 * 65) = 207.

207 vs 194 - not so very different. Maybe the half-node gamble will pay off this time. (Surely it has to some day? :) )
 
So, it seems we're accepting these rumours as implying that NVidia will be releasing a 289mm2 chip to compete against AMD's 194mm2 chip?
Well, what I'm saying is that they're pretty much using 3/4 of a G92 to compete against a full RV670. If I'm right, and this is a big if, then clearly the fastest single-chip desktop G92 will be faster than the fastest single-chip desktop RV670 by a fair bit.

Also, checking Samsung and Hynix's sites, both have been in mass production of 1.4GHz GDDR4 basically forever, and have said so publicly for a while. Samsung has also demonstrated 2GHz GDDR4, while Hynix seems to be claiming they're mass producing 1.6GHz GDDR4. So really, 1.4GHz+ GDDR4 for the highest-end SKUs is hardly unimaginable if the memory controller supports it. And unlike 'slow' GDDR4, it might also be less of a problem latency-wise.

nicolasb: 55nm is a 90% linear shrink, so the die size should be 19% lower than on 65nm. That would be 234mm2, not 207mm2... :)
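
Both estimates side by side, for clarity; the 0.9 linear shrink factor is the usual figure quoted for a half-node, the exact scaling for this design isn't public:

Code:
# Two ways of estimating a 65nm -> 55nm shrink of a 289mm^2 die.
g92_65nm = 289.0
naive  = g92_65nm * (55**2) / (65**2)   # scaling by node name: ~207mm^2
shrink = g92_65nm * 0.9**2              # 90% linear shrink per side: ~234mm^2
print(round(naive), round(shrink))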
 
Not so different when you remember that Nvidia's is 65nm and AMD's is 55.

289 * (55 * 55) / (65 * 65) = 207.

207 vs 194 - not so very different. Maybe the half-node gamble will pay off this time. (Surely it has to some day? :) )
I thought it might give some people pause for thought on the recurring "yields" question, since people love to fawn over tiny dies.

It would appear NVidia's aiming to make two categories of SKUs from it, just like G80. Which begs the question, what size bus does the full chip have? Is it likely to stay at 256-bits, or will it have more ROPs/MCs and be 320/384-bits?
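
To spell out what those bus widths would imply, assuming the G80-style 64-bit ROP/MC partitions carry over (an assumption on my part):

Code:
# Bus width -> number of 64-bit ROP/MC partitions, if the G80 arrangement holds.
for bus_bits in (256, 320, 384):
    print(bus_bits, "bit ->", bus_bits // 64, "partitions")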

It seems strange to me that NVidia wouldn't release the fully-enabled chip before Christmas if the part-enabled chip is launching in about 1 month. If the heat rumours mean anything then perhaps they indicate that fully-enabled chips are scarce? Would that be that "fine-grained" redundancy in (in-)action :LOL:

Doesn't add up...

Jawed
 