The G92 Architecture Rumours & Speculation Thread

That's interesting... 5000-6000 wafers is about one million chips before you take yields into account. I presume this is not NVIDIA's entire production, though.

PortalPlayer made its chips (including the GoForce 6100) at UMC. This makes me wonder whether their next-gen handheld will be at TSMC or at UMC; I had assumed the former, but now it looks like things may not be so simple...

This almost makes me wonder whether NVIDIA is going straight from TSMC 65nm to UMC 55nm... Or at least whether the two chips are slightly different in terms of specs or focus (notebooks?).

If NVIDIA wants to maximize their production capabilities for G92 so much though, it does make me wonder whether it will span a much larger price range than previously thought by some. 6C/384MiB/192-bit for ~$169 might get interesting... Interestingly, UMC's defect rates are higher AFAIK, which would come in handy for such a part! :p


EDIT: On 65nm (200 chips/wafer and 75% yields including redundancy for argument's sake), they'd need to sell each chip for an average of ~$60 to have 40% gross margins, assuming a 300mm wafer costs around $5.5-6K. On 55nm, this would go down to $45-$50. FWIW, it is important to realize that 40% is their current GPU gross margin... Their average margins are higher because of the professional segment.

These calculations primarily serve to justify that a 6C/384MiB/192-bit SKU would make financial sense.
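
For concreteness, here's the same back-of-the-envelope arithmetic as a small Python sketch. Every input is an assumption taken from the posts above (or a hypothetical midpoint), not a confirmed figure, and the 55nm case assumes a straight optical shrink at a similar wafer cost:

```python
# Back-of-the-envelope check of the numbers above. All inputs are the
# posts' assumptions (or made-up midpoints), not confirmed figures.

wafers = 5500               # midpoint of the rumoured 5,000-6,000 wafer order
candidates_per_wafer = 200  # assumed gross die per 300mm wafer at 65nm
yield_rate = 0.75           # assumed yield, including redundancy
wafer_cost = 5750.0         # USD, midpoint of the assumed $5.5-6K per wafer
target_margin = 0.40        # NVIDIA's GPU-segment gross margin at the time

print(f"gross chips in the order: {wafers * candidates_per_wafer:,}")  # ~1.1M before yield

good_chips = candidates_per_wafer * yield_rate        # ~150 sellable chips per wafer
cost_per_chip = wafer_cost / good_chips               # ~$38 per good chip
asp_needed = cost_per_chip / (1.0 - target_margin)    # ASP for a 40% gross margin
print(f"65nm: ~${asp_needed:.0f} per chip for a 40% margin")    # ~$64, i.e. the ~$60 above

# 55nm case: assume a straight optical shrink (die area scales by (55/65)^2)
# and a similar wafer cost -- both loud assumptions.
candidates_55 = candidates_per_wafer / (55 / 65) ** 2           # ~280 candidates
asp_55 = (wafer_cost / (candidates_55 * yield_rate)) / (1.0 - target_margin)
print(f"55nm: ~${asp_55:.0f} per chip for a 40% margin")        # ~$46, in the $45-50 range
```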
 
No, the card is at least partially limited by memory bandwidth. Do you think every frame (or every part of a frame) has the same bottlenecks?

No, the card is well balanced. There is no single thing you can change that would yield an overall speed increase close to the percentage increase you made to that one setting. (typical settings and typical games)
 
No, the card is well balanced. There is no single thing you can change that would yield an overall speed increase close to the percentage increase you made to that one setting. (typical settings and typical games)
No, the ONLY thing you can say based on this is that the card is not massively unbalanced. It does NOT prove that it's the optimal balance in terms of performance/dollar.
 
No, the card is well balanced. There is no single thing you can change that would yield an overall speed increase close to the percentage increase you made to that one setting. (typical settings and typical games)
What's your point? Changing one thing by X percent never increases the total cost of the board by X percent either. In my post I was suggesting that 512MB with a 512-bit bus probably would make G80 notably faster (~15%), and would probably reduce cost.

If the increase were 1:1 with the memory clock, that would mean changing the core clock has zero benefit, and they're wasting tons of silicon.
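
To illustrate the "balanced card" argument, here's a toy Python model of my own construction with made-up workload numbers: a frame is split into phases, each limited by either the core or the memory subsystem, so raising one clock by 10% buys back less than 10% of frame time unless the card is bottlenecked on that one resource everywhere:

```python
# Toy bottleneck model (made-up workload numbers, purely illustrative).
# A frame is split into phases; each phase takes as long as its slowest
# resource. Frame time is the sum of the phase times.

def frame_time(core_clock, mem_clock, phases):
    """phases: list of (core_work, mem_work) in arbitrary units of work."""
    return sum(max(c / core_clock, m / mem_clock) for c, m in phases)

# A "balanced" card: some phases core-bound, some bandwidth-bound.
phases = [(100, 60), (40, 90), (80, 80), (30, 70)]

base    = frame_time(1.00, 1.00, phases)
core_oc = frame_time(1.10, 1.00, phases)   # +10% core clock
mem_oc  = frame_time(1.00, 1.10, phases)   # +10% memory clock

print(f"+10% core:   {100 * (base / core_oc - 1):.1f}% faster")   # ~2.7%
print(f"+10% memory: {100 * (base / mem_oc - 1):.1f}% faster")    # ~4.5%
# Neither knob alone returns the full 10%. A 1:1 gain from the memory clock
# would require every phase to be bandwidth-bound -- in which case the core
# clock would indeed be wasted silicon, as argued above.
```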
 
Yep, the GTX looks like it would benefit a lot from more bandwidth in that Q4 4xAA bench. Thanks for the link, AnarchX!

But what's really going on with R600 though? The 4xAA graph in the Q4 comparison is basically a straight downward shift from the noAA graph.
Not really. It just looks that way due to incomplete data and the stupid trendlines they drew.
Now if that's the case, why the hell does the 4xAA graph flatten out at the exact same 700MHz (at a lower performance level) if the workload should shift more onto the memory bus / shader core with AA enabled? :???:
I think you just answered your own question. ;)

Remember that the GTX can do 4 AA samples per clock. Thus when you enable AA there are no additional chip cycles necessary and the extra load is entirely placed on the memory subsystem. There is also likely spare texturing ability w/o AF (depending on the shader), so again enabling it primarily increases the memory load as opposed to chip load.

R600's ROPs need 2 cycles to do 4xAA, so we have more chip cycles as well as more bandwidth use. There's also no hardware resolve, so this requires cycles that are "free" on G80. The same is true with AF, since R600 has far less filtering ability than G80. It doesn't have idling filter units w/o AF, so the additional texture load now adds chip cycles.

The graphs show that AA/AF in fact don't shift workload onto the memory bus for R600. Instead, AA/AF add a notable workload to both the chip and the memory. This has been true of ATI products for quite a while. You'll notice that the AA/AF hit on the Radeon 9700/9800 is almost identical to that of the 9500 Pro.
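
Here's a rough cycle-accounting sketch of that argument, using the sample rates stated above (G80: 4 AA samples per ROP clock with hardware resolve; R600: 2 clocks for 4xAA plus a shader-based resolve). The resolve-cost value is an invented placeholder, not a measured number:

```python
import math

# Rough ROP cycle accounting per pixel, using the rates from the post above.
# A ROP handles one pixel's samples per clock, up to its per-clock sample
# limit; extra samples cost extra clocks, and a shader-based resolve adds
# further chip cycles (the 0.5 below is a made-up placeholder, not data).

def chip_cycles_per_pixel(aa_samples, samples_per_clock, resolve_cycles=0.0):
    return max(1, math.ceil(aa_samples / samples_per_clock)) + resolve_cycles

g80_noaa  = chip_cycles_per_pixel(1, 4)              # 1 cycle
g80_4xaa  = chip_cycles_per_pixel(4, 4)              # still 1 cycle; resolve is in hardware
r600_noaa = chip_cycles_per_pixel(1, 2)              # 1 cycle
r600_4xaa = chip_cycles_per_pixel(4, 2, resolve_cycles=0.5)  # 2 cycles + shader resolve

print(f"G80:  {g80_noaa} -> {g80_4xaa} chip cycles/pixel with 4xAA (extra load is bandwidth only)")
print(f"R600: {r600_noaa} -> {r600_4xaa} chip cycles/pixel with 4xAA (extra chip AND bandwidth load)")
```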
 
Image of the G92 design without the cooler:
06-GF8800GT.jpg
 
No, the ONLY thing you can say based on this is that the card is not massively unbalanced. It does NOT prove that it's the optimal balance in terms of performance/dollar.

True, you can't say it is the optimal balance, as this would also depend on applications etc.

However, the 8800GTX shows that an increase in clock speed usually gives more performance gain than an increase in bandwidth (memory clock). It should be the other way around if we want to believe that G80 as we know it would be much faster with just a huge increase in bandwidth.
 
http://my.ocworkbench.com/bbs/showthread.php?threadid=68000


On 29 Oct, NVIDIA will launch the 8800GT (G92). The model scheduled is a 512MB card with a 256-bit memory interface, and it has 112 stream processors.

What we heard is that there seem to be plans to launch another, lower-end model of G92. It will come with 96 shader processors and 256MB / 512MB GDDR3. This model may or may not launch; it all depends on sales of the first 8800GT and on AMD's RV670.
 
What's your point? Changing one thing by X percent never increases the total cost of the board by X percent either. In my post I was suggesting that 512MB with a 512-bit bus probably would make G80 notably faster (~15%), and would probably reduce cost.

No. The PCB would be much more complicated and more expensive (probably with more layers as well), and it's also not that nice for the memory layout.
 
EDIT: On 65nm (200 chips/wafer and 75% yields including redundancy for argument's sake), they'd need to sell each chip for an average of ~$60 to have 40% gross margins, assuming a 300mm wafer costs around $5.5-6K. On 55nm, this would go down to $45-$50.
How soon will a 55nm wafer be as cheap as a 65nm wafer?

Jawed
 
TSMC capacity constrained on 65nm? Didn't we get signs of this earlier in the year? Did AMD get the lion's share?

Jawed

It could also point to yield issues.
But I'm betting on increased CELL and RSX orders as the main culprit.
 