That's interesting... 5000-6000 wafers is about one million chips before you take yields into account. I presume this is not NVIDIA's entire production, though.
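The back-of-envelope behind that figure, assuming roughly 200 candidate dice per 300mm wafer (my assumption, not an NVIDIA number):

```python
# Sketch: 5000-6000 wafers at ~200 candidate dice each, before yield.
# dice_per_wafer is an assumed round number for illustration.
wafers = (5000, 6000)
dice_per_wafer = 200
chips = [w * dice_per_wafer for w in wafers]
print(chips)  # [1000000, 1200000] -- about a million chips, pre-yield
```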
No, the card is at least partially limited by memory bandwidth. Do you think every frame (or every part of a frame) has the same bottlenecks?
> No, the card is well balanced. There is not one thing you can change that would yield an overall speed increase close to the increase in % you did to one setting. (typical settings and typical games)

No, the ONLY thing you can say based on this is that the card is not massively unbalanced. It does NOT prove that it's the optimal balance in terms of performance/dollar.
> No, the card is well balanced. There is not one thing you can change that would yield an overall speed increase close to the increase in % you did to one setting. (typical settings and typical games)

What's your point? Changing one thing by X percent never increases the total cost of the board by X percent either. In my post I was suggesting that 512MB with a 512-bit bus probably would make G80 notably faster (~15%), and would probably reduce cost.
> Yep, the GTX looks like it would benefit a lot from more bandwidth in that Q4 4xAA bench. Thanks for the link AnarchX!

Not really. It just looks that way due to incomplete data and the stupid trendlines they drew.
But what's really going on with R600, though? The 4xAA graph in the Q4 comparison is basically a straight downward shift from the no-AA graph.
> Now if that's the case, why the hell does the 4xAA graph flatten out at the exact same 700MHz (at a lower performance level) if the workload should shift more onto the memory bus / shader core with AA enabled?

I think you just answered your own question.
On 29 OCT, NVIDIA will launch the 8800GT (G92). The scheduled model is a 512MB card with a 256-bit memory interface and 112 stream processors.
What we heard is that there seem to be plans to launch another, lower-end model of G92. It will come with 96 shader processors and 256MB / 512MB of GDDR3. This model may or may not launch; it all depends on sales of the first 8800GT and AMD's RV670.
TSMC capacity constrained on 65nm? Didn't we get signs of this earlier in the year? Did AMD get the lion's share?
How soon will a 55nm wafer be as cheap as a 65nm wafer?

EDIT: On 65nm (200 chips/wafer and 75% yields including redundancy, for argument's sake), they'd need to sell each chip for an average of ~$60 to have 40% gross margins, assuming a 300mm wafer costs around $5.5-6K. On 55nm, this would go down to $45-$50.
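The margin arithmetic above can be sketched as follows. All inputs are the post's own assumptions ($5.75K wafer, 200 dice, 75% yield); the ~280-dice figure for 55nm is my estimate from the (55/65)^2 area shrink, not a quoted number:

```python
def asp_for_margin(wafer_cost, chips_per_wafer, yield_rate, gross_margin):
    """Average selling price per good chip needed to hit a gross margin."""
    good_chips = chips_per_wafer * yield_rate
    cost_per_chip = wafer_cost / good_chips
    # gross margin = (price - cost) / price  =>  price = cost / (1 - margin)
    return cost_per_chip / (1.0 - gross_margin)

# 65nm case: ~$5.75K wafer, 200 candidate dice, 75% yield, 40% margin
print(round(asp_for_margin(5750, 200, 0.75, 0.40)))  # 64 -- roughly the ~$60 cited
# 55nm case: die area shrinks by (55/65)^2 ~ 0.72, so ~280 dice per wafer
print(round(asp_for_margin(5750, 280, 0.75, 0.40)))  # 46 -- within the $45-$50 range
```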
Jawed
Cell at TSMC?