> GT218 should be a 40nm chip for budget/low end.
That'll make Domell's statement even more likely, won't it?
I think if GT218 stands against RV870, NVIDIA will no doubt be defeated.
Hmm, there are still plenty of SKUs out there to rebrand. It will be interesting to see what NVIDIA does this month and in April, after AMD releases its new GPUs.
> GT200's lateness (though in fact it may well have been a replacement for "G100") indicates that the wheel nuts started coming undone towards the end of 2007.
NVIDIA's strategy is very confusing.
GT200's lateness (though in fact it may well have been a replacement for "G100") indicates that the wheel nuts started coming undone towards the end of 2007.
NVidia's strategy also led to the poor performance increment of GT200, distracted as they were by CUDA functionality.
Then there was their apparent arrogance in the face of ATI.
And then their repeated failures to make 65nm and 55nm technology deliver the goods.
So a wheel or two have fallen off their strategy. I wouldn't call it a strategy any more.
GT300's strategy, with a bit of luck, hasn't really been affected. You could say it boils down to 40nm now, as an external factor, and not much else.
Question is, why has NVidia struggled to get the most out of TSMC, both in terms of die area and in terms of timeliness?
---
Interestingly enough, G92b with a GDDR5 interface would prolly give RV770 a good run for its money. I think AMD was lucky with GDDR5 (RV770 would have been pretty lame without it) and come this autumn I imagine there'll be a level playing field there, though NVidia might not deploy GDDR5 as widely as AMD, electing to keep the low end on GDDR3?
Jawed
> I think AMD was lucky with GDDR5 (RV770 would have been pretty lame without it)
I don't quite agree with that. Judging by the results some have published (colour-fill benchmarks, overclocking experiments), the efficiency of GDDR5 (at least with the memory controller in RV770) seems to be quite a bit lower than GDDR3's, and the HD 4850 indeed scales very well with memory clock. I think an RV770 with the same GPU clock as the HD 4870, but using those factory-overclocked 1.3GHz GDDR3 parts, would be quite close in performance to the HD 4870 as we know it.
> AFAIK those are not exactly 2.6 GHz GDDR3, but overclocked chips rated for lower clocks (2.5 GHz effective, perhaps?).
Well, Samsung does offer 1.3GHz GDDR3 chips (only 512Mbit ones, however; the fastest 1Gbit ones are 1GHz, though Qimonda has faster 1Gbit chips at 1.2GHz). AFAIK everything over 1GHz requires more than 1.8V, hence I call them factory-overclocked.
> Anyway, such chips probably weren't available for the HD 4870's release, and again, even 2.2 GHz GDDR3 doesn't limit RV770 to a point we could call "lame".
Possible, though 1.1GHz GDDR3 was available way before that (the 8800 Ultra had a memory clock of 1.08GHz). Maybe 1.2GHz would have been available, I'm not sure. I wouldn't call it lame with 2.2GHz GDDR3 either, but such a configuration would clearly no longer be competitive with the GTX 260.
So from the discrepancy between NVidia's numbers and Carsten's measurements, it seems that there's 0.5mm of sealant/packaging on each of the width and height. Useful to bear in mind when future die measurements are performed...
Jawed
> GT200 (65nm): 606.9 mm² (24.77 mm x 24.5 mm)
That would give 85 chips per wafer (gross), and from all the pictures available you can definitely count more there -- 93~94, by my own measurements.
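For what it's worth, the 85 vs 93~94 discrepancy can be sanity-checked with the usual first-order die-per-wafer approximation (my own sketch, not NVIDIA's or anyone else's actual figures; it ignores scribe lines and edge exclusion, so the real count depends on placement):

```python
import math

def gross_dies_per_wafer(die_w_mm, die_h_mm, wafer_d_mm=300.0):
    """First-order gross die-per-wafer estimate: wafer area divided by
    die area, minus the common edge-loss correction term for partial
    dies along the circumference. Scribe lines and edge exclusion are
    ignored, so this is only a ballpark figure."""
    area = die_w_mm * die_h_mm
    wafer_r = wafer_d_mm / 2.0
    return int(math.pi * wafer_r ** 2 / area
               - math.pi * wafer_d_mm / math.sqrt(2.0 * area))

# GT200 at 24.77 mm x 24.5 mm on a 300 mm wafer:
print(gross_dies_per_wafer(24.77, 24.5))  # -> 89
```

For these dimensions it lands at 89, right between the 85 figure and the 93~94 photo count, so the gap could plausibly come down to differing edge-exclusion and scribe-line assumptions rather than a measurement error.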