ELSA hints GT206 and GT212

GT218 should be a 40nm chip for the budget/low end. Fuad is most likely referring to GT3x0 rather than anything else.
 
Why would GT218 stand against RV870? They're in two completely different segments. It'll be interesting to see whether AMD goes big again for DX11 or sticks to the $300 price point. And if so, would NVIDIA have a ~300mm² part to compete, or would they have to use a cut-down version of a bigger chip?
 
But the most important question is: what chip is GT218? Low-end, mainstream or performance? Did NVIDIA change their codenames again, or is Fudzilla simply wrong about GT218 being a performance GPU?
Will there be other 40nm GT2xx GPUs? Or has NVIDIA cancelled some of their 40nm GT2xx GPUs, like GT214 or GT215 (GT212 is most likely cancelled too)?

NVIDIA's strategy is very confusing. There is no new info about 40nm GT2xx GPUs, and no GT3xx info either. AMD's RV740/RV790 are getting closer and closer, and RV740 seems to be a really big thing. Such a small die with performance comparable to slower RV770 variants is simply amazing. I think that at this point NVIDIA should do the best they can and push a GPU with a performance/price ratio comparable to AMD's killer.

It will be interesting to see what NVIDIA does during this month and April, after AMD releases their new GPUs.
 
It will be interesting to see what NVIDIA does during this month and April, after AMD releases their new GPUs.
Hmm, there are still plenty of SKUs to rebrand out there. :LOL:

On a serious note, at least we should see some GDDR5 adoption from NV, if nothing else. :rolleyes:
Heck, even the skinny mobile 4800 series will be getting 5GHz parts, now!
 
NVIDIA's strategy is very confusing.
GT200's lateness (though in fact it may well have been a replacement for "G100") indicates that the wheel nuts started coming undone towards the end of 2007.

NVidia's strategy also led to the poor performance increment of GT200, distracted as they were by CUDA functionality.

Then there was their apparent arrogance in the face of ATI.

And then their repeated failures to make 65nm and 55nm technology deliver the goods.

So a wheel or two have fallen off of their strategy. I wouldn't call it a strategy any more.

GT300's strategy, with a bit of luck, hasn't really been affected. You could say it boils down to 40nm now, as an external factor, and not much else.

Question is, why has NVidia struggled to get the most out of TSMC, both in terms of die area and in terms of timeliness?

---

Interestingly enough, G92b with a GDDR5 interface would prolly give RV770 a good run for its money. I think AMD was lucky with GDDR5 (RV770 would have been pretty lame without it) and come this autumn I imagine there'll be a level playing field there, though NVidia might not deploy GDDR5 as widely as AMD, electing to keep the low end on GDDR3?
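To put rough numbers on that, here's a quick bandwidth sketch (retail clocks as commonly cited for the 9800 GTX+ and HD 4870; the G92b-with-GDDR5 configuration is purely hypothetical):

```python
def bandwidth_gbps(bus_bits, data_rate_gtps):
    """Peak bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return bus_bits / 8 * data_rate_gtps

print(bandwidth_gbps(256, 2.2))  # 9800 GTX+ (G92b), 256-bit GDDR3: 70.4 GB/s
print(bandwidth_gbps(256, 3.6))  # HD 4870 (RV770), 256-bit GDDR5: 115.2 GB/s
# A hypothetical G92b with the same GDDR5 would also reach 115.2 GB/s,
# roughly a 64% bandwidth bump over the GDDR3 card.
```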

Jawed
 
Interestingly enough, G92b with a GDDR5 interface would prolly give RV770 a good run for its money.

So we can probably expect a G92bb with GDDR5 support, named GTS260! :LOL:

Seriously, a G92@40nm with a 128-bit MC and GDDR5 would outperform RV740. The die size would probably be a little bigger (150-160 mm², maybe?), but it would be close enough to be a competitive solution. I actually think that GT215 isn't going to be much different from this.
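A quick sanity check on that die-size guess, assuming ideal area scaling with the square of the linear shrink plus a padding factor for I/O and analog blocks that shrink poorly (the padding is a guess, not a foundry figure, and this ignores any area clawed back by the narrower 128-bit bus):

```python
g92b_area = 264.8                   # G92b at 55nm, mm^2 (measured, see later in the thread)
ideal = g92b_area * (40 / 55) ** 2  # ideal optical shrink to 40nm
print(ideal)                        # ~140 mm^2
print(ideal * 1.10, ideal * 1.15)   # ~154 to ~161 mm^2 with 10-15% padding
```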

Now, the problem is: when will Nvidia be able to launch such a solution? Q3? That gives ATi almost 6 months of overwhelming superiority in both the desktop and notebook markets.
 
IHVs have their ups and downs precisely because of the specific strategies they pursue in each timeframe, and that goes for all IHVs and all markets. There's no such thing as always getting things right and never making mistakes, and it's quite irrelevant whether it's Intel, NVIDIA, AMD or anyone else.

We can of course start counting how many corpses each of them has hidden in its own dungeon, but it'll result in the same nonsensical drivel as always.

If there's one lesson some should have learned over the years while following the whole 3D circus (especially those that followed it from the very beginning), it's that it doesn't take long until the tables turn. Some balances hold longer and some less. The challenge for the consumer is to find out which hardware (whether GPU, CPU or anything else) offers the best bang for the buck in a given timeframe and buy exactly that, without any senseless brand loyalties or feelings.

We are going through extremely tough times, and AMD not only achieved an outstanding perf/mm² ratio for their chips, they also reduced prices significantly across the market at a time when things are quite tough for everyone (and no, I don't think that when AMD conceived the RV7x0 family they could have foreseen the financial crisis). The challenge now is to repeat a similar stunt with the D3D11 generation. Although with RV7x0 in the back of one's mind their odds seem excellent at the moment, it's still no absolute guarantee.

The only D3D11 architecture about which a few details are known at the moment is Intel's Larrabee, and that only because Intel is entering the GPU market with a fundamentally new architecture and has to start evangelizing for it as early as possible. The other two remain quite a big question mark; it might be safe to assume that AMD will continue its performance-GPUs-only strategy and NVIDIA its monolithic high-end single-chip strategy, but that tells next to nothing about which may be the better solution after all.

Any of the above aside, it is true that NVIDIA this time really has to convince the world that their monolithic single high-end chip approach is the better strategy. Personally, it didn't convince me one bit this past round.
 
I think AMD was lucky with GDDR5 (RV770 would have been pretty lame without it)
I don't quite agree with that. Judging by the results some have published (color fill benchmarks, overclocking experiments), the efficiency of GDDR5 (at least with the memory controller in RV770) seems to be quite a bit lower than GDDR3's, and the HD 4850 indeed scales very well with memory clock. I think an RV770 with the same GPU clock as the HD 4870 but using these factory-overclocked 1.3 GHz GDDR3 parts would be quite close in performance to the HD 4870 as we know it.
 
AFAIK those are not exactly 2.6 GHz GDDR3, but overclocked chips rated for lower clocks (2.5 GHz effective, perhaps?). Anyway, such chips probably weren't available at the HD 4870's release, and again, even 2.2 GHz GDDR3 doesn't limit RV770 to a point where we could call it "lame".
 
AFAIK those are not exactly 2.6 GHz GDDR3, but overclocked chips rated for lower clocks (2.5 GHz effective, perhaps?).
Well, Samsung does offer 1.3 GHz GDDR3 chips (only 512 Mbit ones, however; the fastest 1 Gbit ones are 1 GHz, but Qimonda has faster 1 Gbit chips at 1.2 GHz). AFAIK everything over 1 GHz requires more than 1.8V, though (hence I call them factory-overclocked).
Anyway, such chips probably weren't available at the HD 4870's release, and again, even 2.2 GHz GDDR3 doesn't limit RV770 to a point where we could call it "lame".
Possible, though 1.1 GHz GDDR3 was available way before that (the GF8800U had a memory clock of 1.08 GHz). Maybe 1.2 GHz would have been available, not sure. I wouldn't call it lame with 2.2 GHz GDDR3 either, but such a configuration clearly would no longer be competitive with the GTX 260.
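Rough peak-bandwidth numbers for that comparison (retail clocks as commonly cited; the 2.2 GT/s GDDR3 RV770 is the hypothetical configuration discussed above):

```python
def bandwidth_gbps(bus_bits, data_rate_gtps):
    # Peak bandwidth in GB/s: bus width in bytes times per-pin data rate.
    return bus_bits / 8 * data_rate_gtps

print(bandwidth_gbps(256, 3.6))  # HD 4870, 256-bit GDDR5:   115.2 GB/s
print(bandwidth_gbps(448, 2.0))  # GTX 260, ~1 GHz GDDR3:   ~112   GB/s
print(bandwidth_gbps(256, 2.2))  # RV770 + 2.2 GT/s GDDR3:    70.4 GB/s
```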
 
So from the discrepancy between NVidia's numbers and Carsten's measurements, it seems that there's 0.5mm of sealant/packaging on each of the width and height. Useful to bear in mind when future die measurements are performed...
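In numbers, assuming the commonly cited ~576 mm² (roughly 24 mm × 24 mm) figure for GT200 as NVIDIA's side of the discrepancy:

```python
w, h = 24.77, 24.5            # Carsten's measured GT200 package dimensions, mm
print(w * h)                  # 606.9 mm^2 as measured
print((w - 0.5) * (h - 0.5))  # ~582.5 mm^2, close to the ~576 mm^2 figure
```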

Jawed

Here's some new stuff to ponder...

G80 (90nm): 486.5 mm² (21.7 mm × 22.42 mm)
GT200 (65nm): 606.9 mm² (24.77 mm × 24.5 mm)
GT200b (55nm): 497.3 mm² (22.3 mm × 22.3 mm)
G92b (55nm): 264.8 mm²
RV770 (55nm): 274.7 mm²

http://www.pcgameshardware.de/aid,6...200-und-GT200b-nachgemessen/Grafikkarte/News/

(in German only, so you might want to use your favourite online translator)
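Taking the 65nm/55nm pair from these measurements, the shrink fell well short of ideal, which fits the earlier point about 55nm not delivering (a quick sketch):

```python
gt200, gt200b = 606.9, 497.3  # mm^2, from the measurements above
print(gt200b / gt200)         # actual area ratio: ~0.82
print((55 / 65) ** 2)         # ideal optical-shrink ratio: ~0.72
# 55nm is a half-node linear shrink of 65nm, but I/O, pads and analog
# don't scale, so real dies land well above the ideal ratio.
```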
 
GT200 (65nm): 606.9 mm² (24.77 mm × 24.5 mm)
That would work out to 85 chips per wafer (gross), but from all the pictures available you can definitely count more there -- 93~94, by my own measurements.
 
I was at 95 (maybe minus 1). But take into account the glue around each die. That might make a small difference.
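For reference, a standard first-order gross-dies-per-wafer estimate lands between those two counts (a sketch that ignores die aspect ratio, scribe lines and edge exclusion, which is likely why hand counts differ):

```python
import math

def gross_dies(wafer_mm, die_area_mm2):
    # Wafer area over die area, minus a correction term for edge losses.
    r = wafer_mm / 2
    return (math.pi * r**2 / die_area_mm2
            - math.pi * wafer_mm / math.sqrt(2 * die_area_mm2))

print(gross_dies(300, 606.9))  # ~89 gross candidates on a 300 mm wafer
```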
 
Oh look, quad GT200 GPUs on one card! :runaway:
IMG_1287.jpg

http://www.extrahardware.cz/node/3266
 
Really nice-looking fake, or is it a real GT200b workstation card with 4 GPUs? There are rumors about GTX 295 equivalents, after all. :???:
 
Pretty nicely done; I can't even make out which card it's based on... Do we even have pictures of the coming single-PCB 295?
 