NVIDIA GT200 Rumours & Speculation Thread

From what I've seen here, this is my understanding of rough price points and performance, in order from lowest in performance to highest:

These are disregarding any 512MB/1GB variations... just the chip itself.

1) ATI HD4850 at ~$250
2) ATI HD4870 at ~$350
3) Nvidia GTX260 at ~$450
4) Nvidia GTX280 at ~$550+
5) ATI HD4850x2 at ~$500
6) ATI HD4870x2 at ~$550

7) Nvidia GTX260x2 ??? at ~$550+ (assuming 65nm, if possible)
8) Nvidia GTX280x2 ??? at ~$600+ (assuming 65nm, if possible)

We see Nvidia in the lead for the single cards, but this time (unlike with the 38xx series) the X2 takes the lead significantly over Nvidia's single offerings.

What's particularly interesting is the CrossFireX ability of the new 48xx series. We'll be seeing true memory sharing and chip integration, which means an actual X2 card will look as if it's just a single GPU. CrossFireX will still show two, I believe, but the memory may still be shared? AND, it's cheaper than the GTX280.

Make adjustments to my assumptions, as they are based on incomplete reports from around the web.
 
I doubt we will see any GTX280/260 X2... those TDPs would just be ridiculous for a dual card. And given that Nvidia likes the dual-PCB approach, it would only hurt.
 
Unless you want a burnt hole in your motherboard where the PCI Express slot and its neighbouring parts are located, I doubt nVIDIA will even remotely think about any GX2-type card for GT200 at the present moment. It's more of an alternative design for a high-end refresh once they have successfully migrated to 55nm with a combination of GDDR5 memory and a 256-bit memory interface, which would greatly reduce die size, TDP and power consumption and make such a design practical. At this point in time, I'm about 99.9% skeptical.

IMO the kind of performance jump one might see this time around could be as big as G80 was over G71. I'd say that's quite plausible given that most G92 cards, and even the various SLI setups, take a nosedive at 19x12 or higher resolutions with AA/AF in modern titles.

But what I'm more concerned about is: did nVIDIA overdo it? When it comes to yields, transistor count, etc., things don't sound as rosy as they do on the performance side. This chip looks very uneconomical going by the current specs and rumours.

Plus the GT200 rumours seem to have totally silenced the G92b rumours. It's a given that nVIDIA will dominate the high end with single-GPU solutions, but the $249~349 market is left wide open for the HD48x0 series unless those percentage performance figures were only indicating a 3DMark difference.
 
While it is a concern, I think anyone who takes yield rumours seriously should reconsider how they judge reliability. The number of people at a company who know about real yield data is likely so small that the chances of an accurate leak are basically zero.

That doesn't mean you should expect mind-blowingly good yields for a 550mm²+ chip, of course - but I don't think you can really estimate them, or cost, at all. Plus, yields aren't just about die size and clock speeds - they're also about transistor density, testing procedures, etc. Certainly in terms of transistor density, it's fairly easy to see that GT200 and G9x are much less dense than ideal scaling from G80 would suggest, given how poor the scaling is/was - and unless their engineers are completely incompetent, which I doubt, that will have a positive effect on yields (whether it has a positive effect on overall manufacturing cost is another question entirely, and a much more dubious one).
 
The bigger issue w.r.t. yields is how many chips you get per wafer, and from a chips-per-wafer standpoint GT200 is obviously going to come out worse than the much smaller RV770, unless yields on RV770 are much worse, which the odds say they aren't. So from a price point, ATI has them there, which is why I think Nvidia made GT200 so large and so powerful: if it didn't outperform the previous generation by a lot, its pricing would not make sense.

After all, the $200-300 high-performance segment came about largely because ATI finally pushed Nvidia into selling G92 at lower prices by being willing to release cheap cards, and that hurt Nvidia's margins last quarter. ATI will probably want to continue that strategy, so Nvidia wants a card firmly at the high end now.
 
Your reasoning is sound, but you are not considering that GT200 was in development long before the launch of RV670/G92, and its specifications were defined back then.
 
Yes and no. Yes, it was in development far longer, but IIRC all that talk about the 1 TFLOP beast last year was misinterpreted as referring to G92, when GT/G200 was what they were actually referring to. Since it was not released last year, what was Nvidia doing with it? Probably optimizing or improving the design for yields/production, and I'm sure also watching the market to see where they could fit it in better. After all, since G92 was released, G94 has shown possible shader optimizations and so on.
 
=>ZerazaX: I find no fault in your reasoning. Remember what nVidia said in 2007: that they would release a new high-end product every Q4 (before Xmas) and a high-end refresh in Q2 (before the summer holidays). G80 was the first; the Ultra was its refresh. G90, GT200 or whatever it was called was probably originally planned for a Q4'07 release. But looking at the situation - ATi canned its R650 project, couldn't compete even against G92, and G80 was still sufficient for all games except at super-high resolutions (and there's SLI for that) - they decided to release a dual G92 instead as the Q4 high-end product (which slipped to Q1), and probably made some modifications to "G90" (increasing the ALU:TEX ratio, more shaders per cluster, the 55nm manufacturing process) to release it later as GT200. ATi did the same with R650, aka the 96 5D-ALU chip; it became the basis for RV770.
 
AFAIK, there are no shader optimizations in G94 that are not also present in G92.
And a base chip design cannot be modified so easily; if they were aiming for the teraflop mark, there are about 100 GFLOPS missing...
No, I don't want to bash Nvidia the way the Inquirer does; I only want to point out that developing such big chips is not an easy task, and that ATI has also said it has a "1 TFLOP chip" in development ever since the AMD/ATI acquisition. It takes years of development time, indeed.
 
Yes, the base design can't really be changed, and I'm sure the 1 TFLOP card specs were planned ahead of time (after all, if you know it's (2 FLOPs for the MADD + 1 for the MUL) * 240 SPs * shader clock, you can work out the shader clock needed for 1 TFLOP), and perhaps yields weren't what was expected and the shader clock was dropped a bit (FWIW, if it is indeed a 1296MHz shader clock with those specs, that works out to 933 GFLOPS, so it's close to 1 TFLOP).
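
To make that back-of-the-envelope arithmetic explicit, here is a small Python sketch; the 240 SP count and the 1296MHz shader clock are just the rumoured figures discussed above, not confirmed specs.

```python
# Theoretical shader throughput under the rumoured GT200 specs:
# each SP is assumed to issue one MADD (2 FLOPs) plus one MUL (1 FLOP) per clock.
FLOPS_PER_SP_PER_CLOCK = 3
SP_COUNT = 240                      # rumoured shader processor count

def gflops(shader_clock_mhz: float) -> float:
    """Peak single-precision GFLOPS at a given shader clock (MHz)."""
    return FLOPS_PER_SP_PER_CLOCK * SP_COUNT * shader_clock_mhz / 1000.0

print(gflops(1296))                                       # ~933 GFLOPS at 1296 MHz
print(1000 * 1000 / (FLOPS_PER_SP_PER_CLOCK * SP_COUNT))  # ~1389 MHz needed for 1 TFLOP
```

At 1296MHz the formula gives 933 GFLOPS, about 67 GFLOPS short of the 1 TFLOP target, which is roughly the gap mentioned a few posts above.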

However, during these past 6 months we'd have to wonder what Nvidia was doing with GT200, since I doubt they just sat on it. As far as small changes go, keep in mind that ATI went from R600 to RV670 in the span of 6 months: it moved UVD on-board, cut the 512-bit memory bus / 1024-bit internal ringbus down to 256/512, and made whatever changes were required for DX10.1. So I don't think it's out of the question that small changes could have been made to optimize efficiency, especially if there was extra time for testing.

I read a story about Nvidia's headquarters saying they have an entire supercomputer designed to simulate a chip without needing to put it into production, so I wouldn't be surprised if, even though production didn't start until these past two months, they had a few months to simulate and test the configuration and introduce optimizations.

I'm sure they've always had it penciled in for the high end, but that doesn't mean optimizations and revisions weren't done. Remember, the G80 A2 to A3 revision introduced much higher clocked G80 cores, and that was done within a few months' time.
 
Yes, I felt the 9800GX2 was created more to counter R680 than anything else. Sort of a "hey, we can do it too" statement, as well as a way to make sure they kept the crown across the board, since there were certain games where the 3870X2 benched better than any single-GPU config at the time of its release.
 
The basic problem for NVIDIA, from my perspective, is that after the GTX280 becomes available, the gap between the 8800GT and 9800GTX could be filled by ATI's new graphics solutions.

PS: I recently heard that prices for the 8800GTS 320MB/640MB have been cut to less than 100~130 USD in Asia.
 
The changes in RV670 were not so trivial, and I strongly doubt they were done in only 6 months just to remedy R600's lack of success.
 
Yes, some were being sold at those prices even in the US recently...

As for the gap between the GT200 and G92 chips, I think that's where G92b will fill in. A higher-clocked G92 should certainly be competitive against the 4850, and possibly the 4870 as well.
 
I agree that work probably began before that, but keep in mind that the 55nm yields for RV670 were supposedly "surprisingly" good and actually pushed RV670 ahead of schedule... IIRC RV670 was originally mapped not for Q4'07 but for Q1'08, but above-expected yields and returns pulled it forward to Q4. So a lot of the time spent working on chips also seems to be time simply waiting for returns on prototypes.
 
I doubt they sat on it, they were probably fixing it ... and making changes while fixing it would take a lot of balls.
 
True, and I doubt they really made many big changes - any they did make were to fix issues (such as the supposed fixing of the MUL).

Also, on the whole yields issue, I think the better thing to cite would be how many dies per wafer Nvidia gets compared to ATI. If 256 mm^2 is the correct size for RV770 and 576 mm^2 is correct for GT200, then for every 4 GT200s ATI could make 9 RV770s (assuming identical yield percentages). Of course, if yields differ, then a completely unusable GT200 costs Nvidia a lot more of the wafer's surface area than ATI losing one GPU.

And if one plays with profit margins, ATI can certainly sell cards at half the profit margin and still come out on top, assuming that for every 4 GT200s sold, ATI can sell 9 RV770s.

Obviously, all of that is a very simple look at where ATI's strategy is coming from, but the word "yield" is definitely a bit misleading...
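
To put rough numbers on that, here's a naive Python sketch; the 300mm wafer and the 256/576 mm^2 die sizes are the rumoured figures from above, edge losses and scribe lines are ignored, and the dollar margins are purely made up for illustration.

```python
import math

WAFER_DIAMETER_MM = 300
WAFER_AREA_MM2 = math.pi * (WAFER_DIAMETER_MM / 2) ** 2   # ~70,686 mm^2

def dies_per_wafer(die_area_mm2: float) -> int:
    """Naive upper bound: wafer area divided by die area (no edge/scribe losses)."""
    return int(WAFER_AREA_MM2 // die_area_mm2)

rv770 = dies_per_wafer(256)          # ~276 candidate dies
gt200 = dies_per_wafer(576)          # ~122 candidate dies
print(rv770, gt200, rv770 / gt200)   # ratio ~2.25, i.e. about 9 RV770 per 4 GT200

# Margin angle: with ~2.25x the chips from the same silicon, ATI can take
# roughly half the per-chip margin and still earn more per wafer
# (assuming identical yields and wafer cost).
print(gt200 * 100, rv770 * 50)       # hypothetical $100 vs $50 margin per chip
```

Even at half the per-chip margin, the smaller die earns more per wafer in this toy model, which is the whole point of the area argument.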
 
Yields are related to chip size, too. Let's imagine you can fit exactly twice as many RV770s as GT200s on a 30cm wafer (say 200 RV770s and 100 GT200s). Now, if you have five defective chips on a wafer, in the RV770 case you throw out 2.5% of the wafer; in the GT200 case you throw out 5%. Of course, process and redundancy can affect this, but if you have 15% scrap on a 250 mm^2 chip, the scrap on a chip twice as big is likely nearer 30% than 15% (if redundancy and process are the same). Of course, you can also sell a defective chip (if the failure is not critical) as a lower-spec part.
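
For what it's worth, the standard first-order Poisson yield model (yield = exp(-defect_density * die_area)) makes the same point. Below is a minimal Python sketch using the hypothetical 15% scrap / 250 mm^2 numbers from the post above; the defect density is backed out from those figures and is not real process data.

```python
import math

def poisson_yield(defect_density_per_mm2: float, die_area_mm2: float) -> float:
    """Classic first-order yield model: Y = exp(-D0 * A)."""
    return math.exp(-defect_density_per_mm2 * die_area_mm2)

# Back out the defect density that gives 15% scrap (85% yield) on a 250 mm^2 die.
d0 = -math.log(0.85) / 250.0          # ~0.00065 defects per mm^2

small = poisson_yield(d0, 250.0)      # 0.85 by construction
big = poisson_yield(d0, 500.0)        # ~0.72, i.e. ~28% scrap on a die twice as big

print(f"scrap at 250 mm^2: {1 - small:.1%}, scrap at 500 mm^2: {1 - big:.1%}")
```

So doubling the die area under the same process pushes the scrap rate from 15% towards the ~30% figure, in line with the argument above (and, as noted, redundancy and salvaging partially defective dies as lower-spec parts soften this in practice).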
 
I suppose that GT200 actually has at least a 16x16 configuration, and that both the GTX280 and GTX260 are partially disabled versions of GT200.
 