NVIDIA GF100 & Friends speculation

What did they announce? 192bit GF106?

Tuesday's Siggraph presentation: Quadro to the Power of Fermi
Scott Fitzpatrick, Product Manager, NVIDIA

There are still people who listen to Charlie's stories about manufacturing and cost? Why would they do that?
That's because everybody says he's wrong and nobody comes up with any real figures. At least he has a method that makes sense: taking publicly available data on wafer costs and industry-standard calculation tools.

Everyone saying he's wrong still doesn't make them right; maybe he's just baiting for real figures?
 
Tuesday's Siggraph presentation: Quadro to the Power of Fermi
Scott Fitzpatrick, Product Manager, NVIDIA

Thanks.

That's because everybody says he's wrong and nobody comes up with any real figures. At least he has a method that makes sense: taking publicly available data on wafer costs and industry-standard calculation tools.

Everyone saying he's wrong still doesn't make them right; maybe he's just baiting for real figures?

Or he's trolling for AMD? I think my suggestion is more likely...

If you read a Charlie article taking his anti-Nvidia agenda into account, it makes a lot more sense. I mean, what the hell was that 'update' of his bump-gate articles on the day the GF104 embargo ended? It's stuff like that which makes Charlie a suspect source of information. I mean he basically had nothing bad to say about GF104 (because it is quite a good product) so he bumps a bunch of negative articles that have nothing to do with anything at this time to distract. He used the excuse that the original articles had gone walkies, but they had been gone for longer than that.
 
Thanks.
Or he's trolling for AMD? I think my suggestion is more likely...

Oh sure, but his numbers stem from logic: extrapolating wafer prices upward from currently known data. If anything, he could only be too high.
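As a rough sketch of that kind of estimate, here's how one could go from a public wafer price to a cost per good die. Every number below (wafer price, die size, yield) is a made-up placeholder, not a real TSMC or Charlie figure:

```python
# Hypothetical cost-per-good-die estimate from a public wafer price.
# All numbers are illustrative placeholders, not real industry figures.
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Crude gross-die estimate: wafer area / die area, minus an edge-loss term."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

def cost_per_good_die(wafer_price, die_area_mm2, yield_fraction):
    """Wafer price divided by the number of good dice on it."""
    good_dice = dies_per_wafer(die_area_mm2) * yield_fraction
    return wafer_price / good_dice

# e.g. a $5000 wafer, a 360 mm^2 die, 40% yield (all placeholders)
print(round(cost_per_good_die(5000, 360, 0.40), 2))
```

The point of the method is that even with uncertain inputs, the arithmetic is transparent: anyone can plug in their own wafer price and yield and see how the cost per die moves.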

so he bumps a bunch of negative articles that have nothing to do with anything at this time to distract.
After the bump, he put up an article about GF100 using the same bad bump material. I didn't know why he bumped the old articles at first, but that explained some of it.
 
I remember Charlie said early this year that nv was not actively developing other Fermi chips such as gf104 because the architecture was so fundamentally broken. Then again, a couple of months before the gf104 launch, he said gf104 would be a failure because the architecture was not scalable, not manufacturable, blah, blah, blah...

Now the tone has changed to "gf104 is good but nv can't make money out of it, ...".

I would be surprised to see people still trust him now.

And I think he totally misunderstood the bump issue. As far as I understand, the material alone did not cause any problem (desktop chips were running OK). It was the mismatch between the thermal spec for notebooks and the material that caused the problem. I could be wrong though.

Oh sure, but his numbers stem from logic: extrapolating wafer prices upward from currently known data. If anything, he could only be too high.


After the bump, he put up an article about GF100 using the same bad bump material. I didn't know why he bumped the old articles at first, but that explained some of it.

The GTX460 "it's too big waaaahh, I hate Nvidia waaaah" article was a disgrace of subjective supposition, assumption and, as usual, opinion presented as fact.

The problem is that no one knows what kind of deal Nvidia did with TSMC for their fab capacity. All we know is that Nv (with an apparently low-yield process) has no shortages, while ATi (with a high-yield process) does. What that means in the real world I couldn't say with any real confidence, but my guess is that Nv were able to secure a bulk discount for buying up capacity in advance, probably with a big upfront payment of the type that AMD couldn't afford.
 
And I think he totally misunderstood the bump issue. As far as I understand, the material alone did not cause any problem (desktop chips were running OK). It was the mismatch between the thermal spec for notebooks and the material that caused the problem. I could be wrong though.

The same bump material was used on Xenos and caused the RRODs. It has nothing to do with the "thermal spec" hoax nV was pushing to the press, but everything to do with it being a crappy solution.
That's why it also likes to exhibit itself on cards like the 8800GTX.
 
The same bump material was used on Xenos and caused the RRODs. It has nothing to do with the "thermal spec" hoax nV was pushing to the press, but everything to do with it being a crappy solution.
That's why it also likes to exhibit itself on cards like the 8800GTX.

So where's Charlie's massive 4 page article on how ATi are a terrible company and how MS should dump them for it...
 
So where's Charlie's massive 4 page article on how ATi are a terrible company and how MS should dump them for it...

ATI didn't produce the Xenos chip! They designed it and sold the design to M$; it was M$'s decision to use said material during manufacturing. Maybe you should read Charlie's articles, sometimes they're full of useful information! ;)
 
I would be surprised to see people still trust him now.
He was right regarding the delay and problems of GF100, he was right regarding the mobile GPUs and packaging issues, and he was right about the shortage of GT200b-based products last year (nVidia was losing money on them).

nVidia told the press that GF100 would be released during Q4/09, then January 2010, etc., and many sites published it. nVidia denied any issues related to their mobile GPUs (many sites published their standpoint) until the trial. nVidia told the press that GT200b-based products were scarce because they had underestimated demand and ordered too few GPUs from TSMC, promising improvement, etc. Many sites published that as the explanation.

Nobody is criticising all the websites which published nVidia's BS, but many members are criticising Charlie, although he was right on all the major points.

As for GF104 - it's a ~360mm² (fixed) 40nm GPU using common GDDR5 modules. There's no reason why it couldn't have been released last fall - both the 40nm node and the memory modules were available at that time. Yet it was released, in A1 revision, three quarters later. Doesn't that mean that nVidia in fact had problems scaling their architecture, especially compared to ATi, which released its mainstream part a month after RV870?
 
As for GF104 - it's a ~260mm² 40nm GPU using common GDDR5 modules. There's no reason why it couldn't have been released last fall - both the 40nm node and the memory modules were available at that time. Yet it was released, in A1 revision, three quarters later. Doesn't that mean that nVidia in fact had problems scaling their architecture, especially compared to ATi, which released its mainstream part a month after RV870?
It might simply mean they weren't as aggressive as ATI. Taping out multiple chips before the first silicon has been debugged is a risky proposition.
 
If there were large problems with the original GF100 chip, we may see significant improvements to the architecture even without a die shrink.
 
neliz said:
The same bump material was used on Xenos and caused the RRODs. It has nothing to do with the "thermal spec" hoax nV was pushing to the press, but everything to do with it being a crappy solution.
That's why it also likes to exhibit itself on cards like the 8800GTX.

Many reliability effects have an exponential behavior depending on a certain factor. Whether or not a particular solution is worthy to be used in a certain environment is then dependent on the presence of that factor.

A bump material that fails after, say, 1 year in one particular environment may be perfectly acceptable in another if the environmental factor that triggers it is sufficiently reduced so that it happens only after 5 years. There may well be some instances in the latter environment where it happens earlier, but that's statistics at play and, if it's not sufficiently higher than other failure modes, a valid trade-off to make. So it's definitely not unlikely that thermal spec was a big factor, very much on the contrary. It was probably the crucial factor indeed that tipped the scale.
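That exponential dependence can be sketched with a standard Arrhenius-style acceleration factor. The activation energy and temperatures below are illustrative placeholders, not measured bump-material parameters:

```python
# Arrhenius-style acceleration: how fast a failure mechanism "uses up" lifetime
# in a hotter environment relative to a cooler one.
#   AF = exp(Ea/k * (1/T_use - 1/T_stress))
# Ea and the temperatures are placeholders, not real bump-material data.
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(ea_ev, t_use_c, t_stress_c):
    """Lifetime ratio between a cooler use condition and a hotter one."""
    t_use = t_use_c + 273.15      # convert Celsius to Kelvin
    t_stress = t_stress_c + 273.15
    return math.exp(ea_ev / K_BOLTZMANN_EV * (1 / t_use - 1 / t_stress))

# A part held at 70 C vs one reaching 95 C (placeholder figures): a modest
# temperature difference already yields a severalfold lifetime difference.
print(round(acceleration_factor(0.7, 70, 95), 1))
```

That severalfold swing from a 25-degree difference is exactly why the same material can be a valid trade-off in one thermal envelope and a field-failure problem in another.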

And, before you start out with righteous indignation, there are many more such decisions being made for chips that go into production.

This is the kind of subtlety that never registers with a journalist like Charlie. Not that it matters: his audience only cares about black and white anyway and doesn't want to be bothered with the gray-gray real engineering.
 
The GTX460 "it's too big waaaahh, I hate Nvidia waaaah" article was a disgrace of subjective supposition, assumption and, as usual, opinion presented as fact.

The problem is that no one knows what kind of deal Nvidia did with TSMC for their fab capacity. All we know is that Nv (with an apparently low-yield process) has no shortages, while ATi (with a high-yield process) does. What that means in the real world I couldn't say with any real confidence, but my guess is that Nv were able to secure a bulk discount for buying up capacity in advance, probably with a big upfront payment of the type that AMD couldn't afford.

Nvidia has contracted out a significant portion of TSMC's capacity, as has ATI. Both will end up getting roughly the same price per wafer. The main difference is that a large portion of Nvidia's 40nm capacity is/has been going to smaller parts (aka not fermi based) while ATI's has almost all been higher end parts.

If you look at what's going on in retail, the fermi parts are starting to pile up in inventory, causing various IHVs to discount them to get them moving. Meanwhile the ATI parts are still selling above the original MSRP and are on partial allocation. The majority of the Nvidia chips that are moving are of the lower-end type.

As far as fab prices, the wafer cost differentials are going to be in the noise and the part costs are going to be primarily about yields and recovery.
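The "primarily about yields" point can be sketched with the classic Poisson die-yield model, Y = exp(-A * D0). The defect density below is a placeholder, not a real TSMC 40nm figure; the die areas are just the sizes being argued about in this thread:

```python
# Poisson die-yield model: yield = exp(-die_area * defect_density).
# Shows why per-part cost is dominated by die size once the wafer price is
# fixed. The defect density is a placeholder, not a real TSMC 40nm number.
import math

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Fraction of dice expected to be defect-free."""
    die_area_cm2 = die_area_mm2 / 100.0
    return math.exp(-die_area_cm2 * defects_per_cm2)

# ~260 mm^2 vs ~360 mm^2 (the GF104 sizes debated here) vs a GF100-class die,
# all at the same placeholder defect density of 0.5 defects/cm^2.
for area in (260, 360, 530):
    print(area, round(poisson_yield(area, 0.5), 2))
```

At the same defect density, yield falls exponentially with area, so two vendors paying roughly the same per wafer can still see very different per-part costs.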
 
Many reliability effects have an exponential behavior depending on a certain factor. Whether or not a particular solution is worthy to be used in a certain environment is then dependent on the presence of that factor.

This is effectively BS. The thermal cycles and environment are well understood, and any company not taking them into account has screwed up.

A bump material that fails after, say, 1 year in one particular environment may be perfectly acceptable in another if the environmental factor that triggers it is sufficiently reduced so that it happens only after 5 years. There may well be some instances in the latter environment where it happens earlier, but that's statistics at play and, if it's not sufficiently higher than other failure modes, a valid trade-off to make. So it's definitely not unlikely that thermal spec was a big factor, very much on the contrary. It was probably the crucial factor indeed that tipped the scale.

The issue is that incompatible materials were chosen which didn't have enough in common wrt thermal expansion issues. This is a general problem and one not mitigated by saying you can't run Tj above y for x number of times. It is simply bad material selection.
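A back-of-envelope Coffin-Manson sketch shows why a thermal-expansion (CTE) mismatch between die and substrate matters so much for bump life. Every constant below is a generic textbook placeholder, not the actual bump or underfill material data:

```python
# Back-of-envelope bump fatigue: shear strain per thermal cycle scales with
# the CTE mismatch between die and substrate times the temperature swing,
# and a Coffin-Manson law maps strain to cycles-to-failure.
# All constants are generic placeholders, not the actual materials involved.

def cycles_to_failure(cte_die_ppm, cte_substrate_ppm, delta_t_c,
                      dnp_mm=10.0, bump_height_mm=0.1,
                      c_fatigue=0.5, exponent=2.0):
    """Coffin-Manson estimate: Nf = C * strain^(-n)."""
    d_cte = abs(cte_substrate_ppm - cte_die_ppm) * 1e-6  # ppm/K -> 1/K
    # Shear strain ~ (distance from neutral point / bump height) * dCTE * dT
    strain = (dnp_mm / bump_height_mm) * d_cte * delta_t_c
    return c_fatigue * strain ** (-exponent)

# Silicon ~3 ppm/K vs an organic substrate ~17 ppm/K (typical textbook values):
# halving the temperature swing quadruples cycle life when n = 2.
print(round(cycles_to_failure(3, 17, 60) / cycles_to_failure(3, 17, 120), 2))
```

The model supports both sides of the argument: the mismatch term is a pure material-selection factor, while the temperature swing is an environment factor, and the two multiply.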

And, before you start out with righteous indignation, there are many more such decisions being made for chips that go into production.

This is the kind of subtlety that never registers with a journalist like Charlie. Not that it matters: his audience only cares about black and white anyway and doesn't want to be bothered with the gray-gray real engineering.

Which is fine if something is a gray-gray. The whole bumpgate saga though was not an issue of gray-gray, it was an issue of basic bad material selection and compatibility.
 
He was right regarding the delay and problems of GF100, he was right regarding the mobile GPUs and packaging issues, and he was right about the shortage of GT200b-based products last year (nVidia was losing money on them).

nVidia told the press that GF100 would be released during Q4/09, then January 2010, etc., and many sites published it. nVidia denied any issues related to their mobile GPUs (many sites published their standpoint) until the trial. nVidia told the press that GT200b-based products were scarce because they had underestimated demand and ordered too few GPUs from TSMC, promising improvement, etc. Many sites published that as the explanation.

Nobody is criticising all the websites which published nVidia's BS, but many members are criticising Charlie, although he was right on all the major points.

As for GF104 - it's a ~260mm² 40nm GPU using common GDDR5 modules. There's no reason why it couldn't have been released last fall - both the 40nm node and the memory modules were available at that time. Yet it was released, in A1 revision, three quarters later. Doesn't that mean that nVidia in fact had problems scaling their architecture, especially compared to ATi, which released its mainstream part a month after RV870?

I thought Charlie said it's ~360mm2.
 
Nvidia has contracted out a significant portion of TSMC's capacity, as has ATI. Both will end up getting roughly the same price per wafer. The main difference is that a large portion of Nvidia's 40nm capacity is/has been going to smaller parts (aka not fermi based) while ATI's has almost all been higher end parts.
While true initially for ATI, it's not true anymore. I'm paraphrasing, but during the conference call Dirk Meyer said allocation in Q2 shifted to the low end as notebook parts ramped up; supplying them is more important than the channel, because if the GPU isn't available for a notebook, there is no notebook.
 
While true initially for ATI, it's not true anymore. I'm paraphrasing, but during the conference call Dirk Meyer said allocation in Q2 shifted to the low end as notebook parts ramped up; supplying them is more important than the channel, because if the GPU isn't available for a notebook, there is no notebook.

He said allocation shifted. He didn't say low-end laptop GPUs took up the majority of wafers.
 