What did they announce? 192bit GF106? Especially when you believe Charlie's stories about manufacturing and cost.

There are still people who listen to Charlie's stories about manufacturing and cost? Why would they do that?
That's because everybody says he's wrong and nobody comes up with any real figures. At least he has a method that makes sense: taking publicly available data on wafer costs and industry-standard calculation tools.
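For what it's worth, the kind of back-of-the-envelope method being described (public wafer prices plus a standard yield model) is easy to sketch. Every number below is a made-up illustration, not Charlie's actual figures or anyone's real contract pricing:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Classic gross-die-per-wafer approximation, with an edge-loss term."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2: float, defect_density_per_cm2: float) -> float:
    """Poisson yield model: Y = exp(-A * D0), with A in cm^2."""
    return math.exp(-(die_area_mm2 / 100.0) * defect_density_per_cm2)

def cost_per_good_die(wafer_cost_usd: float, wafer_diameter_mm: float,
                      die_area_mm2: float, d0: float) -> float:
    """Wafer cost spread over the dies that actually work."""
    gross = dies_per_wafer(wafer_diameter_mm, die_area_mm2)
    good = gross * poisson_yield(die_area_mm2, d0)
    return wafer_cost_usd / good

# Hypothetical inputs: $5000 per 300 mm 40nm wafer, a ~260 mm^2 die
# (the GF104-class size mentioned in this thread), 0.4 defects/cm^2.
print(cost_per_good_die(5000, 300, 260, 0.4))  # roughly $60 per good die
```

The interesting part of the argument is the sensitivity: because yield falls exponentially with die area, doubling the die size more than doubles the cost per good die, which is exactly why these estimates swing so hard on big chips.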
Tuesday's Siggraph presentation: Quadro to the Power of Fermi
Scott Fitzpatrick, Product Manager, NVIDIA
Everyone saying he's wrong still doesn't make them right; maybe he's just baiting for real figures?
Thanks.
Or he's trolling for AMD? I think my suggestion is more likely...
After the bump, he put an article up about GF100 using the same bad bump material. I didn't know why he bumped them first, but that explained some.

So he bumps a bunch of negative articles that have nothing to do with anything at this time to distract.
Oh sure, but his numbers stem from logic: exponentially increasing the price of wafers from currently known data, he could only be too high.
And I think he totally misunderstood the bump issue. As far as I understand, the material alone did not cause any problem (desktop chips were running OK). It was the mismatch between the notebook thermal spec and the material that caused the problem. I could be wrong, though.
The same bump material was used on Xenos and caused the RRODs. It has nothing to do with the "thermal spec" hoax nVidia was pushing to the press, and everything to do with it being a crappy solution.
That's why it also likes to exhibit itself on cards like the 8800GTX.
So where's Charlie's massive four-page article on how ATi are a terrible company and how MS should dump them for it...
He was right regarding the delays and problems of GF100, he was right regarding the mobile GPUs and packaging issues, and he was right about the shortage of GT200b-based products last year (nVidia was losing money on them).

I would be surprised to see people still trust him now.
It might simply mean they weren't as aggressive as ATI. Taping out multiple chips before the first silicon has been debugged is a risky proposition.
neliz said: The same bump material was used on Xenos and caused the RRODs. It has nothing to do with the "thermal spec" hoax nVidia was pushing to the press, and everything to do with it being a crappy solution.
That's why it also likes to exhibit itself on cards like the 8800GTX.
The GTX460 "it's too big waaaahh, I hate Nvidia waaaah" article was a disgrace of subjective supposition, assumption and, as usual, opinion presented as fact.
The problem is that no one knows what kind of deal Nvidia did with TSMC for their fab capacity, all we know is that Nv (with an apparently low yield process) have no shortages, while ATi (with a high yield process) do. What that means in the real world I couldn't say with any real confidence, but my guess is that Nv were able to secure a bulk discount for buying up capacity in advance, probably with a big upfront payment, the type that AMD couldn't afford.
Many reliability effects have an exponential behavior depending on a certain factor. Whether or not a particular solution is worthy to be used in a certain environment is then dependent on the presence of that factor.
A bump material that fails after, say, 1 year in one particular environment may be perfectly acceptable in another if the environmental factor that triggers it is sufficiently reduced so that it happens only after 5 years. There may well be some instances in the latter environment where it happens earlier, but that's statistics at play and, if it's not sufficiently higher than other failure modes, a valid trade-off to make. So it's definitely not unlikely that thermal spec was a big factor, very much on the contrary. It was probably the crucial factor indeed that tipped the scale.
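That trade-off is usually reasoned about with the standard Arrhenius lifetime model, where failure rate accelerates exponentially with temperature. A quick sketch; the activation energy and junction temperatures below are placeholder assumptions, not measured data for any of these chips:

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(ea_ev: float, t_use_c: float, t_stress_c: float) -> float:
    """Arrhenius acceleration factor between two junction temperatures.

    AF = exp(Ea/k * (1/T_use - 1/T_stress)), temperatures in kelvin.
    """
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp(ea_ev / BOLTZMANN_EV * (1.0 / t_use - 1.0 / t_stress))

# Hypothetical numbers: Ea = 0.7 eV, desktop junction at 70 C,
# notebook junction at 95 C.
af = arrhenius_af(0.7, 70.0, 95.0)
print(af)                 # a factor of roughly 5x
print(5.0 / af)           # a ~5-year desktop lifetime shrinks toward ~1 year
```

With these made-up inputs, a 25 C difference in sustained junction temperature turns a 5-year failure mode into a roughly 1-year one, which is exactly the shape of the "fine on desktop, fails in notebooks" scenario described above.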
And, before you start out with righteous indignation, there are many more such decisions being made for chips that go into production.
This is the kind of subtlety that never registers with a journalist like Charlie. Not that it matters: his audience only cares about black and white anyway and doesn't want to be bothered with the gray reality of real engineering.
He was right regarding the delays and problems of GF100, he was right regarding the mobile GPUs and packaging issues, and he was right about the shortage of GT200b-based products last year (nVidia was losing money on them).
nVidia told the press that GF100 would be released during Q4/09, then January 2010, etc. Many sites published it. nVidia denied any issues related to their mobile GPUs (many sites published their standpoint) until the trial. nVidia told the press that GT200b-based products were scarce because they underestimated demand and ordered too few GPUs from TSMC; they promised improvement, etc. Many sites published it as the explanation.
Nobody is criticising all the websites which published this nVidia BS, but many members are criticising Charlie, although he was right on all major points.
As for GF104: it's a ~260mm² 40nm GPU using common GDDR5 modules. There's no reason why it couldn't have been released last fall; both the 40nm node and the memory modules were available at that time. But it was released, in the A1 revision, three quarters later. Doesn't that mean nVidia in fact had problems scaling their architecture, especially compared to ATi, which released its mainstream part a month after RV870?
While true initially for ATI, it's not true anymore. I'm paraphrasing, but during the conference call Dirk Meyer said the allocation in Q2 shifted to the low end as notebook parts ramped up, and supplying them is more important than the channel, because if the GPU isn't available for a notebook, there is no notebook.

Nvidia has contracted out a significant portion of TSMC's capacity, as has ATI. Both will end up paying roughly the same price per wafer. The main difference is that a large portion of Nvidia's 40nm capacity is/has been going to smaller parts (i.e., not Fermi-based), while ATI's has almost all been higher-end parts.