NV to leave IBM?

radar1200gs said:
Yes; you don't think nVidia were optimistic about the processes they planned to use for nothing, do you? Sometime after that, the whole process blew up in everyone's face. Only TSMC really knows how/why, and they sure as hell aren't telling.
So TSMC, after declaring their process to be risky, went to nVidia in private and gave them false assurances?

Isn't it more likely that nVidia simply assumed they could get away with pushing a new, unproven process as they had with previous generations? Pushing fab processes past where their competition was willing to go had stood them in good stead before. It's a large part of how they were able to crush 3Dfx. nVidia had a history of betting the store on risky processes and winning. This time it didn't work out for them, that's all.
 
VtC said:
radar1200gs said:
Yes; you don't think nVidia were optimistic about the processes they planned to use for nothing, do you? Sometime after that, the whole process blew up in everyone's face. Only TSMC really knows how/why, and they sure as hell aren't telling.
So TSMC, after declaring their process to be risky, went to nVidia in private and gave them false assurances?

Isn't it more likely that nVidia simply assumed they could get away with pushing a new, unproven process as they had with previous generations? Pushing fab processes past where their competition was willing to go had stood them in good stead before. It's a large part of how they were able to crush 3Dfx. nVidia had a history of betting the store on risky processes and winning. This time it didn't work out for them, that's all.

Yes, nVidia does take risks, but they had reason to believe this was a risk they would win on. TSMC wasn't warning people away from low-k until very late in the game (I really must dig out my archive CDs and try to find their PDF catalog from circa the NV30 launch).
 
radar1200gs said:
Yes, nVidia does take risks, but they had reason to believe this was a risk they would win on. TSMC wasn't warning people away from low-k until very late in the game (I really must dig out my archive CDs and try to find their PDF catalog from circa the NV30 launch).

Man, you're like a broken record; this ground has been repeatedly covered... let it go!
 
radar1200gs said:
DaveBaumann said:
Yeah, but, all the extra delays were brought about by 3rd party issues out of nVidia's direct control.

Everyone has control of the process they chose.
And they chose the process for a reason. NV30 would not have been possible at .15 microns. NV31 might barely have been possible.

nV30 was cancelled, remember, and so as a mass-market product nV30 turned out to be impossible, even at .13 microns...;) I said it then and will say it again, something is wrong when you say that functionality is only "possible" with a certain manufacturing process. R3x0 convincingly disproved nVidia's "only possible at .13 microns" PR statements, since R3x0 does everything well that nV3x doesn't, and does a lot nV3x didn't do at all, and does it on .15 microns to boot, with good yields besides...;) R3x0 proved nVidia's statements about .13 microns wholly inaccurate in that regard.
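
For a rough sense of the arithmetic behind the ".13 microns" claim, here's a back-of-the-envelope sketch. It assumes an ideal linear shrink (real processes rarely scale this cleanly), and the ~125M transistor figure for nV30 is the commonly cited approximation, not something from this thread:

```python
# Ideal die-area scaling for a .15 -> .13 micron linear shrink.
# Real processes rarely shrink this cleanly; treat as a rough bound.

def area_ratio(old_nm: float, new_nm: float) -> float:
    """Area of the same layout after an ideal linear shrink."""
    return (new_nm / old_nm) ** 2

ratio = area_ratio(150, 130)
print(f"area ratio: {ratio:.2f}")                       # ~0.75
print(f"extra transistors per die: {1/ratio - 1:.0%}")  # ~33%

# The same die area that holds nV30's ~125M transistors at .13 micron
# would hold only about 125M * 0.75 ~ 94M transistors at .15 micron --
# a real advantage, but hardly the difference between a design being
# "possible" and "impossible".
print(f"equivalent .15-micron transistor budget: {125e6 * ratio / 1e6:.0f}M")
```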

As I recall, it was only after the cancellation of nV30 that nVidia started playing the "let's blame the FAB game" for cash and prizes (after months of high-profile PR fantasies were spun about "The Dawn of Cinematic Computing" by nVidia--which was actually instead "The Sunset" for nV30)...;)

I had two theories about that at the time:

(1) Somebody higher-up in engineering in nVidia played a successful CYA game and convinced the boss that nV30 was "really great" and what sucked was TSMC, and, gee, if we move it to IBM things will be all better.

(2) Faced with the reality that nV30 was, quite aside from its own problems relative to yields at TSMC, piss-poor when contrasted with R300, nVidia needed a PR scapegoat to attempt to mitigate the disaster with investors and analysts. And so, the idea of playing the "let's blame the FAB game" was cooked up in desperation as an expensive bandaid.

Of course, it was probably some combination of the above two situations that caused nVidia to start publicizing that it was embarking on a shell game between FABs for nV3x production. Anyway, my last theory is that, after it became evident to nVidia that no FAB anywhere on earth could save its bacon relative to nV3x versus R3x0, the ambitious design plans for nV40 were embarked upon in earnest. The irony to me is that, once again, nVidia finds itself and its plans totally dependent on the abilities of the FABs to execute its designs with yields that will produce meaningful results for the company. The question is not "Is nV40 everything that nV30 should have been?" because the answer is obviously "Yes." The question, this time, though, is "Will the various FABs nVidia employs be more successful with nV40 yields than they were with nV3x?" That is the open question at this time, imo.
 
One comment I do have a problem with here is the idea that ATI can obviously do things in silicon that NVIDIA can't. This is a bit misleading, especially making the distinction of "look at the differences between Intel and AMD". First off, AMD and Intel run their own fabs, and so they do custom cells based upon their particular processes. Companies like NVIDIA and ATI, by contrast, rely on standard cells provided by TSMC or other 3rd party companies. It makes no sense for either of these guys to pay engineers to do custom cells for each design (and a custom cell could take an engineer a month to lay out). Instead, ATI and NVIDIA make processors that are far more parallel than what AMD and Intel put out, and hence they do not need to be clocked so high. CPUs are essentially single-pipe designs that have lots of cache and high clock speeds. GPUs, on the other hand, can do everything in parallel; not only that, but a GPU is locally connected to its memory (and that is a very high speed bus).
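
To make the single-pipe vs. parallel-pipes contrast concrete, here's a minimal illustrative sketch (not anyone's actual hardware or shader; NumPy's vectorization merely stands in for a GPU's parallel pipelines, and the toy luminance "shader" is made up for the example):

```python
# Toy contrast: CPU-style serial pixel loop vs GPU-style data
# parallelism. NumPy vectorization stands in for parallel pipes.
import numpy as np

def shade(r, g, b):
    """Toy 'pixel shader': a simple luminance calculation."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def shade_serial(frame):
    """One pixel at a time down a single pipe, CPU-style."""
    out = np.empty(frame.shape[:2])
    for y in range(frame.shape[0]):
        for x in range(frame.shape[1]):
            r, g, b = frame[y, x]
            out[y, x] = shade(r, g, b)
    return out

def shade_parallel(frame):
    """The same operation applied to every pixel at once, GPU-style."""
    return shade(frame[..., 0], frame[..., 1], frame[..., 2])

frame = np.random.rand(480, 640, 3)
assert np.allclose(shade_serial(frame), shade_parallel(frame))
```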

So basically ATI and NVIDIA utilize these same standard cells for the most part (ATI only uses TSMC, so probably uses the standard cells provided by TSMC that are designed for that process, while NVIDIA utilizes cells provided by TSMC and IBM). Now, even though they are using these same building blocks, I believe that ATI has done a much better job with the design of the R3x0 series than NVIDIA has with the NV3x series. Part of this is complexity, and the tradeoffs associated with it. ATI had a very concrete plan of what to build, and they built it exactly to the spec that DX9 dictated. NVIDIA, on the other hand, tried to make a much more complex part that included many features outside of the DX9 spec. In the end NVIDIA tried to do too much at once, and got burned badly. It is almost as if NV tried to run before it could walk, while ATI took a more logical and conservative approach, and it paid off in spades.

It seems that with this latest generation the two companies are still following their overall philosophies. NVIDIA is releasing a part that is still very complex, and has features above what ATI has (and is rumored to have). ATI, on the other hand, is taking a smaller step feature-wise, but it appears as if the overall performance will be higher than NVIDIA's. Which way will be right? I have no clue; I guess we will have to make a final decision on that one a year from now.
 
Thought ATI used UMC also? The .15 micron R300 design was mostly ** hand built, and ATI likes to maximize the power of the minimum spec with the highest IQ (since the R300). Does what I just said make sense only to me?

"One comment I do have a problem here is that ATI can obviously do things in silicon that NVIDIA can't."( visivi) They did and are still doing it. since the Radeon they have been doing that. Does it mean a better GPU? I guess it means a different mind set.


** (most likely not; will revisit later)
 
karlotta said:
Thought ATI used UMC also? The .15 micron R300 design was mostly hand built, and ATI likes to maximize the power of the minimum spec with the highest IQ (since the R300). Does what I just said make sense only to me?

"One comment I do have a problem here is that ATI can obviously do things in silicon that NVIDIA can't."( visivi) They did and are still doing it. since the Radeon they have been doing that. Does it mean a better GPU? I guess it means a different mind set.

I believe ATI does use UMC also, for the RV280 perhaps?
 
I am not 100% sure, but I think ATI uses UMC for the very low end stuff, or for any overflow conditions that may exist. Nearly all the R3x0 and RV3x0 series are TSMC AFAIK.

Also, where did you hear that R300 was almost all hand built? Not that I am saying you are wrong; I just hadn't heard that before. I know there is a certain amount of hand tuning with any design, but hand building a 105 million transistor part seems somewhat daunting, especially within the time spent developing the chip. Most of the fabless guys use many of the same design tools, just in different combinations (I guess). It would be nice if SirEric would drop by and say, "yes, we did much of the place and route by hand and not with auto-routing software".
 
WaltC said:
radar1200gs said:
DaveBaumann said:
Yeah, but, all the extra delays were brought about by 3rd party issues out of nVidia's direct control.

Everyone has control of the process they chose.
And they chose the process for a reason. NV30 would not have been possible at .15 microns. NV31 might barely have been possible.

nV30 was cancelled, remember, and so as a mass-market product nV30 turned out to be impossible, even at .13 microns...;) I said it then and will say it again, something is wrong when you say that functionality is only "possible" with a certain manufacturing process. R3x0 convincingly disproved nVidia's "only possible at .13 microns" PR statements, since R3x0 does everything well that nV3x doesn't, and does a lot nV3x didn't do at all, and does it on .15 microns to boot, with good yields besides...;) R3x0 proved nVidia's statements about .13 microns wholly inaccurate in that regard.

As I recall, it was only after the cancellation of nV30 that nVidia started playing the "let's blame the FAB game" for cash and prizes (after months of high-profile PR fantasies were spun about "The Dawn of Cinematic Computing" by nVidia--which was actually instead "The Sunset" for nV30)...;)

I had two theories about that at the time:

(1) Somebody higher-up in engineering in nVidia played a successful CYA game and convinced the boss that nV30 was "really great" and what sucked was TSMC, and, gee, if we move it to IBM things will be all better.

(2) Faced with the reality that nV30 was, quite aside from its own problems relative to yields at TSMC, piss-poor when contrasted with R300, nVidia needed a PR scapegoat to attempt to mitigate the disaster with investors and analysts. And so, the idea of playing the "let's blame the FAB game" was cooked up in desperation as an expensive bandaid.

Of course, it was probably some combination of the above two situations that caused nVidia to start publicizing that it was embarking on a shell game between FABs for nV3x production. Anyway, my last theory is that, after it became evident to nVidia that no FAB anywhere on earth could save its bacon relative to nV3x versus R3x0, the ambitious design plans for nV40 were embarked upon in earnest. The irony to me is that, once again, nVidia finds itself and its plans totally dependent on the abilities of the FABs to execute its designs with yields that will produce meaningful results for the company. The question is not "Is nV40 everything that nV30 should have been?" because the answer is obviously "Yes." The question, this time, though, is "Will the various FABs nVidia employs be more successful with nV40 yields than they were with nV3x?" That is the open question at this time, imo.
Walt, Walt, Walt...
NV30 was released - in limited quantities, then cancelled. nVidia had to try and release something while they waited for IBM to become production ready for NV35.
 
radar1200gs said:
DaveBaumann said:
Yeah, but, all the extra delays were brought about by 3rd party issues out of nVidia's direct control.

Everyone has control of the process they chose.
And they chose the process for a reason. NV30 would not have been possible at .15 microns. NV31 might barely have been possible.

And this indicates a process or fab problem more than a design problem?

What's possible for ATi isn't necessarily possible for nVidia and vice-versa (similar to AMD vs Intel).

You think it might have something to do with their differing implementations of the same thing?

What do you think finally drove nVidia from TSMC? It wasn't the low-k failure, which was disappointing - it was the fact that even on bulk silicon TSMC couldn't produce a die worth a damn.

Boy, if I worked for TSMC right now I would be very offended by your comment. I just can't believe that, after all of the articles being pushed out by companies going on about TSMC's successes, and all of the articles talking about IBM's failures, you still choose to just listen to whatever NV says and treat it as the God's honest truth. Perhaps TSMC just told Nvidia it wasn't going to waste any more good silicon making a product with design issues that would drain money and have nothing to show for it.

Now, if I bring TSMC a design for a new processor I am making - one that cuts back on the things that would make it faster than the competition, concentrating instead on other matters in the design for specific features, but relying on its clock speed being, say... 4 THz to be competitive and thus profitable - and TSMC cannot manufacture my chips to run at the speed stated in my plans, is that my fault, or TSMC just being a crappy foundry?
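
That hypothetical is easy to put numbers on. As a minimal sketch using the commonly cited launch configurations (4 pixel pipes at 500 MHz for the 5800 Ultra, 8 pipes at 325 MHz for the 9700 Pro; these figures are approximations, not from this thread), a narrow design that misses its clock target loses far more than a wide one:

```python
# Back-of-the-envelope pixel fill rate: pipes * clock.
# Figures are commonly cited launch configurations, used only to
# illustrate why a narrow, clock-dependent design is more exposed
# to a foundry missing its speed targets.

def fill_rate_mpix(pipes: int, clock_mhz: int) -> int:
    """Theoretical pixel fill rate in megapixels per second."""
    return pipes * clock_mhz

print(fill_rate_mpix(4, 500))  # NV30 at target clock: 2000 Mpix/s
print(fill_rate_mpix(8, 325))  # R300:                 2600 Mpix/s
print(fill_rate_mpix(4, 400))  # NV30 if the fab delivers only 400 MHz:
                               # 1600 Mpix/s, now well behind R300
```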
 
JoshMST said:
....
It seems that with this latest generation the two companies are still following their overall philosophies. NVIDIA is releasing a part that is still very complex, and has features above what ATI has (and is rumored to have). ATI, on the other hand, is taking a smaller step feature-wise, but it appears as if the overall performance will be higher than NVIDIA's. Which way will be right? I have no clue; I guess we will have to make a final decision on that one a year from now.

Yes, but I don't get you here about "philosophies"...;) Its far better performance and yields aside, R3x0 just up and walked away from nV3x in terms of "features" and functionality, didn't it? So I'd definitely say that nVidia's "philosophy" underwent a paradigm shift between nV3x and nV4x, and this was done in recognition of the success ATi's had with its R3x0 feature-driven philosophy (especially API function hardware support.) nV4x is definitely not "your father's" nVidia gpu in that regard...:D If ATi is indeed taking "smaller steps" with regard to hardware support between R3x0 and R4x0 than nVidia between nV3x and nV4x, it's only because ATi took most of their giant steps with R3x0, whereas with nV3x nVidia took only a few baby steps above nV2x. IMO, of course...;)
 
radar1200gs said:
Walt, Walt, Walt...
NV30 was released - in limited quantities, then cancelled. nVidia had to try and release something while they waited for IBM to become production ready for NV35.

The fate of nV30 was that it was cancelled before it hit retail. Only some few thousand *pre-orders* for the product were filled (estimates vary as to how many)--but no nV30 product ever hit retail store shelves for purchase by retail customers that I was aware of--nV30 was cancelled while still in the pre-order stage. I believe the exact quote by JHH from nVidia in regards to the cancellation was "nV30 was a failure," and I believe that's pretty close if not exact. IIRC, it was not until after nV30 got the axe that nVidia began playing the nV3x FAB production shell games.
 
WaltC said:
radar1200gs said:
Walt, Walt, Walt...
NV30 was released - in limited quantities, then cancelled. nVidia had to try and release something while they waited for IBM to become production ready for NV35.

The fate of nV30 was that it was cancelled before it hit retail. Only some few thousand *pre-orders* for the product were filled (estimates vary as to how many)--but no nV30 product ever hit retail store shelves for purchase by retail customers that I was aware of--nV30 was cancelled while still in the pre-order stage. I believe the exact quote by JHH from nVidia in regards to the cancellation was "nV30 was a failure," and I believe that's pretty close if not exact. IIRC, it was not until after nV30 got the axe that nVidia began playing the nV3x FAB production shell games.
*bangs head against brick wall*
NV30 was released. People own them. Whether or not it got to retail is irrelevant (I think a few 5800 non-Ultras did find their way into some stores, btw).
Yes, Huang did say NV30 was a failure, and a joint conference between nVidia and TSMC was planned to let people know why it was a failure, except that TSMC never showed for that conference...

The only reason NV30 was released at all was because they couldn't get NV35 into production quickly enough at IBM to replace NV30 (it taped out very soon after NV30's official launch). nVidia decided they couldn't wait any longer, whether that was a wise decision or not is a matter of opinion.
 
radar1200gs said:
WaltC said:
radar1200gs said:
Walt, Walt, Walt...
NV30 was released - in limited quantities, then cancelled. nVidia had to try and release something while they waited for IBM to become production ready for NV35.

The fate of nV30 was that it was cancelled before it hit retail. Only some few thousand *pre-orders* for the product were filled (estimates vary as to how many)--but no nV30 product ever hit retail store shelves for purchase by retail customers that I was aware of--nV30 was cancelled while still in the pre-order stage. I believe the exact quote by JHH from nVidia in regards to the cancellation was "nV30 was a failure," and I believe that's pretty close if not exact. IIRC, it was not until after nV30 got the axe that nVidia began playing the nV3x FAB production shell games.
*bangs head against brick wall*
NV30 was released. People own them. Whether or not it got to retail is irrelevant (I think a few 5800 non-Ultras did find their way into some stores, btw).
Yes, Huang did say NV30 was a failure, and a joint conference between nVidia and TSMC was planned to let people know why it was a failure, except that TSMC never showed for that conference...

The only reason NV30 was released at all was because they couldn't get NV35 into production quickly enough at IBM to replace NV30 (it taped out very soon after NV30's official launch). nVidia decided they couldn't wait any longer, whether that was a wise decision or not is a matter of opinion.


Umm, I was under the impression that NV35 was fabbed at TSMC, and that NV36 (the FX 5700 Ultra and 5700) were the first IBM-fabbed chips?
 
radar1200gs said:
*bangs head against brick wall*

I could be coy and say that this kind of behavior may explain some of your opinions, but I won't do it. I might recommend using a pillow the next time you feel the urge coming on, though...;)

NV30 was released. People own them. Whether or not it got to retail is irrelevant (I think a few 5800 non-Ultras did find their way into some stores, btw).

NV30 was cancelled, not released. And it would not be until late August of '03 that nVidia would claim that the chip it did release through its OEMs into the retail channels, nV35, would begin hitting its yield targets. nV38 came still later in the year. Neither nV35 nor nV38 was cancelled as nV30 was. Here's how it worked:

(1) nVidia announces nV30 in Late '02.

(2) nVidia OEMs subsequently announce products of their own based on the nV30 reference design supplied by nVidia, and place their advance orders for the gpus (and other things) with nVidia.

(3) Prior to receiving enough nV30 gpus from nVidia to take their nV30 products to retail distribution, nVidia's AIB partners at the time began the process known as "pre-orders," which allows them to take advance orders for an announced product they will ship later, which sometimes involves taking money in advance from their customers and sometimes does not, depending on the OEM and its policies in that regard.

(4) Prior to meeting the orders nVidia's AIB partners had placed with it for nV30 gpus in quantities sufficient to ship their nV30 products into the retail channel, nVidia informs its AIB partners that nV30 is a dead duck and they will not be filling those orders for nV30 after all--with one single exception:

nVidia agreed in the cases of some of its AIB partners, like BFG, to ship them enough nV30 product to cover the pre-orders they'd received advance payment for. And that is all that ever happened, IIRC.

I can't fathom why you'd say that "Whether or not it got to retail is irrelevant," since clearly nVidia would be out of business in short order if all of its gpus followed the nV30 pattern in that respect...;) The idea behind launching a gpu is to be able to sell it in quantities sufficient to meet the demand for it in the retail channels served by your AIB partners (otherwise, there's no point in making it, since your partners won't buy it if they cannot sell enough of it to turn a profit.)

Yes, Huang did say NV30 was a failure,

Which presumably is why he cancelled its production, right?

and a joint conference between nVidia and TSMC was planned to let people know why it was a failure, except that TSMC never showed for that conference...

The problem is that although JHH stated that nV30 had failed (which we both agree he did), he did not state that it was a failure because of TSMC at that time. Instead, the entire "blame the FAB game" was conducted by way of a PR campaign which caused people to *infer* that "nV30 was TSMC's fault." JHH never stated that it was. People will infer what they may, but in this case it seemed to me at the time highly unlikely that TSMC was responsible for nV30's failure, and I said as much then. I think that assumption has been proved correct, and nVidia still uses TSMC for a good portion of its business, accordingly.

As to the hypothetical "conference" you allege, from which you infer TSMC was going to stand up and say: "We suck and that's why nV30 sucks," I think TSMC would have to be insane to make such a statement, most especially if TSMC knew perfectly well it had nothing to do with nV30's failure.

The only reason NV30 was released at all was because they couldn't get NV35 into production quickly enough at IBM to replace NV30 (it taped out very soon after NV30's official launch). nVidia decided they couldn't wait any longer, whether that was a wise decision or not is a matter of opinion.

I hope you can see what a ridiculous statement this is...;) What you are saying is that nVidia never planned to ship nV30, and that the entire nV30 announcement was a deliberate sham; and that the entire time nVidia was working with TSMC on nV30, through one delay after another, that its use of TSMC was also a sham, since nVidia planned all along not to use TSMC to manufacture nV30--but the *real and secret plan* nVidia had was to actually make nV35 later on at IBM instead...;) Heh...;) (As some have pointed out, though, it seems like it was TSMC, not IBM, which fabbed nV35, which completely erodes your notion here.)

Radar, I think you are registering one too many blips for your own good...:D If nVidia had made a habit out of doing as you suggest, the company would have expired long ago from general incompetence laced with a healthy dose of insanity...:D
 
Chrisray:
NV35 was fabbed at IBM, not TSMC.

WaltC:
Only people like you cause me to want to hit my head against a brick wall. Tell the owners of 5800s and 5800 Ultras that it was never released. Take your garbage elsewhere.
 