IBM to produce Nvidia chips

Sabastian said:
Yeah, but they will lose those margins on the high-end wafers. This also speaks volumes about TSMC's ability to be competitive on the high end in the future, or at least what Nvidia thinks of it. OTOH, Nvidia will have a better fab, and that's something they appear to have desperately needed.
That's an issue for ATI as well, BTW.
 
I think it's nice to see a good fab in America getting some more work. I doubt that Nvidia's motivation has anything to do with a sense of nationalism, which would be a terrible reason anyway. The decision is more a reflection of the high-quality, productive workforce that the IBM plant can offer Nvidia.
 
I hope ATi makes a similar move, and asks Intel for help ;)

Would really love to see the R400 - the R400 as they want it to be,

not TSMC's version, i.e. what the process can allow.

I mean, the TSMC version of the NV3X architecture is a joke.
 
Hmm, don't bet on IBM being used for anything other than the highest-end chips - they are too expensive to be used for mainstream parts. This may also mean that high-end chips go up in price.
 
Given that we've already had comments saying the chip being produced would be an updated version of the GeForce FX, I doubt that implies NV40, since NV40 is not likely to be a 'GeForce FX'.
 
RussSchultz said:
Maybe NVIDIA is dual sourcing? (I.e. they're both making the chip?)

Russ, is that economically viable given the complexity of the ASIC and the obvious differences between the respective 0.13u processes? How easy is it to migrate designs between processes of the same basic feature size?

I would guess such a transition is highly automated these days and relatively swift, so it could well be worth it if TSMC is capable of producing the "non-Ultra" portion of the overall NV35 yield at a lower cost per die than IBM.

MuFu.
 
Our company took the (AC97) industry by storm. We cut costs by a variety of means, one of which was "fab of the week"--making sure our chip could be fabbed at any number of places. It's hell on the production engineers, but when you've got fabs actually competing for your business and you can switch between them, it can pay off in ways that more than make up for the yield loss that comes from not being able to optimize for one process.

Also, some fabs claim their process is "the same" as TSMC's. We've had some fabs approach us and offer free mask sets and reduced lot costs to woo our business, or at least get us to give them a try.

Additionally, with as many earthquakes as Taiwan has, it's only prudent to dual-source chips, especially when you're looking at that sort of volume.
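
As a rough illustration of that trade-off, the back-of-the-envelope math looks something like the sketch below. Every figure in it is invented purely for the example (mask-set cost, volume, per-die prices, yield penalty); none of them are real numbers from any fab or chip.

```python
# Hypothetical dual-sourcing trade-off: does a price concession won by
# playing fabs off against each other outweigh the extra mask set and the
# yield hit from not optimizing the design for a single process?
# Every number below is an assumption for illustration only.

SECOND_MASK_SET_NRE = 600_000.0  # extra 0.13u mask set, hypothetical USD
VOLUME              = 2_000_000  # units shipped over the product's life
SINGLE_SOURCE_PRICE = 12.00      # negotiated cost per good die, one fab
DUAL_SOURCE_PRICE   = 10.80      # price after fabs compete (hypothetical 10% cut)
YIELD_PENALTY       = 0.50       # extra cost/die from a process-neutral design

single_source_total = SINGLE_SOURCE_PRICE * VOLUME
dual_source_total   = (DUAL_SOURCE_PRICE + YIELD_PENALTY) * VOLUME + SECOND_MASK_SET_NRE

print(f"Single source: ${single_source_total:,.0f}")
print(f"Dual source:   ${dual_source_total:,.0f}")
print("Dual sourcing pays off" if dual_source_total < single_source_total
      else "Stick with one fab")
```

With those made-up numbers the competitive discount swamps the extra mask set at high volume, which is the point: the economics only work when the volume is large enough to amortize the duplicated up-front costs.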
 
Interesting, thanks. I guess competition and not putting all your eggs in one basket are always good things.

RussSchultz said:
Our company took the (AC97) industry by storm.

Heh... not sure why, but that made me laugh. About time someone did. :LOL:

MuFu.
 
Well, somebody has to make them. :p

Truthfully, though, Realtek has managed to undercut our prices, but their audio quality is not as good. PC components is a cutthroat business.
 
I think it's much more likely that IBM will be fabbing some of the relatively low-end NVidia chips (integrated).

There is a general belief in the industry that chips produced at IBM fall under IBM's cross licensing with (among others) Intel.
 
The main downside is that the high quality comes at a high price, so this will make it even harder for Nvidia to lower prices and still get decent margins.

It really depends on yields. If yields are higher at IBM, it's possible the difference in yields might, at least partially, offset the higher foundry price.
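
To put rough numbers on that, the break-even point is just cost per good die. The sketch below uses entirely hypothetical wafer prices, die counts, and yields (neither foundry's pricing nor NV35 yields were ever public), so treat it as arithmetic only.

```python
# Hypothetical cost-per-good-die comparison: all prices, die counts and
# yields below are made-up illustrative numbers, not real TSMC/IBM data.

def cost_per_good_die(wafer_cost, gross_dies_per_wafer, yield_fraction):
    """Wafer cost divided by the number of dies that actually work."""
    return wafer_cost / (gross_dies_per_wafer * yield_fraction)

# Assumed numbers for illustration only.
TSMC_WAFER_COST = 3000.0   # USD per wafer (hypothetical)
IBM_WAFER_COST  = 4500.0   # premium foundry price (hypothetical)
GROSS_DIES      = 120      # candidate dies per wafer for a large GPU (hypothetical)

tsmc_yield = 0.30          # hypothetical yield at the cheaper foundry

# Yield IBM would need just to match TSMC's cost per good die:
break_even_yield = IBM_WAFER_COST / (TSMC_WAFER_COST / tsmc_yield)

print(f"TSMC cost/good die: ${cost_per_good_die(TSMC_WAFER_COST, GROSS_DIES, tsmc_yield):.2f}")
print(f"IBM needs a yield of at least {break_even_yield:.0%} to break even on cost/die")
```

In other words, a 50% higher wafer price needs roughly a 50% relative improvement in yield just to break even, before counting any binning or clock-speed benefits.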
 
RGB said:
I think it's much more likely that IBM will be fabbing some of the relatively low-end NVidia chips (integrated).

There is a general belief in the industry that chips produced at IBM fall under IBM's cross licensing with (among others) Intel.

It's already been confirmed by NVIDIA's management that the high-end parts will be fabbed at IBM.
 
I see it as being no less a challenge for IBM than it was for TSMC. It's one thing to build a 190M-transistor .13 micron Power4+ CPU when much of the transistor count is devoted to cache (and expense and operating temperature are secondary concerns), and quite another to fab a .13 micron ~150M+ transistor GPU aimed at the x86 desktop market, where size, temperature, and expense are critical considerations.

I kind of wonder, though, if nVidia isn't simply trying to double its odds of success by making this announcement and shift in strategy. It's pretty clear, though, that nVidia thinks its problems are fab-related rather than stemming from basic GPU design complications--which seems to me an odd position to take, considering no one out there (including IBM at this stage) can yet make the silicon they want at useful yields. Also, if indeed we won't see any IBM-fabbed nVidia chips until 2004, as the Silicon Strategies piece suggests ( http://www.siliconstrategies.com/story/OEG20030327S0057 ), one wonders about the viability of NV35 at this stage.
 
Cache produces less heat than other types of transistors? I was under the impression it was the opposite. Or is cache composed of something other than transistors, and thus is excluded from the transistor count?

I'm not so sure size and temperature are crucial considerations for the high end, as nV has demonstrated (though expense is, obviously, a universal concern). It seems to me the performance crown is all-important for the high-end.
 
Pete said:
Cache produces less heat than other types of transistors? I was under the impression it was the opposite. Or is cache composed of something other than transistors, and thus is excluded from the transistor count?

I'm not so sure size and temperature are crucial considerations for the high end, as nV has demonstrated (though expense is, obviously, a universal concern). It seems to me the performance crown is all-important for the high-end.

Not necessarily less heat, of course, just less complex (a good bit less complex). In current 3D chips, areas of transistors routinely shut down or go into a reduced-power state when not being used, which is why most current 2D/3D chips run much cooler when doing 2D rather than 3D operations. I.e., you can't premise a successful .13 micron GPU on a successful .13 micron CPU with roughly the same number of transistors, as the operational requirements (indeed, the design itself) are very different between the two--you can't use a GPU as a CPU and vice versa. The "number of transistors" is merely the tip of the iceberg.
 