NV to leave IBM?

Chalnoth said:
1. I really don't see nVidia publicly stating something that would damage their relationship with TSMC. They depend on TSMC pretty heavily (but you'll note that it was around that time that they started to move to IBM).
2. He didn't rule out other factors (i.e. fab problems at TSMC).
I didn't say it was the only reason, but they cited it as the foremost one.
And as we have seen, TSMC was very annoyed by the NV30 trouble. If it had really been their own fault, they could hardly have blamed anyone else.
Basically, they said, "No more BS like this. We won't tolerate it any more."
3. He did not say anything about 16-bit FP.
What are you talking about? The FP32 units and the FP16 units are the same 32 functional units in the FX; they are one and the same thing.
Be decent, Chalnoth, you know what that means. Don't pretend the FX's FP16 units are separate from its FP32 units.

I still believe that what happened was this: when nVidia first decided on the process for the NV30, TSMC had an optimistic outlook for the possibilities of their low-k .13 micron process. As the NV30 neared launch, it became apparent that this process would not be available on time, so lots of extra work had to be done to get the NV30 operational on a normal .13 micron process, which reduced the performance of the final product.
That's purely your speculation.
 
Chalnoth said:
binmaze said:
I couldn't find the exact article I was referring to, but here is a close one:
And contrary to the rumor mill's claim that the 0.13 micron manufacturing process delayed taping out the GeForceFX, Kirk blamed implementing these 32 processing units:

One of the reasons that NV30 took us so long is that everything top to bottom is 128-bit floating-point. So for the first time it's possible to make the pictures in hardware that you would make in software, because you don't trash your precision anywhere in the pipe.

Link
1. I really don't see nVidia publicly stating something that would damage their relationship with TSMC. They depend on TSMC pretty heavily (but you'll note that it was around that time that they started to move to IBM).
2. He didn't rule out other factors (i.e. fab problems at TSMC).
3. He did not say anything about 16-bit FP.

I still believe that what happened was this: when nVidia first decided on the process for the NV30, TSMC had an optimistic outlook for the possibilities of their low-k .13 micron process. As the NV30 neared launch, it became apparent that this process would not be available on time, so lots of extra work had to be done to get the NV30 operational on a normal .13 micron process, which reduced the performance of the final product.
This really isn't the thread to be delving into all the woes that befell NV30, but I think what you will find is that David Kirk was saying something similar to the following: "NV30 took so long to develop in part because of the challenges we faced in implementing a top-to-bottom 128-bit FP architecture." The problems with low-k and the forced redesign on TSMC's bulk 0.13 micron process only added to these delays.
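
As an aside on the precision point: Kirk's "you don't trash your precision anywhere in the pipe" is easy to demonstrate in general terms. Here is a minimal, hypothetical sketch (plain numpy as a stand-in, nothing NV30-specific; the 0.1 values and the loop count are made up for illustration) of how rounding every intermediate result to 16 bits can wreck a long chain of operations, while carrying 32 bits throughout does not:

```python
import numpy as np

# Hypothetical illustration, not NV30 code: accumulate 10,000 small
# contributions, the way a long multi-pass pipeline might.
vals = [0.1] * 10000

# "fp16 pipe": every intermediate result is rounded to 16 bits.
sum16 = np.float16(0.0)
for v in vals:
    sum16 = np.float16(sum16 + np.float16(v))

# "fp32 pipe": every intermediate result keeps 32 bits.
sum32 = np.float32(0.0)
for v in vals:
    sum32 = np.float32(sum32 + np.float32(v))

print(sum16)  # 256.0  -- the fp16 sum stalls once 0.1 drops below half an ulp
print(sum32)  # ~999.9 -- close to the exact answer of 1000
```

The same effect, scaled down, is the argument for keeping full FP32 from top to bottom rather than dropping to lower precision mid-pipe.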
 
The problems with low-k and the forced redesign on TSMC's bulk 0.13 micron process only added to these delays.

In any case, the delay due to it was no less than six months, if not more.

While I disagree with the POV that TSMC was supposedly "optimistic" about its low-k 130nm at the time (AFAIK it was the exact opposite, and NVIDIA had been warned that it was still unsafe...), I do agree with Chalnoth that the redesign was the most important factor in the NV30 delay.

Does it help that during summer 2002 not a single fully operational chip came back from the labs?
 
Ailuros said:
The problems with low-k and the forced redesign on TSMC's bulk 0.13 micron process only added to these delays.

In any case, the delay due to it was no less than six months, if not more.

While I disagree with the POV that TSMC was supposedly "optimistic" about its low-k 130nm at the time (AFAIK it was the exact opposite, and NVIDIA had been warned that it was still unsafe...), I do agree with Chalnoth that the redesign was the most important factor in the NV30 delay.

Does it help that during summer 2002 not a single fully operational chip came back from the labs?
Don't forget that after NV30 was laid out again for bulk 0.13, nVidia had to do it all over again because TSMC changed the design libraries; little wonder it took an extra six months and more.

As for the redesign being the most important part of the delay: subtract six months and compare the development time to prior NV chips. If (yes, people, I know what a what-if scenario is...) the low-k process had worked as intended and no redesign had been required, NV30 would have launched just before or alongside R300 (nVidia would have liked to launch in April, but an earlier low-k setback ruled that out).
 
As for the redesign being the most important part of the delay: subtract six months and compare the development time to prior NV chips. If (yes, people, I know what a what-if scenario is...) the low-k process had worked as intended and no redesign had been required, NV30 would have launched just before or alongside R300 (nVidia would have liked to launch in April, but an earlier low-k setback ruled that out).

In the grander scheme of things, and especially if I look at past roadmap changes, NV30 was in fact delayed by far more than just nine months.

Truly, even setting aside the "nine months of low-k 130nm kicked in our face" delay: I saw NV25 being delayed by half a year so that the Ti200/500 could take its place, and NV25 arriving almost a year after NV20, filling the gap that should have been filled back then by NV30. If that sounds too confusing, I'm sure one can easily find a public statement mentioning that NV30 was to be their spring 2002 product. There's your FP-whatever hurdle, and it accounts for nothing past spring 2002.

In fact, NV30 must have been under development longer than anything else at NV (all told).
 
Yeah, but all the extra delays were brought about by third-party issues outside of nVidia's direct control. It's not as if they caused low-k to foul up twice (look at that NASA document in one of the threads to see how low-k has stalled industry-wide several times now) or caused the design library problems.
 
DaveBaumann said:
Yeah, but all the extra delays were brought about by third-party issues outside of nVidia's direct control.

Everyone has control over the process they choose.
And they chose the process for a reason. NV30 would not have been possible at .15 microns; NV31 might barely have been possible.
 
radar1200gs said:
DaveBaumann said:
Yeah, but all the extra delays were brought about by third-party issues outside of nVidia's direct control.

Everyone has control over the process they choose.
And they chose the process for a reason. NV30 would not have been possible at .15 microns; NV31 might barely have been possible.

TSMC warned about low-k 130nm, not "plain" 130nm. It was obviously NVIDIA's choice to take that risk.
 
I think ATI has proven that to be incorrect, certainly in the case of NV31.

However, 130nm could have been targeted from the off.
 
It's still their fault if they designed for a process that wasn't ready.

No one is saying it is TSMC's fault that ATI had to push back the original R400. But at least they were clever enough to acknowledge that the process would not match their design, and to change it.
 
Ailuros said:
radar1200gs said:
DaveBaumann said:
Yeah, but all the extra delays were brought about by third-party issues outside of nVidia's direct control.

Everyone has control over the process they choose.
And they chose the process for a reason. NV30 would not have been possible at .15 microns; NV31 might barely have been possible.

TSMC warned about low-k 130nm, not "plain" 130nm. It was obviously NVIDIA's choice to take that risk.

What do you think finally drove nVidia from TSMC? It wasn't the low-k failure, disappointing as that was; it was the fact that even on bulk silicon TSMC couldn't produce a die worth a damn.

DaveBaumann said:
I think ATI has proven that to be incorrect, certainly in the case of NV31.

However, 130nm could have been targeted from the off.
Yeah, but how much time elapsed before ATi proved nVidia wrong?
 
What do you think finally drove nVidia from TSMC? It wasn't the low-k failure, disappointing as that was; it was the fact that even on bulk silicon TSMC couldn't produce a die worth a damn.

I can certainly see that if I take a look at what ATI has in store, which will of course become more apparent after they've announced their R420.

In straight reply to your question, though, I'd suggest you take another look at the thread title and the speculation that NV might move its high-end designs back to TSMC in the foreseeable future, or is at least considering it.

Those two paragraphs above clearly show just what a bad foundry TSMC really is.
 
Yeah, but how much time elapsed before ATi proved nVidia wrong?

Fall 2002: R300 @ 325MHz, 107M transistors, 150nm (that in reply to NV31 being barely possible on 150nm).
 
Ailuros said:
Yeah, but how much time elapsed before ATi proved nVidia wrong?

Fall 2002: R300 @ 325MHz, 107M transistors, 150nm (that in reply to NV31 being barely possible on 150nm).
You are forgetting that ATi and nVidia differ pretty radically in how they implement their transistors. What's possible for ATi isn't necessarily possible for nVidia, and vice versa (similar to AMD vs. Intel).
 
That's one point you should have considered before claiming that TSMC in the past wasn't able to produce a die worth a damn (on 130nm or whatever else). I'm pretty certain that A0 "vaporware" silicon on TSMC's 130nm, with a transistor count nearly comparable to NV31's, has existed since early 2002.
 
Ailuros said:
That's one point you should have considered before claiming that TSMC in the past wasn't able to produce a die worth a damn (on 130nm or whatever else). I'm pretty certain that A0 "vaporware" silicon on TSMC's 130nm, with a transistor count nearly comparable to NV31's, has existed since early 2002.
Yes; you don't think nVidia were optimistic about the processes they planned to use for nothing, do you? Sometime after that, the whole process blew up in everyone's face. Only TSMC really knows how and why, and they sure as hell aren't telling.

Personally, I suspect it all has to do with the metal layers, and that this has been TSMC's trouble all along. The blowup I mentioned above probably struck around the time it was discovered that copper vias (bridges between the metal layers) could migrate. TSMC was one of the last fabs to rectify this problem.
 
radar1200gs said:
Only TSMC really knows how and why, and they sure as hell aren't telling.

Wrong... TSMC and nVidia both know why, and neither of them is telling.

Of course, ATI just shrugs their shoulders and says, "We don't know why nVidia had problems either..."
 