NV to leave IBM?

radar1200gs said:
Chrisray
NV35 was fabbed at IBM, not TSMC.

WaltC
Only people like you cause me to want to hit my head against a brick wall. Tell the owners of 5800s and 5800 Ultras that it was never released. Take your garbage elsewhere.
He didn't say that. He said it was stopped. He knows there were "thousands" made (only by nVidia, [with MSI's help?]).



I really don't think nVidia is going to leave IBM; there's nowhere else to go, is there?
 
radar1200gs said:
Chrisray
NV35 was fabbed at IBM, not TSMC.

WaltC
Only people like you cause me to want to hit my head against a brick wall. Tell the owners of 5800s and 5800 Ultras that it was never released. Take your garbage elsewhere.


Surprises me. Guess I was wrong all this time ;p
 
AlphaWolf said:
radar1200gs said:
NV35 was fabbed at IBM, not TSMC.

Well, there were rumors of both, but this and this seem to suggest that NV35 was fabbed at TSMC.

IBM fabbing NV36. It also mentions NV38 at TSMC.


<edit>No Chris, you were right. ;)


So I was right? Well, I don't feel so stupid now. Thanks ;p 8)
 
Walt C: I think few will argue that in terms of features, the NV30 was more "advanced" than the R300, but it of course couldn't perform worth beans as compared to the competition. Examples include long shader support, FP16/FP32 support, PS/VS 2.0a, and a couple of other little things. So yeah, ATI was able to deliver PS/VS 2.0 performance far beyond what the NV30 could, but when you look at the overall feature set, the NV30 was slightly richer.
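
(For the shader-length point in particular, the gap is visible straight from the D3D9 caps. A minimal probe as a sketch only: untested, and assuming the DX9 SDK's d3d9.h and d3d9.lib.)

```cpp
// Sketch: query the PS 2.x caps that distinguish a 2.0a-class part like
// NV30 from a base ps_2_0 part. Assumes the DX9 SDK (d3d9.h, link d3d9.lib).
#include <d3d9.h>
#include <cstdio>

int main()
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    D3DCAPS9 caps;
    d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

    // Base ps_2_0 guarantees 96 instruction slots; 2.0a-class hardware
    // advertises up to 512 here, which is the "long shader support" above.
    printf("PS 2.x instruction slots: %d\n", caps.PS20Caps.NumInstructionSlots);
    printf("PS 2.x temp registers:    %d\n", caps.PS20Caps.NumTemps);

    d3d->Release();
    return 0;
}
```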

And yes, NV35 and NV38 are made by TSMC. NV36 was the first chip to come off the IBM 130 nm FSG line. Again, most indicators point to the 130 nm bulk process as being very solid; it is problems with 130 nm SOI and 90 nm that have caused major losses for IBM (in terms of both good die and money poured into those processes to get them up to speed). If I were a betting man (and I guess I kinda am), I would think that by mid-summer of this year most of IBM's problems will be solved and we will probably see increased yields on their advanced processes. Makes the idea of a fall refresh for NVIDIA somewhat more interesting.
 
I would think that by mid-summer of this year most of IBM's problems will be solved and we will probably see increased yields on their advanced processes. Makes the idea of a fall refresh for NVIDIA somewhat more interesting.
Unless they're already using TSMC and .13u low-K for a PEG-native NV40 (NV45, in other words).

Hint hint, wink wink.
 
The Baron said:
I would think that by mid-summer of this year most of IBM's problems will be solved and we will probably see increased yields on their advanced processes. Makes the idea of a fall refresh for NVIDIA somewhat more interesting.
Unless they're already using TSMC and .13u low-K for a PEG-native NV40 (NV45, in other words).

Hint hint, wink wink.

Well, good point. It's not like a company the size of NVIDIA could afford not to mend the bridges between itself and TSMC, nor would you expect it to leave one semiconductor manufacturer before it has scheduled availability from another.

It really was a shame that the .13u low-k process didn't work out for NVIDIA originally, because I believe they invested a lot with TSMC in getting the process right.

But don't you remember the famous conference call?

LOW K IS DANGEROUS :devilish: ;)
 
Evildeus said:
Hmmm, we are already talking of NV45? :oops: ;)
NV45 doesn't seem to be a refresh like the NV35 or NV25 were refreshes. To the best of my knowledge, it's a PEG-native NV40. The better part is that it's already taped out.
 
JoshMST said:
Walt C: I think few will argue that in terms of features, the NV30 was more "advanced" than the R300, but it of course couldn't perform worth beans as compared to the competition. Examples include long shader support, FP16/FP32 support, PS/VS 2.0a, and a couple of other little things. So yeah, ATI was able to deliver PS/VS 2.0 performance far beyond what the NV30 could, but when you look at the overall feature set, the NV30 was slightly richer.

Maybe NV30 was more complex or flexible, but in terms of features R300 was (is) ahead big time:

1. FP render targets
2. Multiple render targets
3. Programmable grid multisampling
4. Centroid sampling

Plus it ran pre-DX9 games in FP24, which was a visible improvement.

It could be used in multiple GPU cards... and so on.
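
For the record, the first two items on that list are directly queryable through D3D9 caps. A minimal sketch, hypothetical and untested, assuming the DX9 SDK's d3d9.h and d3d9.lib:

```cpp
// Sketch: probe features 1 and 2 from the list above (FP render targets
// and multiple render targets). Hypothetical and untested; assumes the
// DX9 SDK (d3d9.h, link d3d9.lib).
#include <d3d9.h>
#include <cstdio>

int main()
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    // 1. FP render targets: can a 4x16-bit float surface be a render target?
    HRESULT hr = d3d->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                        D3DFMT_X8R8G8B8, D3DUSAGE_RENDERTARGET,
                                        D3DRTYPE_SURFACE, D3DFMT_A16B16G16R16F);

    // 2. Multiple render targets: R300-class parts report 4 here, NV3x just 1.
    D3DCAPS9 caps;
    d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

    printf("FP16 render targets: %s\n", SUCCEEDED(hr) ? "yes" : "no");
    printf("Simultaneous RTs:    %lu\n", (unsigned long)caps.NumSimultaneousRTs);

    d3d->Release();
    return 0;
}
```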
 
The Baron said:
Evildeus said:
Hmmm, we are already talking of NV45? :oops: ;)
NV45 doesn't seem to be a refresh like the NV35 or NV25 were refreshes. To the best of my knowledge, it's a PEG-native NV40. The better part is that it's already taped out.
I suppose then that it will depend on ATI's R420 XT (R423 XT) part. If the gap is too big, NV could release this chip quickly; otherwise, they just have another source of NV40s :)
 
Evildeus said:
The Baron said:
Evildeus said:
Hmmm, we are already talking of NV45? :oops: ;)
NV45 doesn't seem to be a refresh like the NV35 or NV25 were refreshes. To the best of my knowledge, it's a PEG-native NV40. The better part is that it's already taped out.
I suppose then that it will depend on ATI's R420 XT (R423 XT) part. If the gap is too big, NV could release this chip quickly; otherwise, they just have another source of NV40s :)
I'd say that this is almost certainly correct. The NV45 taped out around the same time as the NV40, but there is an entry for a bridged 6800 Ultra (a.k.a. an NV40 that uses the HSI to put an AGP chip on a PCI Express motherboard).
 
The Baron said:
I'd say that this is almost certainly correct. The NV45 taped out around the same time as the NV40, but there is an entry for a bridged 6800 Ultra (a.k.a. an NV40 that uses the HSI to put an AGP chip on a PCI Express motherboard).
THAT would make an awful lot of sense, and go a loooong way towards explaining (to me at least) why they developed the HSI. (If "HSI" is that AGP/PEG adapter thingy)

Smart move, it covers their asses either way. 8)
 
vb said:
JoshMST said:
Walt C: I think few will argue that in terms of features, the NV30 was more "advanced" than the R300, but it of course couldn't perform worth beans as compared to the competition. Examples include long shader support, FP16/FP32 support, PS/VS 2.0a, and a couple of other little things. So yeah, ATI was able to deliver PS/VS 2.0 performance far beyond what the NV30 could, but when you look at the overall feature set, the NV30 was slightly richer.

Maybe NV30 was more complex or flexible, but in terms of features R300 was (is) ahead big time:

1. FP render targets
2. Multiple render targets
3. Programmable grid multisampling
4. Centroid sampling

Plus it ran pre-DX9 games in FP24, which was a visible improvement.

It could be used in multiple GPU cards... and so on.

Josh, I have to go with vb on this one...;) fp32 was a "marchitecture" feature of nV3x, not a useful architectural feature for the purpose of running 3d games, and ditto the "long shader instruction chain support"; fp16 was and is inferior to fp24 on R3x0, both in terms of efficacy and performance. vb lists several other API-support shortcomings that really put nV3x in the DX9 category in name only, which is another "marchitecture" aspect of nV3x. I mean, FX12 and fp16 were more "advanced" than fp24 in what respect, exactly? But for 3d games, FX12 and fp16 were all she wrote for nV3x. Indeed, lately the only thing Kirk or anybody else at nVidia wants to talk about publicly in relation to nV3x is fp16; even nVidia has dropped the pretense that fp32 in nV3x ever meant anything in regard to 3d gaming support.
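
To put rough numbers on that precision gap, here's a back-of-the-envelope sketch, assuming the usual s10e5, s16e7, and s23e8 layouts for fp16, fp24, and fp32:

```cpp
// Back-of-the-envelope precision comparison. Mantissa widths assume the
// commonly cited layouts: fp16 = s10e5, ATI's fp24 = s16e7, fp32 = s23e8.
#include <cmath>
#include <cstdio>

int main()
{
    struct { const char* name; int mantissaBits; } fmt[] = {
        { "fp16 (s10e5)", 10 },
        { "fp24 (s16e7)", 16 },
        { "fp32 (s23e8)", 23 },
    };
    for (const auto& f : fmt) {
        // Spacing between adjacent representable values near 1.0 (one ulp):
        double ulp = std::ldexp(1.0, -f.mantissaBits);
        printf("%s: ulp near 1.0 = %.2e (~%.1f decimal digits)\n",
               f.name, ulp, f.mantissaBits * std::log10(2.0));
    }
    return 0;
}
```

That works out to roughly three decimal digits for fp16 versus nearly five for fp24 and about seven for fp32, which is why long shader chains in fp16 band visibly.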

Additionally, how could anyone classify a 4x2 pipeline organization, dating years back, as "more advanced" than R3x0's 8x1 organization, which was definitely new?
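
(The arithmetic makes the point: both layouts carry eight texture units, but they differ in how many pixels per clock they can retire. A toy comparison, with a made-up clock speed:)

```cpp
// Toy fill-rate comparison of a 4x2 vs. an 8x1 pipeline organization.
// Same total texture units, very different single-texture pixel rates.
// The 400 MHz clock is made up for illustration.
#include <cstdio>

int main()
{
    const double clockMHz = 400.0;
    struct { const char* org; int pipes, tmusPerPipe; } chips[] = {
        { "4x2 (NV3x-style)", 4, 2 },
        { "8x1 (R3x0-style)", 8, 1 },
    };
    for (const auto& c : chips)
        printf("%s: %4.0f Mpixels/s, %4.0f Mtexels/s\n", c.org,
               c.pipes * clockMHz, c.pipes * c.tmusPerPipe * clockMHz);
    return 0;
}
```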

If you recall as well, nVidia devoted prodigious amounts of PR last year to explaining why ps2.x was not "the future of 3d," and did things like resigning from the FM program in "protest" over the benchmark's inclusion of ps2.0 support, shipping driver "optimizations" whose purpose was to replace 2.0 shader code with 1.x, etc., ad infinitum. I just cannot see how any of this might be congruent with a more "advanced" product. Rather, it always seemed clear to me that the reverse was true, which is why nVidia made all of the PR blunders it made last year. I mean, even JHH was quoted as saying the R300 was a "wonderful" chip, and even he did not pretend nV3x was its equal.
 
I am not disputing that the R300 was a fabulous chip. What I am saying is that for the NV30 design, NVIDIA was far too adventurous in trying to support things like long shader programs and FP16/FP32. Now, I realize that FP16 is far inferior to FP24, but for professional applications the mix of FP16/FP32 appears to be appreciated. Yes, the 4x2 architecture was outclassed by the 8x1. There is no doubt the R300 is a much better chip, especially for gaming!

What I am trying to say is that if NVIDIA had followed the DX9 spec much more religiously (instead of trying to do their own thing), then perhaps they might have been a bit more successful. As it was, they tried to push features that were beyond the SM 2.0 specification. It was a horrible decision by NVIDIA to do this, and it burned them very badly. Yes, the NV3x architecture is missing things such as MRTs, FP render targets, and centroid sampling. However, it does add quite a few things that are specific to professional applications rather than to gaming.

Honestly, I am not trying to take anything away from ATI, as they designed a tremendous part. What I am trying to say is that NVIDIA shot themselves in the foot by trying to be all things to all consumers with this design. They made the design far too complex in many ways, yet not forward-looking enough (such as using the 4x2 architecture). Good idea... bad execution.
 
As advanced as the rest of the NV3x was, the register limitation of the chips was just a fucking stupid design decision. Why, why, why, why!
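
(The usual explanation is register pressure: the register file is a fixed pool shared by every pixel in flight, so each extra live fp32 register cuts how many pixels the chip can keep in flight to hide texture latency. A toy model, with made-up numbers, just to show the shape of the curve:)

```cpp
// Toy occupancy model of the NV3x register penalty: a fixed register
// file divided among in-flight pixels. The pool size is made up; the
// point is the 1/N falloff as live register count grows.
#include <cstdio>

int main()
{
    const int registerFileSlots = 256;  // hypothetical pool size
    for (int liveRegs = 1; liveRegs <= 8; ++liveRegs)
        printf("%d live fp32 regs -> %3d pixels in flight\n",
               liveRegs, registerFileSlots / liveRegs);
    return 0;
}
```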
 
Josh, I see what you're saying, but I just think you are a bit top-heavy on the "professional" side of the equation, because I would imagine the demand for nV3x to support 3d gaming APIs is probably on the order of 10-1, or maybe 20-1, over the demand for those chips for so-called "professional" 3d use. As such, I saw nVidia's increased marketing efforts toward "professionals" for nV3x as mostly a dodge to shift attention away from what was missing or else poorly executed on the 3d-gaming API side of the fence.

I think that if 3dfx were still around they'd have used the word "unbalanced" to describe nV3x, and I think they'd have been right. nV3x always appeared to me as a .13 micron nV2x variant with some "DX9-ish" stuff more or less "bolted on," and I think the hardware and performance characteristics of nV3x bear this out (it is a kludge, in other words, and performs congruent with a kludge). It's certainly nowhere near the kind of homogeneous design of R3x0, or nV40, for that matter.

Basically, my own opinion is that R300 threw a nasty monkey wrench into nVidia's ongoing stratagem to milk the basic nV10 architecture for as long as possible. Had it not been for R300, the scary thing is that nV3x would have looked much better, and I think it would have been a long time, if ever, before we'd have seen nVidia try something like nV40. IMO, of course...;)
 