Josh on NV40 & R420

JoshMST said:
It could very well be, or it could also be that the 175 million transistor part was supposed to be introduced this April, while the 210 million transistor part was going to be a fall "refresh" product. This could still be the case, but perhaps that 210 million transistor part was ready well before they were expecting, and they decided to release it as well.

From my limited knowledge of fabrication and chip design, each design targets a particular fab's "standard cell" library, which is provided either by a 3rd party or by the fab partner. In this case, the IBM standard cells are quite different from the TSMC standard cells. So would it really be in NVIDIA's best interest to port the same chip to each major fab? That would involve thousands upon thousands of man-hours and years of computing time to make sure that the design would work with two different standard cell libraries. So basically, what I think NV would do is choose which design gets manufactured where (e.g. the FX 5700 is manufactured only by IBM, while the FX 5900/5950 is manufactured only by TSMC).

So, if this is truly the case, then there may be two high-end NVIDIA chips in the wild here shortly. How they measure up, I have no clue.

EDIT: some poor spelling and editing on my part.
Exactly. And since the 5700 runs pretty cool too, I think the 210 million transistor part should be the one we all saw the pretty pictures of, and those are without any doubt IBM samples (core, PCB, ASIC).
If those rumours are correct, TSMC may be the one that manufactured the 175 million transistor part, and this info leaked pretty early, but they both seem to have been taped out at almost the same time.
 
I thought it was IBM having fab troubles and not TSMC. There was also some speculation that IBM actually annoyed certain customers due to their preferential treatment towards another one of their customers (not mentioning any names... that would be MAD).

IIRC, even if the speculation as to why IBM suddenly lost a lot of customers (NVIDIA being only one of them) is incorrect, it does not change the fact that TSMC has definitely won NVIDIA back from IBM for the production of its high-end chips.

It is almost definitely not the other way around.

And as for two parts, one being 175M and the other being 210M, and both still being NV40 - it doesn't make any sense at all. My own personal thought is that NVIDIA is using IBM to produce the mid-range cards while TSMC will produce the high-end parts, but these will not both be labelled internally as NV40 (NVIDIA has never done this in the past and it would not make any sense to do it now).

Traditionally, the difference between an Ultra and a non-Ultra has been core and memory speeds only.

As to how many transistors NV40 has... well, no one has really offered any concrete information on this either.
 
Nappe1 said:
Quitch said:
...Not only that, but surely the fact that today's high end is tomorrow's low end makes checkbox features important at some point.


In fact, this is no longer true. The R300 never fell to the low end, nor did the NV30. They were replaced by cheapo-ass cores that have the features, but not the power to run them. So the only place those checkbox features matter is at launch; it is all about the first impression. As examples, where are the software / drivers that support:
- VideoShader on R300?
- R100 Key Frame Interpolation?
- Adaptive Displacement Mapping on Parhelia?
- Nvidia Shading Rasterizer?

After a company has its next-generation cores out, it seems to hope that the older chips did not exist at all, or will be forgotten entirely.

(I have actually noticed that ATI has far more of these "hopefully they forget what we demoed at launch and are happy with their games" kind of things than nVidia.)

Another funny thing is Matrox... they do not have a fancy name for it, nor have they really hyped their way of doing real-time effects on a video stream, but they do have full support from Adobe. Maybe that's the reason why the Parhelia & RT-100 combination is still the market leader in real-time video editing.


I'd put SSAA instead of VideoShader, lol.
 
JoshMST said:
From my limited knowledge of fabrication and chip design, each design targets a particular fab's "standard cell" library, which is provided either by a 3rd party or by the fab partner. In this case, the IBM standard cells are quite different from the TSMC standard cells. So would it really be in NVIDIA's best interest to port the same chip to each major fab? That would involve thousands upon thousands of man-hours and years of computing time to make sure that the design would work with two different standard cell libraries.
I think you're overestimating the effort. It's not a trivial effort, to be sure, but it's not thousands upon thousands of man-hours to port a digital design from one fab to another. It's probably on the order of a few hundred man-hours, plus CPU cycles on automated tools.

Think of it as analogous to a program written in standard C and compiling it for two different architectures. Generally, it's not a problem unless the architectures are wildly different or the design unnecessarily relies on the underlying architecture.
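
To stretch that analogy a little: portable code recompiles cleanly anywhere, while code that bakes in assumptions about the underlying machine breaks when moved. A minimal, purely illustrative C sketch (the function names are mine, not from any real codebase):

```c
#include <stdint.h>
#include <string.h>

/* Portable: builds the value byte by byte, so it behaves identically
 * on any architecture regardless of the host's byte order -- like a
 * design that relies only on documented standard cell behaviour. */
uint32_t read_u32_le(const uint8_t *buf)
{
    return (uint32_t)buf[0]
         | ((uint32_t)buf[1] << 8)
         | ((uint32_t)buf[2] << 16)
         | ((uint32_t)buf[3] << 24);
}

/* Non-portable: assumes the host is little-endian, so it silently
 * returns wrong values on a big-endian machine -- analogous to a
 * design that leans on quirks of one fab's cell library. */
uint32_t read_u32_le_fragile(const uint8_t *buf)
{
    uint32_t v;
    memcpy(&v, buf, sizeof v);  /* host byte order leaks in here */
    return v;
}
```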
 
IST said:
Nappe1 said:
Quitch said:
...Not only that, but surely the fact that today's high end is tomorrow's low end makes checkbox features important at some point.


In fact, this is no longer true. The R300 never fell to the low end, nor did the NV30. They were replaced by cheapo-ass cores that have the features, but not the power to run them. So the only place those checkbox features matter is at launch; it is all about the first impression. As examples, where are the software / drivers that support:
- VideoShader on R300?
- R100 Key Frame Interpolation?
- Adaptive Displacement Mapping on Parhelia?
- Nvidia Shading Rasterizer?

After a company has its next-generation cores out, it seems to hope that the older chips did not exist at all, or will be forgotten entirely.

(I have actually noticed that ATI has far more of these "hopefully they forget what we demoed at launch and are happy with their games" kind of things than nVidia.)

Another funny thing is Matrox... they do not have a fancy name for it, nor have they really hyped their way of doing real-time effects on a video stream, but they do have full support from Adobe. Maybe that's the reason why the Parhelia & RT-100 combination is still the market leader in real-time video editing.


I'd put SSAA instead of VideoShader, lol.

What happened to Truform post-8500, and 32-bit color on the TNT? :) Dot Product 3 bump mapping on the GeForce, Pixel Shader 1.1 on the Xabre (well, there I'm being a little facetious), NVIDIA's higher-order surfaces (in fact, what happened to HOS support altogether?), the F-Buffer, etc.
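
For anyone who never saw DOT3 in action, the feature itself was tiny: per-pixel diffuse lighting computed as the dot product of a normal fetched from a normal map and a light vector, both expanded from the [0,255] texture encoding to [-1,1]. A rough sketch of the math in plain C (rather than the GeForce's register combiners), purely for illustration; the function names are mine:

```c
/* Expand a color channel stored in [0,255] to a vector
 * component in [-1,1], as DOT3 hardware does. */
static float expand(unsigned char c)
{
    return c / 127.5f - 1.0f;
}

/* Per-pixel diffuse intensity: N . L, clamped to [0,1].
 * n = normal-map texel, l = tangent-space light vector,
 * both encoded as RGB bytes. */
float dot3_intensity(const unsigned char n[3], const unsigned char l[3])
{
    float d = expand(n[0]) * expand(l[0])
            + expand(n[1]) * expand(l[1])
            + expand(n[2]) * expand(l[2]);
    return d < 0.0f ? 0.0f : (d > 1.0f ? 1.0f : d);
}
```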
 
Yes, true: software Truform. Still looking for software support for Truform in recent games, btw...

Oh and Vertex Shader 2.0 support in Parhelia...
 
"First off it appears that there are actually two chips that can fall into the NV40 spec, one made by TSMC and the other by IBM. If I were to hazard a guess, the 210 million transistor model is made by TSMC and the 175 million transistor product will be from IBM. I have no idea how these two will stack up, or even if they will be released at the same time."

True or false?

False.
 
If there is a 210 million transistor chip and a 175 million transistor chip, then I'd assume they are two different chips altogether, such as NV40 and NV4x.
 
JoshMST said:
Haha, Taz loves me.

I had just posted a few tidbits that I had gleaned from here and there, and yes, most of it has been widely circulated. The thing that I did find very interesting is that there appear to be two NV4x chips coming out at very nearly the same time, one from IBM and the other from TSMC. Could this be the FX6x00 and FX6x00 Ultra? A $400 price point (IBM) card, and the higher-end $500 (TSMC) offering?

I don't know for sure, but that "may" be the case. I of course was not told anything that specific by NVIDIA, or even that there were two chips.

Not trying to be a Faud here, just passing on some info that I had gathered.

I'd look to Jen-Hsun Huang's recent statements about there being many different chips in the NV4x family for a clue, as well as what happened with NV3x - the high end was manufactured at IBM, with TSMC producing the volume chips.

Personally, I'd take Huang's comments about hoping TSMC can get yields right as meaning nVidia does not want to be faced with another NV31 fiasco this time round.
 
Quitch said:
Not only that, but surely the fact that today's high end is tomorrow's low end makes checkbox features important at some point.

I'm not really sure you can say that. For example, on the NVIDIA side you have the FX5200, which is nowhere near as fast as the previous high-end GeForce4 Ti. Yet the FX5200 does support SM2.0 while the GF4 doesn't. Even before that, when the GF4 was the high end, the GF3 was not the low end; it was cast aside to make way for the cheaper GF4MX, which had fewer features and less performance than the GF3.

You have a case on the ATI side where the R9200 is for all intents and purposes a spinoff of the R8500, which was ATI's previous high-end solution. Of course, I think this was a horrible mistake on ATI's part, and it allowed NVIDIA to sell a lot of FX5200s with DX9 checkbox features since ATI didn't have a low-end DX9 solution.
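
As an aside on what that "DX9 checkbox" amounts to in practice: a game typically just queries the driver's reported shader version through the Direct3D caps. A minimal sketch using the D3D9 API, assuming the DirectX 9 SDK headers; the function name is mine:

```c
#include <windows.h>
#include <d3d9.h>

/* Probe the "DX9 checkbox": ask the HAL device for its caps and
 * compare the reported pixel shader version. An FX5200 reports
 * 2.0 here; a GeForce4 Ti only reports 1.3. */
BOOL supports_ps20(void)
{
    BOOL ok = FALSE;
    IDirect3D9 *d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (d3d) {
        D3DCAPS9 caps;
        if (SUCCEEDED(IDirect3D9_GetDeviceCaps(d3d, D3DADAPTER_DEFAULT,
                                               D3DDEVTYPE_HAL, &caps)))
            ok = caps.PixelShaderVersion >= D3DPS_VERSION(2, 0);
        IDirect3D9_Release(d3d);
    }
    return ok;
}
```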
 
StealthHawk said:
Quitch said:
Not only that, but surely the fact that today's high end is tomorrow's low end makes checkbox features important at some point.

I'm not really sure you can say that. For example, on the NVIDIA side you have the FX5200, which is nowhere near as fast as the previous high-end GeForce4 Ti. Yet the FX5200 does support SM2.0 while the GF4 doesn't. Even before that, when the GF4 was the high end, the GF3 was not the low end; it was cast aside to make way for the cheaper GF4MX, which had fewer features and less performance than the GF3.

You have a case on the ATI side where the R9200 is for all intents and purposes a spinoff of the R8500, which was ATI's previous high-end solution. Of course, I think this was a horrible mistake on ATI's part, and it allowed NVIDIA to sell a lot of FX5200s with DX9 checkbox features since ATI didn't have a low-end DX9 solution.
ATi didn't really have a lot of choice in the matter - they had several squillion R200 cores accidentally ordered that they had to do something with.
 
It would have been better to call the 9000/9100/9200 the 8600/8700/8800, maybe, but that's just my opinion.

To be fair to ATI, I think a lot of the reason for the name change was AIB partner pressure (I remember talking to Stan Ossias, the 9000 product manager, in SF, and that's what he said).
 
ben6 said:
It would have been better to call the 9000/9100/9200 the 8600/8700/8800, maybe, but that's just my opinion.

To be fair to ATI, I think a lot of the reason for the name change was AIB partner pressure (I remember talking to Stan Ossias, the 9000 product manager, in SF, and that's what he said).

The AIB partners probably wanted it, but I would think that ATI had just as good a reason as they did to get rid of those chips.
 
radar1200gs said:
StealthHawk said:
You have a case on the ATI side where the R9200 is for all intents and purposes a spinoff of the R8500, which was ATI's previous high-end solution. Of course, I think this was a horrible mistake on ATI's part, and it allowed NVIDIA to sell a lot of FX5200s with DX9 checkbox features since ATI didn't have a low-end DX9 solution.
ATi didn't really have a lot of choice in the matter - they had several squillion R200 cores accidentally ordered that they had to do something with.
That was the 9100, not the 9000/9200; they're two significantly different cores. He was talking about the decision to manufacture a new DX8 part (rather than a "DX9" part, as Nvidia did) as the budget end of their lineup.
 
Haha, wow. Always nice to get the backhanded comments!

Anyway, my point in the post was not that there are in fact two chips codenamed NV40, but rather that there are two chips in existence that could possibly fall under the "category of NV40". What their official codenames are, I have no idea.

As for being pro-NVIDIA, I think that is a bit far-fetched. I love graphics technology. What ATI has done in the past 1.5 years is amazing. I made some big mistakes in my "Sordid State of 3D", and I admitted those. I did not give ATI the credit due to them. Both companies have their strengths and weaknesses, but I truly do not favor one over the other. In fact, right now I much prefer ATI products over NVIDIA's (which is why I run a 9800XT in my gaming machine). Catalyst drivers are very, very good. Dare I say that they are as good as, if not better than, the NVIDIA Forceware (well, except for a few bugs, like the gaming profiles that don't quite work)?

And yes, I may not be the most technically astute individual, but I am not an idiot, and I strive to learn more and more each chance I have.

I guess in a few weeks we will see if I am right. If not, my mistake. If I am right... well, I guess I am right (and we can leave it at that).
 