NVidia management

Let me comment on a few things:

"On the other hand, Huang is a realistic and smart guy and is likely to take dramatic actions to correct this path (if the Board of Directors lets him, that is)."

Well, I don't see how the board of directors could say "No." to Jen Hsun Huang. He owns 10% of the company's stock, is one of the co-founders, and seems to have nearly unconditional support from his employees.
Jen Hsun Huang, IMO, is a great CEO and President. However, I also fear he's way too optimistic.

Strangely, I believe he's the one who put nVidia into this NV30 mess, and he's practically said so himself.
He's the one who asked the NV30 team to achieve "Cinematic Computing". And with such an aggressive objective, it was obvious they had to shoot for very aggressive specs: 0.13µm and high-end GDDR-II made a lot of sense at the time.
The problem, however, was that both of these made an August release impossible. And nVidia didn't get GDDR-II as fast as they had hoped for.

So, as you see, it's Huang who indirectly created all of this mess. But I'm sure he's a smart man, and won't make the same mistakes again.

"The best thing for NVidia to do at this point is to recognize that they have indeed lost this battle and focus their attention on the market that actually pays for all this amazing chip design: the budget category. "

As other people said, the split of profit between high-end and low-end is pretty much 50/50.
My understanding of nVidia's 2003 strategy is simple:
1. Release the NV31/NV34 (and maybe NV33) for the mid-end and low-end respectively. Shoot for price/performance.
2. Release the NV35 between the R350 and R400 releases, gaining the performance crown.
3. Release the NV36 after the R400, to compete with the traditional R400 mid-end solutions and keep the mid-end market (as well as increase their lead in workstation & high-end mobile).
4. Release an NV33/NV34/NV35 refresh (based on the same ASIC) *if* such products would have any use. The 0.13µm process will be more mature by then, so higher clocks or lower prices become possible. An NV35 refresh will only happen if the R400 isn't much faster than the NV35, which is hard to predict (hard to guess whether ATI will deliver on the ambitious R400 architecture).

That roadmap will work very well unless:
- The RV350 is a revolution and beats the NV31 by a nice margin
- The R400 beats the NV35 by more than 25% (which would further increase ATI's brand value)
- One of the products runs into unforeseen problems (the NV31 is currently expected to have trouble reaching a high clock speed, so it'll be interesting to see how much of a problem against the RV350 that creates)


Uttar
 
Well, I don't think nVidia will have much problem selling their NV31/NV34 chips. But they really need to have some much better FSAA just around the corner, or risk becoming mostly a low-end supplier.
 
Nforce2 is a very good product.
Integrated chipsets are the future (i.e. the "low end" M64s, etc. are most likely not going to be all that relevant going forward), so if I were a graphics chip maker I'd be focusing on getting my low-end chip into a chipset I can sell to an OEM.
ATI's Radeon 9700 Pro is probably the most dominant product in the graphics market since the original Voodoo, but guess what, they won at the wrong time--weak PC market, weak economy, etc.
Having said all that, I think NVidia's brass is going to have some 'splainin' to do to their shareholders.
How much time do ATI and Nvidia have before Intel comes up with a killer integrated chip?
Discuss amongst yourselves.
 
TheMightyPuck is right. Integrated is the future. For us in the enthusiast market, not a good trend does this portend, methinks. Also, did anyone catch that article in Wired this past summer? Huang all but declared that Nvidia was going to do what AMD has not managed in all these years: topple Intel.
 
Integrated will eventually become good for performance. It's just impossible for the communication busses between chips to accelerate as quickly as the chips themselves. Once the primary bottlenecks become the busses between various components, the highest-performing systems will use fewer chips.

If you doubt this, just look at how things are progressing today. 3D add-in boards of the past used many chips (Voodoo and Voodoo2, for example), and later ones often still used external processors such as RAMDACs and the like. All of this functionality is being moved onto the main chip. Granted, the reasons are mostly cost, but performance has not suffered, has it?
 
While everyone is on the integrated kick, do you guys think that GPUs will ever become like CPUs, with people just buying a new chip and plopping it into their board? I realize they're currently changing much too quickly, but they're also catching up to themselves at an amazing rate. I think in five years, or less, progress on them will be very slow.
 
Mulciber said:
I'll actually be looking for a change in market share towards ATI and Intel. But I still expect growth for nVidia this year in the mobile and chipset segments. ATI should definitely be gaining market share in the retail segment.

Why would nvidia be growing in the mobile segment? nvidia has never been particularly strong in that area, and with the 4200Go being a complete paper launch, the MR9000 is *the* high-end mobile card. The GF4Go is doing OK, but with the RV350 coming soon, the MR9000 should be getting cheaper (and the MR7500 even more so).

Not to mention ATI seems to have plans to integrate the 9000 into its next integrated chipset lineup.
 
MrBond said:
Mulciber said:
I'll actually be looking for a change in market share towards ATI and Intel. But I still expect growth for nVidia this year in the mobile and chipset segments. ATI should definitely be gaining market share in the retail segment.

Why would nvidia be growing in the mobile segment? nvidia has never been particularly strong in that area, and with the 4200Go being a complete paper launch, the MR9000 is *the* high-end mobile card. The GF4Go is doing OK, but with the RV350 coming soon, the MR9000 should be getting cheaper (and the MR7500 even more so).

Not to mention ATI seems to have plans to integrate the 9000 into its next integrated chipset lineup.

The GeForce Go products have never really been strong competitors compared to ATI's mobile lineup. nVidia's presence in the market hasn't been that impressive, but the simple fact that they will have a product on the market (is it NV31 or NV34?), and that its name is GeForce, means you will see companies supporting it. Considering their current market share, the product wouldn't even have to do very well to bring them "growth".
 
MrBond said:
Why would nvidia be growing in the mobile segment? nvidia has never been particularly strong in that area, and with the 4200Go being a complete paper launch, the MR9000 is *the* high-end mobile card. The GF4Go is doing OK, but with the RV350 coming soon, the MR9000 should be getting cheaper (and the MR7500 even more so).

All true, but when you're starting from market share in the single digits, it's hard to believe that they wouldn't or couldn't grow their market share, so long as their products are somewhat price/performance competitive.

The real question is how much growth and how fast? To the former, I would guess not much (their mobile chips run too hot) and to the latter, I would guess not too fast. Remember that ATI has about 50% market share in mobile.

IMO, most of NV's recent growth in the mobile segment has been on the Apple side of the market, not PCs. Obviously, that market segment is very limited in size at this time.
 
Nvidia has no DX9 low-end part like ATI does with their 9500np/Pro. All Nvidia has is the $300, 400/800 GFFX part. If they downclock it any lower, then ATI's low-end part wins on speed. ATI has Nvidia covered at both ends, and I don't see how Nvidia is going to get out of this one.
 
demalion said:
Hmm...

I think nVidia engineers recognized the comparison of nv30 versus R300 long ago. It is simply that marketing comments didn't.
Huang's comments and marketing decisions reflect...marketing, but unless he acted the fool, his other decisions would have reflected the evaluation of the engineers.
I think that admission concerning performance leadership deviates from reflecting marketing only as much as necessary to convey to investors that he hasn't acted the fool...


Assuring your investors that you are not a fool is always wise policy for the CEO of a publicly held company, wouldn't you agree?.....;) I also think nVidia knew for certain it was not in a competitive position at least as far back as September--they had a couple of good months there before the nv30 paper launch to study the R300 product in its shipping form, during which the Dustbuster was conceived, IMO.


I maintain it is a fallacy to view the nv35 design decisions as something that just occurred, or that admitting the loss of performance leadership in public now has anything to do with being able to address the nv30's deficiencies in the nv35. Note where that comment begins and ends, and the issues with nv35 launch it does not seek to address.

Well, this presents a problem, I think. If we say we think the "nv35" was conceived at a point well in advance of nVidia's getting a look at the R300, which is most likely the case, and we concede that nv30 is not competitive with R300, what does this do to any nv35 plans which were earlier based on an nv30 design which nVidia now knows is non-competitive? This also assumes there is nothing wrong with nv30 from a design standpoint. It would seem to me that with the knowledge that nv35 will have to not only directly compete with R350, but R400, as well, that nVidia might well conclude that its existing plans for nv35 are insufficient and need to be completely reexamined and revamped.

Also, I think the nv30 is BOTH based on the GeForce 4 design (fixed function and some of the "shader" functionality) AND a new architecture (the advanced shaders capabilities). No period emphasized.... :) I also persist in my opinion that dismissing the magnitude of the impact of the 128-bit bus is a folly, and basing expected architecture performance on such a dismissal leads to a very flawed analysis.

I think it can be just as flawed, however, to exaggerate the importance of a 256-bit bus to the nv30's performance characteristics, though, at this time. We don't really know enough about the chip to conclude what impact, if any, a 256-bit bus would have on it in its present state. In theory, it should have a lot, in some circumstances. We just don't know enough about nv30 right now to know whether theory applies here, or there are practical, physical problems with the core that are obscuring whatever theoretical performance characteristics it might have had.

As far as the architecture goes, my only point here is that even if it is loosely based around nv25, it is still brand new, and as such must be considered as a whole--unless the core is physically segregated somehow....which of course it isn't (We don't really even know that much about it.) Clearly it isn't a revamped GF4 (which might not be true if Dave's 4x2 speculations hold water, however), so it's a new architecture for nVidia. As such, there are no guarantees as to the success of the architecture which can be based on the success of the GF4. And of course the success of the GF4 was not built on facing competition from chips like R300 (with which it cannot compete.)

At the same time, I think there are all sorts of positive comments about the R300 in comparison to the nv30 possible even ignoring the 256-bit bus advantage, and all sorts of opportunities for the R350 to warrant even more such comments.

Certainly--I can see that. I guess it boils down to me being more pessimistic than you about the competence at nVidia's helm right now. I think nVidia, through the process of absorbing 3dfx and really facing no other 3D-market competition for the last couple of years, has evolved (devolved?) into one of those companies which has a very hard time functioning in the midst of direct competition (re: Apple, 3dfx, etc.). I think nVidia may have erroneously concluded that with the absorption of 3dfx the field had been cleared of competition, and had become used to internally comporting itself under that assumption.

Suddenly the R300 strikes like a bolt out of the blue--not just a single chip, but the basis of theoretically years of successful future architectures--and all the carefully laid plans nVidia had were thrown into shambles. NVidia hasn't been used to "scrapping" for a few years now. So I think either nVidia will downsize and reorganize along leaner, meaner principles and come out ready and able to scrap with ATI for market dominance, or, like 3dfx, they will reel from shock to shock, blow after blow, in a kind of numbed stupor, as though they cannot believe their former glory has departed. They're obviously still reeling right now. I wonder if they can make the transition.
 
WaltC said:
demalion said:
Hmm...

I think nVidia engineers recognized the comparison of nv30 versus R300 long ago. It is simply that marketing comments didn't.
Huang's comments and marketing decisions reflect...marketing, but unless he acted the fool, his other decisions would have reflected the evaluation of the engineers.
I think that admission concerning performance leadership deviates from reflecting marketing only as much as necessary to convey to investors that he hasn't acted the fool...


Assuring your investors that you are not a fool is always wise policy for the CEO of a publicly held company, wouldn't you agree?.....;) I also think nVidia knew for certain it was not in a competitive position at least as far back as September--they had a couple of good months there before the nv30 paper launch to study the R300 product in its shipping form, during which the Dustbuster was conceived, IMO.

It looks to me like you are stuck on the notion that nVidia only knew about the R300 when the rest of us learned the details for certain. I simply don't think that is the case at all...there are all sorts of opportunities for "noise" around announcements and communications with shared vendors that would indicate important information to a savvy competitor very early on, and that is excluding leaks via employee cross-pollination or downright industrial espionage (if you happen to be someone who considers these latter two paranoid ideas...).

Even barring all of the above, I think the engineers MIGHT have been smart enough to guess a few things as far back as E3 when they tried to compete with an overclocked GF 4 Ti 4600 for demonstrating Doom 3 (which happened in May).

We (people in general not in the "3d industry info loop") certainly had rumors of a 256-bit bus in March. I'd think nVidia engineers might have had reason to suspect it before that.

I maintain it is a fallacy to view the nv35 design decisions as something that just occurred, or that admitting the loss of performance leadership in public now has anything to do with being able to address the nv30's deficiencies in the nv35. Note where that comment begins and ends, and the issues with nv35 launch it does not seek to address.

Well, this presents a problem, I think. If we say we think the "nv35" was conceived at a point well in advance of nVidia's getting a look at the R300, which is most likely the case, and we concede that nv30 is not competitive with R300, what does this do to any nv35 plans which were earlier based on an nv30 design which nVidia now knows is non-competitive?

The particular flaw I think I see in your chain of comments here is that you:
1) Ignore that I've mentioned that 256-bit performance may have been a way nVidia saw to enhance performance relatively easily for the nv35 even before they heard of such for the R300 (when they thought they had the option to do so at their leisure), which is why I've mentioned prior that that aspect may quite possibly have been part of the initial design.
2) Are stuck on the idea that nVidia could only have responded after the R300's actual launch (July/August/September, pick your favorite month), when it seems to me quite logical they had information about this much earlier. When they initiated actual appropriate action to address this is wide open to debate, as that depends on company decision making inertia and the adaptability of their engineering crew, but the opportunity for this to have started, IMO, was there a year ago.

Further food for thought, "reliable sources" have indicated things along these lines. That's sources that are perceived to be reliable at this site in particular...

This also assumes there is nothing wrong with nv30 from a design standpoint.

No it doesn't, as I've explained before at length (too lazy to linkify it again).

It would seem to me that with the knowledge that nv35 will have to not only directly compete with R350, but R400, as well, that nVidia might well conclude that its existing plans for nv35 are insufficient and need to be completely reexamined and revamped.

You are making all sorts of assumptions about what they failed to conceive with the nv35 design and the capabilities of the R400...not that they are necessarily wrong (though I disagree completely :p ). I don't think the nv35 can be adapted to compete with the R400, but by my own line of reasoning, if they think they can do so, that might be a valid reason to delay it.
However, I hope you can see how this is a completely different stance than the "nv35 can't be due soon because the nv30 just arrived" stance that I've been targeting (as noted by the absence of my "attack" on this position with anything more than my reasons for concluding differently).

Also, I think the nv30 is BOTH based on the GeForce 4 design (fixed function and some of the "shader" functionality) AND a new architecture (the advanced shaders capabilities). No period emphasized.... :) I also persist in my opinion that dismissing the magnitude of the impact of the 128-bit bus is a folly, and basing expected architecture performance on such a dismissal leads to a very flawed analysis.

I think it can be just as flawed, however, to exaggerate the importance of a 256-bit bus to the nv30's performance characteristics, though, at this time. We don't really know enough about the chip to conclude what impact, if any, a 256-bit bus would have on it in its present state. In theory, it should have a lot, in some circumstances.

Yes, as I've said before, those circumstances where the 9700 non-Pro outperforms the 9500 Pro significantly, as one general indication. Though, that depends somewhat on how the new 4x2 speculation pans out...

We just don't know enough about nv30 right now to know whether theory applies here, or there are practical, physical problems with the core that are obscuring whatever theoretical performance characteristics it might have had.

The idea I was arguing against is "256-bit bus wouldn't have helped the nv30 significantly". How much is open to debate, but I don't think the opportunities this could afford the nv35 should be dismissed. How that is exaggerating, I'm not sure...

Specifically in regards to your posts, you proposed that (paraphrasing) "even with nearly equivalent bandwidth the nv30 underperforms the R300", to which I pointed at the 128-bit bus: with memory and core clocks being equivalent, a 256-bit bus (or another method of increasing bandwidth, though the 256-bit bus would be the most effective way if memory and core clocks remain close) would address this problem to a very significant degree (i.e., the pixel fillrate would stop "choking"...but again, the 4x2 theory would change all of this).
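Since this whole exchange turns on bandwidth-versus-fillrate arithmetic, here is a minimal back-of-the-envelope sketch of the kind of calculation being argued over. The clock speeds, bus widths, and pipeline configurations below are assumptions for illustration only (roughly the figures being discussed publicly at the time), and the 8x1 versus 4x2 split is exactly the open question in this thread:

```python
# Back-of-the-envelope bandwidth vs. fillrate comparison.
# All figures are illustrative assumptions, not confirmed specs.

def mem_bandwidth_gb_s(bus_bits, mem_clock_mhz, ddr_factor=2):
    """Theoretical memory bandwidth in GB/s (DDR doubles the effective rate)."""
    return bus_bits / 8 * mem_clock_mhz * ddr_factor / 1000

def pixel_fillrate_mpix_s(pipes, core_clock_mhz):
    """Theoretical pixel fillrate in Mpixels/s (one pixel per pipe per clock)."""
    return pipes * core_clock_mhz

configs = {
    # name: (pixel pipes, core MHz, bus width in bits, memory MHz)
    "nv30-style, 128-bit, 8x1 @ 500/500 (assumed)":    (8, 500, 128, 500),
    "nv30-style, 128-bit, 4x2 @ 500/500 (speculated)": (4, 500, 128, 500),
    "R300-style, 256-bit, 8x1 @ 325/310":              (8, 325, 256, 310),
    "hypothetical nv35, 256-bit, 8x1 @ 500/500":       (8, 500, 256, 500),
}

for name, (pipes, core, bus, mem) in configs.items():
    bw = mem_bandwidth_gb_s(bus, mem)
    fill = pixel_fillrate_mpix_s(pipes, core)
    # Bandwidth available per pixel written; 32-bit colour plus 32-bit Z is
    # ~8 bytes/pixel before compression, texturing, or blending are counted.
    bytes_per_pixel = (bw * 1e9) / (fill * 1e6)
    print(f"{name}: {bw:.1f} GB/s, {fill} Mpix/s, "
          f"{bytes_per_pixel:.1f} bytes/pixel")
```

Run as-is, it shows that a 128-bit, 8-pipe part at those clocks has roughly half the per-pixel bandwidth of a 256-bit part, which is the "choking" described above, while under the 4x2 reading the fillrate, rather than the bus, becomes the narrower constraint.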

As far as the architecture goes, my only point here is that even if it is loosely based around nv25, it is still brand new, and as such must be considered as a whole--unless the core is physically segregated somehow....which of course it isn't

If you were saying something new in this text, I wasn't able to catch it...I'll just mention that my comments were addressing "The nv30 is a *new architecture*--it's not a "new architecture based on the TNT/GF core"--it's an entirely new architecture, period." comment. Dropping "entirely" and "period" I don't disagree with the statement...

(We don't really even know that much about it.)

Well, I didn't emphasize "period". ;)

Clearly it isn't a revamped GF4 (which might not be true if Dave's 4x2 speculations hold water, however), so it's a new architecture for nVidia.

That 4x2 speculation isn't the only reason it isn't "clear" that it's anything more than a "revamped" GF4...there are a whole host of others that have been put forth.

As such, there are no guarantees as to the success of the architecture which can be based on the success of the GF4. And of course the success of the GF4 was not built on facing competition from chips like R300 (with which it cannot compete.)

Well, the success of both against the R300 isn't so terrific, so that sort of invalidates the distinction you were trying to draw, at least IMO. :-?
More seriously, the nv30 gained performance through clock speed while retaining MANY GF4 characteristics (and maybe even worsening in some regards), at least to my perception, and this was the focus of my comment.

At the same time, I think there are all sorts of positive comments about the R300 in comparison to the nv30 possible even ignoring the 256-bit bus advantage, and all sorts of opportunities for the R350 to warrant even more such comments.

Certainly--I can see that. I guess it boils down to me being more pessimistic than you about the competence at nVidia's helm right now.

I'll leave it here since for the rest of your comments we're at the point where "I don't necessarily agree with you (though I very well might in all significant points overall)."
 
Sxotty said:
While everyone is on the integrated kick, do you guys think that GPUs will ever become like CPUs, with people just buying a new chip and plopping it into their board? I realize they're currently changing much too quickly, but they're also catching up to themselves at an amazing rate. I think in five years, or less, progress on them will be very slow.
Micron wanted to do something like that around the time they bought Rendition, IIRC. It obviously went nowhere. And I don't foresee a GPU socket sitting next to a CPU socket on MB's until system RAM closes the gap in speed with video RAM.

OTOH, it would be nice in that it would lock GPU manufacturers into certain resolutions due to UMA speed, thus forcing them to work on prettier, rather than faster, pixels.

To dream....
 
I dunno. Nvidia dropped the ball; there is no other way to put it. Okay, ATI starts unleashing some very nice cards: the Radeon, nice and almost as fast as the GTS; the 8500, faster than the GeForce 3 Ti 500. Welp, this time ATI hit one out of the park; it was bound to happen. Nvidia didn't expect the R300. Even if they started to know exactly what it would be in early summer, they didn't know the clock speed, since it's my understanding ATI didn't know it until right before launch. They figured that the GeForce FX clocked at around 400MHz would be a big enough leap over the GeForce 4, which would (based on past history) leapfrog ATI. This wasn't the case, and while ATI put out the R300, Nvidia was having trouble with the 0.13 micron process and maybe hardware issues. First metal back from the lab showed that 400MHz was too slow against the R300, so they upped the clock speed until it was performing about the same as the R300. Where does this leave the NV35? Well, really, the NV30 hit the wall. Normally Nvidia releases a faster-clocked chip, but if they have trouble making 500MHz chips, I don't see them pushing it any further. So where does that leave the NV35? New design? Wider bus? I dunno, but I don't think the NV35 will change Nvidia's standing right now.
 
John Reynolds said:
caywen said:
It's going to be really interesting to see how NVidia management gets out of this stinker.

What stinker? You mean the stinker of actually growing your market share in spite of the fact that a competitor has a faster product than you do in basically every segment? The stinker of being in the black while your competitor is operating in the red?

The competitor is growing also, at an even higher percentage...
 