Uttar said:
Agreed, although I wouldn't word it that way. I'd rather say that a large part of the reason the NV3x performs poorly is how the chip's design was implemented.
The design isn't bad IMO. Quite a few good ideas. But they didn't think about some major problems, and didn't have enough workforce/budget in the beginning (I'm sure management must have given them way more men than they could use in the end though, eh - too bad they probably knew nothing about the project...)
Saying they aren't responsible or anything is kinda BS IMO. They took risks, and they failed from a lot of POVs. What more is there to it, really?
Uttar
If you look at nv30 and what's happened since, it merely follows the same pattern nVidia's taken consistently since 1999--the difference being that this time their aggressive position on adopting a new fab process ahead of everybody simply blew up in their faces. The strategy that had served them well in that regard backfired on them.
nVidia's always been weak in gpu core design, IMO, but very strong in fab process implementation. nVidia got lucky moving from .25 microns to .18, and then from .18 to .15--pretty much ahead of everybody else. Being aggressive about it allowed them to ramp up MHz without paying much attention to performance increases coming from the core architecture itself (not to say there weren't any--just nothing revolutionary). Where the other guys were conservative in their approach to adopting new processes, nVidia has been consistently aggressive. At .13 microns their luck ran out and exposed the soft underbelly of the company--its lack of imagination in core architecture design.
I mean, this is something that seems very clear to me. It matches the historical product record, and to back it up you have the public record of the nVidia CEO, JHH, saying well over a year ago that nv30 was impossible at .15 microns--hence they were going to .13 for it. These are statements he made long before nVidia even knew if it could do a viable, competitive nv30 at .13 microns--which proves the point conclusively, I think, of just how process-heavy nVidia's strategy had become.
By way of yet another concrete example--immediately after ATi had shipped R300 in August of last year with its 8-pixels-per-clock architecture, details on the architecture of nv30 remained a matter of gossip and conjecture until the nv30 reference design was officially unveiled at Comdex. nVidia officially released it as an 8-pixels-per-clock chip--it was in their marketing literature and could later be found even on their product boxes. Direct questions put to nVidia in interviews, such as "Is nv30 an 8-pixel-per-clock architecture?", were answered with this kind of reply: "Yes, we do 8 ops per clock." For a long time nVidia would not even answer the question--but sought only to evade and dodge it at every opportunity. nVidia simply didn't want it known what enormous differences there were between R300 and nv30--nor that nVidia was unable to match R300 architecturally--its only hope of doing that--extremely high MHz afforded by a good-yielding .13 micron chip--now dashed.
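To put some rough numbers on why that mattered (a back-of-the-envelope sketch only--the ~325 and ~500 MHz clocks and the 4-pixel-pipe layout for nv30 are the figures commonly cited for the retail parts, not anything nVidia or ATi stated in those interviews, so treat them as illustrative):

# Peak single-texture color fillrate = pixel pipes x core clock (illustrative figures)
def fillrate_mpixels(pipes, mhz):
    return pipes * mhz

r300 = fillrate_mpixels(8, 325)   # 2600 Mpixels/s -- 8 pixels/clock at ~325 MHz
nv30 = fillrate_mpixels(4, 500)   # 2000 Mpixels/s -- 4 color pixels/clock, even at ~500 MHz

print("R300:", r300, "Mpixels/s")
print("nv30:", nv30, "Mpixels/s")

Even with the clock advantage the .13 micron process was supposed to deliver, 4 pixels per clock doesn't catch 8--which is exactly why the "ops per clock" phrasing had to do so much work in those answers.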
So what was nv30, apart from a .13 nv25 with enhanced integer precision and a bolt-on fp capability? Very little else, I suspect. It wasn't really "new" or "revolutionary", despite nVidia's promotional efforts to the contrary. But most of all, for a company weak in gpu architecture design yet top-heavy in advanced process implementation, the gpu-design chickens finally came home to roost with the .13 nv3x gpus.
There are lots of supporting examples we could discuss--like nVidia's public show of blaming TSMC and moving to IBM for fabbing, only to back off of that substantially later on--to illustrate that their core strategy has always been fab-centered, but the real question is:
What can nVidia do to break out of the mold it's been in since 1999 relative to gpu design? Short of cleaning house and bringing in some fresh blood, I can't see a lot they can do. IMO, nV3x probably represents the very best effort the current gpu-design teams at nVidia are capable of at present. Yes, you can teach old dogs new tricks--but it's never easy...