Don't forget that nVidia once went in a similar direction... (the die size change in the GeForce architecture was much smaller, and there has been no die size change in the GeForce3 architecture).
You missed my point. No, nVidia has never done what ATI did with R-300.
nVidia has done one of two things in the past, with respect to new cores and new processes:
1) They introduce a new core (with a significantly higher transistor count) on a mature process, which has resulted in lower-than-'anticipated' clock rates. (Examples: Riva 128 to TNT-1, TNT-2 to GeForce 256.)
Or
2) They introduce a new core on a new process, thus 'maintaining' clock rates and possibly achieving higher ones. (Examples: GeForce2 Ultra to GeForce3, GeForce3/4 to NV30.)
To date, nVidia has never done what ATI just did: introduce a new core with significantly greater complexity, on an old process, while maintaining or even bumping up clock rates.
In other words, my basis for saying that ATI could keep up its "significant new core every 12 months" cadence in the future, even when a new process isn't available, is that they have just demonstrated they don't 'need' a new process to churn out a very impressive part.
nVidia has demonstrated that they will produce a new core on an old process, but they have also demonstrated that those particular products are not overly impressive. Historically, nVidia has 'needed' a new process to deliver one that is.
As for how "easy" it is to use a more advanced process, I don't think it's easier in any way.
Allow me to clarify. You face different issues when using a bleeding-edge process than you do when pushing the limits of a mature process. Neither route is easy. But from the standpoint of simple economics, with a given number of transistors it's cheaper ("easier") to make the part on a more advanced process.
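To put some purely hypothetical numbers on that economics point: die area scales roughly with the square of the feature size, so the same transistor budget on a smaller process means more dies per wafer and better yield per die. A minimal sketch in Python, assuming a simple Poisson yield model and made-up wafer-cost and defect-density figures (none of these values come from the actual parts):

```python
# Illustrative sketch (all numbers hypothetical) of why the same transistor
# count is cheaper on a smaller process: die area shrinks roughly with the
# square of the feature size, giving more dies per wafer and better yield.
import math

WAFER_DIAMETER_MM = 200.0   # 200 mm wafers, typical of the era
WAFER_COST = 3000.0         # hypothetical cost per processed wafer ($)
DEFECT_DENSITY = 0.005      # hypothetical defects per mm^2

def cost_per_good_die(die_area_mm2):
    """Cost of one working die under a simple Poisson yield model."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    gross_dies = wafer_area / die_area_mm2        # ignores edge loss
    yield_rate = math.exp(-DEFECT_DENSITY * die_area_mm2)
    return WAFER_COST / (gross_dies * yield_rate)

# Same design, area scaling roughly with (feature size)^2:
area_150nm = 220.0                             # hypothetical mm^2 at 0.15
area_130nm = area_150nm * (0.13 / 0.15) ** 2   # ~165 mm^2 at 0.13

print(f"0.15um: {cost_per_good_die(area_150nm):.2f} $/good die")
print(f"0.13um: {cost_per_good_die(area_130nm):.2f} $/good die")
```

With these made-up figures, the shrink roughly halves the cost per good die, which is the sense in which the more advanced process is "cheaper" for a part of a given complexity.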
nVidia obviously thought it would be "easier" to make the NV30 on 0.13. (In fact, didn't their CEO say that 0.13 is really required for a part as advanced as the NV30?)