I'm not sure how to word this, but people have said ATi had to take a similar approach to designing the R300 that Intel takes with its CPUs: something along the lines of custom-designing each transistor rather than relying on the standard cell "libraries" provided by their foundry.
But anyway, considering the point above, and realizing this is a gross oversimplification of the scenario: will graphics chip makers be forced to devote more and more engineering resources and/or time to the design of future chips to remain competitive?
Up until the .13 micron issues NVIDIA is having with the super-complex NV30, AFAIK things have been relatively smooth sailing down to .15um...
It seems to me that NVIDIA (and ATi, up to R300) has been using cutting-edge process technologies as a crutch to keep beefing up their GPUs, while ignoring squeezing the most out of mature processes...
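Just to put rough numbers on why the bleeding-edge node is so tempting (a back-of-envelope sketch in Python, assuming an ideal linear shrink, which real processes never quite deliver):

# First-order scaling going from a 0.15um to a 0.13um process.
# Real shrinks deliver less than this; treat these as upper bounds.
old_node = 0.15  # feature size in microns
new_node = 0.13
area_ratio = (new_node / old_node) ** 2  # linear shrink, squared for area
print(f"Same circuit takes ~{area_ratio:.0%} of its old area")   # ~75%
print(f"Same die fits ~{1 / area_ratio:.2f}x the transistors")   # ~1.33x

So each node hop hands you roughly a third more transistor budget essentially for free, which is a lot cheaper than re-engineering your design to wring the same gains out of a mature process.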
Intel has been able to ramp up its .13um process so fast because of what? More R&D cash, correct? Intel spends more on R&D in one year than NVIDIA and ATi combined have over the past few years.
I guess the final question is: are we going to see product cycles lengthen and innovation slow down really soon? (At the IC level, not memory technology.)