Well, it is an editorial. I think quite a few of us have been guilty of producing one of those now and again. I don't know how much Kyle knows about the internal workings of these chips, or why they perform the way they do, but he certainly knows things from the end user's perspective. Just because he takes that point of view for most of his arguments does not mean it is invalid... these are supposed to be customer-centric businesses, after all.
While I don't agree with a lot of his contentions in the article, he certainly has a right to put forward his point of view.
Some years back I wrote an article called "Slowing Down the Process Migration," in which I talked about where I thought some trends were headed. One of the biggest points was that these companies will reach a point at which die shrinks become harder and harder, and to keep die sizes manageable they will have to extract more functionality from fewer transistors. NVIDIA's use of custom scalar units certainly underlines this point: they significantly increased performance without doubling that particular transistor count, and they did it by raising the clock speed.

While the promise of the Intrinsity technology never appeared to go anywhere, NVIDIA making the tough choice to do custom portions of the GPU is a far-reaching decision. Within two generations' time we will see more portions of the GPU go full custom, which will make for better performance and power consumption versus a traditional standard-cell design. So instead of 256 scalar units in standard cell, we get 128 units running at more than double the speed, for equivalent throughput without much more power consumption. I certainly hope ATI/AMD can do something like this in short order...