Bouncing Zabaglione Bros. said:
However you have taken those few words very much out of context, and not really addressed the main points that I made. If your product doesn't have some kind of lifespan and some kind of regular improvements, it probably isn't economically viable. I can't see the market changing so that you get one revolutionary card every three years that costs $1200 and that's all you get to buy until the next three year cycle begins.
For instance, we have the R350 six months after the R300, and it's basically an evolutionary improvement. Are you suggesting that ATI should not have launched the R300, and waited until now so that it had time to "evolve" into the R350? Obviously that would have lost ATI a lot of sales at a time when they were the only show in town, and much, much better than the competition, and thus still giving customers a great benefit.
Frankly, I don't understand what you are arguing.
What people have been saying in this thread is that while providing small increases in clock speed during the lifetime of a chip is nice and all, it doesn't really provide a sufficient step to justify an upgrade if you own a previous, lower-clocked variant of the same chip. I contributed that the benefit to the manufacturer lay rather in the relative market positioning vs. their competition, but agreed that small clock hikes don't provide much impetus for upgrades.
Do you agree or disagree with those sentiments?
As I said, I don't really understand what you're arguing, so forgive me if I respond in a vague manner. We are talking about semiconductors here, and more specifically about a class of ASICs that is amenable to parallel processing, and thus benefits directly from increases in both clock speed and circuit density when new lithographic processes become available. (It is therefore quite reasonable to assume that GPUs should grow faster than CPUs in processing power. There are also natural generational steps for such devices that coincide with the progression to more advanced lithography. It isn't as clear-cut as all that, of course, since there is quite a bit of variation, and evolutionary tweaking, possible within the same wavelength. But that is why I expressed a belief that it would be difficult, within the present power constraints, to provide a factor-of-two performance improvement across the board simply by moving from TSMC 0.15 to 0.13, never mind the memory technology necessary to feed it.)
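To put some rough numbers on that argument, here is a back-of-envelope sketch. The figures assume idealized square-law density scaling, a throughput-bound design where performance tracks units times clock, and the usual dynamic-power relation; they are my own illustrative assumptions, not data from TSMC or either IHV.

# Back-of-envelope sketch of the 0.15um -> 0.13um argument above.
# All numbers are idealized assumptions, not vendor data.
old_node_um = 0.15
new_node_um = 0.13

# Idealized density gain: transistors per unit area scale with the
# inverse square of the linear feature size.
density_gain = (old_node_um / new_node_um) ** 2      # ~1.33x

# For a throughput-bound, parallel design: performance ~ units * clock.
# Whatever density doesn't deliver toward 2x, the clock has to make up.
target_speedup = 2.0
required_clock_gain = target_speedup / density_gain  # ~1.5x

# Dynamic power goes roughly as capacitance * voltage^2 * frequency,
# so a ~1.5x clock hike without a matching voltage drop busts the old
# power envelope - the constraint referred to above.
print(f"density gain: {density_gain:.2f}x, clock gain needed: {required_clock_gain:.2f}x")

Even under that ideal scaling, the shrink alone only gets you about a third of the way to 2x, which is why the clock (and hence the power budget) would have to carry most of the load.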
There has been some talk, primarily from nVidia, about lengthening product cycles, but it's a double-edged sword. The benefit is obviously that you can spread development costs over a longer period. The less obvious drawback is illustrated by the sentiments in this thread: if you don't progress as fast, people don't feel the need to upgrade as often, leading to reduced sales, primarily in retail. Furthermore, general consumer interest in your field of products will wane with a lack of interesting development.
That last point may be another reason for releasing these relatively pointless clock revisions: they help maintain an impression of continuous progress, and give the hardware sites/rags something to write about. Keeps the pot simmering, so to speak.
I guess I exemplify the second consideration. Some time ago, I said that I would stop visiting B3D until we hit the 0.09um node, because I didn't think ATI/nVidia would be able to come up with anything other than more of the same until their density/power budgets changed substantially. And to me, as someone who isn't a professional in this field, doing the same things at just one step higher resolution isn't all that interesting. The benchmark cheating changed that, though. Very interesting to see the reactions and responses.
Peace,
Entropy