I think ISVs tend to be conservative and deeply interested in risk/reward. The more niche a market is (some of the GPGPU areas), the more likely either side is to be able to convince an ISV "on the merits". The broader-based (and, by definition, lower-priced per unit) the market is (like, for instance, gaming), the harder it will be to get ISVs to move outside their comfort zones and the "fat part of the market", because there will be less control over creating a compatible client base.
That's true, but aren't they as conservative as you say because some, if not all, of them had to learn a few things the hard way?
My point is that, yes, both sides will have a segment of the overall graphics market where CPUs and GPUs can compete; yet it'll take a hell of a lot more before one replaces the other across the entire market.
And pardon me, but whether a GPU or CPU has X GPGPU abilities might interest me as a gamer from a technological perspective; it won't in the least define my future buying decisions either.
I think history shows that the GPU guys are much more used to those scenarios than the CPU guys. Doesn't mean the CPU guys can't learn tho, and clearly the evidence in the second half of 2007 is that Intel seems to be moving to shore up the software end. So it's a little better than it looked in the early part of the year. Tho I still think that's a big mountain they've only begun to climb... but at least they have some experienced climbers on the team now.
Depends on where the experience of those "climbers" comes from; I'm only aware of some of the former 3DLabs folks. Great professional accelerators, but that was about it.
Personally I've been fooled, so to speak, several times in the past by high hopes for some supposed new contender in the graphics market, only for it to turn into a soap bubble in the end. For years now I've raised an eyebrow every time something like that comes up, since the amount of resources a large company has isn't the defining factor in the end; what matters is how many resources that company really intends to devote, and whether it's willing to tolerate red numbers from such a startup project for a couple of years.
Look at Hyperthreading, and how often dual-core optimizations, when they finally started happening, noted that Hyperthreading got serious benefits too... years later. Or, to counter your example, why did NV3 begin NV's climb to success?
Multi-threading has been in GPUs since their birth; I guess it was inevitable for CPUs to start working in that direction in order to increase efficiency. The META core is one example that surfaced long before Hyperthreading did in the mainstream CPU market. In other words, it was already proven technology in other areas, and it was only a matter of time until it got incorporated into mainstream CPUs too. Ironically, the META was actually developed to avoid multiple cores on one die.
With NV3x you're now making it a wee bit more complicated; for D3D9.0 several IHVs proposed their solutions before the API was defined, and coincidentally ATI provided the better and more efficient solution (R3x0). If NVIDIA learned a lesson from NV3x, it's in my book twofold: one side is to try to provide the best solution possible, and the other is to minimize risks and maximize margins. Case in point: after NV3x, did NV release any high-end GPU on the latest and smallest manufacturing process? Of course I understand what you're trying to get at, but I doubt NV didn't make any serious changes to their roadmap once they started to feel that NV30 was going to flop. It took them several years to recover from that one, until they reached the peak of their success with G80.
***edit: ...and G80 was/is as successful as it is because (like the R300 in the past) it combines the best possible balance between features, performance and IQ for its timeframe. We unfortunately don't get such gifts as gamers very often now, do we?