I just do not see how asking tens of thousands of companies to modify their software in the hope of Intel agreeing to bless everyone with a higher lowest common denominator is more reasonable than Intel doing it first and then allowing the software companies to catch up.
I guess it's my fault for expecting Intel to be the leader instead of the follower.
For the last couple of decades we've seen remarkable and relentless improvement in single-threaded performance of x86 cores. This has been driven by extremely impressive advances in the sophistication of the core architectures, and in the manufacturing processes. Like them or not, Intel has been the leader here (on average). Accusing them of being a follower is disingenuous, verging on clueless.
Why have these improvements in the single-threaded performance been necessary? Why build Pentium Pro when they could have thrown half-a-dozen 486 cores on a chip? Because the software people claimed it was too hard to use multiple cores. Twenty years later they're saying the same thing, but look at the scale of increase in complexity that Intel have managed in the core in the same time-frame. Today Intel are parallelising your single-threaded code for you, in real-time, in hardware. Just so you don't have to think about it.
Before then, they were patting themselves on the back for releasing the i486 at all, a modest improvement about 4 years after the introduction of the i386 in the absence of serious competition (I remember someone in the industry said he attended a talk where that was essentially the message). They also aggressively tried to stifle competing innovation with their rebates program and litigation against companies that had reverse engineered their instruction set, which in the end cost them a piddling 1.25 billion dollar fine. This was a very successful program for Intel, but terrible for the world of computing.
It's not really a chicken and egg problem per se. Software with sufficient expressed parallelism will not pay any penalty running on fewer cores. People just have to stop doing parallelism by "moving X into another thread" and stopping when they get adequate use of 2-4 cores...
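As a sketch of what "sufficient expressed parallelism" can look like (my example, not the commenter's): decompose over the data and let the worker count follow the machine, rather than hard-coding "move X into another thread". The same code then runs correctly on one core or sixty-four.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// parallelSum expresses the parallelism over the data: one chunk per
// available core, rather than a hard-coded "second thread". On a
// single-core machine it degenerates gracefully to a single goroutine,
// paying essentially no penalty for running on fewer cores.
func parallelSum(xs []int) int {
	workers := runtime.NumCPU()
	if workers < 1 {
		workers = 1
	}
	chunk := (len(xs) + workers - 1) / workers // ceil division
	if chunk == 0 {
		return 0
	}
	var wg sync.WaitGroup
	var mu sync.Mutex
	total := 0
	for lo := 0; lo < len(xs); lo += chunk {
		hi := lo + chunk
		if hi > len(xs) {
			hi = len(xs)
		}
		wg.Add(1)
		go func(part []int) {
			defer wg.Done()
			s := 0
			for _, v := range part {
				s += v
			}
			mu.Lock()
			total += s
			mu.Unlock()
		}(xs[lo:hi])
	}
	wg.Wait()
	return total
}

func main() {
	xs := make([]int, 1000)
	for i := range xs {
		xs[i] = i + 1
	}
	fmt.Println(parallelSum(xs))
}
```

The point is that the decomposition is parameterized by the core count instead of baked in, so scaling from 2 to 8 cores requires no code change.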
Cilk/Cilk++/Threading Building Blocks/Grand Central Dispatch... solved (-:
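Those frameworks share a fork-join model. As a rough Go analogue (not any of those libraries' actual APIs), Cilk's spawn/sync maps onto a goroutine plus a WaitGroup, with a serial cutoff so tiny subproblems don't swamp the scheduler:

```go
package main

import (
	"fmt"
	"sync"
)

// fib computes Fibonacci numbers fork-join style: one recursive call is
// "spawned" onto another goroutine while the current goroutine continues,
// and the WaitGroup plays the role of Cilk's sync. Below a cutoff we fall
// back to plain recursion, since forking a trivial task costs more than
// it saves.
func fib(n int) int {
	if n < 2 {
		return n
	}
	if n < 12 { // serial cutoff: task too small to be worth forking
		return fib(n-1) + fib(n-2)
	}
	var a int
	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // "spawn"
		defer wg.Done()
		a = fib(n - 1)
	}()
	b := fib(n - 2) // continue in the current goroutine
	wg.Wait()       // "sync"
	return a + b
}

func main() {
	fmt.Println(fib(20))
}
```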
Intel and AMD have already done it - even quad core processors are barely utilized! If IHVs were "following" software you'd still just have dual cores. And 6/8 cores isn't really too ridiculous to get either... but most consumers rightly don't bother because it doesn't make anything they do faster!
I'm well aware - realize that I'm a software developer, not a hardware guy. Making things parallel is what I do for a good chunk of my job today, and even more so in the past.

Things become more complex when the more parallel software has a higher aggregate runtime, for example because a higher memory footprint puts more pressure on caches, or because some operations are done redundantly between threads. It really can become a case of making tradeoffs.
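A toy model of that tradeoff (the numbers are mine and purely illustrative): if parallelizing adds a fixed amount of redundant work per worker, wall-clock speedup can dip below 1 on few cores even while aggregate CPU time grows.

```go
package main

import "fmt"

// speedup models the tradeoff described above: T units of real work,
// plus s units of redundant work paid by each worker (duplicated setup,
// extra memory traffic, cache pressure). Wall time is T/p + s, so the
// aggregate runtime T + p*s grows with p even as wall time may shrink.
func speedup(T, s float64, p int) float64 {
	wall := T/float64(p) + s
	return T / wall
}

func main() {
	// Hypothetical numbers: 100 units of useful work, 30 units of
	// redundant per-worker work. Note p=1 is already below 1.0: the
	// parallel restructuring itself costs something.
	for _, p := range []int{1, 2, 4, 8} {
		fmt.Printf("p=%d speedup=%.2f\n", p, speedup(100, 30, p))
	}
}
```

With these made-up constants the parallel version only pulls ahead of the serial one from two cores on, which is the "adequate use of 2-4 cores and stop" trap the earlier comment describes.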
Dude... did you miss his emoticon? I hope you did, because if you really think that Ing doesn't know what he's talking about then I'm sort of uninterested in whatever else you have to say. I'll assume a miscommunication here.

No. Nothing in your list solves anything but the very easiest problems in parallel programming. Thinking that they solve anything that wasn't solved 30 years ago demonstrates that you understand absolutely nothing about why MT is hard.
I guess it's my fault for expecting Intel to be the leader instead of the follower.
No. Nothing in your list solves anything but the very easiest problems in parallel programming. Thinking that they solve anything that wasn't solved 30 years ago demonstrates that you understand absolutely nothing about why MT is hard....
And what a beast it was!
It also includes an on-board power control unit, itself roughly equivalent to a 486 processor, that manages features like Intel's Turbo Boost.
There is an interesting dilemma and an opportunity. I worked with Intel for many years. We had an opportunity to bring out 486 series of chips — 386 was one of my children. The 386 series was working great, it was highly manufacturable and very cost-effective.
Then, we introduced the 486 — it was expensive and it was hard to manufacture. So what did we do? We got rid of the 386 as fast as we could and moved to 486. It was a lousy business decision, until a year or two later.
We had to eat our children — maybe that is a bit too graphic — but if we didn't do it, there was the risk of somebody else doing it. That's how I see the dilemma that the IT services companies are facing.
The cloud is a radically more efficient model for delivering services and applications. They may see the revenue opportunity declining in the short term when they make the transition, as Intel did when they moved from 386 to 486.
Profit margins will be lousy, until you get to the other side. The transition will be painful. If they don't evolve and transition, they will increasingly become a boat anchor for the customer.
Here's a relevant quote from Gelsinger about Intel's situation back then, from http://blog.smartbear.com/community/gelsinger-and-meyer-two-cpu-designers-who-changed-the-world/
Great foresight on his part but it's clear that Intel made this decision largely in the absence of direct competition from other contemporary companies in those days. It's not too different from Apple's story in the last decade or so.
Those other CPUs you mentioned were sold in higher priced and lower volume computers compared to the ubiquitous x86 PC. It was only when AMD and Cyrix really emerged w/ their 486 clones that Intel was forced to dramatically drop their prices and started fighting for their lives. (RISC CPUs may have had less of a direct economic impact on Intel's bottom line at that time, but they were showing much better integer performance than Intel parts before the coup that was the Pentium Pro. I think the switch to internal uops by AMD and Intel clearly shows the impact of certain RISC philosophies on their internal designs.)

I don't understand how you get to that conclusion. Gelsinger is saying they pushed the 486 even though they made less money doing so. If not because of competition, then what?
Up until the 486, Motorola's 68K series was competitive. You also had a rich set of RISC competitors, MIPS, SPARC, PA-RISC, Power and Motorola's 88K.
The 486 was a *huge* improvement on the 386 with more than twice the IPC and integrated FPU. It marked the beginning of Intel entering the workstation market.
Cheers
Intel said the new 1,000-piece price for the 66 MHz Pentium processor would be $750, while the 60-MHz would be priced at $675 each, down 14 percent from the current prices. It said it would also cut the price on the 66-MHz Intel 486 DX2 processor to $360 each in 1,000-piece quantities, down 18 percent from current prices. And it plans to lower the prices of other 486 processors.
...
Although the 486 price cuts will most directly affect Advanced Micro Devices Inc., which sells 486 clones, Ben Anixter, the company's vice president for external affairs, also said the cuts were anticipated. "We're shipping 486 DX2 chips now, and our prices are essentially their prices," he said. "This does exemplify what competition will do."
EDIT: http://processortimeline.info/proc1980.htm
indicates that the 386 intro'd at $299 whereas the 486 intro'd at $900. This is a rather huge premium even when taking inflation into account; compare this to how new models introduce at about the same prices as prior ones nowadays. Prices dropped when clones went to market:
You're comparing apples to oranges.
The 386 wasn't faster than the fastest 286 running 16-bit code when it launched. The 486 was an instant doubling of performance. You also need to factor in the 387 co-processor to get a fair comparison; that doubles the cost of a 386 system.
If you look at competitors pricing (as per your link), the 33 MHz 68030, used in Apple Macs, was nearly $700 and very much inferior to the 486.
The price is a result of supply and demand, so yes, the high price is in part the result of lack of competition, but it is also a result of supply constraints; The die size of the 486 was initially more than three times the size of the 386. It wasn't until the 486DX (a shrink and a redesign) that the 486 hit mainstream prices and volumes.
The story repeated itself with the Pentium, launched initially in 0.8um BiCMOS: big, hot and hard to manufacture. It repeated itself again with the PPro, which didn't sell in large quantities until the P-II was released.
Cheers