Intel Loses Steam Thread

And though Intel is ahead process-wise, its competitors are not as far behind as the names of their nodes seem to indicate. (See: http://www.chip-architect.com/news/2010_09_04_AMDs_Bobcat_versus_Intels_Atom.html) It's looking more and more like Intel is sitting amongst peers as time goes on.

Intel used double patterning for 45nm whereas TSMC used immersion lithography. Double patterning implies stricter layout rules and lower density, something the vertically integrated Intel can more easily enforce internally than TSMC can impose on its customers.

I don't think Intel will lose their process lead, quite the contrary. It is capital intensive to be on the bleeding edge. The (much) higher gross margin of Intel's products means they can throw more money at being first at new nodes.

Their 450mm wafer fab is postponed because they currently have excess capacity. They will react as they have in the past, by aggregating more of the valuable silicon in a computer (southbridge/northbridge/GPU) into their CPUs.

Haswell GT3e holds the blueprint of what is to come; the 128 MB of DRAM on a logic process is just the initial test run. Next we will see a stack of multi-Gbit proper DRAM dies attached directly to the CPU. For the laptop/desktop market, that enables more integrated GPU silicon; in the server market, it means more cores per socket.

Cheers
 

Many of Intel's business strangleholds from the past are unravelling, and although it will remain solidly profitable as far as I can see, it won't be able to dwarf rivals like Qualcomm, TSMC, or Apple (the latter two I consider rivals as well) as it did AMD, Cyrix, etc. through the means it once used.

Intel will have to move into lower-margin "good-enough" mobile territory to continue to grow and to be able to fund these ever more expensive fabs and process refinements. Its high-margin PC CPU market has been shrinking over the last year or two as people move to mobile devices for their everyday needs. I'm also not seeing much of a performance boost from upgrading desktop CPUs in recent years, as Intel's marginal improvements have been slowing down since the integration of the memory controller on die with Nehalem.

As you say, the next quantum leap in performance will come from moving more RAM closer to an APU or CPU core. The fairly low-bandwidth 128 MB victim cache on the Iris Pros didn't bring much better general performance, but it does seem like a tentative first attempt, as you mentioned. We will see if their next try brings a compelling enough boost to significantly increase profit growth by bringing people back to Intel's high-margin businesses, but I suspect not, since the competition is not sitting still and there is a race to the bottom on price now that the playing field is much more level than before.
 
Many of Intel's business strangleholds from the past are unravelling, and although it will remain solidly profitable as far as I can see, it won't be able to dwarf rivals like Qualcomm, TSMC, or Apple (the latter two I consider rivals as well) as it did AMD, Cyrix, etc. through the means it once used.

Intel will have to move into lower-margin "good-enough" mobile territory to continue to grow and to be able to fund these ever more expensive fabs and process refinements.

I think Qualcomm, Samsung and Apple are at least as vulnerable as Intel. Once the Chinese get into gear (Rockchip, MediaTek, Allwinner etc. for SoCs; Lenovo, Eben, Vido etc. for systems), these companies are going to see their margins erode, possibly fast.

The only winner here, regrettably, is Google.

Its high-margin PC CPU market has been shrinking over the last year or two as people move to mobile devices for their everyday needs. I'm also not seeing much of a performance boost from upgrading desktop CPUs in recent years, as Intel's marginal improvements have been slowing down since the integration of the memory controller on die with Nehalem.

They have a stranglehold on corporate, they have a stranglehold on servers. AMD's new 8-core ARM SoC, with its SPECint_rate 2006 score equivalent to a dual-core i3's, won't change that.

And I don't think we've seen the end of WinTel in the mobile space. When Microsoft decides to do it right and converge every consumer platform into one Windows, we might see a few shakeups. A dual-core Silvermont with 4GB of RAM would make for a perfectly capable PC for most people.

Edit: Wrt. performance: While they have only doubled performance from Nehalem to Haswell, they have done so while lowering power consumption to one third.
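(Taken at face value, doubling performance while cutting power to a third works out to 2 ÷ (1/3) = 6× the performance per watt.)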
Cheers
 
I assume you're talking about floating-point performance using AVX2 vs. SSE 4.2, which requires a rewrite and recompile; I don't think bread-and-butter integer and memory operations saw a similar uplift. The improvement in performance per watt is quite impressive, I agree.
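To make the rewrite/recompile point concrete, here's a minimal sketch (my own illustration; the function names, N, and the compiler flags are assumptions, not anything Intel ships) of the same y += a*x loop written once with 4-wide SSE and once with 8-wide AVX2 plus FMA. Built with e.g. gcc -O2 -msse4.2 -mavx2 -mfma, the second version handles twice the floats per instruction and fuses the multiply and add, which is roughly where the headline FP doubling comes from; binaries compiled before Haswell simply never take that path.

/* Sketch: 4-wide SSE vs. 8-wide AVX2+FMA for y[i] += a * x[i].
   Compile with: gcc -O2 -msse4.2 -mavx2 -mfma saxpy.c */
#include <immintrin.h>
#include <stdio.h>

#define N 1024  /* multiple of 8, so both loops cover the array exactly */

/* SSE: 4 floats per iteration, separate multiply and add */
static void saxpy_sse(float a, const float *x, float *y) {
    __m128 va = _mm_set1_ps(a);
    for (int i = 0; i < N; i += 4) {
        __m128 vx = _mm_loadu_ps(x + i);
        __m128 vy = _mm_loadu_ps(y + i);
        vy = _mm_add_ps(vy, _mm_mul_ps(va, vx));
        _mm_storeu_ps(y + i, vy);
    }
}

/* AVX2 + FMA: 8 floats per iteration, one fused multiply-add */
static void saxpy_avx2(float a, const float *x, float *y) {
    __m256 va = _mm256_set1_ps(a);
    for (int i = 0; i < N; i += 8) {
        __m256 vx = _mm256_loadu_ps(x + i);
        __m256 vy = _mm256_loadu_ps(y + i);
        vy = _mm256_fmadd_ps(va, vx, vy);  /* va*vx + vy in one op */
        _mm256_storeu_ps(y + i, vy);
    }
}

int main(void) {
    float x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }
    saxpy_sse(0.5f, x, y);   /* y becomes 2.5 everywhere */
    saxpy_avx2(0.5f, x, y);  /* y becomes 3.0 everywhere */
    printf("y[0] = %f\n", y[0]);
    return 0;
}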

Intel and AMD themselves took business from brawnier, more expensive CPUs in the past; it's not out of the question that ARM or lower-end x86-64 cores can pull off the same coup in the era of Linux and cheap, good-enough, do-it-yourself microservers. Intel will also have competition from above: POWER8 looks extremely brawny, with headroom in its Centaur memory interface for stacked memory. I also doubt IBM would have sold off its x86 server division to Lenovo if it thought it had better margins than its big iron.
 
Intel and AMD themselves took business from brawnier, more expensive CPUs in the past; it's not out of the question that ARM or lower-end x86-64 cores can pull off the same coup in the era of Linux and cheap, good-enough, do-it-yourself microservers.

That worked because there isn't (or wasn't) a great deal of difference between a PC and a workstation, and between a workstation and a server. There is a big difference between a tablet and a server, even a microserver.
 
I think Qualcomm, Samsung and Apple are at least as vulnerable as Intel. Once the Chinese get into gear (Rockchip, MediaTek, Allwinner etc. for SoCs; Lenovo, Eben, Vido etc. for systems), these companies are going to see their margins erode, possibly fast.
Intel and Qualcomm may be in the same ballpark in terms of vulnerability, as they do not own a platform, provide content portals, or have much in the way of complete devices.
They are component providers to the likes of Samsung, and in shrinking fractions to an integrated device and content provider like Apple.
Apple rises above this by also being a total platform and content provider.

The thing I am curious about, as I watch the consumer market get carved up into walled gardens and platforms, is how they'd view Intel's small-chip entreaties.

Intel is a component provider, and an uppity one at that. Old giants like HP and Dell were on the same level as Intel, and why would any of the new giants want to replicate that?
Intel has tried branching out, as seen with the McAfee purchase and its IPTV effort.

Even if Intel provides technically compelling components, is it enough to make a difference to the owners of the walled gardens?
The danger there is if they can collectively shrug off better chips and vertically integrate with in-house tech or component suppliers with much more modest aims.

Perhaps if they move ahead with controlling consumer consumption with cloud services, Intel might see a gain on the server side. Consumer mobile may be iffy. The money isn't in the components, so if vertical integration becomes more extensive, the large corps may be able to simply crowd out the market Intel's business model caters to.
 
Exactly, they don't want to be Intel OEMs dancing to its tune... and that's the only business model Intel has right now. The phone/tablet makers are in a very strong position because, unlike RISC vs. Intel of old, the CPU is an order of magnitude further away in terms of consumer mindset.
 
I don't think Intel will lose their process lead, quite the contrary. It is capital intensive to be on the bleeding edge. The (much) higher gross margin of Intel's products means they can throw more money at being first at new nodes.
I wonder how you figure that. Intel is behind both Samsung and TSMC in investments, for example. Also, they share certain underlying tools, so while being aggressive in transitioning to a new node is certainly possible, there are limits to how far ahead you can go (see below for a little more on this).
Not only is it difficult to be much further ahead than Intel currently is, there are definite signs of TSMC and Samsung closing the gap, as the pressure for advances from mobile chips is stronger than the pressure from the PC market that drives Intel's volumes.
On top of that, the benefits of advancing to smaller process nodes are diminishing. In the past, not only did you get a density gain that lowered cost per gate almost proportionally, you also got substantially lower gate (and propagation) delays, lower power draw per gate, et cetera. Those benefits have lessened considerably, making the benefit of being a node ahead proportionately smaller. It is a much more valid proposition today to simply make a bigger chip on an older node than it was ten years ago.
Ergo, regardless of how large you estimate Intel's process lead to be at any point in time, the reality of the situation is that it matters less today than it used to in days gone by, and the competitive advantage from being on the bleeding edge of lithographic density is smaller than it was.
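To put illustrative numbers on that (assumed for the sake of argument, not sourced): if a full shrink still doubles density but the processed wafer costs 1.5× as much, cost per transistor only falls to 1.5 ÷ 2 = 75% of the old node's, where roughly flat wafer costs used to take it to 50%. At some point the bigger die on the mature node is simply the cheaper option.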

Their 450mm wafer fab is postponed because they currently have excess capacity. They will react as they have in the past, by aggregating more of the valuable silicon in a computer (southbridge/northbridge/GPU) into their CPUs.
Intel is currently number six in wafer capacity, with roughly half the capacity of Samsung and TSMC.
They have cooperated with Samsung, TSMC, Global Foundries and, if I remember correctly, IBM, in funding and developing 450mm wafer technology. As you say, 450mm wafers are put on hold because the pressure for volume isn't strong enough to justify the expense for the interested parties. But it isn't really Intel's decision to make. (Even if Intel wanted to, they aren't big enough to go it alone, and given that they already have overcapacity and just a few days ago made public that they won't outfit a couple of fabs for production at all, I'm not at all sure that they are terribly anxious to absorb the transition costs at this point in time, even if they could.)

3dilettante said:
Even if Intel provides technically compelling components, is it enough to make a difference to the owners of the walled gardens?
The answer to that one is simple - it makes a difference, but not a decisive difference.

However, even though Intel is held in high regard in internet fora like this one, I don't understand how it is possible to disregard that Intel has been trying for years to gain a foothold, and has failed even to produce mobile components that are attractive, much less offer them at an attractive price. On the contrary, even now they have to subsidize products that are already produced at a loss in order to have any hope of uptake.
The only ones who ever built phones with their chips were PC-market partners whom they can easily "encourage" to produce them.

If it was possible for Intel to be competitive in this segment, why aren't they, almost a decade after they started their efforts?
If Intel managed to become competitive, why would the two parties that make money in mobile, Apple and Samsung, choose to become dependent on them?
Why would Intel even try to compete with, for instance, Qualcomm and MediaTek in a struggle for bottom-feeder market share? Could it ever offer a very good ROI by Intel's standards?

Intel in mobile is an iffy proposition: they have tried and failed, and even if they were successful, their gains would be negligible by their own standards. Furthermore, by strengthening the mobile market, they would help undermine their cash-cow PC market. Even if they "win" they lose. And they aren't winning.

For Intel, mobile is the wrong battle to fight.
 
Intel is currently number six in wafer capacity, with roughly half the capacity of Samsung and TSMC.

TSMC owns four 300mm fabs and four 200mm fabs (and rents two more, so operates six), plus a bunch of 150mm fabs.

I don't know Samsung's aggregate fabrication capacity, but the bulk of Samsung's wafer starts are Flash and DRAM anyway.

Intel has eleven 300 mm fabs, three @ 14nm, three @ 22nm and the rest at 32/45/65nm.

Cheers
 
That's one way of looking at it, I guess, although I see it more as microservers encroaching on Intel's turf, and Intel trying to stem the tide by offering similar products. It's a choice between two evils for Intel. Either they do nothing as ARM-based platforms start eroding their market from below, or they compete with similar products, thus helping to legitimize and develop a market niche they would really prefer didn't exist at all, and where their margins are much thinner.

The same goes for the entire mobile segment (...)

Assuredly. There will probably be a narrative that 8-core Atoms and their competition are limited to peripheral roles like some networking, storage and front ends, while all the usual serious "server farm" stuff belongs to Ivy Bridge and Haswell. (Even now, Atom is used for high-end consumer NAS / entry-level business NAS.)

But who knows. Xeons are still cheap and very high perf/watt, and the Sun Niagara didn't take over the world (nor did the PowerPC A2 types).

What happens, though, when the ARM SoCs go for crazy I/O?

They have a stranglehold on corporate, they have a stranglehold on servers. AMD's new 8-core ARM SoC, with its SPECint_rate 2006 score equivalent to a dual-core i3's, won't change that.

A dual-core i3, that's still some big oomph for a server, relatively speaking. Don't servers just spend their lives idling, waiting for disk or network, or maybe limited by memory capacity? Or am I wrong because consolidation and the use of wasteful Java, JavaScript, Ruby, etc. can peg the CPU?

No idea what the long-term or end game is, but for the foreseeable future there's probably peaceful coexistence between ARM and x86.
/edit: The stranglehold on corporate is also Microsoft's. A lot of servers just run Windows. It's probable that Windows Server will still run only on x86. (I can't imagine Windows Server on ARM with Terminal Services serving regular win32 desktop applications, for one thing.)
 
That worked because there isn't (or wasn't) a great deal of difference between a PC and a workstation, and between a workstation and a server. There is a big difference between a tablet and a server, even a microserver.

There used to be a difference between a PC and a workstation: the former were 8088 shitboxes running DOS (glorified CP/M) while the latter had 68010s and up, running UNIX. Then 386/486 vs. RISC. The rest is history (SGI switching to the Pentium III, Itanic vs. the Opteron, and, well, Windows NT/2000/XP unifying the PC and the workstation).

Funnily enough, the first tablet computer was the GRiDPad. It had a 10MHz CMOS 8086. A decade later, the first Nokia smartphones were x86 :LOL: but I digress; both used much lower-end, lower-power, older x86 parts than the contemporary x86 state of the art. I'm poking fun.
 
The answer to that one is simple - it makes a difference, but not a decisive difference.
It's a bit philosophical to ask if a difference that doesn't make a change to decision making is a materially significant one.

If it was possible for Intel to be competitive in this segment, why aren't they, almost a decade after they started their efforts?
I think there are some legitimate technical and IP reasons, which Intel has been working to correct.
Things like the purchase of Infineon's wireless business (the initial lack of this kind of tech is why my early personal evaluation of Intel's efforts was dismissive) and the creation of an SoC process to provide the expected features, yielding silicon outside Intel's traditional strength of high-performance but comparatively leaky digital logic with insufficient support for mixed signal.
The combination of the IP and physical acquisitions hasn't quite happened yet, but there is an apparent will to make a real effort.
Inertia and margin concerns are another that would have added drag to any change in direction.

As Intel improves its products from a technical and physical standpoint, non-technical reasons become more dominant.
It's only fair, I suppose, that x86 find itself in this situation as it wasn't technical reasons that gave it supremacy in the first place.

If Intel managed to become competitive, why would the two parties that make money in mobile, Apple and Samsung, choose to become dependent on them?
Without something compelling, I'm not sure why they would. They provide platforms, so the amount of bother they are willing to accept for a component is commensurate with that reduced status.
If Intel managed to get to 5-7nm with significant power/performance advantages and significant cost savings that the foundries effectively could not get to while being stranded on a 20-16nm range with production issues (might not be as impossible as some other alternatives), it might matter if the manufacturers start fighting for some kind of competitive edge once the mobile market becomes mature.

One problem for providing a platform component without a much better scheme for software portability is that dual-sourcing concerns come into play for x86, much as they did back in the day.
The only secondary source of note right now, however, is either incapable of or unwilling to function in that role.

Why would Intel even try to compete with, for instance, Qualcomm and MediaTek in a struggle for bottom-feeder market share? Could it ever offer a very good ROI by Intel's standards?
That's a question of a lot of silicon chasing markets that were not growing with Moore's Law.
Even before the current situation, there was concern about the end game with deep sub-micron processes and larger wafers making it so the whole world's demand of silicon could be managed by several massive (and exponentially more expensive) fabs.
Intel's success is bound in a feedback loop of physical and design success fed by the revenue feeding into the same. However that cycle is run on an increasingly top-heavy, front-heavy, and glass-jawed paradigm, and it seems Intel's foundry, memory production, and capital conservation motions are indicative of the difficulty of that problem.
However, Intel is still profitable enough such that it has the means and time to keep trying to break through that barrier or to work around it.
In the case of lithography tools and the like, it's a field Intel originally felt was no longer economical to keep in-house, but it may not evaluate it as such if other pressures persist.
ASML, and the pond it operates in, is an order of magnitude smaller. That doesn't mean Intel can just replace it, at least not in total. If Intel can get something like exclusive access or independent development of a subset of tools needed for EUV, 450mm, or sub-7nm geometries, it won't need to.

However, the big money is in getting a stranglehold on consumers, and the consumption platform owners have their own concerns for profitability. In that regard, they are the bigger fish with a bigger pond, and their efforts have already had knock-on effects as they route revenues to different architectural directions. Other things, like the increasingly less open technical disclosures and emphasis on lock-down are perhaps another sign of the times.
 
Other things, like the increasingly less open technical disclosures and emphasis on lock-down are perhaps another sign of the times.

I'm quite disappointed that little to nothing comes out of Apple about its APUs. The A7 seems like quite a marvel, and all we know about it are those terribly spartan Apple slides with just a vapid but eye-catching "2x CPU performance" or "2x GPU performance" during their press events.

It lets Apple have temporary deniability for things like:

http://www.dailytech.com/University...+Circuitry+in+A7+iPhone+Chip/article34252.htm

I doubt lawsuits like this will make Apple more open about its engineering achievements, and if anything they'll try to lock things down even tighter. I curse Apple for the new reality of opaqueness when it comes to architectural details.
 
The A7 isn't the first to that party, but the way patent infringement penalties and big business have worked leaves me uncertain what the first shot would have been.
Intel didn't lock down after the Intergraph and Transmeta suits, for example.

However, the traditional lack of transparency in the mobile field is something Intel is touching upon, as is the reduced upside to disclosure when consumption platforms don't benefit that much by spreading knowledge about a component. It doesn't help people buy the things nor their content.
 
TSMC owns four 300mm fabs and four 200mm fabs (and rents two more, so operates six), plus a bunch of 150mm fabs.

I don't know Samsung's aggregate fabrication capacity, but the bulk of Samsung's wafer starts are Flash and DRAM anyway.

Intel has eleven 300 mm fabs, three @ 14nm, three @ 22nm and the rest at 32/45/65nm.

Cheers
And then you can break it down by total wafer capacity et cetera. It's a changing landscape, albeit relatively slowly. The point still stands though that Intel can't on its own decide fab supply chain schedules. ASML made a public statement back in the middle of December declaring 450mm wafer technology on hold until there was sufficient customer demand (link in Dutch). ASML's part has been highly publicized, but they are not the only technology supplier involved. My points were: Intel isn't strong enough to call the shots alone; and at a given level of fab infrastructure there are limits to what you can do, so Intel can't get arbitrarily far ahead of competitors who use similar equipment.

3dilettante said:
I think there are some legitimate technical and IP reasons, which Intel has been working to correct.
Things like the purchase of Infineon's wireless business (the initial lack of this kind of tech is why my early personal evaluation of Intel's efforts was dismissive) and the creation of an SoC process to provide the expected features, yielding silicon outside Intel's traditional strength of high-performance but comparatively leaky digital logic with insufficient support for mixed signal.
The combination of the IP and physical acquisitions hasn't quite happened yet, but there is an apparent will to make a real effort.
Inertia and margin concerns are another that would have added drag to any change in direction.
Good reasons all. I might add a couple that would still be in the technical domain. But again, Intel has to:
a) make a family of technically compelling devices,
b) sell them at attractive prices,
c) do so at a profit, or you'd have to be an idiot to commit to their products in the long term.
To date, they have at best achieved one out of three in mobile.

But even if they were capable of nailing all three, they would still lack a market. Why choose Intel in mobile?
3dilettante said:
Without something compelling, I'm not sure why they would. They provide platforms, so the amount of bother they are willing to accept for a component is commensurate with that reduced status.
If Intel managed to get to 5-7nm with significant power/performance advantages and significant cost savings that the foundries effectively could not get to while being stranded on a 20-16nm range with production issues (might not be as impossible as some other alternatives), it might matter if the manufacturers start fighting for some kind of competitive edge once the mobile market becomes mature.

One problem for providing a platform component without a much better scheme for software portability is that dual-sourcing concerns come into play for x86, much as they did back in the day.
The only secondary source of note right now, however, is either incapable of or unwilling to function in that role.
And this is so completely damning for Intel. Not only do Samsung and Apple have multiple sources for their devices, they can source from themselves! (Apple as a fabless chip designer.) And they already do so quite successfully. There simply is no realistic scenario where the leadership of Samsung and Apple would transition to being completely dependent on Intel x86 for their products.

Which leaves Intel fighting over scraps in the mobile space. And frankly, I'm not sure they could compete all that well with MediaTek for the medium- and small-scale Chinese manufacturers.

Intel's success is bound in a feedback loop of physical and design success fed by the revenue feeding into the same. However that cycle is run on an increasingly top-heavy, front-heavy, and glass-jawed paradigm, and it seems Intel's foundry, memory production, and capital conservation motions are indicative of the difficulty of that problem.
However, Intel is still profitable enough such that it has the means and time to keep trying to break through that barrier or to work around it.
As well they should! This is their area of strength, after all, and they should push that envelope as far as they can make it go. I don't question that Intel has really sharp manufacturing prowess, or that they can be successful in the future. I just question the wisdom of spending their money and engineering talent on trying to break into the phone/tablet SoC market where getting decent ROI seems like a hopeless proposition for them, at least with x86 products*.

They should look for other, juicier, fish to fry.


(* However, if they ditched x86 and somehow managed to make some seriously fabulous ARM products, they could say: "We offer by far the best, at an attractive price, and you're not locked in! If you don't like what we offer in the future, you are completely free to source compatible products from other suppliers." They could then proceed to run their competitors out of business until there was little competing development going on, at which point they could hike margins. But it's a thought experiment only. It won't, maybe can't, happen, and there is no indication that they are working on anything of the sort.)
 
And then you can break it down by total wafer capacity et cetera. It's a changing landscape, albeit relatively slowly. The point still stands though that Intel can't on its own decide fab supply chain schedules.
It doesn't need to decide the full chain. Manufacturing at new nodes leverages existing and older tech above the critical layers, so it comes down to a subset of the industry and a more rarified stratum where moves need to be made.

The leading-edge stuff already has something of a special relationship with Intel, because they are the first adopters for a number of things.
It would take a heavy investment for Intel to replicate the expertise of a tool maker over a whole range of products, but that doesn't preclude Intel's paying for early or exclusive access to tools it contributes heavily to commercializing.
ASML took advantage of this a little when it goaded additional investment outside of Intel for its 450mm research, if only to prevent Intel from having dibs on it.

If it cannot buy a tool maker, and it cannot buy access to tools that the other players couldn't manage to use until Intel figures them out anyway, sufficient paranoia may motivate Intel to spend its war chest on in-house development of a bleeding-edge tool or two. It would take a long time, but Intel used to do this, and it is getting back into fields it abandoned long ago.
It's not wanting for money, although such an approach would be very high risk.
However, the rewards are great. The original endgame of the fab treadmill was that the one that transitions furthest wins.

But even if they were capable of nailing all three, they would still lack a market. Why choose Intel in mobile?
Intel is the gateway to a very strong ecosystem, and it does provide a lot of technical, software, and infrastructure expertise.
For a smaller player that can't manage a full-court press for architectural design, manufacturing, and software platform development, I can see the temptation.
It doesn't quite matter as much if the big fish succeed in raising the barriers of entry sufficiently to eliminate marginal players.

I just question the wisdom of spending their money and engineering talent on trying to break into the phone/tablet SoC market where getting decent ROI seems like a hopeless proposition for them, at least with x86 products*.
They need the volume.
There is physical silicon that needs to go *somewhere*.
 
I agree, a fair race to the bottom to razor-thin margins would probably only hurt Intel, since the margins are now made in the final packaged products and the app stores / content portals, so Samsung and Apple are probably all too happy to watch their suppliers scrap over a few bucks per device.

Even if Intel magically had impossibly good technology for much less, device makers would be too smart to let Intel have all of the ARM business at the cost of competitors going out of business. Intel's business shenanigans of the past two decades are probably standard reading for courses in supply-chain logistics by now, and I'm sure more regulatory eyes are on Intel now as well. So neither Intel nor device makers probably want to revisit that scenario.

As for tech, one place where Intel currently lags other manufacturers is packaging RAM atop APU dies; pictures of Intel tablet and phone configurations seem to confirm a less compact arrangement of RAM beside the APU than in many ARM-based designs. I know Intel has started a push for more on-package memory with Iris Pro and TSV-based stacked memory with HMC, but Iris Pro is hot, high-end and expensive, and HMC will probably remain exotic and far from mass adoption for a while. Meanwhile, teardowns of almost any recent mobile device show that Apple, Qualcomm and Samsung have been stacking DDR memory (lower-speed RAM, without TSVs) on top of their APU dies for years now.
 
http://www.digitimes.com/news/a20140212PD209.html

Intel's upcoming 14nm Broadwell-based processors were previously scheduled for mass production at the end of the first quarter for release in the third; however, sources from the upstream supply chain say the processors have recently been delayed and will not be available until the fourth quarter.

Broadwell's mass shipments will also be postponed to the first quarter of 2015. The sources believe the delay is partly due to the slow digestion of Haswell processor inventories in the market.

The sources pointed out that Broadwell processors will only have limited volumes in the fourth quarter and only U- and Y-series models will be available initially. Pentium and Celeron models will be released in the first quarter of 2015.

Without any strong drivers, the sources expect notebook demand to remain weak in the first half, with a chance to pick up in the third quarter through price cuts. In the fourth quarter, since only limited Broadwell processor supplies will be available, the industry is unlikely to see any replacement trend.
 
Need to clear out Haswell inventories, blame the delay on yields. They've done it in the past. Of course yields will surely get better with the extra time, but I doubt that's the main reason for the delay.
 
Maybe they could (and I know this sounds crazy) offer discounts on Haswell? If I could grab a 4670K for $180 (I don't live in NA), I would replace my seasoned Phenom II X4 955 instead of waiting for Broadwell.
 