Nehalem - 16 Thread x86 Monster!

Must admit the "coolest" thing I saw (not sure if this was Nehalem) was the ability to notice the chip was only running single-threaded... and automatically overclock one core.

Obviously all the "shut this bit down" stuff is great, but that's just an extension of the same idea...

auto-turbo-boost sounds like a great idea :)

[edit]
Should have searched first, that's in Penryn :D
"Enhanced Dynamic Acceleration Technology"
 
Anyway... if we say quad or more cores are mostly useless for desktop usage, then why should Cell be any less useless?

Well, Cell is largely useless/overkill as a desktop chip.
This incarnation of the Cell was born out of the realization that a modern processor, even a very modest one, is ample for most tasks asked of it. The tasks where it falls short cluster around certain types of data/computation. Hence the idea of having a central housekeeping processor, capable of farming out tasks to surrounding units that are specifically tailored to the kind of computation where the general purpose CPUs fall short, and a programming model, architecture and memory hierarchy that is designed for the purpose.
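The farm-out model described above can be sketched in a few lines. To be clear, this is not Cell's real programming model (that is C with SPE kernels, DMA transfers into 256 KB local stores, and mailbox signalling); the names `ppe_like_dispatch` and `spe_like_kernel` are illustrative inventions, and Python threads stand in for the dedicated compute units:

```python
# Toy sketch: a central "housekeeping" core partitions the work,
# farms chunks out to compute-oriented workers, and gathers the
# partial results. Not Cell's actual API; names are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def spe_like_kernel(chunk):
    """A compute-heavy kernel of the sort you'd offload:
    here, a sum-of-squares over one slice of the data."""
    return sum(x * x for x in chunk)

def ppe_like_dispatch(data, n_workers=4, chunk_size=1024):
    """The housekeeping role: partition, dispatch, gather."""
    chunks = [data[i:i + chunk_size]
              for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(spe_like_kernel, chunks))
```

The point of the pattern is that the coordinator stays simple and general while the workers are tuned for one kind of computation, which is exactly why it wastes silicon on workloads that never exercise the workers.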

It's a good approach, but only a small subset of general computer users actually need all that extra floating point oomph, so for general computation the Broadband Engine carries way too much extra silicon to be optimal (just as, for instance, a 4- or worse, 8-core x86 processor does).

Out of 200+ million PC buyers per year, how many would prefer a 4-core Nehalem over an 8-core one that costs more than twice as much to produce and draws more power? Hell, how many would be just as well served by today's Conroe? The vast majority, that's who. This presents Intel and AMD with a bit of a problem, because they want to keep selling CPUs, and they want to keep selling them at decent profit, to an audience that by and large just doesn't need all that much more in the way of computing power - a tiny x86 chip coupled with video acceleration would serve 9X% of the market beautifully.

Even though I largely sympathize with the design ethos of the Broadband Engine, it is not targeted at general PC use - it is a console processor, an environment where its qualities will be taken advantage of to a far larger degree than on the desktop. (Even there, though, it is still an open question whether all its extra abilities will be sufficiently utilized to justify its cost - after all, the CPU of the Wii is less than one tenth(!) the size of the BE, which lowers cost and power draw, and allows a tiny, cool, quiet console.)
 
Even a VIA C3 at 533 MHz is more than enough for the vast majority of people. And it consumes all of... 8 watts at peak.
 
A VIA C3 is not expected to run a web server, work in a render farm, run in a workstation, or play any high-end game.

An x86 design or its derivatives must span all those markets. It's a significant way of saving costs. A single design serves a lot of markets that make a lot more money.
That's a distinction I had forgotten between x86 and Cell. Cell's target markets are really very similar to one another, because of the workload Cell favors.

The price of the broad application of designs is less than optimal resource use. The savings in design effort on top of the already incredibly high overhead of MPU design and implementation are significant. The volumes allow for better designs, so it's likely a heavily underclocked Core would trounce a C3 anyway.

What happens to VIA when Intel puts out some 45nm single-core low-watt x86 chip and VIA's foundry chips are still at 65nm?
Intel's already indicated it plans on putting out very low power variants, and they'll likely be coming from a similar base design as the high-end ones.
 
Then again, those applications are only a very small part of the total market. Like with GPUs: why do they sell the budget ones? Even more, why does the majority of people use integrated GPUs? And they're much smaller, and thus cheaper.
 
All those markets make more money and generate more business than the C3's target market.
Why do you think VIA's still a bit player and puts out designs at a glacial pace compared to Intel and AMD?

Power consumption is more wasteful on those high-GHz designs, but up to a certain point, most markets don't care.
For a corporation that needs to compensate for rising costs, the value lies more in reuse.
So what if the processor they've designed for render farms burns 50 more watts than the home user needs?
They've probably saved tens to hundreds of millions of dollars in design and manufacturing costs over the product cycles of five markets.
 
A VIA C3 is not expected to run a web server, work in a render farm, run in a workstation, or play any high-end game.

An x86 design or its derivatives must span all those markets. It's a significant way of saving costs. A single design serves a lot of markets that make a lot more money.
That's a distinction I had forgotten between x86 and Cell. Cell's target markets are really very similar to one another, because of the workload Cell favors.

The price of the broad application of designs is less than optimal resource use. The savings in design effort on top of already incredibly high overhead for MPU design and implementation is significant.

This is true and, I believe, why we are seeing a change in x86 processor design, and why Intel made a major point of scalability. Using the same piece of silicon and selectively crippling it to sell at different margins to different markets is no longer sufficient. Saddling all of your CPU offerings with the same number of cores is too damn inefficient for those who do not have those needs, and allows other players to undercut your offerings by cutting out the excess baggage. No, better to use the same basic core (efficient use of design effort) and offer CPUs that vary the number of cores, allowing excellent coverage of different market strata, at the cost of producing a wider variety of physical silicon chips - something that shouldn't be much of an issue at Intel's volumes.

This makes a lot of sense, but Intel hasn't really done it before; to my knowledge they still don't offer a single-core variant of the C2D, and their quad core is two C2Ds on the same package.

I still can't see how Intel will be able to keep ASPs up on the lower end offerings, but at least it ensures that they will maintain their marketshare across the spectrum.
 
"core 2 solo" is coming soon (under the celeron name I think). obviously Intel still had to sell the netburst chips, a core 2 solo would have undermined them.
 
"core 2 solo" is coming soon (under the celeron name I think). obviously Intel still had to sell the netburst chips, a core 2 solo would have undermined them.

Yes.
Another way of looking at the above is that Intel needs to offer something attractive at that price point since it constitutes a large part of the global market. And for the same reason, they want those parts to provide good profit margins. Using the same basic core, differentiating based on number of cores per CPU makes excellent sense, as I wrote above it allows them to be competitive at all levels, at the comparatively small (for Intel) cost of having to produce a larger number of physically different silicon chips.

They have been maneuvering to get VIA to withdraw from the x86 market, lowering the risk of low-power/embedded x86 competition at finer lithography pushing down ASPs, leaving AMD as part competition and part duopoly collaborator. And AMD is now in a position where it is unlikely that they will initiate a price war at the low end.

Intel has the benefit of owning the major part of a big cake, but the disadvantage that the cake isn't growing at a very high rate, and that the pressure is always there from buyers to achieve lower total cost. Further, the computing needs of the bulk of their customers haven't really increased much and seem unlikely to do so in the foreseeable future, hence selling increased performance at the same price - the practice Moore's Law made into such a successful business model - may not be fully effective in keeping ASPs up for the next decade. Intel's response so far has been to deliver a larger part of the entire PC: core logic, sound, wired and then wireless networking, disk controllers, integrated graphics, even full motherboards - Intel can now supply most of the widget, apart from drives and memory. (Their becoming more active in graphics is reasonable, since it is a multi-billion dollar part of PCs where they could conceivably grow their market share and total revenue to compensate for the overall price pressure. There are almost no such areas left.)

For a single-user computer, Amdahl's law implies that having a large number of cores may not be cost effective. While I may not agree with John Carmack's attitude, he is perfectly correct in pointing out that there are fundamental problems when it comes to general code parallelization, so even gamers (who are one of the few categories of any size that still care about CPU performance) may not find 8-, or even 4-core CPUs terribly compelling.
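The Amdahl's law point is easy to make concrete. The law says that if a fraction p of the work parallelizes, the best-case speedup on n cores is 1/((1-p) + p/n). The 70% parallel fraction below is an arbitrary illustrative assumption, not a measured figure for any real game:

```python
# Amdahl's law: with a parallelizable fraction p of the work,
# the best-case speedup on n cores is 1 / ((1 - p) + p / n).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Assuming (generously, and purely for illustration) that 70% of a
# game's per-frame work parallelizes perfectly:
#   4 cores -> ~2.11x, 8 cores -> ~2.58x
# Doubling the core count again buys barely 20% more performance,
# and the curve flattens toward the 1/(1-p) ~= 3.33x ceiling.
```

That flattening curve is exactly why doubling cores is a much harder sell to gamers than doubling clock speed ever was.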

It will be interesting to see how this plays out over the next few years and how Intel will manage to keep prices up.
 
While I agree that an ever increasing number of cores doesn't make sense for desktop use, I'm not entirely convinced that that will stop Intel continuing to add more. I'd argue that Intel are very good at selling people performance that they either don't need or can't use. For the vast majority of desktop PC users, they've been doing that for a decade or so now (try buying a 1GHz Intel anything today).

Sure adding more cores increases costs, but there's a finite annual market for x86 CPUs, and Intel has a given amount of fab capacity to keep busy. There's no point them producing more 1-/2-core CPUs than the world can consume, better maybe for them to produce 4-core CPUs and sell them for more money, regardless of whether it's what the customer wants or needs. If they stop offering 1- or 2-core CPUs, what's the customer going to do? Well he's going to buy a 4-core CPU. Or no CPU. But he wants a computer so he's going to buy a 4-core CPU.

What else are Intel going to do with the quadrillions of 45/32/22nm transistors they're going to be producing each year?

Much depends on what AMD does here too, I suppose. If they were to continue offering 1- and 2-core CPUs at very low cost, and folks could see from benchmarks that Intel's high-core-count offerings don't give any better real-world performance, that might keep 1- and 2-core CPUs a viable option in the market (Intel would be forced to counter). I don't know if AMD would do that though. Given their capacity constraints the impetus might be stronger on them to do so than it would be on Intel; however, does AMD really want to go back to being known as the supplier of cheap-and-cheerful low-end-of-the-market CPUs, like it was before K8? I dunno. Interesting marketing decision for them there, I think.
 
Why would they reuse the Nehalem moniker from the 10 GHz NetBurst chip they at one point had slated for 2005?
 