I Can Hazwell?

It's already been "confirmed" that Broadwell will receive a new graphics architecture, one which is not built on the old Sandy Bridge foundations like the Haswell iteration is...

Like Wynix says, there's really nothing much to spend resources on except graphics (well, and more FPU power, but that won't take up huge die space...)
 
Welp, Broadwell is what I'm waiting on at this point. I have a Surface Pro and an FX-8150. I want 8 hours of battery life on the Surface and the ability to really play Civ 5 and X-COM, which the Surface Pro and Surface Pro 2 struggle at.

I want to replace the FX-8150 with something much more powerful that also uses a lot less power. I'm also hoping for DDR4 at that point.
 
They can afford to slack on their CPU performance, their GPU performance on the other hand is still playing catch up.
A 5-10% GPU performance increase will not suffice, so it's pretty much guaranteed that the GPU will receive a good boost.

The pictures of the package show that the die is physically smaller than Haswell's by a not-insignificant amount. I can see them gaining performance from the TDP headroom they get from moving to the 14nm process, and maybe increasing the EU count by 20% (20 to 24 and 40 to 48). But in absolute performance terms they'll probably be relegated to being Iris 5100 and Iris Pro 5200 successors.
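The EU-count arithmetic above can be sanity-checked with a quick sketch. The clock and FLOPs-per-EU figures below are illustrative assumptions (not confirmed Broadwell specs), just to show what a 20% EU bump buys at constant clocks:

```python
# Rough peak-throughput estimate from EU count and clock.
# 16 FLOPs/EU/clock assumes two 4-wide FMA-capable SIMD units per EU;
# the clocks here are placeholders, not confirmed specifications.

def igpu_peak_gflops(eu_count, clock_ghz, flops_per_eu_per_clock=16):
    """Peak single-precision GFLOPS = EUs x GHz x FLOPs/EU/clock."""
    return eu_count * clock_ghz * flops_per_eu_per_clock

gt2_haswell   = igpu_peak_gflops(20, 1.2)  # hypothetical 20-EU part
gt2_broadwell = igpu_peak_gflops(24, 1.2)  # same clock, 24 EUs

print(round(gt2_broadwell / gt2_haswell, 2))  # -> 1.2, a 20% bump from EUs alone
```

In other words, more EUs alone only scale peak throughput linearly; any gains beyond that have to come from clocks, architecture, or bandwidth.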

I'd like them to introduce an Iris Pro version for Broadwell that firmly reaches the x60M performance of that year. The current Iris Pro is a bit of a disappointment for its price and power savings. If the Iris Pro 5200 reached even 650M performance it would have started to make sense. 760M would have been a big hit.

one which is not built on the old Sandy Bridge foundations like the Haswell iteration is...

Ivy Bridge, not Sandy Bridge. It's not like the known information for Broadwell shows that the difference will be greater than between IVB and SNB anyway.
 
Last edited by a moderator:
So Broadwell's new graphics architecture will be aiming for faster 3D performance?
I'm afraid Intel will be more focused on power consumption and forget the 3D :(

Ugh... to buy a Haswell tablet or wait for Broadwell...

Before Haswell, Intel's IGP was a thing to avoid. With Haswell, it's getting onto my radar. With Broadwell? Hmm.
 
I doubt desktop Broadwell will see much of a reduction in power at the high end; what is the need to drop below 70-80 watts? The mobile versions will obviously have aggressive power savings.
I hope we finally see a core count increase, but Intel seems unwilling to give that to us.

I'd wait for broadwell based tablets.
 
There's a fundamental issue with PCs, and if Intel and the manufacturers can't fix it, there's no future for them in the PC. Their only hope then becomes being a big player in newfangled gadgets like smartphones and tablets.

-Even Bay Trail devices are not much more than netbooks, but with a CPU that should have been there a year ago, and a touchscreen
-Ultrabooks are marginally better, but the ones we really want cost even more than before, and the CPU and GPU aren't really that much faster than the ones in 2011
-Intel's "supposedly" superior process technology yields minimal gains over Apple chips built on the foundries' "two years behind" process, while requiring multiples of the TDP. Maybe (a) process technology isn't the panacea Intel claims it to be, (b) Intel's "experience" in CPUs is really worth nothing, or (c) Intel's "lead" in process is a lie and 22nm is really the same as the foundries' 28nm
-OK fine, let's say CPUs can't get much faster. We should be getting much bigger GPU gains with full process advances like 14nm Broadwell, right? They were talking about a mere 40%. We should be seeing 2x gains in the ULT/ULX space and 3x+ gains in the Iris/Iris Pro space.

Remember the Ultrabook presentation where they said we'd be getting the following in 2013?

1. "Mainstream" prices of $999 and under
2. 7x the graphics performance of 2011
3. All day battery life
4. Super thin and light

I bet you that if that really existed today, people would be all over it. The reality, however, is that we've got to choose two out of the four. And #2 isn't doable even in 2014 with Broadwell: 7x performance would require Broadwell ULT's iGPU to perform like the Iris Pro 5200. It's quite a stretch to say 14nm 15W Broadwell ULT is going to perform like 22nm 55W Haswell H graphics. Those kinds of gains were rather common in the shining days of the PC.
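A quick compounding check makes the gap concrete. Treating 2011-to-2014 as roughly three iGPU generations is my assumption, not something from the presentation:

```python
# How big a per-generation jump would a 7x cumulative graphics gain need,
# versus compounding the ~40% ("mere 40%") gains mentioned above?
# Treating 2011 -> 2014 as three iGPU generations is an assumption.

generations = 3
target_gain = 7.0

per_gen_needed = target_gain ** (1 / generations)
print(f"needed per generation: {per_gen_needed:.2f}x")  # -> about 1.91x

actual_cumulative = 1.4 ** generations
print(f"1.4x compounded: {actual_cumulative:.2f}x")     # -> 2.74x, far short of 7x
```

So hitting 7x would need nearly a doubling every generation, while compounding 40% gains gets you well under half of the way there.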

Without single big leaps, it's likely we'll be waiting and waiting until tablets and smartphones fully take over and PCs become truly irrelevant.
 
First and foremost, where is this presentation you speak of regarding Ultrabooks and all these claims you're making? Intel didn't even start talking about Ultrabooks until around May of 2011. Really, my main contention is whether Intel ever said anything about "7x performance" for graphics output.

Let's address the others:
  • Bay Trail is nothing but netbooks with touchscreens. Uh, what? I don't even know what your complaint is here. Bay Trail is a marvelous and welcome departure for Intel; it demonstrably provides double the performance of the old Atom line with even better power savings. This is perfect for machines that don't need EPIC horsepower, and despite what you might read here at B3D, a very large portion of this planet doesn't need compute power in excess of a Bay Trail device.
  • Ultrabooks now aren't much better than in 2011. Really? If you were buying an ultrabook in 2011, you were getting a thermally limited ULV Sandy Bridge, along with probably a spinning disk rather than an SSD, and likely no more than 4GB of RAM at best. And you surely weren't paying less than $1000 for it.

    Today's Haswell-based ultrabooks will, in no uncertain terms, completely annihilate an SB platform at equivalent power draw, will still get better battery life, will have at least a hybrid SSD if not a full SSD, can be had with 8GB of RAM without much trouble, and will come with a touchscreen. Oh, and it will probably still be less expensive than whatever ultrabook you may have purchased in 2011. You're going to be awfully hard pressed to tell me that it got no better.
  • Intel's process isn't any better than everyone else's. Tell me, how many other people are building x86 processors that you can compare with? Oh, nobody? OK, that's cool. Show me some numbers from DRAM cells that Intel creates and that the others do, at the "comparable" process node. Nothing? I tell you what: find me a factual basis on which you can claim that their process is no better than someone else's, and we can have this discussion. Sure, other foundries may have gate lengths that are equivalent, or "finfet" designs, but the entire process is more than gate length and "3D transistors". You're comparing things that cannot be compared, because the true indicators of process maturity are things that you simply are not allowed to know unless you work there: yields and profit margin.
  • Intel should be able to pull triple-digit GPU performance gains per generation. Given that even the "big boys" of AMD and NVIDIA haven't been able to pull off a doubling of single-GPU graphics performance generation-over-generation (and sometimes over two generations), I don't really see how you're able to claim that Intel should magically do it better. This is an absurd request.
  • Ultrabooks do not have mainstream prices of $999 or below. Here's Newegg's ultrabook list sorted by lowest price first. There are three pages of ultrabooks (at 20 options per page) at or under your required price point. Sure, you can get better and better versions with more and more hardware, but mainstream pricing? This is not the issue you say it is.
  • 7x the graphics performance of 2011. You'll have to show me where Intel claimed this. I can't find it anywhere.
  • All day battery life. Depends on your definition of all day, I suppose. Asking for 24 hours of continuous power-on time is absurd. Asking for all-work-day battery life is already here and available. My work day is between nine and eleven hours, and not every single waking moment of that work day is with the machine powered on and me typing into it. I can take my 2012 Lenovo X220 off my desk dock at 7:50am, go to a seemingly endless number of meetings (most, but not all, of which I take notes with my laptop and look through email on wireless), go to lunch with the laptop in suspend, come back and start hitting meetings again until 5:00 with occasional note taking and email reading. Nearly every day will result in the laptop needing a charge by the time I'm at home sitting in my office there, but not enough for it to die. I think you're overstating this issue as well.
  • Super thin and light... Is this really a problem? Have you seen a Lenovo U430 or an X1? Have you seen a MacBook Air? Hell, even the MacBook Pros are getting pretty damned skinny and light these days, and they aren't even "ultrabooks" yet. Have you seen any of the Asus ultrabook models? Are you even paying attention to what's available, or are you just rattling stuff off?

I think I've covered it all. You're welcome to disagree, but in my opinion, the facts are fully against your stance.
 
The netbook is a 2008-2009 device. The battery life gains are all due to getting low-power states that should have been there from the start. S0ix and similar states are what the iPad has been using since then, and also the reason we got a 2x battery life increase from the Z670 to the Z2760 Atoms. Nothing about process, nothing about TDP.

I own a 5-inch UMPC (from when Intel was pushing them hard) that lasts 10 hours doing nothing with the screen off, but lasts 5 hours playing Warcraft III. The idle power barrier of the platform was the BIGGEST issue for portable x86 devices until Clover Trail and Haswell.

From 2008-2009 to now we've jumped not only in architecture but two generations ahead in process as well. Very good, but nothing sensational, since Atoms were just crippled for too long.

Ultrabooks in 2011 had hybrid, or cached, HDDs as well. That requirement is no different from today's: the majority of devices priced at $899 or lower require some form of hybrid setup to reach that price. And just like in 2011, getting anything lower than $999 means a significant compromise in one way or another. The biggest problem remains that the ultimate comparison is the MacBook Air, which gets fundamental things right (like WiFi/keyboard), and does it at a lower price than most of the competition. Part of the problem is due to Windows itself, like with the trackpad. The conclusion does not change.

Intel's process: Yes, you have Apple on the "outdated" 28nm foundry process offering performance that threatens tablet Bay Trail in a smartphone form factor.

Intel should be able to pull triple-digit GPU performance gains per generation: Triple-digit gains were what they had until Ivy Bridge, when all the hype about the new architecture brought 20-40% improvements. That's impressive for a CPU, not so much for an integrated GPU that starts from so low a bar that it continuously needs those big gains.

7x graphics gain: Clearly someone at B3D has seen the Intel presentation about Ultrabooks in 2011 where they claimed the 2013 platform would provide 7x the graphics of 2011. Even assuming "2013" is really 2014, that's still far away from what Broadwell promises to offer.

Ultrabooks do not have mainstream prices of $999 or below: An issue not because there aren't any devices below that price point, but because all the desirable devices are in the $999-or-above range. You have 4-5 pound "Ultrabooks" in the cheap category. They often use an SSHD (or cached HDD) purely for boot-up and resume times rather than as a real cache that actually accelerates applications.

Super thin and light: Yes, because the ultimate promise they gave is "Performance of a PC, Attributes of a Tablet, All Day, Every Day". If you look at the same presentation that claims the 7x performance increase, you see a device that's barely over 10mm. More importantly, weight is an issue because they originally pitched it as a tablet competitor with convertibles. I believe in this category, but it really needs to bridge the gap in portability between laptops and tablets. Convertibles would actually make sense with light enough weight and thickness. Think 13mm and 1kg or under.
 
What makes things worse is that Intel and the PC manufacturers don't seem to be pushing 4th Gen Core based convertibles and Ultrabooks as hard as last year. There was a flurry of advertisements for the Sandy Bridge and Ivy Bridge ones from both Intel and the PC guys. And it's extremely hard to get hold of one. IDF Fall 2013 claimed 50-60 convertible designs, yet the ones you can actually buy can be counted on the fingers of one hand.

That will enable a vicious cycle where less commitment from the PC manufacturers results in slightly lower sales, which results in further reduced commitment, and on and on. That's true for software as well, where increasing numbers of applications are moving to Android and iOS, which decreases the attractiveness of the Windows PC platform, thus lowering software development. Look at the number of poor console ports of PC games! The decline of the PC, if it happens, will be slow, because they are resisting that change, but enough clues are there to suggest nothing dramatic is being done to stop, or even reverse, it.

Charlie at SA might have been right when he said Intel is getting killed not by a few massive cuts and bruises, but by endless tiny ones. The economy, Windows 8, the Apple A7, tablets, commitment issues, countless computing competitors, internal struggles (for example, Valve and some others regarding Windows 8, and even Microsoft itself).

Really, in the business world, it doesn't matter what "technical" or "internal" issues you have; you either deliver or you perish. The biggest issue with Ultrabooks is that for their performance and features, the pricing is usually higher. Saving 0.5 lb of weight is not a big deal when you generally pay more, sacrifice upgradeability, and have to opt for a U CPU with lower performance than the standard-voltage and quad-core ones. Being significantly different in mobility features would better justify the price differences. And now mobility users have an option in a tablet.

A serious Convertible Ultrabook IMO is:

-2.5 pound or lighter
-13-15mm thick
-Keyboard with excellent travel (for example, the XPS 12 I use is quite excellent, while the Yogas and a few other Ultrabooks suck)
-Thermal management that can handle full specifications of the CPU, and is very quiet
-WiFi that can not only pick up APs well and have good range and throughput, but can do extra features, like WiDi really well
-Decent Digitizer with pen slots
-1920x1080 to start. Excellent factory color reproduction and brightness ideal. No glaringly obvious issues like the touchscreen not working after a few hours of use (which the XPS 12 suffers from)
-Dual channel memory(surprising how many popular Ultrabooks don't have that)
-10 hour battery life
-Excellent quality control with zero light bleed on the screen, no bend on the chassis and keyboard, and tech support that can actually fix issues
-$999

It may seem obvious, but it's surprising that nothing meets such specs. That means in the Haswell generation you have fewer enthusiasts buying because the enthusiast options suck, fewer value pickers buying because of the excellent, really cheap tablets, and fewer Ultrabook users buying because you have minimal options to choose from.

(One report suggested that in the first half of 2012, there were more sales of the MacBook Air than the entire combined numbers of Ultrabooks from the various manufacturers. How's that for "success"?)
 
I just want to point out that the manufacturing process is not equal to the design of the processor being manufactured. Apple's got a better CPU design but a worse process. The main thing is their CPU design is good enough to counter the worse process to a large enough degree.

But, do not get confused. If it was manufactured on Intel's process, it'd be clocked higher and run cooler/with lower power.
 
I just want to point out that the manufacturing process is not equal to the design of the processor being manufactured. Apple's got a better CPU design but a worse process. The main thing is their CPU design is good enough to counter the worse process to a large enough degree.

But, do not get confused. If it was manufactured on Intel's process, it'd be clocked higher and run cooler/with lower power.

So you're implying that Apple is better overall at designing CPU architectures than Intel? Sorry, I could not properly identify the context of your statements.
 
So you're implying that Apple is better overall at designing CPU architectures than Intel?

One is x86, one is ARM.
One has a terrible front-end that eats power and area, is extremely complex, and requires moving heaven and earth to build... the other has an almost fixed front end.
It's not about 'being better'; Intel's is better than just about anyone's. It's the x86 legacy, and Intel cannot remove it, and neither can AMD.
 
One is x86, one is ARM.
One has a terrible front-end that eats power and area, is extremely complex, and requires moving heaven and earth to build... the other has an almost fixed front end.
It's not about 'being better'; Intel's is better than just about anyone's. It's the x86 legacy, and Intel cannot remove it, and neither can AMD.

Here's a paper that compares different implementations of ARM and x86 and finds ISA has very little to do with power consumption and performance. Everything is down to micro-architecture.

Numerous other papers out there with the same findings.

Cheers
 
Here's a paper that compares different implementations of ARM and x86 and finds ISA has very little to do with power consumption and performance. Everything is down to micro-architecture.

Numerous other papers out there with the same findings.
Can you list some of these "numerous other papers"?

The above paper draws too many conclusions with too little evidence...

That said, I agree that as soon as you start targeting performance, the ISA cost becomes less and less important. If you target size and very low power, then it starts to matter a lot. I bet Intel's Quark instruction set will be severely castrated.
 
DavidC, you've made a lot of claims that are unsubstantiated. I pointed every single one of them out, and your replies were "well I have an anecdote" or "well it isn't awesome" or "well someone had to have seen it, so it must exist."

Your netbook was slow as balls when it came out, and is even slower than balls compared to any modern ultrabook. It was also limited to 2GB of ram, a 32-bit operating system, and had terrible idle power draw characteristics.

Your assumption that Bay Trail gets all its power savings from enhanced idle and nothing else is absurd, and provably wrong.

Your assertion that x86 devices should have always had connected standby (S0ix) is laughable, as it wasn't even supported by the operating systems until Win8. ARM and the underlying operating systems that run iPhones and iPads were specifically built for this use case, x86 was built 30 years ago and wasn't ever imagined for this capability. The Windows architecture never considered such things until only the last few years...

You keep talking about Intel's x86 process being bad, but then you point at Apple working on their own ARM. You obviously do not understand: these two things are not the same, and the differences are far larger than "the process". Apple doesn't even build their chips, so to even begin making sense, you'd first have to start talking about Samsung's process. You're comparing the color of the sky to the flavor of the ice cream in your hand; it makes no sense at all, they aren't comparable in that way.

You somehow think that Intel was making triple-digit performance increases in GPUs before Ivy Bridge? Where? Can you show me even a single example of that? I can't find anything close to that, anywhere, at any time in their history.

There was no claim, that I can find, of 7x GPU performance increase. The burden of proof is on YOU who made this statement, not someone else.

There are no 5-pound ultrabooks; the very definition of ultrabook precludes that possibility.
You claim that "desirable" ultrabooks are more than $1000; that's personal preference, not an indicator of availability and certainly not an indicator of "mainstream".

I'm done replying to you in this thread, because you're on a tirade of non-factual drivel, and I have no clear understanding of where your complaints are specifically trying to drive the discussion.
 
Can you list some of these "numerous other papers"?

I have a dead link in my bookmarks to AMD's "x86 everywhere" presentation from 2005, where they stated x86 decode was 4% of the 30mm^2 of an Athlon 64 core at 90nm. The dual-issue in-order cores in the 360 and PS3 were of similar size and a fraction of the performance.

Here's a breakdown of AMD's Bobcat core. Decode is a tiny fraction of this tiny core (which is itself smaller than an ARM Cortex A9 on a process-normalized basis).

The increase in decoder complexity is counterbalanced by the better code density of x86 and AMD64.

I'd bet the implicit shift in ARM instructions, post-increment addressing and move multiple are a *much* bigger problem for high-performance implementations than x86 decoding is. Of course, all of that has been jettisoned in ARM64, and with good reason.
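To make the code-density point concrete, here's a toy byte count for a simple load/add/store sequence. The x86 encoding lengths are illustrative of typical variable-length encodings, not measured from a real binary, while classic ARM (A32) is always 4 bytes per instruction:

```python
# Toy code-density comparison: x86 uses variable-length encodings (common
# ALU/memory ops are 2-4 bytes), while classic ARM (A32) fixes every
# instruction at 4 bytes. Byte counts below are illustrative, not measured.

x86_sequence = [3, 2, 3]   # e.g. mov reg,[mem]; add reg,reg; mov [mem],reg
arm_sequence = [4, 4, 4]   # three A32 instructions, 4 bytes each

print(sum(x86_sequence), sum(arm_sequence))  # -> 8 12
```

Denser code means more instructions per cache line and per fetch, which is one way the extra decoder cost gets paid back.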

The above paper draws too many conclusions with too little evidence...

Really? They had a fairly comprehensive benchmark suite and used the same compiler across all CPUs. Certainly better than the Geekbench micro-benchmark crap, where the same CPU sees a 50% difference in performance from one OS/compiler combo to another.

Cheers
 
I have a dead link in my bookmarks to AMD's "x86 everywhere" presentation from 2005, where they stated x86 decode was 4% of the 30mm^2 of an Athlon 64 core at 90nm. The dual-issue in-order cores in the 360 and PS3 were of similar size and a fraction of the performance.

Here's a breakdown of AMD's Bobcat core. Decode is a tiny fraction of this tiny core (which is itself smaller than an ARM Cortex A9 on a process-normalized basis).
Citing some AMD propaganda doesn't say much, sorry ;) I thought you had an independent study to show.

The increase in decoder complexity is counterbalanced by the better code density of x86 and AMD64. Of course, all of that has been jettisoned in ARM64, and with good reason.

I'd bet the implicit shift in ARM instructions, post-increment addressing and move multiple are a *much* bigger problem for high-performance implementations than x86 decoding is.
It's not a bigger problem. And why do you put your focus on decoding? ISA differences are not confined to the decoders.

Really ? They had a fairly comprehensive benchmark suite and used the same compiler across all CPUs. Certainly better than the Geekbench micro benchmark crap where the same CPU sees 50% difference in performance from one OS/compiler combo to another.
Some of their various "findings" are hilarious. I'll pick just one for fun:
Finding P12: A9 and i7's different issue widths (2 versus 4, respectively) explain performance differences up to 2×, assuming sufficient ILP, a sufficient instruction window and a well balanced processor pipeline. We use MICA to confirm that our benchmarks all have limit ILP greater than 4 [20].
Do I have to explain how stupid that is? :D

And their key findings have been known by designers for so long it makes the paper uninteresting (though I agree some of the collected data is interesting, it's just the findings that I criticize).
 
So you're implying that Apple is better overall at designing CPU architectures than Intel? Sorry, I could not properly identify the context of your statements.

This one Apple architecture is possibly beating this one Intel architecture (I haven't looked at the data myself). Compare a Cyclone clocked at 3.6 GHz to a Haswell-based Core i3 clocked that high and, well, it won't stand a chance against the Core i3. But perhaps against Silvermont it has an advantage.

My point was more that Silvermont not being as good as Cyclone does not mean that the manufacturing process is deficient. There's a reason Intel has FinFETs and the entire microprocessor industry is moving to them as fast as it can. A very good reason.
 
It could be that one design is a large core that must scale to server and desktop loads with high single-threaded performance, high multithreaded utilization, very wide vector units, high system bandwidth, with pipeline clocks going nearly to 4 GHz.

The other design focused on a specific lower-performance, power-sensitive niche, which meant it could forgo pipeline logic and circuits tuned for high clocks it doesn't strive for, or the additional resources for extra threads it doesn't have.
 
Citing some AMD propaganda doesn't say much sorry ;) I thought you had some independent study to show.

Wonderful. So you don't believe anything AMD says about their own chips. The 4% decode overhead wasn't in regular marketing material; it was part of a slide deck for their 2006 yearly analyst meeting.

I guess you don't believe the Bobcat floorplan either. Even if you count the instruction microcode ROM as part of decode, it is still a very modest fraction of the core size, which itself is small. And microcode ROM doesn't burn any kind of power; it just sits there.

It's not a bigger problem. And why do you put your focus on decoding? ISA differences are not located only in decoders.

I put emphasis on decoding because that is what is always brought up when discussing the "complexity" of x86. The only other issues are segments, which nobody has used since the 16-bit days, and partial register updates, which were solved a long time ago.

ARM has a lot more ISA issues for medium- to high-end implementations. I've already mentioned post-increment addressing, move multiple and implicit shift. There is also conditional execution. All of this was ejected from the ARM64 spec. ARM64 is just a cleaned-up MIPS64, AFAICT.

And their key findings have been known by designers for so long it makes the paper uninteresting (though I agree some of the collected data is interesting, it's just the findings that I criticize).

Their conclusions may be inane, but their empirical work isn't, AFAICT, and it shows no x86 handicap whatsoever.

Now, do you want me to list the things in x86 that actually benefit performance (and power)?

Cheers
 