Windows tablets

The last two years have brought us a ~2x improvement every year, and I think we have about two more years of such growth left in 1W SoCs.
Isn't Tegra 3 already consuming 4W at peak? That's not far from the dual-core Bobcat-based Z-01 (5.9W TDP). Haswell is supposed to bring Intel's top-of-the-line architecture to under 10W as well. It's going to be really interesting to see how things proceed in the ultraportable/tablet segment in the next two years. ARM is scaling up, and x86 is scaling down. Intel still has superior process technology to all the competitors, but is that going to be enough to stop the (slightly) more efficient instruction set with no legacy baggage? Apple has become a key player, so their decisions will affect the outcome a lot. They already ported OS X from PPC to x86. With their own ARM SoC and all their other devices running on it, they might be very tempted to port OS X to ARM as well. A 2GHz+ quad-core A15-based MacBook Air wouldn't be that bad.

Either way, I think we need the OS and the process to shrink quite a bit before Windows tablets will be useful.
Ivy Bridge is launching in Q1 (a good time before Win8), and other companies are touting their forthcoming 28 nm products as well. We will surely see several high-end tablets sporting a 28 nm CPU instead of the current 40 nm ones at the Win8 launch. Win8 itself fulfills the second part of the equation. The prerelease version is already using half the memory of Win7 (fresh install) and running existing programs a few percent faster. Microsoft has stated that they aim to reduce the number of background processes even further before the launch. Win7 isn't a touch-optimized OS to begin with, so the current Win7-based devices should be considered pure prototypes (suiting mainly specific professional needs, not designed for the mass market). The Win8 launch is the real deal. That's when all the big companies will release their Win8 tablet hardware.
 

Aye, Win8 has me quite excited with regard to my Windows tablets (I have two of them), but not because of the UI. I feel the UI is generally fine for touch + pen.

The more exciting thing that Win8 is going to do, IMO, is force developers to take touch into consideration when developing apps for the Metro UI. Apps are by far the most frustrating thing when it comes to Windows on a tablet. Many are perfectly fine, but you have some applications that insist on using extremely tiny clickable areas (pretty much all PDF readers, for example, and 3rd party ones are the worst).

So my biggest hope is that the Metro UI brings with it more touch-conscious app development by 3rd parties. That's the main reason, IMO, that most people feel Android tablets and the iPad give a better tablet experience. Apps on those were designed with touch in mind, so they shouldn't run into issues such as clickable areas being only 3x3 pixels with multiple areas sharing a small space. Stuff like that works fine with a mouse, but is hell even with an active digitizer + pen.

Regards,
SB
 
Isn't Tegra 3 already consuming 4W at peak? That's not far from the dual-core Bobcat-based Z-01 (5.9W TDP). Haswell is supposed to bring Intel's top-of-the-line architecture to under 10W as well. It's going to be really interesting to see how things proceed in the ultraportable/tablet segment in the next two years. ARM is scaling up, and x86 is scaling down. Intel still has superior process technology to all the competitors, but is that going to be enough to stop the (slightly) more efficient instruction set with no legacy baggage? Apple has become a key player, so their decisions will affect the outcome a lot. They already ported OS X from PPC to x86. With their own ARM SoC and all their other devices running on it, they might be very tempted to port OS X to ARM as well. A 2GHz+ quad-core A15-based MacBook Air wouldn't be that bad.



Yea, but Tegra 3 is quad core on 40nm, and it has many other advantages such as the shadow core and SMP, which help it reduce power doing menial tasks.
I very much doubt that a quad-core Krait at 28nm at the same frequency would consume 4W... I could be wrong though; Krait/A15 will arguably have faster IPC than comparable x86.
Try sticking four Bobcat/Atom cores together and measuring that at full chat!

What is going to be very interesting is Silvermont. I wonder, with Haswell ULV getting under 10W, whether Intel would try to unify the architectures with the Silvermont Atom? It would seem logical in the long run.
 
Isn't Tegra 3 already consuming 4W at peak? That's not far from the dual-core Bobcat-based Z-01 (5.9W TDP). Haswell is supposed to bring Intel's top-of-the-line architecture to under 10W as well.
AFAIK, Haswell's range is 15-35W.

Lower TDPs are up to the vendor.

It's going to be really interesting to see how things proceed in the ultraportable/tablet segment in the next two years. ARM is scaling up, and x86 is scaling down. Intel still has superior process technology to all the competitors, but is that going to be enough to stop the (slightly) more efficient instruction set with no legacy baggage?

IMO, it is fairly clear that the ISA matters very little for high-performance designs, which is where both are headed. Process, implementation, microarchitecture, and physical design matter a lot more.

Apple has become a key player, so their decisions will affect the outcome a lot. They already ported OS X from PPC to x86. With their own ARM SoC and all their other devices running on it, they might be very tempted to port OS X to ARM as well. A 2GHz+ quad-core A15-based MacBook Air wouldn't be that bad.

I'd like something beefier than A15 though, not that it sucks. I think ARMv8 cores will run Mac OS X very well, but I am skeptical of Apple porting it. They are enjoying double-digit growth rates in a stagnant/slowly declining market. Why take the risk? What's the upside?
 
but I am skeptical of Apple porting it. They are enjoying double-digit growth rates in a stagnant/slowly declining market. Why take the risk? What's the upside?
The upside is of course that they will get better margins from selling their products (something Apple is very good at, and something that must have motivated their decision to build their own SoC for the iPhone as well). An SoC of their own would be more cost effective than buying very expensive cherry-picked ULV processors from Intel. The current Sandy Bridge CPUs found in the MacBook Air cost $250 apiece (http://ark.intel.com/products/54620).

It's good business practice to spend money to improve your future profits when you are doing very well (and have the extra money to spend). A decision like this could be seen as reducing future risks (reduced dependency on other companies). Apple just recently purchased Anobit (a flash memory maker) for 500 million dollars. It seems to me that they want to manufacture more of their product parts themselves in the future.
Yea, but Tegra 3 is quad core on 40nm
The AMD Z-01 5.9W processor I used in the comparison is 40 nm as well. x86 power usage will also drop when going to 28 nm.
 
Yea, that's true. As Cortex-A15 will consume more peak power than A9, do you think Bobcat will be able to match the power of A15 at 28nm?
 

You weren't asking me, but..

Ontario is about 6W for 2x 1GHz Bobcat cores, although I don't know how much of that thermal budget is taken by the GPU - and note that this does not include the Hudson I/O chip. I doubt a straight shrink alone will bring that much below 4W. No smartphone chip will be released that requires over 4W (I/O stuff included here) for any kind of continuous load, and it's not going to take that much for a dual-core Cortex-A15 to beat dual Bobcat cores. Actually, I think they could do it at even less than 1GHz per core; I really think Cortex-A15 can offer better perf/MHz than Bobcat. The GPU capabilities, especially something like what's on NovaThor, will be pretty competitive too. To some extent Ontario suffers die space requirements due to its DX11 featureset, which is for now wasted on phones.

However, it's possible that AMD's design leaves a lot of room for optimization. Word is that Brazos is for the most part a synthesized design, and hand-tuning could yield a much better result (as well as much higher clock speeds; ~1.6GHz is pretty low on 40nm for something with its design characteristics.. even Atom goes a lot higher). So maybe. I think that AMD really focused on optimizing cost, time to market, and die density, which sounds pretty sensible considering what it got them.
 
Thanks. Yea, I always thought they were being conservative, especially as they built a brand new CPU from the ground up, whereas Atom is a Frankenstein'd Pentium 2 or something.

But nevertheless I am surprised that you consider A15 to be really that powerful; if so, we are talking low- to mid-range Core 2 Duo levels of performance, aren't we? I don't doubt your judgement, I just didn't think ARM could produce something quite that powerful.

I can't help but think that AMD let a great opportunity slip when they sold the Imageon mini Radeon GPU to Qualcomm, which as you know became Adreno (fun fact: Adreno is an anagram of Radeon).
Really, if Atom could get a design into smartphones that is competitive with A9, then surely AMD, with a newer, smaller core, could do something better?
Combine that with Adreno and maybe their RAM business/GlobalFoundries and you potentially had an AMD version of Tegra.

Also, why has AMD not incorporated SMT? Seems a no-brainer to me.. but that's a different story altogether.
 
But nevertheless I am surprised that you consider A15 to be really that powerful; if so, we are talking low- to mid-range Core 2 Duo levels of performance, aren't we? I don't doubt your judgement, I just didn't think ARM could produce something quite that powerful.

I don't expect A15 to perform at the levels you're suggesting; I think you're just overestimating Bobcat's performance. I expect A15 to have performance per clock maybe around what K8 had. In some ways it's clearly better, in other ways it's clearly worse. In the long run the memory subsystem will be a deciding factor too.

As a reference point, ARM expects Cortex-A15 to have about 50% better IPC than Cortex-A9.. so we're talking a massive upgrade in performance per clock.

I can't help but think that AMD let a great opportunity slip when they sold the Imageon mini Radeon GPU to Qualcomm, which as you know became Adreno (fun fact: Adreno is an anagram of Radeon).

Ha, I never caught that. Do you think it could just be a coincidence? I agree that AMD shouldn't have sold Adreno, but on the other hand I'm not sure they would have had a broad number of takers for licensing. I guess some pertinent questions are: would Qualcomm have bothered using them if it meant licensing third-party IP (acquiring and using their own IP seems to be Qualcomm's MO, they've hit all the checkboxes now), and would Freescale have continued using them instead of switching to Vivante? Without either of them the IP would have been dead weight.

Really, if Atom could get a design into smartphones that is competitive with A9, then surely AMD, with a newer, smaller core, could do something better?
Combine that with Adreno and maybe their RAM business/GlobalFoundries and you potentially had an AMD version of Tegra.

Maybe they can, maybe they can't; Bobcat really doesn't give much of an indication. Atom was a really long time coming, and Intel has more resources than AMD. (Plus, AMD really doesn't have a RAM business.)
 
A nooby question.. how do you multi-quote like that? I can only either copy-paste or quote the whole essay! :p

Yea, maybe I am overstating Bobcat's performance somewhat; architecturally it has to be very similar to A9, and maybe in performance as well.
Does A15's power consumption scale with that 50% as well?

Adreno - no, I don't think it's a coincidence, it's too good!
I was thinking of them not licensing the GPU core out, but keeping it in house and developing an SoC for smartphones (à la Tegra).

If they stopped this stupid 'we're too good to copy Intel tech' nonsense and adopted HT for Bobcat, along with some power saving/L2 power planes, then I could see it working.
Either way, selling it for a paltry $60 million is farcical considering the way technology has been progressing (smaller, mobile).

I'm sure many of the mistakes/late-to-market ideas were more influenced by finance than technical capability.
 
I don't think there's a way to semi-quote like that through the GUI; I just manually create multiple quote open and close tags, usually by copying + pasting the original one.

This is what I expect for A15's power consumption, but note it's very speculative:

1) At the same clock speed, perf/W will be worse than Cortex-A9 if implemented on the same process with the same design rules. Especially if we're talking high clocks like > 1.5GHz, and doubly so if we're talking clock speeds that are too high for the Cortex-A9 in question to hit (A15 should scale higher).
2) If you scale the two processors so they have the same performance but different clock speeds, the A15 may use less power, if its clock speed is sufficiently lower. This is because power consumption doesn't scale linearly with clock speed, largely because higher clocks require higher voltages. Voltage domains for ARM SoCs are often not really continuous but come in pretty broad discrete levels, so if you can go a level or two lower on the A15 that could be a big win.
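
To put some toy numbers behind point 2 (purely a sketch; the capacitance, voltage and frequency values below are invented for illustration, not real A9/A15 figures):

[code]
# Toy model of dynamic CPU power: P ~ C * V^2 * f
# All numbers below are made up for illustration only.

def dynamic_power(cap, volts, freq_mhz):
    """Relative dynamic power in arbitrary units."""
    return cap * volts ** 2 * freq_mhz

# Two hypothetical operating points delivering roughly the same throughput:
# a wider core doing ~1.5x the work per clock can run at a lower clock,
# which in turn lets it drop into a lower voltage level.
narrow_core = dynamic_power(cap=1.0, volts=1.20, freq_mhz=1500)
wide_core   = dynamic_power(cap=1.4, volts=1.05, freq_mhz=1000)

print(narrow_core, wide_core, wide_core / narrow_core)
# ~2160 vs ~1544 -> roughly 30% less power for the same work here,
# even though the wider core switches more capacitance per cycle.
[/code]

The win comes almost entirely from the V^2 term, which is why dropping a whole voltage level matters much more than the raw MHz difference.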

But I definitely don't think you'll see a 50% perf/W improvement in most cases.

What'll really be good for a balanced (power consumption-wise) system is ARM's big.LITTLE scheme.
 
Yea, big.LITTLE will rock. I wonder whether Qualcomm could incorporate a low-power A7 up to, say, 600MHz, then let the Kraits (on an HP process) scale up in increments from there (again, à la Tegra 3).
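
In software terms that policy would boil down to something like this toy sketch (illustrative only; the thresholds, clock steps and cluster names are made up and have nothing to do with any real cpufreq governor):

[code]
# Toy cluster-switching policy in the spirit of big.LITTLE / a companion core.
# Thresholds and frequencies are invented purely for illustration.

LITTLE_MAX_MHZ = 600                      # small low-power core capped at a modest clock
BIG_STEPS_MHZ = [800, 1200, 1500, 1700]   # big cores step up in increments

def pick_operating_point(load_pct):
    """Return (cluster, MHz) for a given CPU load percentage."""
    if load_pct < 30:
        # Light load: stay on the little core at a low clock.
        return ("little", min(300 + load_pct * 10, LITTLE_MAX_MHZ))
    # Heavier load: migrate to the big cluster and step the clock up.
    step = min(len(BIG_STEPS_MHZ) - 1, (load_pct - 30) // 20)
    return ("big", BIG_STEPS_MHZ[step])

for load in (5, 25, 45, 70, 95):
    print(load, pick_operating_point(load))
[/code]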
 
The option to mix LP with HPM isn't really available, and LPG just doesn't offer the same scalability as HPM. The current Tegra companion core is possible because LPG allows you to mix both G and LP transistors.

When the move to HPM happens, heterogeneous big.LITTLE will be really necessary, as you can't throw in an LP version of the same core anymore.

Note that Krait's perf/W is significantly superior to that of A15, so a big.LITTLE configuration isn't as necessary.
 

Right, got that; I will have to study the processes a little more.

I take it you are privy to some info that is not public, or is that due to the specs knocking around? SMP, the separate L2 cache on its own power plane, and the L0 cache are the main power-saving differentiators that I'm aware of.

Do you know of any more ways Krait saves power compared to A15, as I'm sure the A15 will have higher IPC?
 
There has been some info on DMIPS/W released (I know, not exactly the most relevant information). But also, the uarch has had some coverage. A significantly shorter pipeline is one of the few reasons Krait will be much lower in power -- albeit it will only scale to lower frequencies.
 
A nooby question.. how do you multi-quote like that? I can only either copy-paste or quote the whole essay! :p

Either do it the lazy way and manually insert <quote> quote body </quote>

Or copy and paste the first quote tag, in this case <QUOTE=french toast;1611967>, at the start of each quote block, using the same </quote> to end each quote block.

Replace <> with [] in a real post. :)

I'm lazy, so I use the first method after the initial quote block. :)

Regards,
SB
 
There has been some info on DMIPS/W released (I know, not exactly the most relevant information). But also, the uarch has had some coverage. A significantly shorter pipeline is one of the few reasons Krait will be much lower in power -- albeit it will only scale to lower frequencies.

Ah, I see. So wouldn't Krait also have higher IPC, all other things being equal, and the A15 have higher frequency potential for the same reason?

However, I heard the A15 has twice the execution units or something (8 vs 4), so shouldn't that give it higher IPC, and inversely mean that Krait can scale to higher frequencies?? Confused...


Cheers SB!
 
Cheers SB!

BTW - if you want to quote from multiple posts, the easiest way to do that is to just click the button with the "+" next to the quote button of a post. That will add it to the list of posts to quote. Then, on the last post you want to quote, hit the quote button.

And the button changes to a "-" after you click the "+". Click that to remove it from the list of posts to quote. :)

Regards,
SB
 