Intel's smartphone platforms

Dumb question: couldn't reviewers just approximate display brightness on the devices they want to compare, then, starting with full batteries, run the exact same routine on all of them (web browsing, video, some 3D, among other things) and note how much battery is left at the end?

No need to approximate much. A light meter good enough to normalize screen brightness is dirt cheap, and the procedure would take a minute or so at most.
 
and note how much battery is left at the end?
Is there an easy and accurate way to get this number? I'm afraid not.

So one would have to loop the tests until the battery drains, but this creates another issue: you'd need to record how many loops were completed, or else you are missing an important term in any efficiency measure. As far as I know, no review ever specifies the number of loops achieved.
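To make that concrete, here is a minimal sketch of what such a rundown loop could look like, assuming an Android device driven over adb; the placeholder workload, the one-minute scenario length and the 5% cut-off are illustrative assumptions, not anyone's actual methodology.

```python
import re
import subprocess
import time

def battery_level() -> int:
    """Read the battery percentage via adb (assumes a connected Android device)."""
    out = subprocess.run(["adb", "shell", "dumpsys", "battery"],
                         capture_output=True, text=True, check=True).stdout
    return int(re.search(r"level:\s*(\d+)", out).group(1))

def run_one_iteration() -> None:
    """Placeholder for one pass of the mixed workload (browsing, video, some 3D)."""
    # In a real harness this would script the device (am start, uiautomator, etc.).
    time.sleep(60)  # stand-in for a one-minute scripted scenario

loops = 0
start = time.time()
start_level = battery_level()

while battery_level() > 5:  # stop before the device powers off
    run_one_iteration()
    loops += 1

elapsed_h = (time.time() - start) / 3600
drained = start_level - battery_level()
print(f"{loops} loops in {elapsed_h:.2f} h, draining {drained}% of the battery")
print(f"-> roughly {loops / elapsed_h:.1f} loops per hour on this charge")
```

Reporting both the runtime and the loop count is what turns a plain battery-life number into an efficiency figure.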
 
Geekbench as run on ARM Android doesn't use FTZ. It's just that ARM CPUs handle subnormals (the new terminology as per IEEE 754-2008 ;)) in hardware rather than relying on microcode as Intel CPUs do.
Depends. ARMv6 FPUs don't seem to be able to handle denormals in hardware at all (it has to be done in an exception handler). All VFPv3-capable CPUs, though, look like they can do it in hardware (well, Intel CPUs also do it in hardware, though yes, that's some slow hardware using microcode; some newer chips should be able to do SOME operations with denormals without any performance hit, but that doesn't apply to Atom...). There still seems to be some performance hit with denormals even on VFPv3 FPUs, but it is way lower than on Atoms (http://rcl-rs-vvg.blogspot.ch/2012/02/denormal-floats-across-architectures.html). NEON doesn't support denormals, but it isn't used in this case afaik.
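For anyone who wants to see the effect for themselves, here is a small, hypothetical NumPy timing sketch (mine, not from the linked post): it times the same float32 multiply loop over normal and subnormal inputs. On cores that take a microcode assist (or, on ARMv6, an exception) for subnormals the second run is dramatically slower; on VFPv3-class hardware the gap should be much smaller.

```python
import time
import numpy as np

def time_scaling(value: float, n: int = 1_000_000, reps: int = 50) -> float:
    """Time repeated float32 multiplies over an array filled with `value`."""
    a = np.full(n, value, dtype=np.float32)
    t0 = time.perf_counter()
    for _ in range(reps):
        a = a * np.float32(0.999)  # keeps values in the same range, so subnormals stay subnormal
    return time.perf_counter() - t0

# float32 normals bottom out around 1.18e-38; anything smaller (but non-zero) is subnormal
t_normal = time_scaling(1e-10)
t_subnormal = time_scaling(1e-40)
print(f"normal inputs:    {t_normal:.3f} s")
print(f"subnormal inputs: {t_subnormal:.3f} s ({t_subnormal / t_normal:.1f}x)")
```

The ratio you get depends entirely on the core (and on whether the runtime has set FTZ/DAZ), which is exactly why the same Geekbench subtests behave so differently across chips.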
 
Is there an easy and accurate way to get this number? I'm afraid not.

So one would have to loop the tests until the battery drains, but this creates another issue: you'd need to record how many loops were completed, or else you are missing an important term in any efficiency measure. As far as I know, no review ever specifies the number of loops achieved.

There will always be "buts" for any sort of measurement, I guess. One advantage of the proposal is that if reviewers keep the measurement scenarios somewhat random, IHVs can't optimize as eagerly for them as they do for a sterile synthetic benchmark. Besides, if I as a reader have data like that alongside the typical synthetic benchmark results, I get a slightly more complete picture than with the latter alone.
 
Depends. ARMv6 FPUs don't seem to be able to handle denormals in hardware at all (it has to be done in an exception handler). All VFPv3-capable CPUs, though, look like they can do it in hardware (well, Intel CPUs also do it in hardware, though yes, that's some slow hardware using microcode; some newer chips should be able to do SOME operations with denormals without any performance hit, but that doesn't apply to Atom...). There still seems to be some performance hit with denormals even on VFPv3 FPUs, but it is way lower than on Atoms (http://rcl-rs-vvg.blogspot.ch/2012/02/denormal-floats-across-architectures.html).
Very interesting link, thanks! It certainly explains why blur and sharpen get awful scores on most Intel chips, even if subnormal density is not 100% as claimed by an Intel guy.

NEON doesn't support denormals, but it isn't used in this case afaik.
No, it isn't; only the VFP is used.
 
I know it's DigiTimes. But unless they've just started making up codenames, they must have gotten this info from somewhere.

http://www.digitimes.com/news/a20130821PD203.html

Intel will launch new tablet platforms, 14nm Cherry Trail in the third quarter and 14nm Willow Trail in the fourth quarter of 2014, as well as new smartphone SoC, 22nm Merrifield, at the end of 2013, Moorefield in the first half of 2014 and 14nm Morganfield in the first quarter of 2015, according to Taiwan-based makers.

The sources pointed out that the Merrifield will have a performance boost of about 50% and a longer battery life compared to Intel's existing smartphone platform Clover Trail+.

Intel is already set to unveil its 22nm Bay Trail-based processors with the Silvermont architecture at the Intel Developer Forum (IDF) from September 10-12 and will later release its Bay Trail-T processors for tablets, supporting both Windows 8.1 and Android 4.2, the sources said.

The Bay Trail-T platform also adopts the Silvermont architecture and supports a battery life over eight hours when active and weeks while idling. The Bay Trail-T will have two clock speed specifications, 1.8GHz and 2.4GHz and a Gen 7 GPU, the sources noted.

Intel will distribute its Cherry Trail samples to partners at the end of 2013, unveil the platform at Computex 2014 in June and announce the CPUs in the third quarter of 2014. The Cherry Trail features Intel's 14nm Airmont architecture with a clock speed of 2.7GHz and a Gen 8 GPU.

The Willow Trail & Derivatives series will adopt Intel's 14nm Goldmont architecture with a Gen 9 GPU, supporting both Windows and Android operating systems.
 
Isn't Baytrail exclusive to tablets and not smartphones?
 
Yes.

For a next-generation part on a mature 22nm tri-gate process, the GPU performance is pretty disappointing.
Shrink the A6X down onto the same process and clock it accordingly... you get my point.
Do you have a comparison in terms of performance per area?
 
Do you have a comparison in terms of performance per area?
Intel haven't released any SoC die sizes for Bay Trail.

However, the Tech Report review has some interesting remarks.

It states that one of the largest areas on the SoC is the graphics. Assuming the two identical areas in the SoC picture are the 2x2 Atom cores, I imagine the large area at the bottom is the GPU. It looks like about 1/5 of the entire die to me.

They also did some basic power tests.

"While gaming, the Clover Trail system's graphics drew about 650 mW, and the CPU drew 700 mW. The Bay Trail system's total power use wasn't far from Clover Trail's, but the mix was very different, with 1.2W going to the IGP and 100-150 mW heading to the CPU. To be fair, though, the Bay Trail IGP was driving a much higher-resolution display."

Clover Trail has a single-core SGX 545 (if I remember correctly), on 32nm.
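If someone does dig up the die areas, the perf-per-area / perf-per-watt comparison asked about above is trivial arithmetic; a hypothetical helper along these lines, where the GPU power draws are the Tech Report figures quoted above but the scores and areas are made-up placeholders that would need real die-shot and benchmark numbers:

```python
def gpu_efficiency(name: str, score: float, gpu_power_w: float, gpu_area_mm2: float) -> None:
    """Print perf/W and perf/mm^2 for one SoC's GPU block."""
    print(f"{name}: {score / gpu_power_w:.0f} pts/W, {score / gpu_area_mm2:.0f} pts/mm^2")

# Power draws are the gaming figures from the Tech Report quote above;
# scores and GPU areas are placeholders, NOT real measurements.
gpu_efficiency("Clover Trail", score=1000.0, gpu_power_w=0.65, gpu_area_mm2=10.0)
gpu_efficiency("Bay Trail",    score=3000.0, gpu_power_w=1.2,  gpu_area_mm2=15.0)
```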
 
Do you have a comparison in terms of performance per area?

No, sorry, I don't. However, what I meant was performance per watt, or outright graphics performance for a tablet form factor; you would have to imagine the A6X's aging SGX cores performing considerably better if baked onto that advanced 22nm process.

What I'm getting at is, unless the drivers are severely crippled at this stage (possible), graphics performance is underwhelming for a next-gen tablet part.
 
No, sorry, I don't. However, what I meant was performance per watt, or outright graphics performance for a tablet form factor; you would have to imagine the A6X's aging SGX cores performing considerably better if baked onto that advanced 22nm process.
You were linking gfx performance to the process, which doesn't mean much if we don't even know the area devoted to graphics in the various SoCs. You should try to find out how much area Apple (or Qualcomm, etc.) devotes to graphics and then compare that to Bay Trail.
 
No, sorry, I don't. However, what I meant was performance per watt, or outright graphics performance for a tablet form factor; you would have to imagine the A6X's aging SGX cores performing considerably better if baked onto that advanced 22nm process.

What I'm getting at is, unless the drivers are severely crippled at this stage (possible), graphics performance is underwhelming for a next-gen tablet part.

I'm not that stoked about Baytrail's GPU performance, but then again it is probably sufficient for the majority of Android games; no one is really pushing the envelope that far, which is understandable with a wide range of hardware to support.

But having used a Motorola Razr i, with a measly single-core old-style Atom, I was surprised how fluidly it drove the UI, so I wouldn't write Baytrail off; it may actually offer a superior user experience. It's weird, but having used many AMD & Intel systems, the Intel CPUs have always felt a touch snappier, post-P4.
 
You were linking gfx performance to the process, which doesn't mean much if we don't even know the area devoted to graphics in the various SoCs. You should try to find out how much area Apple (or Qualcomm, etc.) devotes to graphics and then compare that to Bay Trail.

No, what I mentioned was the likelihood of frequency headroom for the SGX 554MP4 inside the A6X if it had been fabbed on the more advanced 22nm Intel process; in other words, Bay Trail has comparatively weak graphics performance compared to a last-gen tablet GPU/SoC that was fabbed on an older 32nm(?) process.

Performance per mm² is of not much consequence to the consumer if tablet prices are not affected; outright performance for the form factor is.

I wasn't comparing GPU uarch efficiencies or die area cost, just the GPU performance for the form factor, taking into consideration the vastly more advanced process. If Intel's HD 4000 is more efficient than SGX (you would hope so, considering that is PowerVR's last-gen product), then Intel obviously didn't include enough execution units and/or frequency.
 
What I'm getting at is, unless the drivers are severely crippled at this stage (possible), graphics performance is underwhelming for a next-gen tablet part.
Drivers should be fairly robust, at least for Windows, since the GPU is just about identical to what's used in Ivy Bridge (there's only one difference I know of that you need to care about in the driver, which is that Bay Trail, unlike IVB graphics, can decode ETC textures, which is obviously trivial to implement; there are also some minor differences in setting up caching bits etc., since there's no LLC, but that's about it: in all other respects the GPU is set up just like an IVB GT1).
Not sure what graphics stack Intel is using on Android, actually; possibly mostly the same as on Linux (which is also fairly robust these days, and which is where I got that it's identical to IVB), but I could really be wrong here.
So yeah, OK, being about equivalent to an Adreno 320 isn't exactly mind-blowing, but at least it's got a very solid (D3D11) feature set (better than what you'll find in the Rogue in the iPhone 5s or any other chip found in a tablet, except those from AMD, not that it matters much).
 
No, what I mentioned was the likelihood of frequency headroom for the SGX 554MP4 inside the A6X if it had been fabbed on the more advanced 22nm Intel process; in other words, Bay Trail has comparatively weak graphics performance compared to a last-gen tablet GPU/SoC that was fabbed on an older 32nm(?) process.
Process is just one of many variables; saying that this or that SoC, if manufactured on a different process, would be better or worse means nothing. Parts are designed to fulfill many requirements, and process is just one of them. It is an important one, but it is hardly the end of the story.
 
Process is just one of many variables; saying that this or that SoC, if manufactured on a different process, would be better or worse means nothing. Parts are designed to fulfill many requirements, and process is just one of them. It is an important one, but it is hardly the end of the story.

You are correct, it isn't the only parameter to consider, but let's be fair: you would expect better performance to be likely on a much more advanced process.
In layman's terms... I'm just saying I expected a little better, that is all.
 
I'm not that stoked about Baytrail's GPU performance, but then again it is probably sufficient for the majority of Android games; no one is really pushing the envelope that far, which is understandable with a wide range of hardware to support.

But having used a Motorola Razr i, with a measly single-core old-style Atom, I was surprised how fluidly it drove the UI, so I wouldn't write Baytrail off; it may actually offer a superior user experience. It's weird, but having used many AMD & Intel systems, the Intel CPUs have always felt a touch snappier, post-P4.

Yes, for Android games as they stand it will be sufficient; we're allowed to dream though, right? ;)
 
Drivers should be fairly robust, at least for Windows, since the GPU is just about identical to what's used in Ivy Bridge (there's only one difference I know of that you need to care about in the driver, which is that Bay Trail, unlike IVB graphics, can decode ETC textures, which is obviously trivial to implement; there are also some minor differences in setting up caching bits etc., since there's no LLC, but that's about it: in all other respects the GPU is set up just like an IVB GT1).
Not sure what graphics stack Intel is using on Android, actually; possibly mostly the same as on Linux (which is also fairly robust these days, and which is where I got that it's identical to IVB), but I could really be wrong here.
So yeah, OK, being about equivalent to an Adreno 320 isn't exactly mind-blowing, but at least it's got a very solid (D3D11) feature set (better than what you'll find in the Rogue in the iPhone 5s or any other chip found in a tablet, except those from AMD, not that it matters much).

Yeah, thanks. It looks like Intel optimised the SoC for die area and power consumption, which is perfectly understandable; just for my own selfish geeky needs I expected something mind-blowing with that advanced process... perhaps that's what we get with the CPU/encode/decode/memory logic and all-round system fabric instead.
 