Intel Atom Z600

Why are we assuming it must necessarily be either MP4 or single core? It could just be the same Exynos chip with only 2 or 3 cores enabled out of 4 for yield reasons.
 
Intel surprisingly streamlined their SoC for the embedded market, leaving the DirectX 9.3-focused SGX535 for an OpenGL ES-focused core. They could easily go for a 543MP2 update, taking the same path as Apple but with different initial motivations.

They really should go for a 554MP2 (not to be confused with the 544MP2), though... including a DirectX target would make sense again, and the performance numbers would make a statement!

As a point of comparison, I'm pretty sure a 543MP doesn't come anywhere close to doubling the die size of a 540, yet it can yield proportionally large increases in some performance areas.

Among heat, size, and power, though, die area should be the last concern.
 
Did you mean to put 544 instead of 535?
Has the 545 got twice the compute power of the 543, but the same TMUs or something?
 
No, I meant the 535, the SGX core with which they started back with Poulsbo/Menlow and at times tried to position in form factors vaguely resembling something mobile.

The 545 is still a four-ALU-pipeline core like the other 54x cores, without the benefit of the newer Series5XT architecture enhancements (like better floating-point performance), but with a focus on DirectX 10.1-level compatibility and with some areas of throughput, like polygon and Z performance, rebalanced higher.
 
Wasn't Intel's SGX535@400MHz estimated to have a die area of a bit less than 9mm2? I don't know what an SGX540@400MHz would look like, since Intel seems for some reason to be more than just generous with die area on small form factor embedded GPUs (I bet it's no different on Cedar Trail either, especially considering the peak 640MHz frequency).

Instead of a 540 now in Medfield, they should have gone for an SGX543MP2 in the first place, even if the frequency wouldn't have reached even a full 200MHz.

The 554MP2 you're suggesting would buy them twice the ALU count compared to a 544MP2, which naturally wouldn't equate to twice the performance at the same frequencies; more like 40-50% depending on the case. While I realize that ALUs are typically relatively "cheap" in terms of transistor budget, I'd still prefer a 544MP4, even if it would mean slightly more die area.
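
To put a rough number behind that 40-50% figure, here's a quick Amdahl-style sketch in Python. The ALU-bound fractions are illustrative assumptions, not measurements; the point is just that doubling ALUs only speeds up the ALU-limited share of frame time:

```python
# Rough sketch: doubling ALUs only scales the ALU-bound share of the workload.
# The ALU-bound fractions below are illustrative assumptions, not measured data.

def speedup(alu_bound_fraction: float, alu_scale: float) -> float:
    """Speedup when only the ALU-bound share of frame time scales."""
    alu = alu_bound_fraction / alu_scale   # ALU-limited time shrinks
    rest = 1.0 - alu_bound_fraction        # TMU/bandwidth-limited time doesn't
    return 1.0 / (alu + rest)

for f in (0.5, 0.6, 0.7):
    s = speedup(f, alu_scale=2.0)          # 554MP2 vs 544MP2: double the ALU pipes
    print(f"ALU-bound fraction {f:.0%}: ~{(s - 1) * 100:.0f}% faster")
# -> ~33%, ~43%, ~54%: in line with the 40-50% ballpark above
```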
 
The benefit to fill rate would definitely be useful, so I can't argue with a 544MP4 instead, especially since Intel's already down to 32nm geometries.
 
Well I think they are going dual-core Atom, so that leaves less die space.

No idea about Atom CPUs, but ARM CPU cores are relatively small. Typically way smaller than a decent GPU block would consume.

A 554MP2 seems more likely than a 544MP4.

ALUs are usually cheaper in hardware than TMUs, amongst other things. With a 554MP2 you get SGX544MP4 ALU amounts, but 544MP2 TMU and Z-check unit amounts, amongst others.
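
A quick bit of arithmetic makes the trade-off explicit. The per-core ALU pipe counts follow the "four-pipe 54x core" and "twice the ALUs" claims earlier in the thread; the TMUs-per-core figure is an assumption for illustration:

```python
# Unit-count arithmetic for the configurations discussed above.
CORES = {
    # name: (ALU pipes per core, TMUs per core)
    "SGX544": (4, 2),   # four-pipe core, per the thread
    "SGX554": (8, 2),   # double the pipes; equal per-core TMU count is assumed
}

def totals(core: str, mp: int) -> tuple[int, int]:
    """Total ALU pipes and TMUs for an MPn configuration."""
    alus, tmus = CORES[core]
    return alus * mp, tmus * mp

for core, mp in (("SGX544", 2), ("SGX544", 4), ("SGX554", 2)):
    a, t = totals(core, mp)
    print(f"{core}MP{mp}: {a} ALU pipes, {t} TMUs")
# SGX554MP2 matches a 544MP4 on ALU pipes but only a 544MP2 on TMUs
```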
 
If you look at the die shots from Silverthorne, I'd say that an Atom CPU core at 45 nm is a little less than 10 mm^2 (Silverthorne as a whole is 25 mm^2). That's just the CPU core though, so without the L2 cache.
 
Is Silverthorne part of the Z series? How does that compare to an A9 @ 40nm?
 
That IS a lot if true.
 
Can't say, implementers tend to not make such figures public :)

Take a Tegra3 SoC die shot, then, and estimate how much of the entire die area the 4+1 CPU cores could capture, considering that the entire SoC is around 80mm2.
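
For what it's worth, that method boils down to this back-of-the-envelope calculation. The Silverthorne figures come from the posts above; the Tegra 3 CPU-cluster fraction is a placeholder assumption to show the method, not a measurement:

```python
# Back-of-the-envelope per-core area from a die shot.
def core_area(die_mm2: float, cluster_fraction: float, cores: int) -> float:
    """Per-core area from total die size and the cluster's die-shot fraction."""
    return die_mm2 * cluster_fraction / cores

# Silverthorne @ 45nm: ~10mm2 of a 25mm2 die is the CPU core (from the thread)
atom = core_area(25.0, 10.0 / 25.0, 1)
# Tegra 3 @ 40nm: 80mm2 die, assumed ~15% for the 4+1 A9 cluster (placeholder)
a9 = core_area(80.0, 0.15, 5)
print(f"Atom core @ 45nm: ~{atom:.1f} mm^2")
print(f"A9 core @ 40nm:  ~{a9:.1f} mm^2 (under the assumed fraction)")
```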
 
Well, we can gather that two A9s are smaller than an Atom on a similar process, and likely outperform it on every metric, single- and dual-threaded, clock for clock... outside of Anandtech, that is :rolleyes:
 
That's kind of a bold statement. Architecturally, an A9 should be better performing than an Atom. But when we speak of performance, particularly in reference to Android, end-user applications are what matters. And in that respect, benchmarks like SunSpider and particularly the GUI measurements of Vellamo are a lot more relevant than, say, SPECint.

Granted, the software layer has a lot to do with performance in those particular benchmarks. But I think we're entering an age where chipmakers can't sit idle and "blame it on software" anymore. Companies today have to be systems companies, with reference designs that guarantee a better user experience. Intel has done this and is probably the industry leader in that respect. TI and Qualcomm, and to a lesser extent Samsung, are totally new to this field.

Nvidia can be considered a close second behind Intel.
 
...I don't know, have they? If you take Saltwell and put it up against a dual-core A9 clock for clock, on the same process and running EXACTLY the same version of Android, I would bet the A9s at worst equal the Atom, and at best smoke it... including power consumption.

I appreciate there is more to an SoC than just the CPU, and dedicated hardware does video better for less power, but the Medfield SoC doesn't include baseband or power management (unless I have got that wrong) and is also aimed at less CPU-intensive scenarios, so the benchmarks Intel provided for it are dubious IMHO.

I know what you are saying, though: hardware manufacturers are responsible these days for delivering the performance downwind to the consumer... and that means optimising.

However, I would say Intel, whilst having a lot of experience of that with CPUs, have done VERY poorly on the graphics side of things.
Nvidia can be considered king, I agree.

Qualcomm, Samsung and Apple have also done a lot of optimising.
 