Intel Atom Z600

Is the GUI in Vista / 7 crippled in any way by DX9L? That to me would be very significant in a netbook, more so than raw performance I think.
 
Requirements may be looser now, actually, with DX11 introducing the Direct3D 9 feature level as well as the 10.x ones, so there's more forward compatibility for the D3D9 feature set.
 
At that point, Medfield could become the one chip that does all.

Then what purpose does Cedar Trail serve exactly?

This is not as much of a counter-argument as a serious question.

If it comes down to nothing more than Cedar Trail going to 2GHz instead of 1.5GHz, that's going to be pretty lame: partly because that's not much differentiation, but also because Lincroft-series CPUs already go to 1.5GHz and it'll seem ridiculous if Medfield can't go higher.

The only real differentiator I see available is Medfield having a single core option (we know Cedar Trail has two). But then it won't survive against current high end ARM SoCs. Seems to me like either Medfield will make Cedar Trail look ridiculous at its TDP or Medfield will be too underpowered to compete... I just don't see how Intel can move forward in this.

(but we all know Wichita is going to destroy Cedar Trail anyway so I guess its market position vs Medfield is sort of moot)
 
The differentiation would be at least package size and inputs/outputs (you really need PCIe lanes, more USB, a SATA controller etc. on a laptop, and why not Thunderbolt).
As for the speculative part about the GPU, I would side with the smaller-GPU idea.

I've just read, though, that the current NM10 chipset will be used. That's lame: still stuck with 100Mb Ethernet and USB 2.0, as on an old PC or a 486-based SoC.

I wonder about virtualization extensions: still crippled or not? All Core i-somethings have them now. Yes, a computer with a slow 4-thread CPU, 4GB of memory and a big hard drive should let you toy with Linux and Windows VMs.
 
I'll have a damn hard time justifying that design decision.

I agree with you there; it makes sense if it was a smartphone or tablet part, but for a chip intended for netbooks this is a major fail. It's going to make a netbook useless for anything but surfing, word processing, etc. Ontario/Brazos is going to kick its butt hard, and with Krishna/Wichita scheduled for Q1 2012, Atom is going to get whooped pretty hard till they get a new arch on 22nm (2013 according to the roadmap). The small plus side is that battery life should go up tremendously (at least 20-30%, I'm estimating).
 
Cedar Trail is 10W TDP for 2GHz, compared to Atom D525 at 1.8GHz which has a TDP of 13W. Both have IGPs, and I'm operating under the assumption that SGX545 is a lot more efficient even at the same process node. So I think the move to 32nm didn't actually win very much, unless Cedar Trail is rated very conservatively.

Which is not a good sign given Intel's hype that Medfield would be as power efficient as current ARM SoCs.
 
Cedar Trail is 10W TDP for 2GHz, compared to Atom D525 at 1.8GHz which has a TDP of 13W. Both have IGPs, and I'm operating under the assumption that SGX545 is a lot more efficient even at the same process node. So I think the move to 32nm didn't actually win very much, unless Cedar Trail is rated very conservatively.

Which is not a good sign given Intel's hype that Medfield would be as power efficient as current ARM SoCs.

As an awfully rough calculation: 45/32 ≈ 1.4, and 13/1.4 ≈ 9.3. So that would suggest shrinking Pine Trail down and doing nothing else would get you to around 9-10W.
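That back-of-envelope shrink math can be sketched as follows. This reproduces only the linear scaling used above; real power scaling depends on voltage, leakage and design changes, not just feature size:

```python
# Naive linear node-shrink estimate for Pine Trail's 13W TDP.
# Purely illustrative: real power does not scale linearly with
# feature size, but this is the rough argument being made.
old_node_nm = 45
new_node_nm = 32
pinetrail_tdp_w = 13.0

shrink_factor = old_node_nm / new_node_nm      # ~1.4
naive_tdp_w = pinetrail_tdp_w / shrink_factor  # ~9.2W

print(f"shrink factor: {shrink_factor:.2f}")
print(f"naive shrunk TDP: {naive_tdp_w:.1f} W")
```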

Cedar Trail has full hardware video decode built in, which was a separate chip (Broadcom?) on Pine Trail and not included in Pine Trail's power figures; additionally, it also handles full Blu-ray decode. Cedar Trail (according to Intel) also has double the graphics performance, and DX10.1 compatibility. So you might say they've added a lot of extra functionality/performance/compliance without dipping into the power savings they got from the process shrink, and also got rid of a chip.
 
Then what purpose does Cedar Trail serve exactly?

This is not as much of a counter-argument as a serious question.
Maybe Medfield and Cedarview will be the same chip, with Cedar Trail using NM10 for added connections and functionality for netbooks/nettops.

As far as power consumption goes, they could both be dual core, with Cedarview aiming at >2GHz (maybe even 2.5GHz) and Medfield going to sub-1GHz and being higher-binned, like the ULV versions of higher performance CPUs.
As far as Javascript performance goes, benchmarks indicate that a single-core 1.6GHz Atom easily beats a dual-core 1GHz Cortex A9, so I think a ~900MHz dual-core, quad-threaded Atom would beat higher-clocked A9s.
And Medfield will probably use slower LPDDR2, as opposed to Cedartrail's DDR3.

Furthermore, as Blazkowicz said, Medfield could save a lot of power by reducing I/O ports, as we've seen with that 5W version of the C-50.

Is the TDP given for the netbook or tablet variant?

I'd also like to know where those 10W for Cedarview are coming from. Is it for the CPU only? Does it include the NM10? The DDR3 RAM?
 
Maybe Medfield and Cedarview will be the same chip, with Cedar Trail using NM10 for added connections and functionality for netbooks/nettops.

Medfield will be a different chip.
When Intel produced Pineview from the Menlow platform, they dropped a lot of the power-saving techniques that were in Menlow, so expect Medfield to have those. Also, Medfield will include video encode IP from IMG, which Cedar Trail does not. I also expect Medfield to have a smaller package.

I'd also like to know where those 10W for Cedartrail are coming from. Is it for the CPU only? Does it include the NM10? The DDR3 RAM?

Cedarview is the SoC; Cedar Trail is the chipset.
 
Cedarview is the SoC; Cedar Trail is the chipset.

I'm sorry, what exactly is Cedartrail? The platform for Cedarview? As in Cedarview + NM10?


Here's the last known roadmap for Cedar Trail:
[roadmap image]


In this roadmap, there's a 1.86GHz D2500 without HT and a 2.13GHz D2700 with HT. These are the nettop parts, so both should have the SGX545 running @ 640MHz.



The latest rumours from Fudzilla mention a single-core N2600 and a dual-core N2800 (I'd bet it's a mistake and they're actually both dual-core, with the latter supporting HT).
There are a couple of other interesting things in there, like Intel requiring a 4-cell minimum for batteries and raising the amount of out-of-the-box RAM to 4GB DDR3.

I'd say the N2800, if clocked at 2.13GHz, will be a bit faster than the 1.6GHz Bobcats in CPU-intensive tasks, especially with multitasking in mind.

That said, with the 2.13GHz system (Cedarview + NM10) consuming ~10W, a Medfield @ <900MHz may hover around 1W with higher-binned parts, no NM10 and a low-power memory controller.
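A hedged sketch of how that ~10W to ~1W argument could work, assuming dynamic power scales roughly as f·V². The voltage figures and the 7W "dynamic share" of the platform TDP are invented for illustration, not Intel specs:

```python
# Rough f*V^2 dynamic-power scaling sketch. All inputs below are
# guesses for illustration only, not published figures.
def dynamic_power(base_power_w, base_freq_ghz, base_volt, freq_ghz, volt):
    """Scale dynamic power linearly with frequency and quadratically with voltage."""
    return base_power_w * (freq_ghz / base_freq_ghz) * (volt / base_volt) ** 2

# Assume ~7W of the ~10W platform figure is the CPU/GPU dynamic part
# (the rest being NM10, I/O and static losses -- pure guesswork).
cedarview_dynamic_w = 7.0
medfield_w = dynamic_power(cedarview_dynamic_w,
                           base_freq_ghz=2.13, base_volt=1.0,
                           freq_ghz=0.9, volt=0.6)  # lower-binned part
print(f"scaled dynamic power: {medfield_w:.1f} W")  # lands in the ~1W ballpark
```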
 
I'm sorry, what exactly is Cedartrail? The platform for Cedarview? As in Cedarview + NM10?

Yeah, my understanding is that Cedar Trail is Cedarview + NM10, regardless of what it's clocked at, which is the same naming convention that was used for Pineview/Pine Trail. To be accurate when quoting TDP, a particular N or D part number needs to be referenced.
 
Maybe Medfield and Cedarview will be the same chip, with Cedar Trail using NM10 for added connections and functionality for netbooks/nettops.

Yeah it's possible, although there'd be different SKUs to prevent Medfields from actually interfacing with NM10.

As far as power consumption goes, they could both be dual core, with Cedarview aiming at >2GHz (maybe even 2.5GHz) and Medfield going to sub-1GHz and being higher-binned, like the ULV versions of higher performance CPUs.
As far as Javascript performance goes, benchmarks indicate that a single-core 1.6GHz Atom easily beats a dual-core 1GHz Cortex A9, so I think a ~900MHz dual-core, quad-threaded Atom would beat higher-clocked A9s.
And Medfield will probably use slower LPDDR2, as opposed to Cedartrail's DDR3.

Judging CPU performance based solely on SunSpider is foolhardy; there are all sorts of variables that have nothing to do with the CPU. I have good confidence that, clock for clock and with similar memory subsystems, Cortex-A9 outdoes Atom, and Atom will be up against dual-core A9s at much higher than 900MHz in 2011. Sure, if it's dual-core it'll have an advantage with stuff that's well threaded (until the quad-core A9s come out, anyway), but how much on phones and tablets do you seriously expect to be well distributed among four threads?

At 1.6GHz Atom would have an advantage over 1GHz Cortex-A9s in typical scenarios, but Atom won't be 1.6GHz on phones and Cortex-A9 won't stop at 1GHz.

Furthermore, as Blazkowicz said, Medfield could save a lot of power by reducing I/O ports, as we've seen with that 5W version of the C-50.

That isn't where the power saving comes from; those reductions are all in the Hudson chipset, which the TDP doesn't include.

I'd also like to know where those 10W for Cedarview are coming from. Is it for the CPU only? Does it include the NM10? The DDR3 RAM?

Regardless of what Anand's calling the preview, those numbers are attached to Dxxx chips, not the platform, and won't include NM10 (which has a TDP of 2.1W, for what it's worth).

By the way, Oak Trail's SM35 PCH is 0.7W, so I'm sure there'll be a tablet variant paired with something like this.
 
- By going PowerVR with a VXD, all Atoms will now support video acceleration for Full HD High Profile decoding, which was unprecedented in the low-cost versions and is actually good (it felt ridiculous that netbooks had worse video performance than most mid-to-high-end smartphones).
- They can brag about supporting DX10.1 with hardware vertex shading this time (whooohoo), which means it'll support some more games (horribly) and maybe they'll even come up with an OpenCL driver for it, just for the lulz.
Well, I think the expectation was that Intel would use some HD Graphics derivative, which would do all that too. Though HD 2000 (on 32nm) is about 30mm², I think, which might be too big (and power hungry), and I don't know how well it scales down further. Not to mention it might lose some of its appeal if there's no L3 cache it could use, coupled with the pathetic memory bandwidth these platforms have.
 
Looks like D2500 is going to need a big price advantage to make anyone want it over N2800, yet the cpu-world page suggests it'll actually cost more o_O

I also like that they're calling SGX 545 "GMA 3650", making the previous Intel GPU feel even more replaced.
 
Looks like D2500 is going to need a big price advantage to make anyone want it over N2800, yet the cpu-world page suggests it'll actually cost more o_O

I also like that they're calling SGX 545 "GMA 3650", making the previous Intel GPU feel even more replaced.

"All four microprocessors are expected to launch in the 4th quarter 2011, priced lower than $55 for D2xxx chips, and less than $50 for N2xxx."

Not really, it would suggest N2800 costs less than D2700, and the latter is faster so...

How does the 545 compare against the 543?

BTW, someone mentioned that GMA 3150 is a half-clocked version of the 945G. That doesn't matter too much, because the GMA 900/950 architecture wasn't limited by fillrate but rather by vertex shading, which is handled by the CPU. Going from 200MHz to 400MHz showed a gain of 10-20% in a lot of games and applications.

Cedar Trail is coming in the fall, which would give 5-6 months lead time over Wichita.
 
How does the 545 compare against the 543?

The 543 has no formal DX compliance, has lower OpenGL compliance, and only has embedded-profile OpenCL, not full profile.

According to IMG marketing, the 543 has a slightly lower poly rate (35M, compared to the 545's 40M, both @ 200MHz). Both have the same fill rate.

Of course, the 543 in iPad 2 is kept company by another 543. No one knows what iPad 2 is clocking the 543 at; a total guess would be 200MHz. If that is true, then one might see the 400MHz Cedar Trail have similar performance to iPad 2, and the 640MHz version have 50%-ish more.
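That comparison can be sketched as a naive cores × clock throughput estimate. It ignores Series5 vs Series5XT ALU differences, memory bandwidth and drivers, and the iPad 2 clock is a guess, as said above:

```python
# Naive throughput proxy: cores * clock (MHz). Purely illustrative;
# the iPad 2 SGX543 clock is a guess, and architectural differences
# between SGX543 and SGX545 are ignored.
ipad2_543mp2 = 2 * 200    # two SGX543 cores at a guessed 200MHz
cedartrail_400 = 1 * 400  # one SGX545 at 400MHz
cedartrail_640 = 1 * 640  # one SGX545 at 640MHz

print(cedartrail_400 / ipad2_543mp2)  # 1.0 -> similar performance
print(cedartrail_640 / ipad2_543mp2)  # 1.6 -> the "50%-ish more"
```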

For me, it would make much more sense for Medfield to have a 543MP2.
 
Cedar Trail is slated for Q4 2011, Wichita for Q1 2012. That could mean 6 months, but it could also mean under 1 month.

Unless something has changed in IMG's numbering, SGX545 is Series5 while SGX543 is Series5XT, meaning that the 543 has wider ALUs. According to IMG, SGX545 still has 2 TMUs and probably 4 ALUs, but they list a triangle rate of 40M at 200MHz (as opposed to the 20M given for the 540), so maybe they doubled the triangle setup rate somehow. I would still expect it to usually perform like a much higher clocked SGX 540, though. I expect A5's SGX543MP2 to beat the 400MHz SGX545 most of the time, and the 640MHz one occasionally.
 