Opinion: Silverthorne fails but PowerVR impresses (+Montalvo trouble)

Discussion in 'PC Industry' started by B3D News, Apr 2, 2008.

  1. B3D News

    B3D News Beyond3D News
    Regular

    Joined:
    May 18, 2007
    Messages:
    440
    Likes Received:
    1
    All the Silverthorne information you'll ever want is now available in articles from The Tech Report and AnandTech - but while the coverage is decent in terms of architecture, they both miss the mark completely in terms of market dynamics. And in other news, Montalvo looks like it's in big trouble...

    Read the full news item
     
  2. digitalwanderer

    digitalwanderer Dangerously Mirthful
    Legend

    Joined:
    Feb 19, 2002
    Messages:
    17,033
    Likes Received:
    1,606
    Location:
    Winfield, IN USA
    I still think Sliverthorne sounds like a +2 elven dagger or something. :???:
     
  3. Hannibal

    Newcomer

    Joined:
    Mar 19, 2007
    Messages:
    16
    Likes Received:
    0
    Are you kidding?

    I'd like to know where Intel has advertised this as a mobile phone part. It isn't. It's aimed at a totally different form factor, namely the MID.

    Now, the MID as a form factor is going nowhere, so that's a legit knock against Silverthorne and its prospects in the mobile space. But Arun's post as written reads like someone with an apple complaining that he can't make good orange juice with it. It sort of misses the point.
     
  4. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    Sorry to be blunt, but I'm not the one missing the point... :) I'm not referring to the Silverthorne CHIP - I am referring to the Silverthorne ARCHITECTURE. Intel has clearly time and again claimed that Moorestown could be used in 'larger-than-iPhone Premium Smartphones' and that the 32nm shrink of that could be used in iPhone-sized smartphones. What I'm saying is clear: those claims are completely false. Reducing idle power consumption and going to a lower process node will not make the situation look any better for Intel.

    I should have made it clearer that I was referring to the architecture and not the chip though (although I did say 'architecture' and also mentioned Moorestown and the 32nm shrink explicitly). I'll fix that right now, cheers.
     
  5. Voltron

    Newcomer

    Joined:
    May 25, 2004
    Messages:
    192
    Likes Received:
    3
    As far as NVIDIA not bailing out Montalvo goes - and who knows if they will or won't, it's all just rumors right now - it is pretty clear NVIDIA is not desperate to acquire x86 technology. I'm not saying they won't, as they very well might; just that they are not desperate. And I think given that they have been in the PC market for 15 years and have been planning for this day for quite some time, it is unlikely that they will be in a desperate position.

    One could argue they have been late to the mobile game, or that their strategy was flawed, but no company is perfect and this is a new market. If something is a legitimate threat to their core business that's another story.
     
  6. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,098
    Likes Received:
    2,814
    Location:
    Well within 3d
    Is the ARM11 core used in mobile phones?
    I think it might be, but I'm not up on the mobile side of things.

    If Intel compares Silverthorne to a mobile phone processor, what is the desired impression?

    I'm not so sure about waiting on future ARM Cortex products, though.
    The chatter I've seen thus far on how such products will be implemented seems to indicate that the numbers might be too rosy.
    If licensed Cortex chips are physically implemented by licensees in different ways, the best-case numbers--and the performance--might not materialize.
    Some haven't been impressed thus far.

    I admit I am not all that familiar with how this segment works, so I can't really gauge what would be considered credible.
     
  7. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    I agree. And I also believe that not acquiring one yet is the correct strategy for now, at the very least for a couple more months and possibly much longer. What I do think matters, however, is that they are able to react quickly if the situation changes rapidly, which it very well could. It is in that context that I feel Montalvo, for example, could be an attractive safety net which, if I were them, I would try not to lose for at least some more time. There's a big difference between writing a $15M check to keep them running and actually buying the company.

    Oh, I agree. Desperate is very unlikely, especially for their core business. I do worry that they might miss the revenue opportunities associated with the commoditization of the x86 market however, which are significant (and even more for NV than for Intel, as for the latter that also represents some lost revenue in other ways). I'm also unconvinced their MCP business can be sustained at its current and future cash burn rates unless they expand into the single-chip x86 SoC market.

    Well, on the plus side of things, it's worth pointing out that Mike Rayfield (their handheld general manager; google his bio) only joined the company sometime in 2005, and NV's strategic decisions in the market since then have been relatively good (not stellar either imo, but it's hard to judge that very precisely without knowing what resources they had, what was already decided before, etc.) - so certainly I have some respect for that and the fact their mobile strategy seems much more viable now. Although I still have some real points of disagreement and am worried about cost-efficiency. But this isn't the right place to debate that...

    Yes, in terms of GPUs I'm not too worried; in terms of MCPs though, we'll see. Certainly NV's execution there has been underwhelming lately, and their roadmap was subpar even if they had executed on it - so I'll remain slightly skeptical about their capability to react to market dynamics on time there.

    3dilettante: ARM11 is used in a huge number of smartphones today, including the iPhone, yes (although it's clocked slightly higher than the OMAP2 Intel is using as a comparison point there). As for Cortex, are you talking about the A8? Because I have indeed heard about some bad stories there (which explain why so many are remaining on the ARM11), but the A9's final RTL has only become available to lead customers very recently. I'm not sure it's very likely anyone has a perfect idea of how clock rates will turn out, so I suspect your contacts were talking about the A8? EDIT: BTW, fwiw, I'm using ARM's Cortex-A9 numbers as if they were valid for 40nm instead. So I'm already including a huge amount of 'heh ARM might be wishful thinking again' pessimism in my estimate!
     
    #7 Arun, Apr 2, 2008
    Last edited by a moderator: Apr 2, 2008
  8. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,098
    Likes Received:
    2,814
    Location:
    Well within 3d
    It's possible that they were talking about A8, as this was a while back.
    The general tenor was just that ARM's estimations about what could theoretically be done didn't match up with what the actual implementations provided.

    The way it was described to me, licensees have so many things they can implement or omit, and even then so many ways the actual circuits can be implemented, that the actual cores that result may lag in either power consumption or performance.

    ARM's numbers were also not entirely clear about when they were talking about best low-power vs best performance, and how exactly the performance/power aspect changed depending on the implementation.

    (edit: not that ARM can fully outline such a relationship for products that don't yet exist, but the closest analogy I have is that ARM's numbers at the time could, in the worst case, be like Intel telling customers that Core2 has amazing performance/watt, then only stating the TDP of the low-power SKU, and then in the performance bracket using a different, higher-clocked version without mentioning the switch)
     
  9. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    Yeah, must be the A8 then if it was a while ago - and so that also matches what I heard.

    Hmm, things they can implement or omit? My impression is the only major thing there in the A8 is the NEON unit, which has been described to me off-the-record as not being worth the silicon area at all but often necessary because there is no 'plain FPU' option. The A9 has an FPU option on the other hand, which is what several companies will use, although I wouldn't be surprised if TI stuck with NEON, but who knows. If there are smaller things they can choose to implement or omit though, I would be very interested in a few examples, on or off the record!

    Yes, this is a tendency they have; heck, many IP houses have that tendency sadly. I always make a special effort in my pieces to do the extra research and make sure I'm conservative, so as not to just be reiterating PR-ish wishful thinking, but there are limits to what I can do sadly... In the Cortex-A9 case, as I said, I pretty much just used 65nm numbers instead of 40nm ones because I did suspect they were too optimistic.
     
  10. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,098
    Likes Received:
    2,814
    Location:
    Well within 3d
    It seems reasonable that they were talking about the NEON unit.
    I don't really have much special knowledge on this, as this was just something mentioned in passing that stuck in my brain all this while.

    The power impact that a vector or FP unit has is one area where the numbers might be different than expected.
    I was also told that when licensees "roll their own" implementations, depending on how much they get from ARM, they can make other choices in implementation to make the circuits match their criteria at the expense of power or performance.
    I may have misremembered this part, but my impression is that licensing from ARM can involve multiple levels, where more can be ready-made (not sure about the range from high to low level detail) rather than done in-house.

    This overlaps somewhat with concerns about what the process used actually achieves vs what the process's ad copy says.
    If we were to assume (naively?) that the implementation used the minimum feature size advertised for a given process node (like TSMC's tiny SRAM cells, just as an example), we would probably find that the undesirable side effects of pushing the envelope would lead to poorer performance or leakage characteristics.
     
  11. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    Fully agreed. Heck, NEON isn't even part of the Cortex-A8 area estimates even though it's huge and skipping it isn't very viable! Similarly, the Cortex-A9 estimate of 1.5mm² on 65nm likely is area-optimized, is pre-layout and doesn't even include the plain FPU... heh!
    Ahhh, I get what you mean now - design options that are functionally equivalent but will result in different perf/power trade-offs. It does make sense they would propose those, yeah (and it also opens the door to a lot of creative marketing, sadly...)
    Yes, there are some cores they offer as Hard IP, especially the Cortex-A8. AFAIK the A9 doesn't have a Hard IP option (yet?), though.
    Indeed. Where have I read that before? Oh right: http://www.beyond3d.com/content/news/568 :) (of course as you point out even the main claimed SRAM size might be too aggressive for certain applications!)
     
  12. Rangers

    Legend

    Joined:
    Aug 4, 2006
    Messages:
    12,314
    Likes Received:
    1,117
    I hate to break it down this way, but it seems the great stuff that rescued Intel came out of Israel (Core 2 Duo) while this came out of Austin, so I questioned it from the start.

    OTOH this would seem ideal for something like the Asus Eee; it seems it would give similar performance to the 900MHz Celeron-M in there at 1/10 the power. And that might be a burgeoning market, as that thing has taken the market by storm.
     
    #12 Rangers, Apr 3, 2008
    Last edited by a moderator: Apr 3, 2008
  13. iwod

    Newcomer

    Joined:
    Jun 3, 2004
    Messages:
    179
    Likes Received:
    1
    CSR, Icera and PicoChip - are they all wireless silicon chip makers?
    It is interesting because when you say the best semiconductor technology in the UK, I would instantly think of ARM and PowerVR.

    I have always wondered why ARM never made it on the desktop. The point of Silverthorne is that it will take x86 into true mobile form factors. Surely ARM also has a huge amount of software developed for it already when compared to other RISC architectures like Power.
     
  14. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    Well, they're doing what they can with what they have, which is an awful ISA and a stupid corporate philosophy. As I said there are at least a few nice innovations in there (the way AnandTech claims it's more than in-order because of the way it treats the FPU is ridiculous though, that's as far from being a new idea as you could ever get!) which actually makes me quite skeptical Bobcat and Montalvo's mini-core could ever be suitable for handhelds either even in a billion years.
    Yes, it's very attractive for UMPCs/MIDs/Ultraportables as I said. Isaiah's ultra-low-voltage SKU will likely be competitive there though, and would probably be superior in every way for the 2-3W market once Isaiah moves to 45/40nm (but by then Intel will have moved to Moorestown too).
    Yes, well PicoChip is basestation-oriented and not as exciting as Icera for example IMO (although their claimed numbers are certainly very impressive and I'm willing to believe them) but they're all wireless.
    Yeah, I could have listed ARM too - they certainly have a good track record and are being quite innovative with the A9. However, they do have some much more direct competitors which also have interesting products/designs that are arguably better for some markets (Tensilica, MIPS, etc.) - on the other hand, CSR and Icera are clearly leading the pack in every way.

    Well, CSR is lagging behind Atheros for WiFi, but I'm talking core businesses here - they were actually on par or ahead of everyone else (Broadcom/Marvell/etc.) until late 2007 even though they didn't get many design wins until recently... And now their traction will likely fade away in favour of Atheros until their next-gen 90nm WiFi chip is ready (and maybe even afterwards). We'll see how that goes, certainly all this makes WiFi extremely attractive for future handhelds.
    It isn't like they didn't try many, many years ago; research the early history of ARM. Either way, the mistake you are making here is the exact same mistake Intel and AMD are making: they assume that all software is equivalent. But no, it isn't; software developed for a 17"+ screen isn't going to be very usable on a 3" screen. Software developed for Mobile Linux won't be very usable on the desktop either. Certainly there is a market for ARM in 'larger-than-iPhone-pocketables' though...
     
  15. crystall

    Newcomer

    Joined:
    Jul 15, 2004
    Messages:
    149
    Likes Received:
    1
    Location:
    Amsterdam
    I also heard a lot of bad stuff on A8, mostly from fab guys.

    Looking at the A9 documentation I have at hand I guess that - on the same process - it's going to clock lower than the A8 but perform significantly better in most of the workloads you could throw at it. The A8 is a super-pipelined in-order monster with horribly long latencies (both in the cache and execution departments). It basically sucks at everything which is not made of nice, large, well behaved loops processing data in a regular stream. Being very different from previous ARM cores it also sucks on legacy binaries, think of Prescott but for ARMs instead of x86. The A9 on the other hand has a shorter pipeline with limited OoO execution capabilities and nice SMP capabilities. It's likely to be a much better processor than A8 from every possible point of view.
     
  16. tangey

    Veteran

    Joined:
    Jul 28, 2006
    Messages:
    1,406
    Likes Received:
    149
    Location:
    0x5FF6BC
    I think it's far too early to determine what the power/performance will be of a next-gen platform (45nm Moorestown) and the next-next-gen (32nm Moorestown) when the current next gen is literally just starting to go out the door. Intel definitely are stating that Moorestown will be suitable for "premium smartphones". It's re-confirmed in slides 4 & 7 from this presentation at the IDF:

    https://intel.wingateweb.com/SHchina/published/MIDS001/SP08_MIDS001_100r_eng.pdf

    One of the major power benefits will be gained from the integration of functionality on the SoC. Remember the graphics, video and memory controllers are at 130nm on the Atom Centrino platform; the power reduction from these items alone will be significant when brought down to 45nm and then 32nm.
     
  17. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    Yes, that was also my conclusion after studying the documentation a bit... :) This presentation is interesting, especially Page 22: http://www.jp.arm.com/event/pdf/forum2007/t1-2.pdf

    Based on that graph, I estimated the Cortex-A8 to have 2.0 Dhrystone/MHz while the Cortex-A9 has 2.3 Dhrystone/MHz. ARM11 MPCore delivers 'only' 1.2 Dhrystone/MHz. Heck, Page 23 is also really impressive assuming those numbers are real and not too creatively selected. I'd be very very interested in the same programs being run on, say, Yorkfield or Barcelona... (btw, I'm not saying Dhrystone is the most representative benchmark around, it likely isn't, but it's the only real datapoint we have sadly!)

    EDIT: Oh, and I'm also not convinced Cortex-A9 would be clocked substantially lower than the A8 for a given process; maybe 10% or so though, I wouldn't exclude that. Not that we'd know anyway since I don't think anyone will synthesize it for 65nm, TBH...
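    To make those per-MHz figures concrete, here's the back-of-the-envelope arithmetic (a sketch only; the clock speeds below are illustrative assumptions, not announced products):

```python
# Rough Dhrystone throughput from the per-MHz estimates quoted above:
# ARM11 MPCore ~1.2, Cortex-A8 ~2.0, Cortex-A9 ~2.3 Dhrystone MIPS/MHz.
# Clock speeds are illustrative assumptions, not vendor figures.
dmips_per_mhz = {"ARM11 MPCore": 1.2, "Cortex-A8": 2.0, "Cortex-A9": 2.3}
clocks_mhz = {"ARM11 MPCore": 400, "Cortex-A8": 600, "Cortex-A9": 600}

for core, eff in dmips_per_mhz.items():
    # Throughput scales linearly with clock for a fixed per-MHz score.
    print(f"{core}: {eff * clocks_mhz[core]:.0f} DMIPS at {clocks_mhz[core]} MHz")
```

    At equal clocks the per-MHz ratio is the whole story: the A9 estimate is ~15% ahead of the A8 and roughly 90% ahead of ARM11 MPCore.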
     
  18. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    Yes, and that's actually why I wrote this opinion piece - because I think Moorestown will be an awful platform for premium smartphones and the 32nm shrink will be an awful platform for smartphones and even handhelds in general (and no, I'm not getting paid by ARM to say this!) - now, as to why I think that...

    You need to be able to read between the lines: the real gain with Moorestown comes with idle power, not active power. Effectively, they'll likely shave off 90% of the idle power, and shave off ~75mW at full load too by doing that. There will indeed be power benefits from integrating the memory controller, but don't expect miracles. In effect, Moorestown will improve idle and 'average' power (since the latter is defined as being under C6 80-90% of the time - how practical, as if ARM processors couldn't also do that!) but full load power will not be substantially reduced, and that's what should be compared with the Cortex-A9's numbers.
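    The 'under C6 80-90% of the time' definition of average power is just a duty-cycle weighted mean, which is easy to sketch (all wattage values below are made-up placeholders, not Intel figures):

```python
# Duty-cycle model of "average" power as described above: the platform
# sits in a deep idle state (e.g. C6) most of the time, so cutting idle
# power slashes the average while full-load power barely moves.
# All wattage values are hypothetical placeholders, not Intel data.
def average_power(p_active_w: float, p_idle_w: float, idle_fraction: float) -> float:
    """Time-weighted average power over a duty cycle."""
    return idle_fraction * p_idle_w + (1.0 - idle_fraction) * p_active_w

before = average_power(2.0, 1.0, 0.85)   # hypothetical: high idle power
after = average_power(2.0, 0.1, 0.85)    # idle power cut by 90%
print(before, after)  # 1.15 -> 0.385: big "average" win, same load power
```

    Which is exactly the point: the headline "average" improves dramatically even though the full-load figure, the one to compare against Cortex-A9, is untouched.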
     
  19. tangey

    Veteran

    Joined:
    Jul 28, 2006
    Messages:
    1,406
    Likes Received:
    149
    Location:
    0x5FF6BC
    I fully understand that Intel has been massaging power figures for Atom to give the best possible view - in fact I hear that C7 power usage is even better (but that requires the battery to be removed!) - but even allowing for that, the Atom Centrino platform is clearly meeting the targets it set for its intended segment, as demonstrated by the 30+ companies that have evaluated it and are going into production with it.

    I also *assume* that if Intel begin to talk about a forthcoming platform being suitable for a particular segment, they have to have done some sums to justify it, and I mean real-world sums. For example, if Intel are courting Apple for future iPhone derivatives as is being suggested, then they have to go to Apple with a realistic power/performance platform, otherwise Apple will come back a couple of days later and say "no thanks, this thing sucks power like a 100W lightbulb". Marketing is fine, but engineering evaluation tells all, and I can't really believe Intel will let their marketing dept get so far ahead of reality that it makes a complete fool out of them. This is Intel, not the Bitboys :) Surely XScale taught them something. I suppose time will tell.
     
  20. DavidC

    Regular

    Joined:
    Sep 26, 2006
    Messages:
    347
    Likes Received:
    24
    This is where the Cortex presentation pulled the Dhrystone 2.1 figures from:
    http://homepage.virgin.net/roy.longbottom/dhrystone results.htm

    Core 2 Duo gets 6400 with 2.4GHz.

    If you match the presentation score to Dhry2 Opt VAX MIPS, you get the same results.

    I would take the benchmarks with a grain of salt, unless you're willing to believe that four A9 cores will be faster than a Core 2 Duo.
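    The implied aggregation is easy to check (a sketch; the 1GHz quad-A9 clock and perfect four-core Dhrystone scaling are my assumptions, which is exactly why the result looks implausible):

```python
# Why the comparison invites skepticism: naively summing four Cortex-A9
# cores at the ~2.3 DMIPS/MHz figure quoted earlier in the thread would
# beat the measured Core 2 Duo score, which seems hard to believe.
# The 1000 MHz clock and perfect 4-core scaling are assumptions.
core2_duo_dmips = 6400              # measured figure cited above (2.4GHz)
quad_a9_dmips = 4 * 2.3 * 1000      # cores x DMIPS/MHz x assumed MHz
print(quad_a9_dmips, ">", core2_duo_dmips)  # 9200.0 > 6400
```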

    Here's Intel's numbers for Silverthorne against A8:

    http://pc.watch.impress.co.jp/docs/2008/0402/kaigai432.htm

    EEMBC Suite v1.1(compared to ARM 11 400MHz)
    Cortex A8 600MHz: 3.3x
    Cortex A8 1GHz: 5.4x
    Intel Atom Z510 1.1GHz: 6.8x
    Intel Atom Z530 1.6GHz/w HT: 13x
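    Normalizing those relative scores by clock speed gives a rough per-clock efficiency comparison against the ARM11 baseline (a sketch; note it glosses over the fact that the Z530 figure includes Hyper-Threading):

```python
# Per-clock efficiency from the EEMBC relative scores above, which are
# all quoted against an ARM11 at 400 MHz.
baseline_mhz = 400
scores = {                      # name: (relative score, clock in MHz)
    "Cortex A8 600MHz": (3.3, 600),
    "Cortex A8 1GHz": (5.4, 1000),
    "Atom Z510 1.1GHz": (6.8, 1100),
    "Atom Z530 1.6GHz/HT": (13.0, 1600),
}
for name, (rel, mhz) in scores.items():
    # Speedup per MHz relative to the ARM11 baseline's per-MHz throughput.
    print(f"{name}: {rel * baseline_mhz / mhz:.2f}x per clock vs ARM11")
```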

    Here's another number from a PC point of view: http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3276&p=19

    "Under SYSMark 2004, a 1.6GHz Atom is around 20% faster than an 800MHz Pentium M (90nm Dothan)."

    You know how the IGP in Poulsbo performs? It also says there: "Intel told us to expect a 3DMark '05 score around the 150 point mark."

    Inquirer also shone the light on Poulsbo: http://www.theinquirer.net/gb/inquirer/news/2008/02/06/ever-wanted-know-silverthorne

    "They expect this much lower power version to score a bit less than LR in 3DMark, 500 in '03, 150 in '05."

    Take it how you want.
     