Outgoing Qualcomm COO on the iPhone

tangey

Veteran
Qualcomm's outgoing chief operating officer, Sanjay Jha, had some glowing remarks about the iPhone's graphics and power usage:


"That rivalry we referred to with Apple was delivered by Jha with nothing but praise for the Apple iPhone design, “The power consumption on 3D graphics has to come down,” he said. “The Apple iPhone display pipeline and graphics are deeply integrated and we need to do the same on a single chip, to empower other devices to have the same kind of graphics capability. It is not a trivial problem. When a handset is rendering a web page, it has to deal with six or seven different media types, served from 10 or 12 different servers. The iPhone has done a good job of this, but Qualcomm thinks it can do a better one at the chip level"

Quite a compliment to IMG technology.

http://www.theregister.co.uk/2008/08/18/qualcomm_vampire/page2.html
 
IMHO Qualcomm would be much better off with another license in the future, in order to have a lower chance of getting dumped halfway down the road.

In any case, I guess he's comparing MBX with whatever Z4x0 IP they had licensed from ATI/AMD. Since SGX-powered devices should appear in the second half of this year, we'll have to wait and see if they truly can do better than that after all.
 
Intriguing. What he's pointing out there is actually very subtle - the interaction of the display/image and graphics pipeline; i.e. in the case of a webpage, you get a lot of images in a lot of different formats and need to convert them rapidly and efficiently into RGB textures. I can see how bad HW integration or subpar SW might hurt this, although I'll always be shocked at just how bad many companies are at doing this kind of thing. Probably not a small part of the reason why Apple has been making their own SoCs ever since the iPhone (or possibly the iPod Nano 2G?).
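To make that concrete, here's a minimal sketch in C of just one of those conversions: expanding a palettized (GIF-style) image into the flat RGBA layout a GPU wants for a texture. Purely illustrative - none of these names are a real browser or SoC API:

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative sketch only: expand w*h palette indices (e.g. from a
 * GIF decoder) into the flat RGBA8888 layout a GPU expects for a
 * texture. On a poorly integrated SoC, conversions like this run on
 * the CPU with extra copies for every image on the page. */
uint8_t *palette_to_rgba(const uint8_t *indices,     /* w*h indices */
                         const uint8_t palette[][3], /* RGB entries */
                         uint32_t w, uint32_t h)
{
    uint8_t *rgba = malloc((size_t)w * h * 4);
    if (!rgba)
        return NULL;
    for (size_t i = 0; i < (size_t)w * h; i++) {
        const uint8_t *p = palette[indices[i]];
        rgba[i * 4 + 0] = p[0];   /* R */
        rgba[i * 4 + 1] = p[1];   /* G */
        rgba[i * 4 + 2] = p[2];   /* B */
        rgba[i * 4 + 3] = 0xFF;   /* opaque alpha */
    }
    return rgba;
}

int main(void)
{
    const uint8_t palette[2][3] = { { 255, 0, 0 }, { 0, 0, 255 } };
    const uint8_t indices[4] = { 0, 1, 1, 0 };  /* 2x2 checkerboard */
    uint8_t *tex = palette_to_rgba(indices, palette, 2, 2);
    /* tex now holds 16 bytes of RGBA, ready for a glTexImage2D upload */
    free(tex);
    return 0;
}
```

And the palette case is the cheap one; JPEG additionally needs an IDCT and YCbCr-to-RGB conversion, which is exactly where tight HW/SW integration (getting a decoder's output into the texture pipeline without extra copies) pays off.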

This does seem to imply PowerVR's IP is easier to integrate in this way, but of course that's hard to say. It might just as well be Qualcomm's fault for not doing it properly. Heh.

---

Anyhow, I'm not sure why I should listen to the ex-COO of a company with a baseband that will be 2-3x larger than some of the competition for LTE, with CPU IP that seems inferior to ARM's Cortex-A9 despite a massive investment, with connectivity IP that simply cannot compete, audio/video IP that has consistently failed to deliver in the mid-range, no clear strategy for aggressive digitization of CMOS RF, and the list goes on. And to make matters worse, good luck finding any other industry player who actually likes them.

Heed my words: Qualcomm is a disaster in the making. They currently have very high market share in a variety of markets, but very few things are going their way at both a technical and a strategic level. That market share is not sustainable, and the market won't grow quickly enough to compensate. I'm not saying they'll have massive financial troubles or anything like that, but losing, say, a third of your market share doesn't tend to come without consequences. The real question, therefore, is how much royalties they'll manage to get out of LTE, and I won't try to speculate on that one.
 
They could always try to quash competition through buy-outs or price wars.
Qualcomm's stock has been on a roll for the last year or so, with a particularly big jump just a few weeks ago.
They've survived the Broadcom imbroglio relatively unscathed too, besides the whole Nokia strong-arming.

But yeah, all indications point to it being deflated due to declining revenue from licensing 3.5G/4G/LTE tech.
I had the chance to try out one of their CPUs (in the HTC Touch Diamond, no less), and it felt sluggish, despite being over 500MHz, having 192MB of RAM, etc. Battery life wasn't good either, suggesting poor power management from the chipset.
 
They could always try to quash competition through buy-outs or price wars.
Not possible (anymore); the market is too big.
Qualcomm's stock has been on a roll for the last year or so, with a particularly big jump just a few weeks ago.
They've survived the Broadcom imbroglio relatively unscathed too, besides the whole Nokia strong-arming.
Yup, they're definitely doing a good job in the present both financially and legally. My point is that technically and strategically, their positioning will likely weaken substantially in the near future.

But yeah, all indications point to it being deflated due to declining revenue from licensing 3.5G/4G/LTE tech.
Most likely, yes - although at least they'll get revenue from Nokia now. I didn't even consider the licensing equation because it's so unpredictable regarding LTE; my negativity was solely based on non-licensing factors, but as you point out I'm skeptical they'll be able to maintain their licensing revenue in a few years...
I had the chance to try out one of their CPUs (in the HTC Touch Diamond, no less), and it felt sluggish, despite being over 500MHz, having 192MB of RAM, etc. Battery life wasn't good either, suggesting poor power management from the chipset.
That's a 528MHz ARM11, not their own CPU IP which I was thinking of (used in Snapdragon & the MSM7850). I think the fact it's sluggish is most likely related to non-CPU HW factors in the SoC, as well as driver factors and especially SW. Anyhow, 528MHz is a respectable clock speed for an ARM11 on 65nm; not quite as exciting as the 800MHz+ Cortex-A9s we'll get next year in 40nm SoCs though :) (which should be ~3x as fast overall, therefore)
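Back-of-the-envelope, here's where that ~3x comes from. A sketch only: the 2.5 and 2.0 DMIPS/MHz figures for the A9 and Scorpion come up later in this thread, but the ~1.25 DMIPS/MHz for the ARM11 and the 1GHz Scorpion clock are my assumptions, and Dhrystone is a crude metric to begin with:

```c
#include <stdio.h>

/* Crude DMIPS comparison: per-clock figure x clock speed. 2.5 (A9)
 * and 2.0 (Scorpion) DMIPS/MHz are as discussed in this thread;
 * ~1.25 DMIPS/MHz for the ARM11 and the 1GHz Scorpion clock are
 * assumptions for illustration. */
int main(void)
{
    const double arm11    = 1.25 * 528;   /* ARM11 @ 528MHz, as in the Diamond */
    const double scorpion = 2.0  * 1000;  /* Scorpion @ 1GHz (assumed)         */
    const double a9       = 2.5  * 800;   /* Cortex-A9 @ 800MHz                */

    printf("ARM11    : ~%4.0f DMIPS\n", arm11);     /*  660 */
    printf("Scorpion : ~%4.0f DMIPS\n", scorpion);  /* 2000 */
    printf("Cortex-A9: ~%4.0f DMIPS (~%.1fx the ARM11)\n",
           a9, a9 / arm11);                         /* ~3.0x */
    return 0;
}
```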
 
But yeah, all indications point to it being deflated due to declining revenue from licensing 3.5G/4G/LTE tech.
I had the chance to try out one of their CPUs (in the HTC Touch Diamond, no less), and it felt sluggish, despite being over 500MHz, having 192MB of RAM, etc. Battery life wasn't good either, suggesting poor power management from the chipset.

I actually own a Touch Diamond, and it's not slow or sluggish. Not anymore, anyway. The 1.35 and, to a lesser extent, 1.37 ROMs weren't that fast, yes, but the new 1.93 does a good job. Besides that, I doubt the TouchFLO 3D interface is that well optimized yet, not to mention it has to run on top of the default WM6, so you are hardly talking efficiency here. Still, mine works pretty well now, I think. The battery drains quickly indeed, about a day if you use it a lot, but that probably has a lot to do with the VGA screen too, which drains a lot of energy when on.

If I play a video in CorePlayer, let's say a standard 1.5GB or so DVD rip, it works perfectly fine, so I doubt horsepower is much of a problem. Most slowness is probably due to WM6 and a badly optimized Manila.
 
If I play a video in CorePlayer, let's say a standard 1.5GB or so DVD rip, it works perfectly fine, so I doubt horsepower is much of a problem. Most slowness is probably due to WM6 and a badly optimized Manila.

That's possible too.
I own a Nokia smartphone based on Symbian S60 3rd Edition FP1 with a single 369MHz ARM11 CPU from Freescale, and CorePlayer 1.2.5 runs any DivX/XviD content smoothly (ironically, DivX's own native Symbian video player has extremely choppy performance and low image quality, go figure...).
 
Probably not a small part of the reason why Apple is making their own SoCs ever since the iPhone (or possibly the iPod Nano 2G?)
Apple isn't making their own SoCs.

They spec them out to Samsung, and Samsung designs and fabs them.

Things will likely change with the purchase of PAS, but up till now, it's all been Samsung doing the implementation.
 
Apple isn't making their own SoCs.

They spec them out to Samsung, and Samsung designs and fabs them.
If you consider RTL as a spec, sure! :p The way I look at it is they're basically in the exact same position as PortalPlayer for all the SoCs they ever sold to Apple, with the slight catch that instead of third-party IP or engineering teams (which PP used for other things than just ARMs), most of their non-in-house stuff comes from Samsung itself. It's not clear how much RTL Apple did, but honestly PP didn't do that much either...

That Apple has direct involvement in their chip development, presumably pre-PA Semi, was even confirmed by Jen-Hsun Huang in an interview a few months ago. As for PA Semi, obviously it strengthens their RTL team and allows them to bypass Samsung for synthesis/etc., which probably allows them to negotiate pricing more aggressively too, since now they could switch completely to TSMC, for example.
 
Portal did their own design and implementation.

Samsung has a design team especially for Apple, and they certainly do the implementation and fabrication. Apple hasn't been particularly pleased with the quality of the design/implementation, which brought about PAS, which is changing the working dynamic between Samsung and Apple.

Note that these things are stated as facts, not as speculation. They are not positions found by 'triangulating' on interview responses.

Now, I won't doubt that Apple has told Portal and Samsung what features need to be there, and given performance targets, but they weren't involved in the design side (RTL) of things. Portal sold the same part to other companies (SanDisk, for one), so it's just not possible. The first iPod Samsung chips were, AFAIK, off-the-shelf Samsung SoCs rebadged as Apple parts. AFAIK, the Apple team did not have any real RTL capability before PAS. And your supposition about moving to TSMC is possible, but not feasible due to economic reasons.
 
Portal did their own design and implementation.
PortalPlayer made it very clear in their last conference calls that the chip they taped out in late 2006 (aka the GoForce 6100) is the first they ever made via Customer Owned Tooling. Previously, they were not responsible for synthesis, etc... Of course, they were fully responsible for the RTL and firmware.

Samsung has a design team especially for apple, and they certainly do the implementation and fabrication. Apple hasn't been particularly pleased with the quality of the design/implementation, which brought about PAS, which is changing the working dynamic between Samsung and Apple.
That is also my understanding, but with one caveat as I'll point out in a second...

but they weren't involved in the design side (RTL) of things. Portal sold the same part to other companies (SanDisk, for one), so it's just not possible. The first iPod Samsung chips were, AFAIK, off-the-shelf Samsung SoCs rebadged as Apple parts. AFAIK, the Apple team did not have any real RTL capability before PAS. And your supposition about moving to TSMC is possible, but not feasible due to economic reasons.
I think I wasn't clear enough. Apple indeed never sent any RTL code to PortalPlayer. PortalPlayer made the RTL with some performance targets from Apple, and they could generally sell the same parts to other companies. However, before the GoForce 6100, they focused on the RTL and the software; the synthesis steps were entirely, or at least mostly, done by a third party (the foundry itself, perhaps? I can't remember who fabbed their 180nm designs though).

In the case of Apple, the chips that got the design slots in the iPod Nano/Shuffle in 2006 are very, very likely rebadged Samsung parts, as you point out. What I'm saying is that the iPhone/iPod Touch chips had RTL collaboration from Apple already. That might also be the case for the 2007 iPod Video, but there I simply don't know. I don't know how much of the iPhone RTL Apple made, but I would suspect it is primarily the SoC logic that interfaces directly with the operating system, rather than the video/audio subsystems themselves.

Moving forward, Apple will use PowerVR IP for video decode and in-house IP for other subsystems such as the image signal processor. They'll also be able to handle the synthesis in-house, although presumably that'll only happen two generations from now given when they acquired PA Semi.
 
PortalPlayer made it very clear in their last conference calls that the chip they taped out in late 2006 (aka the GoForce 6100) is the first they ever made via Customer Owned Tooling. Previously, they were not responsible for synthesis, etc... Of course, they were fully responsible for the RTL and firmware.

RTL that hasn't been through synthesis is an interesting concept, but not particularly useful for producing a device, as:

1) It is highly unlikely to synthesise against any specific foundry library
2) It is highly unlikely to time in any sensible way

More likely that they delivered "known to be synthesisable" code and just let someone else deal with layout. Even then, the layout process can throw up problems that need to be fixed in RTL, so even that isn't an open loop...

John.
 
Looking at http://www.freescale.com/webapp/sps/site/overview.jsp?nodeId=0121005654 I'd say PortalPlayer moved from "Register-Transfer Level (RTL) Sign-off" to "Customer Owned Tooling (COT)" - in the former case, they deliver "Verified RTL". I wish I knew what that implies exactly in this case though! In Apple's case for the iPhone, I'm saying the solution was seemingly kinda mid-way between "Turnkey Solution" and "Register-Transfer Level (RTL) Sign-off"
 
Looking at http://www.freescale.com/webapp/sps/site/overview.jsp?nodeId=0121005654 I'd say PortalPlayer moved from "Register-Transfer Level (RTL) Sign-off" to "Customer Owned Tooling (COT)" - in the former case, they deliver "Verified RTL". I wish I knew what that implies exactly in this case though! In Apple's case for the iPhone, I'm saying the solution was seemingly kinda mid-way between "Turnkey Solution" and "Register-Transfer Level (RTL) Sign-off"

Verified RTL usually means that your service provider will redo the synthesis after hand-off, but it doesn't mean that the designers don't do synthesis themselves: the service provider (such as LSI Logic, IBM, ...) provides all the libraries and tools in a pre-packaged tool flow. The designers iterate through it and make sure all rule checks and timing violations are clean.

In practice, there is really not much difference between an RTL sign-off flow and a COT flow: COT companies often have a separate back-end department or team, and the hand-off between the front-end and back-end is still relatively rigorous in terms of passing design rule checks. It's just that with a COT process, there's a little more flexibility wrt bending otherwise hard-and-fast rules wrt DFT and design practices, due to better communication between the teams and both teams having the same final goal. (Read: no legal contract between them and far fewer incentives to start a blame-game war when something goes wrong.)

Having worked in both models, I'd say the COT flow is generally easier and more pleasant for the designers.
 
That's a 528MHz ARM11, not their own CPU IP which I was thinking of (used in Snapdragon & the MSM7850). I think the fact it's sluggish is most likely related to non-CPU HW factors in the SoC, as well as driver factors and especially SW. Anyhow, 528MHz is a respectable clock speed for an ARM11 on 65nm; not quite as exciting as the 800MHz+ Cortex-A9s we'll get next year in 40nm SoCs though :) (which should be ~3x as fast overall, therefore)

I'm curious what leads you to the conclusion that Snapdragon and the MSM7850 will be underpowered. The specs for the Snapdragon that I have seen seem like they are at least in line with the competition and the power consumption is supposed to be substantially lower at the same clock speeds. Of course, we need to see some real devices to see the actual performance, but on the surface, both solutions seem promising.
 
I'm curious what leads you to the conclusion that Snapdragon and the MSM7850 will be underpowered. The specs for the Snapdragon that I have seen seem like they are at least in line with the competition
Obviously there are two aspects here: the CPU, and everything else. GPU-wise, both are based on a single-pipeline AMD Mini-Xenos GPU @ 150MHz. This results in *raw* 3D performance that is ~2x lower than the SGX 530-based OMAP3430 (or the dual-pipeline Mini-Xenos in the STn8820) and ~4x lower than the APX 2500.

Video-wise, it has 720p video decode and 12MP imaging, but *seemingly* no 720p video encode, so it's behind all of its major competitors (TI/NV/ST) in that aspect, if it indeed lacks that feature. Snapdragon is also obviously superior integration-wise, but then again it's a 15x15 package versus a 12x12 package for all of its major competitors, so the real-world footprint advantage isn't huge.
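For what it's worth, those raw ratios are just pipelines times clock. A sketch under my assumptions only - the STn8820's clock in particular is assumed, not confirmed, and raw fillrate deliberately ignores architectural efficiency:

```c
#include <stdio.h>

/* Raw pixel fillrate = pipelines x clock. Pipeline counts/clocks as
 * discussed above; the STn8820 clock is assumed, not confirmed, and
 * this ignores efficiency (TBDR, shader throughput, and so on). */
int main(void)
{
    const double snapdragon = 1 * 150e6;  /* 1-pipe Mini-Xenos @ 150MHz */
    const double stn8820    = 2 * 150e6;  /* 2-pipe Mini-Xenos, assumed */

    printf("Snapdragon: %3.0f Mpixels/s\n", snapdragon / 1e6);
    printf("STn8820   : %3.0f Mpixels/s (~%.0fx)\n",
           stn8820 / 1e6, stn8820 / snapdragon);
    return 0;
}
```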

Regarding the CPU, aka Scorpion, which I presume was your main area of concern (but I wasn't sure, thus the above)...
and the power consumption is supposed to be substantially lower at the same clock speeds. Of course, we need to see some real devices to see the actual performance, but on the surface, both solutions seem promising.
The Snapdragon CPU has a clear advantage against both ARM11 and the Cortex-A8 in terms of raw performance and it definitely has better power efficiency than the A8 (who knows versus the ARM11 though). Die area-wise, nobody knows, but presumably it's somewhere between the ARM11 and the A8, so it's *possibly* universally superior to the latter.

However, I was thinking very specifically of the Cortex-A9, not the A8. The former should achieve roughly similar clock speeds compared to Scorpion, yet it uses fewer pipeline stages and sports (fairly basic) OoOE. It should be extremely competitive power-wise and cost-wise (especially when you consider that, frankly, the NEON/VeNum SIMD engines are rather pointless/overkill for today's and even tomorrow's handhelds) and sports 2.5 DMIPS/MHz, versus 2.0 for Scorpion, and that's before you consider the slightly deeper pipeline and other factors which don't affect Dhrystone.

So Qualcomm will likely have one generation of advantage with Scorpion. Given an extra development cost likely in the many tens of millions of dollars, and the fact that they'll be at a disadvantage afterwards, there's no way to financially justify such an investment. Of course, ARM could have underdelivered with the A9, or maybe they still will (wait and see...), so it's not really Qualcomm's fault, but the end result is not that impressive IMO.

It could be argued that it's a really good in-order design, and they could have beaten the competition if they had gone OoOE; however, that'd also have been a lot more risky, and I doubt management would ever have approved it. ARM, on the other hand, already had plenty of solid in-order designs, so there was no real risk in 'risking' the transition since they always had their previous IP to fall back on.
 
Obviously there are two aspects here: the CPU, and everything else. GPU-wise, both are based on a single-pipeline AMD Mini-Xenos GPU @ 150MHz. This results in *raw* 3D performance that is ~2x lower than the SGX 530-based OMAP3430 (or the dual-pipeline Mini-Xenos in the STn8820) and ~4x lower than the APX 2500.

Video-wise, it has 720p video decode and 12MP imaging, but *seemingly* no 720p video encode, so it's behind all of its major competitors (TI/NV/ST) in that aspect, if it indeed lacks that feature. Snapdragon is also obviously superior integration-wise, but then again it's a 15x15 package versus a 12x12 package for all of its major competitors, so the real-world footprint advantage isn't huge.

Thanks for the detailed response. I was really thinking of the CPU, but the graphics comments are interesting as well. IMO, Qualcomm has consistently lagged the competition with respect to graphics performance, and I'm not particularly surprised that isn't going to change with the Snapdragon chips. I'm not sure if it is their implementation or the fact that they seem to be the only company using the old ATI solutions, but this is definitely a spot where they have some work to do.

One interesting thing though, while reading up a bit on the subject I came across this Youtube video with Sanjay Jha talking about Snapdragon. He is promising 1080p encode and decode in 2009.

http://www.youtube.com/watch?v=n2RbU13tUDk

The first 65nm Snapdragon chips clearly only support 720p, so I wonder if this is going to be the 45nm version of the chip.

The Snapdragon CPU has a clear advantage against both ARM11 and the Cortex-A8 in terms of raw performance and it definitely has better power efficiency than the A8 (who knows versus the ARM11 though). Die area-wise, nobody knows, but presumably it's somewhere between the ARM11 and the A8, so it's *possibly* universally superior to the latter.

However, I was thinking very specifically of the Cortex-A9, not the A8. The former should achieve roughly similar clock speeds compared to Scorpion, yet it uses fewer pipeline stages and sports (fairly basic) OoOE. It should be extremely competitive power-wise and cost-wise (especially when you consider that, frankly, the NEON/VeNum SIMD engines are rather pointless/overkill for today's and even tomorrow's handhelds) and sports 2.5 DMIPS/MHz, versus 2.0 for Scorpion, and that's before you consider the slightly deeper pipeline and other factors which don't affect Dhrystone.

So Qualcomm will likely have one generation of advantage with Scorpion. Given an extra development cost likely in the many tens of millions of dollars, and the fact that they'll be at a disadvantage afterwards, there's no way to financially justify such an investment. Of course, ARM could have underdelivered with the A9, or maybe they still will (wait and see...), so it's not really Qualcomm's fault, but the end result is not that impressive IMO.

I hadn't realized that you were using the Cortex-A9 as your baseline.

Hmmm, I am a bit surprised that you don't see more value in having a one-generation advantage over the competition. Assuming that Snapdragon does fulfill its promise of a substantial increase in performance per watt over other Cortex-A8-enabled solutions, it should provide a clear point of differentiation over the industry leader (OMAP), not to mention their most formidable challenger (Intel). The projection from May indicates that A9-enabled devices will be available in 3-5 years... which is a virtual eternity in the handset/portable space.

http://www.wirelessweek.com/Article-Smartphones-Mobile-Internet-Devices.aspx

Bottom line though, I'd still really like to see the specs on some commercial Snapdragon-enabled devices. There can always be a large gap between the marketing material and reality, and that is particularly true when it comes to power consumption.
 
Thanks for the detailed response. I was really thinking of the CPU, but the graphics comments are interesting as well. IMO, Qualcomm has consistently lagged the competition with respect to graphics performance, and I'm not particularly surprised that isn't going to change with the Snapdragon chips. I'm not sure if it is their implementation or the fact that they seem to be the only company using the old ATI solutions, but this is definitely a spot where they have some work to do.
It's definitely their implementation. As I said, my understanding (although I'm far from sure about this) is that the STn8820, based on the same or very similar OpenGL ES 2.0 IP from ATI, is a dual-pipeline design at similar clock speeds.

One interesting thing though, while reading up a bit on the subject I came across this Youtube video with Sanjay Jha talking about Snapdragon. He is promising 1080p encode and decode in 2009.

The first 65nm Snapdragon chips clearly only support 720p, so I wonder if this is going to be the 45nm version of the chip.
I don't know what chip it is. I know that the MSM7850 has a 45nm shrink (just like the MSM7200 was on 90nm and shrunk to 65nm as the MSM7200a); in fact, that was probably their first 45nm chip, and performance didn't go up one iota by most metrics.

My understanding is that 1080p won't come from a shrink; it'll be a new video codec design from ATI (just like Snapdragon's 720p decode comes from ATI IP too, which in turn is based on Tensilica's Xtensa). Hopefully they'll be smart enough to combine that with a dual-pipeline 3D core and build it on TSMC's 40nm process instead of 45nm, but we'll see.

Anyway 1080p decode/encode is nothing extraordinary for chips coming out in 2009 on 45/40nm. They'll have plenty of competition, although I'd be curious to know what formats they support.

Hmmm, I am a bit surprised that you don't see more value in having a one-generation advantage over the competition. Assuming that Snapdragon does fulfill its promise of a substantial increase in performance per watt over other Cortex-A8-enabled solutions, it should provide a clear point of differentiation over the industry leader (OMAP), not to mention their most formidable challenger (Intel). The projection from May indicates that A9-enabled devices will be available in 3-5 years... which is a virtual eternity in the handset/portable space.
Couple of points. First, A9 end-user devices will be available for the 2009 holidays or, if they miss that deadline, by early 2010. Any other date you hear anywhere else is bullshit, period. That's not for phones, which have longer cycle times obviously, but for PNDs/PMPs/MIDs. Of course, it could get delayed, but that has been the ETA of at least one company for some time and I don't think it has moved.

Secondly, Qualcomm's lead partners for Snapdragon are HTC and Samsung. I honestly doubt they would have gotten many fewer design wins with a very-high-clock ARM11 or a very power-optimized Cortex-A8 synthesis. Meanwhile, the R&D certainly cost them a lot of money...

Bottom line though, I'd still really like to see the specs on some commercial Snapdragon enabled devices. There can always be a large gap between the marketing material and reality and that is particularly true when it comes to power consumption.
Power consumption figures for handheld CPUs tend to be given by IP vendors, so they'll rarely consider a truckload of real-world factors. Also, they've usually been based on low-leakage gates, yet more and more handheld designs are instead using mid-leakage gates to lower *active* power while using power & voltage islands to make leakage much less important.
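A toy model of that trade-off, with every constant made up purely for illustration: dynamic power scales as alpha*C*V^2*f and leakage roughly as V*I_leak, so a faster mid-leakage library that hits the target clock at a lower voltage can come out ahead on active power, as long as power gating keeps its higher leakage in check when a block is idle:

```c
#include <stdio.h>

/* Toy power model: P = alpha*C*V^2*f (dynamic) + V*I_leak (static).
 * Every constant below is invented for illustration only. */
int main(void)
{
    const double alpha = 0.15;    /* switching activity factor    */
    const double C     = 1.0e-9;  /* switched capacitance, farads */
    const double f     = 500e6;   /* target clock, Hz             */

    /* Low-leakage library: slower gates, so 1.2V needed for 500MHz. */
    double p_low = alpha * C * 1.2 * 1.2 * f + 1.2 * 0.5e-3;
    /* Mid-leakage library: faster gates make 500MHz at 1.0V, with
     * 10x the leakage - which power islands gate off when idle.    */
    double p_mid = alpha * C * 1.0 * 1.0 * f + 1.0 * 5.0e-3;

    printf("low-leakage : %.1f mW active\n", p_low * 1e3); /* ~108.6 */
    printf("mid-leakage : %.1f mW active\n", p_mid * 1e3); /* ~80.0  */
    return 0;
}
```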

At least everyone tends to make an effort to use comparable numbers, but sadly those numbers often have nothing to do with the most important ones. So the stuff that really matters nearly never gets released. And sadly most of the press, even the deeply technical one, often makes huge mistakes in nearly everything related to handhelds... Given how often companies' FUD actually works, it should surprise nobody that it is so rampant.

BTW, two good PDFs that might come in handy for Snapdragon (aka QSD8250 & QSD8650) and MSM7850 respectively:
http://www.qualcomm.co.uk/qmag/pdf/qmag_03_08.pdf
http://www.3gafrica.org/graphical/W...08/Tom_O'Neill_QCT Roadmap (ACF Workshop).pdf
 
It is definitely Qualcomm's and/or HTC's implementation.

As evidenced by http://www.htcclassaction.org/, MSM7xxx devices were clearly delivered without drivers initially with the HTC Kaiser, and have consistently underperformed in all subsequent releases across implementations by all vendors (ref GLBenchmark), such as the Toshiba G810, LG KS20, iMate Ultimate series, etc.

This, and subsequent analysis of driver code in released products, has led to speculation that Qualcomm has tiered access to its driver IP, leading to the widely varying implementations, as well as performance.

As evidenced by the driver hacking at http://www.htcclassaction.org/, 20%+ increased FPS performance in OpenGL ES can be had just by bypassing the standard routes through the OS's graphics stack, and the result is superior to the "latest" HTC devices even with their increased CPU MHz.

It also seems that Qualcomm has never licensed ARM's VFP IP, so current and future MSM CPUs are already an obsolete dead end, especially compared to current and future Samsung, TI, and Freescale offerings that did license it, with the option of choosing between FPU-less variants.

Going by this analysis, it's easy to see how this Qualcomm wasteland/fiefdom can be easily eclipsed by unified platforms (Symbian/Mach) that don't piecemeal or stunt resources for any developer.
 
New Snapdragon chip: QSD8672

Arun, what do you think about the new Snapdragon chip presented lately? It has full HD encode and decode, a 1.5GHz dual-core processor (surely that means Cortex-A9) on a 45nm process, and support for up to a 1440x900 display. It seems that the only thing missing here is HDMI output. Without it, it can't compete with the Tegra 650 :cry: or even the 600 (at least in output capabilities, because HD playback only makes sense if you can play it on a big HD display and not on a small screen).

It seems to be a really powerful chip with low power consumption. And they say that sampling will start sometime next year.

Do you have any information about this chip? Do you think HDMI could be added as an additional feature? They say it is targeted at devices with 9-12 inch displays, but is there a chance we will see it in a device similar to the HTC X7510?
 