Tegra 3 officially announced; in tablets by August, smartphones by Christmas

Maybe something has changed in the ownership percentages of HTC, or maybe pressure from industry competition is so high that it now outweighs investor pressure from Qualcomm.
 
Google tells me there was this HTC Qilin for China Mobile back in 2009, but it was a WM6.x device, not Android (many of the old Windows Mobile HTC smartphones used OMAP SoCs, BTW).

Well, I recalled a couple of past HTC smartphones with an SGX530, hence my earlier question.

Nonetheless, it's been at least two years since HTC used anything other than a Qualcomm SoC for any of their smartphones.
I doubt HTC intends to abandon Qualcomm; rather the contrary. I'm not aware of HTC's real intentions, but for any manufacturer to be too dependent on a single source doesn't sound healthy to me.

As Lazy8s said, those HTC Edge specs are definitely something to drool over, but I'd still like to see first how it compares to an S4 8960-based HTC smartphone, especially with all the possible applications open.
 
If confirmed, isn't this HTC's first Android smartphone without a Qualcomm SoC?
In addition to OMAP3 phones (mostly for China iirc) as Ailuros said, HTC is now using the ST-Ericsson A9500 with ST-E's TD-SCDMA baseband for China Mobile.

HTC was also definitely working on a Tegra 1 smartphone with an ST-Ericsson baseband back in 2009, but it was canceled to focus exclusively on Qualcomm's Snapdragon. But yes, NVIDIA breaking into HTC with a device that gets launched worldwide would be a pretty big deal, even if HTC is likely to focus more on Krait.
 
Lazy8s said:
After listening more closely to their explanation of how processing will be balanced, I'm far more optimistic about the CPU side of Tegra 3 than I was before, at least from an efficiency standpoint (though not a performance one).

Tegra 3 will still be a let-down, though, due to the underwhelming graphics performance. Tegra 3+ is kind of funny when you think about it in context.
As much as I love everything iPhone, I haven't seen a mainstream application that really requires all the GPU power of an iPhone 4S. Even though Android is notoriously more GPU-demanding (it leans on the GPU to work around its inefficient real-time SW stack and get stutter-free scrolling, something the next version of Android will no doubt address), I don't see how the vast majority of users are going to benefit from all this graphics power.

Other than games (which are already pretty stunning on an iPhone 4), is there really any application that makes use of it? I can't think of anything.

From that POV, more CPU performance seems to be much better.
 
As much as I love everything iPhone, I haven't seen a mainstream application that really requires all the GPU power of an iPhone 4S. Even though Android is notoriously more GPU-demanding (it leans on the GPU to work around its inefficient real-time SW stack and get stutter-free scrolling, something the next version of Android will no doubt address), I don't see how the vast majority of users are going to benefit from all this graphics power.

Other than games (which are already pretty stunning on an iPhone 4), is there really any application that makes use of it? I can't think of anything.

Wait a second; from the pre-release Transformer 2 results that leaked for a while in the GLBenchmark database, Tegra3 was scoring somewhere in the 53 fps league in Egypt 720p, which is roughly 60% of the SGX543MP2 in the iPad2 (and there's a reason why I'm comparing a tablet with another tablet SoC). I wouldn't call that kind of performance meager or small at all. Au contraire, it's roughly the ballpark where I expect, give or take, to see the S4 8960 and/or OMAP4470 land in that specific synthetic benchmark (unless both IMG and Qualcomm pull better drivers in the meantime). The specific benchmark obviously isn't measuring CPU performance.
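A quick sanity check of those figures (nothing more than the arithmetic implied by the numbers quoted above):

```python
# Back-of-the-envelope check of the leaked GLBenchmark Egypt 720p figures.
tegra3_fps = 53.0   # leaked Transformer 2 score quoted above
ratio = 0.60        # "roughly 60% of the SGX543MP2 in the iPad2"

implied_ipad2_fps = tegra3_fps / ratio
print(round(implied_ipad2_fps, 1))  # ~88.3 fps implied for the iPad2's SGX543MP2
```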

Of course we have to wait for final devices to re-appear and get measured, but so far the indications are anything but lackluster.

The question would be whether Apple, under iOS, is using any GPU resources for some of the general-purpose tasks that make sense outside of a CPU; it's definitely possible with FP32 ALUs. That's a point you can't touch with the FP20-only PS ALUs in Tegras.
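As a rough illustration of why low-precision pixel-shader ALUs are a problem for general-purpose work, here's a sketch using NumPy's float16 as a stand-in (Tegra's FP20 is a different format, but it suffers from the same class of problem):

```python
import numpy as np

# float16 has a 10-bit mantissa: above 2048 consecutive integers are no
# longer representable, so indexing, addressing and long accumulations
# break down quickly on ALUs with this little precision.
x = np.float16(2048.0)
print(x + np.float16(1.0))     # prints 2048.0 -- the +1 is lost entirely

acc = np.float16(0.0)
for _ in range(10000):
    acc += np.float16(1.0)     # the running sum stalls once it reaches 2048.0
print(acc)                     # 2048.0, not 10000.0
```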

From that POV, more CPU performance seems to be much better.
I'd personally prefer higher CPU efficiency; if I am to trust metafor's word (and I don't see why I shouldn't), Tegra3 is not going to have it easy against Krait, despite the latter being dual-core. Had T3 truly shipped in August as projected in the past, quite a few things might have been different.
 
As much as I love everything iPhone, I haven't seen a mainstream application that really requires all the GPU power of an iPhone 4S. Even though Android is notoriously more GPU-demanding (it leans on the GPU to work around its inefficient real-time SW stack and get stutter-free scrolling, something the next version of Android will no doubt address), I don't see how the vast majority of users are going to benefit from all this graphics power.

Other than games (which are already pretty stunning on an iPhone 4), is there really any application that makes use of it? I can't think of anything.

From that POV, more CPU performance seems to be much better.
Instagram supposedly uses the GPU to accelerate image filters; I heard someone from their team state they're more interested in the GPU increase in the iPhone 4S than in the CPU.
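For what it's worth, those filters tend to be mostly per-pixel arithmetic, which is exactly the kind of work that maps well onto fragment shaders. A minimal CPU-side sketch of that math (purely illustrative; it says nothing about how Instagram actually implements its filters):

```python
import numpy as np

def sepia(img):
    """Toy per-pixel colour transform on an HxWx3 float image in [0, 1].

    On a phone, this per-pixel matrix multiply would typically run in a
    fragment shader (or a GPU compute framework), one pixel per invocation.
    """
    m = np.array([[0.393, 0.769, 0.189],
                  [0.349, 0.686, 0.168],
                  [0.272, 0.534, 0.131]])
    return np.clip(img @ m.T, 0.0, 1.0)

# Example: filter a random 1920x1080 "photo" held as RGB floats
frame = np.random.rand(1080, 1920, 3)
filtered = sepia(frame)
```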
 
As much as I love everything iPhone, I haven't seen a mainstream application that really requires all the GPU power of an iPhone 4S. Even though Android is notoriously more GPU-demanding (it leans on the GPU to work around its inefficient real-time SW stack and get stutter-free scrolling, something the next version of Android will no doubt address), I don't see how the vast majority of users are going to benefit from all this graphics power.

Other than games (which are already pretty stunning on an iPhone 4), is there really any application that makes use of it? I can't think of anything.

From that POV, more CPU performance seems to be much better.

Quite honestly, I think -- without some sort of new use-case for phones -- that we've hit the saturation point for performance needed. Yes, things like Flash will always drain as much processing power as you have but that's a load of software inefficiency, not a task that is inherently computationally intensive.

My own personal preference from now on would be lower active power.
 
Making OpenCL available to the world of app developers will do its part to increase demand for more mobile GPU performance.

RenderScript could have an impact on GPGPU too, if Google would prioritize it.
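As a minimal sketch of what exposing that to app developers looks like in practice, here is a toy kernel driven through the pyopencl bindings (the 'brighten' kernel and all names here are illustrative only, not any shipping mobile stack):

```python
import numpy as np
import pyopencl as cl

# Toy OpenCL kernel: scale every pixel value and clamp to [0, 1].
src = """
__kernel void brighten(__global const float *in_px,
                       __global float *out_px,
                       const float gain)
{
    int i = get_global_id(0);
    out_px[i] = clamp(in_px[i] * gain, 0.0f, 1.0f);
}
"""

ctx = cl.create_some_context()          # pick any available OpenCL device
queue = cl.CommandQueue(ctx)
prog = cl.Program(ctx, src).build()

pixels = np.random.rand(1024 * 1024).astype(np.float32)  # fake greyscale image
out = np.empty_like(pixels)

mf = cl.mem_flags
in_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=pixels)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

prog.brighten(queue, pixels.shape, None, in_buf, out_buf, np.float32(1.2))
cl.enqueue_copy(queue, out, out_buf)
```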
 
Quite honestly, I think -- without some sort of new use-case for phones -- that we've hit the saturation point for performance needed. Yes, things like Flash will always drain as much processing power as you have but that's a load of software inefficiency, not a task that is inherently computationally intensive.

My own personal preference from now on would be lower active power.

"Saturation point for performance needed"? That's an utopia for portable devices, at least until the wireless communications become so fast and cheap that everything is done through the cloud.

I'd rather have a smartphone that doubles as a personal computer and workstation when connected to a large screen and mouse+keyboard.
 
"Saturation point for performance needed"? That's an utopia for portable devices, at least until the wireless communications become so fast and cheap that everything is done through the cloud.

I'd rather have a smartphone that doubles as a personal computer and workstation when connected to a large screen and mouse+keyboard.

That would be a "new use-case". I really don't think that's in very high demand currently; especially with things such as wireless syncing and mirroring.

After all, what would you prefer? A phone that's high-powered enough to act as a PC but has to be in low-power mode when not plugged in (from a silicon standpoint, you can't always have the *best* of both worlds).

Or simply two machines that run software that syncs them up with each other to be identical?
 
Quite honestly, I think -- without some sort of new use-case for phones -- that we've hit the saturation point for performance needed. Yes, things like Flash will always drain as much processing power as you have but that's a load of software inefficiency, not a task that is inherently computationally intensive.

My own personal preference from now on would be lower active power.
Agreed, although I think my 'good enough' point is probably higher than yours. With a 2GHz 2xA15 (or 2.5GHz 2xKrait) you've got about 3x the average performance of a 1GHz 2xA9 for all workloads. That jump is still going to be a very perceptible one in web browsing even excluding Flash.

metafor said:
That would be a "new use-case". I really don't think that's in very high demand currently; especially with things such as wireless syncing and mirroring.

After all, what would you prefer? A phone that's high-powered enough to act as a PC but has to be in low-power mode when not plugged in (from a silicon standpoint, you can't always have the *best* of both worlds).
Agreed, and there are also severe TDP limitations for any handheld device even when plugged in. Where's the (expensive & thick) heatsink? Where's the active (even if practically silent) fan? And where's the air moving through? If there isn't enough airflow, is the chassis able to sustain sufficiently high temperatures? And do you really want your device to be (literally) burning hot when plugged in? The ability to plug in a screen & keyboard/mouse is very nice, but there's no way you'll be competitive with a 17" notebook.

---

Anyway, once I've finished the article I'm currently working on, I should probably write a (much shorter) follow-up to my Handheld CPU article from January. There are quite a few interesting subjects that I would do well to revisit with all the information now available! :)
 
Agreed, although I think my 'good enough' point is probably higher than yours. With a 2GHz 2xA15 (or 2.5GHz 2xKrait) you've got about 3x the average performance of a 1GHz 2xA9 for all workloads. That jump is still going to be a very perceptible one in web browsing even excluding Flash.

I could easily see 2x 1.5GHz A9 (or 4x 1.5GHz A9 for tasks that are sufficiently parallel) being good enough for just about everything I do on the phone. Yes, there may be a small benefit to moving to 2GHz+ A15/Krait, but truth be told, if it weren't for the nerd-factor of owning a faster processor, I'd be hard-pressed to justify a purchase.

Agreed, and there are also severe TDP limitations for any handheld device even when plugged in. Where's the (expensive & thick) heatsink? Where's the active (even if practically silent) fan? And where's the air moving through? If there isn't enough airflow, is the chassis able to sustain sufficiently high temperatures? And do you really want your device to be (literally) burning hot when plugged in? The ability to plug in a screen & keyboard/mouse is very nice, but there's no way you'll be competitive with a 17" notebook.

You also have to consider the opportunity cost. In order to shove in that high-performance component, you have to sacrifice either area, performance, or power compared to what could've been a very low-power, sufficiently powerful component.

Even with the big.LITTLE approach, you're still wasting die area, especially if you'll be using those high-performance components -- which are likely to take up the most die area -- only a fraction of the time.

I think wireless (or wired) mirroring is by far a more practical and elegant solution.
 
There's a push for bigger screens and higher resolution displays on phones. Not sure how large a market that is, since those kinds of components usually require more GPU power and the overall price of the device is at the high end.

But there must be enough of a market or all these Android manufacturers wouldn't keep pushing the bar higher on screen size and resolution. So that means GPUs and SoCs continue to become more powerful.
 
There's a push for bigger screens and higher resolution displays on phones. Not sure how large a market that is, since those kinds of components usually require more GPU power and the overall price of the device is at the high end.

But there must be enough of a market or all these Android manufacturers wouldn't keep pushing the bar higher on screen size and resolution. So that means GPUs and SoCs continue to become more powerful.

Sure. But that just incrementally raises the point of diminishing returns. Do you honestly ever see 1080p resolutions being desired on phones? At some point, they stop being phones (roughly ~5").

An SGX543MP2 is perfectly capable of driving just about any content on a 1280x720 screen. I would prefer that they focus more on lowering power consumption.
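The pixel-count arithmetic behind that point of diminishing returns:

```python
# Pixels that have to be shaded and composited per frame at each resolution.
hd720  = 1280 * 720    #   921,600 pixels
hd1080 = 1920 * 1080   # 2,073,600 pixels

print(hd1080 / hd720)  # 2.25x the fill/shading work of 720p, for extra detail
                       # that's arguably hard to see on a ~4-5" panel
```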
 
Hey, I would think so too, but on the Ars Technica forum one guy imported the Galaxy Note from Germany for a cool $818.

There must be enough people demanding these kinds of oversized devices. I'm not completely unsympathetic; smartphones are more interesting to me as handheld computers than as phones per se.

Samsung and HTC seem to be doing well producing these high-end phones. If they can no longer cite higher clock speeds or bigger screens, how else would they differentiate from products they shipped 3 months ago, and therefore justify new purchases?
 
Hey, I would think so too, but on the Ars Technica forum one guy imported the Galaxy Note from Germany for a cool $818.

There must be enough people demanding these kinds of oversized devices. I'm not completely unsympathetic; smartphones are more interesting to me as handheld computers than as phones per se.

Samsung and HTC seem to be doing well producing these high-end phones. If they can no longer cite higher clock speeds or bigger screens, how else would they differentiate from products they shipped 3 months ago, and therefore justify new purchases?

"You can use it for a full day now"
 
Then there are those who will put in an oversized battery on these power-hungry devices. Or use them tethered to a power cable for a good part of the time.

The specs chase won't die quietly.
 
"You can use it for a full day now"
But then they'd be sued by Apple for copying their ideas ;) (sorry, couldn't resist). I do feel it's noteworthy that the iPhone 4S has the slowest Cortex-A9 implementation to date despite their Intrinsity acquisition (which it's not clear they've even used yet) - and that comes with the battery life benefit you'd expect. I don't think bigger screens hurt battery life though, since they allow you to fit a correspondingly bigger battery.

I could easily see 2x 1.5GHz A9 (or 4x 1.5GHz A9 for tasks that are sufficiently parallel) being good enough for just about everything I do on the phone. Yes, there may be a small benefit to moving to 2GHz+ A15/Krait, but truth be told, if it weren't for the nerd-factor of owning a faster processor, I'd be hard-pressed to justify a purchase.
I'm mostly thinking of WiFi web browsing here. I think the 'good enough' level is higher than you claim because everyone has a desktop computer and so experiences on a daily basis how much faster it can be. And web pages are still increasing slightly in complexity (even before considering Flash) so a bit of future proofing can't hurt.

But I obviously agree that the initial ~2X improvement over my iPhone 4S that you're describing (0.8GHz->1.5GHz) would be a lot more visible than the further ~2x improvement I'm describing (1.5GHz->2GHz & +50% IPC). This kind of thing always suffers from diminishing returns.
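Spelling out the arithmetic behind those two steps (the ~1.5x per-clock uplift of A15 over A9 is my assumption here, in line with ARM's projections rather than measured silicon):

```python
# Step 1: same Cortex-A9 core, higher clock (e.g. ~0.8GHz in the 4S -> 1.5GHz).
step1 = 1.5 / 0.8                   # ~1.9x from clock scaling alone

# Step 2: 1.5GHz A9 -> 2GHz A15, assuming ~1.5x per-clock (IPC) uplift.
ipc_uplift = 1.5                    # assumption, not a measured figure
step2 = (2.0 / 1.5) * ipc_uplift    # ~2.0x

print(step1, step2, step1 * step2)  # ~1.9x, ~2.0x, ~3.75x end to end
```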

I think it's also very important that 'good enough' depends not just on performance but also pricing. The CPU is a small part of the bill of materials (as long as it doesn't force you to do anything too fancy to dissipate its heat) so it makes sense to overspec it - not just for marketing reasons, but also because that small difference in user experience really is worth those extra few dollars in the high-end. You could make a point that there aren't enough killer features on the horizon to make ultra-high-end smartphones compelling in the 20nm generation (as opposed to upper mid-range smartphones) and that might be true, but it's not a CPU-specific problem.

Active power is always a problem though - I remember that one of the first Snapdragon phones had a user setting for maximum clock speed, so you could make your own trade-off between speed and power consumption. I wouldn't be very surprised if we saw the same thing again in the Cortex-A15 generation, with the default setting not being the maximum.
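On Linux-based phones that kind of user-facing cap ultimately amounts to writing the cpufreq limits; a minimal sketch of the idea (the sysfs paths are the stock cpufreq ones, but root access and the exact layout vary per device):

```python
import glob

def cap_max_freq(khz):
    """Cap the maximum CPU clock via the standard Linux cpufreq sysfs files.

    Needs root; assumes the stock cpufreq sysfs layout, which vendors may
    rename or lock down on shipping devices.
    """
    for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq"):
        with open(path, "w") as f:
            f.write(str(khz))

# e.g. trade peak speed for lower active power by capping all cores at 1.0GHz
cap_max_freq(1000000)
```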
 
But then they'd be sued by Apple for copying their ideas ;) (sorry, couldn't resist). I do feel it's noteworthy that the iPhone 4S has the slowest Cortex-A9 implementation to date despite their Intrinsity acquisition - and that comes with the battery life benefit you'd expect. I don't think bigger screens hurt battery life though, since they allow you to fit a correspondingly bigger battery.


The performance is not really surprising if you've talked with the guys who designed the A5. They went all in on saving battery. Computing performance was good enough from day 1 (for their intended use cases and software), and they didn't try to make the cores any faster if there was a trade-off between faster and more power-efficient.
 
Wait a second; from the pre-release Transformer 2 results that leaked for a while in the GLBenchmark database, Tegra3 was scoring somewhere in the 53 fps league in Egypt 720p, which is roughly 60% of the SGX543MP2 in the iPad2 (and there's a reason why I'm comparing a tablet with another tablet SoC). I wouldn't call that kind of performance meager or small at all.

And on top of that, doesn't the Transformer Prime use a higher-resolution display than the iPad 2? Maintaining relatively high frame rates at a relatively high resolution, all with relatively low power consumption, is very nice. I don't see how one could be disappointed with the GPU performance of Tegra 3 after seeing videos like this: http://www.youtube.com/watch?v=2U2r3yKg0Ng and this: http://www.youtube.com/watch?v=beW44983Rx8
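It does: the Prime's panel is 1280x800 against the iPad 2's 1024x768, so per frame:

```python
prime = 1280 * 800    # 1,024,000 pixels (Transformer Prime)
ipad2 = 1024 * 768    #   786,432 pixels (iPad 2)

print(prime / ipad2)  # ~1.30x -- about 30% more pixels to render per frame
```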
 