"I don't think there exists a single A9 currently."

Maybe nothing on the market yet, but there are a bunch of announced single-A9 products. Off the top of my head, Freescale, ST and Fujitsu all have them.
There are plenty of things that are still CPU bound -- including JavaScript parsing and browser speeds.
That NV isn't badly positioned on the CPU side, both from a timeframe as well as an application/user-experience standpoint, if they come along with a quad A9@2GHz over half a year earlier in shipping devices than their competitors for the 28nm generation. Their problem will most likely be elsewhere. If TI manages to get OMAP5 or ST-Ericsson's A9600 into devices at the same time as Tegra4 (not likely IMHO unless TSMC has some very serious problems with 28LP), then of course NV will be in serious trouble. But more because their SoC will be lackluster elsewhere than on the CPU side.

There are mainstream applications for which the consumer won't notice the difference between a 600MHz single A8 and a dual 1.2GHz A9. What's your point?
How much slower is an iPhone 4 vs. any dual-core A9 powered smart-phone, for example, while browsing?
Depending on the complexity of the website, quite a bit actually. But what's your point? Speed doesn't matter?
So let me try to understand your argument so far:
1. nVidia may be behind in CPU, but CPU speed doesn't matter.
2. Therefore, nVidia isn't badly positioned on the CPU side.
http://www.anandtech.com/show/4484/htc-droid-incredible-2-review/5
http://www.anandtech.com/show/4484/htc-droid-incredible-2-review/6
Not too shabby for an A8@800MHz vs. 2*A9@1GHz.
Where did I ever state that CPU speed doesn't matter? I clearly said over and over again that it's not the ONLY defining factor of a processor within a SoC.
"2. Dual A15 will be significantly faster than quad-A9 on the vast vast vast majority of user applications."

If hypothetically both square out at around 2.0GHz, I don't see the quad A9 falling that much short, if at all.
I think you could add a wee bit more wit if you're trying to twist someone's words.
Did you not notice the difference in Browsermark between all the A8's and A9's?
Again, is it your assertion that "CPU speed doesn't matter"?
So how am I supposed to put these two seemingly disconnected statements together:
1. nVidia is not disadvantaged CPU WISE using a quad-A9 compared to a dual-A15. Sure, there's a time-to-market advantage but your exact statement was:
2. Dual A15 will be significantly faster than quad-A9 on the vast vast vast majority of user applications.
Are you disputing #2? Thinking quad-A9 will be able to match a dual-A15 CPU WISE? Or is your first statement not based on nVidia's CPU WISE strengths?
Backpedaling can make you quite defensive, understandable.
Yes I did. Still nothing "huge" considering the A4 CPU is clocked at merely 800MHz.
I put everything, even time to market, into a bucket and claimed that there won't be a noticeable disadvantage.
I said it more than once that if dual-A15 SoCs with their projected GPUs and whatnot appear on shelves at the exact same time as NV's T4, NV will be in serious trouble. That's not ignoring CPU strength but rather acknowledging that it's not the only thing that matters in a SoC.
There are recent announcements, probably indirectly driven by NV itself, claiming that they'll have the only quad A9 on the market. Not true, and as an immediate reaction newsblurbs from TI appeared claiming that dual A15 will be faster than quad A9 by a lot. TI neglects to mention in that case that OMAP5 will also most likely have a GPU at least twice as strong as Tegra3's, which incidentally can also be used in quite a few cases for general-purpose tasks. If NV stays with FP20 PS ALUs, the latter sounds like a no-no until probably the 20nm generation.
If TI won't work for you as an example, pick ST-Ericsson's Nova A9600 with dual A15's at up to 2.5GHz and a GPU that roughly equals the Xbox 360 in performance.
Look at it compared to the 1GHz A8's, since that's the same software stack (Android). If you don't consider a ~25% speedup "huge" then I don't know what to say.
"I refuse to explain over and over again what I had in the back of my mind."

I noticed you left out your own quote:
"If hypothetically both square out at around 2.0GHz, I don't see the quad A9 falling that much short if at all."
"I'd expect quad A9@2GHz (estimated) in Tegra4 to be honest. T3 is supposed to be up to 5x faster than T2 and T4 up to 10x faster than T2 according to NV's own roadmap. Else T4 = 2x T3. Not a lot IMO."

TI won't be sitting idle. Dual-core 1.8GHz A9's with an SGX544 will be out in between T3 and OMAP5. IMO, this is actually a better solution from a CPU performance perspective than quad-A9 at 1.5GHz. Of course, how well the ULP GeForce compares to the SGX544 will be a factor as well.
"I'm talking all this time about T4@28nm; I'd consider quad A9 in T3 to be 'set' at up to 1.5GHz and not more."

So nVidia hardly has a gigantic market lead over their competitors. If T3 manages to be in products by August, there will be a good 4-6 month gap, sure. But in smartphone land, T3 in smartphones would arrive maybe 1-3 months, if even that, before MSM8960 and OMAP4470.
I'm actually wondering how Apple achieved such a performance increase from iOS 4.1 to 4.3 with a quite humble A8@800MHz. What it tells me is that Apple probably has a finer balance between its hw and sw, not that an A8@1GHz would break even with a dual A9@1GHz at web browsing. Then a whole lot of other factors come in, like caches, bandwidth and whatnot, between different SoCs.
I refuse to explain over and over again what I had in the back of my mind.
I'd expect quad A9@2GHz (estimated) in Tegra4 to be honest. T3 is supposed to be up to 5x faster than T2 and T4 up to 10x faster than T2 according to NV's own roadmap. Else T4 = 2x T3. Not a lot IMO.
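Taking NV's "up to" marketing multipliers at face value, the implied generational jump is just this arithmetic:

```python
# NV roadmap claims relative to Tegra 2 = 1.0 baseline
# (marketing "up to" multipliers, taken at face value).
t2, t3, t4 = 1.0, 5.0, 10.0

ratio = t4 / t3
print(ratio)  # 2.0 -- by NV's own numbers, T4 is only 2x T3
```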
The supposed "12 core" ULP GF in Tegra3 (and if Anand is right that it's still not a unified design) sounds, compared to the "8 core" ULP GF in Tegra2 (IMO 1 Vec4 FP20 PS ALU, 1 Vec4 FP32 ALU, 2 TMUs, 8 z/stencil, 16bit Z precision), like something in the neighbourhood of 2 Vec4 PS ALUs, 1 Vec4 FP32 ALU. T2's ULP GF is clocked up to 333MHz in tablets and 300MHz in smart-phones, as it seems. If the T3 ULP GF should now be clocked in the 400-450MHz region, it doesn't sound like a major increase to me, and I'm still wondering where the 5x overall increase compared to T2 comes from; I recall Arun claiming that they'll justify that one later on.
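Plugging the speculated unit counts and clocks above into a rough MADD-rate calculation (every number here is the post's guess, not a confirmed spec) shows why only a ~2x, not 5x, jump falls out of the programmable ALUs:

```python
# Rough programmable-ALU throughput from the speculated unit counts above.
# All unit counts and clocks are guesses from the post, not confirmed specs.
def gflops(vec4_alus, mhz):
    lanes = vec4_alus * 4          # 4 lanes per Vec4 ALU
    return lanes * 2 * mhz / 1000  # MADD = 2 flops/lane/clock

t2 = gflops(1 + 1, 333)  # T2: 1 Vec4 FP20 PS + 1 Vec4 FP32 ALU @ 333MHz
t3 = gflops(2 + 1, 450)  # T3 guess: 2 Vec4 PS + 1 Vec4 FP32 ALU @ ~450MHz
print(t2, t3, t3 / t2)   # ~5.3 vs ~10.8 GFLOPS -> only ~2x
```

The remaining factor toward any "5x" overall claim would have to come from somewhere other than raw shader throughput.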
The good thing with NV's T2 GPU is that it must have quite well tuned drivers. The GPU block looks more than humble both in terms of unit count and capabilities compared to the competition, yet it's still not a slouch. This might be something their competitors want to take a deeper look at in their own designs.
However, I expect the SGX544 to be somewhere below SGX543MP2 performance (yet not too far away), otherwise TI wouldn't claim a 2.5x GPU performance increase compared to the SGX540@307MHz in the 4430.
If NV's performance increases for T4@28nm are accurate, then a hypothetical quad A9 at an estimated 2.0GHz is the smallest thing to worry about, since OMAP5 goes SGX544MPx.
I'm talking all this time about T4@28nm; I'd consider quad A9 in T3 to be "set" at up to 1.5GHz and not more.
T3's smart-phone variant (AP30?), and I suppose also future SoCs, have the option to be either dual or quad A9. I'd dare to speculate that the majority of smart-phones will opt for the former.
What could be a problem for NV and T4@28nm is TSMC. I don't even recall if it's on LP or HP, but hopefully TSMC won't have the capacity/yield problems of the past. If everything goes according to their plans, even a 4-6 month gap, given NV's aggressive marketing, is definitely not bad.
Back to T3: I think I've read somewhere that NV expects to sell as many T3's in one quarter as T2 has sold up to that point. Hopefully I didn't get that one wrong, but if it's real then I'm of course prepared for some serious knee-slapping.
Exactly how much faith do you put in these "something-x" numbers?
One thing to note is whether that "5x" is applied solely to the graphics or whether nVidia is attempting to apply some overly simple quad CPU = 2x dual CPU formula.
Looking at the Droid 3 benchmarks with the humble SGX540 (though clocked very high), I'd say IMG is improving their drivers at a respectable pace. I wouldn't expect improvements from QCOM until the Adreno 300 series. I'm not sure what the state of ARM's Mali driver stack is.
If we're talking about T4, then quad-A9 will simply not be competitive. Even at 2.0GHz, it will just barely keep up with (and in many cases lag behind) 1.5GHz A15s and Kraits. Nothing except really simplistic synthetic benchmarks actually scales to 4 threads perfectly.
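A quick Amdahl's-law sketch makes the scaling point concrete. The parallel fractions and the assumed 1.5x per-clock advantage for A15 over A9 are illustrative guesses, not measured figures:

```python
# Amdahl's-law sketch of quad-A9 vs dual-A15. The parallel fractions and
# the A15-vs-A9 per-clock uplift (1.5x) are assumptions for illustration.
def throughput(parallel_fraction, cores, ghz, perf_per_clock=1.0):
    amdahl = 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)
    return amdahl * ghz * perf_per_clock

for p in (0.3, 0.7):  # lightly vs heavily threaded workload
    quad_a9 = throughput(p, 4, 2.0)        # 4x A9 @ 2.0GHz
    dual_a15 = throughput(p, 2, 1.5, 1.5)  # 2x A15 @ 1.5GHz, assumed 1.5x/clock
    print(p, round(quad_a9, 2), round(dual_a15, 2))
```

With only 30% of the work parallelizable, the dual A15 comes out ahead despite the quad's clock advantage; the quad only pulls away on heavily threaded code, which is exactly the "doesn't scale to 4 threads" point.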
Did nVidia say there'd be a dual-core variant of T3? It would make sense but that's quite an ambitious design variant for such an aggressive release date. And if so, then T3 for smartphones will likely lag significantly behind dual-1.8 OMAPs and dual Kraits.
I would guess that it largely depends on how successful Ice Cream Sandwich tablets are. Honeycomb was fairly lackluster, and we'll see if ICS can truly make a dent in the tablet market.
IF they manage to have devices on shelves several months before the others, it doesn't sound like such a big problem to me. If I'm right with my gut feeling that there's still a good portion of additional performance lurking in the IMG drivers, especially for MPs, then what should worry NV in something like OMAP5 is the SGX544MPx before anything else.
I'm confused about the process they're using. For T2 they used 40G and went into production in the same quarter as GF100, from what I recall. If they're using anything like 28LP or HPL, then mass production could start earlier than Q1 '12; if it's HP though, they might be forced to wait until TSMC's capacities and yields improve.
IIRC Arun had posted in one of the threads that it was on 28LPG. I'm not even sure if that's a real process, but I'd imagine they'd be on a low-power rather than a high-performance process (along with all the other SoCs).
I don't know about your timeframe for mass production though. It's currently slated to sample in December 2011. Even if that happens, the earliest they can hope for mass production is late Q1 2012, probably only Q2.
This one sounds like a likelier candidate: http://semiaccurate.com/2011/07/19/southern-islands-kepler-and-apples-a6-process-puzzle-outed/
If Charlie is right and AMD manages to manufacture something as complex as SI on HPL (or whatever it's called) in Q4 '11, I don't see why a SoC would have a problem around that timeframe or slightly later. The presupposition of course being that the SoC was laid out from the beginning for the specific process.
On process: Tegra 2 is 40LPG and Tegra 3 is 28LPG (aka 28LPT). I know that with 100% confidence from multiple sources.