Samsung Orion SoC - dual-core A9 + "5 times the 3D graphics performance"

Honestly Orion is not very impressive, as the quad-core Tegra3 on 28nm (and likely its Qualcomm equivalent) will sample one quarter later at most. I can see Samsung very successfully using its own APs in many of its own phones along with a slim baseband from Infineon/Icera/Qualcomm, but I can't see them being very successful with third parties.

Do not confuse want to sample with able to sample, and certainly don't confuse it with able to produce. Nvidia after all does use TSMC. So add about a year or so to all schedules.
 
Do not confuse want to sample with able to sample, and certainly don't confuse it with able to produce. Nvidia after all does use TSMC. So add about a year or so to all schedules.
I certainly agree with you for long-term roadmaps, but this isn't a long-term roadmap. The process has already been delayed by nearly two quarters and the chip itself slipped a bit, but it has now taped out. Unless there's a very bad problem, they'll probably just need one respin before sampling to customers, and if they aren't lucky, an extra respin before production. In terms of process yields, mass production isn't going to start before late Q1 2011 at the earliest, and it's a stretch to assume TSMC 28LP yields are going to be catastrophic in that timeframe.
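
Just to make that timeline concrete, here's a rough back-of-the-envelope sketch; the tape-out quarter and the one-quarter-per-respin pacing are my own assumptions for illustration, not anything from a roadmap:

[CODE]
# Hypothetical schedule arithmetic for the argument above.
# Assumptions (illustrative only): tape-out in Q3 2010, roughly one
# quarter per respin, one respin before customer sampling and possibly
# one more before volume production.

def add_quarters(year, quarter, n):
    """Advance a (year, quarter) pair by n quarters."""
    total = year * 4 + (quarter - 1) + n
    return total // 4, total % 4 + 1

tapeout = (2010, 3)
sampling = add_quarters(*tapeout, 1)              # one respin, then samples
production_lucky = add_quarters(*tapeout, 2)      # no further respin needed
production_unlucky = add_quarters(*tapeout, 3)    # one extra respin first

print("Customer sampling:       Q%d %d" % (sampling[1], sampling[0]))
print("Production (lucky):      Q%d %d" % (production_lucky[1], production_lucky[0]))
print("Production (extra spin): Q%d %d" % (production_unlucky[1], production_unlucky[0]))
[/CODE]

With a Q3 2010 tape-out that lands production somewhere in the first half of 2011, which is why the "add a year to everything" rule feels too pessimistic here.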

There are plenty of things you can say against TSMC's 28nm process, such as design rule/DFM changes landing surprisingly close to the process going live, and the many initial issues on 40nm. There's plenty you can say against NVIDIA's execution, both with Tegra and in general. In all cases it's not very pretty. But there's a fine line between warranted skepticism and over-the-top cynicism.
 
Given that Intel almost certainly never licensed the SGX543MP and waited for the SGX544MP, I don't see the problem with assuming Samsung was one of the secret licensees. But yes, it could be Mali-based too.

In IMG's annual report for the year ending April 2010, they said: "...including five partners that are working on designs using the very latest multi-processing (MP) core variants of the SGX543MP and SGX544MP family."

So there are five licensees for SGX MP, period; some licensees may of course have licensed both the 543 and 544, which I think is the case.

Renesas and TI are known; the other ones are *thought* to be Sony/Apple/Intel. So if Samsung has an IMG multi-core licence, then at least one of those three doesn't.

I still think it's pretty clear based on the 'first samples' date that it's actually Mali.
I'll just throw into the mix that IMG said in a presentation in April '10 that "Mobile Multicore solutions will be here within a year"

http://people.csail.mit.edu/kapu/EG_09_MGW/IMG_MP_GPGPU.pdf

However, it does look more like Mali, which is going to be one hell of a disappointment when someone does some benchmark/power tests.
 
OT: not much reason to reply to you Arun since you and your lovebird have had a successful rendezvous LOL :LOL:

Kudos to aaronspink about the sampling comment; it was in the back of my mind but I tend to forget a tad lately :???:
 
Is that a guess based on paper specs, or first-hand experience you can share?

I don't know what he's basing his assumption on, but if we're talking about something like the current multi-core Mali, then Samsung obviously doesn't mean a 5x increase in terms of triangle rates. More like close to nothing, unless they clock that thing at 1GHz LOL :p
 
Is that a guess based on paper specs, or first-hand experience you can share?

I'm basing it on ARM's MARKETING figures for the Mali400MP (their highest-performing graphics solution), which for a 4-core configuration @400MHz will do 44M polys/s and a 1.6Gpixels/s fill rate (again, those are the marketing department's figures).

I'll be surprised if it outperforms the SGX540@200MHz that Samsung currently has AT ALL in real-world terms, never mind by 5x.
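
Just to put those paper numbers side by side (and this is purely paper-spec speculation: the ~20M Tris/s figure from Samsung's own SGX540 documentation and the 2-TMU/200MHz assumption come up later in the thread):

[CODE]
# Ratio check of the peak *marketing* figures only, not measurements.
# Assumed numbers:
#   Mali400MP, 4 cores @ 400MHz: 44M polys/s, 4 TMUs -> 1.6 Gpixels/s
#   SGX540 @ 200MHz: ~20M Tris/s (Samsung docs), 2 TMUs -> 0.4 Gpixels/s

mali_tris = 44e6
mali_fill = 4 * 400e6          # 1.6 Gpixels/s
sgx_tris = 20e6
sgx_fill = 2 * 200e6           # 0.4 Gpixels/s

print("Geometry ratio:  %.1fx" % (mali_tris / sgx_tris))   # ~2.2x
print("Fill-rate ratio: %.1fx" % (mali_fill / sgx_fill))   # 4.0x
[/CODE]

Neither paper metric gets anywhere near 5x, which is the point.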

Clearly, when not running intensive stuff the multi-core solution can switch off 2 or 3 cores, extending battery life in those situations. What the 4-core part @400MHz draws in power when fully active, compared to the SGX540@200MHz, might also be revealing.


Of course, "5x" graphics improvement claims seem to be in vogue at the moment: TI started it with OMAP4 vs OMAP3, then Nvidia with Tegra, and now Samsung.
 
Intel claims up to 4x the graphics performance in Medfield over Moorestown, doesn't it?
 
I wonder whether Orion uses a custom A9 design flow, either developed by Samsung in-house or with Intrinsity/Apple as was the case for Hummingbird (assuming Apple is still licensing that tech), or whether Orion will stick to a stock ARM Cortex A9 design, at least for this first iteration. If Samsung no longer has access to Intrinsity technology, I wonder if there'll be hard feelings when the manufacturing team is churning out SoCs for a competitor that are possibly faster or more efficient than what their own design teams can come up with in-house.
 
I wonder whether Orion uses a custom A9 design flow, either developed by Samsung in-house or with Intrinsity/Apple as was the case for Hummingbird (assuming Apple is still licensing that tech), or whether Orion will stick to a stock ARM Cortex A9 design, at least for this first iteration. If Samsung no longer has access to Intrinsity technology, I wonder if there'll be hard feelings when the manufacturing team is churning out SoCs for a competitor that are possibly faster or more efficient than what their own design teams can come up with in-house.

Seeing as it's 1 GHz at 45nm, I would assume it's a stock Cortex A9. That seems to be the standard 45LP A9 frequency. There'd be no point to using dynamic logic if you're not going to clock it higher.
 
This article claims Mali: http://www.eetimes.com/electronics-news/4207498/Samsung-s-Orion-is-an-ARM-graphics-win--says-analyst

With Samsung's claim of 90 million polygons/s for the S5PC110, it wouldn't come as much of a surprise if they're exaggerating or maybe outright making up graphics performance figures. Of course, a claim of 5x that would now be compounding the two claims...

Samsung's own documentation recently claims only 20M Tris/s for the SGX540, where I'm assuming it's clocked at 200MHz.

Now ARM on its homepage claims 30M Tris/s for Mali400 at 275MHz. But that's the maximum geometry rate each fragment processor could theoretically handle; the maximum triangle rate for the vertex processor (of which there is only one for 4 fragment processors on a Mali400 4MP) is 18M Tris/s (at least according to arjan de lumens' claim here in the fora), which I'm assuming is for 275MHz. Being generous and speculating that they might clock it at 400MHz gives a triangle rate of 26M Tris/s for the entire enchilada.

I'd also expect 4 TMUs (which, by the way, can handle textures up to 4096*4096), which at 400MHz gives a fill rate of 1.6GPixels/s. Now it's up to anyone's imagination (no pun intended LOL) to figure out how something with only a small percentage higher geometry rate and 4x the theoretical raw fill rate can reach 5x the graphics performance. As I said further above, if they were to clock that thing at 1GHz or beyond, then probably yes.
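
To make that clock scaling explicit, here's the same arithmetic as a small sketch; the 18M Tris/s @275MHz vertex-processor figure and the 20M Tris/s SGX540 figure are the ones quoted above, so treat this as paper-spec speculation rather than anything measured:

[CODE]
# Scale the Mali400's single vertex processor from the quoted
# 18M Tris/s @ 275MHz up to a speculative 400MHz, then compare against
# the 20M Tris/s Samsung quotes for SGX540 (assumed to be at 200MHz).

gp_tris_275mhz = 18e6
gp_tris_400mhz = gp_tris_275mhz * 400 / 275    # ~26.2M Tris/s, whole chip
sgx540_tris = 20e6

print("Mali400 MP4 @400MHz geometry: %.1fM Tris/s" % (gp_tris_400mhz / 1e6))
print("Advantage over SGX540:        %.2fx" % (gp_tris_400mhz / sgx540_tris))
[/CODE]

That's roughly 1.3x in geometry even with a generous 400MHz clock, so the claimed 5x would have to come from somewhere other than triangle rate.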

As for triangle rates in general it's the same old typical horseshit various IHVs claim from time to time. I'm sure I could find various links where Tegra2 is supposed to reach 90M Tris too; shame that reality would show you that it's more like 70M vertices/s at 240MHz :devilish:

In the case of IMG, their current white papers claim from 7M to 140M Tris/s for the SGX520 through SGX543MP, all at 200MHz and at less than 50% shader load.
 
A four-core Mali400MP at 400 MHz would easily see performance gains over an SGX540 at ~200 MHz, not that Orion would use a Mali setup like that.
 
A four-core Mali400MP at 400 MHz would easily see performance gains over an SGX540 at ~200 MHz, not that Orion would use a Mali setup like that.

No one said it wouldn't; I just find the claimed 5x graphics performance increase rather ridiculous.
 
Yeah; the 5x claim likely won't be attainable in anything resembling a real-world test.

The issue was raised a little earlier in this thread of whether a quad-Mali400MP @400 MHz would even outperform Galaxy S's 540 @200 MHz at all, so my comment was just addressing that.
 
In terms of raw texturing throughput, you'd need a quad-Mali400MP at an incredible 500MHz to achieve a 5x advantage over a 200MHz SGX540, since they are 1-TMU-per-core and 2-TMU designs respectively. I actually think a quad-core Mali is the most likely option, but I can't seriously believe it will be clocked above 300MHz. And that's just for raw texturing, which is not a very good metric - I don't see how any other *useful* metric could be much more favorable to Mali. Assuming the drivers are good enough, a 2x advantage seems realistic (and before any PowerVR enthusiasts complain that's unlikely, keep in mind it takes substantially more die area than SGX540).
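
For completeness, a minimal sketch of where that 500MHz figure comes from, assuming 1 texel per TMU per clock on both designs:

[CODE]
# What clock would a 4-TMU Mali400MP need to hit 5x the raw texel
# throughput of a 2-TMU SGX540 at 200MHz? (1 texel/TMU/clock assumed.)

sgx_texel_rate = 2 * 200e6            # 0.4 GTexels/s
target_rate = 5 * sgx_texel_rate      # 2.0 GTexels/s
mali_tmus = 4
required_clock_hz = target_rate / mali_tmus

print("Required Mali400MP4 clock: %.0f MHz" % (required_clock_hz / 1e6))  # 500 MHz
[/CODE]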
 
Not unrealistic at all; I'd even shoot for a tad higher than 2x under specific conditions, but that's beside the point. And why not 400MHz under 45nm? ARM claims 395MHz under 65HP.
 
Given that Intel almost certainly never licensed the SGX543MP and waited for the SGX544MP, I don't see the problem with assuming Samsung was one of the secret licensees. But yes, it could be Mali-based too.

Honestly Orion is not very impressive, as the quad-core Tegra3 on 28nm (and likely its Qualcomm equivalent) will sample one quarter later at most. I can see Samsung very successfully using its own APs in many of its own phones along with a slim baseband from Infineon/Icera/Qualcomm, but I can't see them being very successful with third parties.

I thought Tegra 3 was a slightly reworked Tegra 2 and still built on 40nm? According to previous roadmaps it was slated for introduction in Q3 2011. Looking at how 40nm turned out, I wasn't expecting 28nm chips in smartphones until 2012 at the earliest. Samsung's 32nm process should be ready by Q3/Q4 2011. But if the next Tegra is 28nm, when can we expect to see it ship in actual products?

Edit: Btw which fab do TI and Qualcomm use? TSMC itself?
 