Samsung Orion SoC - dual-core A9 + "5 times the 3D graphics performance"

I thought Tegra 3 was a slightly reworked Tegra 2 and still built on 40nm? According to previous roadmaps it was slated for introduction in Q3 2011. Looking at how 40nm turned out I wasn't expecting 28nm chips in smartphones until 2012 at the earliest. Samsung's 32nm process should be ready by Q3/Q4 2011. But if the next Tegra is 28nm, when can we expect to see it ship in actual products?

Edit: Btw which fab do TI and Qualcomm use? TSMC itself?

From the looks of it, Tegra3 will possibly be a quad-core ARM CPU paired with a USC GPU (something like a GF9 grandchild). On 40nm the SoC would be far too big.

NV's plans aside, it could be that Qualcomm turns out to be the smartest of the bunch so far, since they AFAIK have a contract with GloFo for 28nm. I can't know how the 28nm story will pan out for TSMC or what they're promising, but if the 40G problems reappear at 28nm too, then I'm afraid far too many might consider an alternative before even striking a deal with TSMC.
 
The highest clock speeds ARM mentions for Mali are probably a lot like IMG's 200MHz reference clock during the MBX generation.

It likely just means they've tested and verified those speeds and use them as a reference for the level of performance these cores could deliver at the upper end of the markets they're targeted at, such as embedded, in-car navigation, and consumer electronics.

Mali's not likely to be clocked so much higher than other mobile GPUs, and Samsung's not likely to shift their SoC budget for GPU power consumption that much.
 
Also, TSMC isn't so much plagued by problems as that they rarely make a realistic forecast of how long the transition from sampling and pilot stages to actual volume production takes. It's as if they always forecast the most optimistic possibility.

That they then fail to meet their initial projections is never surprising, and the semis who actually based their roadmaps on that misguidance have only themselves to blame.
 
So will switching from PowerVR GPUs to ARM GPUs cause compatibility issues with existing Galaxy S games? I remember many of Gameloft's Android games had issues on Qualcomm Snapdragon phones with the Adreno 200 compared to TI or Samsung SoC phones with the SGX: possibly partly because the Adreno is slower, partly because the games were originally developed for the PowerVR architecture on iOS, and/or partly because they rely on PowerVR OpenGL ES extensions.
 
I don't see why there would be any compatibility issues in the first place. Adreno, SGX and Mali are all tile-based, and while each of them has its obvious differences, I don't see why any developer would have been so naive as to tailor his code to benefit the one pure deferred renderer of the bunch.

It's likelier that Adreno's drivers simply were sub-optimal, since I can see them lately doing a LOT better in synthetic benchmarks than they used to in the past.

As for Orion containing a Mali-400, no big surprise there. If it contained a T604, I'd readily believe the 5x graphics increase ;)
 
The compatibility question has made me wonder...

Is there any legal barrier to one company implementing another company's OGL extensions? I.e., AMD implementing something with NV in its name. I vaguely recall some companies doing this. If it's okay, maybe we'll see ARM implement some IMG extensions. There aren't an awful lot of them, and probably fewer still that are widely used on phones.
 
There are AFAIK no legal barriers tied to the OpenGL extension mechanism per se (e.g. Nvidia's 260.19 OpenGL driver implements at least three ATI extensions); that said, features exposed through extensions may be covered by underlying patent claims, in which case said patent claims are a legal barrier. This is, for whatever reason, a particularly big problem with texture compression.
 
Thanks, arjan.

Here are the extensions IMG currently has for OpenGL ES:

http://www.khronos.org/registry/gles/extensions/IMG/IMG_read_format.txt
http://www.khronos.org/registry/gles/extensions/IMG/IMG_texture_compression_pvrtc.txt
http://www.khronos.org/registry/gles/extensions/IMG/IMG_user_clip_plane.txt
http://www.khronos.org/registry/gles/extensions/IMG/IMG_texture_env_enhanced_fixed_function.txt
http://www.khronos.org/registry/gles/extensions/IMG/IMG_program_binary.txt
http://www.khronos.org/registry/gles/extensions/IMG/IMG_shader_binary.txt
http://www.khronos.org/registry/gles/extensions/IMG/IMG_multisampled_render_to_texture.txt

Nothing major, mostly stuff that's superseded by OpenGL ES 1.1 or not supported in/made irrelevant by OpenGL ES 2.0. The shader binary stuff obviously won't cross over, but without an offline compiler or assembler I don't see any games using that.
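For the curious, here's a minimal sketch of the OES_get_program_binary mechanism that the IMG binary extensions plug into, and of why those blobs can't cross over (assumes a current GLES2 context and the entry point resolved via eglGetProcAddress; error handling omitted):

```c
#include <stdlib.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

/* OES_get_program_binary entry point, resolved at runtime, e.g.:
   get_program_binary = (GetProgramBinaryOESFn)
       eglGetProcAddress("glGetProgramBinaryOES"); */
typedef void (GL_APIENTRY *GetProgramBinaryOESFn)(GLuint program, GLsizei bufSize,
                                                  GLsizei *length, GLenum *binaryFormat,
                                                  void *binary);
static GetProgramBinaryOESFn get_program_binary;

/* Dump a linked program to a blob. The binaryFormat token that comes back
   is vendor-specific (the IMG extensions define SGX formats), which is
   exactly why a blob cached on an SGX won't load on a Mali. */
static void *dump_program(GLuint prog, GLsizei *len, GLenum *fmt)
{
    GLint size = 0;
    glGetProgramiv(prog, GL_PROGRAM_BINARY_LENGTH_OES, &size);
    void *blob = malloc((size_t)size);
    get_program_binary(prog, size, len, fmt, blob);
    return blob; /* cache to disk; reload later with glProgramBinaryOES() */
}
```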

The biggest possible point of contention is PVRTC - I imagine games will prefer to just ship multiple compressed texture formats for this one; it's not worth locking out all the non-IMG cards over it. Then there's the multisampled render to texture, which could be useful for Mali since it also does multisampling on tile, but I'd be surprised if much software is using that extension right now.
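For what it's worth, a minimal sketch of the runtime format selection this implies; the extension strings are the real registry names, but the fallback order is just an assumption (C, assumes a current GLES2 context):

```c
#include <stdio.h>
#include <string.h>
#include <GLES2/gl2.h>

/* Crude substring check against the GL_EXTENSIONS string.
   (A robust version would match whole space-delimited tokens.) */
static int has_ext(const char *name)
{
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    return exts != NULL && strstr(exts, name) != NULL;
}

int main(void)
{
    if (has_ext("GL_IMG_texture_compression_pvrtc"))
        puts("load the PVRTC texture set (SGX parts)");
    else if (has_ext("GL_OES_compressed_ETC1_RGB8_texture"))
        puts("fall back to ETC1 textures (Mali/Adreno and most others; RGB only)");
    else
        puts("fall back to uncompressed textures");
    return 0;
}
```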
 
The current-gen SoC has 2 derivatives, the S5PC110 and the S5PV210. The PC part is physically smaller and is for smartphones (Galaxy phones etc.); the PV part is larger, has dual-channel memory support, and was supposed to be for tablets. However, my understanding is that the Galaxy Tab is using the PC part as well.

The Orion development is following the same path: S5PC210 and S5PV310. The one the recent video showed was the PV part.
 
http://www.nocutnews.co.kr/show.asp?idx=1688473
http://www.careace.net/2011/01/19/galaxy-s2-device-codenamed-seine-teased-mwc-announcement/


The Galaxy S2 will be officially announced on the 13th of February and will sport an Orion SoC.

- Orion (Dual A9 1GHz + Mali 400)
- 1GB RAM
- 4.3" SAMOLED+ screen
- 1080p video recording
- 8MP camera
- Samsung Cloud Services (whatever that may be)
- Proposed goal of selling 10 million units before the end of 2011.
- thinner than 9mm (looks like a dick-size competition with Apple)

With a formal announcement in mid-February and that sales goal, the device will probably be available for sale by the beginning of H2 2011 at the latest.
This pretty much contradicts some assumptions I've seen around here that Orion would only become available in 2012.


With A5 using the SGX543MP2, Orion using Mali 400 and 3rd gen Snapdragons using Adreno 220, I guess the TI OMAP4 will be the graphically-weakest "high-end oriented" SoC available during 2011 (trading blows with Tegra2, maybe), using last year's SGX540 (unless they clock the GPU to astonishingly high levels).
 
I'd say the SGX540 in the OMAP4430 is clocked somewhere in the 300MHz ballpark, with the 4440 at ~400MHz.

What exactly makes you think that a GPU up to twice as fast as the GalaxyS's (the OMAP4440's) will be the weakest of all the others you mention? If you're judging from current GL_Benchmark2.0 scores, I'd suggest a bit of patience there. Tegra2, for instance, is only 4fps (10%) ahead of the GalaxyS tab in Q3a at the same resolution (1024*600), with the T2 GPU clocked at 240MHz and the SGX540 presumably at 200MHz. I'd say Q3a is more fill-rate bound than anything else, and with IMG showing on their own website a demo of an SGX535@400MHz reaching ~60fps in 1080p with the same amount of TMUs as an SGX540, I'd say you're jumping to very premature conclusions.

The 4MP Mali400 in Orion might very well be clocked at 400MHz too, but I'd be very surprised to see a multitude of TMUs on that one, and most importantly it has only 1 VS unit for the entire MP, because only fragment cores scale on that design. The GalaxyS SGX540@200MHz is rated by Samsung itself at 20M (really achievable) Tris/s; an SGX540@400MHz would be twice that.

The last time I read a tidbit at Anand about Adreno220, Qualcomm seemed to claim a ~530MPixels/s fill-rate for it. Without overdraw, at 400MHz you get 800MPixels/s on a 540; just as much as you'd get on an MP2@200MHz.
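To spell out the arithmetic, a minimal sketch in C (peak theoretical numbers only: one texel per TMU per clock and linear scaling with frequency, ignoring bandwidth and overdraw):

```c
#include <stdio.h>

/* Peak fill rate in MPixels/s: TMUs * clock in MHz, one texel/TMU/clock. */
static double fill_mpix(int tmus, double mhz) { return tmus * mhz; }

int main(void)
{
    printf("SGX540 (2 TMUs) @ 400MHz:      %.0f MPixels/s\n", fill_mpix(2, 400));     /* 800 */
    printf("SGX543MP2 (2x2 TMUs) @ 200MHz: %.0f MPixels/s\n", fill_mpix(2 * 2, 200)); /* 800 */
    /* Triangle rate scales the same way with clock:
       20M Tris/s @ 200MHz -> 40M Tris/s @ 400MHz on the SGX540. */
    return 0;
}
```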

No wonder Rys is asking about frequencies LOL.

***edit: by the way, I think the 1.2GHz Hummingbird also promises 25% faster graphics. That sounds like an SGX540@250MHz in the Samsung Infuse 4G.
 
What clock would you call astonishing?

Judging by today's standards, I'd say ~500MHz for the SoC's GPU would be quite high, and that would place the SGX540 on par with a ~250MHz SGX543MP2.

I'd say the SGX540 in the OMAP4430 is clocked somewhere in the 300MHz ballpark, with the 4440 at ~400MHz.

What exactly makes you think that a GPU up to twice as fast as the GalaxyS's (the OMAP4440's) will be the weakest of all the others you mention? If you're judging from current GL_Benchmark2.0 scores, I'd suggest a bit of patience there.

You'll probably know more than me, but I've never heard any GPU clock numbers for OMAP4. OMAP34xx is 110MHz and OMAP36xx is 200MHz. Since the SGX540 seems like pretty much an SGX530*2, it didn't cross my mind that TI would double the GPU clocks again, achieving 4x the previous generation's performance.

What makes me guess these numbers is exactly that: the benchmarks that have been used. GLBenchmark 2.0, Neocore (Adreno-optimized, I know, but it seems to scale equally well with other GPUs) and Quake 3.
They may not be ideal, but they're the only practical performance measures we have. Maybe Epic will bundle an integrated benchmark when they release Infinity Blade for Android, and then we may get a more realistic one.

If indeed the SGX540 is clocked to 300-400MHz, then Tegra2 should become the slowest "top performer" of the bunch.
 
Judging by today's standards, I'd say ~500MHz for the SoC's GPU would be quite high, and that would place the SGX540 on par with a ~250MHz SGX543MP2.

No it wouldn't. Apart from the TMUs, a single SGX543 is close to, if not at least, twice an SGX540.

Open for correction:

SGX540@200MHz
4 Vec2 ALUs
2 TMUs
8 z/stencil
20M Tris/s

SGX543@200MHz
4 Vec4 ALUs
2 TMUs
16 z/stencil
35M Tris/s
YUV & colour space hw acceleration
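Reading those numbers side by side, here's a minimal sketch of the per-clock comparison (assuming, as a simplification, one MAD per ALU lane per clock; real USSE throughput is more nuanced):

```c
#include <stdio.h>

/* Per-clock ALU lanes: number of units times vector width. */
static int lanes(int units, int width) { return units * width; }

int main(void)
{
    int sgx540 = lanes(4, 2); /* 4 Vec2 ALUs ->  8 lanes */
    int sgx543 = lanes(4, 4); /* 4 Vec4 ALUs -> 16 lanes */
    printf("ALU lanes: SGX540=%d, SGX543=%d (%dx)\n",
           sgx540, sgx543, sgx543 / sgx540);
    /* Add double the z/stencil rate (16 vs 8) and 35M vs 20M Tris/s, and
       "twice an SGX540 at the same clock" looks about right, TMUs aside. */
    return 0;
}
```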
 