Samsung Orion SoC - dual-core A9 + "5 times the 3D graphics performance"

Mr. Combative once again... that's unfortunate.

Anyhow, I didn't "come up" with some vague sentence; I got it from the ODROID developers, which I did say.

"We could enjoy web-surfing about 5~6hour with mid-range of LCD back-light setting."

http://odroid.foros-phpbb.com/t547p20-odroid-a#3107

There's the link; the user's name is odroid, and his postings would indicate that he is a Hardkernel employee.
On the same thread two posts up he says
"please remember, this is a development platform"

He didn't say it could be used as a development platform, it *IS* a development platform.

The thread further indicates the web browsing test was done using WiFi.

The 11hr figure for web browsing came from AnandTech's testing of the iPad 2 on web browsing using WiFi.

I'm being persistent, as I always am when I'm almost sure I'm right and the other person is wrong.
If you read it as "combat", then too bad for you. I'm just making my point and I won't stop doing so.



Of the 10 questions I asked that could actually turn your initial "it's a development platform so they probably just overclocked the chip to death in order to get those results" theory into something reasonable, you managed to answer 2. Not enough.
Those 5-6 hours are still vague because he doesn't state which websites he visited or for how long. I would only accept a comparison like that as valid if both devices were visiting the same websites for the whole time and none of those sites had Flash content.



And how about the fact that the ODROID-A is getting the same results as the Galaxy S II in synthetic tests?
Why was that omitted in your argument?




Yup. And some of the synthetic tests also show the galaxy s2 outperforming the odroid-a by a substantial margin - indicating that the odroid-a results, if anything, understate how powerful the Exynos really is.

Well some tests are supposed to show much better performance on the GS II because of the much lower render resolution. Tests that aren't fill-rate or bandwidth-limited should be about the same between the two devices (maybe with the odd marginal difference, since they can be running different drivers).
 
Yup. And some of the synthetic tests also show the galaxy s2 outperforming the odroid-a by a substantial margin - indicating that the odroid-a results, if anything, understate how powerful the Exynos really is.

Actually, no, the S2 is running at a much lower resolution, which means one of two things:

1) The S2 implementation is substantially slower, or,
2) We shouldn't be using VSync limited benchmarks to compare performance on devices running at different resolutions.

I'll let you decide which ;)
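
To put rough numbers on option 2 (a quick sketch; the panel resolutions are assumed to be 800x480 for the S2 and 1366x768 for the ODROID-A, and the throughput figure is invented):

Code:
VSYNC_HZ = 60

def reported_fps(true_fps):
    # with VSync on, a 60Hz panel clips whatever the GPU could really do
    return min(true_fps, VSYNC_HZ)

throughput = 60e6  # hypothetical shaded pixels/s, same GPU in both devices
for name, pixels in [("S2 (800x480)", 800 * 480),
                     ("ODROID-A (1366x768)", 1366 * 768)]:
    true_fps = throughput / pixels
    print(name, round(true_fps, 1), "->", round(reported_fps(true_fps), 1))
# S2 (800x480) 156.2 -> 60.0   (most of its headroom hidden by the cap)
# ODROID-A (1366x768) 57.2 -> 57.2   (reports its real framerate)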
 
Well some tests are supposed to show much better performance on the GS II because of the much lower render resolution. Tests that aren't fill-rate or bandwidth-limited should be about the same between the two devices (maybe with the odd marginal difference, since they can be running different drivers).

Not just fill-rate or bandwidth; we're talking all fragment processing, everything that is dispatched to those 4 fragment cores. It'd be better to say tests which aren't limited by the triangle frontend (vertex shading + binning/clipping + triangle setup).
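
A toy cost model of the split (all constants invented for illustration):

Code:
def frame_cost(pixels, triangles,
               frag_cycles_per_px=4, vert_cycles_per_tri=30):
    fragment = pixels * frag_cycles_per_px       # scales with resolution
    frontend = triangles * vert_cycles_per_tri   # fixed for a given scene
    return fragment, frontend

for name, pixels in [("S2 800x480", 800 * 480),
                     ("ODROID-A 1366x768", 1366 * 768)]:
    fragment, frontend = frame_cost(pixels, triangles=50_000)
    print(name, "fragment cycles:", fragment, "frontend cycles:", frontend)
# dropping the resolution ~3x cuts the fragment term ~3x, so only tests
# dominated by the frontend term should score alike on the two devices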
 
Actually, no, the S2 is running at a much lower resolution, which means one of two things:

1) The S2 implementation is substantially slower, or,
2) We shouldn't be using VSync limited benchmarks to compare performance on devices running at different resolutions.

I'll let you decide which ;)

My comment applied to the synthetic tests of glbenchmark, you know, the ones which measure things in terms of vertices/sec, pixels/sec, shaders/sec, texels/sec, rather than frames-per-second. As far as I can tell, running at a lower resolution or fiddling with VSync should not be able to increase performance on such metrics (unlike what is the case with pseudo-game-content framerate tests like the Egypt tests), and I am quite surprised that both you and ToTTenTranz seem to be trying to argue otherwise.
 
My comment applied to the synthetic tests of glbenchmark, you know, the ones which measure things in terms of vertices/sec, pixels/sec, shaders/sec, texels/sec, rather than frames-per-second. As far as I can tell, running at a lower resolution or fiddling with VSync should not be able to increase performance on such metrics (unlike what is the case with pseudo-game-content framerate tests like the Egypt tests), and I am quite surprised that both you and ToTTenTranz seem to be trying to argue otherwise.

Actually, if the fillrate (pixels/s) is calculated by measuring resolution*FPS and VSync is turned on, you'll be capping that same synthetic test.
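
A minimal sketch of that capping mechanism (the resolution and framerate are hypothetical):

Code:
VSYNC_HZ = 60

def fillrate_score(width, height, true_fps):
    # if the score is just resolution * reported FPS, VSync clips it too
    return width * height * min(true_fps, VSYNC_HZ)

# a GPU good for 200 fps at WVGA still reports no more than 800*480*60:
print(fillrate_score(800, 480, 200) / 1e6)  # ~23.0 MPixel/s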

Besides, synthetic tests can also be influenced by driver performance, so there's a limit to how credible they can be as indicators of performance potential.
 
Actually, if the fillrate (pixels/s) is calculated by measuring resolution*FPS and VSync is turned on, you'll be capping that same synthetic test.

In that case, cutting resolution to one-third (the approximate resolution difference between galaxy S2 and the odroid-a) should result in the lower-resolution device (the S2) producing one-third of the fillrate, which is manifestly not what any of the synthetic tests actually show.
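
Rough numbers, assuming WVGA (800x480) for the S2 and 1366x768 for the ODROID-A:

Code:
px_s2, px_odroid, VSYNC_HZ = 800 * 480, 1366 * 768, 60

# if both fill tests were pinned at VSync, the reported ceilings would be:
print(px_s2 * VSYNC_HZ / 1e6)      # ~23.0 MPixel/s
print(px_odroid * VSYNC_HZ / 1e6)  # ~63.0 MPixel/s
# i.e. the S2 would report roughly a third of the ODROID-A's fillrate,
# and the published numbers look nothing like that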

In any case, the synthetic test results that I talked about are for the most part the synthetic vertex/triangle-rate tests; these tests produce substantially higher scores (as measured in vertices and triangles per second) on galaxy S2 than on odroid-A; it is these tests and not the Egypt framerate numbers that are the reason why I said that the odroid-A results may be understating the power of the Exynos.
 
In any case, the synthetic test results that I talked about are for the most part the synthetic vertex/triangle-rate tests; these tests produce substantially higher scores (as measured in vertices and triangles per second) on galaxy S2 than on odroid-A; it is these tests and not the Egypt framerate numbers that are the reason why I said that the odroid-A results may be understating the power of the Exynos.

And why might that be? Drivers out of date?
 
My comment applied to the synthetic tests of glbenchmark, you know, the ones which measure things in terms of vertices/sec, pixels/sec, shaders/sec, texels/sec, rather then frames-per-second. As far as I can tell, running at a lower resolution or fiddling with VSync should not be able to increase performance on such metrics (unlike what is the case with pseudo-game-content framerate tests like the Egypt tests), and I am quite surprised that both you and ToTTenTranz seem to be trying to argue otherwise.

It's true the low level tests show similar performance in some cases and varied performance in others, which probably points at driver versions; I was mixing things up with respect to one of your earlier posts...

It now reaches an Egypt score of 5997, putting it comfortably ahead of the ipad2.

If, at this point, anyone wants to point out that the ipad2 has more pixels, I'd like to point to the one other Exynos device in the GLbenchmark database: the HardKernel ODROID-A; granted, ipad2 gets a 30% higher framerate than the odroid, but the odroid has 33% more pixels.

You're happy to ignore the fact that the S2 has half the pixels of the ipad2 yet is only 10% faster, while pointing out that the ipad2 is only 30% faster than a device rendering 33% more pixels. Sounds like spin to me.
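
Normalise per pixel and the inconsistency is obvious (resolutions assumed: 1024x768 ipad2, 800x480 S2, 1366x768 ODROID-A; framerates expressed relative to the ipad2):

Code:
px  = {"ipad2": 1024 * 768, "S2": 800 * 480, "ODROID-A": 1366 * 768}
fps = {"ipad2": 1.00, "S2": 1.10, "ODROID-A": 1.00 / 1.30}  # relative Egypt rates

base = px["ipad2"] * fps["ipad2"]
for name in px:
    # pixels/s relative to the ipad2
    print(name, round(px[name] * fps[name] / base, 2))
# ipad2 1.0, S2 ~0.54, ODROID-A ~1.03: pixel-for-pixel the S2 "looks"
# half as fast as the same silicon in the ODROID-A, which only makes
# sense if the S2's framerate is being clipped by VSync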

Basically the vsync limit IS making it completely invalid to compare results at different resolutions on these devices.

John.
 
In any case, the synthetic test results that I talked about are for the most part the synthetic vertex/triangle-rate tests; these tests produce substantially higher scores (as measured in vertices and triangles per second) on galaxy S2 than on odroid-A;
The triangle tests in particular are resolution dependent, though, especially on a TBR: every triangle has to be binned into each tile it covers, and the number of tiles grows with resolution.
 
Nice find, but actually the two pictures are clearly the same chip ;) The one on Page 44 allows for a lot more magnification though, and it's VERY clear that the two blocks in the bottom right and the block in the top right are all copy-pasted. In other words, this is a Mali-400 MP3! The first tri-core handheld GPU, but probably not the last. I estimate each core's die size at slightly more than 2mm², and the full GPU including the geometry processor seems to be close to 8.5mm² assuming the large SRAM blocks have nothing to do with it.

I don't know what that chip is though. It's not Samsung or ST-Ericsson. If it's not a random Tier 2, maybe it's an unannounced TV SoC from Mediatek?
 
Well naturally I knew that, I just wanted to see if everyone else was awake. :)

Nice bit of deciphering; Mali-400MP3 indeed.

I worked the area out to be between 9 and 10mm², depending on the bit in the middle, on a 40nm-process SoC.

For comparison, IMG quoted the 545 as taking up 12.5mm² @ 65nm, which I guess would make it somewhat smaller than the Mali-400MP3 (assuming IMG's figures are realistic), with similar performance, and much more compliance in terms of DX, OpenGL and OpenCL.
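
As a sanity check, a naive optical shrink of IMG's figure (ideal scaling; real layouts shrink less than this):

Code:
area_545_65nm = 12.5             # mm^2, IMG's quoted figure
shrink = (40 / 65) ** 2          # ~0.38 ideal area scaling to 40nm
print(area_545_65nm * shrink)    # ~4.7 mm^2, vs the 9-10 mm^2 worked out
                                 # above for the Mali-400MP3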
 
I'm not sure I'd want to compare synthesis figures from anyone with numbers from real silicon. After all, if we wanted to cherry pick against IMG, I could point out that the SGX543MP2 on the Apple A5 takes more than 30mm² on Samsung 45nm and it's not 3x faster than Mali-400MP3. But the A5 is an exception and other SGX543MP2 implementations (ala Renesas) will certainly be quite a bit smaller!

A fair comparison would be the SGX540 and Mali-400MP4 on Samsung 45nm, but unfortunately we don't have any die shots of either...
 
The performance difference is challenging to reconcile when comparing Mali and PowerVR, especially in MP configurations, as they scale differently. A Mali-400 MP4 obviously has quite an advantage over an SGX540, in some respects more than others.

Since both Intel and Apple cut out middlemen in their semiconductor business (in different ways) compared to the typical semi vendor, they can afford the extra die area of implementing an SGX core at a lower density. nVidia, TI, Renesas, etc. have the heat/power challenge of really cramming the GPU in there, especially as the relative importance of that part of the SoC has grown.
 
It looks like it's based on ARM Ltd's A9 hard macro, codenamed Elba (in fact Osprey, as it seems to have an L2 cache); see slide 6 of this presentation.
Ohhh, you might be right, good catch. Maybe Nufront then? Then again they claim 800MPixel/s, which would imply a clock speed of ~267MHz and that wouldn't be very impressive on 40G... Don't know of any other public Osprey licensee but presumably they're not the only ones.
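
For reference, the ~267MHz falls straight out of the fill-rate claim, assuming each Mali-400 fragment core fills one pixel per clock:

Code:
cores = 3                          # the MP3 spotted in the die shot
fill_claimed = 800e6               # claimed pixels/s
print(fill_claimed / cores / 1e6)  # ~266.7 MHz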
 
Ohhh, you might be right, good catch. Maybe Nufront then? Then again they claim 800MPixel/s, which would imply a clock speed of ~267MHz and that wouldn't be very impressive on 40G... Don't know of any other public Osprey licensee but presumably they're not the only ones.

Seems unlikely, since they claim 45MTriangle/s. Compare with the 30MTriangle/s figure ARM gives at 275MHz, and signs clearly point to Mali-400MP2 at 400MHz.
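
Working backwards from ARM's reference figures (the triangle rate comes from the single geometry processor, so it scales with clock rather than core count):

Code:
arm_tri_rate, arm_clock = 30e6, 275e6      # ARM's Mali-400 reference point
clock = arm_clock * (45e6 / arm_tri_rate)  # scale to the claimed 45MTri/s
print(clock / 1e6)                         # ~412 MHz, i.e. roughly 400MHz
# and at 400MHz an MP2 (2 px/clock) gives 2 * 400e6 = 800 MPixel/s,
# matching the claimed fill rate as well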
 
Seems unlikely, since they claim 45MTriangle/s. Compare with the 30MTriangle/s figure ARM gives at 275MHz, and signs clearly point to Mali-400MP2 at 400MHz.
Good point, so it has to be an unannounced chip, probably from a Tier 2. Either way, nice die shot I suppose :)
 