Samsung SoC & ARMv8 discussions

About 50% faster in Manhattan and 100% faster in T-Rex, compared to the Exynos 5433 in the Note 4.
 
Judging from the overhead score on the Edge, I'd say there's still some headroom in the drivers for somewhat higher performance.
 
BTW, Samsung had better have a LOT of confidence in their SoC's power savings, since they actually reduced the battery capacity from S5 to S6.
 
Since Malis have also surprised us with higher performance through driver/compiler optimisations, they might very well be able to squeeze a bit more out of their 3.1 drivers. NVIDIA was first with 3.1 drivers, if memory serves.
 
While there may well be legitimate gains to be had through driver optimisations, I always get a bad feeling when updated drivers yield major improvements in benchmark scores.
It is a weakness of the mobile benchmarking scene that it is totally dominated by a very small number of benchmarks. Given the huge stakes involved, foul play, which can take many shapes in benchmarking, can pretty much be expected.
 

There has been more than one case in the past where ARM managed to increase Mali GPU performance by 30%, if not 40%. In this case it's not much of a surprise, considering how much of a pain in the ass their architecture is to write a compiler for. Since it sounded strange to me too, I asked around, and from what I heard everything was legitimate.

On a side note, I'm seeing PowerVR Rogue gain at least 20% in Manhattan across platforms when going from OpenGL ES 3.0 to 3.1 drivers; that doesn't come as a surprise either, since 3.1 itself should allow for far better software housekeeping.

Samsung's Galaxy S6 scores ~26 fps in Manhattan offscreen while the GK20A in Tegra K1 scores ~32 fps. In Manhattan 3.1 the Tegra K1 drops to ~23 fps while the Galaxy S6 drops to ~10 fps. That would make the GK20A roughly 1.2x faster in Manhattan 3.0 but about 2.3x faster in 3.1, and I am not convinced at all that a GK20A is over 2x faster than a Mali T760MP8@773MHz.
 

If only actual games could be benchmarked. I don't understand why this has yet to become a thing.
 

We've been saying that here in the forum for quite some time now. I think websites are reluctant to use any because they don't have the tools to measure real game performance. ISVs don't see much point yet in implementing timedemos in their games, and from what I recall there are some obstacles to creating something like FRAPS for Android.

On the other hand, scratch in-game timedemos, as IHVs can optimize drivers for those just as much as for synthetic benchmarks ;)
 
Actually, there is an app called Gamebench that does quite a good job of measuring a lot of info during gameplay.
 

Measurement's only half the problem. You need games that run the same way (same resolution and image-quality targets) across devices and aren't frame-rate limited, and you need something with reliable (and preferably easy) reproducibility. Does anyone know of major games that actually include in-game benchmarks? I'm not sure why they don't; surely they use things like this internally during development. I wonder if there's any outside pressure not to include them.
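For what it's worth, here is a minimal sketch of the kind of per-frame logging such a built-in benchmark mode would need. This is purely illustrative: render_frame() and run_timedemo() are made-up names, and a real implementation would also have to sync with the GPU (glFinish, fences or timer queries) so it isn't just measuring CPU submission time.

```cpp
#include <algorithm>
#include <chrono>
#include <vector>

struct FrameStats {
    double avg_fps;  // average frames per second over the whole run
    double p99_ms;   // 99th-percentile frame time, a smoothness indicator
};

// Assumed to draw frame i of a fixed, scripted camera path at a fixed
// internal resolution, with vsync disabled. Hypothetical hook.
extern void render_frame(int i);

FrameStats run_timedemo(int frame_count) {
    std::vector<double> frame_ms;
    frame_ms.reserve(frame_count);

    for (int i = 0; i < frame_count; ++i) {
        auto t0 = std::chrono::steady_clock::now();
        render_frame(i);
        // NOTE: without a glFinish()/fence here, this only captures
        // CPU-side time; GPU-bound frames would be under-reported.
        auto t1 = std::chrono::steady_clock::now();
        frame_ms.push_back(
            std::chrono::duration<double, std::milli>(t1 - t0).count());
    }

    std::sort(frame_ms.begin(), frame_ms.end());
    double total_ms = 0.0;
    for (double ms : frame_ms) total_ms += ms;

    FrameStats s;
    s.avg_fps = 1000.0 * frame_count / total_ms;
    s.p99_ms  = frame_ms[static_cast<size_t>(frame_count * 0.99)];
    return s;
}
```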
 

I'm not aware of any but would be happy to stand corrected. Any ISV that develops mobile games could take a scene out of the game that they consider representative, create a timedemo, and let it run at, say, 1080p offscreen.
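Something along these lines for the "offscreen at a fixed 1080p" part; a hypothetical sketch only, assuming an OpenGL ES 3.0 context is already current (create_offscreen_1080p_fbo() is a made-up helper, and error handling is omitted):

```cpp
#include <GLES3/gl3.h>

// Create a 1920x1080 framebuffer so every device renders the exact same
// pixel count, regardless of its native screen resolution.
GLuint create_offscreen_1080p_fbo() {
    const GLsizei w = 1920, h = 1080;

    GLuint color, depth, fbo;
    glGenRenderbuffers(1, &color);
    glBindRenderbuffer(GL_RENDERBUFFER, color);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, w, h);

    glGenRenderbuffers(1, &depth);
    glBindRenderbuffer(GL_RENDERBUFFER, depth);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, w, h);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, color);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                              GL_RENDERBUFFER, depth);

    // Rendering into this FBO is never throttled by the display's vsync,
    // so the timedemo can run as fast as the GPU allows.
    glViewport(0, 0, w, h);
    return fbo;
}
```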


I'll give Gamebench a try, but if it can't bypass vsync and render at specific resolutions on all tested devices, I don't think it will help much.

***edit: tried it in Asphalt 8 for just one race. It affects performance, at some spots severely. No thank you.
 
I highly doubt that ~10 fps Manhattan 3.1 figure for the Galaxy S6 is accurate. Right now the databases are being filled with junk data by people at MWC; power-saving mode, for one, cuts the frequency in half.
 