Samsung Exynos 5250 - production starting in Q2 2012

Every single instance (kernel sources for Exynos, talks with some lead devs, Google's Gerrit patches) points out that the CCI is powered off and inactive.

Of course it doesn't make much sense given some of the chip layouts we know of, even if just the ports are disabled: http://i.imgur.com/6wunhUQ.png
Indeed it doesn't make sense.

I looked at this code:
https://github.com/AndreiLux/Perseu...erseus/arch/arm/mach-exynos/bL_control.c#L400
Code:
static size_t bL_check_status(char *info)
{
	...
	len += sprintf(&info[len], " %d\n",
		       (readl(cci_base + 0x4000 + 1 * 0x1000) & 0x3)
		       == 3 ? 1 : 0);
	...
	len += sprintf(&info[len], " %d\n\n",
		       (readl(cci_base + 0x4000 + 0 * 0x1000) & 0x3)
		       == 3 ? 1 : 0);
}
These two sprintf calls are what produce the CCI column in the output pasted here.

If you look at the register summary in the CCI TRM, you'll see that when those two printed bits are 0 it just means that DVM and snoop requests are disabled on slave interfaces 3 and 4.
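
For what it's worth, here's how those two bits decode if I'm reading the CCI-400 TRM's register summary right. The offsets come from the snippet above; the helper below is just my own sketch for illustration, not something from the actual kernel tree:

Code:
#include <linux/io.h>
#include <linux/types.h>

/*
 * CCI-400 register map: slave interface n sits at base + 0x1000 * (n + 1),
 * and offset 0x0 of each slave block is its Snoop Control Register:
 *   bit 0 - enable issuing of snoop requests
 *   bit 1 - enable issuing of DVM message requests
 * So "(readl(...) & 0x3) == 3" above only tells you whether snoops and DVM
 * messages are enabled on that slave interface, not whether the CCI block
 * itself is powered down.
 */
static bool cci_slave_snoops_enabled(void __iomem *cci_base, int slave)
{
	u32 ctrl = readl(cci_base + 0x1000 * (slave + 1));

	return (ctrl & 0x3) == 0x3;
}

Calling that with slave = 3 and slave = 4 lands on the same 0x4000 and 0x5000 offsets as the two readl()s in bL_check_status.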

OTOH I perhaps missed the comments that explicitly state that the CCI is off in the kernel. Could you please point me to the proper file to look at?
 
OTOH I perhaps missed the comments that explicitly state that the CCI is off in the kernel. Could you please point me to the proper file to look at?

https://github.com/AndreiLux/Perseus-UNIVERSAL5410/blob/perseus/arch/arm/mach-exynos/cci.c#L195

Code:
/*
 * This function is used for checking CCI hw configuration
 * It CCI hw is not disabled, kernel panic is occurred by
 * this function.
 */
static void cci_check_hw(void)
{
...
...
...
	if (!tmp) {
		pr_err("***** CCI is not disabled, Please check board type *****\n");
		panic("CCI is not disabled! Do not use this board!\n");
	}

That blatantly states it's a "hardware configuration" and a board/SoC thing. It's the hardware bandwidth monitor probes that are getting configured and used in the function; I have no idea how to figure out what exactly it's reading out. This code is non-existent for the 5420.
 
Oh hell Samsung, shame on you!

I'm currently doing GPU overclocking and voltage control in the kernel for the 5410/i9500, and was screwing around with what was supposed to be a generic max limit, only to be surprised by what it actually represents.

This GPU does not run 532MHz; that frequency level is solely reserved for Antutu and GLBenchmark, among other things. On non-whitelisted applications the GPU is limited to 480MHz. The old GLBenchmark apps, for example, run at 532MHz, while the new GFXBench app, which is not whitelisted, runs at 480MHz. /facepalm

For anybody interested, here are some scores at 640MHz, for comparison's sake, to show what the 544MP3 can do. I tried 700MHz, but that wasn't stable within the prescribed upper voltage limit (1150mV).

GFXBench 2.7.2 (offscreen):
2.7 T-Rex: 14fps
2.5 Egypt: 48fps

Antutu 3DRating (onscreen): 8372 / 31.4fps
Antutu 3.3.1 3D benchmark: 8584

Basemark Taiji: 46.54

3DMark:
Ice storm standard: 11357 overall, 11486 graphics, 58.1fps GT1, 43.8fps GT2
Ice storm extreme: 7314 overall, 6680 graphics, 39.1fps GT1, 23.1fps GT2
 
This GPU does not run 532MHz; that frequency level is solely reserved for Antutu and 3DMark among things. The GPU on non-whitelisted applications is limited to 480MHz.

Interesting! Where do you find that in the sources?
 
This GPU does not run 532MHz; that frequency level is solely reserved for Antutu and 3DMark among things. The GPU on non-whitelisted applications is limited to 480MHz.
Ouch, that's really embarrassing. Given that it apparently does run stable at that frequency, I'm not sure I'd call it cheating, but it's damn close.

Actually, I wonder whether the reason it doesn't run at 533MHz in everything is power consumption, or if it might even be stability related? Hmm, who knows.

The old GLBenchmark apps for example run at 532MHz while the new GFXBench app which is not whitelisted, runs at 480MHz. /facepalm
Does that mean the 4700-4800 score in GLB2.5 1080p Offscreen in GFX-Bench is at 480MHz? That's closer to the perf/MHz I'd have expected then at least :)

But that also only shows a ~16% improvement for a ~33% frequency increase to 640MHz. Do you know what frequency the LPDDR runs at and what total bandwidth (in GB/s) that gives? I suspect either memory bandwidth or memory latency might be a bottleneck then.
 
https://github.com/AndreiLux/Perseus-UNIVERSAL5410/blob/perseus/arch/arm/mach-exynos/cci.c#L195

That blatantly states it's a "hardware configuration" and a board/SoC thing. That's the hardware bandwidth monitor probes that are getting configured and used in the function, no idea how to figure out what exactly it's reading out. This code is non-existent for the 5420.
That code is only used if the CCI is configured out, so I'm not sure exactly what it has to do with the discussion about the CCI being completely powered down dynamically. I wonder if we haven't started discussing something different :)
 
This GPU does not run 532MHz; that frequency level is solely reserved for Antutu and 3DMark among things. The GPU on non-whitelisted applications is limited to 480MHz. The old GLBenchmark apps for example run at 532MHz while the new GFXBench app which is not whitelisted, runs at 480MHz. /facepalm

ROFL :LOL: (and yes you know why I'm rolling on the floor right now....)
 
Does that mean the 4700-4800 score in GLB2.5 1080p Offscreen in GFX-Bench is at 480MHz? That's closer to the perf/MHz I'd have expected then at least :)

But that also only shows a ~16% improvement for a ~33% frequency increase to 640MHz. Do you know what frequency the LPDDR runs at and what total bandwidth (in GB/s) that gives? I suspect either memory bandwidth or memory latency might be a bottleneck then.
Yes, all GFXBench scores are at 480.

The memory runs at 800MHz and should be 12.8GB/s, if there's no shenanigans in the internal bus widths of course.
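
For reference, that figure works out like this, assuming the 5410 really does have the usual 2x32-bit LPDDR3 configuration (my assumption, not something stated above):

Code:
800MHz memory clock x 2 (DDR)          = 1600MT/s per pin
2 channels x 32 bits                   = 64-bit effective interface
1600MT/s x 64 bits / 8 bits per byte   = 12.8GB/s theoretical peak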

This little shitty trick can't be found in the sources, and that's why I thought it was running 532MHz for weeks (I had confirmed that number by running the whitelisted benchmarks to see whether it reached that frequency, d'oh).

A user-space entity fires up a frequency lock on /sys/devices/platform/pvrsrvkm.0/sgx_dvfs_max_lock during 3D load. You can just monitor that entry via ADB while gaming and benchmarking to see what's going on.

The live clock is extractable from /sys/module/pvrsrvkm/parameters/sgx_gpu_clk.
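
If anyone wants to check this on their own device, something along these lines is all it takes. It's just a quick sketch that polls the two entries mentioned above; the exact paths may differ between ROMs/kernels, so adjust as needed:

Code:
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* sysfs entries as mentioned above; adjust if your ROM exposes them elsewhere */
#define GPU_CLK  "/sys/module/pvrsrvkm/parameters/sgx_gpu_clk"
#define MAX_LOCK "/sys/devices/platform/pvrsrvkm.0/sgx_dvfs_max_lock"

/* print the first line of a sysfs entry, or "n/a" if it can't be read */
static void print_entry(const char *label, const char *path)
{
	char buf[64] = "n/a";
	FILE *f = fopen(path, "r");

	if (f) {
		if (fgets(buf, sizeof(buf), f))
			buf[strcspn(buf, "\n")] = '\0';
		fclose(f);
	}
	printf("%s=%s  ", label, buf);
}

int main(void)
{
	/* poll once a second while launching a game or a benchmark */
	for (;;) {
		print_entry("gpu_clk", GPU_CLK);
		print_entry("max_lock", MAX_LOCK);
		putchar('\n');
		fflush(stdout);
		sleep(1);
	}
}

Cross-compile it, adb push it to the phone and run it from an adb shell while switching between GFXBench and one of the whitelisted apps; the max lock appearing and disappearing tells you the whole story.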

Btw, I mistyped the whitelisted benchmarks: I meant Antutu and GLBenchmark. 3DMark seems to have always run at 480MHz.
 
Nebuchadnezzar, those findings should go viral.
By the way did I mention to you guys that they also cheat in terms of CPU policy and thermals? I found that out a few weeks ago but don't think I posted it here.

Antutu, for example, triggers a thermal "boost mode" where the trigger temperatures are raised by 10°C and the throttling floor is set to an A15 core frequency instead of the usual drop down to the A7 cores. On top of that, they set a minimum CPU frequency of 1200MHz just from having the app open (and doing nothing).
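
For anyone who wants to verify this on their own device, a quick before/after snapshot of the cpufreq floor and a thermal trip point is enough to see it. The cpufreq path is the standard one; which thermal zone and trip index correspond to the CPU on the 5410 is a guess on my part, so you may have to poke around /sys/class/thermal a bit:

Code:
#include <stdio.h>

/* standard sysfs entries; the thermal zone/trip index is a guess for the 5410 */
static const char *entries[] = {
	"/sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq",
	"/sys/class/thermal/thermal_zone0/trip_point_0_temp",
};

int main(void)
{
	char buf[64];

	/* run once with Antutu closed and once with it open, then compare */
	for (unsigned i = 0; i < sizeof(entries) / sizeof(entries[0]); i++) {
		FILE *f = fopen(entries[i], "r");

		if (f && fgets(buf, sizeof(buf), f))
			printf("%s: %s", entries[i], buf);
		else
			printf("%s: n/a\n", entries[i]);
		if (f)
			fclose(f);
	}
	return 0;
}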

;)
 
By the way did I mention to you guys that they also cheat in terms of CPU policy and thermals? I found that out a few weeks ago but don't think I posted it here.

Antutu, for example, triggers a thermal "boost mode" where the trigger temperatures are raised by 10°C and the throttling floor is set to an A15 core frequency instead of the usual drop down to the A7 cores. On top of that, they set a minimum CPU frequency of 1200MHz just from having the app open (and doing nothing).

;)
If you knew what Intel does, you wouldn't complain :D
 
By the way did I mention to you guys that they also cheat in terms of CPU policy and thermals? I found that out a few weeks ago but don't think I posted it here.

Antutu, for example, triggers a thermal "boost mode" where the trigger temperatures are raised by 10°C and the throttling floor is set to an A15 core frequency instead of the usual drop down to the A7 cores. On top of that, they set a minimum CPU frequency of 1200MHz just from having the app open (and doing nothing).

;)

I think it's only "cheating" if it's application-dependent.
Boosting the GPU's frequency for select benchmarks is clearly cheating. Whatever they do to balance the device between "snappiness" and battery life is up to them, IMO.
 
All of these stupid games just to trick an awful and pointless benchmark like AnTuTu. Really depressing state of affairs.
 
All of these stupid games just to trick an awful and pointless benchmark like AnTuTu. Really depressing state of affairs.

No benchmark with such an exotic name can be pointless. First and above all... and last, unfortunately, it's got an exotic name! :rolleyes:
 
The question I have is: are Samsung the only ones doing these kinds of tricks? Do we know whether Qualcomm, nVidia or even Apple do these kinds of things?
This has really put me off Exynos for a while.
 
The question I have is: are Samsung the only ones doing these kinds of tricks? Do we know whether Qualcomm, nVidia or even Apple do these kinds of things?
This has really put me off Exynos for a while.

For NV and Tegra I haven't heard or read anything yet; when it comes to GPUs and benchmarks in general, though, I'm sure they've got quite a few corpses hidden in the basement.
 
There are lots of scandals in the GPU business.

Just off the top of my head: ATi had instructions in the driver that lowered the anisotropic filtering quality whenever the Quake 3 executable was detected (Radeon 8500 era); 3DMark Vantage had a physics benchmark that used hardware PhysX, so nVidia cards would get higher scores; and I think Intel used to actually disable AF in some benchmarks.

Then there are the many TWIMTBP games, where the TWIMTBP "optimizations" consisted mostly of blocking IQ features when an nVidia card wasn't detected. We get to see that on Android too, unfortunately.

Nebuchadnezzar, would you mind if I share your findings with some blogs, as long as I properly reference your post?
 
There are lots of scandals in the GPU business.

Just off the top of my head: ATi had instructions in the driver that lowered the anisotropic filtering quality whenever the Quake 3 executable was detected (Radeon 8500 era); 3DMark Vantage had a physics benchmark that used hardware PhysX, so nVidia cards would get higher scores; and I think Intel used to actually disable AF in some benchmarks.
Slight nitpick: you couldn't actually lower anisotropic filtering quality on the Radeon 8500, as even its best setting was shitty as hell and essentially useless; it could only do bilinear anisotropic (disregarding the extreme angle dependency, which at least doesn't make things worse compared to ordinary filtering). The Quack issue was about ordinary trilinear filtering: IIRC they used some extreme LOD bias (which might have been a bug) plus limited filtering between mipmaps (later known as brilinear filtering), though the latter might have been used in other apps as well, just with a less extreme setting (the extent of brilinear is tweakable). After all, brilinear was quite a popular optimization (and still might be in some markets).
Though IMHO nVidia beats it all with the uncompetitive FX series in 3DMark03, where they not only used highly optimized, simpler shaders with fixed-point arithmetic (and the results were visibly different), but the static clip planes especially were hilarious :).
 