Old 26-Jun-2013, 09:52   #601
Laurent06
Member
 
Join Date: Dec 2007
Posts: 535
Default

Quote:
Originally Posted by Nebuchadnezzar View Post
Every single instance (kernel sources for Exynos, talks with some lead devs, Google's gerrit patches) points to the CCI being powered off and inactive.

Of course it doesn't make much sense given some of the chip layouts we know of, even if just the ports are disabled: http://i.imgur.com/6wunhUQ.png
Indeed it doesn't make sense.

I looked at this code:
https://github.com/AndreiLux/Perseus...control.c#L400
Code:
static size_t bL_check_status(char *info)
{
...
len += sprintf(&info[len], " %d\n",
(readl(cci_base + 0x4000 + 1 * 0x1000) & 0x3)
== 3 ? 1 : 0);
...
len += sprintf(&info[len], " %d\n\n",
(readl(cci_base + 0x4000 + 0 * 0x1000) & 0x3)
== 3 ? 1 : 0);
}
These two sprintf calls are what produces the CCI column in the output pasted here.

If you look at the CCI TRM register summary, you'll see that if the two printed bits are 0, it just means DVM and snoop requests are disabled for slave interfaces 3 and 4.
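For what it's worth, the check those two sprintf lines perform can be sketched standalone. This is a minimal version assuming the CCI-400 register layout from the TRM (slave interface n's Snoop Control Register at 0x1000*(n+1), bit 0 = snoops, bit 1 = DVM) — it's an illustration, not code from the actual tree:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed CCI-400 layout: slave interface n has its Snoop Control
 * Register at cci_base + 0x1000 * (n + 1); bit 0 enables snoop
 * requests, bit 1 enables DVM message requests. */
#define CCI_SNOOP_CTRL_OFFSET(n)  (0x1000u * ((n) + 1u))
#define CCI_SNOOP_EN              (1u << 0)
#define CCI_DVM_EN                (1u << 1)

/* Same test as the kernel's "(readl(...) & 0x3) == 3": returns 1 only
 * when both snoop and DVM requests are enabled on the interface. */
static int cci_iface_active(uint32_t snoop_ctrl_val)
{
    return (snoop_ctrl_val & (CCI_SNOOP_EN | CCI_DVM_EN))
        == (CCI_SNOOP_EN | CCI_DVM_EN);
}
```

So a printed 0 only tells you those two request types are disabled on slave ports 3 and 4, not that the CCI block itself is powered off.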

OTOH I perhaps missed the comments that explicitly state that the CCI is off in the kernel. Could you please point me to the proper file to look at?
__________________
Speaking for myself.
Old 26-Jun-2013, 17:26   #602
Nebuchadnezzar
Member
 
Join Date: Feb 2002
Location: Luxembourg
Posts: 601
Default

Quote:
Originally Posted by Laurent06 View Post
OTOH I perhaps missed the comments that explicitly state that the CCI is off in the kernel. Could you please point me to the proper file to look at?
https://github.com/AndreiLux/Perseus...nos/cci.c#L195

Code:
/*
 * This function is used for checking CCI hw configuration
 * It CCI hw is not disabled, kernel panic is occurred by
 * this function.
 */
static void cci_check_hw(void)
{
...
...
...
	if (!tmp) {
		pr_err("***** CCI is not disabled, Please check board type *****\n");
		panic("CCI is not disabled! Do not use this board!\n");
	}
That blatantly states it's a "hardware configuration" and a board/SoC thing. It's the hardware bandwidth monitor probes that are getting configured and used in the function; I have no idea how to figure out what exactly it's reading out. This code is non-existent for the 5420.
Old 27-Jun-2013, 02:09   #603
Nebuchadnezzar
Member
 
Join Date: Feb 2002
Location: Luxembourg
Posts: 601
Default

Oh hell Samsung, shame on you!

I'm currently doing GPU overclocking and voltage control in the kernel for the 5410/i9500 and was screwing around with what was supposed to be a generic max limit, only to be surprised by what it actually represents.

This GPU does not run at 532MHz; that frequency level is solely reserved for Antutu and GLBenchmark*, among other things. On non-whitelisted applications the GPU is limited to 480MHz. The old GLBenchmark apps, for example, run at 532MHz, while the new GFXBench app, which is not whitelisted, runs at 480MHz. /facepalm

For anybody interested, here are some scores at 640MHz, for comparison's sake, to show what a 544MP3 could do. I tried 700 but that wasn't stable within the prescribed upper voltage limit (1150mV).

GFXBench 2.7.2 (offscreen):
2.7 T-Rex: 14fps
2.5 Egypt: 48fps

Antutu 3DRating (onscreen): 8372 / 31.4fps
Antutu 3.3.1 3D benchmark: 8584

Basemark Taiji: 46.54

3DMark:
Ice storm standard: 11357 overall, 11486 graphics, 58.1fps GT1, 43.8fps GT2
Ice storm extreme: 7314 overall, 6680 graphics, 39.1fps GT1, 23.1fps GT2

Last edited by Nebuchadnezzar; 27-Jun-2013 at 12:41.
Old 27-Jun-2013, 03:38   #604
frogblast
Junior Member
 
Join Date: Apr 2008
Posts: 77
Default

Quote:
Originally Posted by Nebuchadnezzar View Post
Oh hell Samsung, shame on you!

I'm currently doing GPU overclocking and voltage control in the kernel for the 5410/i9500 and was screwing around with what was supposed to be a generic max limit only to be surprised by what it actually represents.

This GPU does not run 532MHz; that frequency level is solely reserved for Antutu and 3DMark among things. The GPU on non-whitelisted applications is limited to 480MHz. The old GLBenchmark apps for example run at 532MHz while the new GFXBench app which is not whitelisted, runs at 480MHz. /facepalm

For anybody interested, here's some scores at 640MHz, for comparison's sake of what 544MP3 could do. I tried 700 but that wasn't stable within the prescribed upper voltage limit (1150mV).

GFXBench 2.7.2 (offscreen):
2.7 T-Rex: 14fps
2.5 Egypt: 48fps

Antutu 3DRating (onscreen): 8372 / 31.4fps
Antutu 3.3.1 3D benchmark: 8584

Basemark Taiji: 46.54

3DMark:
Ice storm standard: 11357 overall, 11486 graphics, 58.1fps GT1, 43.8fps GT2
Ice storm extreme: 7314 overall, 6680 graphics, 39.1fps GT1, 23.1fps GT2
Interesting! Where do you find that in the sources?
Old 27-Jun-2013, 06:19   #605
Arun
Unknown.
 
Join Date: Aug 2002
Location: UK
Posts: 4,914
Default

Quote:
Originally Posted by Nebuchadnezzar View Post
This GPU does not run 532MHz; that frequency level is solely reserved for Antutu and 3DMark among things. The GPU on non-whitelisted applications is limited to 480MHz.
Ouch, that's really embarrassing. Given that it apparently does run stably at that frequency, I'm not sure I'd call it cheating, but it's damn close.

Actually, I wonder if the reason it doesn't run at 533MHz in everything is power consumption, or if it might even be stability-related? Hmm, who knows.

Quote:
Originally Posted by Nebuchadnezzar View Post
The old GLBenchmark apps for example run at 532MHz while the new GFXBench app which is not whitelisted, runs at 480MHz. /facepalm
Does that mean the 4700-4800 score in GLB2.5 1080p Offscreen in GFXBench is at 480MHz? That's closer to the perf/MHz I'd have expected, then, at least.

But that also only shows a ~16% improvement for a ~33% frequency increase to 640MHz. Do you know what frequency the LPDDR runs at and what total bandwidth (in GB/s) that gives? I suspect either memory bandwidth or memory latency might be a bottleneck then.
__________________
Focusing on non-graphics projects in 2013 (but I still love triangles)
"[...]; the kind of variation which ensues depending in most cases in a far higher degree on the nature or constitution of the being, than on the nature of the changed conditions."
Old 27-Jun-2013, 09:11   #606
Laurent06
Member
 
Join Date: Dec 2007
Posts: 535
Default

Quote:
Originally Posted by Nebuchadnezzar View Post
https://github.com/AndreiLux/Perseus...nos/cci.c#L195

Code:
/*
 * This function is used for checking CCI hw configuration
 * It CCI hw is not disabled, kernel panic is occurred by
 * this function.
 */
static void cci_check_hw(void)
{
...
...
...
	if (!tmp) {
		pr_err("***** CCI is not disabled, Please check board type *****\n");
		panic("CCI is not disabled! Do not use this board!\n");
	}
That blatantly states it's a "hardware configuration" and a board/SoC thing. That's the hardware bandwidth monitor probes that are getting configured and used in the function, no idea how to figure out what exactly it's reading out. This code is non-existent for the 5420.
That code is only used if the CCI is configured out, so I'm not sure exactly what it has to do with the discussion about the CCI being completely powered down dynamically. I wonder if we haven't started discussing something different.
__________________
Speaking for myself.
Old 27-Jun-2013, 10:05   #607
Ailuros
Epsilon plus three
 
Join Date: Feb 2002
Location: Chania
Posts: 8,499
Default

Quote:
Originally Posted by Nebuchadnezzar View Post
Oh hell Samsung, shame on you!

I'm currently doing GPU overclocking and voltage control in the kernel for the 5410/i9500 and was screwing around with what was supposed to be a generic max limit only to be surprised by what it actually represents.

This GPU does not run 532MHz; that frequency level is solely reserved for Antutu and 3DMark among things. The GPU on non-whitelisted applications is limited to 480MHz. The old GLBenchmark apps for example run at 532MHz while the new GFXBench app which is not whitelisted, runs at 480MHz. /facepalm

For anybody interested, here's some scores at 640MHz, for comparison's sake of what 544MP3 could do. I tried 700 but that wasn't stable within the prescribed upper voltage limit (1150mV).

GFXBench 2.7.2 (offscreen):
2.7 T-Rex: 14fps
2.5 Egypt: 48fps

Antutu 3DRating (onscreen): 8372 / 31.4fps
Antutu 3.3.1 3D benchmark: 8584

Basemark Taiji: 46.54

3DMark:
Ice storm standard: 11357 overall, 11486 graphics, 58.1fps GT1, 43.8fps GT2
Ice storm extreme: 7314 overall, 6680 graphics, 39.1fps GT1, 23.1fps GT2
ROFL (and yes you know why I'm rolling on the floor right now....)
__________________
People are more violently opposed to fur than leather; because it's easier to harass rich ladies than motorcycle gangs.
Old 27-Jun-2013, 10:52   #608
Nebuchadnezzar
Member
 
Join Date: Feb 2002
Location: Luxembourg
Posts: 601
Default

Quote:
Originally Posted by Arun View Post
Does that mean the 4700-4800 score in GLB2.5 1080p Offscreen in GFX-Bench is at 480MHz? That's closer to the perf/MHz I'd have expected then at least

But that also only shows a ~16% improvement for a ~33% frequency increase to 640MHz. Do you know what frequency the LPDDR runs at and what total bandwidth (in GB/s) that gives? I suspect either memory bandwidth or memory latency might be a bottleneck then.
Yes, all GFXBench scores are at 480.

The memory runs at 800MHz and should be 12.8GB/s, if there are no shenanigans in the internal bus widths, of course.
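The arithmetic behind that figure, assuming a 2x32-bit LPDDR3 interface (the bus width is my assumption; only the 800MHz clock is certain):

```c
#include <assert.h>
#include <stdint.h>

/* DDR transfers twice per clock cycle, so bytes/s = clock * 2 * bus width.
 * 800MHz * 2 * 8 bytes (assumed 2x32-bit channels) = 12.8GB/s. */
static uint64_t ddr_bandwidth_bytes(uint64_t clock_hz, unsigned bus_bytes)
{
    return clock_hz * 2u * bus_bytes;
}
```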

This little shitty trick can't be found in the sources, which is why I thought it was running 532MHz for weeks (I had confirmed that number by running the whitelisted benchmarks to see if it reached that frequency, d'oh).

A user-space entity fires up a frequency lock on /sys/devices/platform/pvrsvrkm.0/sgx_dvfs_max_lock during 3D load. You can just monitor that entry via ADB while gaming and benchmarking to see what's going on.

The live clock is extractable from /sys/modules/pvrsrvkm/parameters/sgx_gpu_clk.
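If you'd rather log those values from a small native tool than cat them by hand over ADB, the readout is just a file read. A minimal sketch — note the sysfs paths above are i9500-specific, so whatever path you pass in is an assumption about your device:

```c
#include <stdio.h>

/* Read one integer from a sysfs-style attribute file, the same thing
 * "adb shell cat .../sgx_gpu_clk" does by hand.  Returns -1 on any
 * error (missing file, unparsable contents). */
static long read_sysfs_long(const char *path)
{
    FILE *f = fopen(path, "r");
    long val = -1;

    if (!f)
        return -1;
    if (fscanf(f, "%ld", &val) != 1)
        val = -1;
    fclose(f);
    return val;
}
```

Call it in a sleep(1) loop while launching a whitelisted benchmark and you can watch the max-lock entry jump.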

Btw, I mistyped the whitelisted benchmarks: I meant Antutu and GLBenchmark. 3DMark seems to have always run at 480.

Last edited by Nebuchadnezzar; 27-Jun-2013 at 11:01.
Old 27-Jun-2013, 11:18   #609
ToTTenTranz
Senior Member
 
Join Date: Jul 2008
Posts: 3,428
Default

Nebuchadnezzar, those findings should go viral.
Old 27-Jun-2013, 12:08   #610
Nebuchadnezzar
Member
 
Join Date: Feb 2002
Location: Luxembourg
Posts: 601
Default

Quote:
Originally Posted by ToTTenTranz View Post
Nebuchadnezzar, those findings should turn viral.
By the way, did I mention to you guys that they also cheat in terms of CPU policy and thermals? I found that out a few weeks ago but don't think I posted it here.

Antutu, for example, triggers a thermal "boost mode" where the trigger temperatures are raised by 10°C and the bottom throttling frequency is set to an A15 core frequency instead of the usual throttle-down to the A7s. On top of that, they set a minimum CPU frequency of 1200MHz just by having the app open (and doing nothing).

Old 27-Jun-2013, 13:12   #611
Laurent06
Member
 
Join Date: Dec 2007
Posts: 535
Default

Quote:
Originally Posted by Nebuchadnezzar View Post
By the way did I mention to you guys that they also cheat in terms of CPU policy and thermals? I found that out a few weeks ago but don't think I posted it here.

Antutu for example triggers a thermal "boost mode" where the trigger temps are raised by 10C and the bottom throttling freq is set to an A15 core frequency instead of the usual to-A7 throttling. That and that they put a min-CPU frequency of 1200MHz just by having the app opened (and doing nothing).

If you knew what Intel does, you wouldn't complain
__________________
Speaking for myself.
Old 27-Jun-2013, 13:16   #612
ToTTenTranz
Senior Member
 
Join Date: Jul 2008
Posts: 3,428
Default

Quote:
Originally Posted by Nebuchadnezzar View Post
By the way did I mention to you guys that they also cheat in terms of CPU policy and thermals? I found that out a few weeks ago but don't think I posted it here.

Antutu for example triggers a thermal "boost mode" where the trigger temps are raised by 10C and the bottom throttling freq is set to an A15 core frequency instead of the usual to-A7 throttling. That and that they put a min-CPU frequency of 1200MHz just by having the app opened (and doing nothing).

I think it's only "cheating" if it's application-dependent.
Boosting the GPU's frequency for select benchmarks is clearly cheating. Whatever they do to balance the device between "snappiness" and battery life is up to them, IMO.
Old 27-Jun-2013, 14:46   #613
Exophase
Senior Member
 
Join Date: Mar 2010
Location: Cleveland, OH
Posts: 1,938
Default

All of these stupid games just to trick an awful and pointless benchmark like AnTuTu. Really depressing state of affairs.
Old 27-Jun-2013, 17:27   #614
Ailuros
Epsilon plus three
 
Join Date: Feb 2002
Location: Chania
Posts: 8,499
Default

Quote:
Originally Posted by Exophase View Post
All of these stupid games just to trick an awful and pointless benchmark like AnTuTu. Really depressing state of affairs.
No benchmark with such an exotic name can be pointless. First and above all....and last unfortunately it's got an exotic name!
__________________
People are more violently opposed to fur than leather; because it's easier to harass rich ladies than motorcycle gangs.
Old 27-Jun-2013, 17:34   #615
french toast
Senior Member
 
Join Date: Jan 2012
Location: Leicestershire - England
Posts: 1,634
Default

Question I have is... are Samsung the only ones doing these kinds of tricks? Do we know if Qualcomm or Nvidia or even Apple do these kinds of things?
Really puts me off Exynos for a while.
Old 27-Jun-2013, 17:57   #616
Ailuros
Epsilon plus three
 
Join Date: Feb 2002
Location: Chania
Posts: 8,499
Default

Quote:
Originally Posted by french toast View Post
Question I have is..are samsung the only one doing these kind of tricks? Do we know if qualcomm or nvidia or even apple does thes kind of things?
Really put me off exynos for a while.
For NV and Tegra I haven't heard or read anything yet; when it comes to GPUs and benchmarks in general, however, I'm sure they've got quite a few corpses hidden in the basement.
__________________
People are more violently opposed to fur than leather; because it's easier to harass rich ladies than motorcycle gangs.
Old 27-Jun-2013, 22:38   #617
french toast
Senior Member
 
Join Date: Jan 2012
Location: Leicestershire - England
Posts: 1,634
Default

Quote:
Originally Posted by Ailuros View Post
For NV and Tegra I haven't heard or read anything yet; for GPUs however and benchmarks I'm sure they've quite a few corpses hidden in the basement.
I dread to think... :/
Old 28-Jun-2013, 00:15   #618
ToTTenTranz
Senior Member
 
Join Date: Jul 2008
Posts: 3,428
Default

There are lots of scandals in the gpu business.

Just off the top of my head: ATi had instructions in the driver that lowered anisotropic filtering quality whenever the Quake 3 executable was detected (Radeon 8500 era), 3DMark Vantage had a physics benchmark that used hardware PhysX so nVidia cards would get higher scores, and I think Intel used to actually disable AF in some benchmarks.

Then there are the many TWIMTBP games, where the TWIMTBP "optimizations" consisted mostly of blocking IQ features when an nVidia card wasn't detected. We get to see that on Android too, unfortunately.

Nebuchadnezzar, would you mind if I share your findings with some blogs, as long as I properly reference your post?
Old 28-Jun-2013, 01:38   #619
mczak
Senior Member
 
Join Date: Oct 2002
Posts: 2,691
Default

Quote:
Originally Posted by ToTTenTranz View Post
There are lots of scandals in the gpu business.

Just from the top of my head, ATi had instructions in the driver that lowered the anisotropic filtering quality whenever the quake 3 executable was detected (Radeon 8500 era), 3dmark vantage had a physics benchmark that used hardware PhysX so the nVidia cards would get higher scores and I think Intel used to actually disable AF in some benchmarks.
Slight nitpick: you couldn't actually lower anisotropic filtering quality on the Radeon 8500, as even its best setting was shitty as hell and essentially useless (disregarding the extreme angle dependency, which at least doesn't make things worse compared to ordinary filtering, it could only do bilinear anisotropic). The Quack issue was about ordinary trilinear filtering: IIRC they used some extreme LOD bias (might have been a bug) plus limited filtering between mipmaps (later known as brilinear filtering), though the latter might have been used in other apps as well, just with a less extreme setting (the extent of brilinear is tweakable). After all, brilinear was quite a popular optimization (and still might be in some markets).
Though imho nVidia beats it all with the uncompetitive FX series in 3DMark03, where they not only used very highly optimized, simpler shaders with fixed-point arithmetic (and the results were visibly different), but the static clip planes especially were hilarious.
Old 28-Jun-2013, 10:54   #620
Ailuros
Epsilon plus three
 
Join Date: Feb 2002
Location: Chania
Posts: 8,499
Default

Quote:
Originally Posted by mczak View Post
.......but especially the static clip planes were hilarious .
Worst cheat ever; it actually gave a whole new definition to hidden surface removal.
__________________
People are more violently opposed to fur than leather; because it's easier to harass rich ladies than motorcycle gangs.
Old 05-Jul-2013, 07:29   #621
balagamer
Registered
 
Join Date: Jul 2013
Posts: 3
Default

Quote:
Originally Posted by Nebuchadnezzar View Post
Oh hell Samsung, shame on you!

I'm currently doing GPU overclocking and voltage control in the kernel for the 5410/i9500 and was screwing around with what was supposed to be a generic max limit only to be surprised by what it actually represents.

This GPU does not run 532MHz; that frequency level is solely reserved for Antutu and GLBenchmark* among things. The GPU on non-whitelisted applications is limited to 480MHz. The old GLBenchmark apps for example run at 532MHz while the new GFXBench app which is not whitelisted, runs at 480MHz. /facepalm

For anybody interested, here's some scores at 640MHz, for comparison's sake of what 544MP3 could do. I tried 700 but that wasn't stable within the prescribed upper voltage limit (1150mV).

GFXBench 2.7.2 (offscreen):
2.7 T-Rex: 14fps
2.5 Egypt: 48fps

Antutu 3DRating (onscreen): 8372 / 31.4fps
Antutu 3.3.1 3D benchmark: 8584

Basemark Taiji: 46.54

3DMark:
Ice storm standard: 11357 overall, 11486 graphics, 58.1fps GT1, 43.8fps GT2
Ice storm extreme: 7314 overall, 6680 graphics, 39.1fps GT1, 23.1fps GT2
With GFXBench, the fill rate scores are drastically different compared to the iPhone 5, despite it having a similar GPU to the S4. Is that limitation due to the platform or to some other factors?
Old 05-Jul-2013, 08:16   #622
ToTTenTranz
Senior Member
 
Join Date: Jul 2008
Posts: 3,428
Default

Quote:
Originally Posted by balagamer View Post
with gfx benchmark the fill rate scores are drastically different when compared to iphone5 despite having similar gpu as s4, is that limitation due to platform or due to some other factors?
Clock speeds. The GPU in the iPhone 5 is clocked lower.
Old 05-Jul-2013, 09:20   #623
Ailuros
Epsilon plus three
 
Join Date: Feb 2002
Location: Chania
Posts: 8,499
Default

Quote:
Originally Posted by ToTTenTranz View Post
Clock speeds. The GPU in the iphone 5 is lower clocked.
325 (A6) vs. 480MHz (E5410)

By the way, the lowest fillrate efficiency out of the crop of Series5XT/6 GPUs they used for that graph should be the 544MP3 in the Exynos 5410:



http://withimagination.imgtec.com/in...iciency-part-8
__________________
People are more violently opposed to fur than leather; because it's easier to harass rich ladies than motorcycle gangs.
Old 05-Jul-2013, 10:08   #624
balagamer
Registered
 
Join Date: Jul 2013
Posts: 3
Default

Quote:
Originally Posted by ToTTenTranz View Post
Clock speeds. The GPU in the iphone 5 is lower clocked.
Take a look at the triangle throughput here:

http://gfxbench.com/compare.jsp?cols...Apple+iPhone+5

The iPhone 5 doubles the score on almost every chart there; may be due to the difference in rendering resolution?
Old 05-Jul-2013, 11:25   #625
tangey
Senior Member
 
Join Date: Jul 2006
Location: 0x5FF6BC
Posts: 1,032
Default

Quote:
Originally Posted by ToTTenTranz View Post
Clock speeds. The GPU in the iphone 5 is lower clocked.
In GLBenchmark, the Samsung 5410 has a 13% better off-screen fill rate than the iPhone 5, but a 60%+ higher clock (assuming the 5410 is clocking at 533MHz for the test).

So either Samsung's graphics datapath is inferior to the one in the A6, or it's a driver issue.
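That comparison in per-clock terms — the 13% delta and the 533MHz/325MHz clocks are from the posts above; the function itself is plain arithmetic, not anything from a driver:

```c
#include <assert.h>

/* Normalize a fill-rate ratio by the clock ratio.
 * fill_ratio = fill(A)/fill(B); result = per-MHz fill(A)/fill(B). */
static double fillrate_per_mhz_ratio(double fill_ratio,
                                     double clk_a_mhz, double clk_b_mhz)
{
    return fill_ratio / (clk_a_mhz / clk_b_mhz);
}
```

1.13 / (533/325) comes out around 0.69, i.e. the 544MP3 would be delivering roughly 30% less fill rate per MHz than the A6's GPU in that test — which is what makes the datapath-or-driver question interesting.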
__________________
Check out my blog charting my challenge to have a maximum value rewards holiday:-
http://www.goingonrewards.com
