Samsung Galaxy S series rumours.....

Yes it depends, and that's exactly why I questioned your initial comment about cluster migration being terrible for battery life :) I'd like to see real-life comparisons between the various usage models of multi-cluster.

My point is, no, it's not better. No matter how much load there is on the A15s, if the A7s can handle it, it's better to do it there. Never mind the caches being powered on.
 
As I don't currently own a TouchWiz device, I forget: can you alter the animation speed in the settings panel à la vanilla Android? The home button lag looks like an exaggerated animation speed.

Yes, I've done that with my GS3 and it helped loads, also disabling the S Voice feature, which also added a delay as the OS seemed to wait a tick to see if you were going to use it.

But the lag I saw some weeks back that got me concerned was swiping across homescreens, and especially in the app drawer when you swipe through the widgets tab... considering the hardware on show and the Jelly Bean enhancements (Project Butter), this is very worrying.

If you compare it to the supposedly inferior-powered HTC One, it's night and day in UI speed... it seems both companies have swapped positions lol.
 
But the lag I saw some weeks back that got me concerned was swiping across homescreens, and especially in the app drawer when you swipe through the widgets tab... considering the hardware on show and the Jelly Bean enhancements (Project Butter), this is very worrying.

According to Engadget (however trustworthy you consider them) the GS4 is pretty lag free unless you have Air View and/or Air Gestures turned on. Those 'features' apparently cause lag.
 
Yeah, that could be the reason. That Air View gimmick also seems unresponsive and laggy in nearly every demo I've seen, although to be fair that could just be the knack of hovering your finger at the correct distance.
 
My point is, no, it's not better. No matter how much load there is on the A15s, if the A7s can handle it, it's better to do it there. Never mind the caches being powered on.
So you are sure that the power required for one A7 plus its L2 cache plus the cost of the coherency interconnect between the two clusters is less than one A15 core? I'm not saying you are wrong, but without real comparisons you won't convince me.
 
So you are sure that the power required for one A7 plus its L2 cache plus the cost of the coherency interconnect between the two clusters is less than one A15 core? I'm not saying you are wrong, but without real comparisons you won't convince me.
The CCI doesn't get shut off when a cluster is shut down; it's something that's perpetually on.

I'll extract power data later on.
 
The CCI doesn't get shut off when a cluster is shut down; it's something that's perpetually on.
Yes, but the work it does is light enough that I'd guess its power consumption is measurably lower. I also expect part of it to be clock-gated when a cluster is off.

I'll extract power data later on.
Great!
Were you finally able to change the migration policy?
 
Perhaps Samsung had to pull resources from the Exynos 5410 software team to work on the Qualcomm version. Not using CPU migration is wasting the potential of big.LITTLE; no wonder their keynote never mentioned Octa.
 
So you are sure that the power required for one A7 plus its L2 cache plus the cost of the coherency interconnect between the two clusters is less than one A15 core? I'm not saying you are wrong, but without real comparisons you won't convince me.
In my opinion, if power were indeed higher, that would mostly imply that the cache hierarchy is highly suboptimal. I still suspect an Intel-style cache hierarchy with much smaller L2s and a fully shared L3 might work better for big.LITTLE, but who knows...
 
My point is, no, it's not better. No matter how much load there is on the A15s, if the A7s can handle it, it's better to do it there. Never mind the caches being powered on.
That's still not evidence that cluster migration is "terrible for battery life".

In typical usage, the need for A15s is already on a very low duty cycle, and within that time, I would imagine that one task is usually consuming most of the power. Shuffling low-usage threads to the A7s may only be saving a little bit of power for a little bit of time; on top of that, this is only for the CPU, while other components use the same power under all migration schemes.

The low-hanging fruit is using the A7s for the 95% of the time that high speed isn't needed on any core. I wouldn't be surprised if it were only 15 minutes of battery life being sacrificed with cluster migration. That's not "terrible" by any definition.
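To make that duty-cycle argument concrete, here's a back-of-the-envelope sketch. Every figure in it (battery capacity, cluster power draws, duty cycle, background-work share) is a made-up placeholder, not a measurement, so treat it purely as a shape-of-the-argument illustration:

```python
# Back-of-envelope battery impact of cluster vs. finer-grained migration.
# ALL figures below are hypothetical placeholders, not measurements.

BATTERY_WH = 9.9        # roughly a 2600 mAh cell at 3.8 V
BASELINE_W = 1.0        # assumed average total draw (screen + SoC + radios)
A15_CLUSTER_W = 1.5     # assumed A15 cluster power while active
A7_CLUSTER_W = 0.25     # assumed A7 power for the same light threads
A15_DUTY = 0.05         # A15s needed ~5% of screen-on time
BG_SHARE = 0.10         # background work as a fraction of that load

# With cluster migration, light background threads ride along on the A15
# cluster during that 5%; finer-grained migration could run them on the A7s.
waste_w = A15_DUTY * BG_SHARE * (A15_CLUSTER_W - A7_CLUSTER_W)

hours_lost = BATTERY_WH / BASELINE_W - BATTERY_WH / (BASELINE_W + waste_w)
minutes_lost = hours_lost * 60
```

Under these made-up numbers the penalty comes out to a few minutes of screen-on time, which is the "not terrible" point above; different assumptions would obviously shift it.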
 
That's still not evidence that cluster migration is "terrible for battery life".

In typical usage, the need for A15s is already on a very low duty cycle, and within that time, I would imagine that one task is usually consuming most of the power. Shuffling low-usage threads to the A7s may only be saving a little bit of power for a little bit of time; on top of that, this is only for the CPU, while other components use the same power under all migration schemes.

The low-hanging fruit is using the A7s for the 95% of the time that high speed isn't needed on any core. I wouldn't be surprised if it were only 15 minutes of battery life being sacrificed with cluster migration. That's not "terrible" by any definition.
Gaming is a perfect example where you have continuous high load on 1-2 cores; any additional threads ARE at an efficiency disadvantage (3x the power, IIRC), so how are you going to argue against such a use case? The current kernel doesn't have thread-count hot-plugging power management, so all 4 cores are always online and only managed by the CPUIdle driver.

I'll do some proper use-analysis at some point and report back.
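For reference, the thread-count hot-plugging the current kernel lacks would normally be driven through the standard Linux sysfs interface. A minimal sketch (the sysfs path is the real kernel interface; the `cores_for_load` policy is entirely made up for illustration):

```python
# Minimal sketch of CPU hot-plugging via the standard Linux sysfs
# interface.  Writing "0"/"1" to the "online" file takes a core
# offline/online (requires root; cpu0 is often not hot-pluggable).

def online_path(cpu: int) -> str:
    return f"/sys/devices/system/cpu/cpu{cpu}/online"

def set_cpu_online(cpu: int, online: bool) -> None:
    with open(online_path(cpu), "w") as f:
        f.write("1" if online else "0")

def cores_for_load(runnable_threads: int, max_cores: int = 4) -> int:
    """Toy policy (hypothetical): keep only as many cores online
    as there are runnable threads, with at least one online."""
    return max(1, min(runnable_threads, max_cores))
```

A governor doing this would keep idle cores fully offline instead of leaving all four up for CPUIdle to manage, which is exactly the gap being complained about.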
 
In my opinion, if power were indeed higher, that would mostly imply that the cache hierarchy is highly suboptimal. I still suspect an Intel-style cache hierarchy with much smaller L2s and a fully shared L3 might work better for big.LITTLE, but who knows...
Look at slide 15: http://cache-www.intel.com/cd/00/00/51/61/516194_516194.pdf
Of course that's not directly comparable, but caches tend to consume more than people think. Note also that this probably includes all cache levels.

Another point that bothers me in the multi-cluster-on scenario is that most memory accesses will be coherent and hence make requests to the L2 tag controller/RAM of the other cluster (given that the CCI has no directory).

In the end I'm not sure what the best thread placement strategy is and that's why I'm waiting for Nebuchadnezzar results.
 
Another point that bothers me in the multi-cluster-on scenario is that most memory accesses will be coherent and hence make requests to the L2 tag controller/RAM of the other cluster (given that the CCI has no directory).

Shouldn't coherency traffic only go out for stores, as far as exclusive lines are concerned (which should be most of them)? And that traffic should only be a fraction of the write traffic to the caches themselves.
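The exclusive-lines point can be sketched with a toy MESI-style rule. This is a simplification of textbook MESI, not the actual CCI-400/ACE transaction set, which is more nuanced:

```python
# Toy MESI illustration of which accesses need a snoop/coherency
# transaction toward the other cluster.  Simplified textbook MESI,
# not the real ACE protocol the CCI-400 implements.

def needs_snoop(state: str, op: str) -> bool:
    """state in {'M', 'E', 'S', 'I'}, op in {'load', 'store'}."""
    if state in ("M", "E"):
        return False          # line owned exclusively: silent hit,
                              # even a store just moves E -> M locally
    if state == "S":
        return op == "store"  # must invalidate other copies on write
    return True               # 'I': any access must fetch/upgrade
```

So under this model, loads and stores to Modified/Exclusive lines generate no cross-cluster traffic at all, which is why mostly-exclusive working sets should keep the tag lookups in the other cluster's L2 relatively rare.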
 
Gaming is a perfect example where you have continuous high load on 1-2 cores; any additional threads ARE at an efficiency disadvantage (3x the power, IIRC), so how are you going to argue against such a use case?
I don't have numbers from real-world examples, but if we're talking about background threads needing 6% of the main thread's power instead of 2%, it's not going to matter much, especially in light of display and GPU power consumption making that difference even less tangible.

If you do get some data, though, then I'd love to hear about it.
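The 2% vs. 6% point is easy to put in arithmetic. Again, the watt figures here are hypothetical placeholders chosen only to show how display/GPU power dilutes the CPU-side difference:

```python
# How much a 2% -> 6% background-thread power penalty matters once
# display and GPU are included.  ALL watt figures are hypothetical.

MAIN_THREAD_W = 1.0   # assumed power of the heavy main thread
DISPLAY_GPU_W = 2.0   # assumed display + GPU draw during gaming

total_cheap = MAIN_THREAD_W * 1.02 + DISPLAY_GPU_W   # background on A7s
total_costly = MAIN_THREAD_W * 1.06 + DISPLAY_GPU_W  # background on A15s

overhead_pct = 100 * (total_costly - total_cheap) / total_cheap
```

With these assumptions a 3x CPU-side penalty on the background threads shrinks to an overall difference of well under 2%, which is the "less tangible" point above.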
 

Why are they using a Mali-4xx GPU? That's an old GPU, right?
Dunno if that could handle FHD resolution.


That is worse than the SGX544, right?

Why are they using such an old GPU? That would be worse than the S4's GPU, right?

Screenshot_2013-05-05-11-32-28.png


This is an Rgbenchmark run on the GT-I9500. This benchmark is supposed to measure memory bandwidth; if so, why is it so low? Or am I wrong?

Complete Noob here:(
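For a sanity check on what the benchmark "should" show: assuming the commonly quoted 2x32-bit LPDDR3-1600 memory configuration for the Exynos 5410 (treat that as an assumption, not a confirmed spec), the theoretical peak works out as:

```python
# Theoretical peak memory bandwidth for an assumed 2x32-bit
# LPDDR3-1600 configuration.  Real benchmarks typically reach
# only a fraction of this peak, especially single-threaded ones.

channels = 2
bus_bits = 32
transfers_per_sec = 1600e6   # LPDDR3-1600: 1600 MT/s per pin

peak_gb_s = channels * (bus_bits / 8) * transfers_per_sec / 1e9
```

So a low benchmark number isn't necessarily wrong: copy-style tests routinely land well below the theoretical peak because of CPU-side limits, prefetcher behaviour, and the benchmark using a single thread.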
 
If its clocks are high enough (~650MHz?), a Mali-450MP8 could be on par with most high-end SoCs.
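Rough fillrate math for that claim, assuming the usual one pixel per clock per fragment processor quoted for the Mali-400/450 family (the 650 MHz clock is speculative, as above):

```python
# Rough fillrate estimate for a hypothetical Mali-450MP8 at ~650 MHz,
# assuming 1 pixel/clock per fragment processor (the figure usually
# quoted for the Mali-400/450 family).

fragment_cores = 8
clock_hz = 650e6
pixels_per_clock_per_core = 1

fill_gpix_s = fragment_cores * pixels_per_clock_per_core * clock_hz / 1e9

# For scale: 1080p at 60 fps needs this much per layer of overdraw.
fhd_60_gpix_s = 1920 * 1080 * 60 / 1e9
```

That leaves a lot of fillrate headroom over a single 1080p/60 layer, so raw pixel throughput at FHD shouldn't be the problem; shader-heavy content is a different question since the 4xx-series architecture is much older.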
 