Google Nexus 6P

This shows the ridiculousness of S810's throttling:

[Image: tCY5Ohq.png]


"Hey, I can play a game with cool graphics... for 7 minutes until its framerate becomes unbearable."
 
Realistic expectations ;)
For realistic expectations, look at benchmarks and reviews that test the device the way you would actually use it.

High burst performance is genuinely useful for some applications. Limiting clock speeds to sustainable levels wouldn't create a better experience for users.
 
Does the Nexus 6P allow thermal throttling to be disabled?

My LG G Pro 2 allows thermal throttling to be disabled through a hidden menu (it's literally called "Hidden Menu"). Some Samsung phones also allow thermal throttling to be disabled for whitelisted apps (the Gear VR app, some benchmarks).
 
A phone isn't always used while being held in your hands...
e.g. when you use it for VR, or for watching video / playing games with a case stand or something to prop it up.
 
High burst performance is genuinely useful for some applications.
Considering we're still talking about GPU performance, which applications exactly are you talking about?

I genuinely thought you were being ironic.
 
Gaming isn't the only use of the GPU; many compute uses, e.g. local photo or video editing/enhancement, would definitely benefit from higher burst performance...
 
If GPU speed affects web rendering or phone UI performance (or, like JohnH said, photo/video editing, or basically anything that can be GPU-accelerated), those "turbo" clocks definitely would help. Actually, look at it this way: sustained performance probably only matters for gaming, so people who don't game can still benefit from the "turbo" clocks.
Anyway, about the thermal throttling... I used to disable it on my LG phone, but in the end I chose to re-enable it, because sometimes while on the charger the phone would actually stop charging because it overheated! With throttling off, the phone can definitely get very hot... very, very hot. Maybe the SoC can take the heat, but I'm not sure the other components can, especially continuously.
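Side note for anyone who wants to see how hot their own phone actually gets: the kernel exposes its thermal sensors under /sys/class/thermal, so you can watch them from an adb shell while charging or gaming. Here's a rough Python sketch; which zones exist and what their "type" labels are varies per device, so treat the details as assumptions rather than anything Nexus 6P specific.

```python
# Rough sketch: poll the Linux thermal zones a phone kernel exposes.
# Paths follow the standard /sys/class/thermal layout; which zones exist
# (and what "type" they report) varies per SoC, so this is illustrative only.
import glob
import time

def read_zones():
    readings = {}
    for zone in glob.glob("/sys/class/thermal/thermal_zone*"):
        try:
            with open(zone + "/type") as f:
                name = f.read().strip()
            with open(zone + "/temp") as f:
                # Most kernels report millidegrees Celsius.
                temp_c = int(f.read().strip()) / 1000.0
            readings[name] = temp_c
        except (OSError, ValueError):
            continue  # some zones are unreadable without root
    return readings

if __name__ == "__main__":
    while True:
        readings = read_zones()
        print(", ".join(f"{n}: {t:.1f}°C" for n, t in sorted(readings.items())))
        time.sleep(1)
```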
 
Compute is a poor excuse that I keep hearing from people, but if you actually go and profile the GPU clocks used in such scenarios, you'll see that they basically never reach these turbo clocks. Not even vendors use this explanation; they outright tell us that the higher clocks are there for higher-TDP systems, but you know my counter-argument against that.
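For anyone who wants to try that kind of profiling themselves, a crude way is to just poll the current GPU clock from sysfs while running your workload. The sketch below assumes an Adreno (kgsl) device and the /sys/class/kgsl/kgsl-3d0/gpuclk node; Mali and PowerVR parts expose different paths, so the exact node is an assumption, not a universal interface.

```python
# Minimal sketch for sampling the current GPU clock while a workload runs,
# e.g. from `adb shell`. The sysfs node below is the one Adreno (kgsl)
# kernels commonly expose; other GPUs use different paths, so adjust as needed.
import time

GPU_CLK_NODE = "/sys/class/kgsl/kgsl-3d0/gpuclk"  # assumption: Adreno/kgsl device

def sample_gpu_clock():
    with open(GPU_CLK_NODE) as f:
        return int(f.read().strip())  # reported in Hz on kgsl

if __name__ == "__main__":
    seen = {}
    try:
        while True:
            mhz = sample_gpu_clock() // 1_000_000
            seen[mhz] = seen.get(mhz, 0) + 1
            time.sleep(0.05)  # ~20 samples/s; DVFS switches slower than this
    except KeyboardInterrupt:
        total = sum(seen.values()) or 1
        for mhz, count in sorted(seen.items()):
            print(f"{mhz:4d} MHz: {100.0 * count / total:5.1f}% of samples")
```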
 
Compute is a poor excuse that I keep hearing from people, but if you actually go and profile the GPU clocks used in such scenarios, you'll see that they basically never reach these turbo clocks. Not even vendors use this explanation; they outright tell us that the higher clocks are there for higher-TDP systems, but you know my counter-argument against that.

I don't think compute is a poor excuse for turbo clocks; in fact, I think the exact opposite is true. However, if the higher clocks are only being enabled for benchmarks, then clearly there is no valid excuse for them!
 
I don't think compute is a poor excuse for turbo clocks; in fact, I think the exact opposite is true. However, if the higher clocks are only being enabled for benchmarks, then clearly there is no valid excuse for them!
You misunderstand; they're not disabled or limited in any way. There's simply no load out there right now that makes DVFS scale up to those clocks, so no, they're not being used for image processing or webpage rendering.
 
You misunderstand; they're not disabled or limited in any way. There's simply no load out there right now that makes DVFS scale up to those clocks, so no, they're not being used for image processing or webpage rendering.

That's interesting. I can think of quite a few compute workloads that could benefit from a significant short-term clock boost; I guess vendors just aren't deploying that yet, perhaps because going compute-heavy to the extent of needing to overclock isn't going to be good for battery life...
 
I think a lot of it is that there's also quite a bit of CPU work going on in most workloads that use the GPU (browsers in particular), so balancing power across the whole SoC probably doesn't come out in favour of high GPU frequencies. So the GPU gets used, but unless vendors rethink how they balance power, it's never going to work out the way we'd expect over here in GPU land.
 
I think a lot of it is that there's also quite a bit of CPU work going on in most workloads that use the GPU (browsers in particular), so balancing power across the whole SoC probably doesn't come out in favour of high GPU frequencies. So the GPU gets used, but unless vendors rethink how they balance power, it's never going to work out the way we'd expect over here in GPU land.
The issue is that I'm currently not aware of any SoC whose DVFS policies can even respond to the super fine-grained high loads you'd see in browsers or similar use-cases. Most SoCs out there switch frequency on a ~100ms sample rate, and GPU governors have mostly always been step-wise, so it always takes a continuous load 200-300ms to trigger the highest frequencies. At that point you'd need user-space QoS optimizations on the GPU frequency, and AFAIK only Samsung does stuff like that, and even there they never request the highest frequencies.
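To illustrate where that 200-300ms figure comes from: with a ~100ms sample window and a step-wise policy that moves at most one operating point per decision, even a constant full load needs several windows to climb to the top frequency. Here's a toy simulation; the OPP table and thresholds are made-up numbers, not any specific SoC's governor.

```python
# Toy simulation of a step-wise GPU DVFS governor: one decision per 100 ms
# sample window, moving at most one frequency step per decision.
# The OPP table and the 90%/60% thresholds are illustrative, not from a real SoC.

OPP_TABLE_MHZ = [180, 305, 450, 600]   # made-up operating points
SAMPLE_MS = 100                        # typical-ish sample window
UP_THRESHOLD = 0.90                    # busy fraction needed to step up
DOWN_THRESHOLD = 0.60                  # below this we step down

def simulate(load_trace):
    """load_trace: busy fraction (0..1) observed in each 100 ms window."""
    idx = 0
    history = []
    for t, busy in enumerate(load_trace):
        if busy > UP_THRESHOLD and idx < len(OPP_TABLE_MHZ) - 1:
            idx += 1
        elif busy < DOWN_THRESHOLD and idx > 0:
            idx -= 1
        history.append((t * SAMPLE_MS, OPP_TABLE_MHZ[idx]))
    return history

if __name__ == "__main__":
    # A workload that becomes fully busy at t=0 and stays busy.
    trace = [1.0] * 8
    for ms, mhz in simulate(trace):
        print(f"t={ms:4d} ms  ->  {mhz} MHz")
    # Even under constant full load, the top OPP is only reached after
    # several windows, i.e. a few hundred milliseconds of sustained load.
```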
 
Hmm, if you use the Interactive governor, it quickly skyrockets to the max frequency even when simply scrolling the screen.

Currently I use ondemand, and yeah, the initial activity lag is noticeable but not really annoying (I'm using a Snapdragon 800 at 2.x GHz).
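For reference, the governor being discussed here lives in the standard Linux cpufreq sysfs tree, so you can inspect it per core (and switch it, with root) without any hidden menu. A quick sketch using the stock paths; which governors are actually available (interactive, ondemand, ...) depends on the kernel build.

```python
# Sketch: inspect and switch the cpufreq governor via the standard Linux sysfs
# layout. Reading works unprivileged on most devices; writing requires root,
# and the set of available governors depends on the kernel.
import glob

def show_governors():
    for cpu in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq")):
        with open(cpu + "/scaling_governor") as f:
            gov = f.read().strip()
        with open(cpu + "/scaling_cur_freq") as f:
            mhz = int(f.read().strip()) // 1000  # reported in kHz
        print(f"{cpu}: {gov} @ {mhz} MHz")

def set_governor(governor):
    # Needs root; raises PermissionError otherwise.
    for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"):
        with open(path, "w") as f:
            f.write(governor)

if __name__ == "__main__":
    show_governors()
    # set_governor("ondemand")  # uncomment on a rooted device
```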
 