Apple investigating big.LITTLE on the GPU?

Why wouldn't you just have a better power-gated GPU that scales up/down as needed?
 
You could. But building a smaller GPU out of slower, smaller transistors would still probably offer some power savings over just power gating the big GPU. Although I have no idea if that's significant or worth the added complexity. It could just be a patent for the sake of having it.
 
Why wouldn't you just have a better power-gated GPU that scales up/down as needed?

Hum...

[image: NtIoYzM.gif]


It looks like no matter how much you downclock/downvolt the big GPU, the little ones always end up consuming less at idle.
Same thing with big.LITTLE CPUs, I guess.
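Quick back-of-the-envelope on why that falls out (every constant here is invented purely for illustration, not a measurement): dynamic power scales roughly with activity·C·V²·f, while leakage scales with transistor count and voltage. The big GPU has far more (and faster, leakier) transistors plus a higher minimum stable voltage, so its floor stays above a little GPU built from slower, high-Vt transistors.

```swift
import Foundation

// Toy CMOS power model: P = activity * C * V^2 * f  +  V * I_leak
// Every constant below is a made-up illustrative value, not a measurement.
func power(cap: Double, volts: Double, hz: Double,
           activity: Double, leakAmps: Double) -> Double {
    return activity * cap * volts * volts * hz + volts * leakAmps
}

// "Big" GPU parked at its lowest stable DVFS point: lots of fast, leaky transistors.
let bigIdle = power(cap: 2e-9, volts: 0.80, hz: 150e6,
                    activity: 0.05, leakAmps: 0.120)

// "Little" GPU built from fewer, slower, high-Vt transistors:
// much less leakage and a lower minimum voltage.
let littleIdle = power(cap: 0.3e-9, volts: 0.65, hz: 150e6,
                       activity: 0.05, leakAmps: 0.015)

print(String(format: "big GPU near idle:    %.3f W", bigIdle))    // ~0.11 W
print(String(format: "little GPU near idle: %.3f W", littleIdle)) // ~0.01 W
```

No amount of downclocking touches the leakage term; only smaller/slower transistors (or gating the whole block off) do.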
 
Apple has shipped Macs for years with both discrete and integrated GPUs, with transparent switching between them based on load. How does this differ?
 
Apple has shipped Macs for years with both discrete and integrated GPUs, with transparent switching between them based on load. How does this differ?

The switching in Macs happens when you launch a 3D application (as far as I know). But something like big.LITTLE happens within the execution of a single application, or indeed a single thread, in about 20µs, if memory serves. It's faster by a few orders of magnitude. So if Apple is working on something like big.LITTLE for GPUs, it's quite different indeed.
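To show how coarse the Mac-style granularity is: on a dual-GPU Mac an application sees both GPUs and the choice is made per app/context. A rough Swift sketch using the macOS-only Metal enumeration API (Metal postdates much of the mux-era switching, so this is just to illustrate where the decision sits):

```swift
import Metal

// macOS-only: list every GPU the system exposes. On a dual-GPU MacBook Pro
// this returns both the integrated and the discrete device; an app (or the
// automatic mux) chooses one per context -- app-level granularity,
// nothing like a ~20µs per-thread migration.
for device in MTLCopyAllDevices() {
    let kind = device.isLowPower ? "integrated (low power)" : "discrete"
    print("\(device.name): \(kind)")
}
```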
 
But it's intriguing that they'd pass on big.LITTLE for CPUs and then use it on GPUs.
 
But it's intriguing that they'd pass on big.LITTLE for CPUs and then use it on GPUs.

I'm actually kind of curious what the A9 will be. A higher-performing dual-core, a tri-core, or a big.LITTLE setup (2+2)? If it's indeed on a FinFET process, they have some room on the die to play with.

I think single-threaded performance may be good enough for the time being, so I'd rather have them focus on power optimization to help battery life.
 
I'm actually kind of curious what the A9 will be. A higher-performing dual-core, a tri-core, or a big.LITTLE setup (2+2)? If it's indeed on a FinFET process, they have some room on the die to play with.
If per-transistor cost is as close to the previous process as I recall reading, I don't think the die will grow much in terms of transistors. Apple is very cost conservative; they'd rather rely on flash and thunder than on substantial, tangible improvements if such an improvement would be expensive. After all, they stuck the iPhone 6 Plus with only 1GB of RAM, the same as the iPhone 5 from two years prior, despite the vastly higher screen resolution.
 
The 2015 date stamp on this got me interested, which is why I posted it. However, closer examination shows it is a lot older than that, and the former AMD guy named in the patent, who was with Apple at the time, is back with AMD.

Likely as not, it's not relevant to iOS products.
 
The 2D 'core' in Rogue isn't enough by itself to draw today's modern mobile OS interfaces. It's capable of assisting, but if the 3D parts of the GPU were turned completely off, it wouldn't have the ability to do it by itself.
 
The 2D 'core' in Rogue isn't enough by itself to draw today's modern mobile OS interfaces. It's capable of assisting, but if the 3D parts of the GPU were turned completely off, it wouldn't have the ability to do it by itself.

Interesting. For a modern UI on a ≥1536p tablet display, would something like ~1.5 GTexels/s of fill rate (at least) be advisable for a ULP GPU today?
 
That's more than decent for 1536p60 rendering, for today's UIs.
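Rough arithmetic behind that, assuming a 2048×1536 panel and a 4× compositing/overdraw factor (the overdraw factor is a guess for illustration, not a measured figure):

```swift
// Pixels pushed per second for a 2048x1536 panel at 60 Hz.
let pixelsPerFrame = 2048.0 * 1536.0            // ~3.15 Mpixels
let pixelsPerSecond = pixelsPerFrame * 60.0     // ~189 Mpixels/s

// UI compositing touches many pixels more than once (layered surfaces,
// blending, scaling). Assume a 4x overdraw factor -- an illustrative guess.
let overdraw = 4.0
let texelsPerSecond = pixelsPerSecond * overdraw

print(texelsPerSecond / 1e9)  // ~0.75 GTexels/s
```

So ~1.5 GTexels/s leaves roughly 2× headroom for heavier overdraw or transient effects.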
 
On 28HPM or smaller? Not really. Those aren't "monster" ULP GPUs like the 8-cluster GPU in the A8X, but rather GPUs like a Mali-450MP2, T760MP, SGX544MP1, G1110, or the like.
 