Next-Gen iPhone & iPhone Nano Speculation

Samsung, Mediatek, HiSilicon rely on outside source(s).

In terms of competition with Apple, I'd boil that down to Samsung.
I didn't bother to point out that ARM themselves supply CPU+GPU solutions, claiming some synergies by doing so.

Look, it would be strange indeed if Apple didn't look into designing their own graphics. Of course they do. We'll only know about it outside their walls when and if there are sufficient advantages for them to actually bring it to market.
("Apple A7 GPU" isn't enough of a smoking gun to assume it has already happened.)
 
I just went to Apple's recruitment page and saw that Apple were looking for applicants for six more graphics positions, up 2/10. (They've been doing this for quite a while.)
This was the job description for the site manager. The other positions are quite clear as well.
Description
Recruiting, hiring and building a world-class GFX IP development team

Growing/mentoring an effective leadership and team management structure

Effective collaboration with cross-site teams including architecture, design, validation, and physical implementation

Driving, tracking and executing GFX IP development owned by Site from inception to retirement

Owning regular personnel reviews, compensation planning/advocacy and communication for the Site

Representing Site within larger Apple organization in Austin and Cupertino
 
In terms of competition with Apple, I'd boil that down to Samsung.
I didn't bother to point out that ARM themselves supply CPU+GPU solutions, claiming some synergies by doing so.
Mediatek uses both Mali and Imagination, while HiSilicon uses Vivante and Mali, so I'm not sure what you mean :)

EDIT: I'm not trying to dismiss your guess that Apple wants to design its own GPU; I just wanted to clarify that most SoC providers use external GPU IP.
 
EDIT: I'm not trying to dismiss your guess that Apple wants to design its own GPU; I just wanted to clarify that most SoC providers use external GPU IP.

Considering the job listing I referenced (maybe you were composing your post), I'd say that guessing is not necessary. It is very clear indeed.
 
Entropy - your assessment makes sense. I wonder whether Imagination IP being readily available to competitors is a big factor or not. Given the market's predilection for specs, maybe this is a further move to cement Apple's position at the high end of the market in a way that a competitor cannot match simply by buying third-party IP/chips?
 
Update on SRAM - they now say 4MB http://www.chipworks.com/en/technical-competitive-analysis/resources/blog/inside-the-a7/

Which is what I suspected in the first place. They still think it's for fingerprint data, and thus posit that it's completely lost when the device battery dies. I'm still skeptical.
A 4MB shared last level cache between the CPU and GPU would be interesting and explain it being a non-standard Rogue.

The PoP memory interface also nearly doubled. No longer dual 32-bit channels?
Have they confirmed how many layers are in the stacked memory package? With only a 68% increase in pin count, and the memory interface organized in a two-large, two-small configuration, it doesn't look like a doubling to either 4x32-bit or 2x64-bit - although a 128-bit memory interface for the A7 would satisfy rumours that the retina iPad Mini could use the A7. Perhaps it could be 2x32-bit main memory plus a dedicated 16-bit DRAM for camera caching and another dedicated 16-bit DRAM for fingerprint data, although this too would lose its data on power off.

Memory bandwidth impacts fill rate, right? Maybe a large cache for the GPU or a wider memory interface could explain why the A7 GPU shows a disproportionate improvement in fill rate compared to other measures.
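To put rough numbers on the interface widths being speculated about, here's a back-of-the-envelope peak-bandwidth calculation. The LPDDR3-1600 transfer rate is my assumption for illustration, not a confirmed A7 spec:

```python
# Theoretical peak bandwidth = (bus width in bytes) x (transfers per second).
# Transfer rate (1600 MT/s, i.e. LPDDR3-1600) is an assumed figure.

def peak_bandwidth_gbs(bus_width_bits, transfer_rate_mtps):
    """Peak bandwidth in GB/s for a given bus width and transfer rate."""
    return bus_width_bits / 8 * transfer_rate_mtps * 1e6 / 1e9

# Compare the 2x32-bit (A6-style) layout with a hypothetical 4x32-bit one.
for channels in (2, 4):
    bw = peak_bandwidth_gbs(channels * 32, 1600)
    print(f"{channels}x32-bit @ 1600 MT/s: {bw:.1f} GB/s")
```

On these assumptions, doubling from 64-bit to 128-bit would take peak bandwidth from 12.8 GB/s to 25.6 GB/s, which is the kind of headroom that could show up in fill-rate tests.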
 
The specialized functions of a GPU - texturing, filtering, mipmapping, texture/image compression, the myriad shader functions, etc. - together with the associated hardware, drivers, scheduler/kernel, and compiler support, require a design team with a wide range of very specific talents. A good CPU would require just as much skill as a good GPU, but the latter is far less likely to be seen from a new design team.

Take Sony, for instance. For the PlayStation 2, they and their internal project partners designed both the CPU and the GPU. The Graphics Synthesizer ended up lacking a modern approach (multi-pass rendering instead of multi-texturing effects) and lacking quality hardware implementations of specialized functionality like mipmapping and texture compression. For the PS3, they set out to design the GPU themselves again: early on they considered adapting the Cell architecture into a GPU, and they actually did R&D on another custom GPU called the Reality Synthesizer. Ultimately, though, they found that a refined GPU solution from an established graphics specialist like nVidia was the best choice.

It does seem Apple may be assembling a new team for ground-up mobile GPU design, but I won't hold my breath that Imagination will end up getting displaced inside iPhone and iPad.
 
It does seem Apple may be assembling a new team for ground-up mobile GPU design, but I won't hold my breath that Imagination will end up getting displaced inside iPhone and iPad.

I don't want to get into a CPU vs GPU complexity argument. (Or rather, I do :), but this is neither the time nor the place.) I'll just point out that there is a fair number of suppliers that are broadly competitive. Qualcomm's Adreno guys, ARM's Mali group, Vivante, Intel, AMD, nVidia, ImgTech - they are all players, and some of them have very limited resources, particularly compared to Apple.

Also, Apple isn't starting right now. They have been actively recruiting for a fair while, so just how far along they are is difficult to say from the outside.
I don't see that there can be any doubt that (premium) mobile graphics is the target. That's where their money is, and in the greater scheme of things it's the only really interesting market.

I was surprised by how outspoken the job offers were. They send a very clear message to applicants that Apple is putting its weight behind this effort.

Before I saw those statements, I assumed that they had "contingency plan" development going on, which would quite possibly only ever become visible to the outside world if their relationship with ImgTech went sour or was disrupted in some way. (That this was a concern of theirs was apparent already when they bought their defensive stake in ImgTech.) But their Orlando effort seems far too substantial to be simply a back-up plan.

That said, just as with Intel and the Macs, as long as it makes sense to them to continue with their current partners, they likely will.
 
Cluster power islands in A7?

This may be a naive noob question, but I can't quite rationalise Apple employing one of the highest-end Rogue cores in the A7 (assuming that it is a G6430, and at a reasonably high clock speed according to Anand) with the somewhat tame results - given the performance of previous PowerVR generations relative to the competition, the suspected 4-cluster implementation, IMG marketing bluster, etc.

So my question is this: could this actually be a G6430, but running in an "X2" mode of operation - i.e. with only 2 of the 4 clusters turned on in the iPhone 5S?

This cluster power-island feature is apparently common to all Rogue cores:

http://withimagination.imgtec.com/index.php/powervr/powervr-g6630-go-fast-or-go-home

"All our ‘Rogue’ cores also include a robust power management mechanism. We’ve defined power islands which can be turned on or off dynamically to optimize power consumption. For example, the PowerVR G6630 enables each cluster pairs to be managed, allowing three modes of operations (X2, X4 and X6) which will help extend the ever-essential battery life of mobile and embedded computing systems."

Could 2 of the 4 clusters be disabled, or running at a significantly reduced speed, in the iPhone, but enabled in the upcoming iPad 5, for example?

I could see how this would lend itself to providing Apple with an efficient solution: produce one single SoC but change its performance via drivers. I'm not sure why this would necessarily be preferable to producing two SoCs with 2 and 4 clusters, per the previous A6/A6X example, given the extra, and potentially unused, die space needed. Obviously IMG think it's a desirable feature.
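As a toy illustration of the idea (purely hypothetical numbers - this assumes shader throughput scales linearly with active clusters, which real workloads won't exactly follow):

```python
# Hypothetical sketch of Rogue cluster power-islanding: a 4-cluster core
# (e.g. a G6430) locked into "X2" mode would expose roughly half its peak
# shader throughput while the driver still reports the same GPU.
# Linear scaling with active clusters is a simplifying assumption.

MODES = {"X2": 2, "X4": 4}  # active clusters per mode on a 4-cluster core

def relative_throughput(mode, total_clusters=4):
    """Fraction of peak ALU throughput for a given power-island mode."""
    active = MODES[mode]
    if not 0 < active <= total_clusters:
        raise ValueError("active cluster count out of range")
    return active / total_clusters

for mode in MODES:
    print(f"{mode}: {relative_throughput(mode):.0%} of peak")
```

If the 5S really ran in X2 while an iPad variant enabled X4, you'd expect roughly a 2x gap in ALU-bound benchmarks between the two, all else being equal.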

Any thoughts?
 
Using a mode that the core could otherwise move in and out of on an almost moment-to-moment basis as a permanent mode of operation would be an interesting way to lock in their performance target for that SoC implementation. However, I'm not sure they'd be able to operate the full set of TMUs that the A7's presumed G6430 is apparently using if half the clusters were permanently asleep.

The discrepancy in performance results compared to the MT8135's G6200 and some of the competition, across several different benchmarks with somewhat varying workloads (3DMark tests versus GfxBench tests, for example), is what continues to puzzle me. While the limiting factors in a couple of benchmark tests might not let a G64xx core flex its advantage over other cores, the discrepancy appears to be reflected to a large degree across all benchmarks so far.
 
We may know more after this week, when the new iPad with presumably an A7X is unveiled and we can compare how differently they deployed the Rogue cores.
 
The current iPad 4, as well as the iPad 3, used a different ASIC than the iPhone 5 and 4S respectively, so it's not unthinkable that the same may be true this time around as well. Considering the huge resolution difference, you'd pretty much expect more graphics resources, I'd think...
 
The current iPad 4, as well as the iPad 3, used a different ASIC than the iPhone 5 and 4S respectively, so it's not unthinkable that the same may be true this time around as well. Considering the huge resolution difference, you'd pretty much expect more graphics resources, I'd think...

Otherwise it would be at a standstill.

The iPad 4 with its A6X has more or less the same graphics performance as the A7 SoC in the iPhone 5S.
 
I see new entries from Apple on the Khronos OpenGL ES conformance list for all products iPhone 4 and above running iOS 7.0, for both ES 1.1 and ES 2.0.

All, that is, except the iPhone 5S. Looks to me like they held this back until after this Tuesday's product launches.
 
Otherwise it would be at a standstill.
Good point!

Wonder if there will be any "iWatch" announcement or not. I'm guessing...no. Although I'm still kindling a small hope that there will be, due to Samsung hurriedly rushing out their own product in a panic just to get there first.

...Aghh, just one more day now (+ change) ghfjgdshjgfhjkdsghjkfasghjfsghfsahdfhj!

Here's to hoping they show off a 4K Thunderbolt display as well to go with the new Mac Pro. (Bit off topic tho.)
 
I see new entries from Apple on the Khronos OpenGL ES conformance list for all products iPhone 4 and above running iOS 7.0, for both ES 1.1 and ES 2.0.

All, that is, except the iPhone 5S. Looks to me like they held this back until after this Tuesday's product launches.

There goes that idea.

iPhone 5S ES 2.0 and ES 1.1 conformance entries were posted today.

Might point to at least some of the new products coming today using a different GPU.
 
The iPad Air just got announced. It uses an Apple A7 chipset - the same as in the iPhone 5S, in other words. Maybe there won't be an A7X? Probably too early to tell; they're not yet done with their keynote.
 