PowerVR Rogue Architecture

Interesting that MediaTek provided a quote, and that their guy basically says they're a licensee, yet IMG still feels they can't outright say they are.

IMG won some early smartphone SoC designs, but has failed to do so meaningfully in the last couple of cycles, with MediaTek generally only using IMG IP in tablet SoCs. At this stage, if IMG had continued to win the bulk of those smartphone SoCs, they'd be getting 250M+ units p.a. from there.

So it'll be interesting to see whether this is indeed for smartphone SoCs, or whether Mali still has them locked out of there and this MediaTek agreement is more for tablets and TVs.
 
It was a toss-up whether to post this here or in the Vulcan thread.

I note that in the Khronos Vulcan "product" conformance pages, the GX6250 and GX6450 are listed as conformant, running with an Intel i5-2500K processor. The G6230 and G6430 are also listed as conformant, running with an i5-3470. See the entries for the 15th of Feb.

I'm puzzled by those listings, given that IMG GPU IP only exists as part of SoCs. I thought Khronos testing was for actual hardware, not simulations or theoretical conformance. If I'm not wrong, how do the above GPU IPs get paired with Intel Core i5 processors in these tests? Special add-in boards? It seems odd to me for IMG to have to submit some strange Intel-based systems for conformance testing, and go to the bother of writing x86 Vulcan drivers for Series6 Rogue. Wouldn't it be much handier to pick some ARM Android platforms?

I think MediaTek implemented the GX6250, and didn't Apple implement the GX6450 in the iPhone 6?
 
You can submit conformance on whatever you like. We chose our emulator this time (x86-64 host PCs with the emulated GPU connected as a PCIe device), emulating some popular cores also shipping in real silicon. Makes it much easier for us, versus mucking about on customer platforms we don't have full control over, software wise. We could have submitted on a bunch more emulated (and real) GPUs, but it doesn't buy us anything to do so at the moment.
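
Purely to illustrate what "connected as a PCIe device" means from the host's point of view, here's a minimal Linux sketch (my illustration, not our actual tooling; 0x1234 below is a made-up placeholder vendor ID): walk sysfs and the emulated GPU shows up exactly like real silicon would, so everything above that point in the stack, including the conformance tests, is none the wiser.

```c
/* Minimal sketch (illustrative only, not our tooling): on the x86-64
 * host, the emulated GPU appears in sysfs like any real PCIe device.
 * 0x1234 is a made-up placeholder vendor ID. */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

#define WANTED_VENDOR "0x1234"

int main(void)
{
    DIR *bus = opendir("/sys/bus/pci/devices");
    if (!bus) {
        perror("opendir");
        return 1;
    }

    struct dirent *entry;
    while ((entry = readdir(bus)) != NULL) {
        if (entry->d_name[0] == '.')
            continue;

        /* Each device directory exposes its config-space IDs as files. */
        char path[512];
        snprintf(path, sizeof path, "/sys/bus/pci/devices/%s/vendor",
                 entry->d_name);

        FILE *f = fopen(path, "r");
        if (!f)
            continue;

        char vendor[16] = {0};
        if (fgets(vendor, sizeof vendor, f))
            vendor[strcspn(vendor, "\n")] = '\0';
        fclose(f);

        if (strcmp(vendor, WANTED_VENDOR) == 0)
            printf("GPU candidate at PCI address %s\n", entry->d_name);
    }

    closedir(bus);
    return 0;
}
```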

As for drivers supporting a certain processor architecture, these days that's honestly not too far off just using a different toolchain. We don't design the code for x86, or ARM, or MIPS, or whatever. We just write a (hopefully) nice clean driver that isn't hard to compile for any given target ISA and ABI.
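
As a sketch of what that ISA-neutral style looks like in practice (illustrative only, not our actual source): fixed-width types, explicit byte order, and volatile accessors, so the same file compiles cleanly for x86, ARM or MIPS with nothing but a toolchain switch.

```c
/* Illustrative only: the kind of ISA-neutral driver code that retargets
 * with a toolchain switch, e.g.
 *   cc --target=x86_64-linux-gnu ...  or  cc --target=aarch64-linux-gnu ...
 */
#include <stdint.h>
#include <string.h>

/* Fixed-width types instead of int/long, so struct layout matches on
 * every ABI the driver is built for. */
struct fw_message {
    uint32_t opcode;
    uint32_t payload_len;
};

/* Byte order handled explicitly rather than assuming the host shares
 * the device's (little-endian) view of memory. */
static inline uint32_t to_device_le32(uint32_t host_val)
{
    uint8_t bytes[4] = {
        (uint8_t)(host_val),
        (uint8_t)(host_val >> 8),
        (uint8_t)(host_val >> 16),
        (uint8_t)(host_val >> 24),
    };
    uint32_t out;
    memcpy(&out, bytes, sizeof out);
    return out;
}

/* Register access through volatile pointers; no inline assembly or
 * architecture-specific intrinsics leaking into common code. */
static inline void reg_write32(volatile uint32_t *reg, uint32_t val)
{
    *reg = to_device_le32(val);
}
```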

Also, it's Vulkan (helps to get it right, because Googling for it otherwise is difficult).
 
Thanks for the answer. I didn't realise submission options were so open, given that the list is for product conformance (which to me meant physical devices/chips etc.). As I recall, most IMG IP that I've seen on Khronos was tested in end silicon, sometimes by IMG, sometimes by your customers. Not that I'm casting doubt on IMG in particular, but in general one imagines it leaves things open to abuse, or even genuine accidental non-compliance in end devices. I accept it's not the same thing at all, but having used CPU in-circuit emulators (ICE) for longer than I care to remember, my experience has been that they have some limitations in performance/functionality compared to the actual CPU.

Also, it's Vulkan (helps to get it right, because Googling for it otherwise is difficult).

You know, I had to look twice to figure out what I'd done wrong. I've been seeing Vulcan all along in my head... too much Star Trek.
 
Probably not a perfect fit for this thread, but close enough.
http://www.design-reuse.com/news/39...e-synopsys-implementation-signoff-tools.html?

MOUNTAIN VIEW, Calif. -- March 31, 2016 -- Synopsys, Inc. (Nasdaq: SNPS) today announced Intel® Custom Foundry's certification of digital and signoff implementation tools from the Synopsys Galaxy™ Design Platform for Intel's 10-nanometer (nm) tri-gate process technology. Synopsys and Intel Custom Foundry employed a PowerVR® GT7200 GPU design from Imagination Technologies™ to develop the reference flow. Customers of Intel Custom Foundry now have access to the 10-nm system-on-chip (SoC) design methodology based on the technology-leading Synopsys Galaxy Design Platform, anchored by IC Compiler™ II.
One assumes you don't just pick a random IP block when doing these things. I've seen a few similar statements from Synopsys in the past in relation to IMG, but nothing that mentioned Intel. The Intel guy quoted in the article refers to the announcement being targeted at "early adopters" of their 10nm process. So does this mean/suggest that Intel Custom Foundry has a potential customer that wants to use PowerVR in a couple of years' time?

If I were to go into speculation mode: could this be an indication that Intel is gunning for 10nm fabrication of an Apple A11/A12, given that Apple is a leading-edge/early adopter when it comes to semiconductor fab processes?

As it happens I also saw this article today, the second part of which refers to a speculative Apple/Intel/10nm tie-up.
http://venturebeat.com/2015/10/16/intel-has-1000-people-working-on-chips-for-the-iphone/
 
It has boggled me for years why Intel and Apple haven't teamed up yet. Both are cutting edge tech companies that could benefit each other.

Intel even owns a part of Imgtech, yes? (That's why they used PowerVR in some integrated graphics solutions IIRC.)
 
It has boggled me for years why Intel and Apple haven't teamed up yet. Both are cutting edge tech companies that could benefit each other.

Intel even owns a part of Imgtech, yes? (That's why they used PowerVR in some integrated graphics solutions IIRC.)

Intel sold its IMG stake. Apple would definitely benefit from manufacturing at Intel's foundries; I just can't see Intel having enough capacity to serve Apple's needs. For the A9, Apple was forced to dual-source across processes that aren't that far apart. If Apple went to an Intel foundry, dual sourcing doesn't sound like a viable option considering the distance between Intel's processes and other foundries'. Would anyone imagine Intel delaying its own products for a customer like Apple? I for one wouldn't.
 
Apple would definitely benefit from manufacturing at Intel's foundries; I just can't see Intel having enough capacity to serve Apple's needs. For the A9, Apple was forced to dual-source across processes that aren't that far apart.

Dual sourcing A9s has almost nothing to do with capacity and everything to do with hedging risks (pricing, performance and availability).

Intel has larger production capacity than Samsung and TSMC.

Intel's business model is poorly aligned with the foundry model.

Cheers
 
How can dual sourcing not be relevant to capacity and at the same time be relevant to availability, exactly?
 
How can dual sourcing not be relevant to capacity and at the same time be relevant to availability, exactly?

Being first on a bleeding-edge process shift is risky. Betting on a single supplier to come through on time with your big SoC on a new process is risky, especially if recent history indicates trouble (20/22nm). Using two suppliers hedges that risk.

Qualcomm uses Samsung exclusively for the 820, an SoC that will see unit counts comparable to the A9.

Cheers
 
Under normal conditions, if an IHV is certain that a given foundry can cover its demand, there's hardly a reason to go to all the trouble and added expense of dual sourcing.

Processes are getting increasingly problematic, meaning things won't get any better at each new process's kickoff. Agreed on the 820, but if memory serves, the A9 went into production at Samsung's foundries after the 7420 and before the 820. Probably less important, but the Exynos 8890 also won't cover all S7 phones worldwide like the 7420 did. Last but not least, it's my understanding that foundries usually start a new process with small capacity and ramp up to their planned peak over the following quarters.

In the end, the distinction you draw as the reason for dual sourcing (lack of capacity vs. risk of availability) sounds beside the point.
 
It's a particularly good fit for anything expected to draw on a modern TV screen. Good TV-focused rendering feature set, buckets of fillrate, and fantastic overall PPA in comparison to our immediate competitors. Our most impressive pure-GPU product since I joined IMG, on balance.

An aside, but I've enjoyed working on the smaller cores more than the big ones overall, because the challenges are a superset of those faced when working on a larger design.
 
The first Manhattan 3.1 long-term performance results are appearing in the Kishonti database: https://gfxbench.com/result.jsp?benchmark=gfx40&test=748&order=score&base=gpu&ff-check-desktop=0

Unfortunately there are still no results from A8X/A9X tablets; however, so far anything A7/A8 doesn't seem to throttle at all in that test, and A9 devices are in the 15-30% ballpark depending on the case. Compared to the occasional >50% performance drops for any recent Adreno or Mali GPU, that's quite a bit better. The worst case I've seen so far would be the S6 edge: https://gfxbench.com/device.jsp?benchmark=gfx40&os=Android&api=gl&cpu-arch=ARM&hwtype=GPU&hwname=ARM Mali-T760 MP8 (octa core)&did=23147698&D=Samsung Galaxy S6 Edge (SM-G925x, SC-04G, SCV31, 404SC)

With 73% less performance it ends up even slower than the low-end GPU in my current smartphone.
 
Is the long-term test made offscreen?

https://gfxbench.com/result.jsp?benchmark=gfx40&test=748&order=score&base=gpu&ff-check-desktop=0

[Image: chart of GFXBench Manhattan 3.1 long-term performance results]

The Mi5's supposedly low-binned S820 gets over twice the long-term performance of the HTC 10 and LG G5. At the same resolution, the S7 Edge gets 25% less performance than the other 1440p models. I guess that heatpipe isn't doing any miracles after all.
But even if these are onscreen results, the 1440p resolution renders about 80% more pixels than 1080p, and the differences here are disproportionately larger than that. The Mi5 Pro gets around 250% higher performance than the S7 Edge.
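
For reference, the raw pixel math behind that figure: 2560 × 1440 = 3,686,400 pixels versus 1920 × 1080 = 2,073,600, a ratio of ~1.78x, i.e. roughly 78% more pixels at 1440p.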
 