PowerVR Rogue Architecture

On topic: Intel's Rogue implementations don't actually throttle out of the norm either.
 
Probably not a perfect fit for this thread, but close enough.
http://www.design-reuse.com/news/39...e-synopsys-implementation-signoff-tools.html?


One assumes you just don't pick a random IP block when doing these things. I've seen a few similar statements from Synopsys in the past in relation to IMG, but nothing that mentioned Intel. The Intel guy quoted in the article refers to the announcement being targeted at "early adopters" of their 10nm process. So does this mean/suggest that Intel's foundry has a potential customer that wants to use PowerVR in a couple of years' time?

If I were to go into speculation mode, could this be an indication that Intel is gunning for 10nm fabrication of an Apple A11/A12, given that Apple are a leading-edge/early adopter when it comes to semi fab processes?

As it happens, I also saw this article today, the second part of which refers to a speculative Apple/Intel/10nm tie-up.
http://venturebeat.com/2015/10/16/intel-has-1000-people-working-on-chips-for-the-iphone/

And now Mentor Graphics reports certification from Intel's foundry on their 10nm process, again explicitly citing the use of a PowerVR GT7200 for the certification.
http://www.prnewswire.com/news-rele...ools-for-10nm-tri-gate-process-300279718.html

Should anything be read into these multiple announcements of certification on Intel's 10nm, and in particular the use of PowerVR IP for the certification?
 
And to make it a trio of announcements regarding certification of Intel's 10nm process citing a PowerVR GPU block to do so, we have Cadence Design Systems:

http://finance.yahoo.com/news/intel-custom-foundry-certifies-cadence-000000959.html

In June, Mentor Graphics:
http://www.prnewswire.com/news-rele...ools-for-10nm-tri-gate-process-300279718.html

And in March, Synopsys:
http://www.design-reuse.com/news/39...e-synopsys-implementation-signoff-tools.html?

I'm not at all knowledgeable about this stuff, but I'm guessing these are all players involved in getting Intel's 10nm process implemented, tested and verified.

I still don't immediately see why IMG IP is involved, given IMG play no part in Intel's known roadmap, unless, as I speculated in an earlier post, it suggests that there is an IMG licensee in the wings that might be looking to use Intel as a foundry @ 10nm.

Interesting that they didn't use an Intel CPU for the process?
 
Am I the only one who doesn't expect anything but 8XT in the upcoming Apple A10 SoC? If not, I guess we'll see the next iPhone on shelves first and then they'll rush out a PR along the lines of "oh, by the way, there's also Series8XT...." :p
 
Could be that 8XT is for Apple only, and therefore not licensable per se. They might never announce it.

Wonder what the new GPU is in the new watch SoC? It's claimed to be x2 the performance of the previous one. Could it be a member of the 8XE family?
 
The original watch used Samsung's 28nm process; the new one likely uses 16nm, so the GPU wouldn't need to be changed much to hit the claimed increase.
 
Could be that 8XT is for Apple only, and therefore not licensable per se. They might never announce it.

That would be a horrible idea, especially for the automotive market. In any case, the Apple A10 seems to carry a 6-cluster "GT8600" (or whatever they call it) at either the same frequency as the A9's GT7600 or slightly above.

Judging from the GPU claims here: http://www.theverge.com/2016/9/7/12746884/apple-a10-fusion-processor-iphone-7-specs-comparison

A9 x 150% (or A8 x 300%) means around 63-64 fps in Manhattan 3.0 offscreen. That's A9X 9.7" GPU performance level.
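
Just to show the arithmetic behind that, a quick back-of-the-envelope sketch; the ~42 fps A9 baseline is my own assumption from public GFXBench leaderboard ballpark figures, not an official number:

```python
# Rough sanity check of the "A9 x 150%" GPU claim for the A10.
# Assumption: the A9 (GT7600) scores ~42 fps in GFXBench Manhattan 3.0
# offscreen (public leaderboard ballpark, not an official figure).
a9_fps = 42.4
a10_fps = a9_fps * 1.5  # Apple's claimed 50% uplift over the A9
print(f"Implied A10 Manhattan 3.0 offscreen: ~{a10_fps:.0f} fps")  # ~64 fps
```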
 
Dunno how accurate that diagram is, but:
1) 820MHz looks in keeping with MediaTek's history of faster clocks.
2) MediaTek has put more emphasis on graphics - a x2.4 increase in graphics performance against x1.4 for CPU performance.
 
Dunno how accurate that diagram is, but:
1) 820MHz looks in keeping with MediaTek's history of faster clocks.
2) MediaTek has put more emphasis on graphics - a x2.4 increase in graphics performance against x1.4 for CPU performance.

Keep in mind that MTK usually keeps GPUs around for quite some time; the G6200 lasted for two SoC generations, going from 600 up to 700MHz. When they first used the G6200@600MHz it was also leaps and bounds ahead of the former single-core SGX544@300MHz, and it wasn't that far apart from the G6430@450?MHz in the A7 in terms of performance either.

As for the frequency, a GT7400@820MHz is at 210 GFLOPs FP32 vs. the GT7600@633MHz (?) with 243 GFLOPs FP32 in the A10. If the latter frequency is real then the difference is roughly in the 16% ballpark.
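
For what it's worth, here's roughly how those GFLOPs figures fall out; the 64 FLOPs/cluster/clock figure is my assumption (32 ALU pipelines per Rogue USC, each retiring one FP32 FMA, i.e. 2 FLOPs, per clock), so treat this as a sketch rather than gospel:

```python
# Peak FP32 throughput estimate for Series7XT Rogue GPUs.
# Assumption: 32 ALU pipelines per cluster (USC), each doing one
# FP32 FMA (2 FLOPs) per clock -> 64 FLOPs per cluster per clock.
FLOPS_PER_CLUSTER_PER_CLOCK = 32 * 2

def rogue_gflops_fp32(clusters: int, clock_mhz: float) -> float:
    """Peak FP32 GFLOPs for a given cluster count and core clock."""
    return clusters * FLOPS_PER_CLUSTER_PER_CLOCK * clock_mhz / 1000.0

gt7400 = rogue_gflops_fp32(4, 820)  # ~210 GFLOPs
gt7600 = rogue_gflops_fp32(6, 633)  # ~243 GFLOPs
print(f"GT7400@820MHz: {gt7400:.0f} GFLOPs, GT7600@633MHz: {gt7600:.0f} GFLOPs")
print(f"GT7600 advantage: ~{(gt7600 / gt7400 - 1) * 100:.0f}%")  # ~16%
```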

By the way, MediaTek, Qualcomm, Samsung, Apple and the like should really try to push ISVs into developing more and better mobile games. I know it sounds like a broken record, but you don't really need GPUs this powerful to play Pet Rescue Saga...
 
As for the frequency, a GT7400@820MHz is at 210 GFLOPs FP32 vs. the GT7600@633MHz (?) with 243 GFLOPs FP32 in the A10. If the latter frequency is real then the difference is roughly in the 16% ballpark.

I guess you mean the A9? Given the MediaTek chip will probably hit phones just some months behind the next iPhone, I guess we could say they are two years behind Apple in terms of smartphone graphics.

By the way, MediaTek, Qualcomm, Samsung, Apple and the like should really try to push ISVs into developing more and better mobile games. I know it sounds like a broken record, but you don't really need GPUs this powerful to play Pet Rescue Saga...

Yeah, for some time now graphics capability inside SoCs has been "good enough". Apple is using the GPU for compute stuff of course; AFAIK they have said that Siri uses the GPU to process requests. But we do need better use cases to keep increasing the requirement for better graphics.
 
No, I meant the A10 GPU. My memory is weak on when MTK has slated the X30, but if it should ship out to partners in early 2017 on 10FF as projected, I doubt it'll take as many months to appear in a device. Either way, since they're now on the same GPU IP generation as i-devices, differences in performance are less important overall compared to possible higher throttling at such high frequencies. If the GT7400 in the X30 is truly clocked at 820MHz, you could also expect the "turbo version" of the SoC later on (Helio X35) to have even higher frequencies.

High frequencies aren't a bad thing if you do it like Pascal on desktop, but I doubt MediaTek has the resources to start padding transistors like NV did.
 
Is there any source indicating a GT7600 for the A10? That's the GPU they went with for the A9. They didn't change the GPU besides clocks?

By the way, MediaTek, Qualcomm, Samsung, Apple and the like should really try to push ISVs into developing more and better mobile games. I know it sounds like a broken record, but you don't really need GPUs this powerful to play Pet Rescue Saga...
Samsung did have Epic showing a tech demo running on the Galaxy S7 during its reveal, but that's as far as they went IIRC.
 
Is there any source indicating a GT7600 for the A10? That's the GPU they went with for the A9. They didn't change the GPU besides clocks?

I'm extremely bad with die shots; however, having spent only a little time zooming into the A9 and A10 shots, they look far too similar, to my simple layman's understanding, for this to be something all that "new". I'd love to stand corrected, but I can see only very few changes, and yes, it sounds very much like something slightly optimized with higher clocks. I won't mind calling it a 7600 w/cheese if that should help LOL :D

Samsung did have Epic showing a tech demo running on the Galaxy S7 during its reveal, but that's as far as they went IIRC.

Financing a tech demo is peanuts for someone like Samsung. Not a bad move; however, I'd prefer it if manufacturers would push more for good higher-end games. Simply something few would mind paying 10 bucks for.
 
No, I meant the A10 GPU.

In which case that means MediaTek's high-end SoC is less than a year behind Apple's high-end SoC in terms of the theoretical performance of the graphics IP being implemented, which is testament both to MediaTek and to the general plateauing of graphics performance within SoCs.

Unless there is a need for better graphics, the difference isn't going to get bigger.
 
As I said, it was the same with MediaTek at the beginning of Rogue, where they too ended up for a limited period quite close to A7 GPU performance. For the future I'd expect MTK to re-use the exact same GPU IP for another generation with higher clocks (or alternative GPU IP roughly in that ballpark), and Apple to go for an A11 on TSMC 10FF with something comparable to A10X GPU performance (i.e. roughly twice the A10 GPU), and that's a point MTK won't be able to catch up to easily unless they go for significantly more GPU die area.
 