PowerVR Rogue Architecture

The new MediaTek win shows acknowledgement that performance efficiency is still expected to be a differentiating factor in upcoming mobile products. GPU compute is finally coming into its own on mobile, where it enables the device to understand what it's actually seeing in a photo, intelligently manipulate composition and effects based on that, and show the user a real-time preview while capturing and editing, as well as improving computer vision and even speech recognition for a variety of other intelligence- and security-related functionality.

As for the A10's GPU, Apple had made previous generational performance gains without relying much on the low-hanging fruit of ramping clock speed. Seeing as the CPU got most of its gain from clocks this time around, the A10 GPU is perhaps a higher-clocked or customized variant of the GT7600 or GT7600+. Something to consider, however: in addition to claiming a 50% performance improvement, Apple also claimed 2/3rds the power consumption. On a similar manufacturing node that could be possible with process optimizations and improvements to physical implementation, but it would require an impressive effort in those areas in its own right.
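Just to put numbers on that claim (a back-of-envelope sketch; composing the two figures into one ratio is my assumption, not something Apple stated):

```python
# Back-of-envelope: what the A10 GPU claims imply for perf/W.
# Assumption (mine, not Apple's): the two figures compose into a
# single performance-per-watt ratio.
perf_gain = 1.5          # "50% performance improvement" over the A9
power_ratio = 2.0 / 3.0  # "2/3rds the power consumption"

perf_per_watt = perf_gain / power_ratio
print(f"Implied perf/W gain: {perf_per_watt:.2f}x")  # ~2.25x
```

A ~2.25x perf/W jump on a similar node would be a huge result from implementation work alone, which is what makes the claim so striking.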
 
Financing a tech demo is peanuts for someone like Samsung. Not a bad move, though I'd prefer manufacturers pushed harder for good higher-end games. Simply something few would mind paying 10 bucks for.

Of course not. They should be financing full ports like Nvidia did, but without the ridiculous hardware exclusivity.
 
http://www.pcworld.com/article/3151...u-powervr-gpu-as-it-looks-to-bounce-back.html

Imagination also wants to regain share in the mobile market as it expands the Series8XE graphics processor into low-end and mid-range phones, said Graham Deacon, senior director of business operations for PowerVR.

That's a market Imagination has lost, with competitive GPUs from ARM and Qualcomm taking share. Imagination introduced Series8XE earlier this year, and the technology is reaching low- and mid-range devices.

The goal is for Series8XT to be a harbinger of many new technologies, but all the GPUs will cram in more performance, Deacon said.
 
http://i3.kym-cdn.com/photos/images/facebook/000/000/069/O_RLY-Quite.jpg

One thing that's incorrect in the AnandTech article is that the number of USCs is 2x too high for all cores; i.e. GE8200 is 0.25 USCs and GE8340 is 2 USCs. Note that GE8100 is also 0.25 USCs so it's actually faster than it looks despite the minimal area (but it loses a few instructions so it's very slightly less efficient).
Hmm, are you sure about that? When I was briefed on the GE8200/8300 last year, half a cluster and one cluster were the figures I was given.
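To put the disputed figures in concrete terms, here's a quick translation from USC fraction to ALU lane count, assuming 16 F32 lanes per full USC (consistent with the 32-lanes-for-two-clusters figure cited later in the thread):

```python
# Translate USC (cluster) fraction into F32 ALU lanes.
# Assumption: 16 lanes per full Rogue USC, so two full clusters = 32
# lanes, matching the figure quoted elsewhere in this thread.
LANES_PER_USC = 16

for core, uscs in [("GE8100", 0.25), ("GE8200", 0.25), ("GE8340", 2.0)]:
    print(f"{core}: {uscs} USC -> {int(uscs * LANES_PER_USC)} lanes")
```

So the 0.25 vs 0.5 USC question is the difference between a 4-lane and an 8-lane ALU array, which is a meaningful gap at this end of the range.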
 

Probably a very dumb question, but I'm wondering how it's possible to go as low as a quarter of a cluster. My original understanding of the first Rogue cores was that the "base" configuration was 2 clusters with a quad TMU; going all the way down from 32 lanes to 4 doesn't sound as simple as cutting a cake into pieces and separating them.
 
The SIMD width has always been pretty configurable in the RTL, so while there's still the validation overhead, actually baking out a core with fewer than the normal number of parallel data pipelines is reasonably straightforward at this point.
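As a loose software analogy (hypothetical on my part; real RTL parameterization is obviously far more involved), think of the ALU array as generated from a single width parameter, so a quarter-cluster build just instantiates 4 lanes instead of 16:

```python
# Hypothetical software analogy for a width-parameterized SIMD ALU;
# this only illustrates why narrowing the data path can be a
# configuration change rather than a redesign.
from dataclasses import dataclass

@dataclass
class SimdAlu:
    lanes: int  # e.g. 16 for a full USC, 4 for a quarter-cluster build

    def fma(self, a, b, c):
        # One issued instruction drives all lanes in lockstep; a vector
        # wider than the lane count takes more passes through the pipe.
        assert len(a) == len(b) == len(c)
        out = []
        for base in range(0, len(a), self.lanes):
            out.extend(a[i] * b[i] + c[i]
                       for i in range(base, min(base + self.lanes, len(a))))
        return out

full = SimdAlu(lanes=16)    # full-cluster configuration
quarter = SimdAlu(lanes=4)  # same logic, a quarter of the lanes
print(quarter.fma([1.0] * 8, [2.0] * 8, [0.5] * 8))  # [2.5, 2.5, ...]
```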
 
Thank you. On a side note, ducks and gnomes are boring; after watching Trolls, maybe you should consider something small, cute, and with lots of hair as the next mascot ;)
 
It's not DX11 compliant, but it does have hardware tessellation (which exists in other APIs that we support).
 
IMG has announced PowerVR Series9XE and 9XM, and also a neural network core that is licensable.

Interesting to note that, like Series8, Series9 doesn't address the high end at the time of announcement... the Apple effect?
It's also worth noting that Series9 is Rogue architecture, NOT Furian.

https://www.imgtec.com/news/press-r...le-user-experience-in-cost-sensitive-devices/
https://www.imgtec.com/news/press-r...and-half-the-bandwidth-of-nearest-competitor/

That's more like business as usual. If their claims reflect reality, then this is one hell of a piece of IP they just announced: https://www.imgtec.com/news/press-r...and-half-the-bandwidth-of-nearest-competitor/

https://www.imgtec.com/blog/why-the-powervr-2nx-nna-is-the-future-of-neural-net-acceleration/
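For context on why a dedicated NN core is worth licensing (my framing of the general technique, not a description of the 2NX datapath): inference commonly runs on 8-bit or narrower quantized weights, which is where much of the bandwidth saving comes from. A toy sketch:

```python
# Toy illustration (the general quantization technique, not the 2NX's
# actual datapath): 8-bit weights cut memory traffic to a quarter of
# fp32 while approximating the same dot product.
def quantize(ws, bits=8):
    scale = max(abs(w) for w in ws) / (2 ** (bits - 1) - 1)
    return [round(w / scale) for w in ws], scale

weights = [0.12, -0.53, 0.98, -0.07]
activations = [1.0, 0.5, -0.25, 2.0]

q_ws, scale = quantize(weights)
approx = scale * sum(q * a for q, a in zip(q_ws, activations))
exact = sum(w * a for w, a in zip(weights, activations))
print(f"exact={exact:.4f} approx={approx:.4f}")
print(f"weight bytes: fp32={4 * len(weights)} int8={len(weights)}")
```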
 