Apple (PowerVR) TBDR GPU-architecture speculation thread

Kaotik

Apple recently announced they're moving Macs from x86 to Arm. They've now also released a video confirming they'll move to their own GPUs as well.

https://developer.apple.com/videos/play/wwdc2020/10631

Details of the architecture are of course still limited, other than that it's TBDR. Given Apple's licensing deal with Imagination, however, it's almost certainly PowerVR-based, just like their mobile GPUs.
This could also explain why Imagination/PowerVR was looking at high-performance GPUs again a little while back.
 
Hi, long-time lurker, first-time poster.

Are there any details on how Apple's current GPUs work, e.g. SIMD or SIMT? What is the width of a GPU core, etc.?
 
I don't think these sessions confirm anything more than systems coming in the fall using Apple/Imagination TBDR GPUs. People are reading too much into them, looking for writing on a non-existent wall. Mac developers are still targeting the same API with the same set of capability vectors (Metal GPU families).

They are obviously preparing for the fall launch. If they are not going to launch higher-performance Apple Silicon Macs this year, why would they announce specifics about those now, given their famous obsession with secrecy? Heck, if you need a concrete example, the video sessions on Metal and SDK additions for the A13 Bionic dropped only after the September iPhone 11 announcement last year.
 
I don't think these sessions confirm anything more than systems coming in the fall using Apple/Imagination TBDR GPUs. People are reading too much into them, looking for writing on a non-existent wall. Mac developers are still targeting the same API with the same set of capability vectors (Metal GPU families).

They are obviously preparing for the fall launch. If they are not going to launch higher-performance Apple Silicon Macs this year, why would they announce specifics about those now, given their famous obsession with secrecy? Heck, if you need a concrete example, the video sessions on Metal and SDK additions for the A13 Bionic dropped only after the September iPhone 11 announcement last year.
Hence speculation thread
 
I don't think these sessions confirm anything more than systems coming in the fall using Apple/Imagination TBDR GPUs. People are reading too much into them, looking for writing on a non-existent wall. Mac developers are still targeting the same API with the same set of capability vectors (Metal GPU families).

They are obviously preparing for the fall launch. If they are not going to launch higher-performance Apple Silicon Macs this year, why would they announce specifics about those now, given their famous obsession with secrecy? Heck, if you need a concrete example, the video sessions on Metal and SDK additions for the A13 Bionic dropped only after the September iPhone 11 announcement last year.

Same here. They've been trying to make their own GPU for a while now, and even announced they'd cancel their partnership with Imagination. Well, the joke's on them, because they've licensed Imagination's newest architecture again.

It seems like CPUs might have had a lot of optimization left undone, thanks to Intel leading the pack for so long that it had no competitive benchmark for whether it was doing well. That allowed AMD to pick itself back up from disaster, Arm to slowly but surely creep into competition with them, and Apple to produce its own remarkably fast CPUs. GPUs, on the other hand, seem to have kept up their relative competition, so companies like Samsung and Apple trying to make their own have found that Nvidia and AMD (and Imagination, apparently?) have more than kept up with their best designs.

Which is all to say that it doesn't seem easy, or likely, for some random new entrant to make competitive GPUs of their own anytime soon. Remember, Apple loves to use doublespeak to rebrand existing tech as somehow its own, and this looks like yet more of that.
 
Apple recently announced they're moving Macs from x86 to Arm. They've now also released a video confirming they'll move to their own GPUs as well.
I see in this presentation that Apple won't combine their own CPU cores with, e.g., AMD's GPU cores in Apple Silicon.

What is the difference between threadgroups (8x8, i.e. threads 0-63) and the smaller SIMD groups (threads 0-31 and 32-63)?
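
To make the question concrete, here's a minimal MSL sketch of how I currently understand the two index spaces (assuming the 32-wide SIMD groups recent Apple GPUs report; corrections welcome). The threadgroup is the software-visible unit that shares threadgroup memory and barriers, while a SIMD group is the hardware execution width inside it, so an 8x8 threadgroup splits into two 32-wide SIMD groups:

```
#include <metal_stdlib>
using namespace metal;

// A 64-thread (8x8) threadgroup on a GPU with 32-wide SIMD groups
// splits into exactly two SIMD groups.
kernel void tg_vs_simd(device uint *out [[buffer(0)]],
                       uint tid  [[thread_index_in_threadgroup]],    // 0..63 (threadgroup-wide)
                       uint sg   [[simdgroup_index_in_threadgroup]], // 0 or 1
                       uint lane [[thread_index_in_simdgroup]])      // 0..31
{
    // Threads in the same SIMD group execute in lockstep and can
    // exchange data without threadgroup memory or barriers:
    uint partial = simd_sum(lane);   // intra-SIMD-group reduction
    if (lane == 0)
        out[sg] = partial;           // one write per SIMD group
}
```

Dispatched with a threadgroup size of MTLSizeMake(8, 8, 1), threads 0-31 land in SIMD group 0 and threads 32-63 in SIMD group 1.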
 
It won't scale. Even years after introducing features like NV's "tiled caching" or AMD's DSBR, discrete GPUs to date are still fundamentally built as sort-last architectures, and this is reflected in their largely unchanged driver design. From a driver perspective, current desktop GPUs don't actually do the 'tiling' described; mobile GPUs, by contrast, implicitly render the primitives/draws in-order per tile.

Also, I heard from an Apple representative that they don't think mesh shaders will be the future either...
 
Could it be that Apple renewed the licensing deal with ImgTech because of the desktop SoCs? Maybe the iPhone/iPad SoCs are indeed free of PowerVR IP, while the to-be-released desktop SoCs are something that benefits from ImgTech IP/help?
 
The future is compute pipelines and ray tracing, and for that it works just fine.

If you truly believe that, then they should expose mesh shading and geometry shaders in their Metal API, because those features actually help us take a step towards a more general-purpose graphics pipeline...
 
Could it be that Apple renewed the licensing deal with ImgTech because of the desktop SoCs? Maybe the iPhone/iPad SoCs are indeed free of PowerVR IP, while the to-be-released desktop SoCs are something that benefits from ImgTech IP/help?
More likely that ImgTech still has some IP that would help Apple improve their GPU design further. And they would know, since they have hired a lot of the engineering staff that developed it for Img in the first place (and have of course been in close communication all along). So they license it in order to tweak their designs and avoid the kludges needed to sidestep IP they don't own.
 
If you truly believe that, then they should expose mesh shading and geometry shaders in their Metal API, because those features actually help us take a step towards a more general-purpose graphics pipeline...
I dunno what general purpose is, but apparently it's not compute.

I believe every other shader type should get ditched. Make everything compute. It can be heterogeneous, with some of the compute engines having special-function blocks, and it can still have Z-buffering and multisampling and vertex lists, but no more opaque scheduling and pipelining of shader stages handled outside of programmer control... everything needs to be compute.

Of course, at that point what remains of a TBDR is just a low-resolution internal G-buffer, but that's fine and useful.
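
To make that concrete, here's a toy of what I mean (my sketch, not anything Apple ships): rasterization and Z-buffering done entirely in a compute kernel, with the depth test hand-rolled as an atomic min instead of fixed-function hardware. Assumes one thread per pixel over the triangle's bounding box and a depth buffer cleared to 0xFFFFFFFF by the host:

```
#include <metal_stdlib>
using namespace metal;

struct Tri { float2 v0, v1, v2; float z; }; // flat-z triangle for brevity

float edge(float2 a, float2 b, float2 p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

kernel void raster(device const Tri   *tri   [[buffer(0)]],
                   device atomic_uint *depth [[buffer(1)]], // one per pixel
                   constant uint      &width [[buffer(2)]],
                   uint2 gid [[thread_position_in_grid]])
{
    float2 p = float2(gid) + 0.5f;
    Tri t = *tri;
    // Inside test, assuming counter-clockwise winding: all three
    // edge functions must be non-negative.
    if (edge(t.v0, t.v1, p) < 0.0f || edge(t.v1, t.v2, p) < 0.0f ||
        edge(t.v2, t.v0, p) < 0.0f)
        return;
    // Z-buffering in software: smaller encoded depth wins. The uint
    // bitcast preserves ordering for non-negative floats.
    uint d = as_type<uint>(t.z);
    atomic_fetch_min_explicit(&depth[gid.y * width + gid.x],
                              d, memory_order_relaxed);
}
```

Everything the fixed-function path normally owns (scheduling, test order, when shading happens) is visible in the code; that's the appeal, and also why it costs performance today.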
 
More likely that ImgTech still has some IP that would help Apple improve their GPU design further. And they would know, since they have hired a lot of the engineering staff that developed it for Img in the first place (and have of course been in close communication all along). So they license it in order to tweak their designs and avoid the kludges needed to sidestep IP they don't own.

Apple dropped ImgTech and said they would not need any of their IP, etc. Apple also didn't want to buy ImgTech when it was up for sale. Clearly Apple was divorcing ImgTech, and as much was clear from press releases from both companies.

I see only three possible reasons for the change of heart:

1. Apple realised ImgTech has value and decided to become a licensee again.
2. ImgTech was able to show behind the scenes that Apple's design still uses or conflicts with ImgTech IP, and that ImgTech would sue and Apple would lose, so Apple had to become a licensee again.
3. A variant of the first reason: maybe the iPhone/iPad designs were done completely in-house, no longer using ImgTech, but the x86 replacement would benefit from ImgTech IP.
 
Or Apple wants to use the work-in-progress ImgTech IP to get real-time ray tracing for their workstation and desktop products?
 
ImgTech got their RT from buying Caustic back in 2010, and introduced their Wizard stuff (the PowerVR GR6500 was the first, I believe) in 2014.
Ironically, Caustic was founded by ex-Apple folks. I wonder if they have wound up back at the mothership by now. I doubt this is what Apple licensed in this recent round, since they never showed interest in the RTRT stuff back when they licensed ImgTech GPU solutions, but who knows?
 
It might be that Apple was using Arm to pressure Intel for the longest time. Intel not doing so well in recent years might have pushed Apple to jump the fence. Things probably really changed when Apple became serious about the move to Arm.

Blender/Maya/Arnold-type use cases could really use ray-tracing acceleration. It will be interesting to see whether Apple supports ray tracing or not.
 
They have been offering RTRT in Metal, touted with multi-GPU scalability, albeit as a software/compute solution (for now, presumably).
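
For reference, this is roughly the shape of the compute route (a toy sketch of mine, not Apple's actual implementation; a real tracer would walk an acceleration structure instead of brute-forcing every primitive):

```
#include <metal_stdlib>
using namespace metal;

struct Ray    { float3 o, d; };  // d assumed normalized
struct Sphere { float3 c; float r; };

// One thread per ray, testing every sphere: O(rays * spheres),
// which is exactly what an acceleration structure would fix.
kernel void intersect(device const Ray    *rays    [[buffer(0)]],
                      device const Sphere *spheres [[buffer(1)]],
                      constant uint       &count   [[buffer(2)]],
                      device float        *tHit    [[buffer(3)]],
                      uint gid [[thread_position_in_grid]])
{
    Ray ray = rays[gid];
    float best = INFINITY;
    for (uint i = 0; i < count; ++i) {
        float3 oc = ray.o - spheres[i].c;
        float b = dot(oc, ray.d);
        float h = b * b - (dot(oc, oc) - spheres[i].r * spheres[i].r);
        if (h >= 0.0f)
            best = min(best, -b - sqrt(h)); // nearer root (can be negative
                                            // behind the origin; fine for a toy)
    }
    tHit[gid] = best;
}
```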
 
I dunno what general purpose is, but apparently it's not compute.

I believe every other shader type should get ditched. Make everything compute. It can be heterogeneous, with some of the compute engines having special-function blocks, and it can still have Z-buffering and multisampling and vertex lists, but no more opaque scheduling and pipelining of shader stages handled outside of programmer control... everything needs to be compute.

Of course, at that point what remains of a TBDR is just a low-resolution internal G-buffer, but that's fine and useful.

Not going to happen, and Apple doesn't share this vision either. To this day, Metal's geometry pipeline still has more state than a mesh shading pipeline; mesh shaders are the next step in an evolution towards a more stateless, compute-based pipeline. Making "everything compute" would run contrary to having special graphics state like Z-buffering or multisampling, but it would also mean getting rid of the rasterization state/fixed-function rasterizer, and they don't want that since it would significantly impact performance.

Again, they don't see a future in either compute or mesh shading. They don't want to explore the former because they still want the advantage of fixed-function units, and the latter indirectly causes issues with tiling. Tiling specifically comes with its own set of tradeoffs: too coarse and load imbalance will manifest, but too fine and there'll be a lot of redundancy overhead...
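
To illustrate the binning side of that tradeoff, a toy kernel (my sketch; the tile size is a free function constant, not any known Apple value). A primitive is counted once per tile its bounding box overlaps, so halving the tile size roughly quadruples the tile count and multiplies the binning work for large primitives accordingly:

```
#include <metal_stdlib>
using namespace metal;

constant uint TILE    [[function_constant(0)]]; // tile size in px, e.g. 16 or 32
constant uint TILES_X [[function_constant(1)]]; // tiles per row

struct BBox { float2 lo, hi; }; // screen-space bounds, pre-clamped to the viewport

kernel void bin_prims(device const BBox  *prims  [[buffer(0)]],
                      device atomic_uint *binLen [[buffer(1)]], // per-tile counters
                      uint gid [[thread_position_in_grid]])
{
    BBox b = prims[gid];
    uint2 t0 = uint2(b.lo) / TILE;
    uint2 t1 = uint2(b.hi) / TILE;
    // The redundancy in question: a primitive spanning many tiles
    // is re-binned once per tile it touches.
    for (uint ty = t0.y; ty <= t1.y; ++ty)
        for (uint tx = t0.x; tx <= t1.x; ++tx)
            atomic_fetch_add_explicit(&binLen[ty * TILES_X + tx],
                                      1u, memory_order_relaxed);
}
```

Go the other way (big tiles) and the bins shrink, but one hot tile can end up with most of the scene's geometry while its neighbours sit idle, which is the load-imbalance half of the tradeoff above.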
 