Imagination Technologies DXT (PowerVR)

Rys

Graphics @ AMD
Moderator
Veteran
Supporter

Another round of scalable GPU IP with their Photon ray tracer.
 
Off the top of my head, I never saw an A Series product or a Furian product. I've seen a B Series in a RISC-V SoC, and Series 8 / Rogue midrange in multiple Helio SoCs recently, but it's pretty light.

My bad if my first post was too cliché or plain wrong. I love PowerVR and their tech, but most of the time I don't see the announcements translating into a product I can see benchmarks of, read a review of, or even get my hands on.
 
Off the top of my head, I never saw an A Series product or a Furian product. I've seen a B Series in a RISC-V SoC, and Series 8 / Rogue midrange in multiple Helio SoCs recently, but it's pretty light.

My bad if my first post was too cliché or plain wrong. I love PowerVR and their tech, but most of the time I don't see the announcements translating into a product I can see benchmarks of, read a review of, or even get my hands on.

Could be that you're right about Furian or even A Series, but information about automotive or other design wins is very rare these days. Since their presence in smartphones/tablets is relatively small compared to the past, it could very well be that we never see a number of their IP solutions in a mainstream mobile product. IMG announced CXT in late 2021, and despite their claim of design wins in several markets from it, we still haven't seen or heard anything of it. Yes, it's still early, but I don't expect MediaTek, for example, to release a smartphone or tablet SoC with it anytime soon.
 
[Attached image: imagination-tech-dxd-1-09-11-2023-pcgh.jpg]

I wonder about their 128-wide warps/tasks.
Does that mean a workgroup has to be at least 128 threads wide to avoid idle lanes? That's much larger than the 32/64 widths we're used to. Seems much harder to saturate if so.

I also wonder what this means for the political situation, since ImgTech is owned by a Chinese company.
ImgTech could likely compete with NV's AI acceleration sooner than Moore Threads etc., I guess.
 
I wonder about their 128-wide warps/tasks.
Does that mean a workgroup has to be at least 128 threads wide to avoid idle lanes? That's much larger than the 32/64 widths we're used to. Seems much harder to saturate if so.
Imagination's architecture (since 8XT) has the unusual ability to pack multiple workgroups into one warp/task if the workgroup is smaller than the warp size, each with its own separate local memory (i.e. this is all hardware, not a software hack based on recompiling the shader). Similarly, many triangles can be part of the same warp in 3D rendering, so small triangles typically don't reduce warp occupancy much, if at all. I'm pretty sure that's documented in some public developer documents somewhere, although I have no idea where exactly.

Obviously it'll still make branch divergence worse amongst other things, it's always a trade-off.
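To make the occupancy argument concrete, here's a toy model of my own (not from any Imagination documentation) comparing lane utilisation of a 128-wide warp when small workgroups can, versus cannot, share one warp:

```python
# Toy occupancy model (my own simplification, not vendor data):
# fraction of warp lanes doing useful work for one workgroup size.

WARP_WIDTH = 128

def utilisation(workgroup_size, can_pack):
    """Lane utilisation of one warp for a given workgroup size."""
    if can_pack:
        # Pack as many whole workgroups as fit into one warp,
        # as the post above describes for 8XT onwards.
        groups_per_warp = WARP_WIDTH // workgroup_size
        active = groups_per_warp * workgroup_size
    else:
        # One workgroup per warp: remaining lanes idle.
        active = min(workgroup_size, WARP_WIDTH)
    return active / WARP_WIDTH

for size in (16, 32, 64, 100):
    print(size, utilisation(size, False), utilisation(size, True))
```

Under this sketch, a 32-thread workgroup fills only a quarter of a 128-wide warp on its own, but four of them packed together fill it completely; only awkward sizes like 100 still leave lanes idle.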
 
You want your GPU to be fed, and so utilised, as efficiently as possible, and you choose how much data each unit will be able to chew through. You want to design each unit wide enough for good total throughput, but narrow enough that you don't have lots of partially filled units, because that's inefficient and wasteful. You might as well have a few more narrower units with high utilisation instead of larger underutilised ones; otherwise you're spending money and power driving an inefficient design.

For comparison, Nvidia and (with RDNA) AMD use 32-wide warp/wave designs ("warp" is Nvidia's term, "wave" is AMD's; same thing, think of them like data packets). ImgTech apparently has a very clever way of keeping their very wide (128-lane) design fed by grouping these sub-optimal data packets together to better utilise the hardware, making it more efficient.

It's a cool idea, though I have no idea how effective it is, and its very existence implies they have trouble filling units. Why spend the hardware on that instead of going for narrower units you can fill more reliably? I guess it's a total system design choice, and it might be less work than redesigning everything to be narrower?
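A quick back-of-envelope sketch (my own toy model, nothing vendor-specific) of why wide units are harder to fill: the last, partially filled warp of a dispatch wastes more lanes the wider the warp is.

```python
# Toy model of tail waste: idle lanes in the final warp when a
# dispatch of n_threads is split into warps of width warp_width.

def tail_waste(n_threads, warp_width):
    """Idle lanes in the last warp of an n_threads dispatch."""
    rem = n_threads % warp_width
    return 0 if rem == 0 else warp_width - rem

# Example: a 1300-thread dispatch at the warp widths discussed above.
for w in (32, 64, 128):
    print(w, tail_waste(1300, w))
```

For 1300 threads, a 32-wide machine idles 12 lanes in the tail, a 64-wide one idles 44, and a 128-wide one idles 108, which is exactly the pressure that workgroup packing is presumably meant to relieve.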
 
I wonder what is missing from the BXT line to support DX11. And what is missing from the DXD to support DX12, which is a pretty "old" api by now.
 
10 years later, I'm surprised it's still a trade-off to implement without hitting the power budget hard. But thanks for the info!
I posted it with a question mark, meaning take it with a grain of salt; it was at least the state of things when Rogue first appeared. I heard a rumour back then that whatever they skipped to limit their architecture to DX10.0 compliance, down from DX11.x, meant roughly a 50% area saving per ALU. Considering quite a few IP variants (starting with Series7, I think) had a fixed-function tessellation hardware unit and supported tessellation, I doubt they needed anything else for Android.
 
I wonder what is missing from the BXT line to support DX11. And what is missing from the DXD to support DX12, which is a pretty "old" api by now.
There doesn't seem to be much stopping them, in terms of missing features or capabilities, from minimally advertising support for D3D11/12. What's needed to make them *useful* for those APIs is a somewhat different subject. They'd need a geometry pipeline that's more robust against geometry shaders/tessellation/stream output, and it would be nice if they added immediate-mode rendering functionality, like Qualcomm Adreno's FlexRender technology, to avoid the explicit cost of render pass resets. Deferred renderers in console/PC games are composed to do many fullscreen passes over the course of rendering a frame, but that inherently causes render pass resets, which doesn't go down well with tile-based rendering architectures constantly flushing their tile memory ...
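To put a rough number on the render-pass-reset cost, here's a hedged sketch (my own simplification, no real GPU counters involved) of the extra bandwidth a tiler pays if every fullscreen pass forces a full tile-memory flush and reload:

```python
# Toy bandwidth model: each render pass reset makes a tile-based GPU
# write all tiles out to memory and read them back for the next pass.

def tiler_flush_bytes(width, height, bpp, n_passes):
    """Bytes moved flushing + reloading tile memory once per pass."""
    framebuffer = width * height * bpp
    # One write-out plus one read-back per render pass reset.
    return n_passes * 2 * framebuffer

# 1080p, 4 bytes/pixel, a deferred frame with 8 fullscreen passes.
print(tiler_flush_bytes(1920, 1080, 4, 8) / 1e6, "MB")
```

Under these assumptions a single 1080p frame burns roughly 130 MB of traffic on flushes alone, per frame, which is why keeping work inside one render pass (or having an immediate-mode fallback) matters so much to these architectures.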
 
It's been a minute for me since I last commented on IMG :)

As I recall, many MANY moons ago IMG had a systemic issue with Microsoft driver support; they just didn't have the resources/impetus/wherewithal to produce stable drivers for Windows. And then there was the ongoing saga of the developer community shouting that they would not open-source their drivers.

I wonder, has anything changed?
 
It's been a minute for me since I last commented on IMG :)

As I recall, many MANY moons ago IMG had a systemic issue with Microsoft driver support; they just didn't have the resources/impetus/wherewithal to produce stable drivers for Windows. And then there was the ongoing saga of the developer community shouting that they would not open-source their drivers.

I wonder, has anything changed?
I think there have been some steps toward open-source drivers, but I haven't read much into it. https://blog.imaginationtech.com/imagination-and-our-commitment-to-open-source

For Windows drivers, I'm following a gentleman in another forum with MTT GPUs, and while the DX11 Windows drivers seem to be constantly improving, there's a lot they probably still need to do. On the other hand, some Glenfly GPUs with S3 Graphics IP in them seem to have better Windows drivers at this point.
 