IMG PowerVR RTX - what is it?

Shifty Geezer

This slide has surfaced from 4Gamer.net.

[Slide image: pvr1-660x495.jpg]

There's a curious arrow in the bottom right showing a different architecture to PVR5 and 6, RTX, which is driven via Imagination's OpenRL. A bit of googling threw up this announcement:

Imagination previews Brazil 3.0 Beta running on OpenRL at SIGGRAPH 2011
Is this Molly/Poppy? Are we looking at a new RT focussed desktop/mobile GPU for realtime applications, or more an RT accelerator for offline work or large realtime servers?
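For anyone who hasn't looked at OpenRL: as I understand it, it's Caustic's OpenGL-style C API where you hand the library your geometry, attach GLSL-like ray shaders to surfaces, and let it trace frames against an acceleration structure it manages for you. Here's a very rough, hypothetical sketch of that kind of host-side flow - the rt* names and types below are placeholders I've made up to show the general shape, not actual OpenRL entry points.

```cpp
// Hypothetical sketch only: placeholder API, not real OpenRL calls.
#include <cstdio>
#include <vector>

struct RayContext {};                      // stand-in for an API context handle
struct MeshHandle { int id; };             // stand-in for uploaded geometry

RayContext* rtCreateContext() { return new RayContext; }
MeshHandle rtUploadMesh(RayContext*, const std::vector<float>& positions) {
    // The library builds and owns the acceleration structure; the app never walks it.
    return MeshHandle{ static_cast<int>(positions.size()) };
}
void rtAttachRayShader(RayContext*, MeshHandle, const char* /*shaderSource*/) {}
void rtRenderFrame(RayContext*, int w, int h) { std::printf("traced %dx%d frame\n", w, h); }
void rtDestroyContext(RayContext* c) { delete c; }

int main() {
    RayContext* ctx = rtCreateContext();

    std::vector<float> triangle = { 0,0,0,  1,0,0,  0,1,0 };
    MeshHandle mesh = rtUploadMesh(ctx, triangle);

    // Instead of a fragment shader, each surface gets a "ray shader" that can
    // spawn further rays (shadow, reflection) back into the scene.
    rtAttachRayShader(ctx, mesh, "/* GLSL-like ray shader source */");

    rtRenderFrame(ctx, 1280, 720);         // one dispatch traces the whole frame
    rtDestroyContext(ctx);
}
```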
 
I thought it was well established that Molly and Poppy are miniature lions/tigers that are eventually going to eat Rys.

If you want information, maybe try the Caustic Graphics website.
 
So, there was a new algorithm from Caustic which they've rolled into a software platform, Brazil, and they created a custom processor similar to SaarCOR. ImgTec bought Caustic and are now integrating/implementing Caustic into a processor, PowerVR RTX. Is that a future path for all PVR designs? Will RT be a differentiating factor for PowerVR GPUs, and will raytracing be an option for all realtime graphics running on PowerVR cores in the future?

The very few demos I've found regarding Caustic are still low-framerate, despite being incredibly fast for raytracing. Not useable in high-framerate visualisations, but that's on existing multicore CPUs or the 100 MHz Caustic 1. I don't think RT games rendering is on the cards, so I'm guessing this is still more for development than realtime graphics.
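To put a rough number on why "incredibly fast for raytracing" can still be nowhere near game framerates, here's a back-of-envelope sketch - every figure in it is my own assumption for illustration, not a published Caustic number.

```cpp
// Back-of-envelope only: illustrative assumptions, not measured figures.
#include <cstdio>

int main() {
    const double width  = 1280.0;
    const double height = 720.0;
    const double fps    = 30.0;          // modest "game" target
    const double rays_per_pixel = 4.0;   // primary + shadow + a bounce or two (assumed)

    const double rays_needed = width * height * fps * rays_per_pixel;

    // Assumed throughputs, orders of magnitude only:
    const double cpu_rays_per_sec      = 5e6;   // multicore CPU tracer
    const double caustic1_rays_per_sec = 30e6;  // 100 MHz FPGA-class accelerator

    std::printf("rays needed: ~%.0f million/s\n", rays_needed / 1e6);
    std::printf("CPU shortfall: ~%.0fx, accelerator shortfall: ~%.0fx\n",
                rays_needed / cpu_rays_per_sec,
                rays_needed / caustic1_rays_per_sec);
}
```

Even with generous assumptions you need on the order of a hundred million rays per second before doing anything fancy, which is why I'd expect development/preview use first.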
 
At the AGM you could see how excited the CEO is about Caustic but there wasn't much he was giving away. Below is a summary of the main comments.

"Caustic - Very strong patents in place, extremely complimentary, demos today will be prototypes running at a fraction of the speed (1/20) of the real chips and is already same speed as current top end NVIDIA offerings.

Will be an add-on to Rogue, not a replacement (20% hardware addition to give 'magic')

Using ray tracing massively reduces the cost to develop a new game (1/10 to 1/5 of the cost previously)

Follow OpenRL and Brazil news to follow the progress."

First view of the tech will be in a stand-alone chip (presumably Caustic 2), then it will be integrated into the PowerVR IP offerings from Rogue (Series 6) onwards.

Comments were also made on the hardware you'd need to run this card - a normal desktop/laptop rather than a multi-$k machine, as the card would do all the grunt work.

The idea for the first wave is to get the RT tech into as many developers' hands as possible, taking the prohibitive cost element out of the equation. Then - the world !!!!! :)

RT will be offered to licensees as an option and won't be a standard part of the general IP (presumably for now).
 
Shifty Geezer said:
So, there was a new algorithm from Caustic which they've rolled into a software platform, Brazil, and they created a custom processor similar to SaarCOR. ImgTec bought Caustic and are now integrating/implementing Caustic into a processor, PowerVR RTX. Is that a future path for all PVR designs? Will RT be a differentiating factor for PowerVR GPUs, and will raytracing be an option for all realtime graphics running on PowerVR cores in the future?

The very few demos I've found regarding Caustic are still low-framerate, despite being incredibly fast for raytracing. Not useable in high-framerate visualisations, but that's on existing multicore CPUs or the 100 MHz Caustic 1. I don't think RT games rendering is on the cards, so I'm guessing this is still more for development than realtime graphics.

The demos I've seen were running on FPGA at a small fraction of true end speed.
My understanding is that IMG's first target is to produce an add-in board for use in RT development, with the aim of getting the cost down to 1/10 of a conventional workstation with the same graphics performance. The next target is a system suitable for console/STB-sized devices (perhaps a companion graphics chip?). The final target, some 4-5 years away, is to have it suitable for inclusion in an SoC, with power reduction likely being the greatest hurdle.

Listening to some presentations, it seems that they see the RT tech as complementing SGX/RGX, not replacing them, so the ultimate aim is to have SoCs that combine both technologies.
 
I just approved hammerd's post above, a few good tidbits in there, especially the "20% hardware addition" which is much more precise than anything else I've heard out of them (even if it's not very meaningful without an idea of what graphics core they're pairing it with).

I talked to the ex-CTO of Caustic at MWC11 a fair bit and wanted to post/publish something based on it. Unfortunately my memory is significantly worse than that of the average goldfish (which doesn't have as bad a memory as you'd think, but still), so combined with the ridiculous number of discussions I had every day back there, I honestly can't remember some important details, so I didn't feel confident publishing anything. He did mention that the basic 'magic' behind it is detailed in a publicly available patent, though (which I sadly haven't had the time to properly read).

Anyhow, Caustic 1 is based on FPGAs and that's still what they were demoing at MWC11, but they were still working on the 90nm Caustic 2 chip - I can't remember if it had already taped out at MWC11 or was about to, but IIRC it did not tape out before the IMG acquisition.

The short-term end-product will be 90nm Caustic 2-based boards for the workstation market, which people will pair with NV/AMD workstation GPUs; the mid-term end-product is a similar piece of IP that will be ready to integrate next to Rogue; and the (very) long-term end-product is something that shares some silicon between the two - he mentioned there was some similarity between the TBDR logic and their own algorithms, interestingly enough, so you'd have a bit less rasterisation hardware sitting idle when raytracing and vice versa. I'm not sure if there's that much to gain there, but it's a possibility they're considering (many years down the road, depending on RTX's success, etc.).

One technical bit I remember is that he was very confident they could scale performance as much as they claim in the future without becoming bandwidth limited - their bandwidth cost is low, and if necessary there are a few things they can do to reduce it further (though at increased silicon cost, and there may be efficiency trade-offs for some of them). So basically you'd expect non-raytracing-related bandwidth costs to remain the majority of the total GPU bandwidth. On the other hand, PCI-Express bandwidth can be a real problem, so they won't be able to scale discrete solutions much further than Caustic 2.
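To put very rough numbers on that PCI-Express point - the per-ray traffic and link rate below are assumptions of mine for illustration, not anything he quoted:

```cpp
// Rough sketch with assumed figures: if shading stays on the host GPU/CPU,
// every ray and its hit result have to cross PCI-Express, so the bus itself
// caps ray throughput for a discrete accelerator.
#include <cstdio>

int main() {
    const double pcie2_x16_bytes_per_sec = 8e9;   // ~8 GB/s nominal each way (assumed usable)
    const double bytes_per_ray_roundtrip = 64.0;  // assumed: ray origin/direction out, hit record back

    const double max_rays_per_sec = pcie2_x16_bytes_per_sec / bytes_per_ray_roundtrip;

    // ~125M rays/s sounds like plenty, but scaling the accelerator 10-20x
    // would saturate the link long before the silicon runs out of headroom.
    std::printf("PCIe-limited ceiling: ~%.0f million rays/s\n", max_rays_per_sec / 1e6);
}
```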

And tangey, they won't need 4-5 years to have it suitable for SoC integration - there's nothing magical about that; Rogue looks just like an NVIDIA GPU to them except it's on the same chip. Whether that means it will actually get implemented soon is another question, and it also remains to be seen what markets it will target first.

Besides handhelds (which might be hardest to penetrate), you've got consoles, set-top boxes (he specifically mentioned some big players might want to create a gaming ecosystem around it and use RTX as a differentiator - Apple TV? Samsung?), and workstations (since discrete solutions will be limited by PCI-E). The problem with workstations is how do you find a partner to make a many-core Rogue+RTX chip with performance comparable to NVIDIA's Fermi or Kepler? And to make that profitable you probably need to target the PC market at the same time.

I think it's pretty clear they didn't have a partner for PC/workstation yet but at least with Rogue (and RTX) they can scale up to NV/AMD-level performance, so they must logically again be actively looking for potential partners there (unlike in the SGX generation). Who knows if anything will come out of it or if it'll be another crushed dream like PMX590.
 
Oooooh, it's all jolly exciting. ;) So presumably RTX is Caustic 2 by another name, branded under the PowerVR name since the acquisition. The IP will then be licensable for AMD and nVidia to incorporate. Hence the desire to promote Brazil 3 and OpenRL as the tools that devs will use on GPUs as they are, and then IHVs will want to incorporate the Caustic silicon for performance gains.

Any chance of squeezing IMG for a B3D article?
 
I just approved hammerd's post above, a few good tidbits in there,

He takes notes. :) I was working from memory of that presentation and also from a chat with some folks afterwards.

And tangey, they won't need 4-5 years to have it suitable for SoC integration - there's nothing magical about that.

I was advised that the major tech hurdle was getting the required performance in the sub-watt region. However, that might not be the reason for the 4-5 years. I got the impression that they need to educate the industry on the benefits of what the combined tech is capable of. It was constantly referred to as a "disruptive" technology, i.e. something that has the ability to move graphics in an entirely different direction, with a big big emphasis on the cost reductions that this technology can bring, for example, in game design.

Also, the time factor may be because they see themselves as pretty busy developing/delivering Rogue over the next number of years :)

Shifty Geezer said:
The IP will then be licensable for AMD and nVidia to incorporate
I did not hear any mention of licensing it to other graphics providers, my assumption is that IMG see this as having the capability to give them a unique offering in terms of performance/power/cost ratios.
 
tangey said:
I did not hear any mention of licensing it to other graphics providers, my assumption is that IMG see this as having the capability to give them a unique offering in terms of performance/power/cost ratios.
It is this from Hammerd that suggested such an intention:
RT will be offered to licensees as an option and won't be a standard part of the general IP (presumably for now).
That could of course just mean for various mobile SOC solutions rather than GPU IHVs, such as Apple licensing RTX and not SGX for an iPad.

Still, licensing to AMD+nVidia seems a necessary step to me. For OpenRL to be used in the realtime pipeline, it needs to offer suitable performance and be ubiquitous. Freely available tools can help adoption, but if only PVR GPUs support full-speed raytracing and the other IHVs run an order of magnitude slower, using RT will be too slow for most customers and it'll be ignored. If PVR license the tech to the other GPU manufacturers, they can get that mainstream adoption even if it means losing their exclusive vantage point. Unless they can seamlessly integrate RT into the DX/OGL pipeline, that feature in PowerVR GPUs will be like so many proprietary techs over the years that haven't been fully used because DX is built around common features.

I'm also not seeing where the massive savings in game design are to come from. 1/10 to 1/5?? How can modellers' time be saved? Optimising meshes can't be that costly, and there'll still be RAM issues that'll want careful crafting of resources. I guess lighting would be a lot cheaper, though, if it's all realtime and we lose the baking process.
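For what it's worth, here's a toy contrast of what I mean by baked versus fully realtime lighting - purely illustrative, not how any particular engine or the Caustic stack actually does it:

```cpp
// Toy contrast only: baked lightmap lookup vs a per-frame shadow-ray query.
#include <array>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Baked path: lighting precomputed offline into a texture. Any change to
// lights or geometry means re-running the bake before artists see the result.
struct Lightmap {
    std::array<float, 64 * 64> texels{};
    float sample(float u, float v) const {
        int x = static_cast<int>(u * 63.0f);
        int y = static_cast<int>(v * 63.0f);
        return texels[static_cast<unsigned>(y * 64 + x)];
    }
};

// Realtime path: ask the tracer "can this point see the light?" every frame.
// occluded() stands in for a hardware-accelerated shadow-ray query.
bool occluded(const Vec3& /*from*/, const Vec3& /*to*/) { return false; }

float shadeRealtime(const Vec3& p, const Vec3& lightPos, float lightIntensity) {
    if (occluded(p, lightPos)) return 0.0f;     // in shadow, nothing precomputed
    float dx = lightPos.x - p.x, dy = lightPos.y - p.y, dz = lightPos.z - p.z;
    float distSq = dx * dx + dy * dy + dz * dz;
    return lightIntensity / (distSq + 1.0f);    // simple falloff, no bake step anywhere
}

int main() {
    Lightmap baked;                             // would normally come from an offline bake
    std::printf("baked:    %.2f\n", baked.sample(0.5f, 0.5f));
    std::printf("realtime: %.2f\n", shadeRealtime({0, 0, 0}, {0, 5, 0}, 100.0f));
}
```

The saving, if there is one, is iteration time: artists see lighting changes immediately instead of waiting on a bake. I still don't see that adding up to 1/10 of overall development cost.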
 
Shifty Geezer said:
It is this from Hammerd that suggested such an intention:
That could of course just mean for various mobile SOC solutions rather than GPU IHVs, such as Apple licensing RTX and not SGX for an iPad.

Indeed I think this was what was being conveyed (TI/ST/Apple/whoever), not other graphics providers.

I think OpenRT is obviously meant to be hardware neutral, others who are interested in having their own RT IP would make their drivers OpenRT compliant.

Shifty Geezer said:
I'm also not seeing where the massive savings in game design are to come from. 1/10 to 1/5?? How can modellers' time be saved? Optimising meshes can't be that costly, and there'll still be RAM issues that'll want careful crafting of resources. I guess lighting would be a lot cheaper, though, if it's all realtime and we lose the baking process.

I don't know enough about the process, but my recollection is that it was indeed this: being RT, there's no baking process, and that is the major cost/time saver.
 
tangey said:
I think OpenRT is obviously meant to be hardware neutral, others who are interested in having their own RT IP would make their drivers OpenRT compliant.

It's "OpenRL", BTW, and, yes, it is hardware neutral.

OpenRT is an entirely different thing.
 
Dumb layman's question: if IMG wanted to integrate ray tracing into their GPU IP cores in the less foreseeable future, wouldn't they theoretically have an advantage due to ray casting? Before someone says it, I of course realize that ray tracing and ray casting are vastly different, but aren't there any technical lines crossed in theory?
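To be concrete about what I mean by the two terms: ray casting usually means firing only primary rays to find the visible surface, while ray tracing recursively spawns secondary (shadow/reflection) rays from each hit, and both lean on the same intersection machinery - that shared machinery is the sort of "technical line crossed" I'm wondering about, even though a TBDR resolves per-pixel visibility by rasterising tiles rather than by literally casting rays. A stubbed, illustrative-only sketch of the distinction:

```cpp
// Illustrative stub: shows the structural difference between ray casting and
// ray tracing, both built on the same scene-intersection kernel.
#include <cstdio>
#include <optional>

struct Ray { float ox, oy, oz, dx, dy, dz; };
struct Hit { float t, nx, ny, nz; };

// Shared kernel: find the nearest surface along a ray (stubbed out here).
std::optional<Hit> intersectScene(const Ray&) { return std::nullopt; }

// Ray casting: one primary ray per pixel, shade whatever it hits, done.
float rayCast(const Ray& primary) {
    auto hit = intersectScene(primary);
    return hit ? 1.0f : 0.0f;               // flat "visible or not" result
}

// Ray tracing: same primary ray, but each hit can spawn further rays.
float rayTrace(const Ray& ray, int depth) {
    auto hit = intersectScene(ray);
    if (!hit || depth == 0) return 0.0f;
    Ray secondary = ray;                     // real code would build a shadow/reflection ray from the hit
    return 0.5f + 0.5f * rayTrace(secondary, depth - 1);
}

int main() {
    Ray primary{0, 0, 0, 0, 0, -1};
    std::printf("cast:  %.2f\n", rayCast(primary));
    std::printf("trace: %.2f\n", rayTrace(primary, 2));
}
```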
 
tangey said:
I think OpenRT is obviously meant to be hardware neutral, others who are interested in having their own RT IP would make their drivers OpenRT compliant.
Okay, that's sort of sound in theory, but then they believe they've tied up the realtime RT tech very nicely with their patents. Hence, if Caustic + IMG have done a thorough job, it might not be possible for other IHVs to add effective RT hardware and they'll remain too slow for realtime apps, so the technique won't be targeted. For OpenRL to be fully adopted it has to be useable at speed for everyone. I don't know how IMG can solve that.

tangey said:
I don't know enough about the process, but my recollection is that it was indeed this: being RT, there's no baking process, and that is the major cost/time saver.
Maybe the saving figures given were 1/10 - 1/5 for some aspects of development, rather than for the overall cost? Otherwise a console using RT would be the developers' best friend and get all the interest!
 
tangey said:
I was advised that the major tech hurdle was getting the required performance in the sub-watt region.
Oh yes, that makes sense. I am under the impression they don't consider handheld to be their first market for it though. Of course, it may still turn out to be first (several years down the road) if they fail to find licensees elsewhere.

tangey said:
i.e. something that has the ability to move graphics in an entirely different direction, with a big big emphasis on the cost reductions that this technology can bring, for example, in game design.
Shifty said:
I'm also not seeing where the massive savings in game design are to come from. 1/10 to 1/5??
It was marketing bullshit for Larrabee, and it's marketing bullshit for Caustic. If genuinely fast enough it does shift some of the complexity from software to hardware/driver, but any small developer can get the exact same benefits in a different way by licensing a game engine. And there's no way it could reduce cost by 80% for anything but a tech demo.

tangey said:
Also, the time factor may be because they see themselves as pretty busy developing/delivering Rogue over the next number of years :)
They've already developed and delivered the first version of Rogue - while customer support does take a lot of time and effort, these are mostly separate teams from those doing the actual architecture R&D. I doubt it's much of an issue.

Shifty said:
Okay, that's sort of sound in theory, but then they believe they've tied up the realtime RT tech very nicely with their patents. Hence, if Caustic + IMG have done a thorough job, it might not be possible for other IHVs to add effective RT hardware and they'll remain too slow for realtime apps, so the technique won't be targeted. For OpenRL to be fully adopted it has to be useable at speed for everyone. I don't know how IMG can solve that.
There's always something weird with an open standard that requires specific patents to implement efficiently, especially when paired with a business plan that assumes an early mover advantage but eventual widespread adoption. I suppose that on paper they're hoping for others to implement the standard but much less efficiently - in practice, there's no way NVIDIA/AMD will be stupid enough to do that.
 
I'm sure they were wanting to make an attempt on the desktop/laptop space with Rogue. RTX is a move into the productivity workstation space that should prove effective. I wonder what Laa-Yosh thinks of this? I'm sure he'll be very pleased! Whether IMG can leverage that to get into the desktop space, I don't know. Even if raytracing proves more effective for modern complex scenes than rasterising, it'd be dependent on a sea change in the whole computing and rendering space. How do you get everyone to abandon DX and use raytracing instead? Yet without that change, Caustic hardware gives IMG no marketable advantage to compete with.
 
I don't think the "PC" market is a major target for them for a number of reasons.

The centre of gravity in graphics is shifting (or has shifted) in their favour, i.e. it's going lower-end: the obvious one is smartphones, but also tablets, netbooks, portable consoles, and some not-so-obvious ones like in-car.

At the same time IMG's tech is heading upwards, with options in netbooks and low-end laptops, outwards into digital cameras, and downwards into Qualcomm's low-end territory (i.e. MediaTek).

This makes the PC not the be-all and end-all it once was. Even at the PC level, the whole thing is moving to integrated. Add-in cards, once pretty standard issue in every PC, are slowly becoming increasingly "optional" (remember that your typical Beyond3D reader is not your typical PC user). And the PC segment itself is not as big as it was.

And in the market that still DOES remain for add-in cards, Nvidia and AMD are it; I can't see anyone else going in there.

One wonders if IMG can come up with something that makes Intel think about them for things other than low-end portable. But then you're into a whole other political ball game with Intel's in-house dept.
 