PowerVR Wizard Architecture

A rough translation, sorry for my poor English:

The GR6500 is not an FPGA but a real chip. The card contains 1 GB of low-speed RAM, to simulate what those mobile devices use.
The demo below is running on a 4-way GR6500 setup, which handles 3 billion polygons at 1080p, 30 fps.

The article didn't mention any customers.
 
This is great. :)

It's a shame no one seems to want it, since to use ray tracing one has to rewrite the renderers.

Only Apple seems to be in a position to get devs to rewrite software. It's a shame they don't care for it.
 
It's a shame no one seems to want it, since to use ray tracing one has to rewrite the renderers.
That rewrite would probably be a piece of cake. I've been playing with OpenRL, and it basically takes most of the work off your hands: you compose a complete scene on the driver side and then just trigger the rendering; the OpenRL/Caustic driver handles all the rest.
It's similar with OptiX.
Yet it isn't rocket science either to write your own tracer (I'd guess every rendering/gfx coder has made at least one in their lifetime).
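To make the "write your own tracer" point concrete, here is a minimal, self-contained CPU sketch: one sphere, one directional light, Lambert shading, written to an ASCII image. This is generic illustration code, not OpenRL, OptiX or PowerVR API code; the scene and camera setup are made up.

```cpp
// Minimal CPU ray tracer sketch: one sphere, one directional light, Lambert shading.
// Generic illustration only -- not OpenRL or OptiX API code.
#include <cstdio>
#include <cmath>

struct Vec { float x, y, z; };
static Vec sub(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec norm(Vec v) { float l = std::sqrt(dot(v, v)); return {v.x/l, v.y/l, v.z/l}; }

// Distance along the ray to the closest hit, or -1 if the sphere is missed.
static float intersectSphere(Vec o, Vec d, Vec c, float r) {
    Vec oc = sub(o, c);
    float b = dot(oc, d);
    float h = b*b - (dot(oc, oc) - r*r);
    if (h < 0.0f) return -1.0f;
    return -b - std::sqrt(h);
}

int main() {
    const int W = 256, H = 256;
    Vec sphereC = {0, 0, -3}; float sphereR = 1.0f;
    Vec lightDir = norm({1, 1, 1});
    std::printf("P2\n%d %d\n255\n", W, H);              // ASCII PGM image to stdout
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // Camera at the origin, pinhole projection through a unit image plane.
            Vec dir = norm({(x + 0.5f)/W*2 - 1, 1 - (y + 0.5f)/H*2, -1});
            float t = intersectSphere({0, 0, 0}, dir, sphereC, sphereR);
            float shade = 0.0f;
            if (t > 0.0f) {
                Vec p = {dir.x*t, dir.y*t, dir.z*t};
                Vec n = norm(sub(p, sphereC));
                shade = std::fmax(0.0f, dot(n, lightDir));   // simple Lambert term
            }
            std::printf("%d ", (int)(shade * 255));
        }
        std::printf("\n");
    }
    return 0;
}
```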

Only Apple seems to be in a position to get devs to rewrite software. It's a shame they don't care for it.
Maybe we don't do it because we care? ;)
Using tech for the sake of it isn't really beneficial. We are rather goal-oriented: "how can you make it look twice as good with 1% more GPU usage?"
E.g. you can get a nice DoF by separating the view by depth into 3 planes, doing some blur passes and compositing them by depth again. This isn't really correct, as you don't capture occluded objects in the far-depth case. A simple way to fix that would be to just render these 3 image layers separately, but we don't. Why? Because it would take roughly 3 times as long, and you'd only see some difference if you really, really pay attention, which 99% of the players won't. Now we could go one step further, with path tracing, which would cost 100 times more (simply because you trace and shade 100 times more samples per pixel). That would obviously look a little more accurate, but 99.99% of the players would never notice it (and might actually complain about the noise).
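To make the 3-plane DoF idea concrete, here is a rough C++ sketch of the per-pixel split/composite step; the linear feathering and the assumption that the blurs are produced elsewhere are my own simplifications, not a description of any shipped renderer.

```cpp
// Rough per-pixel sketch of the 3-plane DoF described above: bucket a pixel by depth
// into near/focus/far layers, then recombine a sharp image with pre-blurred near and
// far layers. Thresholds and feathering are placeholders, not a real implementation.
#include <algorithm>

struct Color { float r, g, b; };

// Weights of the three layers for one pixel, based on its depth and the focus range.
// Linear feathering across the layer boundaries avoids popping seams; as long as
// feather < (focusFar - focusNear), wFocus stays in [0, 1].
void layerWeights(float depth, float focusNear, float focusFar, float feather,
                  float& wNear, float& wFocus, float& wFar) {
    wNear  = std::clamp((focusNear - depth) / feather, 0.0f, 1.0f);
    wFar   = std::clamp((depth - focusFar) / feather, 0.0f, 1.0f);
    wFocus = 1.0f - wNear - wFar;
}

// Composite: sharp focus layer plus blurred near/far layers, weighted by depth.
Color composite(Color sharp, Color nearBlur, Color farBlur,
                float wNear, float wFocus, float wFar) {
    return { sharp.r*wFocus + nearBlur.r*wNear + farBlur.r*wFar,
             sharp.g*wFocus + nearBlur.g*wNear + farBlur.g*wFar,
             sharp.b*wFocus + nearBlur.b*wNear + farBlur.b*wFar };
}
```

The missing-occluded-objects problem mentioned above is exactly what this cannot fix: the blurred far layer only contains what the single rendered image saw.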

The first step toward tracing would be not to rewrite the renderer completely, but to add tracing on top to achieve something beneficial. And look no further, because that's what is already happening: nowadays most games use "screen space reflections", which is simply tracing along rays in the depth buffer (a minimal sketch of that ray march follows the list below), and it seems to be beneficial compared to reflection planes/textures. E.g. the water in the first cutscene of Ryse uses a custom water shader I wrote, to achieve some things that are not really possible (in a simple way) with reflection rendering (or rasterization):
1. The water has traced subsurface scattering (which sadly is very subtle in those lighting conditions).
2. There is the player character who drops into the water and swims, getting very localized reflections on very rough (even bending) water waves.
3. There are large-scale objects like the chain, which need distant reflections and at the same time really accurate reflections where they intersect the waves.
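For reference, the "tracing along rays in the depth buffer" that screen-space reflections do can be sketched roughly like this; the fixed step size, camera model and thickness test are placeholders of mine, not how any particular engine does it (real implementations usually march in screen space with a hierarchical depth buffer).

```cpp
// Minimal sketch of marching a reflected ray against a depth buffer, the core idea
// behind screen-space reflections. Buffer layout, step count and thickness are made up.
#include <cmath>

struct Vec3 { float x, y, z; };

// Linear view-space depth buffer, w*h floats, row-major.
struct DepthBuffer {
    const float* depth;
    int w, h;
    float at(int x, int y) const { return depth[y * w + x]; }
};

// March a reflected ray from a view-space position along a view-space direction;
// return true and the hit pixel if the ray dips behind the stored depth.
bool traceScreenSpace(const DepthBuffer& db, Vec3 origin, Vec3 dir,
                      float focalLen, int steps, float thickness,
                      int& hitX, int& hitY) {
    for (int i = 1; i <= steps; ++i) {
        float t = 0.1f * i;                              // fixed step size (placeholder)
        Vec3 p = { origin.x + dir.x*t, origin.y + dir.y*t, origin.z + dir.z*t };
        if (p.z >= 0.0f) return false;                   // view space looks down -z
        // Pinhole projection into pixel coordinates (assumed camera model).
        int px = (int)((p.x * focalLen / -p.z + 0.5f) * db.w);
        int py = (int)((0.5f - p.y * focalLen / -p.z) * db.h);
        if (px < 0 || py < 0 || px >= db.w || py >= db.h) return false;
        float sceneDepth = db.at(px, py);
        float rayDepth = -p.z;
        // Hit if the ray has gone behind the stored surface, but not by more than
        // 'thickness' (to avoid tracing straight through thin objects).
        if (rayDepth > sceneDepth && rayDepth < sceneDepth + thickness) {
            hitX = px; hitY = py;
            return true;
        }
    }
    return false;
}
```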

Another case where some kind of tracing is used is "The Last of Us", where the occlusion of light by dynamic objects is "kind of" traced; the dynamic objects are represented roughly by spheres.
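That sphere-occlusion trick can be sketched as a simple analytic test: how strongly a single blocker sphere shadows a point for a given light direction. The soft falloff below is a generic approximation chosen for illustration, not the actual term used in that game.

```cpp
// Analytic occlusion of a directional light by one sphere: 0 = fully shadowed,
// 1 = unshadowed, with a simple distance-based soft edge. Illustration only.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// p: shaded point, lightDir: normalized direction toward the light,
// center/radius: the blocker sphere, softness: penumbra width in world units.
float sphereShadow(Vec3 p, Vec3 lightDir, Vec3 center, float radius, float softness) {
    Vec3 toCenter = { center.x - p.x, center.y - p.y, center.z - p.z };
    float along = dot(toCenter, lightDir);       // closest approach along the light ray
    if (along <= 0.0f) return 1.0f;              // sphere lies behind the point
    float distSq = dot(toCenter, toCenter) - along * along;
    // Signed distance from the light ray to the sphere surface; negative = ray passes through.
    float d = std::sqrt(std::max(distSq, 0.0f)) - radius;
    return std::clamp(d / softness, 0.0f, 1.0f); // fade the shadow out over 'softness'
}
```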

POM/virtual displacement/parallax mapping is another very localized effect where rays are traced to achieve something that wasn't otherwise possible within acceptable performance constraints.
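The ray march behind parallax occlusion mapping, stepping the eye ray through a height field until it drops below the stored surface, looks roughly like this; the nearest-neighbour lookup, fixed linear step count and tangent-space conventions are my simplifications.

```cpp
// Sketch of the ray march behind parallax occlusion mapping: walk the eye ray through
// a height field stored as a texture and stop where it falls below the surface.
#include <algorithm>

struct HeightField {
    const float* h;                                  // heights in [0,1], row-major
    int w, rows;
    float sample(float u, float v) const {           // nearest-neighbour, clamped
        int x = std::clamp((int)(u * w), 0, w - 1);
        int y = std::clamp((int)(v * rows), 0, rows - 1);
        return h[y * w + x];
    }
};

// (u,v): texture coordinate at the surface; viewTS{x,y,z}: normalized tangent-space
// direction from the surface toward the eye (z > 0). Returns the displaced
// coordinate to use for the final colour/normal fetches.
void parallaxOcclusion(const HeightField& hf, float u, float v,
                       float viewTSx, float viewTSy, float viewTSz,
                       float heightScale, int steps, float& outU, float& outV) {
    // How far the texture coordinate shifts per step as the ray descends one layer.
    float du = viewTSx / viewTSz * heightScale / steps;
    float dv = viewTSy / viewTSz * heightScale / steps;
    float rayHeight = 1.0f;                          // start at the top of the height volume
    float layer = 1.0f / steps;
    outU = u; outV = v;
    for (int i = 0; i < steps; ++i) {
        if (hf.sample(outU, outV) >= rayHeight)      // ray is now below the surface: hit
            return;
        outU -= du; outV -= dv;                      // march against the eye direction
        rayHeight -= layer;
    }
}
```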

If ImgTec wanted to push tracing, they'd have to send devkits to developers, free, in large quantities.
 
If ImgTec wanted to push tracing, they'd have to send devkits to developers, free, in large quantities.

IMGTEC cannot push ray tracing, no matter how much they want it. They sit too far from devs/users for that.

Only a SoC owner with a lot of dev influence can do this.

That leaves Intel/NV/Apple.
 
IMGTEC cannot push ray tracing, no matter how much they want it. They sit too far from devs/users for that.
They were selling the R2500 and R2100; why wouldn't they have been able to send some to influential/creative guys like Carmack back then?
 
And as great as Carmack is, his endorsements mean what?

There is no business case for investing in ray tracing apps without consumers owning the hardware, and no business case for making ray tracing hardware without apps being available. IMGTEC makes neither.

The only way this stalemate breaks is if Intel/NV/Apple add ray tracing to existing SoCs.
 
Summing up your arguments:
- they have no way to push it
- they have no reason (no business case) to push it.

Extrapolating: SoC manufacturers have no reason to license it (even if RT did take off, ImgTec would most likely not deliver anything compatible or with software support).

If all of that is correct, it makes me wonder why they invest money in it and make some fuss with demos every now and then. Is it just to increase the stock value for the day someone buys ImgTec for the patents?
 
Is this limited to polygonal environments, or can one use it to accelerate tracing of, say, signed distance fields, voxel scene representations, height-field textures, etc.? Is it fast enough for more than a few rays per pixel?
 
Summing up your arguments:
- they have no way to push it
- they have no reason (no business case) to push it.

Extrapolating: SoC manufacturers have no reason to license it (even if RT did take off, ImgTec would most likely not deliver anything compatible or with software support).

If all of that is correct, it makes me wonder why they invest money in it and make some fuss with demos every now and then. Is it just to increase the stock value for the day someone buys ImgTec for the patents?
Hope springs eternal...
 
Summing up your arguments:
- they have no way to push it
- they have no reason (no business case) to push it.

Extrapolating: SoC manufacturers have no reason to license it (even if RT did take off, ImgTec would most likely not deliver anything compatible or with software support).

Unless IMG is deliberately hiding something, there doesn't seem to be, so far, even one licensee for the Wizard IP. Assuming there truly isn't any, how high are the chances that there will be?

I was at one point naive enough to think that, since Caustic was actually a small startup driven by some former Apple folks, Apple might be interested in the technology after all.

If all of that is correct, it makes me wonder why they invest money in it and make some fuss with demos every now and then. Is it just to increase the stock value for the day someone buys ImgTec for the patents?

IHVs come up with new ideas now and then; that doesn't mean all of them turn into a success. That doesn't mean I believe the RT approach is bad, rather the contrary. I just don't see a necessity for it, let alone in the ULP SoC mobile world.
 
Unless IMG is deliberately hiding something, there doesn't seem to be, so far, even one licensee for the Wizard IP. Assuming there truly isn't any, how high are the chances that there will be?

I was at one point naive enough to think that, since Caustic was actually a small startup driven by some former Apple folks, Apple might be interested in the technology after all.

I would need to re-listen to be sure, but my understanding is that in the last post-results conference call (actually it was in discussions after the AGM in September), the IMG CEO stated that RT would be licensed in early 2016, and that it would be announced (although I imagine not the licensee).

I need to correct that. It was in post-AGM discussions in September, and his expectation was that the first licensing would take place in calendar Q1 2016, and that it would be announced. At that time, there were several customers waiting for RT dev boards to assess the tech. Given the upthread posting, those dev boards must be out now.
 
Samsung has been busy developing hybrid RT/rasteriser IP for mobile SoC inclusion.

http://dl.acm.org/citation.cfm?id=2818442

Published at the beginning of November, for SIGGRAPH Asia '15.

It makes reference to James McCombe (formerly Caustic/IMG) and also to advantages vs. the GR6500.

Interesting. However, the paper doesn't sound like a potential IP license from Samsung, but rather an in-house developed hybrid GPU. I'm not that confident when it comes to Samsung and GPU development, but that's another chapter. If they do have something in mind that involves RT, then yes, they might license Wizard if they don't have any other viable alternative.

I'm keeping in the back of my mind that there is most likely a potential licensee for it early next year.
 
Online translation for it is fairly understandable....

The PowerVR GR6500, codenamed Wizard, is a variant of the PowerVR Series6XT (Rogue) equipped with the ray tracing module. It is clocked at 600 MHz and integrates 4 clusters for a compute throughput of 150 GFLOPS (300 GFLOPS at low precision). An identical configuration, apart from the ray tracing, to the GPU of the Apple A8 (iPhone 6), but clocked at a doubled frequency. Imagination said that this PowerVR GR6500 was manufactured at 28nm, measures just over 100 mm² and consumes 4.5 W, so we are closer to the mobile world than to a big desktop chip.
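As a quick sanity check on those numbers (my arithmetic, not from the article): 150 GFLOPS at 600 MHz is 250 FP32 operations per clock for the whole GPU, or roughly 64 per clock per cluster, i.e. about 32 FMAs per cluster per clock if an FMA counts as two operations; the 300 GFLOPS "low accuracy" figure would then simply be the FP16 rate doubling that.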


Do I understand the above correctly, that the RT block in the GR6500 is clocked twice as high as the rest of the GPU?

Also, isn't a tad over 100 mm² at 28nm quite big?
 
Online translation for it is fairly understandable....



Do I understand the above correctly, that the RT block in the GR6500 is clocked twice as high as the rest of the GPU?

Also, isn't a tad over 100 mm² at 28nm quite big?

That's not what the French quote is saying. They said that the GR6500 is the same configuration (except the RT block, of course) as the PVR GPU found in the iPhone 6, but the clock speed is doubled.
 