Adshir's LocalRay mobile-friendly realtime raytracing solution *spawn

chris1515

Raytracing is GPU-accelerated on the PS5 as per Cerny, so this is garbage.

https://wccftech.com/localray-promises-cross-platform-raytracing-even-for-smartphones/

Yes, but I think the solution is different from Xbox's. Maybe it's a Sony custom design, or a third-party provider like PowerVR, or a new one like LocalRay. Custom does not mean it's outside the GPU.

They said they have a contract with at least one console platform holder.

We designed it bottom to go after low-end devices. We have multiple deals across console and mobile sectors, but we are not allowed to talk about it until the second half of the

Adshir was founded a few years ago by Dr. Reuven Bakalash, who previously founded HyperRoll (acquired by Oracle in 2009) and Lucidlogix Technologies (acquired by Google last year). Bakalash has invented several raytracing-related patents (you can find five on Google's own patents database) which are now used in LocalRay to achieve an 'algorithmic approach'. This enables a 'proprietary dynamic data structure' that adapts automatically to scene modifications. Other keys to enabling raytracing even on battery-powered devices (as low as '2W smartphones') were reducing the complexity of ray/polygon intersections and doing essentially 'free' skin animation.
 
Last edited:
Isn't this software RT?

Maybe it's a library that can take advantage of HW features.

no hardware RT

It is unclear yet how LocalRay will be implemented, though. There's the option of licensing the software to be integrated into game development engines such as Unity or Unreal Engine; alternatively, it could be integrated at the device level by hardware makers.

I read it as a mobile solution. So is the PS5 going to have a mobile RT solution, in some way?

Mobile RT does not mean a bad solution here; it's made to scale from mobile to PC.



The demos are on a 1080.
 
Anyone with technical knowledge to confirm if this is sound?
Well, I skimmed the presentation video, just looking at the slides and ignoring any cloud-related stuff.
What we see is sharp reflections only, same as in Crytek's demo. The scenes have lower geometric complexity, but it's 60 fps, not 30. (Interesting: they mention a 5700 XT, which I guess they used for the robot demo shown.)

He mentions 'free skinning'. This makes me think he uses bounding spheres instead of boxes. I do this too, and AS update cost is indeed negligible because the bounds need no refitting with animation.
The problem here is that spheres are a bad fit, still worse than bounding boxes in their worst cases. A solution can be to subdivide the spheres further so they better approximate a flat surface.
But I would not call this alone an 'invention'. Still, it's rarely used, so maybe you can sell it as that if you keep the details secret.
Pure speculation of mine... I'll look up his patents later today...
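A minimal sketch of the point above (my own illustration, not Adshir's or anyone's actual code): a rigid bone transform only moves a bounding sphere's center and leaves its radius unchanged, so the bound is updated by the same transform already applied to the geometry, whereas an AABB has to be refit from the transformed vertices to stay tight.

```python
import math

def rotate_z(p, angle):
    """Rotate a 3D point around the Z axis (stand-in for a rigid bone transform)."""
    c, s = math.cos(angle), math.sin(angle)
    x, y, z = p
    return (c * x - s * y, s * x + c * y, z)

def sphere_update(center, radius, angle):
    # Sphere bound update: transform the center only.
    # The radius is invariant under any rigid transform, so no refit pass.
    return rotate_z(center, angle), radius

def aabb_refit(verts):
    # AABB bound update: must recompute min/max over the transformed
    # vertices every animation step to stay tight.
    xs, ys, zs = zip(*verts)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

# Sphere: one transform, no per-vertex work for the bound itself.
center, radius = sphere_update((0.0, 0.3, 0.2), 1.2, math.pi / 4)

# AABB: a per-vertex pass after the skinning transform.
verts = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (-1.0, 0.0, 0.5)]
moved = [rotate_z(v, math.pi / 4) for v in verts]
lo, hi = aabb_refit(moved)
```

This is why the AS update can be folded into the skinning work that runs anyway; the price, as noted above, is that spheres are looser bounds than boxes.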

I'm also doubtful Sony would sell this as 'HW RT'. I don't think it can compete with RTX, but it could still be good enough to be used in games. Still, NV's DXR fallback seems at least as powerful, from what we see.
 
Ok, after browsing the patents I see nothing special. Good work, but nothing exciting. Thing is, Sony could do this themselves easily, so I don't believe there is a connection.
 
Performance is also not great for real games, as the demos are simple. The 'mobile' example is 4 animated characters running at 40 fps on a Surface (Pro?) with only hard shadows. The high-end PC version is a single character. There's no evidence of it handling full-flavour (area lights, rough surfaces, etc.) ray-tracing. Very reminiscent of the early Cell RT demos in simply being classic RT image generation.
 
Agree, but maybe there is more hidden. The article says 20 people have worked on this for 7 years; that's a lot of manpower and time.
But I can't see soft shadows either. Maybe below the torus knot, but I don't think so. And no reflections of reflections.
Curious: why the need to 'port' a game to LocalRay? What could require significant changes to be compatible?
 

There are not only the 5 patents you see on Google; they have 24 patents, as the article says too. All the demos are pure software. If someone uses it in a hardware-accelerated way in a console or smartphone, it will probably go faster.
 
You mean running on the CPU only, no compute?
 
You mean running on the CPU only, no compute?

No, I said it is purely compute, not hardware-accelerated at all, in the demo. Maybe it will remain pure software this year, and it will not be used in next-generation consoles.

EDIT:
LocalRay solution is getting to market in two ways. It could license its software to enable a game engine such as Unity or Unreal to handle real-time ray tracing, or it could be a runtime ray tracing engine in the device level for hardware makers. The company expects that partners and customers will deploy systems with the software in 2020.

The second part looks like hardware acceleration, imo. Then again, maybe that will only be used in smartphones, and it will only be part of a console as software.

Adshir was started in 2011, when Bakalash started filing for patents. The company received its first funding in 2014. It has received 24 patents and has applied for eight more.

They have filed 32 patents and 24 are approved.
 
He mentions 'free skinning'. This makes me think he uses bounding spheres instead of boxes. I do this too, and AS update cost is indeed negligible because the bounds need no refitting with animation.
How do spherical bounding boxes equal free skinning? The problem with skinning is not just the high-level bounding box, which has to cover the entire potential range, but foremost the lower-level parts, as any structure not pre-baked for a specific pose can't properly bin triangles.

Or do you have to use "mid level" bounding boxes grouped by rig hierarchy rather than raw model-space triangle binning? I suppose in that case you can actually estimate worst-case bounding boxes for each group, and then refine/transform the inner bounding volumes by the current rig pose. The AS has to be aware of the rig at the time of evaluation in order to support "free skinning", and has to be baked with the full possible range of motion in mind.

But how would that be "for free"? You'd have to store the active rig inside the AS in order to avoid additional loads, you'd have to perform additional math to transform the effective bounding boxes according to the current rig, etc.
 
How do spherical bounding boxes equal free skinning?
In my case, using a hierarchy of surfels, each surfel also has a bounding sphere. So for characters I only need to ensure the bounds are large enough to bound their children under any animation. (Ensuring they're large enough is a matter of precalculation.)
Animation is then just transforming the surfels, which I need to do anyway, so the BVH update is free for me. There is no extra complexity like mid levels or taking character bones into account.

The same would work for triangles. You would only need to transform the BVH using the same skinning system as for the mesh itself, with no bottom-up dependencies.
It would also work for bounding boxes, in case you do not need the guarantee that the parent box bounds the child boxes, and the guarantee of bounding only the triangles suffices (which is the case for RT).

I think this will become the norm. Tracing speed suffers, but FF HW will compensate, and dynamic geometry no longer has extra cost.
So no refitting; just transform the bottom AS and rebuild the top levels as usual.
It might be worth shrinking the bounds so they bound the triangles exactly, to prevent the hit on tracing speed at the lowest levels of the tree.
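A toy sketch of the scheme described above (my interpretation only; function names are made up): per-cluster spheres are precomputed conservatively over the full range of motion, so each animation frame only transforms the sphere centers with their bone transforms and rebuilds a small top level over the results, with no bottom-up refit of inner nodes.

```python
import math

def transformed(cluster_spheres, bone_transforms):
    # Per-frame "update": apply each cluster's bone transform to its
    # sphere center only. Radii were precomputed to be conservative
    # over the range of motion, so they never change.
    return [(bone(c), r) for (c, r), bone in zip(cluster_spheres, bone_transforms)]

def top_level_bound(spheres):
    # Cheap conservative top-level rebuild: centroid of child centers,
    # radius covering the farthest child sphere entirely.
    n = len(spheres)
    cx = sum(c[0] for c, _ in spheres) / n
    cy = sum(c[1] for c, _ in spheres) / n
    cz = sum(c[2] for c, _ in spheres) / n
    center = (cx, cy, cz)
    radius = max(math.dist(center, c) + r for c, r in spheres)
    return center, radius

# Usage: two clusters under identity "bones"; only the top level is rebuilt.
spheres = [((0.0, 0.0, 0.0), 1.0), ((4.0, 0.0, 0.0), 1.0)]
bones = [lambda p: p, lambda p: p]
root = top_level_bound(transformed(spheres, bones))
```

The design choice this illustrates: the guarantee is only that each leaf sphere bounds its own triangles, not that parents tightly bound refit children, which is exactly the relaxation mentioned above that makes the per-frame cost trivial.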
 
It's just software, who cares. There's a 98% chance that whatever neat thing it is, it's replicable in a convenient way that gets around whatever stupid patents they came up with. The asshole 'inventor' would've done better to just publish it as a paper than sit there pretending he's going to make money off it; rendering patents never seem worth much beyond annoyance.
 
With PS and XSX having AMD HW RT, they should have no use for it.
What's left? Nintendo, the Atari box, Chinese consoles? Only the former could be called a leading manufacturer in the console space, and its device is a mobile one.
On the other hand, the video shown on a Samsung mobile has too low a framerate and too restricted a use of reflections (on only one object) to be worth it.
So, maybe current-gen PS4 or XBone?

I'm not convinced by what he says. There is no technical detail or argument. Actually, he talks more about the downsides of RTX than about his own tech.
 