Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

The major sticking point behind Nanite is that it needs some form of forward progress model so that it can use persistent threads to do hierarchical culling. I don't see how dynamic parallelism will get us there, since that's mostly a software feature, while forward progress guarantees are mostly a hardware property of how GPU scheduling works ...
Persistent threads are a way to implement dynamic parallelism: you just have a megakernel inside the persistent threads, and it jumps entire workgroups to the appropriate branch when sufficient work items are available. It can run any task in parallel dynamically with cooperative multithreading. Starvation comes from reserving so many threads outright and trying to do an end-run around the scheduler without knowing its exact internals, but the aim is dynamic parallelism. Just like on CPUs, cooperative multithreading under complete control of the code can have some performance advantages.

PS. It's possible the starvation also comes from the way they do the queuing; there are some advantages to playing fast and loose (or at least there were 7 years ago, to the best of their knowledge; I know next to nothing about it). Regardless, the ultimate aim of persistent threads for Epic is clearly dynamic parallelism.
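A minimal sketch of the persistent-threads pattern described above. Everything here is a made-up illustration, not Epic's code: the task names ("cull_node", "draw_cluster") and queue layout are assumptions, and a real implementation runs this loop per workgroup on the GPU with atomics rather than in Python.

```python
from collections import deque

# Persistent-threads sketch (illustrative only): a "workgroup" never exits;
# it loops inside one megakernel, pops a task from a shared queue, and jumps
# to the branch for that task type. Tasks may push child tasks back onto the
# queue, which is the DIY dynamic parallelism: the kernel-side loop spawns
# its own work with no host round trip.

def megakernel(queue, results):
    """One persistent workgroup: keep running until no work remains."""
    while queue:
        kind, payload = queue.popleft()
        if kind == "cull_node":            # hierarchical-culling branch
            # Interior nodes expand into child cull tasks; leaves become
            # draw tasks (new work items produced by the kernel itself).
            if payload["children"]:
                for child in payload["children"]:
                    queue.append(("cull_node", child))
            else:
                queue.append(("draw_cluster", payload["id"]))
        elif kind == "draw_cluster":       # rasterisation branch
            results.append(payload)

# Tiny two-level hierarchy: a root node with two leaf clusters.
tree = {"id": 0, "children": [{"id": 1, "children": []},
                              {"id": 2, "children": []}]}
work = deque([("cull_node", tree)])
drawn = []
megakernel(work, drawn)
print(sorted(drawn))  # [1, 2]
```

The point of the sketch is only the shape of the loop: the scheduler lives inside the kernel, and the kernel feeds itself new work.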
 
Tom's Hardware is expecting AMD's RDNA3 to support dedicated hardware (BVH) ray tracing acceleration akin to Intel and NV. Launch is in a bit more than a week. Also of note, the 6900 XT can be found for as low as $650 new, which is a very good deal imo.

 
Persistent threads are a way to implement dynamic parallelism: you just have a megakernel inside the persistent threads, and it jumps entire workgroups to the appropriate branch when sufficient work items are available. It can run any task in parallel dynamically with cooperative multithreading. Starvation comes from reserving so many threads outright and trying to do an end-run around the scheduler without knowing its exact internals, but the aim is dynamic parallelism. Just like on CPUs, cooperative multithreading under complete control of the code can have some performance advantages.

PS. It's possible the starvation also comes from the way they do the queuing; there are some advantages to playing fast and loose (or at least there were 7 years ago, to the best of their knowledge; I know next to nothing about it). Regardless, the ultimate aim of persistent threads for Epic is clearly dynamic parallelism.
I don't think Nanite needs dynamic parallelism, and I think you're overstating the importance of its functionality. All dynamic parallelism does is let the device (GPU) spawn its own kernels without the host (CPU) doing it ...

Dynamic parallelism by itself will not stop GPUs from potentially deadlocking themselves when executing blocking algorithms. Both CUDA and OpenCL had dynamic parallelism, but those APIs still didn't make any guarantees about forward progress until Volta released with independent thread scheduling. Nanite's persistent threads optimization already works with current APIs and shading languages via undefined behaviour, and they don't have APIs for dynamic parallelism either. The next necessary step to make it consistent across drivers/HW is to make this specific scheduling behaviour legally binding from an API/shading-language perspective, so that some form of forward progress guarantee is enshrined in the specification ...
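To illustrate the forward progress point with a toy model (entirely assumed; `block0`, `block1`, and both schedulers are hypothetical, not any real API): a block that spin-waits on a flag set by another block completes under fair, independent scheduling, but livelocks under a legal run-to-completion schedule. Dynamic parallelism alone changes neither case.

```python
# Toy model of GPU block scheduling. Each "block" is a generator that yields
# once per scheduling step. Without a forward progress guarantee, a scheduler
# may legally run blocks to completion in launch order, so a blocking
# algorithm (block 0 spins on a flag that only block 1 sets) never finishes.

def run(schedule_fairly, max_steps=100):
    flag = {"set": False}

    def block0():
        while not flag["set"]:   # spin-wait on work done by another block
            yield "spinning"
        yield "done"

    def block1():
        flag["set"] = True       # the "producer" the spinner depends on
        yield "done"

    blocks = [block0(), block1()]
    if schedule_fairly:
        # Independent thread scheduling: round-robin every resident block,
        # so block 1 eventually runs and the spin-wait terminates.
        live = list(blocks)
        for _ in range(max_steps):
            for b in list(live):
                if next(b) == "done":
                    live.remove(b)
            if not live:
                return "completed"
    else:
        # Run-to-completion in launch order: block 0 must finish before
        # block 1 is ever scheduled, so it spins forever (livelock).
        b = blocks[0]
        for _ in range(max_steps):
            if next(b) == "done":
                break
        else:
            return "deadlocked"
    return "completed"

print(run(schedule_fairly=False))  # deadlocked
print(run(schedule_fairly=True))   # completed
```

The takeaway matches the post: whether this pattern works is a property of the scheduler, which is exactly what specifications left undefined before hardware-level forward progress guarantees.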
 
Nanite's persistent threads optimization already works with current APIs and shading languages via undefined behaviour, and they don't have APIs for dynamic parallelism either.

That's mostly the point of persistent threads: the scheduler is in the kernel itself, so it needs no API for dynamic parallelism; it DIYs it.
 
Where? Looks good to me:

[attachment 7357: screenshot]

Are you just seeing some situations of very high contrast lighting?

I think I read that Lumen currently supports only 2 spatial bounces? Minimal actual spatial bounces with texture feedback tend to create extreme contrast scenarios that wouldn't be there with more spatial bounces. I'm wondering if the results we see here (the small tree in the midground going to black while the sky and even parts of the ground go to white) are the result of that, or the result of clipping past the floating point range, and this is more or less correct-ish.
 
I think I read that Lumen currently supports only 2 spatial bounces? Minimal actual spatial bounces with texture feedback tend to create extreme contrast scenarios that wouldn't be there with more spatial bounces. I'm wondering if the results we see here (the small tree in the midground going to black while the sky and even parts of the ground go to white) are the result of that, or the result of clipping past the floating point range, and this is more or less correct-ish.
What I saw, without looking too closely, looked to me like a difficult-to-expose-for situation, or even expanded contrast for artistic effect. It seemed in line with what I'd expect from photography. Considering much of the GI looks well balanced with the direct lighting, it doesn't smack of a technical limitation to me.
 
Does anyone remember that company that used point cloud data (voxels) for 'Unlimited Detail'?

Their system displayed only one voxel per pixel using search algorithms some 10 years ago, and it was sort of Nanite 10 years before we got Nanite.

Everyone laughed at them 10 years ago and accused them of faking the tech, but looking back at it now, it might have just been way ahead of its time.
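The "one voxel per pixel via search" idea can be sketched like this. This is a toy with assumed details (a dense z-march over a hypothetical voxel set, orthographic camera); the actual system reportedly used far more sophisticated hierarchical search structures:

```python
# Sketch of per-pixel point search: instead of rasterising every point in
# the scene, each screen pixel runs a search that returns at most one
# voxel -- the nearest occupied cell along that pixel's view ray. Cost per
# pixel is one search, roughly independent of total scene point count.

def render(occupied, width, height, depth):
    """occupied: set of (x, y, z) voxels; returns nearest hit z per pixel."""
    image = {}
    for y in range(height):
        for x in range(width):
            # March front-to-back and stop at the first hit, so each pixel
            # contributes exactly zero or one voxel to the image.
            for z in range(depth):
                if (x, y, z) in occupied:
                    image[(x, y)] = z
                    break
    return image

scene = {(0, 0, 5), (0, 0, 9), (1, 0, 2)}   # two voxels share pixel (0, 0)
print(render(scene, 2, 1, 10))  # {(0, 0): 5, (1, 0): 2}
```

The occluded voxel at z=9 is never touched, which is the sense in which the output work scales with pixels rather than with scene detail.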
 
Does anyone remember that company that used point cloud data (voxels) for 'Unlimited Detail'?

Their system displayed only one voxel per pixel using search algorithms some 10 years ago, and it was sort of Nanite 10 years before we got Nanite.

Everyone laughed at them 10 years ago and accused them of faking the tech, but looking back at it now, it might have just been way ahead of its time.
Funnily enough, they had to solve similar issues Unreal Engine 5 was trying to solve, like applying their Nanite-like tech to animated objects.
I would agree with you that the whole idea was ahead of its time. They were looking in the right direction, but the tech wasn't there yet.
 
Funnily enough, they had to solve similar issues Unreal Engine 5 was trying to solve, like applying their Nanite-like tech to animated objects.
I would agree with you that the whole idea was ahead of its time. They were looking in the right direction, but the tech wasn't there yet.

The last I saw of them, they did manage to get it working on animated objects, but it looked like PS1/N64-era animation quality.

And the storage requirements were huge.
 
 
Does anyone remember that company that used point cloud data (voxels) for 'Unlimited Detail'?

Their system displayed only one voxel per pixel using search algorithms some 10 years ago, and it was sort of Nanite 10 years before we got Nanite.

Everyone laughed at them 10 years ago and accused them of faking the tech, but looking back at it now, it might have just been way ahead of its time.
Thread discussing it:

And seriously? You want to compare this to Nanite?


[attached screenshot]
 
Thread discussing it:

And seriously? You want to compare this to Nanite?


[attached screenshot]
The Unlimited Detail tech, though, looked much better than that, and the idea was very similar. He isn't saying it produced the same quality as UE5, but the idea was there with point data.
 