Even PowerVR raytracing can't be a good RT solution?
They use a HW unit for reordering, and reordering is the only way to address the bad memory access patterns of incoherent rays.
But it also adds a lot of constant cost, so NV's approach seems preferable to get started, IMO.
We should approach this with some patience. For example, the progress in denoising was not there yet when ImgTech made their RT GPUs, and neither was the idea of traversal shaders for LOD, AFAIK.
So we can expect progress from other directions as well, and HW reordering would not help with coherent and short rays (sharper reflections / shadows from smaller man-made light sources, AO).
I'm afraid HW reordering would result in underutilized chip area, and I hope we can address this with a software solution instead in the future.
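To make that concrete, here's a minimal sketch of what such a software reordering pass could look like: bin rays by direction octant plus a coarse origin cell, then sort so that rays in the same bin tend to traverse similar BVH nodes. All names here are mine for illustration, not from any real API, and on a GPU the std::sort would become a binning / radix sort pass:

```cpp
#include <cstdint>
#include <cmath>
#include <vector>
#include <algorithm>

// Hypothetical ray layout, for illustration only.
struct Ray { float ox, oy, oz, dx, dy, dz; };

// Bin key: 3 bits for the direction octant, plus a coarse origin cell.
// Rays sharing a key tend to hit similar BVH nodes, improving cache reuse.
static uint32_t BinKey(const Ray& r, float cellSize)
{
    uint32_t octant = (r.dx < 0.f ? 1u : 0u)
                    | (r.dy < 0.f ? 2u : 0u)
                    | (r.dz < 0.f ? 4u : 0u);
    // Quantize the origin into a coarse grid (5 bits per axis here).
    uint32_t cx = uint32_t(int32_t(std::floor(r.ox / cellSize))) & 31u;
    uint32_t cy = uint32_t(int32_t(std::floor(r.oy / cellSize))) & 31u;
    uint32_t cz = uint32_t(int32_t(std::floor(r.oz / cellSize))) & 31u;
    return (octant << 15) | (cx << 10) | (cy << 5) | cz;
}

// Returns an ordering of ray indices; traversal then processes rays bin by bin.
std::vector<uint32_t> ReorderRays(const std::vector<Ray>& rays, float cellSize)
{
    std::vector<uint32_t> keys(rays.size());
    std::vector<uint32_t> order(rays.size());
    for (uint32_t i = 0; i < rays.size(); ++i) { keys[i] = BinKey(rays[i], cellSize); order[i] = i; }
    std::sort(order.begin(), order.end(),
              [&](uint32_t a, uint32_t b) { return keys[a] < keys[b]; });
    return order;
}
```

The catch is exactly the constant cost mentioned above: the binning/sorting pass eats bandwidth and adds sync points even when the rays were coherent to begin with.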
ImgTech also has HW BVH build. That may be more promising in the long run, but I don't know how much of a bottleneck this really is with RTX, which uses compute and some CPU for it. Everybody says it's no big issue.
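For reference, the usual compute-friendly way to build such a tree (LBVH style; I'm not claiming this is what the RTX driver actually does) boils down to quantizing primitive centroids to Morton codes and sorting them, after which the hierarchy falls out of the sorted order (Karras-style construction):

```cpp
#include <cstdint>
#include <algorithm>

// Spread the lower 10 bits of v so they sit 3 positions apart (standard bit trick).
static uint32_t ExpandBits(uint32_t v)
{
    v = (v * 0x00010001u) & 0xFF0000FFu;
    v = (v * 0x00000101u) & 0x0F00F00Fu;
    v = (v * 0x00000011u) & 0xC30C30C3u;
    v = (v * 0x00000005u) & 0x49249249u;
    return v;
}

// 30-bit Morton code for a centroid already normalized into the unit cube [0,1]^3.
// Sorting primitives by this key places spatially nearby triangles next to each
// other, which is what the parallel tree emission afterwards relies on.
static uint32_t Morton3D(float x, float y, float z)
{
    x = std::min(std::max(x * 1024.0f, 0.0f), 1023.0f);
    y = std::min(std::max(y * 1024.0f, 0.0f), 1023.0f);
    z = std::min(std::max(z * 1024.0f, 0.0f), 1023.0f);
    return (ExpandBits((uint32_t)x) << 2) |
           (ExpandBits((uint32_t)y) << 1) |
            ExpandBits((uint32_t)z);
}
```

So the whole build is basically a sort plus some tree emission kernels, which makes it plausible that, as people say, it isn't the main bottleneck; I'd guess dedicated HW would mostly help when many dynamic objects need rebuilds every frame.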
Mobile GPUs are less powerful, so having FF units for everything makes more sense there. But the less FF we use, the more flexibility we have, and the better we can distribute chip area to what a given game actually needs.
To get a 'good' RT solution we would need a totally unrealistic leap in memory technology, so my rating is not meant as criticism.
So how many rays/sec do you expect? 5 Giga rays/sec (RTX 2060)? 7 Giga rays/sec (RTX 2070)?
If I have to answer: 4-6. No idea.
The problem is that the rays/sec number is scene dependent, and thus pointless, because there is no standard scene and settings people use to measure it.
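Just to put those Giga-ray figures in perspective, here's a rough budget with made-up but plausible settings (napkin math, not a measurement):

```cpp
#include <cstdio>

int main()
{
    // Assumed settings for a rough budget, not a benchmark.
    const double width = 1920.0, height = 1080.0;
    const double raysPerPixel = 4.0;   // e.g. 1 primary + shadow/reflection/AO rays
    const double fps = 60.0;

    const double raysPerFrame  = width * height * raysPerPixel;
    const double raysPerSecond = raysPerFrame * fps;

    printf("%.1f Mrays/frame, %.2f Grays/s required\n",
           raysPerFrame / 1e6, raysPerSecond / 1e9);
    // ~8.3 Mrays/frame, ~0.50 Grays/s with these settings.
    return 0;
}
```

Whether a GPU rated at '5 Grays/s' actually sustains that depends entirely on how coherent and how long the rays in the real scene are, which is why the single number tells us so little.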
It will also be impossible to compare with RTX, assuming traversal shaders become a thing and the feature sets end up too different.