Your post is the best resource on AMD frame interpolation anywhere on the internet so far.
Looking at pharma's post just above, it seems to work well, but there isn't much motion shown, so that's expected.
Yours shows plenty of artifacts in a worst-case scenario to analyze.
One artifact especially catches my interest: we can see a hard cut around the character where parts of the sky have been reconstructed or guessed.
The artifact looks exactly like the standard methods used to achieve screen-space motion blur or DOF: basically copy-pasting nearby sections of the background to deal with missing information. It's quite interesting that those methods are good enough in practice, although the artifacts in a still image are heavy.
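To illustrate what I mean by "copy-pasting nearby sections", here is a rough sketch of such a hole-filling pass. All names and details are my own guesses for illustration, not what AMD actually does; it just assumes a binary disocclusion mask and that valid background lies a few pixels away along some search direction:

```cpp
#include <cstdint>
#include <vector>
#include <algorithm>

// Fill disoccluded pixels (mask == 1) by smearing in the nearest valid
// color found along a fixed search direction, e.g. opposite to the motion.
void fillDisocclusions(std::vector<uint8_t>& color,
                       const std::vector<uint8_t>& mask,
                       int width, int height,
                       int dirX, int dirY,
                       int maxSteps = 16)
{
    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            int idx = y * width + x;
            if (mask[idx] == 0)
                continue; // pixel already has valid interpolated data

            // Walk along the search direction until we hit a valid pixel,
            // then copy it into the hole.
            for (int s = 1; s <= maxSteps; ++s)
            {
                int sx = std::clamp(x + dirX * s, 0, width  - 1);
                int sy = std::clamp(y + dirY * s, 0, height - 1);
                int sidx = sy * width + sx;
                if (mask[sidx] == 0)
                {
                    color[idx] = color[sidx];
                    break;
                }
            }
        }
    }
}
```

In a still image that smearing is obvious, but under motion blur it usually passes.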
But obviously it does not work well in this extreme case of fast camera rotation.
My first impression is: 'Damn, it's not good enough. So we need those otherwise useless tensor cores indeed, just to make crutches work that we ideally should not even need.' : (
But I'm not yet willing to accept this. I think AMD could improve this simply by blurring the mask so the hard cut becomes smooth; then we would not notice the trickery so easily.
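Something like this feathering step is what I have in mind. Again just a sketch with made-up names, assuming the same binary mask as above; AMD's actual fix could look quite different:

```cpp
#include <cstdint>
#include <vector>
#include <algorithm>

// Box-blur a binary mask (0 or 1) into soft weights in [0,1],
// so the border of the reconstructed region fades out instead of cutting hard.
std::vector<float> featherMask(const std::vector<uint8_t>& mask,
                               int width, int height, int radius)
{
    std::vector<float> soft(mask.size(), 0.0f);
    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            float sum = 0.0f;
            int count = 0;
            for (int dy = -radius; dy <= radius; ++dy)
            {
                for (int dx = -radius; dx <= radius; ++dx)
                {
                    int sx = std::clamp(x + dx, 0, width  - 1);
                    int sy = std::clamp(y + dy, 0, height - 1);
                    sum += mask[sy * width + sx];
                    ++count;
                }
            }
            soft[y * width + x] = sum / count;
        }
    }
    return soft;
}

// Blend the guessed color with the surrounding interpolated color using the
// soft weight, so there is no visible hard border anymore.
inline float blend(float reconstructed, float interpolated, float weight)
{
    return weight * reconstructed + (1.0f - weight) * interpolated;
}
```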
Likely the same applies to other issues as well.
I hope they can improve it further and don't leave it in this seemingly early state.
In the long run, at some point I'll have to give up my resistance against ML for rendering games, I'm afraid. But no - not yet, please.