The amount of optimization that devs put into the console versions of their games is usually an order of magnitude higher than what they do for PC.
I don't see why this would be any different for new consoles, early generation or not.
It's already clear that devs are spending a lot of time optimizing RT performance on the new consoles btw, both from console-to-PC comparisons and from the changes some games get with patches.
RT, like any new technology, is getting a lot of attention and interest from devs at the moment.
You say that optimization on consoles is usually an order of magnitude higher than what devs do for PC, yet early-gen games have never been remembered as showing what the consoles were capable of.
Rather, early-gen titles have often been picked on for visible compromises and for using engines designed with the previous generation in mind.
Why are today's consoles' early-gen titles suddenly different?
And since you say studios are still getting to grips with using DXR the right way even on PC, and are still patching the console versions too, why do you assume the consoles are using well-optimized low-level implementations?
There are a couple of titles requiring FL12_0 already, I think, but why does that matter?
It's just that you seem to assume the consoles are close to peak optimization from day one, when no game yet is known to require DX12 Ultimate, and even Nvidia GPUs are arguably hampered by games still being made to run on FL12_0.
Even the go-to cases for showcasing PC versions at their best have been limited by having to take older hardware into account.
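For what it's worth, "requiring DX12 Ultimate" would be trivial to gate on if anyone actually did it. A rough sketch, using the standard D3D12 CheckFeatureSupport queries (the function name and the idea of gating a whole render path on it are mine, not from any shipped game):

```cpp
#include <d3d12.h>

// Sketch of the capability checks a PC renderer could make instead of assuming a
// DX12 Ultimate (FL12_2-class) GPU. DXR 1.1, mesh shaders and VRS tier 2 are the
// headline features of that set; how an engine would use the answer is up to it.
bool IsDX12UltimateClass(ID3D12Device* device)
{
    // DXR tier 1.1 (inline raytracing, etc.)
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5))))
        return false;

    // Variable rate shading tier 2
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opts6 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6, &opts6, sizeof(opts6))))
        return false;

    // Mesh shaders
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 opts7 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7, &opts7, sizeof(opts7))))
        return false;

    return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_1 &&
           opts6.VariableShadingRateTier >= D3D12_VARIABLE_SHADING_RATE_TIER_2 &&
           opts7.MeshShaderTier >= D3D12_MESH_SHADER_TIER_1;
}
```

As long as games also have to run on FL12_0 parts, the interesting paths behind a check like that stay optional extras rather than the baseline the renderer is built around.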
Again, why do you think it's because "games simply started limiting the tessellation to what was performant on the consoles" and not because AMD h/w got better at tessellation to the point where NV no longer had a performance advantage from it? That newer AMD h/w was also used in the mid-gen console upgrades, which in turn led to the "pro" console versions of games running with higher amounts and levels of tessellation than the "base" console versions, narrowing the visual difference between them and PC too.
I'm not talking about AMD releasing new hardware that was better at tessellation. I'm talking about early-gen games that had lower tessellation settings on both consoles and AMD GPUs than on Nvidia GPUs, and about how tessellation settings stopped being a factor in comparisons for later games running on those same consoles and the same AMD and Nvidia GPUs.
But if I misunderstood your argument and you mean improvements on the game-engine and/or driver side: if it took two or three years of optimizing tessellation for GCN to remove Nvidia's visible advantage, why do you assume optimizing RT on RDNA2 would be done in just a few months?
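Just to be concrete about what I mean by "lower tessellation settings": the pattern looked like a simple per-target cap, something along these lines (the names and numbers are made up for illustration, not taken from any actual engine; 64 is the D3D11/D3D12 hardware maximum for a tessellation factor):

```cpp
// Illustrative per-platform cap on tessellation, of the kind early-gen multiplatform
// titles appeared to use. Values are hypothetical.
enum class TargetHw { ConsoleBase, ConsolePro, PcAmdGcn, PcNvidia };

float MaxTessFactor(TargetHw hw)
{
    switch (hw)
    {
    case TargetHw::ConsoleBase: return 8.0f;   // early-gen console setting
    case TargetHw::ConsolePro:  return 16.0f;  // mid-gen refresh gets a higher cap
    case TargetHw::PcAmdGcn:    return 16.0f;  // matched to what GCN handled well at the time
    case TargetHw::PcNvidia:    return 64.0f;  // API maximum
    }
    return 8.0f;
}

// The hull shader's patch-constant function would then clamp the artist-authored value:
// finalFactor = min(authoredFactor, MaxTessFactor(currentTarget));
```

Whether that cap gets lifted because the hardware improved or because the optimization work finally landed, the point stands: it didn't happen within the consoles' first year.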