Looking at the UE4 RT video, I think it exposes the performance limitations well, but also the software limitations.
UE is a very traditional engine, and although there has been remarkable progress in TAA, PBS, tools and ease of use, I do not expect wonders with performance and raytracing. We can surely expect a bit more from Frostbite, and even much more from 4A, because those guys do not need to maintain compatibility with thousands of users and their workflows.
On one hand, UE / Unity make adoption very easy nowadays; on the other hand, those engines likely make it hard to fully utilize the potential of such new hardware, even if there were a big market for it. I expect more from custom engines, and I hope they will continue to dominate AAA game development.
Looking at the UE4 GI implementation, it is too slow to be used in games, but it is also the only correct one using RTX we have seen so far in a game engine. They say it needs faster hardware and is a thing for the future. The way they implement it, this is surely true.
Without GI there is zero reason to expect photorealism. Soft shadows and reflections do not help with this, even if they add a wow here and an ohhh there. Games will just keep looking like games.
So the question is still: Can we solve the GI problem, now, using RTX?
Sure! But can we do it fast enough?
Morgan McGuire has announced DDGI, but I don't think it has been shown broadly yet:
https://morgan3d.github.io/articles/2019-04-01-ddgi/
More pictures here, and he says the probes can be calculated in 1ms:
https://twitter.com/casualeffects (I don't know anything about lag or scene size limits, but that seems like great performance!)
Does it look real? Nope. Could it be improved if we spent 5 or 10 ms? Would it look realistic then? I don't know.
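For a sense of why the ~1 ms probe update matters: once the probes are filled (that's where the rays are spent), shading only needs a filtered lookup into the probe grid, which costs almost nothing per pixel. Here is a minimal C++ sketch of that lookup, assuming a uniform grid; all names here are mine, and real DDGI additionally stores octahedral irradiance plus depth per probe and weights the blend by surface normal and visibility to avoid light leaks:

```cpp
#include <algorithm>
#include <array>

struct Vec3 { float x = 0, y = 0, z = 0; };

static Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// Hypothetical uniform probe grid: one pre-integrated RGB irradiance
// value per probe (real probes store a full directional distribution).
struct ProbeGrid {
    static constexpr int N = 8;               // probes per axis
    Vec3  origin;                             // world position of probe (0,0,0)
    float spacing = 1.0f;                     // world distance between probes
    std::array<Vec3, N * N * N> irradiance{};

    const Vec3& at(int x, int y, int z) const {
        return irradiance[(z * N + y) * N + x];
    }

    // Trilinear blend of the 8 probes surrounding a shaded point. This is
    // the cheap per-pixel part; the expensive part (tracing rays to refresh
    // the probes) is amortized across frames.
    Vec3 sample(const Vec3& p) const {
        float fx = std::clamp((p.x - origin.x) / spacing, 0.0f, float(N - 1));
        float fy = std::clamp((p.y - origin.y) / spacing, 0.0f, float(N - 1));
        float fz = std::clamp((p.z - origin.z) / spacing, 0.0f, float(N - 1));
        int x0 = std::min(int(fx), N - 2);
        int y0 = std::min(int(fy), N - 2);
        int z0 = std::min(int(fz), N - 2);
        float tx = fx - x0, ty = fy - y0, tz = fz - z0;

        Vec3 c00 = lerp(at(x0, y0,     z0    ), at(x0 + 1, y0,     z0    ), tx);
        Vec3 c10 = lerp(at(x0, y0 + 1, z0    ), at(x0 + 1, y0 + 1, z0    ), tx);
        Vec3 c01 = lerp(at(x0, y0,     z0 + 1), at(x0 + 1, y0,     z0 + 1), tx);
        Vec3 c11 = lerp(at(x0, y0 + 1, z0 + 1), at(x0 + 1, y0 + 1, z0 + 1), tx);
        return lerp(lerp(c00, c10, ty), lerp(c01, c11, ty), tz);
    }
};
```

The nice property is that this lookup scales with screen resolution, not scene complexity, so the open question is really how many probes you can refresh per frame and at what quality.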
But after one year of RTX, and not really any news about it at GDC, we likely have no reason to expect wonders.
RT is the future, as it always has been. That's fine. But the assumption that it would bring realism may be much more enthusiasm and wishful thinking than ground truth. I think many people, including myself, are currently in the process of realizing this.
I'm still optimistic that realism is doable right now, but I am (or was) willing to sacrifice a lot of detail for it. RT has raised the bar regarding those details, so my definition of realism may no longer be enough to fulfill people's expectations.
Also, every year at GDC and elsewhere we see great demos showing image quality that is not practical to achieve in actual games. This is how the industry shoots itself in the foot: after a short wow, it can only cause disappointment in the long run.
It is very bad marketing. The same applies to the suggestion that 'rasterization is wrong and RT is right', which seems to come across even if unintended. Both are useful tools, but neither alone is the solution for realistic realtime graphics. 'It just works' is just wrong.
But we'll get there...