The big issue that bothers most people raising these questions of RDNA 2.0 vs 1.9 (or maybe even RDNA 1.9.2.85.3 for that matter) is: does it support Mesh/Geometry Shaders? Does it support Sampler Feedback? Does it support VRS?
Best case scenario: Yes, 100% to all of the above.
Worst case scenario: Pretty much yes, 90% to all of the above.
Let me explain.
There is no way the PS5 does not have something akin to mesh shaders. AMD has been advertising this "unlocking" of the geometry pipeline for half a decade already. It was one of their main game-changing features for their (then) future architectures. Back then it was advertised as the Next Gen Geometry pipeline.
Now, does the implementation in the PS5 work and perform exactly the same way as it does in RDNA 2.0 on PC under DX12 or Vulkan? Best case scenario: yes, and then some. Worst case scenario: yes for 90% of use cases, with slight workarounds for the rest. That is it, that is the worst it gets, and I state this completely pulled out of my ass, but trust me on that one.
Sampler Feedback & VRS: those are not as much cornerstones of AMD's vision for GPU architecture as geometry was, so I think they have a more solid chance of actually not making it into the PS5. Yet it really matters very little. The things they achieve can be done (and with good performance) through other approaches.
The PS4 Pro has been using checkerboarding and reconstruction extensively, which addresses the same things as VRS. I really think VRS is far more useful in DX12 scenarios, where you can't build algorithms optimized for the exact particularities of one GPU. It is very likely that hand-crafted compute shaders and rasterization tricks targeting the PS5 can perform better and achieve better results than VRS ever will on PC.
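To make the checkerboarding point concrete, here is a deliberately toy, CPU-side sketch of the reconstruction idea: each frame only half the pixels are shaded in a checkerboard pattern, and the missing pixels are filled from their shaded neighbours blended with last frame's result. Real console implementations run as compute shaders and use motion vectors, depth and object IDs; every name here is made up for illustration.

```cpp
#include <vector>

struct Color { float r, g, b; };

static Color average(const Color& a, const Color& b) {
    return { (a.r + b.r) * 0.5f, (a.g + b.g) * 0.5f, (a.b + b.b) * 0.5f };
}

// frameParity flips every frame so the shaded half of the checkerboard alternates.
void reconstructCheckerboard(std::vector<Color>& output,
                             const std::vector<Color>& currentHalf,  // pixels shaded this frame
                             const std::vector<Color>& previousFull, // last reconstructed frame
                             int width, int height, int frameParity)
{
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int idx = y * width + x;
            bool shadedThisFrame = ((x + y + frameParity) & 1) == 0;
            if (shadedThisFrame) {
                output[idx] = currentHalf[idx];            // fresh sample, keep as-is
            } else {
                // Missing pixel: its horizontal neighbours have opposite parity,
                // so they were shaded this frame. Blend them with the previous
                // frame's value at the same position.
                int lx = (x > 0) ? x - 1 : x + 1;
                int rx = (x < width - 1) ? x + 1 : x - 1;
                Color spatial = average(currentHalf[y * width + lx],
                                        currentHalf[y * width + rx]);
                output[idx] = average(spatial, previousFull[idx]);
            }
        }
    }
}
```

None of this needs hardware VRS support; it is exactly the kind of hand-tuned trick a console-specific renderer can afford.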
Sampler Feedback also does things that can, and in fact have been, addressed in software before. See: RAGE. God, I keep mentioning this title, but honestly, anybody who wants to discuss and speculate about virtual texturing HAS TO re-read their papers and re-watch their presentations. How did the RAGE engine know which texture pages to load into its cache, and at which mips? That is an interesting problem which saw a lot of experimentation from Carmack, which he describes, by the way. His ultimate solution was quite simple: RAGE renders a "proxy camera" pass encoding UVs only and discovers the needed pages from that. There are thousands of ways to optimize that, by the way: render at lower res, render only a section of the frustum each frame and spread the coverage temporally across multiple frames. One can also improve results by giving that proxy a wider FOV, or by randomly rendering things behind the camera every so often to cache the full 360.

Here, Sampler Feedback really is a WAY more elegant way to solve the problem, and a great loss if not present, for that reason. But there is still a viable workaround, if not more than one (sketched below). The workaround is also clearly less performant, but that is probably a drop in the ocean for modern engines.
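For the curious, here is a hedged sketch of the feedback-readback half of that workaround: assume the low-res proxy pass has already written, per pixel, a packed value saying which virtual texture page and mip it would have sampled. The CPU then reads that buffer back and dedupes it into a set of page requests for the streamer. The packing scheme and all names are illustrative, not RAGE's actual format.

```cpp
#include <cstdint>
#include <unordered_set>
#include <vector>

struct PageRequest {
    uint16_t pageX, pageY;
    uint8_t  mip;
    bool operator==(const PageRequest& o) const {
        return pageX == o.pageX && pageY == o.pageY && mip == o.mip;
    }
};

struct PageRequestHash {
    size_t operator()(const PageRequest& p) const {
        return (size_t(p.pageX) << 24) ^ (size_t(p.pageY) << 8) ^ p.mip;
    }
};

// feedback: one packed 32-bit value per proxy pixel, assumed layout pageX:12 | pageY:12 | mip:8.
std::unordered_set<PageRequest, PageRequestHash>
collectPageRequests(const std::vector<uint32_t>& feedback)
{
    std::unordered_set<PageRequest, PageRequestHash> requests;
    for (uint32_t packed : feedback) {
        PageRequest req;
        req.pageX = (packed >> 20) & 0xFFF;
        req.pageY = (packed >> 8)  & 0xFFF;
        req.mip   =  packed        & 0xFF;
        requests.insert(req);   // dedupe: each page/mip is requested at most once per frame
    }
    return requests;            // hand off to the texture streamer to load any missing pages
}
```

The whole cost is one extra low-res render pass plus a readback and a loop like this, which is why hardware Sampler Feedback is nicer but hardly make-or-break.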