Alright, I am going to be as frank as possible here, since no one else is going to be.
I would say only Nvidia, Intel and AMD stand to gain, as continually worsening performance will drive people to upgrade in an attempt to brute-force past the issues.
History tells a slightly different version of the story here; even Khronos has alluded to it.
Low-level APIs were created on the promise of increasing draw calls and reducing driver complexity to eliminate CPU overhead. DX11 was said to be the limiting factor, something to be dumped quickly so everyone could move on. AMD in particular championed this direction (along with a few developers). AMD was having a very hard time properly optimizing for DX11 and OpenGL, and they still do, by the way: they only managed to properly optimize for DX11 last year, when they released several drivers that increased fps in old DX11/OpenGL games many years after those games launched. So the problem still exists on their end to this day.
So AMD created Mantle and managed to amass some support from several developers. Mantle had the same problems we have now, just to a lesser degree. Nevertheless, Microsoft didn't like AMD's move to a new exclusive API, so they quickly assembled DX12, after which AMD ceased their Mantle efforts and scrapped plans to implement it in future games.
So all was good in the lands of AMD. DX12 was demoed with Ashes of the Singularity, a game designed around the API. However, adoption of DX12 was slow as hell, and with each game new problems were introduced: stuttering increased, CPU overhead stayed the same or got worse, performance ended up slower than DX11 rather than faster, etc. Soon after, several developers expressed their disgruntlement with DX12, implying you should only move to DX12 if your needs actually fit that API. Even the developers of Ashes of the Singularity moved back to DX11 in their next game (Star Control), saying it didn't make sense to ship it with DX12, and they advised people to move to DX12 only for features, not performance.
So while the move to DX12 was highly beneficial to AMD, who still can't develop proper DX11 drivers quickly enough, the rest of the industry suffered. NVIDIA (who has the largest market share) suffered more, and developers suffered tremendously (as stated by Khronos and several others); even the developers who initially supported DX12 quickly had their enthusiasm die down! We gamers suffered as well: the experience of playing games has been hectic, unpredictable and buggy for almost a decade. Even engine designers suffered, as Unity, Frostbite and Unreal are all moving away from DX12's core concept.
And this is really the crux of the problem: AMD insisted on changing the core API swiftly, without seeking a wider industry consensus. Microsoft quickly gave in and made that change a reality without proper consideration either, then Khronos followed their lead as well. Now we have the current situation: people realizing that this core change benefited very few, while the rest of the industry gained nothing but suffering and a vastly worse user experience.
It's 2023, and the problems DX12 set out to solve have become worse: CPU overhead is vastly increased with no visual gains, and our overpowered CPUs struggle to render last-gen graphics! VRAM consumption is blown out of proportion for last-gen graphics! Stuttering has increased tenfold for practically the same effects, and performance is worse on both AMD and NVIDIA! AMD didn't gain much from this either: their performance and features lagged behind NVIDIA's, their market share dwindled to its absolute lowest point, and they are a generation behind NVIDIA in ray tracing, machine learning and upscaling. Meanwhile, we the users got none of the promised explosion of draw calls (and thus more stuff rendered on screen), and none of the promised proper utilization of our hardware. In fact, it's really the opposite.
So the question is: did we waste it all so that one IHV could make their driver life easier?
Khronos wants to reach more markets and more developers, and that means going back to the way things were, when stuff was just easier and we got more done through more powerful hardware instead of the API getting in the way. This will no doubt make things harder for AMD again, but to be honest it won't change much for them; their problems were never just bad DX11 drivers. They should focus on making better hardware, not on crippling APIs so that their hardware looks a little bit better.
I say it's time for things to return to the way they were. Those who want to do very advanced stuff can stick to the old DX12 paradigm, provided they handle its problems (increased VRAM usage, stutter, increased CPU overhead) better. They don't get to pick and choose here: if they want to go DX12, they should give it their best shot, cover all the corner cases (the sketch below gives a taste of what that means) and make sure their code is actually faster and more performant. In short, do a proper job, not a half-assed one with all the usual suspect problems present. If they can, so be it; if they can't, they should stick to the (new and old) DX11 paradigm, so that all of us can have a better experience.
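To make "cover all the corner cases" concrete, here is a minimal sketch (illustrative D3D12 code, not anyone's actual engine, error handling omitted) of just one of the many things the driver used to handle for you under DX11 but which the application must now do itself under DX12: explicit CPU/GPU synchronization through a fence. Get this wrong, or do it as conservatively as shown here, and you get exactly the stutter and wasted CPU time described above.

#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    // Create a device on the default adapter.
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // Create a direct command queue to submit work on.
    D3D12_COMMAND_QUEUE_DESC queueDesc = {};
    queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

    // Under DX11 the driver tracked GPU progress implicitly. Under DX12
    // the app owns a fence and must decide when, and for how long, to wait.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    HANDLE fenceEvent = CreateEvent(nullptr, FALSE, FALSE, nullptr);

    // ... record and execute command lists here ...

    // Ask the GPU to signal the fence when it reaches this point, then
    // block the CPU until it does. A full wait like this every frame is
    // the naive, conservative approach, and it is one classic source of
    // the stutter and lost parallelism complained about above.
    const UINT64 fenceValue = 1;
    queue->Signal(fence.Get(), fenceValue);
    if (fence->GetCompletedValue() < fenceValue)
    {
        fence->SetEventOnCompletion(fenceValue, fenceEvent);
        WaitForSingleObject(fenceEvent, INFINITE);
    }

    CloseHandle(fenceEvent);
    return 0;
}

A proper engine would instead pipeline two or three frames in flight with per-frame fence values, plus get resource barriers, memory residency and descriptor management right on top of that; every one of those is a policy decision the DX11 driver used to make for you, and every one of them is a chance to ship the half-assed result we keep getting.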