The 'slow down' part, perhaps, but QWERTY's layout was designed to suit a mechanical typewriter, to stop the type bars jamming, and the limitations it was designed around don't apply to computer input. Yet even if a better keyboard is possible, one with better ergonomics, less fatigue, fewer injuries, and/or greater speed, it still won't get used.
This is the article cited from Reason Magazine (the little brother to the other article written by the same authors) - https://reason.com/1996/06/01/typing-errors/ - and it isn't really talking about efficacy. It's about how QWERTY became a standard not just by being first but for other reasons, and about the fallacy of people leaning on an example without properly confirming it, the old urban-myth problem.
As that article states, the data suggesting Dvorak isn't advantageous tends to come from comparing people with existing QWERTY experience converting over, which is not the same as comparing someone who grew up with only Dvorak against someone of identical natural skill and keyboard use (developing at the same rate) who only knows QWERTY. In short, the comparisons are much like trying to judge the value of Larrabee by running the existing games of the time, rather than after 20 years of graphics evolution designed around Larrabee.
I can quite accept that people manage to overcome the limitations imposed by QWERTY, but QWERTY wasn't designed as the ideal computer input. It's just the best option because everyone was already used to it. Even if QWERTY isn't disadvantageous to typing, we are still 'locked in' to it. We'll be locked in regardless of how optimal it is for modern typing workloads, and there's no point trying to research the true ideal KB layout (Dvorak isn't necessarily that, so comparing QWERTY to Dvorak doesn't prove QWERTY isn't imposing limits).

It's the same as everyone driving on the left in the UK and the right in the US, or mains electricity being 120V in the US versus 220V in Europe. We're still wrestling with IPv4, which wasn't designed to be future-proof but just happened to be the starting point for addressing internet devices; everyone ran with it and built a network around it, then invented complicated fixes like NAT to overcome its inherent limitations. We end up with a lot of legacy baggage limiting future options where, even if we recognise a change would be beneficial, the cost of changing is prohibitive.
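To make the NAT point concrete, here's a minimal sketch of the workaround idea: many private IPv4 hosts share one public address by having a middlebox rewrite (ip, port) pairs and remember the mapping. All class and method names here are illustrative, not any real router's API, and this ignores everything a real NAT handles (timeouts, protocols, checksum rewriting).

```python
class Nat:
    """Toy port-translation table: many private hosts, one public IPv4 address."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 50000  # first public port to hand out (arbitrary choice)
        self.out = {}           # (private_ip, private_port) -> public_port
        self.back = {}          # public_port -> (private_ip, private_port)

    def outbound(self, private_ip, private_port):
        """Rewrite a private source address to the shared public one."""
        key = (private_ip, private_port)
        if key not in self.out:
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return (self.public_ip, self.out[key])

    def inbound(self, public_port):
        """Route a reply arriving on a public port back to the private host."""
        return self.back.get(public_port)  # None if nothing ever went out


nat = Nat("203.0.113.7")
src = nat.outbound("192.168.1.10", 4321)
print(src)                  # the rewritten public (ip, port)
print(nat.inbound(src[1]))  # reply maps back to ("192.168.1.10", 4321)
print(nat.inbound(60000))   # unsolicited inbound traffic: no mapping, dropped
```

That last line is the inherent limitation in miniature: a host behind NAT can't receive unsolicited connections, which is why yet more fixes (port forwarding, STUN, hole punching) got layered on top.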
That's where consoles used to have an advantage: a whole new machine with new software allowed a whole new paradigm, though of course business concerns limited how much investment went into developing and exploring new ideas that conflicted with the larger common patterns.
In short, it really is impossible to compare alternative techs fairly when one is mainstream and the other experimental. A huge amount of a system lies not just in its immediate qualities but in the world and human thinking shaped around it. As hardware develops RT solutions, software will develop around that hardware, and an alternative paradigm that'd yield a net better result (from different tradeoffs) can't prove itself or be adopted. We're left with fringe cases like Dreams, where MM had to create their own entire tool-chain, an infinitesimally small investigation into the possibilities of non-triangle rendering set against decades of 3D triangle-rasterisation thinking.