I am not an audio expert, but I think we lack Atmos-specific information here. From my (admittedly vague) reading, the key point is that Atmos is not a codec; it's 3D audio object metadata. Dolby Digital Plus and Dolby TrueHD are the codecs, the actual audio streams, and yes, you could argue those are close to LPCM. However, even those contain basic metadata about intent that the final audio device should be aware of. Sending LPCM removes that.
Atmos is the object metadata on top of the 7.1 (or whatever) actual channel data.
LPCM is not the same: the mix is already set, fire and forget. In an ideal setup that's probably OK, but the other formats are designed to get as close to the author's intent as possible regardless of the playback setup.
https://www.dolby.com/us/en/technologies/dolby-metadata.html
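For what it's worth, here's a toy sketch of that distinction in Python; the structures and field names are purely illustrative, not Dolby's actual bitstream format. A channel bed is baked PCM per speaker, while an object is a mono stream plus position metadata that a downstream renderer turns into speaker feeds at playback time.

```python
# Toy illustration of channel-based vs. object-based audio.
# Not Dolby's actual format; just the shape of the idea.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ChannelBedMix:
    """Channel-based audio: one fixed PCM stream per speaker.
    All panning decisions are already baked in by the mixer."""
    layout: tuple        # e.g. ("L", "R", "C", "Ls", "Rs")
    pcm: np.ndarray      # shape: (num_channels, num_samples)

@dataclass
class AudioObject:
    """Object-based audio: a mono stream plus time-varying position
    metadata. The renderer (AVR, soundbar, console...) decides the
    per-speaker gains at playback time, for the listener's actual layout."""
    mono_pcm: np.ndarray                            # shape: (num_samples,)
    positions: list = field(default_factory=list)   # (sample_idx, x, y, z) keyframes

@dataclass
class ObjectBasedProgram:
    """Roughly how an Atmos-style program is organized: a bed plus objects."""
    bed: ChannelBedMix
    objects: list        # list[AudioObject]

# Example: a 5-channel bed plus one object flying overhead.
bed = ChannelBedMix(("L", "R", "C", "Ls", "Rs"), np.zeros((5, 48000)))
helicopter = AudioObject(np.zeros(48000),
                         positions=[(0, -1.0, 1.0, 0.2), (48000, 1.0, 0.0, 0.9)])
program = ObjectBasedProgram(bed, [helicopter])
```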
Yes, Atmos uses 3D object metadata. It works as if it were an audio game engine, where the distribution of the objects across the speaker channels is processed on the AV receiver (and that's probably where the 32-voice limit comes in, since that might be how much the DSPs in current Atmos receivers can withstand).
But if you have an audio game engine running in your game, there's no need to process that on the receiver: you can process it on the console.
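As a rough picture of what "process it on the console" means in practice, here's a minimal Python sketch (the speaker angles, function names, and layout are all made up for illustration; this is generic pairwise equal-power panning, not Sony's Tempest engine): compute per-speaker gains from an object's azimuth and mix the mono source straight into the multichannel LPCM buffer that would go out over HDMI.

```python
# Minimal sketch of a game engine doing the object rendering itself:
# pairwise equal-power panning over a horizontal speaker ring, mixed
# straight into the multichannel LPCM buffer the console would output.
# Speaker angles and names are illustrative, not any real console API.
import math
import numpy as np

# Ear-level 5.0-style ring (azimuth in degrees, 0 = front center).
SPEAKERS = {"C": 0.0, "L": -30.0, "R": 30.0, "Ls": -110.0, "Rs": 110.0}

def pan_gains(azimuth_deg):
    """Equal-power pan between the two speakers flanking the source azimuth."""
    names = sorted(SPEAKERS, key=SPEAKERS.get)         # ordered around the ring
    gains = {n: 0.0 for n in names}
    for i in range(len(names)):
        a0 = SPEAKERS[names[i]]
        a1 = SPEAKERS[names[(i + 1) % len(names)]]
        span = (a1 - a0) % 360.0                       # arc between the pair
        offset = (azimuth_deg - a0) % 360.0
        if offset <= span:                             # source sits in this arc
            t = offset / span
            gains[names[i]] = math.cos(t * math.pi / 2)
            gains[names[(i + 1) % len(names)]] = math.sin(t * math.pi / 2)
            return gains
    return gains

def mix_object(lpcm, channel_order, mono, azimuth_deg):
    """Add one mono object into a (channels, samples) LPCM buffer."""
    gains = pan_gains(azimuth_deg)
    for ch, name in enumerate(channel_order):
        if gains.get(name, 0.0):
            lpcm[ch, :len(mono)] += gains[name] * mono

# A footstep to the listener's rear left, mixed into a 1-second buffer.
order = ("C", "L", "R", "Ls", "Rs")
buffer = np.zeros((5, 48000))
footstep = np.random.default_rng(0).standard_normal(4800) * 0.1
mix_object(buffer, order, footstep, azimuth_deg=-120.0)
```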
That's what has already been happening on the PC for decades. The Creative EMU10K1 sound cards from 1998 would already distribute positional object data (i.e. the output of a game's directional audio engine), and in games like Unreal or Thief 2 they would spread it across 2 front + 2 rear speakers, using the popular FourPointSurround sets of that time.
Then, the next year, the Live! 5.1 was doing that on analog 5.1 sets, and a couple of years later the Audigy 2 upped that to 7.1. In the meantime, Nvidia's SoundStorm (2001, on the nForce chipsets) did the same while encoding the result to Dolby Digital, followed by other sound card manufacturers using Realtek and C-Media codecs (by then Dolby was licensing the encoding algorithm as Dolby Digital Live).
In the console space, the PlayStation 2 did multi-speaker surround in a few games through Dolby Digital 5.1, then the first Xbox did it in all its games (using Nvidia's SoundStorm), and even the GameCube had directional sound for surround speakers in its games, via Pro Logic II over its stereo analog outputs.
The only difference, AFAIK, is that Atmos uses height information, whereas most positional audio engines in video games use only the horizontal plane.
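Conceptually, height is just one more panning axis. A toy way to see it (a simple dual-ring crossfade of my own devising, not the actual Atmos renderer): split the object's energy between the ear-level ring and a height ring based on elevation, then pan by azimuth within each ring exactly as in the flat case.

```python
# Toy "dual ring" take on height (my own simplification, not the real
# Atmos renderer): crossfade the object's energy between the ear-level
# ring and the height ring by elevation, then pan by azimuth within
# each ring exactly as in the horizontal-only case above.
import math

def elevation_split(elevation_deg):
    """Equal-power split between ear-level and height rings, 0..90 deg."""
    e = max(0.0, min(90.0, elevation_deg)) / 90.0
    return math.cos(e * math.pi / 2), math.sin(e * math.pi / 2)

print(elevation_split(0.0))   # (1.0, 0.0)      -> purely ear level
print(elevation_split(45.0))  # (~0.71, ~0.71)  -> halfway up
print(elevation_split(90.0))  # (~0.0, 1.0)     -> straight overhead
```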
So, to summarize: in the end, it doesn't really matter whether each speaker's output for a video game is processed in the A/V receiver (using Dolby Atmos or DTS:X) or in the PS5 (outputting LPCM). The room correction features of an AV receiver will work whether you decode Atmos on the receiver or just forward an LPCM stream (they'll even work if you use analog inputs for each speaker, which is certainly an option for many PC setups; this is a question I personally put to a number of reps from, IIRC, Pioneer and Onkyo a while ago).
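The reason the correction is indifferent to the input: by the time the AVR applies it, every source (a decoded Atmos stream, plain LPCM, or even per-speaker analog inputs) is just N per-speaker sample streams, so the distance/level half of room correction reduces to a per-channel delay plus trim. A minimal sketch, with made-up measurement values:

```python
# Sketch of the distance/level half of room correction, applied to plain
# per-speaker streams; the measured values below are made up. The same
# code path works whether those streams came from an Atmos decode, an
# LPCM input, or per-channel analog inputs.
import numpy as np

SAMPLE_RATE = 48000
SPEED_OF_SOUND = 343.0  # m/s

def correct(channels, distances_m, trims_db):
    """Delay-align and level-match (num_speakers, num_samples) streams."""
    # Delay each closer speaker so its sound arrives with the farthest one.
    extra_m = max(distances_m) - np.asarray(distances_m)
    delays = np.round(extra_m / SPEED_OF_SOUND * SAMPLE_RATE).astype(int)
    out = np.zeros_like(channels)
    for ch, (d, trim) in enumerate(zip(delays, trims_db)):
        gain = 10.0 ** (trim / 20.0)                  # dB trim -> linear gain
        out[ch, d:] = gain * channels[ch, :channels.shape[1] - d]
    return out

# Three speakers at different distances, the third one running 2 dB hot.
pcm = np.random.default_rng(1).standard_normal((3, SAMPLE_RATE))
aligned = correct(pcm, distances_m=[2.0, 3.1, 2.5], trims_db=[0.0, 0.0, -2.0])
```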
In Sony's case, the decision to go with LPCM (instead of encoding voice coordinates into an Atmos stream) seems to have been a conscious one, not only to save on licensing costs and on the processing time needed to encode and compress the audio stream, but also to overcome limitations like the 32 Atmos voices that most receivers can process.
I don't think Sony would invest all this money in developing Tempest, an updated audio engine, new HRTF technologies, etc., and then decide to skip Atmos just to save on royalties if they had anything to gain from adopting it.
On some newer and higher-end Yamaha AVRs, YPAO has angle measurement and calibration, but we shouldn't turn this into a hi-fi thread.
Are you sure? If you're talking about YPAO RSC, it doesn't measure angle, and that's the tech they're using in their top-of-the-line receivers (at least the ones I can see on the website right now).
Regardless, I think AVRs using a small microphone array to detect and correct speaker angle placement are bound to happen. I just don't know of any consumer product that does it yet.