NextGen Audio: Tempest Engine, Project Acoustics, Windows Sonic, Dolby Atmos, DTS:X

Could be; at the time it was more advanced than what AMD had to offer.



Developers don't have to use anything. The 3D audio hardware in the consoles is most likely not going to see as much attention as ray tracing or graphics in general. SSDs are also likely to be a higher priority as well.

I doubt the audio processor in the Xbox is specific to Atmos. I don't see any reason why the 3D audio hardware in the consoles would not be used extensively. Game engines and game studios have audio teams. They're going to use the hardware that's given to them.
 
Microsoft also has the Windows Sonic for Headphones audio format, which they support on PC and console, but I don't know what devs would target outside of headphones.

Is it safe to assume these formats don't apply the HRTFs when not using headphones?
 
All I've really seen mentioned about Series X audio is this brief snip saying they're supporting Project Acoustics with hardware. We have zero information about the limits, but the focus for them appears to be environment modelling, while Sony is focusing more on very accurate positioning/panning, largely talking about HRTF.

 
Microsoft also has the Windows Sonic for Headphones audio format, which they support on PC and console, but I don't know what devs would target outside of headphones.

Is it safe to assume these formats don't apply the HRTFs when not using headphones?

Virtual surround from stereo speakers may do some kind of HRTF, but no, you wouldn't apply HRTF to a 5.1 or 7.1 mix as far as I know.

For headphones, I think developers can easily use a multitude of middleware or game-engine solutions that support HRTF. Unreal Engine and Unity should have solutions for all of these things, either through plugins to various middleware or just built into the engine. It depends on what the API for the audio processor on Xbox looks like. I'm assuming it's Windows Spatial like on PC.
 
Xbox developers don't even have to use Atmos. Most don't. I'm not sure what the advantages of Dolby Atmos are, to be honest, besides it being the latest standard for home theater. Being able to encode to Atmos is good if you have a Dolby Atmos home theater setup. Atmos for Headphones is nice, but I'm not sure that it's better than a number of other HRTF solutions that are available.
Generally if you switch it to Atmos or DTS:X, it'll use that encoding for the game. Without dedicated audio hardware to support the codecs, the improvement is minimal, but in some ways it's still a lot better. In Apex Legends and Call of Duty I'm able to use sound better than others. Sometimes other players hear things they shouldn't be able to, like hearing my footsteps when I'm all the way across the map; I've never run into that issue. It's very good at conveying which objects are closer to and further away from me.

I doubt the audio processor in the Xbox is specific to Atmos. I don't see any reason why the 3D audio hardware in the consoles would not be used extensively. Game engines and game studios have audio teams. They're going to use the hardware that's given to them.
There does seem to be some form of hardware sound chip that can support multiple codecs.

Microsoft also has the Windows Sonic for Headphones audio format, which they support on PC and console, but I don't know what devs would target outside of headphones.

Is it safe to assume these formats don't apply the HRTFs when not using headphones?
Correct. Formats will not apply HRTF without headphones, though you can still ask your device to output them; it just won't sound like HRTF without headphones.
 
All I've really seen mentioned about Series X audio is this brief snip saying they're supporting Project Acoustics with hardware. We have zero information about the limits, but the focus for them appears to be environment modelling, while Sony is focusing more on very accurate positioning/panning, largely talking about HRTF.

Seems to be that Sony is focusing its audio on VR, and MS on traditional non-VR gaming?
 
Seems to be that Sony is focusing its audio on VR, and MS on traditional non-VR gaming?

HRTF is important for any headphone audio, VR or not. It's more important for VR, but still matters if you're just playing a first- or third-person game on a TV with headphones.
 
HRTF is important for any headphone audio, VR or not. It's more important for VR, but still matters if you're just playing a first- or third-person game on a TV with headphones.

Yes, true. I guess all or most next-gen games will be using HRTF. The OG Xbox had it already, and even sound cards before it. About time we continue with it :)
 
Generally if you switch it to Atmos or DTS:X, it'll use that encoding for the game. Without dedicated audio hardware to support the codecs, the improvement is minimal, but in some ways it's still a lot better. In Apex Legends and Call of Duty I'm able to use sound better than others. Sometimes other players hear things they shouldn't be able to, like hearing my footsteps when I'm all the way across the map; I've never run into that issue. It's very good at conveying which objects are closer to and further away from me.

...

The Atmos for Headphones feature on Xbox can take a standard 5.1 or 7.1 non-Atmos mix and convert it to virtualized surround (HRTF) for headphones. That's not nearly as good as just doing HRTF audio natively with something like ambisonics, which is what games are going to be doing this upcoming gen. If the game outputs stereo audio, the Atmos for Headphones feature does nothing.
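To make the distinction concrete, here's a minimal sketch (Python/NumPy; the function name and the toy HRIR values are mine, not from any real SDK) of what "doing HRTF natively" boils down to: convolving the source signal with a left and right head-related impulse response chosen for the source's direction.

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono source with a left/right pair of head-related
    impulse responses (HRIRs) to get a binaural stereo signal."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])  # shape (2, len(mono)+len(hrir)-1)

# Toy data: a real renderer would pick HRIRs from a measured set
# based on the source's direction relative to the listener's head.
src = np.array([1.0, 0.5, 0.25])
hrir_l = np.array([1.0, 0.0])   # near ear: strong, early arrival
hrir_r = np.array([0.3, 0.3])   # far ear: quieter, smeared in time
out = render_binaural(src, hrir_l, hrir_r)
```

An ambisonics pipeline runs the same kind of convolutions, just on the spherical-harmonic channels of the sound field rather than per source, which is why its binaural decode cost stays fixed no matter how many emitters there are.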
 
Hey, looky here. Series X audio is definitely going to be a DSP.

https://docs.microsoft.com/en-us/gaming/acoustics/what-is-acoustics
https://www.microsoft.com/en-us/research/project/project-triton/

Project Acoustics' key innovation is to couple real sound-wave-based acoustic simulation with traditional sound design concepts. It translates simulation results into traditional audio DSP parameters for occlusion, portaling and reverb. The designer retains control over this translation process.

Basically the SDK voxelizes the scene and pre-computes a lot of the physics calculations up front, so the runtime impact is very small. The runtime output is then suitable for a DSP. It's just a question of how many convolutions the DSP can do, etc.
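As a toy illustration of that bake-then-lookup split (pure Python; the voxel size, parameter names, and values are all invented for the sketch, not taken from the SDK):

```python
import math

VOXEL_SIZE = 2.0  # metres per voxel; an assumed bake resolution

def voxel_key(pos):
    """Map a world-space position to the voxel it falls in."""
    return tuple(math.floor(c / VOXEL_SIZE) for c in pos)

# Pretend bake output for a tiny level: per-voxel acoustic parameters
# that the offline wave simulation would have produced.
baked = {
    (0, 0, 0): {"occlusion_db": 0.0,  "rt60_s": 0.4},
    (1, 0, 0): {"occlusion_db": -6.0, "rt60_s": 1.2},
}

def dsp_params(source_pos):
    """Runtime side: a cheap table lookup instead of a wave simulation.
    Positions outside the bake fall back to a heavily occluded default."""
    return baked.get(voxel_key(source_pos),
                     {"occlusion_db": -60.0, "rt60_s": 0.1})

params = dsp_params((2.5, 0.1, 0.0))  # falls in voxel (1, 0, 0)
```

All the expensive physics happens offline; the per-frame cost is just the lookup plus whatever DSP (reverb convolution etc.) consumes the parameters.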

I'm very curious how, or if, this solution could handle dynamic environments with moving walls, destruction, etc.




Edit:

Really comes down to what the DSP can do. How many convolutions, etc. Will they do HRTF on the CPU? What would hardware acceleration of the runtime look like? Etc. etc.
 
I'm very curious how, or if, this solution could handle dynamic environments with moving walls, destruction, etc.
Nope.

Can Project Acoustics handle dynamic geometry? Closing doors? Walls blown away?
No. The acoustic parameters are precomputed based on the static state of a game level. We suggest leaving door geometry out of acoustics, and then applying additional occlusion based on the state of destructible and movable game objects using established techniques.
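The "established techniques" they point to usually amount to something like a raycast against dynamic objects, with extra attenuation stacked on top of the baked result. A hypothetical 1D sketch (the door model and per-door attenuation value are made up for illustration):

```python
def dynamic_occlusion_db(source, listener, doors):
    """Extra occlusion for each closed door sitting between source and
    listener. 1D toy: positions are scalars along a corridor, and each
    door is a (position, is_open) pair."""
    lo, hi = sorted((source, listener))
    extra = 0.0
    for pos, is_open in doors:
        if not is_open and lo < pos < hi:
            extra += -12.0  # assumed attenuation per closed door
    return extra

doors = [(5.0, False), (8.0, True)]  # one closed door, one open
# Final occlusion = baked (static level) + dynamic (movable objects).
total = -3.0 + dynamic_occlusion_db(2.0, 10.0, doors)
```

A real implementation would raycast in 3D against the door's collision geometry, but the layering idea is the same: the bake never knows about the door, so the game applies its state on top.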
 
It looks like Project Acoustics outputs a compressed data set of directional impulse responses, so it can play back directional audio that can be convolved and fed into an HRTF. They may have some hardware decompression in the audio processor so the CPU doesn't have to do it. It just becomes a question of the DSP and how many convolution channels it has for reverb or HRTF. No closer to understanding that.
 
Project Acoustics looks really amazing. I'm actually curious whether it'll work on PlayStation; it works on Android and macOS (not iOS?). The one thing I don't like, and it's a fairly big issue, is that it's baked. I'm assuming it only works for ambient and environmental sounds, not dynamic objects. It also won't work in a game like Fortnite or Minecraft where you build the world around you, or in Battlefield or R6 Siege where you tear the world down.

I'm very curious about what Xbox will have in terms of audio processing power for dynamic objects within games, and for games that have dynamic environments.
 
Great to see both MS and Sony creating custom audio chips this generation. Improving the audio of both consoles while taking load off the CPU is great.
I can't see there being much of a difference between the 3D audio of the PS5 and the Atmos and Project Acoustics of the XSX. The XSX audio solution is a direct plug-in for both the Unreal and Unity engines, which should help devs get the most out of it when using those engines.
 
I was wrong. Project Acoustics can handle moving sound sources.

Can Project Acoustics handle moving sources?
Yes, Project Acoustics consults the lookup table and updates the audio DSP on each tick, so it can handle moving sources and listener.

But the dynamic environment with moving walls and destructible terrain is still an issue that requires workarounds.

Here's an example I found of someone testing Project Acoustics with a moving sound source.
 
All I've really seen mentioned about Series X audio is this brief snip saying they're supporting Project Acoustics with hardware. We have zero information about the limits, but the focus for them appears to be environment modelling, while Sony is focusing more on very accurate positioning/panning, largely talking about HRTF.
There are a few blurbs here:

Project Acoustics – Incubated over a decade by Microsoft Research, Project Acoustics accurately models sound propagation physics in mixed reality and games, employed by many AAA experiences today. It is unique in simulating wave effects like diffraction in complex scene geometries without straining CPU, enabling a much more immersive and lifelike auditory experience. Plug-in support for both the Unity and Unreal game engines empower the sound designer with expressive controls to mold reality. Developers will be able to easily leverage Project Acoustics with Xbox Series X through the addition of a new custom audio hardware block.

Spatial Audio – Spatial Audio delivers deeply immersive audio which enables the player to more accurately pinpoint objects in a 3D play space. With full support for Dolby Atmos, DTS:X and Windows Sonic, Xbox Series X has custom audio hardware to offload audio processing from the CPU, dramatically improving the accessibility, quality and performance of these immersive experiences.

Still no details about the actual hardware, though. In a normal year, I'd expect we might get a bit of info from a HotChips presentation like the ones they did for the original XB1 and the One X, but that probably won't even happen this year, so who knows when we'll get any details.
 

So the blurb on Spatial Audio basically confirms that XBSX has hardware acceleration for HRTF as well as 3D positioning in game space.

I'm guessing Project Acoustics is to handle the physical sound characteristics and modeling of the world and geometry (how sound interacts with the world around you before it reaches your ears) while Spatial Audio is there to handle more dynamic sound sources as well as the location of the sound sources in the 3D soundscape.

Obviously that's a very naïve and simplistic description of what's going on, as Project Acoustics will alter and modify the sound such that by the time it reaches your ear, it may not be coming from the direction where it originated (obstruction, for example), or it may arrive from multiple directions (reflection, for example) with appropriate delays due to sound propagation (like echoes).
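The propagation-delay part of that is at least easy to quantify: sound travels at roughly 343 m/s, so a reflected path that's longer than the direct path arrives later by a predictable number of samples. A quick sketch (the sample rate and path lengths are just typical example values):

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 °C
SAMPLE_RATE_HZ = 48000

def delay_samples(path_length_m):
    """Whole samples of delay contributed by a path of this length."""
    return round(path_length_m / SPEED_OF_SOUND_M_S * SAMPLE_RATE_HZ)

direct = delay_samples(10.0)   # straight line to the listener
echo = delay_samples(44.3)     # longer reflected path off a wall
# Gap between direct sound and reflection, in milliseconds.
gap_ms = (echo - direct) * 1000 / SAMPLE_RATE_HZ
```

Here the reflection lands about 100 ms after the direct sound, which is well past the point where the ear hears it as a distinct echo rather than reverberation.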

Regards,
SB
 
I was wrong. Project Acoustics can handle moving sound sources.

But the dynamic environment with moving walls and destructible terrain is still an issue that requires workarounds.

Here's an example I found of someone testing Project Acoustics with a moving sound source.
Very good! Odd choice for a three-legged avatar though.
 
Yeah, it supports a lot of features, but there's no mention of Dolby Atmos support. This could become a hassle for multiplatform developers if they were looking for a common format to use on PC/Xbox/PlayStation.

Dolby Atmos support is certainly in the PS5, given that it can play UHD and normal Blu-ray movies.
 
Dolby Atmos support is certainly in the PS5, given that it can play UHD and normal Blu-ray movies.

It's being said they won't support it in games. Snippets from the post:

More annoyingly, they AREN'T going to be supporting Dolby Atmos natively on their hardware. Microsoft are (and indeed already do on XBOne in a limited fashion).

Atmos is a major change in how we encode and output audio from the hardware that allows a more accurate surround experience from whatever speaker setup you have. It requires multiple speakers around you and an Atmos decoder amp, which 99% of people don't yet have, but it is a growing market.

Due to the nature of game audio engines, Atmos is a natural fit anyway - we have been positioning hundreds of emitters in 3D space for decades, so Atmos, simply put, allows us to position those into your speakers rather than have that information added to a generic 5.1 or 7.1 downmix.

What we really want, those of us making multi-platform titles, is parity between consoles - so that we can make the game once and not have to mix differently for each platform. The lack of hardware Atmos support is bad because it means if we want to do it, it will have to be added in software and incur a CPU cost that the Xbox won't.
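The contrast the dev describes, carrying an emitter's position as metadata versus baking it into channel gains at mix time, can be sketched like this (hypothetical structures; real Atmos object metadata is far richer than this):

```python
import math

def downmix_stereo(sample, azimuth_rad):
    """Channel-based path: the position is consumed at mix time via
    constant-power panning, and only the L/R gains survive."""
    pan = (azimuth_rad + math.pi / 2) / 2  # [-pi/2, pi/2] -> [0, pi/2]
    return (sample * math.cos(pan), sample * math.sin(pan))

def object_frame(sample, azimuth_rad, elevation_rad):
    """Object-based path: the audio stays mono and the position rides
    along as metadata for the playback device to render to whatever
    speaker layout it actually has."""
    return {"audio": sample, "az": azimuth_rad, "el": elevation_rad}

left, right = downmix_stereo(1.0, 0.0)  # centred source: equal gains
frame = object_frame(1.0, 0.0, 0.3)     # same source, position preserved
```

Once the downmix happens, the elevation and exact angle are gone for good; the object frame keeps them, which is what lets an Atmos renderer place the sound on height speakers or in an HRTF.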

 