Steam implements AMD TrueAudio Next in Steam Audio


Deleted member 13524

Guest
http://steamcommunity.com/games/596420/announcements/detail/1647624403070736393

We have just released Steam Audio 2.0 beta 13, which brings support for AMD TrueAudio Next technology. TrueAudio Next lets developers accelerate certain spatial audio processing tasks by reserving a portion of the GPU. Combined with Steam Audio's ability to model a wide range of acoustic phenomena, this enables increased acoustic complexity and an increased sense of presence in games and VR applications.

AMD TrueAudio Next (TAN) is a software library that implements high-performance convolution – a filtering technique – for audio data on supported GPUs. TrueAudio Next also provides a feature called Resource Reservation, which allows developers to dedicate a portion of the GPU's computational resources to be spent on time-sensitive tasks such as audio processing. Here is a brief explanation of both of these high-level features of TAN.
(...)
TAN also provides a feature called resource reservation, which allows developers to reserve a portion of the GPU exclusively for audio processing.
(...)
With Resource Reservation, developers can reserve some small number, say 4, of the CUs for audio processing only. This dedicates 4 CUs for audio processing, while reducing the number of CUs available to the renderer by 4. Resource reservation does not allow more than 20 to 25% of the GPU CUs to be reserved.
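For readers unfamiliar with convolution reverb: the "filtering technique" above is an FIR filter whose coefficients come from a room impulse response that is often seconds long, so every output sample depends on tens of thousands of input samples, per source, per ear, in real time. A minimal sketch of the operation in numpy/scipy terms (a generic illustration, not the TAN API; the signal and impulse response here are synthetic placeholders):

```python
import numpy as np
from scipy.signal import fftconvolve

rate = 48000                                  # sample rate (Hz)
dry = np.random.randn(rate)                   # 1 s of placeholder source audio
t = np.linspace(0, 8, 2 * rate)
ir = np.random.randn(2 * rate) * np.exp(-t)   # fake 2 s decaying room impulse response

wet = fftconvolve(dry, ir)                    # FFT-based convolution: the core operation
wet /= np.max(np.abs(wet))                    # normalize to avoid clipping
```

TAN's contribution is running many of these convolutions on reserved GPU CUs instead of on the CPU.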

4 CUs at RX480 clocks is 640 GFLOPs.
How/why do devs need 640 GFLOPs for audio processing? That's the FP32 throughput equivalent of a 12-core Threadripper dedicated to audio alone.
This makes me question the compute efficiency of TrueAudio Next, at least compared to CPU implementations (which would make sense, since GPU audio convolution is about a year old and CPU audio convolution is... 30 years old?).
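For what it's worth, the 640 GFLOPs figure checks out as back-of-the-envelope arithmetic; the clock speeds below are my assumptions (~1.27 GHz RX480 boost, ~3.4 GHz all-core Zen 1 with two 128-bit FP32 FMA pipes per core):

```python
# GCN CU: 64 lanes x 1 FMA (2 FLOPs) per lane per cycle; RX480 boost ~1.266 GHz (assumed)
gpu_gflops = 4 * 64 * 2 * 1.266e9 / 1e9
# Zen 1 core: 2x 128-bit FMA pipes = 8 FP32 FMAs = 16 FLOPs per cycle; ~3.4 GHz (assumed)
cpu_gflops = 12 * 16 * 3.4e9 / 1e9
print(gpu_gflops, cpu_gflops)  # ~648 vs ~653: the two are indeed comparable
```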


They also mention they're only using TrueAudio for reverb and not HRTF, because HRTF is very light on CPU resources.

It seems it'll only be available as a top-end audio reverb choice for people with RX470 and up.
One odd fact is that it apparently supports Fiji cards (GCN3), but not the full Tonga cards, which have compute throughput similar to the RX470.

I also feel there's a missed opportunity with the CU allocation here. Why is there no option to allocate a secondary GPU for TrueAudio Next?
People using Raven Ridge + discrete graphics could dedicate the iGPU to TrueAudio Next, meaning up to 8 CUs for this ultra-high-end audio reverb with no performance impact. Also, one could choose to buy e.g. an inexpensive (and #gasp# available) RX540 to pair with a GTX1080 Ti and get the high-end reverb too.

This is only the first release though, so I guess there's plenty of room to improve.
 
1CU per shader engine might have been the minimum granularity for distribution of quantum phased hyperdimensional acoustic processing in the 4th degree words keep coming out and you're still reading this.

<<rests my chin on my hands>>

Do go on! :)

Regards,
SB
 
I assume 4 CUs was just an example, and that the actual number of CUs allocated is up to developers to decide. *shrug*
That was my initial thought, but they seem to treat 4 CUs as the minimum in their examples:

[Image: Steam Audio documentation excerpt, showing 4 reserved CUs in its examples]


And their performance comparison graph even uses 8 CUs:

[Image: TrueAudio Next performance comparison graph, measured with 8 reserved CUs]


If you go by that comparison, a 2 CU solution would actually be slower than a CPU for a low number of sources.

Furthermore, if they allowed a 2 CU minimum, they would have had to include the RX560 in the list, since 2 CUs is less than 20% of that chip's full 16 CUs (12.5%, in fact). However, their list starts with the RX470 as the lowest-performing GPU.
 
Not too excited about going all-in on reverberation through linear filtering/convolution; they could even drop the sample rate to 22.05 kHz or lower then. Recent headphones and IEMs don't have much output over 10 kHz anyway, and for linear processing there is no need for a high sample rate. Anyway, it won't be convincing without non-linear stuff. BTW, it's said HRTF needs at least/over 96 kHz.
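As a rough sanity check on the compute side of that claim (a coarse cost model of my own, not anything from the TAN docs): with uniformly partitioned FFT convolution, the work per second of audio scales roughly linearly with sample rate, so dropping from 48 kHz to 22.05 kHz would cut the convolution cost roughly in half:

```python
import numpy as np

def conv_flops_per_sec(rate, ir_s=2.0, block_s=0.010):
    """Very coarse FLOP model of uniformly partitioned FFT convolution:
    ~5*N*log2(N) FLOPs per real FFT of size N, 6 FLOPs per complex MAC."""
    n = 2 * int(rate * block_s)              # FFT size with 50% overlap
    parts = int(np.ceil(ir_s / block_s))     # impulse response partitions
    fft = 5 * n * np.log2(n)                 # one forward + one inverse per block
    macs = 6 * (n // 2 + 1) * parts          # spectral multiply-accumulates
    return (2 * fft + macs) / block_s        # blocks per second = 1 / block_s

print(conv_flops_per_sec(48000) / conv_flops_per_sec(22050))  # ~2.2x cheaper
```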
 
Recent headphones and IEMs don't have much output over 10 kHz anyway*
*Citation needed
 
I thought with the R9 290 AMD added DSPs to the chip for audio? What happened with that?

Yes, that supported products list is looking a little short.
I think all GCN cards apart from GCN 1.0 (and the 7790, which had the hardware, but disabled) should support it!?
 
4 CUs at RX480 clocks is 640 GFLOPs.
(...)
Almost three Xbox 360s just for audio... That's crazy. I'd rather have an external synthesizer do the job and save those valuable CUs for graphics. I used AMD Audio on my RX 570 for a while, though
 
I used AMD Audio on my RX 570 for a while, though
I had a 290X and then 2 390X, but never had an inkling of an idea how to enable or use AMD audio... It never appeared as a sound device on either of my PCs.
 
I had a 290X and then 2 390X, but never had an inkling of an idea how to enable or use AMD audio... It never appeared as a sound device on either of my PCs.
So they support TrueAudio? The device appears as AMD Audio or something like that, can't quite remember. I used it for a while, disabled the onboard audio and it played through the HDMI port of the GPU.
 
So they support TrueAudio?
Oh yes... AMD spent more than half the 290/X presentation talking about friggin' 3D surround audio, don't you remember? :D Incredibly frustrating for those who were anxious to hear more about the first new high-end AMD GPU in 1.5 years!

Then the initial experience was further marred by the shitty reference design cooler AMD coupled their boards with. Oh those were the good old days! ;)
 
a synth will only do midi

I think that was about procedural audio. It could indeed be the preferable way, yet I'm not really sure whether it's "samples" that need to be generated, or whether those "samples" need to be fed into a linear filter array like TA Next's. Seems naive.
 
Not my idea, https://www.princeton.edu/3D3A/Papers.html

I think they want to interpolate between multiple measurements, and at a higher rate less can go wrong; besides, they sort of admit that even ultra-high-quality HRTF "scans" (not coming to consumer space anytime soon) are somewhat ineffective above 6 kHz.
96 kHz is unrelated to human perception.
HRTFs are really interesting. I encourage everyone with an interest in positional audio to experiment with attaching small omnis by their own ears (typically on glasses) and recording away! The result is generally better than typical binaural recordings, possibly because the sound isn't passed through the outer ear (pinna) of a dummy head, but only your own.
 
Not too excited about going all-in on reverberation through linear filtering/convolution; they could even drop the sample rate to 22.05 kHz or lower then. Recent headphones and IEMs don't have much output over 10 kHz anyway, and for linear processing there is no need for a high sample rate.
Well, headphones can reproduce 10+ kHz, but the amount of information up there is typically not much to write home about, and most adults are effectively deaf above 15 kHz, so any signal going on there isn't going to make much of a difference to most people on most material.
 