From the most recent article, Cerny is quoted assigning different functions to the two wavefronts.
https://www.eurogamer.net/articles/digitalfoundry-2020-playstation-5-the-mark-cerny-tech-deep-dive
Thanks for the article. I hadn't seen it before; I was searching for their audio clock rate.
"In general, the scale of the task in dealing with game audio is already extraordinary - not least because audio is processed at 48000Hz with 256 samples, meaning there are 187.5 audio 'ticks' per second - meaning new audio needs to be delivered every 5.3ms."
... and
"'GPUs process hundreds or even thousands of wavefronts; the Tempest engine supports two,' explains Mark Cerny. 'One wavefront is for the 3D audio and other system functionality, and one is for the game. Bandwidth-wise, the Tempest engine can use over 20GB/s, but we have to be a little careful because we don't want the audio to take a notch out of the graphics processing. If the audio processing uses too much bandwidth, that can have a deleterious effect if the graphics processing happens to want to saturate the system bandwidth at the same time.'"
So do they alternate the two wavefronts to keep bandwidth consumption down?
Hmm... maybe not. Running a second wavefront could actually increase bandwidth use. The 3D audio path renders in real time, so they can't get too far ahead of the "playhead".
They have spare cycles.
But it seems like they need low latency to respond to the player's changing position, hence the short audio ticks.
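A rough sketch of that tradeoff, assuming end-to-end latency scales with how many 256-sample ticks are rendered ahead of the playhead (the buffer depths below are hypothetical examples, not from the article):

```python
# Illustrative only: latency if the engine renders N ticks ahead
# of the playhead. Tick period derived from the quoted 48 kHz / 256.
TICK_PERIOD_MS = 256 / 48_000 * 1_000  # ~5.33 ms

for ticks_ahead in (1, 2, 4, 8):  # hypothetical buffer depths
    latency_ms = ticks_ahead * TICK_PERIOD_MS
    print(f"{ticks_ahead} tick(s) ahead -> ~{latency_ms:.1f} ms latency")
```

Even a modest 8-tick buffer would already be ~43 ms behind the player's position, which is why staying close to the playhead matters for responsive 3D audio.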
This leads to my question: what functionality outside of 3D audio and system tasks can the game assign to the remaining compute throughput, how much compute is that, and what advantages would it have over the more abundant and standardized compute on the CPU and GPU?
Outside 3D audio, perhaps sensor input (e.g., PSEye, Guitar, ...), I/O value-adds (e.g., conversion, decoding, encryption), AI?
The first-party studios will have to answer your question.