Why would a custom audio unit in the PS5 be surprising? Sony designed the audio processor of the... Super Nintendo 30 years ago.
A unit is generally a block on the die rather than modifications to existing bits. RT enhancements to the CUs would be called 'hardware RT' but not an 'RT unit'. However, the 'custom unit' doesn't come from Cerny's mouth, so we can't be sure he described it as such and that it wasn't just a (mis)interpretation by the author.

"The AMD chip also includes a custom unit for 3D audio that Cerny thinks will redefine what sound can do in a videogame."

It's described as a custom unit on the AMD chip. That could literally be anything from an entirely separate bespoke block to modifications on the GPU front-end that further improve on TrueAudio Next.
But why would Sony care to acquire them if their solution is the same as AMD's and the XBSX's? Why not just leave them to develop and supply TrueAudio Next support on their own?

Audiokinetic will definitely be responsible, at least in part, for developing a Wwise plugin for the PS5 SDK, whether it's a custom processor or the GPU doing some of the complex computation. They're a software company, not a hardware company.
They did add an external 3D audio chip for PSVR despite the cost sensitivity of the product, so at least we can see the intent to put more specialized silicon to the task. And they own Wwise now; that can't be a coincidence. I think they want to offer the entire tool chain, not just add hardware and hope devs will use it. It's the audio middlewares that would be responsible for the hardware support.
I can think of a few reasons that could have contributed to dropping the DSPs.

AMD is continually updating their SDK. They dropped DSPs; I don't know how they made that decision. I'm not up to date on current audio DSPs or how they'd compare in terms of flexibility and overall performance. 8 CUs on an RDNA GPU is a lot of math performance to compete with.
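For a rough sense of scale (my own back-of-the-envelope numbers, not anything from Cerny or AMD): an RDNA CU has 64 FP32 lanes each doing a fused multiply-add per clock, so 8 CUs at an assumed ~2 GHz land around 2 TFLOPS.

```python
# Back-of-the-envelope FP32 throughput of 8 RDNA CUs.
# The 2.0 GHz clock is my assumption, not a confirmed PS5 figure.
CUS = 8
LANES_PER_CU = 64        # two SIMD32 units per RDNA CU
FLOPS_PER_LANE = 2       # a fused multiply-add counts as 2 FLOPs
CLOCK_HZ = 2.0e9

tflops = CUS * LANES_PER_CU * FLOPS_PER_LANE * CLOCK_HZ / 1e12
print(f"~{tflops:.2f} TFLOPS FP32")   # ~2.05 TFLOPS
```

That's the ballpark a dedicated audio DSP would have to compete with on raw arithmetic; it says nothing about latency, power, or how well audio workloads actually map to wide SIMD.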
Does it need to be just one DSP? The current gen didn't limit itself to one, or at least I don't think Microsoft's solution did.

Can you get the revolution in audio that you want with a tiny DSP? Is that assumption actually true?
...
Does it need to be just one DSP? The current gen didn't limit itself to one, or at least I don't think Microsoft's solution did
...
I'm not saying a DSP is a must-have, but it's a solid addition as you get more bang per buck. Reduce the GPU by 10%, add a tiny DSP, and you get 3D audio as a standard feature and a USP for your platform while saving a few bucks versus the larger GPU.
Can you get the revolution in audio that you want with a tiny DSP? Is that assumption actually true? ...
I only phrased it as a tiny DSP because I was responding to the suggestion that you could shrink the GPU by 10%, replace it with a tiny DSP, and outperform what TrueAudio Next could provide with that 10%.
There's nothing for most of the general audience to review, since the customers for these DSPs are SoC designers or manufacturers. There may be other factors that would go against licensed DSPs, such as whether the licensing would reflect having dozens of independent DSPs on-die, and whether there is additional complexity in programming them or linking them to the memory subsystem. Some of the benefits of licensed IP, like pre-validated elements and help with design, might not matter as much for a company like AMD. It might not rule out a more streamlined CU with a task-processor execution model, as was alluded to in an AMD patent.

I was genuinely asking the question. I don't know how big an audio processor you'd need to do the same kind of real-time convolution reverb that an RDNA GPU can do across 2-8 CUs, as an example. I really don't have the knowledge of DSPs, or of what's available on the market, to get a sense of comparison.
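For anyone unfamiliar with what that workload actually is, here's a minimal sketch of FFT-based convolution reverb, the kind of operation TrueAudio Next accelerates. This is the single-block textbook version in NumPy, not AMD's or Sony's implementation; real engines use partitioned convolution to keep latency low, but the arithmetic cost it illustrates is the same.

```python
# Minimal FFT convolution reverb sketch (whole-signal version, no partitioning).
import numpy as np

def convolution_reverb(dry: np.ndarray, impulse_response: np.ndarray) -> np.ndarray:
    """Convolve a dry signal with a room impulse response in the frequency domain."""
    n = len(dry) + len(impulse_response) - 1      # length of the full convolution
    n_fft = 1 << (n - 1).bit_length()             # next power of two for the FFT
    wet = np.fft.irfft(np.fft.rfft(dry, n_fft) * np.fft.rfft(impulse_response, n_fft), n_fft)[:n]
    return wet / np.max(np.abs(wet))              # crude normalisation

# Example: 1 s of noise through a decaying 0.5 s impulse response at 48 kHz.
sr = 48000
dry = np.random.randn(sr).astype(np.float32)
ir = (np.random.randn(sr // 2) * np.exp(-np.linspace(0.0, 6.0, sr // 2))).astype(np.float32)
wet = convolution_reverb(dry, ir)
```

The cost scales with impulse-response length times the number of sources and ears you process per frame, which is why throwing a few CUs at it is attractive.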
Found an actually effective 3D audio example.

If I had an AMD GPU I'd probably install Unity or UE and see if I could play around with Steam Audio and profile it.
Found an actually effective 3D audio example.
Not so proprietary this time... and cheaper than Atmos.

I'm still trying to figure out how Dolby Atmos for headphones compares to the ambisonic formats. It would be very hard for Sony to force devs to use a proprietary format, and ambisonics seems to be the growing standard for VR.
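For context on what "ambisonic" means here, this is a hedged sketch of encoding a mono source into first-order ambisonics (AmbiX convention: ACN channel order, SN3D normalisation). It's the generic sound-field math, not code from Sony, Valve, or Dolby.

```python
# First-order ambisonic (AmbiX) encode of a mono source at a given direction.
import numpy as np

def encode_foa(mono: np.ndarray, azimuth_rad: float, elevation_rad: float) -> np.ndarray:
    """Return a (4, n_samples) array with channels in ACN order: W, Y, Z, X."""
    gains = np.array([
        1.0,                                                # W: omnidirectional
        np.sin(azimuth_rad) * np.cos(elevation_rad),        # Y: left/right
        np.sin(elevation_rad),                              # Z: up/down
        np.cos(azimuth_rad) * np.cos(elevation_rad),        # X: front/back
    ])
    return gains[:, None] * mono[None, :]
```

Higher-order ambisonics just adds more spherical-harmonic channels for sharper spatial resolution; the decode is then rendered to speakers or binaurally to headphones, which is where the HRTF work comes in.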
https://www.iis.fraunhofer.de/en/ff/amm/broadcast-streaming/mpegh.html

Distribution format
The 360 Reality Audio Music Format was designed around being optimized for music distribution. In an effort to avoid the challenges of proprietary technology, Sony has been partnering with Fraunhofer IIS, part of Europe’s largest organization for applied research, to ensure the format complies with MPEG-H 3D Audio standard, an open audio standard.
Immersive sound offers cinema-like realism
The system may transmit immersive sound with additional front and rear height speaker channels or the Higher-Order Ambisonics sound field technology, improving today’s surround sound broadcasts and streams to provide a truly realistic and immersive audio experience on par with the latest cinema sound systems.
This?

That looks like a music format, not really how they'd do real-time audio for games.
This?
https://www.soundguys.com/mpeg-h-explained-24471/
Seems to be more suited to games, film, VR etc. than music.
So I assume you mean something else?
Cerny had interviews back in 2013, maybe 2012, stating RT was considered for PS4 but devs shut the idea down as they'd need to develop new tools and alter workflows too much from what they were doing. With RT being a consideration for PS4, MS would've been pretty sure of RT in PS5 for several years now. IIRC, Sony has RT patents dating back quite far as well. Not too sure of that one, though.
As for RT with NV/MS, wasn't the launch pretty bumpy and claimed by many to be rushed? Poor driver maturity at launch compared to typical NV drivers, no software support, and poor performance from anything outside the top-spec card.
OK thanks, the 360 music format from Sony: https://www.theverge.com/2019/10/15/20915250/sony-360-reality-audio-release-date-amazon-partners