Predict: The Next Generation Console Tech - AUDIO

Do you mean that is already here? Like on PS3 or something to that extent?

Yes, I think whatever comes in the next generation can already be done on the PS3.

Maybe the next-gen will handle more voices and higher quality audio samples, but nothing that would be noticeable using TV speakers or a "good-looking" 5.1 speaker set.

Plus, what's being used on today's consoles is already a rather small improvement over what the XBox 1 had, afaik.


Most people don't care about sound anyway. Many gamers don't even mind playing some of their games without sound at all.
 
I see no point in changing audio compression format. I've never yet heard audio compression artefacts in a game.

Play those Back to the Future adventure games. The voice acting is compressed all to hell and back, in what must be 16 kbps MP3s with crappy encoding.
 
That's not a fault of the technical limits of modern audio engines, but a fault of the delivery platform. The BttF games don't need anything better than a larger package and far less aggressive compression. They don't need 192 kHz, 24-bit, 5-channel audio samples! ;)
 
I wonder if MS will be including stereo headphone jacks in the controllers this time around? Seems like a good way to guarantee they get their money's worth, with people experiencing the benefits of quality sound without relying on them having higher-end stereo systems or dedicated gaming headsets.
That's an opportunity Sony seemed to have missed. Advertising convincing 3D stereo audio would actually be a big thing for me. A suitable YouTube campaign would also successfully educate all gamers. Tell the viewer to plug in a pair of cans and show COD/Battlefield while switching between stereo and holophonic audio, and it'll blow them away. Advertise your console as the only one that does this, and bag a load of customers, I bet.
 
That's an opportunity Sony seemed to have missed. Advertising convincing 3D stereo audio would actually be a big thing for me. A suitable YouTube campaign would also successfully educate all gamers. Tell the viewer to plug in a pair of cans and show COD/Battlefield while switching between stereo and holophonic audio, and it'll blow them away. Advertise your console as the only one that does this, and bag a load of customers, I bet.

Could Sony not offer at least an approximation of this using GPU compute?
 
Could Sony not offer at least an approximation of this using GPU compute?

It'd be possible, depending on how parallel-friendly sound processing is. But then, depending on how many resources it takes, it could reduce the amount of rendering power available for graphics below 1.2 TF. And if it doesn't, you might as well use it for graphics anyway, as multiplatform titles will seek relative graphics parity.
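Mixing itself is about as parallel-friendly as workloads get: every output sample is an independent weighted sum over the active voices, which is why it maps naturally onto GPU compute. A minimal CPU sketch of that structure (the voices and gains here are made-up toy data):

```python
# Mixing N voices into one output buffer: every output sample is an
# independent weighted sum, so the inner loop parallelises trivially.
def mix(voices, gains):
    """voices: list of equal-length sample lists; gains: one gain per voice."""
    length = len(voices[0])
    out = [0.0] * length
    for voice, gain in zip(voices, gains):
        for i in range(length):
            out[i] += gain * voice[i]  # each index i is independent of every other
    return out

# Two toy 4-sample voices, mixed at half and full gain.
mixed = mix([[1.0, 0.0, -1.0, 0.0], [0.5, 0.5, 0.5, 0.5]], [0.5, 1.0])
```

On a GPU, the outer structure stays the same; each output index simply becomes one thread's work item.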

So something like that could happen, or perhaps one console just ends up with better sounding game audio than the other.

I expect we won't see this taken advantage of in the first wave of titles, however. Especially if they weren't successful emulating it with even 8 CPUs. Developers wouldn't have time to take advantage of it for launch titles.

And since no one has developed seriously for anything similar (EAX, for example) for almost a decade now, there's nothing existing on the PC side that they could have used as a basis for prototyping.

I'll be in heaven if we see a resurgence of focus on proper audio simulation. Gaming has been dreadful when it comes to audio for over a decade now, and it makes me sad just to think of it.

I wonder if Bkillian had a hand in designing the audio capabilities of Durango. :) If so I'll have to send him a virtual pint of the best beverage he likes if audio in games improves significantly this generation. :)

Regards,
SB
 
Would it really be difficult to take advantage of this audio processor right away? As far as the rumour suggests, it uses the XAudio2 API, which is well understood. They already have a companion API that calculates volumes for sounds based on their positions relative to the viewpoint.
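Conceptually, that companion API takes a listener position and an emitter position and hands back per-channel volumes. A rough sketch of the idea follows; the distance-falloff model, constant-power pan, and all constants here are illustrative stand-ins, not the real API's behaviour:

```python
import math

# Illustrative position-to-volume calculation: attenuate by distance,
# then pan by the horizontal angle from the listener to the emitter.
# Positions are (x, z) pairs on the horizontal plane.
def positional_volumes(listener_pos, emitter_pos, min_dist=1.0):
    dx = emitter_pos[0] - listener_pos[0]
    dz = emitter_pos[1] - listener_pos[1]
    dist = math.hypot(dx, dz)
    # Inverse-distance falloff, clamped so sources inside min_dist stay at full volume.
    attenuation = min(1.0, min_dist / dist) if dist > 0 else 1.0
    # Constant-power pan from the azimuth: 0 = straight ahead.
    angle = math.atan2(dx, dz)
    pan = math.sin(angle)                      # -1 = hard left, +1 = hard right
    left = attenuation * math.cos((pan + 1) * math.pi / 4)
    right = attenuation * math.sin((pan + 1) * math.pi / 4)
    return left, right
```

A game engine would run this per emitter each frame and feed the results to the mixer as channel gains.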
 
...
I wonder if MS will be including stereo headphone jacks in the controllers this time around? Seems like a good way to guarantee they get their money's worth, with people experiencing the benefits of quality sound without relying on them having higher-end stereo systems or dedicated gaming headsets.

The rumour does suggest support for four independent headset mixes, so a full-on headset jack on the controller would make a lot of sense. You could play Halo in four-way split-screen and each person could have an audio mix relative to their own view.
 
Could Sony not offer at least an approximation of this using GPU compute?

Good question. Would GPU latency be an issue for real-time audio? I haven't seen it be one on the PC, but PC GPUs have different memory access behaviour than Durango and PS4.

Edit: I might hazard a guess that the GPU is a bad place for real-time audio in a game, only because audio has to become the highest-priority processing job. You can't accept any audio stuttering at all; you can tolerate torn frames and inconsistent framerates more than you can tolerate stuttering audio. Maybe this is totally wrong, but graphics needs to be the first-class citizen on the GPU.
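The priority argument comes down to deadlines: the mixer must deliver a full buffer every buffer_size / sample_rate seconds, or the listener hears a glitch. With typical numbers (256-sample buffers at 48 kHz are a common but assumed configuration), the audio deadline is roughly three times tighter than a 60 fps video frame:

```python
# Audio deadline vs a video frame: a buffer must be refilled every
# buffer_samples / sample_rate seconds, with no slack at all.
def buffer_deadline_ms(buffer_samples, sample_rate_hz):
    return 1000.0 * buffer_samples / sample_rate_hz

audio_deadline = buffer_deadline_ms(256, 48_000)  # ~5.3 ms per buffer
video_frame = 1000.0 / 60                         # ~16.7 ms per 60 fps frame
```

Missing the video deadline drops a frame; missing the audio deadline produces an audible pop, which is why audio work can't be preempted by rendering.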
 
Need some more specs on the PS4 audio chip to compare. The only detail that's been released about it was from VGLeaks, I think. According to them, it's capable of playing back around 200 MP3 streams. Assuming those are stereo, that'd be 400 channels of audio. But that's only playback, though.
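The back-of-envelope arithmetic behind that figure, with the stereo assumption made explicit (the 48 kHz / 16-bit decoded format is also an assumption, used only to show the raw PCM those channels represent):

```python
# Rumoured PS4 audio chip capability, per VGLeaks: ~200 MP3 streams.
mp3_streams = 200
channels_per_stream = 2                    # assuming every stream is stereo
total_channels = mp3_streams * channels_per_stream

# Raw decoded PCM for those channels at an assumed 48 kHz, 16-bit:
pcm_bytes_per_sec = total_channels * 48_000 * 2   # 38.4 MB/s of decoded audio
```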

Silent_Buddha said:
I'll be in heaven if we see a resurgence of focus on proper audio simulation. Gaming has been dreadful when it comes to audio for over a decade now, and it makes me sad just to think of it.

Very true. The whole game-audio part of the industry just kind of plateaued; everything was "good enough" and has stayed that way for ages. But really that's only indicative of how little interest or importance most people place on it. Why bother putting more R&D and dev time into something most people wouldn't or couldn't notice, due to crappy speaker setups and bad or undiscerning ears?

I think we'll see a resurgence in the importance of it all when VR takes off. Then we'll really need to have sound get more realistic to immerse the player better.
 
That's an opportunity Sony seemed to have missed. Advertising convincing 3D stereo audio would actually be a big thing for me. A suitable YouTube campaign would also successfully educate all gamers. Tell the viewer to plug in a pair of cans and show COD/Battlefield while switching between stereo and holophonic audio, and it'll blow them away. Advertise your console as the only one that does this, and bag a load of customers, I bet.

The current output to the stereo headset jack on the DS4 seems to support 32 kHz for 2 players, and apparently drops slightly as you add more. That should be fine for voice chat, but would it actually be adequate for the full gameplay audio (music, effects, dialogue, etc.)? The limitation appears to be one of bandwidth, as it's sharing the BT connection with its attached controller and any other active BT peripheral. So, I suspect they'll do what they did this generation: offer stereo headsets that sit off the BT piconet, on their own separate 2.4 GHz channel for full bandwidth (the Pulse seems to support 16-bit @ 48 kHz, based on the Windows driver). Not sure there's a cost-effective way to do that for a pack-in, though.
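The bandwidth squeeze is easy to see in raw PCM terms. The actual link will run a codec, so these are uncompressed figures shown only for the ratio between the two rates:

```python
# Raw PCM bit rates for the quoted sample rates. The real BT link compresses,
# but the ratio shows why the shared connection lands at 32 kHz.
def pcm_kbps(sample_rate_hz, bits_per_sample, channels):
    return sample_rate_hz * bits_per_sample * channels / 1000

ds4_jack = pcm_kbps(32_000, 16, 2)    # 1024 kbps raw at the DS4 jack's rate
pulse_link = pcm_kbps(48_000, 16, 2)  # 1536 kbps raw at the Pulse's 48 kHz
```

A dedicated 2.4 GHz channel sidesteps having to share that budget with controller input and other BT traffic.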
 
But doesn't Xenon also have much poorer IPC than Jaguar, 0.2 vs 1/1.1?

And why is doing 3D sound for headphones so difficult exactly?
Is the actual HRTF mixing that difficult or is it mostly due to the need for headtracking to do accurate 3D sound?

As I don't think you can assume the headphones are parallel to the screen and calculate the HRTF for that (i.e. so sounds to the right of the on-screen view come from the right earpiece, etc.), since most people sit somewhat off-axis to their TV and do not face the screen absolutely head-on. This would screw up the HRTF, as you'd actually be sitting to the right or left of the screen.
 
But doesn't Xenon also have much poorer IPC than Jaguar, 0.2 vs 1/1.1?
Yes, for general workloads. But for optimised audio workloads, the theoretical maximum is achievable. And the IPC figure for Jaguar is for the whole CPU, not only the floating-point module.
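To put rough numbers on those IPC figures (taking Xenon at its 3.2 GHz clock and Jaguar at the 1.6 GHz widely reported for these consoles, with the thread's quoted IPC values):

```python
# Rough per-core throughput at the quoted IPC figures.
xenon_per_core = 3.2e9 * 0.2    # 3.2 GHz at ~0.2 IPC -> 0.64e9 instructions/s
jaguar_per_core = 1.6e9 * 1.1   # 1.6 GHz at ~1.1 IPC -> 1.76e9 instructions/s
ratio = jaguar_per_core / xenon_per_core   # ~2.75x per core on general code
```

The caveat above still applies: hand-optimised audio code on Xenon could get much closer to its theoretical peak than the 0.2 general-workload figure suggests.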
And why is doing 3D sound for headphones so difficult exactly?
Is the actual HRTF mixing that difficult or is it mostly due to the need for headtracking to do accurate 3D sound?

As I don't think you can assume the headphones are parallel to the screen and calculate the HRTF for that (i.e. so sounds to the right of the on-screen view come from the right earpiece, etc.), since most people sit somewhat off-axis to their TV and do not face the screen absolutely head-on. This would screw up the HRTF, as you'd actually be sitting to the right or left of the screen.
For one thing, there's only a single source per ear. Our brains use many cues for determining audio directionality, including integrating results due to the ear shape, us moving our heads slightly, and subtle phase differences. Worse is that it's different for each person, so you'd need a personalised HRTF.

It's like the difference between a head-tracking 3D effect and a true 3D effect: it's a nifty illusion, but your brain isn't fooled, since it's still getting the same image in both eyes. If you have any of those pseudo-3D Fresnel images, try the experiment of turning one on its side: you still see the 3D effect as you move the image around, but your brain immediately makes it look flatter and less realistic. Same with headphone 3D: with only two speakers, it's very hard to trick the brain completely.
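The two simplest of those directional cues, interaural time difference (ITD) and interaural level difference (ILD), can at least be approximated with a spherical-head model. The constants below are ballpark textbook values, and the ILD curve is a single made-up number standing in for what is really a frequency-dependent effect, which is part of why this alone doesn't fool the brain:

```python
import math

# Crude interaural cue model: ITD from the extra path length around the
# head (Woodworth-style approximation), ILD from head shadowing.
HEAD_RADIUS_M = 0.0875      # average adult head radius, roughly
SPEED_OF_SOUND = 343.0      # m/s in air at room temperature

def interaural_cues(azimuth_deg):
    """azimuth 0 = straight ahead, +90 = fully to the listener's right."""
    theta = math.radians(azimuth_deg)
    itd_seconds = (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))
    # Toy ILD: up to ~20 dB of shadowing at 90 degrees. In reality this
    # varies strongly with frequency and with the individual listener.
    ild_db = 20.0 * math.sin(theta)
    return itd_seconds, ild_db
```

At 90 degrees this gives an ITD of roughly 650 microseconds, close to the commonly cited human maximum; everything the model leaves out (pinna filtering, head movement, per-person variation) is exactly what a personalised HRTF has to capture.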
 
Here's the cool part: _they're right_. They _can_ hear a difference (if they know the difference is there). Non-ABX tests using brain scanners show that their brains react differently to (what they consider) the better audio; in effect, they enjoy that audio more because they "know" it's better.
Did they try to rule out downmixing as the cause of perceptibility?
 
That's an opportunity Sony seemed to have missed. Advertising convincing 3D stereo audio would actually be a big thing for me. A suitable YouTube campaign would also successfully educate all gamers. Tell the viewer to plug in a pair of cans and show COD/Battlefield while switching between stereo and holophonic audio, and it'll blow them away. Advertise your console as the only one that does this, and bag a load of customers, I bet.

Not sure if it will make it to the PS4, but Sony has a patent for it, if that counts for anything.

http://www.google.com/patents/US20130041648
 
Could Sony not offer at least an approximation of this using GPU compute?
Sony's audio hardware might be able to do it for all we know. However, the description of their controller and its audio jack means Sony aren't doing this; their ubiquitous headphone support isn't good enough to provide holophonic audio.
 
Sony's audio hardware might be able to do it for all we know. However, the description of their controller and its audio jack means Sony aren't doing this; their ubiquitous headphone support isn't good enough to provide holophonic audio.
Holophonic audio uses plain stereo; it's just a brand name for a form of binaurally recorded audio. Why would the jack not be good enough?
 
Holophonic audio uses plain stereo; it's just a brand name for a form of binaurally recorded audio. Why would the jack not be good enough?

I think it's less the jack and more the ability of the console to synthesize multiple distinct holophonic audio "experiences" for different controllers without the processing hardware up front. (Assuming the use case is 1-4 players; otherwise it would just be for the one.)
 