Technical investigation into PS4 and XB1 audio solutions *spawn

You can apply all sorts of processes to the audio, but to the end user it's not obvious. So, using your list, if we start with simple stereo sample playback in the 90s, we can add "reflection" in 1998, "refraction" in 2004, "diffraction" in 2007, "dispersion" in 2010, "propagation" in 2012, and look forward to "transmission" in 2013 next-gen games. The differences some of these make to the audio aren't particularly obvious, unlike almost every visual effect we have. God rays, lens flare, subsurface scattering, screen-space ambient occlusion, SVO lighting - every step is obvious and contributes. Every step changes the graphics notably, giving developers and gamers an obvious sense of progress. With audio, you don't get the same ROI. You can add reverb and everyone notices it. You can then improve the reverb algorithm and get a cleaner sound, but it'll be lost on most people. You need a massive sea-change, a whole shift in the audio, for people to notice the difference and value it. Hell, many folk think better audio is just cranking up the bass!
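To make "improve the reverb algorithm" a bit more concrete: the classic building block of an algorithmic (Schroeder-style) reverb is a feedback comb filter. A minimal, purely illustrative sketch follows; the delay length and feedback value are made up, and a usable game reverb chains several of these in parallel plus allpass stages tuned per environment.

```c
/* Feedback comb filter: the simplest building block of a Schroeder-style
 * algorithmic reverb. Illustrative only; buf must start zero-initialised. */
#include <stddef.h>

#define COMB_DELAY 1117              /* delay line length in samples (arbitrary) */

typedef struct {
    float  buf[COMB_DELAY];          /* circular delay line */
    size_t pos;                      /* current read/write index */
    float  feedback;                 /* 0..1, larger = longer decay */
} Comb;

static float comb_process(Comb *c, float in)
{
    float out = c->buf[c->pos];                  /* delayed echo */
    c->buf[c->pos] = in + out * c->feedback;     /* recirculate energy */
    c->pos = (c->pos + 1) % COMB_DELAY;
    return out;
}
```

"Cleaner" reverbs mostly come from more and better-tuned stages of exactly this kind of filter, which is why the improvement is real but subtle.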

As for not requiring special hardware, ideal audio may well be doable in realtime on CPU, but the CPU already has its hands full with the rest of the game. Unless devs want to make sacrifices in more obvious places, dedicated hardware that is far more efficient is certainly going to help, or even be necessary, to encourage adoption of better audio.
 
Try Battlefield: Bad Company or The Last of Us, then: they easily blow away any game from the late '90s or whichever time period you were referring to.

SB was referring to audio on the PC, not consoles; it has certainly taken a step backwards on the PC.
My one hope was OpenAL.
I've been meaning for years to write a sort of open letter to the OpenAL developers.
I don't know how much it's been promoted to game devs, but they have done practically nothing to promote it to end users. Even people who are very knowledgeable about PC gaming have no idea about OpenAL and what capabilities it has, and it's damn hard to actually find out. AFAIK they have done nothing to make people care about it.
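For anyone curious what OpenAL actually gives you out of the box, here is a minimal sketch of 3D positional playback using only core OpenAL 1.1 calls (error handling, PCM loading and cleanup are omitted; environmental reverb and occlusion filtering come via the EFX extension on top of this):

```c
/* Minimal OpenAL 1.1 positional-audio sketch (core API only).
 * Error handling, PCM loading and alcDestroyContext/alcCloseDevice
 * cleanup are omitted for brevity. */
#include <AL/al.h>
#include <AL/alc.h>

void play_positional(const short *pcm, ALsizei num_samples, ALsizei sample_rate)
{
    ALCdevice  *dev = alcOpenDevice(NULL);            /* default output device */
    ALCcontext *ctx = alcCreateContext(dev, NULL);
    alcMakeContextCurrent(ctx);

    ALuint buf, src;
    alGenBuffers(1, &buf);
    alBufferData(buf, AL_FORMAT_MONO16, pcm,
                 num_samples * (ALsizei)sizeof(short), sample_rate);

    alGenSources(1, &src);
    alSourcei(src, AL_BUFFER, buf);
    alSource3f(src, AL_POSITION, 5.0f, 0.0f, -2.0f);  /* place the source in 3D */
    alListener3f(AL_POSITION, 0.0f, 0.0f, 0.0f);      /* listener at the origin */

    alSourcePlay(src);   /* OpenAL pans and attenuates per the distance model */
}
```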
 
You can apply all sorts of processes to the audio, but to the end user it's not obvious. So, using your list, if we start with simple stereo sample playback in the 90s, we can add "reflection" in 1998, "refraction" in 2004, "diffraction" in 2007, "dispersion" in 2010, "propagation" in 2012, and look forward to "transmission" in 2013 next-gen games.

My point is that the current generation hasn't even reached 'Reflection' yet - Cerny hopes that PS4 will introduce an extremely crude approximation (the same quality as Doom 1 graphics) by using the CUs.

The differences some of these make to the audio aren't particularly obvious, unlike almost every visual effect we have. God rays, lens flare, subsurface scattering, screen-space ambient occlusion, SVO lighting - every step is obvious and contributes. Every step changes the graphics notably, giving developers and gamers an obvious sense of progress.

I respectfully disagree. None of your features is sea-changing by itself, but multiple different advancements will be noticeable. Not many will experience a sea-changing difference by going from a Blinn-Phong to a Cook-Torrance illumination model, nor would they from a spherical to a Lambert radiation model in the audio world. But multiple different changes would result in a noticeable difference in both cases.

You need a massive sea-change, a whole shift in the audio, for people to notice the difference and value it. Hell, many folk think better audio is just cranking up the bass!

And the sad part is that they're right! I have demonstrated in this very thread that current audio tech in games can't reproduce low-frequency content, so people have to compensate for the lack of low frequencies. That's how bad the current tech is - it's like having a graphics engine that can't reproduce red color information.

As for not requiring special hardware, ideal audio may well be doable in realtime on CPU, but the CPU already has its hands full with the rest of the game. Unless devs want to make sacrifices in more obvious places, dedicated hardware that is far more efficient is certainly going to help, or even be necessary, to encourage adoption of better audio.

More hardware is always better, but none of the next-gen hardware has the power needed to start emulating anything from the earlier list. The overall sound quality can easily be improved next gen, though, primarily because of the additional RAM and CPUs.
 
Because not everyone has the most powerful CPU. The best thing about PCs is that you can use anything to build them. The worst thing about PCs is that you can use anything to build them.

Regards,
SB

Not everyone has the best GPU, but that doesn't stop devs from presenting an array of options for various levels of GPUs. AA, AF, resolution, texture quality, reflections, draw distance, etc. The same could be done for audio, if anyone cared.
 
I respectfully disagree. None of your features is sea-changing by itself, but multiple different advancements will be noticeable.

For a demonstration of what relab is talking about, please watch the following two videos:

Without environmental audio modeling

With environmental audio modeling
 
Not everyone has the best GPU, but that doesn't stop devs from presenting an array of options for various levels of GPUs. AA, AF, resolution, texture quality, reflections, draw distance, etc. The same could be done for audio, if anyone cared.

No it couldn't. Any change in the level of graphics doesn't impact CPU usage significantly, AFAIK. And the CPU is important for everything non-GPU related in the game. Anything you change with regard to audio could then impact physics simulation, AI, game logic, etc. The same can't be said about the graphics options generally offered in games (although that isn't always the case).

As well, you ignore the most important thing of what I said. There is no standard API for advanced audio effects on PC. For graphics you have DirectX or OpenGL. For GPU compute you have Direct Compute or OpenCL. For physics you have Havok, PhysX, Bullet, etc. For audio you have? Not bloody much of anything.

History shows that when there was an open standard and a plethora of hardware devices that supported it, the games industry was very keen on exploring and exploiting the possibilities it offered. Once the standard was closed, the cheap hardware devices dried up, including onboard motherboard audio (meaning budget systems could no longer support the feature, as you then had to buy a relatively expensive audio card), and that was when support for audio modeling, environmental modeling, audio manipulation, etc. by game developers virtually ceased.

Regards,
SB
 
It's working for me; have you tried a different browser?
Edit: try this (it's missing the annotations)
Neither of those videos has stereo for me - all explosions are centred. This is better


It's clearly better audio than PS3's COD audio, but it's not particularly spatial. The audio has direction but not distance.
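"Direction but not distance" is usually a distance-model problem: the mixer pans sources but never scales gain with range. For reference, OpenAL's default inverse-distance-clamped model handles it like this (a sketch of the published formula; parameter names mirror OpenAL's source properties):

```c
/* OpenAL-style AL_INVERSE_DISTANCE_CLAMPED attenuation.
 * A mixer that only pans, and never applies a model like this,
 * gives you direction but no sense of distance. */
float inverse_distance_gain(float dist, float ref_dist,
                            float max_dist, float rolloff)
{
    if (dist < ref_dist) dist = ref_dist;   /* clamp the near field */
    if (dist > max_dist) dist = max_dist;   /* clamp the far field  */
    return ref_dist / (ref_dist + rolloff * (dist - ref_dist));
}
```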
 
Neither of those videos has stereo for me - all explosions are centred. This is better


It's clearly better audio than PS3's COD audio, but it's not particularly spatial. The audio has direction but not distance.
I clocked 26 hours or so in BF3 and it never sounded nearly as good as in the video (Xbox 360 version), even compared to the weakest part (the Dolby headphones, for me at least). The Creative X-Fi Titanium part was music to my ears, and the helicopter sounded like they were using Doppler sound effects (same for the quad bike and the jet plane!). Wow, just wow.

On a different note (semi-OT): Ray Dolby, the father of noise reduction and audio innovator, died a few days ago. R.I.P. His sound changed the movies, the video games, everything.

http://www.washingtonpost.com/enter...7f52c2-1cae-11e3-a628-7e6dde8f889d_story.html

http://www.techradar.com/news/audio/6-ways-ray-dolby-changed-the-way-the-world-listened-1180930

Any chance for the next gen consoles to support Dolby Atmos?
 
Turns out the SHAPE is not exactly what you guys had in mind...

"The audio block is completely unique. That was designed by us in-house. It's based on four tensilica DSP cores and several programmable processing engines. We break it up as one core running control, two cores running a lot of vector code for speech and one for general purpose DSP. We couple that with sample rate conversion, filtering, mixing, equalisation, dynamic range compensation then also the XMA audio block. The goal was to run 512 simultaneous voices for game audio as well as being able to do speech pre-processing for Kinect."

I am pretty sure at this point that the actual game-audio processing is nothing to write home about, and most certainly not "2 CU's worth of TLoU audio"
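To put "512 simultaneous voices" plus the sample rate conversion/filtering/mixing blocks in perspective, the core of a software voice mixer looks roughly like the sketch below; a fixed-function block does this (plus per-voice SRC, filtering and EQ) without spending CPU time on it. The names and structure here are hypothetical and are not how SHAPE works internally.

```c
/* Illustrative software voice mixer: accumulate active voices into one
 * output block. Structure and names are hypothetical. */
#include <stddef.h>

typedef struct {
    const float *samples;   /* decoded PCM for this voice */
    size_t       cursor;    /* current read position */
    size_t       length;    /* total samples available */
    float        gain;      /* per-voice volume */
    int          active;
} Voice;

void mix_block(Voice *voices, size_t num_voices, float *out, size_t frames)
{
    for (size_t i = 0; i < frames; i++)
        out[i] = 0.0f;

    for (size_t v = 0; v < num_voices; v++) {
        Voice *vc = &voices[v];
        if (!vc->active)
            continue;
        for (size_t i = 0; i < frames && vc->cursor < vc->length; i++)
            out[i] += vc->samples[vc->cursor++] * vc->gain;
        if (vc->cursor >= vc->length)
            vc->active = 0;   /* voice exhausted */
    }
}
```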
 
There is no standard API for advanced audio effects on PC. For graphics you have DirectX or OpenGL. For GPU compute you have Direct Compute or OpenCL. For physics you have Havok, PhysX, Bullet, etc. For audio you have? Not bloody much of anything.

Well, there is OpenAL, but as I said earlier they have failed miserably to make people care about it.
(I've been contemplating writing a sort of open letter to them for years.)
As an example, try to find out what capabilities OpenAL has: it's really hard, almost like they are hiding them.
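As a small illustration of how much digging is needed: the most reliable way to find out what a given OpenAL implementation supports is to query it yourself. Something like this (standard AL/ALC queries; the EFX check tells you whether environmental effects such as reverb and occlusion filters are available):

```c
/* Query an OpenAL implementation's capabilities at runtime.
 * Error handling and context cleanup omitted for brevity. */
#include <stdio.h>
#include <AL/al.h>
#include <AL/alc.h>

void print_openal_caps(void)
{
    ALCdevice  *dev = alcOpenDevice(NULL);
    ALCcontext *ctx = alcCreateContext(dev, NULL);
    alcMakeContextCurrent(ctx);

    printf("Renderer:   %s\n", alGetString(AL_RENDERER));
    printf("Version:    %s\n", alGetString(AL_VERSION));
    printf("Extensions: %s\n", alGetString(AL_EXTENSIONS));
    printf("EFX (environmental effects): %s\n",
           alcIsExtensionPresent(dev, "ALC_EXT_EFX") ? "yes" : "no");
}
```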
 
Turns out the SHAPE is not exactly what you guys had in mind...



I am pretty sure at this point that the actual game-audio processing is nothing to write home about, and most certainly not "2 CU's worth of TLoU audio"

I would draw exactly the opposite conclusion from you. This only solidifies what has been discussed here, which was that it is a powerful bit of audio hardware and should clearly alleviate or remove the audio-processing overhead from the CPU.
 
No it couldn't. Any change in the level of graphics doesn't impact CPU usage significantly, AFAIK. And the CPU is important for everything non-GPU related in the game. Anything you change with regard to audio could then impact physics simulation, AI, game logic, etc. The same can't be said about the graphics options generally offered in games (although that isn't always the case).

As well, you ignore the most important thing of what I said. There is no standard API for advanced audio effects on PC. For graphics you have DirectX or OpenGL. For GPU compute you have Direct Compute or OpenCL. For physics you have Havok, PhysX, Bullet, etc. For audio you have? Not bloody much of anything.

History shows that when there was an open standard and a plethora of hardware devices that supported it, the games industry was very keen on exploring and exploiting the possibilities it offered. Once the standard was closed, the cheap hardware devices dried up, including onboard motherboard audio (meaning budget systems could no longer support the feature, as you then had to buy a relatively expensive audio card), and that was when support for audio modeling, environmental modeling, audio manipulation, etc. by game developers virtually ceased.

Regards,
SB

Well, there have been attempts at audio standards, but ultimately no one cares and they flounder or die, which is kind of my point. The reason you see so much work in graphics and physics is that you can see the results. Demand drives supply, and there is simply little demand. Most people have ghetto audio systems because they don't care. Stereo is good enough for the large majority of gamers; hell, most of them chat over the game audio, right?

Isn't DirectSound a part of DirectX to this day? So we have one standard alive, but it is so important you forgot it exists? ;)
 
Turns out the SHAPE is not exactly what you guys had in mind...
I am pretty sure at this point that the actual game-audio processing is nothing to write home about, and most certainly not "2 CU's worth of TLoU audio"

I'm 100% certain that you're 100% wrong.
 
Have you read any of the contributions by bkilian on this subject?

Yes, but I probably read over those parts, and thus still perceived SHAPE (it's like the developers were shouting when they came up with the name, btw :( ) as something that could alleviate the big performance gap between the two next-generation consoles, because of statements like "worst case it helps close the computation gap between the PS4 and Xbox One. Best case we may get some advancement in audio."

Bkilian:

The Audio processor was originally devised to be able to offload Kinect Audio processing, and the chip designers came to the audio team and said "We have a bunch of extra transistors we can throw in for free, what would you like them to do?" or something close to that. The SHAPE block was the result of that conversation.

It's not news at all. It's not big, and most of the audio block is reserved for the system.

With "nothing to write home about" I meant: if half of it is Kinect and the other half is... 50% scheduling? It's possible that they just put in a relatively standard component as an extra core. And by standard I mean something like the integrated audio chip you get on a 60-dollar motherboard these days: even the cheapest ones can do HRTF 3D positional audio, which would probably be received as a very big thing over here.

We need to get Cerny on the line, because I am pretty sure that the PS4 does have game-centered audio hardware. I'm also pretty sure that if he had four audio cores at his disposal, he would have used all of them for game audio and none of them for voice recognition. I base this on the design mantra of both consoles.
 