Technical investigation into PS4 and XB1 audio solutions *spawn

Is environmentally modeled audio like 3D audio? If I remember right, Sucker Punch said something about environmentally modeled audio, like placing emitters in various objects, something like HDR audio. SHAPE wouldn't be able to handle that?
 
So is there enough power to do great game audio on SHAPE without X1 having to hit the Jaguar CPU?

If 'great' equals environmental modeling/reverberation systems (or any other function than basic IIR filtering and mix/volume), then no - BUT it's not a question about power but functionality. You could minimize the CPU/CU usage with pre-computed data.
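To make "basic IIR filtering and mix/volume" concrete, here is a minimal sketch of that class of per-voice work (a one-pole IIR low-pass plus a gain/mix stage). This is only an illustration of the kind of fixed function being described, not SHAPE's actual implementation:

```python
def one_pole_lowpass(samples, a):
    """Basic IIR filter: y[n] = (1 - a) * x[n] + a * y[n-1], with 0 <= a < 1."""
    out, y = [], 0.0
    for x in samples:
        y = (1.0 - a) * x + a * y
        out.append(y)
    return out

def mix(voices, gains):
    """Apply a per-voice gain, then sum all voices into one output buffer."""
    length = len(voices[0])
    return [sum(g * v[i] for v, g in zip(voices, gains)) for i in range(length)]
```

Everything beyond this level (reverberation, environmental modeling) is where the CPU/CUs come in.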

Forza uses 1+ Jaguar cores for convolution etc., since SHAPE doesn't have those functions.

EDIT: Duh - too slow. Davros already answered.
 
Last edited by a moderator:
If 'great' equals environmental modeling/reverberation systems (or any other function than basic IIR filtering and mix/volume), then no - BUT it's not a question about power but functionality. You could minimize the CPU/CU usage with pre-computed data.

Forza uses 1+ Jaguar cores for convolution etc., since SHAPE doesn't have those functions.

Is there a link on Forza's usage? I thought SHAPE was handling everything?

So if I'm understanding it correctly, the main difference between SHAPE and the PS4's audio chip is that SHAPE can do filtering and mixing while on PS4 the CPU has to handle it? But for stuff like 3D audio, reverberation, audio occlusion and other things, they both need to make use of the CPU/CUs?

But isn't filtering/mixing a really big deal?
 
From his interview, Cerny's gamble is that future games will balance workloads between CPU and GPGPU, including audio work, ray casting, and things that are more efficient on a GPU. They claim to have modified the GPU specifically for that. The problem is that none of the launch games do yet.

It'll be interesting if GPGPU ends up widely used for audio. It would seem like an opportunity, but who knows. It's interesting that the Sony slide says "presumably on the CPU" or words to that effect. It might not be super easy to put audio on the GPU.

That's why I find it hilarious when we're supposed to deduce the console's power from launch games. PS3 launch games didn't look very good in comparison to the ones that came later down the road, and the same was true of PS2 games.

Same for every console...
 
If 'great' equals environmental modeling/reverberation systems (or any other function than basic IIR filtering and mix/volume), then no - BUT it's not a question about power but functionality. You could minimize the CPU/CU usage with pre-computed data.

Forza uses 1+ Jaguar cores for convolution etc., since SHAPE doesn't have those functions.

EDIT: Duh - too slow. Davros already answered.

Do you have a link about Forza?

Basically, from a lay perspective it seemed like "SHAPE is this awesome sound chip that will handle everything and give you awesome game sound for free", and now it seems a little like it may have been nerfed (in terms of reserved cores) for the sake of that bane of Xbox, Kinect. It kind of makes me mad whenever I hear of an engineering concession made for Kinect; it was mentioned a few times in the recent Eurogamer article too ("well, this thing really helped with Kinect"). Bleh.
 
It'll be interesting if GPGPU ends up widely used for audio. It would seem like an opportunity, but who knows. It's interesting that the Sony slide says "presumably on the CPU" or words to that effect. It might not be super easy to put audio on the GPU.
Yeah, I'm really interested to see if it's possible to do convolution reverb via GPGPU; that would be a big thing if it can be efficient. It would be purely a middleware job, so no impact on game dev time. The kind of thing that can go in a useful library, nothing game-specific.
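For reference, the core operation of convolution reverb is just the dry signal convolved with a room impulse response. The naive direct form below is only an illustration; a real GPU implementation would parallelise this (usually via FFT-based convolution) rather than run the double loop:

```python
def convolve(dry, ir):
    """Direct-form convolution of a dry signal with an impulse response (IR)."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for n, x in enumerate(dry):
        # Each input sample triggers a scaled copy of the whole IR.
        for k, h in enumerate(ir):
            out[n + k] += x * h
    return out
```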
 
And then, what are the "pros" of SHAPE if the Xbox One must do sound effects on the CPU too?

You don't have to do any of the fixed functions it provides on the CPU. You get decoding, sample rate conversion, mixing, filtering, volume, compression, and equalization on each voice without having to touch the CPU. Those are pretty much the most basic things every game engine is going to have to do to every single sound.
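As one more concrete example of those per-voice fixed functions, here is a toy sample rate converter using linear interpolation. SHAPE would do this in hardware; this sketch only shows what "SRC on every voice" means:

```python
def resample_linear(samples, ratio):
    """Resample by linear interpolation. ratio = src_rate / dst_rate,
    e.g. ratio 2.0 roughly halves the number of output samples."""
    out, pos = [], 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # Blend the two neighbouring input samples.
        out.append((1.0 - frac) * samples[i] + frac * samples[i + 1])
        pos += ratio
    return out
```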
 
Is environmentally modeled audio like 3D audio? If I remember right, Sucker Punch said something about environmentally modeled audio, like placing emitters in various objects, something like HDR audio. SHAPE wouldn't be able to handle that?
The XB1 as a whole could, but some of the calculations will have to be done on the CPU.

Is environmentally modeled audio like 3D audio?
I'll have a go at explaining it.

Imagine you and me at different ends of a very large room.
I start firing a gun. Some of that sound will travel straight to your ears and will not need any processing at all. But some of the sound will hit the back wall (well, it will hit everything in the room apart from the bits that are occluded). Depending on what it's made of, the wall will absorb some of that sound (some of it will travel through as well), so the reflected sound is lowered in volume. But it's not that simple, because different materials (wood/concrete etc.) absorb some frequencies more than others, so you have to calculate that effect as well. Now the reflected sound may then go straight to your ears, or only part of it may, or it could hit something else (another wall, the ceiling, a carpeted floor, a statue in the room).
The Vortex 2 chip would calculate up to 64 reflections for each sound (mentioned just because I remember that spec).
So one gunshot creates a sound that travels outward in all directions, including up and down, and gets split up depending on what it hits and the direction in which it gets reflected (one sound wave becomes many). Every time it gets reflected it gets altered, and you also have to track the distance travelled: if part of the sound has to travel an extra 34 metres, you will hear that part of the gunshot a tenth of a second later.
Now, with SHAPE, can you give it the level geometry, a list of materials and their acoustic properties, and the positions of the sound source and the listener, and have it work out exactly how to alter all the sounds so they are correct, without getting the CPU to do some of the work for it? I don't think so, not because it lacks the power but because it wasn't designed to do everything on its own.
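The per-path bookkeeping described above can be sketched in a few lines, assuming the usual ~340 m/s speed of sound (which is where the "34 metres extra means a tenth of a second later" figure comes from) and a single absorption coefficient per material:

```python
SPEED_OF_SOUND = 340.0  # m/s, approximate

def reflection_params(extra_distance_m, material_absorption):
    """For one reflection path: delay from extra travel distance,
    gain from the fraction of energy the material reflects."""
    delay_s = extra_distance_m / SPEED_OF_SOUND
    gain = 1.0 - material_absorption
    return delay_s, gain
```

A real system would do this per frequency band (since absorption is frequency-dependent, as noted above) and for dozens of paths per source.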
 
Is there any way to try this GPU reverb and report utilisation? Edit: Duh, it has utilisation feedback, averaging ~6% and peaking at 12% of his 9800 GT, although I'd want calibration for accuracy.

 
Someone posted a presentation by Intel (sorry, I can't remember the thread), and part of it was Intel doing audio via OpenCL. It was on the CPU, but as OpenCL can also run on GPUs, there is hope.
 
I'm no audiophile, but the reverb there sounds great. The extra CUs on the PS4 should be able to do some nifty stuff.
 
The XB1 as a whole could, but some of the calculations will have to be done on the CPU.


I'll have a go at explaining it.

Imagine you and me at different ends of a very large room.
I start firing a gun. Some of that sound will travel straight to your ears and will not need any processing at all. But some of the sound will hit the back wall (well, it will hit everything in the room apart from the bits that are occluded). Depending on what it's made of, the wall will absorb some of that sound (some of it will travel through as well), so the reflected sound is lowered in volume. But it's not that simple, because different materials (wood/concrete etc.) absorb some frequencies more than others, so you have to calculate that effect as well. Now the reflected sound may then go straight to your ears, or only part of it may, or it could hit something else (another wall, the ceiling, a carpeted floor, a statue in the room).
The Vortex 2 chip would calculate up to 64 reflections for each sound (mentioned just because I remember that spec).
So one gunshot creates a sound that travels outward in all directions, including up and down, and gets split up depending on what it hits and the direction in which it gets reflected (one sound wave becomes many). Every time it gets reflected it gets altered, and you also have to track the distance travelled: if part of the sound has to travel an extra 34 metres, you will hear that part of the gunshot a tenth of a second later.
Now, with SHAPE, can you give it the level geometry, a list of materials and their acoustic properties, and the positions of the sound source and the listener, and have it work out exactly how to alter all the sounds so they are correct, without getting the CPU to do some of the work for it? I don't think so, not because it lacks the power but because it wasn't designed to do everything on its own.
I don't understand the last sentence. I mean, if it is an order of magnitude (which means ten to 99 times) more capable than the X-Fi, how is that possible when you are using a specialized piece of sound hardware in both cases? Isn't Vortex 2 replicable in any way?

I'd elaborate but I gotta go.
 
I would imagine that SHAPE wouldn't do the hit detection; it would only do the part where you hand it the parameters produced by the hits. The CUs would be better for the raycasting/hit-detection part, and then the relevant hits would be given to SHAPE with the right parameters to modify the original sound sample.

I could definitely see an advantage in that.
 
The reflection/raycasting engine will produce an echogram; you can then use either convolution (CUs/CPU) or a recursive delay-line approximation (CPU). SHAPE can't help with that. It only has basic filtering.
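To illustrate the "recursive delay-line approximation" route: the building block of classic (Schroeder-style) reverbs is a feedback comb filter, which recirculates the signal through a short delay line. This sketch is only meant to show the technique's shape, not any console's implementation:

```python
def feedback_comb(samples, delay, feedback):
    """Feedback comb filter: each sample is fed back into the signal
    `delay` samples later, scaled by `feedback` (|feedback| < 1)."""
    buf = [0.0] * delay  # circular delay line
    out = []
    for i, x in enumerate(samples):
        y = x + feedback * buf[i % delay]
        buf[i % delay] = y  # store the output for reuse `delay` samples later
        out.append(y)
    return out
```

This is far cheaper than full convolution with a measured impulse response, which is the trade-off the post above is describing.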
 
The reflection/raycasting engine will produce an echogram; you can then use either convolution (CUs/CPU) or a recursive delay-line approximation (CPU). SHAPE can't help with that. It only has basic filtering.

Do you have a link to SHAPE documentation?
 