Let's forget video, what about the other half?

Kryton

It struck me (as ideas normally do) whilst trying to get to sleep last night that we have all probably heard far too much fact and speculation about the GPU and the CPU, but not a great deal about what's left. Specifically, I am referring to audio.

With all the talk brandished about being able to "use all 7 SPUs", and on the X360 only having "total control of core0", I wondered, more on the PS3 side of things: where does all the audio processing go? For the X360 this is an almost rhetorical question, as I recall it being answered (it runs on core0), but inevitably I forgot the source of such divine information (like most people here tend to; actually it was a friend of a friend... you know the routine).

We have here, ladies and gents, half the picture (or a third, but as the Revolution is using a far more traditional architecture, I assume it will have a traditional solution). So what about the other half: where and how does audio fit into the SPU paradigm? Surely, if it runs on the SPUs, we have a far more limited system (perhaps the 4-SPU claim referred to only being able to fully use 4? This too seems a neglected issue in the voodoo 'calculations' I often see here)? Or is it magically tidied away under the OS catch-all clause (let's discuss that too: how much power will the OS occupy on PS3)?

So B3D, I ask: how will the PS3 handle audio (devs please reply or, at least, give us some more ketchup for our steak pies, so I have an excuse to eat them)? Let the hyperbole begin...
 
Kryton said:
It struck me (as ideas normally do) whilst trying to get to sleep last night that we have all probably heard far too much fact and speculation about the GPU and the CPU, but not a great deal about what's left. Specifically, I am referring to audio.

With all the talk brandished about being able to "use all 7 SPUs", and on the X360 only having "total control of core0", I wondered, more on the PS3 side of things: where does all the audio processing go? For the X360 this is an almost rhetorical question, as I recall it being answered (it runs on core0), but inevitably I forgot the source of such divine information (like most people here tend to; actually it was a friend of a friend... you know the routine).

We have here, ladies and gents, half the picture (or a third, but as the Revolution is using a far more traditional architecture, I assume it will have a traditional solution). So what about the other half: where and how does audio fit into the SPU paradigm? Surely, if it runs on the SPUs, we have a far more limited system (perhaps the 4-SPU claim referred to only being able to fully use 4? This too seems a neglected issue in the voodoo 'calculations' I often see here)? Or is it magically tidied away under the OS catch-all clause (let's discuss that too: how much power will the OS occupy on PS3)?

So B3D, I ask: how will the PS3 handle audio (devs please reply or, at least, give us some more ketchup for our steak pies, so I have an excuse to eat them)? Let the hyperbole begin...


Hahaha... audio utilizing half the CPU resources?! That is ridiculous, first of all. And second of all, with CELL I would assume audio would just be another thread. Because you're not actually programming down at the metal, CELL will handle audio as if it were just more game code; the resources necessary will be dynamically allocated based on the situation.
 
Is there any system that offers more than 5.1 sound support, i.e. 6.1 or 7.1? Or is this pointless if games will need to be specifically written for 5.1 sound?
 
Audio and SPUs ought to be a good match. I'm not sure whether you'd allocate an entire SPU to sound or not, but it seems unlikely that you'd need more than one.

A PS2 could encode multichannel audio using only a portion of VU0 and EE time (i.e. not so much that you can't run a game at the same time) - a single SPU is much more capable than that, both in terms of streaming data and in terms of computational ability.
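To put something concrete behind "a single SPU is much more capable", the core of a software multichannel mixer is a very simple loop. Below is a minimal sketch in plain C of the kind of kernel an SPU might run when mixing game voices into a 5.1 stream; the names and channel ordering are illustrative assumptions, not from any actual SDK, and a real SPU version would use SIMD intrinsics and double-buffered DMA rather than scalar loads.

```c
#include <stddef.h>

#define OUT_CHANNELS 6  /* 5.1: FL, FR, C, LFE, RL, RR (ordering assumed) */

/* Mix one mono voice into an interleaved multichannel buffer,
 * applying a per-output-channel gain. Panning and surround
 * placement are typically expressed as this gain vector. */
static void mix_voice(float *out, const float *voice,
                      size_t frames, const float gain[OUT_CHANNELS])
{
    for (size_t i = 0; i < frames; i++)
        for (int ch = 0; ch < OUT_CHANNELS; ch++)
            out[i * OUT_CHANNELS + ch] += voice[i] * gain[ch];
}
```

Per voice this is a handful of multiply-accumulates per sample, which is why even a fraction of VU0 could keep up on PS2, and why one SPU has headroom to spare for effects on top of plain mixing.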
 
MrWibble said:
Audio and SPUs ought to be a good match. I'm not sure whether you'd allocate an entire SPU to sound or not, but it seems unlikely that you'd need more than one.

A PS2 could encode multichannel audio using only a portion of VU0 and EE time (i.e. not so much that you can't run a game at the same time) - a single SPU is much more capable than that, both in terms of streaming data and in terms of computational ability.

Depends on what and how much compression you're using on the source data.
I can't think of anything else in an audio pipeline that is likely to eat a lot of CPU.
 
What about synthetic sound generation using acoustic wave modelling and the like? I sometimes find games have an annoying amount of repetition in sound effects, such as three different footstep sounds so you hear 'clonk clunk clonk clonk thunk clonk clunk clonk thunk thunk clonk'. It'd be nice if there were some acoustic mixing-up, perhaps realtime morphing between a few sounds to generate new sounds, or synthesizing new sounds on the fly. I'm sure if someone had a mind to it they could readily gobble up any number of processor cycles on audio!
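A cheap version of that realtime morphing is just a per-playback crossfade between two source samples with a randomised mix factor, so no two footsteps are bit-identical. A hypothetical sketch in plain C (the function names are invented here; proper acoustic wave modelling would synthesise the waveform from a physical model instead):

```c
#include <stddef.h>
#include <stdlib.h>

/* Blend two equal-length samples: t = 0 gives pure a, t = 1 pure b. */
static void morph_samples(float *out, const float *a, const float *b,
                          size_t n, float t)
{
    for (size_t i = 0; i < n; i++)
        out[i] = (1.0f - t) * a[i] + t * b[i];
}

/* On each footstep, pick a random blend so repeats sound varied. */
static void play_footstep(float *out, const float *a, const float *b,
                          size_t n)
{
    float t = (float)rand() / (float)RAND_MAX;
    morph_samples(out, a, b, n, t);
}
```

Linear blending only works well between sounds that are already similar (two footstep takes, say); morphing dissimilar sounds convincingly needs spectral techniques, which is exactly where spare SPU cycles could go.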
 
Shifty Geezer said:
What about synthetic sound generation using acoustic wave modelling and the like? I sometimes find games have an annoying amount of repetition in sound effects, such as three different footstep sounds so you hear 'clonk clunk clonk clonk thunk clonk clunk clonk thunk thunk clonk'. It'd be nice if there were some acoustic mixing-up, perhaps realtime morphing between a few sounds to generate new sounds, or synthesizing new sounds on the fly. I'm sure if someone had a mind to it they could readily gobble up any number of processor cycles on audio!

Would probably be doable, but I don't really expect a significant change from the way we currently approach sound.

I know at least some current gen games use acoustic models for things like car engines.
 
weaksauce said:

Right, where to begin...

I understand where KK is coming from here. The Source engine shows what he means by 'sound as an object', with all the cool stuff they have done with soundscapes etc., especially in DoD:S with the battlefield effects.

What I don't understand is the concept of an SPU and code dealing with the manipulation of sounds. A 'context switch' on an SPU is, if I understand correctly, an expensive operation, so swapping out whatever-is-running for sound-processing code is surely inefficient? Then you have the latency of initialising the local store in such a switch, which I believe to be phenomenal in processing terms (though insignificant in sound terms). So, using the same example, if we want effects like DoD:S's, must an entire SPU be given up to save on continuously swapping code in and out?

With 5.1 sound and the variety of samples required to provide a decent level of immersiveness (unless procedural generation is used), is the 256 KB local store simply too limited to handle such manipulations? The streaming nature of an SPU may alleviate some of the burden but, if multiple samples are used, does it solve the issue?
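The local-store worry can be put in numbers. One second of 48 kHz, 16-bit mono PCM is 96,000 bytes, so 256 KB holds under three seconds of a single uncompressed stream before code, stack, and mix buffers are even counted; streaming small double-buffered chunks is the usual answer. A back-of-the-envelope sketch (the figures are common audio assumptions, not PS3 specifics):

```c
enum { LOCAL_STORE = 256 * 1024 };  /* SPU local store, in bytes */

/* Bytes per second of mono PCM at the given rate and sample width. */
static int pcm_bytes_per_second(int sample_rate, int bytes_per_sample)
{
    return sample_rate * bytes_per_sample;
}

/* Resident bytes for `voices` streams, each double-buffered in
 * chunks of `chunk_samples` samples: one buffer being mixed while
 * a DMA transfer refills the other. */
static int stream_working_set(int voices, int chunk_samples,
                              int bytes_per_sample)
{
    return voices * chunk_samples * bytes_per_sample * 2;
}
```

With 512-sample chunks (about 10.7 ms at 48 kHz), even 64 simultaneous 16-bit voices need only 128 KB of resident buffers, half the local store; that suggests the store is workable for mixing many samples, provided they are streamed rather than held whole.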
 
Shifty Geezer said:
What about synthetic sound generation using acoustic wave modelling and the like? I sometimes find games have an annoying amount of repetition in sound effects, such as three different footstep sounds so you hear 'clonk clunk clonk clonk thunk clonk clunk clonk thunk thunk clonk'. It'd be nice if there were some acoustic mixing-up, perhaps realtime morphing between a few sounds to generate new sounds, or synthesizing new sounds on the fly. I'm sure if someone had a mind to it they could readily gobble up any number of processor cycles on audio!

Wouldn't supporting something like the Aureal 3.0 sound engine be more advanced than the current sound modelling used in most games?

They had a DSP (I think; or was it Creative that had the DSP? Anyway, Aureal's was better) running at around 100 MHz. So one SPE should provide us with quite advanced sound modelling.
 
Kryton said:
A 'context switch' on an SPU is, if I understand correctly, an expensive operation, so swapping out whatever-is-running for sound-processing code is surely inefficient? Then you have the latency of initialising the local store in such a switch, which I believe to be phenomenal in processing terms (though insignificant in sound terms). So, using the same example, if we want effects like DoD:S's, must an entire SPU be given up to save on continuously swapping code in and out?
You don't need to continuously swap from doing audio to doing something else. When it comes to generating sound, you set an SPE to the job, and if it finishes before the end of the frame (very likely in most situations) you set it doing something else.
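That per-frame pattern can be sketched with a simple budget model: the audio job runs first because it has a hard deadline, then other work fills whatever time remains in the frame. All numbers and names below are invented for illustration; a real Cell scheduler would dispatch SPE jobs via runtime libraries rather than call functions directly.

```c
/* Run one simulated SPE frame: the audio job always executes, then
 * queued jobs run in order while their estimated cost still fits
 * inside the frame budget. Returns the number of jobs completed. */
static int run_spe_frame(int frame_budget_us, int audio_cost_us,
                         const int *job_costs, int n_jobs)
{
    int used = audio_cost_us;  /* audio first: it has a hard deadline */
    int done = 1;
    for (int i = 0; i < n_jobs; i++) {
        if (used + job_costs[i] > frame_budget_us)
            break;             /* leave the rest for the next frame */
        used += job_costs[i];
        done++;
    }
    return done;
}
```

At 60 fps the budget is roughly 16.7 ms; if audio mixing takes only a millisecond or two of that, the same SPE spends most of each frame on other work, which is why no dedicated "audio SPU" need be sacrificed.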
 