Ray Casting for Audio - Audio tech patent...(Used in PS4)

Just heard of this today after going through Mark Cerny's talk again... most devs on here are probably aware of it...

But whoa! For a first-timer encountering this tech, this is rather cool... I just thought it was 5.1 surround processing and that was it... or is this a derivative of digital surround decoding? I dunno...

http://www.google.com/patents/US8139780

"One technique of simulating sounds within a three-dimensional scene may be to calculate the resulting sound at the location of the listener due to the propagation of sound waves throughout the three-dimensional scene. A sound wave is a longitudinal wave produced by variations in a medium (e.g., air) which is detected by the human ear. A game system may calculate the effects of all objects on all sound waves (e.g., collisions, constructive/destructive interference, etc.) as they travel through three-dimensions. Furthermore, the sound engine may calculate the resulting sound due to a number of sound waves which reach the listener location. However, calculating the propagation of sound waves throughout the three-dimensional scene interference may result in a large number of complex calculations (e.g., differential equations) which a game system may be unable to perform in a time frame necessary to provide real time sound effects."

[image: UmpOi.gif]
 
The patent doesn't seem to say whether it's done on the GPU or the CPU. Did Mark Cerny mention anything about it?

Cool gif btw!
 
Yeah, it is mentioned here and there as an example of what to use CU compute jobs for.

It sounds a lot like Naughty Dog's tech for Uncharted 2, which had occlusion logic for sounds using raycasting on the SPUs. This seems to go one step further, creating one or more echoes. Not necessarily mind-blowing, but could still be pretty cool.
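For the "one step further" echo part, one classic way to generate a single reflection is the image-source method: mirror the source across a wall and treat the mirror point as a second, delayed source. A sketch of that idea (my choice of technique, not necessarily what Naughty Dog or Sony actually do; the 1/(1+distance) gain is made up):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air

def mirror_point(p, plane_point, plane_normal):
    """Reflect point p across a plane given by a point and a unit normal."""
    d = sum((pi - qi) * ni for pi, qi, ni in zip(p, plane_point, plane_normal))
    return tuple(pi - 2 * d * ni for pi, ni in zip(p, plane_normal))

def first_order_echo(source, listener, wall_point, wall_normal):
    """Delay (seconds) and rough gain of the single reflection off one wall.

    The reflected path length equals the straight-line distance from the
    mirrored ("image") source to the listener.
    """
    image = mirror_point(source, wall_point, wall_normal)
    path = math.dist(image, listener)
    return path / SPEED_OF_SOUND, 1.0 / (1.0 + path)
```

Higher-order echoes come from mirroring the image sources again across other walls, which is where the cost starts to grow.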
 
How does this differ from wave tracing, pioneered by Aureal and later used by Creative? There the room geometry is sent to the DSP; with A3D 2.0 the sound wave would be traced through up to 64 reflections (don't know if Creative increased that amount). It would also deal with surface materials, occlusion and obstruction, even air density (good for underwater).
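That A3D-style reflection tracing could look roughly like this: follow a ray through a set of surfaces, reflecting at each hit and losing energy to material absorption. A sketch under my own assumptions (the plane representation and absorption model are illustrative; only the 64-bounce cap comes from the post above):

```python
import math

def reflect(d, n):
    """Reflect direction d off surface normal n (both unit vectors)."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

def trace_path(origin, direction, planes, max_bounces=64):
    """Follow one ray through (point, normal, absorption) planes, reflecting
    at each hit, up to max_bounces (A3D 2.0 reportedly traced up to 64).
    Returns total path length and remaining energy after absorption."""
    pos, d = origin, direction
    length, energy = 0.0, 1.0
    for _ in range(max_bounces):
        hit = None
        for point, normal, absorption in planes:
            denom = sum(di * ni for di, ni in zip(d, normal))
            if abs(denom) < 1e-9:
                continue  # ray runs parallel to this plane
            t = sum((pi - qi) * ni for pi, qi, ni in zip(point, pos, normal)) / denom
            if t > 1e-6 and (hit is None or t < hit[0]):
                hit = (t, normal, absorption)   # keep the nearest hit
        if hit is None:
            break  # ray escaped the scene
        t, normal, absorption = hit
        pos = tuple(p + t * di for p, di in zip(pos, d))
        d = reflect(d, normal)
        length += t
        energy *= 1.0 - absorption
    return length, energy
```

The surface-material handling mentioned above would live in the per-plane absorption value (here a single scalar; a real system would make it frequency-dependent).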
 
It sounds cool, but I bet that, in reality, the GIF that was posted is way cooler!
 
Yeah, it is mentioned here and there as an example of what to use CU compute jobs for.

It sounds a lot like Naughty Dog's tech for Uncharted 2, which had occlusion logic for sounds using raycasting on the SPUs. This seems to go one step further, creating one or more echoes. Not necessarily mind-blowing, but could still be pretty cool.

In TLoU, if you don't have line of sight with whatever is making a noise or speaking, the sound gets muffled. I guess this was also in Uncharted?

IRL obviously it's not that simple, and sounds reverberate off surfaces, so you hear the sound itself plus the echoes, all based on the size, shape, and materials of the room. A small detail to most, but a tangible step towards better immersion... especially if VR headset use takes off.
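The muffling described above is commonly implemented as a low-pass filter plus attenuation that kicks in when a raycast reports the source is occluded. A toy sketch (the filter coefficient, the 0.5 level drop, and the function names are all made up for illustration):

```python
def one_pole_lowpass(samples, alpha):
    """Simple one-pole low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1]).
    Smaller alpha cuts more high frequencies, i.e. sounds more 'muffled'."""
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def render_source(samples, occluded):
    """Hypothetical mixer hook: when a raycast says the source is occluded,
    damp the highs and drop the level, TLoU-muffling style."""
    if occluded:
        return [0.5 * s for s in one_pole_lowpass(samples, 0.2)]
    return samples
```

A real engine would crossfade between the dry and filtered signals as occlusion changes, rather than switching instantly.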
 
How does this tech differ from Dolby Surround? As in the calculation of vector paths, etc. ... Dolby can't do that, surely?
Also... didn't realize that collision detection is used for audio, but reading about it, it makes sense: something collides, audio coincides.

This bodes well for more immersion (audio-wise) next gen. Being a sucker for sounds that flit left and right and back and forth whilst battling a dragon... I'm intrigued...
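On the collision-detection point above: a common pattern is to map a collision event's materials and impulse to a sound cue and a volume. A hypothetical sketch (the cue names, table shape, and the 10.0 impulse scale are invented for illustration):

```python
def impact_sound(material_a, material_b, impulse, cue_table, full_impulse=10.0):
    """Pick a sound cue for a collision and scale its volume by impulse.

    cue_table maps an unordered material pair to a cue name, so
    wood-on-stone and stone-on-wood resolve to the same cue.
    """
    key = frozenset((material_a, material_b))
    cue = cue_table.get(key, "default_thud")
    volume = min(impulse / full_impulse, 1.0)  # clamp to full volume
    return cue, volume
```

The physics engine already computes the impulse for the collision response, so the audio side gets "how hard" essentially for free.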
 
Yeah, it is mentioned here and there as an example of what to use CU compute jobs for.

It sounds a lot like Naughty Dog's tech for Uncharted 2, which had occlusion logic for sounds using raycasting on the SPUs. This seems to go one step further, creating one or more echoes. Not necessarily mind-blowing, but could still be pretty cool.
I think this is a compelling argument.

I will say now that I highly doubt developers are going to use this feature much if they want to spend the extra compute on other things. However, if it is used, I'd be hard pressed not to love it.
 