Technical investigation into PS4 and XB1 audio solutions *spawn

If HRTF demo material hasn't worked well for you, try another headphone style and see if that makes a difference.

That's probably some good advice. I've long lusted after custom molded in-canal phones, but never actually owned any. I imagine any good quality "ear bud" might be worth a shot, however.

However, I have on occasion, albeit rarely, experienced good HRTF "believability". (That YouTube demo that I linked works for me, for instance.) And that's with regular ol' headphones. So, I do think my particular problem can probably be largely "fixed in software". My HRTF probably just needs to be tweaked a bit to seem more convincing to me. (I've been hoping that I will eventually learn to better "interpret" the HRTF that my headphone processor uses. However, I've been gaming with this setup for well over a year now, and I'm not noticing much improvement. Then again, maybe it has snuck up on me, too gradually to notice!)

I agree that speakers have a lot to recommend them.
 
Doesn't the Kinect already do that from its own POV during the setup process, so it can calculate speaker distance, delay, etc. in order for its noise cancellation to work properly? Once they're doing that, it's not too much of a stretch to ask gamers to move it to their seat and fire off a few more impulses.

Yep, that's optimizing what the Kinect hears. Moving it to the gamer's position can help optimize what the gamer hears. This ability is now fairly common even in mid-level consumer AVRs and pre/pros, and you can argue that such setup is the responsibility of the audio system hardware, not the game console... but MS has the processing power and all the hardware bits necessary to offer this ability should the gamer not have another way of doing it.
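A minimal sketch of that delay-trim idea, assuming a hypothetical capture step in which each speaker plays a known test sweep while the mic sits at the listening position. The actual Kinect calibration pipeline isn't public, so the names and structure here are illustrative only:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, roughly room temperature


def arrival_delay(test_signal, recording, sample_rate):
    """Time of flight from speaker to listening position, estimated from the
    peak of the cross-correlation between the known test signal and what the
    microphone captured."""
    corr = np.correlate(recording, test_signal, mode="full")
    lag = np.argmax(corr) - (len(test_signal) - 1)  # delay in samples
    return max(lag, 0) / sample_rate                # seconds


def delay_trims(delays):
    """Per-channel delay so every speaker's sound arrives together with the
    farthest (slowest) one."""
    latest = max(delays.values())
    return {ch: latest - d for ch, d in delays.items()}


# Hypothetical usage: 'captures' maps each channel name to the recording made
# while that speaker alone played 'sweep' at 'rate' Hz.
# delays    = {ch: arrival_delay(sweep, rec, rate) for ch, rec in captures.items()}
# distances = {ch: d * SPEED_OF_SOUND for ch, d in delays.items()}  # metres
# trims     = delay_trims(delays)
```

The same measurement gives speaker distance essentially for free (delay times the speed of sound), which is roughly what the auto-setup routines in mid-level AVRs report.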
 
bkilian, let's say that SHAPE will save one Jaguar CPU core.

What about the dedicated hardware related to Kinect speech recognition?
If I remember correctly, you once said that the speech recognition compute task at the time of the X360 was quite heavy.

Hoping that the tech has evolved since then, could we say that this dedicated speech recognition hardware could save another core if used? More? Less?
 
Yep, that's optimizing what the Kinect hears. Moving it to the gamer's position can help optimize what the gamer hears. This ability is now fairly common even in mid-level consumer AVRs and pre/pros, and you can argue that such setup is the responsibility of the audio system hardware, not the game console... but MS has the processing power and all the hardware bits necessary to offer this ability should the gamer not have another way of doing it.

I suppose they could take the opportunity to do some basic room correction, even though I'd also prefer the receiver handle that.

The main advantage is that it would remove the excuse that they can't do 3D audio properly because they don't know the speaker positions. Putting the Kinect on your seat would give it a full view of your front three speakers, and the array mic should be able to triangulate the positions of the surround speakers that are out of view. Not to mention they could potentially use the Kinect for head tracking when users are on headphones, although I dunno if it'll be precise enough to work well.
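For the out-of-view speakers, an array mic could in principle estimate a bearing from the time difference of arrival between two capsules. A toy far-field sketch, with the mic spacing as an assumed parameter; the actual Kinect array geometry and processing aren't being asserted here:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s


def bearing_from_tdoa(mic_a, mic_b, sample_rate, mic_spacing_m):
    """Rough direction of arrival from two microphones of an array.
    0 degrees = broadside (straight ahead), +/-90 = along the array axis.
    Uses the cross-correlation peak as the time difference of arrival and a
    far-field approximation (path difference = spacing * sin(angle))."""
    corr = np.correlate(mic_a, mic_b, mode="full")
    lag = np.argmax(corr) - (len(mic_b) - 1)        # samples; sign gives side
    tdoa = lag / sample_rate                        # seconds
    s = np.clip(tdoa * SPEED_OF_SOUND / mic_spacing_m, -1.0, 1.0)
    return np.degrees(np.arcsin(s))
```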
 
Why do you need head tracking? You look forward; if you don't, you can't see the screen.
Plus, it's not about where you're looking, it's about where your in-game character is looking.
 
Why do you need head tracking? You look forward; if you don't, you can't see the screen.
Plus, it's not about where you're looking, it's about where your in-game character is looking.

It has to do with how humans localize sound... turning your head very slightly gives your brain more information about where a sound is located. We all do it, but it's so natural you don't even realize it. Being able to detect a very slight tilt of your head and having the game audio react appropriately is the difference between headphone 3D sound that's merely decent and incredibly convincing.
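To put a number on how small that cue is: under a simple spherical-head model (Woodworth's approximation, used here only as an illustration), a five-degree head turn already shifts the interaural time difference by tens of microseconds, which is within the range the auditory system can detect:

```python
import numpy as np

HEAD_RADIUS_M = 0.0875   # average adult head radius, an assumed constant
SPEED_OF_SOUND = 343.0   # m/s


def itd_seconds(azimuth_deg):
    """Woodworth's spherical-head estimate of the interaural time difference
    for a source at the given azimuth (0 = straight ahead, reasonable for
    roughly -90..+90 degrees)."""
    az = np.radians(azimuth_deg)
    return HEAD_RADIUS_M / SPEED_OF_SOUND * (az + np.sin(az))


# itd_seconds(0.0) -> 0 us (source dead ahead)
# itd_seconds(5.0) -> ~45 us after a mere 5-degree head turn
```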
 
Yes, I understand the process of moving your head to localise sound, but if you wish to detect sound location by moving your head, it's your virtual head you need to move, not your real head.

ps: I do use head tracking

Ideally you'd want both.
 
Well, ideally you'd want both, and a display that moves with you like the Oculus Rift. With a fixed display it makes less sense.
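In code terms, "both" just means the azimuth handed to the HRTF stage combines the virtual camera's yaw with the small, tracked yaw of the player's real head. A sketch with hypothetical names:

```python
def listener_relative_azimuth(source_az_deg, camera_yaw_deg, head_yaw_deg):
    """Azimuth of a world-space source relative to where the listener is
    'facing': the in-game camera orientation plus the tracked rotation of the
    player's real head. Result wrapped to [-180, 180)."""
    az = source_az_deg - (camera_yaw_deg + head_yaw_deg)
    return (az + 180.0) % 360.0 - 180.0
```

With a fixed display the head term stays small, but even a few degrees of it restores the dynamic cue described above; with an HMD like the Rift the same term simply grows larger.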
 
bkilian, let's say that SHAPE will save one Jaguar CPU core.

What about the dedicated hardware related to Kinect speech recognition?
If I remember correctly, you once said that the speech recognition compute task at the time of the X360 was quite heavy.

Hoping that the tech has evolved since then, could we say that this dedicated speech recognition hardware could save another core if used? More? Less?
I doubt it's a full core being saved. Technically, the AVPs can process more than a Jaguar core's worth of code, but I have a hard time believing the speech pipeline has gotten that heavy.

Also, if you're a developer, and you're not interested in Kinect, offloading the Kinect processing hasn't "saved" you anything...

As you can see, I'm still a little bitter that there is all that DSP available, and the Kinect guys reserved the lot. :)
 
As you can see, I'm still a little bitter that there is all that DSP available, and the Kinect guys reserved the lot. :)

If you are at liberty to share, what did you work on for the X1?
And what's your feeling about it as a complete package, based on your knowledge?
Is it a good leap forward for games, not just prettier games, but games in general?

If you already have expressed this somewhere on here, feel free to ignore me :)
 
If you are at liberty to share, what did you work on for the X1?
And what's your feeling about it as a complete package, based on your knowledge?
Is it a good leap forward for games, not just prettier games, but games in general?

If you already have expressed this somewhere on here, feel free to ignore me :)
I was on the audio team. I worked on WASAPI and the audio hardware.

I have no real opinion on the console as a complete package; I haven't actually seen the games. It's easy to develop for (I wrote a Scream Tracker player for it in just a few hours when I was bored), but other than that, your guess is as good as mine.
 
I doubt it's a full core being saved. Technically, the AVPs can process more than a Jaguar core's worth of code, but I have a hard time believing the speech pipeline has gotten that heavy.

Also, if you're a developer, and you're not interested in Kinect, offloading the Kinect processing hasn't "saved" you anything...

As you can see, I'm still a little bitter that there is all that DSP available, and the Kinect guys reserved the lot. :)

Any chance that the Kinect "stuff" is more optimized now and some of those DSPs could become available to developers?
 
The thread is about console audio, not Kinect's utilisation of hardware or bkilian's opinions on any console.
 
Any chance that the Kinect "stuff" is more optimized now and some of those DSPs could become available to developers?
We can hope. If they do, it would probably be in cooperation with third-party libraries, or as a set of effects you can get, as opposed to developers having full access to the chip.
 
We can hope. If they do, it would probably be in cooperation with third-party libraries, or as a set of effects you can get, as opposed to developers having full access to the chip.

Would a possible scenario be something like AMD's TrueAudio API being ported in hardware to the XBO by utilizing a combination of SHAPE and some of the DSPs?
 
Would a possible scenario be something like AMD's TrueAudio API being ported in hardware to the XBO by utilizing a combination of SHAPE and some of the DSPs?
As far as I can tell, TrueAudio looks like it goes right to the metal on the DSP. The AVP has a different instruction set, so effects written for TrueAudio may not work without being rewritten from scratch. You would have to go one layer up: port the GenAudio HRTF solution directly, or have Wwise give you a convolution abstraction like it does for TrueAudio.
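What such a "convolution abstraction" boils down to, stripped of block scheduling and hardware details, is frequency-domain convolution of a dry source with an impulse response (an HRIR pair, in the binaural case). A minimal, offline sketch; real middleware works on fixed-size blocks with partitioned convolution, and none of these names come from Wwise or TrueAudio:

```python
import numpy as np


def fft_convolve(signal, impulse_response):
    """Convolve via the FFT: the primitive a middleware convolution layer
    would dispatch to whatever DSP happens to be underneath."""
    n = len(signal) + len(impulse_response) - 1
    nfft = 1 << (n - 1).bit_length()   # next power of two >= n
    spectrum = np.fft.rfft(signal, nfft) * np.fft.rfft(impulse_response, nfft)
    return np.fft.irfft(spectrum, nfft)[:n]


def render_binaural(dry_mono, hrir_left, hrir_right):
    """One source convolved with a left/right HRIR pair -> stereo output."""
    return np.stack([fft_convolve(dry_mono, hrir_left),
                     fft_convolve(dry_mono, hrir_right)])
```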
 