Do you mean in games or in general music playback?
I was referring to audio in games
> I wonder about using 96 kHz. Yes, it's totally useless for playback, but in a game where you're doing lots of processing and mixing it would help for keeping the signal's precision, maybe. Music can safely stay at 48 kHz, or do modtracker/MIDI if you want.

In-game audio can be (and probably is) processed at whatever resolution the devs choose. I imagine 32-bit floats are used during processing, and the output is converted to 48 kHz, 16-bit. I don't see a huge advantage to working at twice the frequency of the output audio, as those high frequencies that'd suffer from processing are too high to be of much importance, especially when you mix lots of samples together.
Yes, that's fine until you get to my 120 GB of MP3s (not counting the audiobooks, another 60 GB) and iTunes being a dumbass about using network drives, so we have three copies of everything (the central repository, and one each on my wife's and my computer for iTunes to play with). At FLAC/WMA Lossless rates, that would be about 3 terabytes, and I have a ton of other stuff I could use 3 terabytes for. Since I don't hear MP3 artifacts, it really isn't in my interest to "upgrade" to lossless.
> 1) Lossless audio (yes, people notice it)

No, actually they don't. Except for a tiny minority, people who say they notice it merely THINK they do, because they like to overspend on audio equipment in the religious belief it'll disproportionately improve their listening experience, but it typically never pans out when subjected to a properly conducted blind test.
> 2) Useful FX for gaming, like convolution reverbs (quite CPU-expensive and literally a world of difference)

Reverbs are always nice. I'm all for this.
> 3) Virtual surround for 2 speakers/headphones (also CPU-expensive)

This is of limited usefulness on a gaming system. It also doesn't work all that well without really complicating things, such as calibrating using microphones etc., which is too complicated for general consumers. Calibration also breaks if you move stuff around, which people may do without realizing they've just broken their virtual surround configuration... No, too fiddly to have much impact.
> 4) Room/system audio correction (also CPU-expensive)

This is of even more limited usefulness in a gaming system. Too complicated to support and try to explain to people, and chances are high they'll hear fuck-all of a difference anyway.
> 5) Noise cancellation

This is so esoteric as to be useless in a gaming system. It's a gaming system. Put on a pair of high-quality earmuff-type headphones if you want noise cancellation. Besides, noise cancellation varies extremely with your listening position and is thus hard to do. You'd just add twice the noise in places where you don't cancel it...
> They probably would need an (inexpensive) DSP to do all of that, but it would be a bigger difference than most can imagine, and a really worthwhile one for everyone.

I am not so sure about that, really.
> menmau said: At first I thought they could just throw in an OPL2 chip; probably I am being too conservative.

Maybe they'll throw a Gravis Ultrasound in there. Or if they really want to splurge, a Roland MT-32.
> 1) Lossless audio (yes, people notice it)

Some people notice it. Most don't, and when you're in the middle of fragging your friends, you, guaranteed, will not care about the difference between PCM and Dolby Digital.
> 2) Useful FX for gaming, like convolution reverbs (quite CPU-expensive and literally a world of difference)

Indeed, and not just CPU-expensive but also hugely bandwidth-intensive, especially for larger impulses. Modern GPGPU code can do convolution reverb without much CPU involvement, and it could be a good use of idle stream processors.
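For anyone wondering what the expensive part is: convolution reverb is just a (very long) convolution of the dry signal with a recorded room impulse, usually done with FFTs. Below is an illustrative CPU-side Python/NumPy sketch, not anyone's actual engine code; the function name and the toy exponential-decay impulse are made up for the example. A GPGPU version would map the same FFT math onto stream processors.

```python
import numpy as np

def convolution_reverb(dry, impulse_response):
    """Apply a room impulse response to a dry signal via FFT convolution.

    Direct time-domain convolution is O(N*M); FFT convolution is
    O(N log N), which is why long impulses get expensive fast.
    """
    n = len(dry) + len(impulse_response) - 1
    nfft = 1 << (n - 1).bit_length()  # round up to a power of two
    wet = np.fft.irfft(
        np.fft.rfft(dry, nfft) * np.fft.rfft(impulse_response, nfft), nfft
    )[:n]
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet  # normalize to avoid clipping

# Toy example: one second of noise through a half-second decaying impulse.
sr = 48000
dry = np.random.default_rng(0).standard_normal(sr).astype(np.float32)
ir = np.exp(-np.linspace(0, 8, sr // 2)) \
    * np.random.default_rng(1).standard_normal(sr // 2)
wet = convolution_reverb(dry, ir)
```

The bandwidth point above is visible here: a half-second impulse at 48 kHz is 24,000 taps, so every output block touches the whole impulse spectrum.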
> 4) Room/system audio correction (also CPU-expensive)

As mentioned in a post above, pointless. Anyone who cared about it would already have a receiver capable of doing room correction, and everyone else wouldn't care.
> In-game audio can be (and probably is) processed at whatever resolution the devs choose. I imagine 32-bit floats are used during processing, and the output is converted to 48 kHz, 16-bit.

Agreed, and the standard 32-bit float audio format (which is usually normalized to -1..1) has 24 bits of audio resolution. 24-bit resolution has a dynamic range larger than the human ear's, and 16-bit covers more than an orchestra can generate (about 80 dB versus 16-bit's 96 dB).
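The float-processing/16-bit-output pipeline described above can be sketched like this (illustrative Python/NumPy; the helper names are made up, and the dB figures just fall out of the usual 20·log10(2^bits), roughly 6.02 dB per bit):

```python
import numpy as np

def float_to_pcm16(x):
    """Convert normalized float samples in [-1, 1] to 16-bit PCM output."""
    x = np.clip(x, -1.0, 1.0)           # hard-clip anything out of range
    return (x * 32767.0).round().astype(np.int16)

def dynamic_range_db(bits):
    """Theoretical dynamic range of an n-bit quantizer."""
    return 20 * np.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))   # 96.3 dB
print(round(dynamic_range_db(24), 1))   # 144.5 dB
```

Which is why mixing in 32-bit float (24-bit mantissa) and dithering down to 16-bit at the very end loses nothing audible: the 96 dB of the output format already exceeds the ~80 dB an orchestra spans.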
> Modtracker/MIDI music is off the cards unless you want your music to sound cheap or be really limited.

Heh, true story: I just dusted off my almost-twenty-year-old code for an S3M player I wrote and incorporated it into a fun proof-of-concept app I wrote for the <CENSORED> machine.
> I doubt your audio books require high fidelity.

Unless you, like me, have a total of less than 3 terabytes in your entire household. My desktop has 500 GB. My wife's _may_ have 700. I'm not spending hundreds of dollars because lossless theoretically sounds better than lossy. Incredibly few people can reliably spot the difference between 192-256k audio and lossless, and I'm not one of them.
FLAC/WMA Lossless equates to roughly 600 kbit/s. Assuming 128 kbit/s MP3s, that's just five times the storage. Lossy compression made sense back when 1 GB of flash storage was considered massive (and expensive); today it is pointless.
> iTunes' network quirks are just another reason to ditch that piece-of-crap bloatware. Do any of your machines run Windows? Then use Media Center, or run one of umpteen free media servers (like TVersity).

Oh, I'm with you on the iTunes hate. In my not-so-humble opinion, it's one of the worst-designed pieces of software I've ever used. It makes me angry every time I fire it up. However, Apple has very graciously allowed me the use of only their POS software for loading music onto my iPods. No, I'm not bitter at all.
Lots of audio experience...
Modtracker/MIDI music is off the cards unless you want your music to sound cheap or be really limited. It's a technique used in mobile games where downloads need to be small, but an orchestral score playing from a 64 MB General MIDI soundfont will sound ghastly. I personally despise faked orchestras in games more than any other sound fault. I've no complaints with audio compression artefacts or a lack of fidelity, but those fake strings and brass really grate with me. I understand a real orchestra will cost too much money in most cases, and synthetic orchestras are constantly improving, but it still smacks me between the ears every time I hear a grandiose orchestral score being drummed out by a computer.
Human hearing is pretty damn low-fidelity, and most people's speaker systems are not good enough; their listening environments may be noisy (fans in consoles typically aren't silent, ventilation, maybe kids making a ruckus, etc.).
Why waste multiples more RAM on something that has such a TINY quality-improvement factor?
> Anything below 88 kHz can create feedback frequencies that will alter the sound, unless your sound processing has anti-aliasing; most have nowadays. Less than 64-bit floats is also less than ideal. But it would only be problematic for memory; CPU-wise it is quite low.

*sigh* You're one of _those_ people... There are no "feedback frequencies", because there will always be a low-pass filter inserted below Nyquist to ensure no aliasing occurs. The sampling theorem guarantees that the input frequencies and output frequencies will be identical as long as there are no frequencies above Nyquist (half your sampling rate).
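The folding that the sampling theorem predicts, and that the anti-aliasing filter exists to prevent, is easy to demonstrate numerically (illustrative Python/NumPy sketch): a 30 kHz tone sampled at 48 kHz without a low-pass filter produces exactly the same samples as an 18 kHz tone, which is why the filter goes in before sampling, not after.

```python
import numpy as np

sr = 48000                        # output sample rate
t = np.arange(sr) / sr            # one second of sample instants

f = 30_000.0                      # a tone above Nyquist (24 kHz)
x = np.sin(2 * np.pi * f * t)     # what naive sampling would record

# The sampled tone is indistinguishable from its alias folded
# down to sr - f = 18 kHz (with inverted phase):
alias = -np.sin(2 * np.pi * (sr - f) * t)
print(np.allclose(x, alias, atol=1e-6))   # True
```

Once the samples are identical, no later processing can tell the two apart, hence the insistence on filtering before (re)sampling.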
> Digital synths of today can do real wonders but are quite expensive in both $$$ and CPU, but I ASSURE you THEY DO NOT SOUND CHEAP AT ALL, quite the contrary.

I agree with you, but it all depends on the quality of the patches you're using. The default one included with Windows is about 5 MB, and has been since the 90s. It sucks.
http://www.roland.com/products/en/JUPITER-80/
https://www.korg.com/kronos
Or even virtual synths like Alchemy or Omnisphere
http://www.camelaudio.com/Alchemy.php
http://www.spectrasonics.net/products/omnisphere.php/
Anyone would have a real hard job distinguishing the real stuff from good, well-sequenced MIDI stuff.
> Actually they do notice; many times they don't know they do.

A lot of this is pseudoscience. In real tests, users chose louder, lower-quality sounds over softer, higher-quality ones, simply because we prefer louder sounds in general. I agree that the better quality an audio track is, the better in general, but there is no definitive science determining where the crossover is, since it's different for everyone. Most AAA game audio today has quality easily comparable to high-quality movie mixes.
That has a lot to do with several factors.
One of them is habit: if you are used to lo-fi sound it will take some time to really notice what is better. Hearing can be really hi-fi, but it needs habit/training. This is the least important reason, though.
The more important reasons are:
1) Volume - when you turn your music up, a high-quality sound really does sound so much better, but many people blame their speakers before the compression. It also means it is easier to play it louder.
2) Time of gaming - it is claimed that lo-fi sound gets tiresome much faster than the same sound in hi-fi, even without us noticing it. That would mean a better gaming experience and would contribute to longer gaming sessions, even without people noticing it.
3) People notice good sound more than bad sound (unless it makes noise).
4) People actually do notice it; that is why in some places you only find hi-fi components and sound, from cinemas to conferences to any low-profile/cheap disco/club, because even with soft, low-volume music people would grow to dislike it quite fast.
> I am quite sure that with Kinect 2.0 you could get a nice 3D reading of your room and track people in it, so it would all be automatic.

We generate a room impulse today with the current Kinect (that's what audio calibration does), for use with echo reduction. We also do user tracking for beamforming.
> As advice, I suggest people try listening only to hi-fi sound (lossless or good HQ YouTube videos) for a while (e.g. two weeks) and then try to go back; you will notice, I am sure of that.

Dude, I do audio _for a living_. I work with, and have worked with, leaders in the field of audio engineering. I have _done_ ABX testing, and that's why I know 100% that I cannot reliably tell the difference between 192 kb MP3 and CD quality. I've tried. I am by no means a "golden ear", although my frequency range is higher than average (I can still hear 18 kHz, which at my age is pretty good). That's why I know that, for me, I would get no boost at all, so any investment is worthless.
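For anyone who wants to try this themselves: the bookkeeping behind an ABX run is trivial, and the hard part is honest, level-matched playback. A minimal sketch of the trial logic in plain Python (hypothetical function names, no audio I/O; the significance math is just the binomial tail for coin-flip guessing):

```python
import random
from math import comb

def abx_keys(n_trials=16, seed=None):
    """For each trial, randomly decide whether X is clip A or clip B."""
    rng = random.Random(seed)
    return [rng.choice("AB") for _ in range(n_trials)]

def guessing_p_value(correct, n_trials):
    """Probability of scoring at least this well by pure guessing."""
    return sum(comb(n_trials, k)
               for k in range(correct, n_trials + 1)) / 2 ** n_trials

# e.g. 12/16 correct is usually taken as a positive result:
print(round(guessing_p_value(12, 16), 3))   # 0.038, i.e. p < 0.05
```

The point of the hidden keys is exactly what ABX is for: if you can't beat the guessing baseline over enough trials, you can't reliably hear the difference, whatever you believe you hear sighted.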
Anyway, our hearing is really flexible, and at least for a while it can stand very bad sound.
On another note, most music nowadays is produced to be heard that way, so it doesn't help in getting used to the difference.
The funny but sad thing is that you could get a really good boost with a very small investment...
> bkilian,
> If I came to you as a console designer and said: Based on your experience of working with developers and their issues with audio on the Xbox platforms, could you (a) explain the pros and cons of the current audio subsystems on consoles, (b) explain what your ideal, within a reasonable budget, console sound hardware would look like on a future console, and (c) say what kind of software tools/changes you would implement to improve audio in games, movies, etc. on consoles?
> Bonus question: (d) if you could go nuts with audio (think targeting the top 5% of consumers who appreciate good audio and are willing to pay a modest premium for it, but not the 1% audio nuts who want no-holds-barred quality at nearly any cost), what would you do on a console, and do you think there is a market for such?

I wish I could give you answers, but I really can't, sorry. I'd find it hard to stay away from revealing things I shouldn't.
> Anything below 88 kHz can create feedback frequencies that will alter the sound.

This is not of any great concern. Current audio tech works just fine, and if there's actually any problem here, it's been dealt with satisfactorily.
> Less than 64-bit floats is also less than ideal.

Are you talking about samples here, or about the mixing stage? Because claiming that 64-bit float samples are the only "ideal" solution is completely ludicrous, of course.
> Digital synths of today can do real wonders but are quite expensive in both $$$ and CPU, but I ASSURE you THEY DO NOT SOUND CHEAP AT ALL, quite the contrary.

Yeah, but we can't have every console owner be forced to license synth software and instrument banks worth potentially thousands of dollars just to get good-sounding MIDI audio. I think what Shifty was saying is that any standard, reasonably priced General MIDI instrument bank is going to sound pretty crap playing orchestral music (which is true, btw).
> Actually they do notice; many times they don't know they do.

You got some independent-studies linkage to post, proving that claim?
> That has a lot to do with several factors.
> One of them is habit: if you are used to lo-fi sound it will take some time to really notice what is better. Hearing can be really hi-fi, but it needs habit/training.

That seems like placebo or (self-)indoctrination to me, TBH. Again, some independent studies to back up this vague claim?
> I am quite sure that with Kinect 2.0 you could get a nice 3D reading of your room and track people in it, so it would all be automatic.

A Kinect reading would treat everything scanned as hard surfaces, as even a 3D camera can't properly identify fabrics and foam pillows and so on. It wouldn't create an accurate representation. Better than nothing, perhaps, but you seem like such a stickler for details (64-bit floats... indeed!), I'm surprised you'd settle for setting the bar this low!
> And yes, it is hard to explain, but once you try it... And you actually don't need high-end hi-fi systems to hear the difference, although it would be much better with those.

Again, independent, proper studies rarely back up such claims. It's mostly just in the heads of the owners of such hi-fi gear, not anything tangibly measurable in the real world. Your language indicates as much, btw, like when you speak of the need to "train" yourself to hear differences and so on. I doubt this is even possible.
> For one, I am not fortunate enough to have a sound system like I would like.

Well, good for you! Honestly. I too love getting exactly the gear that I want.
> Anything below 88 kHz can create feedback frequencies that will alter the sound, unless your sound processing has anti-aliasing; most have nowadays. Less than 64-bit floats is also less than ideal. But it would only be problematic for memory; CPU-wise it is quite low.

Hmm. I edit audio in Audacity in 32-bit floats, and it doesn't sound terrible. Or even in any way perceptibly bad. So I'm not sure what you count as less than ideal, unless ideal is a scientific ideal well beyond the typical human's perception.
> Digital synths of today can do real wonders but are quite expensive in both $$$ and CPU, but I ASSURE you THEY DO NOT SOUND CHEAP AT ALL, quite the contrary.

This is somewhat tangential. By 'cheap' I was referring to synthetic orchestras. Even the good stuff (Vienna) that can sound very convincing can sound very artificial, and that's the quality of many game scores. For synthetic instruments, and even some simulated natural instruments (I own TruePianos and it is very impressive), modern synths are superb, yes, but that's where I said 'limited', because synthetic sounds only take you so far in musical styles.
> One of them is habit: if you are used to lo-fi sound it will take some time to really notice what is better. Hearing can be really hi-fi, but it needs habit/training...

This argument seems counterproductive to me. What you're saying is that Joe Public can't tell the difference between $200 and $20,000 hardware until they are trained. Then, once trained, the cheap stuff they used to be happy with now makes them miserable, and they have to invest in expensive stuff to get enjoyment from their audio. Isn't that an argument to keep everything lo-fi and everyone happy?
> 2) Time of gaming - it is claimed that lo-fi sound gets tiresome much faster than the same sound in hi-fi, even without us noticing it.

Aren't most core gamers able to play for hours at a time? I don't see the evidence for gamers getting tired of terrible sounds. On the 8-bit machines with their infinitely repetitive tunes, young gamers still played for hours at a time!
> If you upgrade the gfx even without needing to, then why not sound too?

Because people can perceive the improvement in graphics. When graphics reach a point where people can't perceive any improvement (from 1 million polys per character to 10 million), we won't bother advancing them. When people can't perceive the difference between one audio quality and another (CD and SACD, or CD and MP3s), then there's no reason to progress. You just consume more resources for no benefit.
> As advice, I suggest people try listening only to hi-fi sound (lossless or good HQ YouTube videos) for a while (e.g. two weeks) and then try to go back; you will notice, I am sure of that.

I can hear the difference when listening to music, but when it's background audio to a game, I'm not that focused on the audio.