Predict: The Next Generation Console Tech - AUDIO

They could do so much to improve sound that it is sad if they don't.

The funny thing is that people don't notice it unless asked, but they really do prefer it; that is the hard job of an audio guy (like me).

Some examples

1) lossless audio (yes, people notice it)
2) useful FX for gaming like convolution reverbs (quite CPU expensive and literally a world of difference)
3) virtual surround for 2 speakers/headphones (also CPU expensive)
4) room/system audio correction (also CPU expensive)
5) noise cancellation
6)...

They would probably need an (inexpensive) DSP to do all of that, but it would be a bigger difference than most can imagine and a really worthwhile one for everyone.
 
I wonder about using 96 kHz. Yes, it's totally useless for playback, but in a game where you're doing lots of processing and mixing it might help preserve signal precision. Music can safely stay at 48 kHz, or use modtracker/MIDI if you want.
In-game audio can be (and probably is) processed at whatever resolution the devs choose. I imagine 32-bit floats are used during processing, and the output is converted to 48 kHz, 16-bit. I don't see a huge advantage to working at twice the frequency of the output audio, as the high frequencies that'd suffer from processing are too high to be of much importance, especially when you mix lots of samples together.
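To illustrate that kind of pipeline, here's a rough numpy sketch (my own toy example, not any console's actual mixer): sum a few float32 voices, clip, add a little triangular dither, and quantize to 16-bit.

```python
# Minimal sketch of an assumed float-in, 16-bit-out mixing stage.
import numpy as np

def mix_and_quantize(streams, dither=True):
    """streams: list of float32 arrays normalized to [-1, 1] at 48 kHz."""
    mix = np.sum(np.stack(streams), axis=0).astype(np.float32)
    mix = np.clip(mix, -1.0, 1.0)              # avoid wrap-around on overload
    if dither:
        # roughly 1 LSB of triangular dither to mask quantization distortion
        lsb = 1.0 / 32768.0
        mix = mix + (np.random.uniform(-lsb, lsb, mix.shape)
                     + np.random.uniform(-lsb, lsb, mix.shape)) / 2
    return (np.clip(mix, -1.0, 1.0) * 32767.0).astype(np.int16)

# e.g. three test tones mixed at 48 kHz
t = np.arange(48000) / 48000.0
voices = [0.3 * np.sin(2 * np.pi * f * t).astype(np.float32) for f in (220, 440, 880)]
pcm16 = mix_and_quantize(voices)
```

The float stage gives plenty of headroom while mixing; the only lossy step is the final clip/quantize to the output format.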

Modtracker/MIDI music is off the cards unless you want your music to sound cheap or be really limited. It's a technique used in mobile games where downloads need to be small, but an orchestral score playing from a 64 MB General MIDI soundfont will sound ghastly. I personally despise faked orchestras in games more than any other sound fault. I've no complaints with audio compression artefacts or a lack of fidelity, but those fake strings and brass really grate on me. I understand a real orchestra will cost too much money in most cases, and synthetic orchestras are constantly improving, but it still smacks me between the ears every time I hear a grandiose orchestral score being drummed out by a computer.
 
Yes, that's fine until you get to my 120 GB of MP3s (not counting the audiobooks, another 60 GB) and iTunes being a dumbass about network drives, so we have three copies of everything (the central repository, and one each on my wife's and my computer for iTunes to play with). At FLAC/WMA Lossless rates, that would be about 3 terabytes, and I have a ton of other stuff I could use 3 terabytes for. Since I don't hear MP3 artifacts, it really isn't in my interest to "upgrade" to lossless.

I doubt your audio books require high fidelity.

FLAC/WMA Lossless equates to roughly 600 kbit/s. Assuming 128 kbit/s MP3s, that's just five times the storage. Lossy compression made sense back when 1 GB of flash storage was considered massive (and expensive); today it is pointless.

iTunes' network quirks are just another reason to ditch that piece-of-crap bloatware. Do any of your machines run Windows? Then use Media Center, or run one of the umpteen free media servers (like TVersity).

Cheers
 
1) lossless audio (yes, people notice it)
No, actually they don't. Except for a tiny minority, people who say they notice it merely THINK they do, because they like to overspend on audio equipment in the religious belief that it'll disproportionately improve their listening experience, but that typically never pans out when subjected to a properly conducted blind test. :)

Human hearing is pretty damn low fidelity, most people's speaker systems are not good enough, and their listening environments may be noisy (fans in consoles typically aren't silent, ventilation, maybe kids making a ruckus, etc...).

Why waste multiples more RAM on something with such a tiny, TINY quality improvement factor?

2) useful FX for gaming like convolution reverbs (quite CPU expensive and literally a world of difference)
Reverbs are always nice. I'm all for this.

3) virtual surround for 2 speakers/headphones (also CPU expensive)
This is of limited usefulness on a gaming system. It also doesn't work all that well without really complicating things, such as calibrating using microphones etc, which is too complicated for general consumers. Calibration also breaks if you move stuff around which people may do without realizing they've just broken their virtual surround configuration... No, too fiddly to have much impact.

4) room/system audio correction (also CPU expensive)
This is of even more limited usefulness in a gaming system. Too complicated to support and try to explain to people, and chances are high they'll hear fuckall of a difference anyway.

5) noise cancellation
This is so esoteric as to be useless in a gaming system. It's a gaming system. Put on a pair of high quality earmuff-type headphones if you want noise cancellation. Besides, noise cancellation varies greatly with your listening position and is thus hard to do. You'd just add twice the noise in the places where you don't cancel it...

They would probably need an (inexpensive) DSP to do all of that, but it would be a bigger difference than most can imagine and a really worthwhile one for everyone.
I am not so sure about that, really. :)
 
menmau said:
At first I thought they could just throw in an OPL2 chip; probably I am being too conservative.

I still have an MT-32 ... :)

@Grall: it is not quite that simple. People have been shown to have much stronger, primitive reactions to high-quality sound. For instance, if you hear a glass fall and break, you have a pretty strong physical reaction to it. It took the best Laserdisc had to offer back in the day to get close to that same kind of physical reaction.

Personally, I think sound like in Uncharted 2 is pretty fantastic. I don't know that a dedicated DSP makes much sense today, as you basically need to do ray tracing or casting similar to what's done for light sources in order to get realistic sound occlusion and reverberation, so I think you can reuse the graphics pipeline quite well if you're going to go all the way. Uncharted did that to some extent, and the SPEs in the Cell were probably a big help, but I assume modern GPUs should be able to do that work just as easily.
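To put the ray-casting idea in concrete terms, here's a toy occlusion test (spheres standing in for level geometry; my own simplification, obviously nothing like Uncharted's actual implementation):

```python
# Toy sketch of ray-cast sound occlusion: attenuate a source if the straight
# line from listener to source is blocked. Real engines cast many rays and
# also trace reflections for reverberation.
import numpy as np

def segment_hits_sphere(p0, p1, center, radius):
    """True if the segment p0->p1 passes through the sphere."""
    d = p1 - p0
    f = p0 - center
    a = d @ d
    b = 2 * (f @ d)
    c = f @ f - radius ** 2
    disc = b * b - 4 * a * c
    if disc < 0:
        return False
    t1 = (-b - np.sqrt(disc)) / (2 * a)
    t2 = (-b + np.sqrt(disc)) / (2 * a)
    return (0 <= t1 <= 1) or (0 <= t2 <= 1)

def occlusion_gain(listener, source, obstacles, occluded_gain=0.3):
    blocked = any(segment_hits_sphere(listener, source, c, r) for c, r in obstacles)
    return occluded_gain if blocked else 1.0

listener = np.array([0.0, 0.0, 0.0])
source = np.array([10.0, 0.0, 0.0])
wall = [(np.array([5.0, 0.0, 0.0]), 1.0)]        # a sphere standing in for a pillar
print(occlusion_gain(listener, source, wall))    # 0.3 -> source sounds muffled
```

The point is that this is exactly the kind of intersection math a GPU already does for graphics, which is why reusing that pipeline for audio is attractive.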
 
1) lossless audio (yes, people notice it)
Some people notice it. Most don't, and when you're in the middle of fragging your friends, you, guaranteed, will not care about the difference between PCM and Dolby Digital.
2) useful FX for gaming like convolution reverbs (quite CPU expensive and literally a world of difference)
indeed, and not just CPU expensive, but also hugely bandwidth intensive, especially for larger impulses. Modern GPGPU code can do convolution reverb without much CPU involvement, and could be a good use of idle stream processors.
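For the curious, the core of a convolution reverb is just a (long) FFT multiply. Here's a minimal offline sketch with a made-up synthetic impulse response; a real-time engine would use partitioned/overlap-add convolution, which is exactly the kind of embarrassingly parallel FFT work a GPU likes:

```python
# Minimal FFT-based convolution reverb sketch (toy example, not a real engine).
import numpy as np

def convolution_reverb(dry, impulse_response, wet_mix=0.4):
    n = len(dry) + len(impulse_response) - 1
    nfft = 1 << (n - 1).bit_length()                  # next power of two
    wet = np.fft.irfft(np.fft.rfft(dry, nfft) * np.fft.rfft(impulse_response, nfft), nfft)[:n]
    wet /= np.max(np.abs(wet)) + 1e-12                # crude normalization
    out = np.zeros(n)
    out[:len(dry)] += (1.0 - wet_mix) * dry
    out += wet_mix * wet
    return out

sr = 48000
dry = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)               # 1 s test tone
ir = np.random.randn(sr) * np.exp(-np.arange(sr) / (0.3 * sr))   # fake ~0.3 s decay
wet_signal = convolution_reverb(dry, ir)
```

The bandwidth cost mentioned above comes from streaming long impulse responses and FFT blocks through memory every audio frame, not from the arithmetic itself.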
4) room/system audio correction (also CPU expensive)
As mentioned in a post above, pointless. Anyone who cared about it would already have a receiver capable of doing room correction, and everyone else wouldn't care.
In-game audio can be (and probably is) processed at whatever resolution the devs choose. I imagine 32-bit floats are used during processing, and the output is converted to 48 kHz, 16-bit. I don't see a huge advantage to working at twice the frequency of the output audio, as the high frequencies that'd suffer from processing are too high to be of much importance, especially when you mix lots of samples together.
Agreed, and the standard 32-bit float audio format (which is usually normalized to -1 → 1) has 24 bits of audio resolution. 24-bit resolution has a dynamic range larger than the human ear, and 16-bit covers more than an orchestra can generate (about 80 dB versus 16-bit's 96 dB).
Modtracker/MIDI music is off the cards unless you want your music to sound cheap or be really limited.
Heh, true story: I just dusted off my almost twenty-year-old code for an S3M player I wrote and incorporated it into a fun proof-of-concept app I wrote for the <CENSORED> machine.
Er, Ok, the <CENSORED>? <CENSORED> to be released <CENSORED>. Dammit. I knew I shouldn't have let them install that chip in my brain... ;)
I doubt your audio books require high fidelity.

FLAC/WMA Lossless equates to roughly 600 kbit/s. Assuming 128 kbit/s MP3s, that's just five times the storage. Lossy compression made sense back when 1 GB of flash storage was considered massive (and expensive); today it is pointless.
Unless you, like me, have a total of less than 3 terabytes in your entire household. My desktop has 500 GB. My wife's _may_ have 700. I'm not spending hundreds of dollars because lossless theoretically sounds better than lossy. Incredibly few people can reliably spot the difference between 192-256 kbit/s audio and lossless, and I'm not one of them.

iTunes' network quirks are just another reason to ditch that piece-of-crap bloatware. Do any of your machines run Windows? Then use Media Center, or run one of the umpteen free media servers (like TVersity).
Oh, I'm with you on the iTunes hate. In my not-so-humble opinion, it's one of the worst-designed pieces of software I've ever used. It makes me angry every time I fire it up. However, Apple has very graciously allowed me the use of only their POS software for loading music onto my iPods. No, I'm not bitter at all.
 
I wonder about using 96 kHz. Yes, it's totally useless for playback, but in a game where you're doing lots of processing and mixing it might help preserve signal precision. Music can safely stay at 48 kHz, or use modtracker/MIDI if you want.

It wouldn't really buy you anything to go to a higher sampling rate than 48 kHz. Some people may claim to be able to hear stuff above 24 kHz, but 99% of the public can't, nor do they have speakers/headphones with that kind of wide frequency response.

You can benefit from using more bits of precision for the samples you already have. In theory, if you didn't have quantized samples, you could do a perfect reconstruction of the audio signal. I would say using 24-bit or 32-bit integers for arithmetic would be beneficial. Floating point has good dynamic range, but the mantissa is only 23 bits (+1 implied bit), so you're always at 24 bits of precision. I think you're better off keeping things integer. However, at those sample sizes, I doubt it matters. Each additional bit buys you ~6 dB of dynamic range, and that can be useful when mixing, filtering, and combining samples.
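The "~6 dB per bit" rule of thumb is just 20·log10(2^N); a quick check (my own snippet):

```python
# Dynamic range of N-bit integer PCM ≈ 20*log10(2**N) ≈ 6.02*N dB.
import math

for bits in (16, 24, 32):
    print(bits, round(20 * math.log10(2 ** bits), 1), "dB")
# 16 -> 96.3 dB, 24 -> 144.5 dB, 32 -> 192.7 dB
```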
 
Lots of audio experience...

bkilian,

If I came to you as a console designer and said: based on your experience working with developers and their issues with audio on the Xbox platforms, could you (a) describe the pros and cons of the current audio subsystems on consoles, (b) explain what your ideal (within a reasonable budget) console sound hardware would look like on a future console, and (c) describe what kind of software tools/changes you would implement to improve audio in games, movies, etc. on consoles?

Bonus question: (d) if you could go nuts with audio (think targeting the top 5% of consumers who appreciate good audio and are willing to pay a modest premium for it, but not the 1% of audio nuts who want no-holds-barred quality at nearly any cost), what would you do on a console, and do you think there is a market for it?
 
In-game audio can be (and probably is) processed at whatever resolution the devs choose. I imagine 32-bit floats are used during processing, and the output is converted to 48 kHz, 16-bit. I don't see a huge advantage to working at twice the frequency of the output audio, as the high frequencies that'd suffer from processing are too high to be of much importance, especially when you mix lots of samples together.

Anything below 88 kHz can create feedback frequencies that will alter the sound, unless your sound processing has anti-aliasing, which most does nowadays. Less than 64-bit floats is also less than ideal. But it would only be problematic for memory; CPU-wise it is quite low.

Modtracker/MIDI music is off the cards unless you want your music to sound cheap or be really limited. It's a technique used in mobile games where downloads need to be small, but an orchestral score playing from a 64 MB General MIDI soundfont will sound ghastly. I personally despise faked orchestras in games more than any other sound fault. I've no complaints with audio compression artefacts or a lack of fidelity, but those fake strings and brass really grate on me. I understand a real orchestra will cost too much money in most cases, and synthetic orchestras are constantly improving, but it still smacks me between the ears every time I hear a grandiose orchestral score being drummed out by a computer.


Digital synths of today can do real wonders, but are quite expensive in both $$$ and CPU. I ASSURE YOU THEY DO NOT SOUND CHEAP AT ALL, quite the contrary.

http://www.roland.com/products/en/JUPITER-80/
https://www.korg.com/kronos

Or even virtual synths like Alchemy or Omnisphere

http://www.camelaudio.com/Alchemy.php
http://www.spectrasonics.net/products/omnisphere.php/

Anyone would have a really hard time distinguishing the real stuff from good, well-sequenced MIDI stuff.



No, actually they don't. Except for a tiny minority, people who say they notice it merely THINK they do, because they like to overspend on audio equipment in the religious belief that it'll disproportionately improve their listening experience, but that typically never pans out when subjected to a properly conducted blind test. :)

Human hearing is pretty damn low fidelity, most people's speaker systems are not good enough, and their listening environments may be noisy (fans in consoles typically aren't silent, ventilation, maybe kids making a ruckus, etc...).

Why waste multiples more RAM on something with such a tiny, TINY quality improvement factor?

Actually they do notice; many times they don't know they do ;)

That has a lot to do with several factors.

One of them is habit: if you are used to lo-fi sound, it will take some time to really notice what is better. Hearing can be really hi-fi, but it needs habit/training; this is the least important reason, though.

But the more important reasons are:

1) Volume - when you start to play your music louder, high-quality sound really does sound so much better, but many people blame their speakers before the compression; it also means it is easier to play it louder.

2) Time of gaming - it is proven that lo-fi sound gets tiresome much faster than the same sound in hi-fi, even if we don't notice it; that would mean a better gaming experience and would contribute to longer gaming sessions, even without people noticing it.

3) People notice good sound more than bad sound (unless it makes noise).

4) People actually do notice it; that is why in some places only hi-fi components and sound are used, from cinemas to conferences to any low-profile/cheap disco/club, because even with soft, low-volume music people would dislike it quite fast.




This is of limited usefulness on a gaming system. It also doesn't work all that well without really complicating things, such as calibrating using microphones etc, which is too complicated for general consumers. Calibration also breaks if you move stuff around which people may do without realizing they've just broken their virtual surround configuration... No, too fiddly to have much impact.

This is of even more limited usefulness in a gaming system. Too complicated to support and try to explain to people, and chances are high they'll hear fuckall of a difference anyway.

I am quite sure that with Kinect 2.0 you could get a nice 3D reading of your room and track people in it, so it would all be automatic ;)


And yes, it is hard to explain, but once you try it... And you actually don't need high-end hi-fi systems to hear the difference, although it would be much better with those.

For one, I am not fortunate enough to have a sound system like the one I would like.




Some people notice it. Most don't, and when you're in the middle of fragging your friends, you, guaranteed, will not care about the difference between PCM and Dolby Digital.

I haven't since I tried the first CS, yet they are doing a third graphics upgrade (and I still prefer the gameplay and looks of the old ones), although the over-compressed sound in CS is weaker (using a compressor and limiter in the sound design).

If you upgrade the graphics even without needing to, then why not the sound too?

indeed, and not just CPU expensive, but also hugely bandwidth intensive, especially for larger impulses. Modern GPGPU code can do convolution reverb without much CPU involvement, and could be a good use of idle stream processors.

I am hoping it will :D

As mentioned in a post above, pointless. Anyone who cared about it would already have a receiver capable of doing room correction, and everyone else wouldn't care.

Most people have almost never tried it, and they actually do notice, like I said above; that is why every place but our houses is equipped with hi-fi sound systems.

And everyone who can pays good, really good, money to have the best audio engineers for their films/music/games.

When one of those has lower-than-expected sound, people complain, a lot, but they rarely notice when it is actually very good.


Agreed, and the standard 32-bit float audio format (which is usually normalized to -1 → 1) has 24 bits of audio resolution. 24-bit resolution has a dynamic range larger than the human ear, and 16-bit covers more than an orchestra can generate (about 80 dB versus 16-bit's 96 dB).

Any acoustic/analogue instrument can produce a virtually unlimited dynamic range; if you strike it (e.g. a guitar) with an intermediate force, it will produce an intermediate level of sound.



I'm not spending hundreds of dollars because lossless theoretically sounds better than lossy. Incredibly few people can reliably spot the difference between 192-256 kbit/s audio and lossless, and I'm not one of them.


As advice, I suggest people try listening only to hi-fi sound (lossless, or good HQ YouTube videos) for a while (e.g. two weeks) and then try to go back; you will notice, I am sure of that.

Anyway, our hearing is really flexible, and at least for a while it can put up with very bad sound.

On the other hand, most music nowadays is produced to be heard that way, so it doesn't help people get used to the difference.




The funny but sad thing is that you could get a really good boost with a very small investment...
 
Anything below 88 kHz can create feedback frequencies that will alter the sound, unless your sound processing has anti-aliasing, which most does nowadays. Less than 64-bit floats is also less than ideal. But it would only be problematic for memory; CPU-wise it is quite low.
*sigh* You're one of _those_ people... There are no feedback frequencies, because there will always be a low-pass filter inserted just below Nyquist to ensure no aliasing occurs. The sampling theorem guarantees that the input and output frequencies will be identical as long as there are no frequencies above Nyquist (half your sampling rate).
And 64-bit is completely unnecessary. 24-bit gives about 144 dB of dynamic range. That's the difference between the softest sound you can think of in a soundproof room and standing right behind a fighter jet as it takes off. 32 bits would be in the range of 190 dB; a sound that loud would pretty much instantly implode your head. The average room has a noise floor of about 50 dB on a calibrated SPL meter; anything under that will be practically inaudible, and a good half of the 16-bit range is below that.
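To make the anti-alias point concrete, here's a quick scipy sketch (my own toy example): resample a 96 kHz signal containing a 10 kHz tone plus a 30 kHz tone down to 48 kHz, and the low-pass inside resample_poly removes the 30 kHz component instead of letting it fold down to 18 kHz.

```python
# Band-limit-then-decimate demo: the out-of-band tone is filtered, not aliased.
import numpy as np
from scipy.signal import resample_poly

sr_in, sr_out = 96000, 48000
t = np.arange(sr_in) / sr_in
x = np.sin(2 * np.pi * 10000 * t) + np.sin(2 * np.pi * 30000 * t)  # in-band + out-of-band

y = resample_poly(x, up=1, down=2)      # applies an anti-alias low-pass near 24 kHz, then decimates

spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1 / sr_out)
print("energy near 10 kHz:", spectrum[np.argmin(abs(freqs - 10000))])
print("energy near 18 kHz (would-be alias):", spectrum[np.argmin(abs(freqs - 18000))])
```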
Digital synths of today can do real wonders, but are quite expensive in both $$$ and CPU. I ASSURE YOU THEY DO NOT SOUND CHEAP AT ALL, quite the contrary.

http://www.roland.com/products/en/JUPITER-80/
https://www.korg.com/kronos

Or even virtual synths like Alchemy or Omnisphere

http://www.camelaudio.com/Alchemy.php
http://www.spectrasonics.net/products/omnisphere.php/

Anyone would have a really hard time distinguishing the real stuff from good, well-sequenced MIDI stuff.
I agree with you, but it all depends on the quality of the patches you're using. The default one included in Windows is about 5 MB, and has been since the '90s. It sucks.
Actually they do notice; many times they don't know they do ;)

That has a lot to do with several factors.

One of them is habit: if you are used to lo-fi sound, it will take some time to really notice what is better. Hearing can be really hi-fi, but it needs habit/training; this is the least important reason, though.

But the more important reasons are:

1) Volume - when you start to play your music louder, high-quality sound really does sound so much better, but many people blame their speakers before the compression; it also means it is easier to play it louder.

2) Time of gaming - it is proven that lo-fi sound gets tiresome much faster than the same sound in hi-fi, even if we don't notice it; that would mean a better gaming experience and would contribute to longer gaming sessions, even without people noticing it.

3) People notice good sound more than bad sound (unless it makes noise).

4) People actually do notice it; that is why in some places only hi-fi components and sound are used, from cinemas to conferences to any low-profile/cheap disco/club, because even with soft, low-volume music people would dislike it quite fast.
A lot of this is pseudoscience. In real tests, users chose louder lower quality sounds over softer high quality ones, simply because we prefer louder sounds in general. I agree that the better quality an audio track is, the better in general, but there is no definitive science determining what the crossover is, since it's different for everyone. Most AAA game audio today has quality easily comparable to high quality movie mixes.
I am quite sure that with Kinect 2.0 you could get a nice 3D reading of your room and track people in it, so it would all be automatic ;)
We generate a room impulse today with the current Kinect (that's what audio calibration does), for use with echo reduction. We also do user tracking for beamforming.
As advice, I suggest people try listening only to hi-fi sound (lossless, or good HQ YouTube videos) for a while (e.g. two weeks) and then try to go back; you will notice, I am sure of that.

Anyway, our hearing is really flexible, and at least for a while it can put up with very bad sound.

On the other hand, most music nowadays is produced to be heard that way, so it doesn't help people get used to the difference.

The funny but sad thing is that you could get a really good boost with a very small investment...
Dude, I do audio _for a living_. I work with, and have worked with, leaders in the field of audio engineering. I have _done_ ABX testing, and that's why I know 100% that I cannot reliably tell the difference between 192 kbit/s MP3 and CD quality. I've tried. I am by no means a "golden ear", although my frequency range is higher than average (I can still hear 18 kHz, which at my age is pretty good :)). That's why, for me, I know that I would get no boost at all, so any investment is worthless.
 
bkilian,

If I came to you as a console designer and said: based on your experience working with developers and their issues with audio on the Xbox platforms, could you (a) describe the pros and cons of the current audio subsystems on consoles, (b) explain what your ideal (within a reasonable budget) console sound hardware would look like on a future console, and (c) describe what kind of software tools/changes you would implement to improve audio in games, movies, etc. on consoles?

Bonus question: (d) if you could go nuts with audio (think targeting the top 5% of consumers who appreciate good audio and are willing to pay a modest premium for it, but not the 1% of audio nuts who want no-holds-barred quality at nearly any cost), what would you do on a console, and do you think there is a market for it?
I wish I could give you answers, but I really can't, sorry. I'd find it hard to stay away from revealing things I shouldn't.

A few things that our developers have put on their wishlists: more voices, built-in effects like reverb and Fourier transforms (so they can do effects in frequency space), better control over looping, higher-quality sample rate conversion (we currently only provide linear), and lower pipeline latency. Oh, they'd also like native OGG and MP3 decoding.
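For anyone wondering what "we only provide linear" means in practice, here's a generic sketch (not the actual Xbox API; numpy/scipy standing in) comparing linear-interpolation sample rate conversion with a polyphase conversion for 44.1 kHz to 48 kHz:

```python
# Linear interpolation is cheap but leaks imaging/aliasing energy;
# a polyphase (windowed-sinc) converter has a much cleaner stopband.
import numpy as np
from scipy.signal import resample_poly

sr_src, sr_dst = 44100, 48000
t = np.arange(sr_src) / sr_src
x = np.sin(2 * np.pi * 15000 * t)                     # a high tone shows the difference most

# Linear interpolation (what the post says is currently provided)
t_dst = np.arange(int(len(x) * sr_dst / sr_src)) / sr_dst
linear = np.interp(t_dst, t, x)

# Polyphase conversion: 48000/44100 = 160/147
poly = resample_poly(x, up=160, down=147)

print(len(linear), len(poly))   # both ~48000 samples of output
```

Comparing the spectra of `linear` and `poly` shows the spurious tones the linear version introduces, which is exactly why devs ask for a better converter in the pipeline.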
 
Anything below 88 kHz can create feedback frequencies that will alter the sound
This is not of any great concern. Current audio tech works just fine, and if there's actually any problem here, it's been dealt with satisfactorily.

Less than 64-bit floats is also less than ideal.
Are you talking about samples here, or about the mixing stage? Because claiming that 64-bit float samples are the only "ideal" solution is, of course, completely ludicrous.

Digital synths of today can do real wonders, but are quite expensive in both $$$ and CPU. I ASSURE YOU THEY DO NOT SOUND CHEAP AT ALL, quite the contrary.
Yeah, but we can't have every console owner be forced to license synth software and instrument banks worth potentially thousands of dollars just to get good-sounding MIDI audio. ;) I think what Shifty was saying is that any standard, reasonably priced General MIDI instrument bank is going to sound pretty crap playing orchestral music (which is true, btw).


Actually they do notice; many times they don't know they do ;)
You got some independent studies to link to, proving that claim? :)

That has a lot to do with several factors.

One of them is habit: if you are used to lo-fi sound, it will take some time to really notice what is better. Hearing can be really hi-fi, but it needs habit/training; this is the least important reason, though.
That seems like placebo or (self-)indoctrination to me, TBH. Again, any independent studies to back up this vague claim?

I am quite sure that with Kinect 2.0 you could get a nice 3D reading of your room and track people in it, so it would all be automatic ;)
A Kinect reading would treat everything scanned as a hard surface, as even a 3D camera can't properly identify fabrics and foam pillows and so on. It wouldn't create an accurate representation. Better than nothing perhaps, but you seem like such a stickler for details (64-bit floats... indeed!) that I'm surprised you'd settle for setting the bar this low! :)

And yes, it is hard to explain, but once you try it... And you actually don't need high-end hi-fi systems to hear the difference, although it would be much better with those.
Again, independent, properly conducted studies rarely back up such claims. It's mostly just in the heads of the owners of such hi-fi gear, not anything tangibly measurable in the real world. Your language indicates as much, btw, like when you speak of the need to "train" yourself to hear differences and so on. I doubt this is even possible.

For one, I am not fortunate enough to have a sound system like the one I would like.
Well, good for you! Honestly. I too love getting exactly the gear that I want. :D
 
I heard stock orchestral music played back in a PS3 game (some James Bond 007 title, not a very inspiring game).
At something like 128K MP3 quality it sounded really bad. You need a high bitrate (a few MB more RAM if you load the piece) and it will sound better.
What's funny is that I had listened to the same thing in FLAC on my PC before. The difference is like the one between a 78 rpm record and a home cinema setup. Almost. :LOL: (The 78 rpm with its needle, mechanical spin, and horn doesn't sound like YouTube.)
 
If that 128 kbit/s MP3 REALLY sounded anything like a 78 RPM record on an old phonograph, it must have been the world's worst-encoded MP3 ever. ;) It's certainly not the norm for 128 kbit/s MP3 audio though, that's for sure.
 
It's mad hyperbole for 80% of the sound data being lost (and covered in artifacts, as if covered in mud). You know, there are something like 50 musicians playing, not three and a singer; it's the most pathological case you can dream of.
 
Anything below 88 kHz can create feedback frequencies that will alter the sound, unless your sound processing has anti-aliasing, which most does nowadays. Less than 64-bit floats is also less than ideal. But it would only be problematic for memory; CPU-wise it is quite low.
Hmm. I edit audio in Audacity in 32-bit floats, and it doesn't sound terrible, or even in any way perceptibly bad. So I'm not sure what you count as less than ideal, unless 'ideal' is a scientific ideal well beyond the typical human's perception. ;)

Digital synths of today can do real wonders, but are quite expensive in both $$$ and CPU. I ASSURE YOU THEY DO NOT SOUND CHEAP AT ALL, quite the contrary.
This is somewhat tangential. By 'cheap' I was referring to synthetic orchestras. Even the good stuff (Vienna), which can sound very convincing, can still sound very artificial, and that's the quality of many game scores. For synthetic instruments, and even some simulated natural instruments (I own TruePianos and it is very impressive), modern synths are superb, yes, but that's where I said 'limited', because synthetic sounds only take you so far across musical styles.

Within the confines of a MIDI or mod track in a game, sophisticated wave modelling or massive sample sets aren't an option. A MIDI module will play off a tiny soundfont with exceptionally fake-sounding orchestral instruments. A mod will be limited to the small sample set used to construct it, with its looped sample regions. The quality you can hit can sound good, there's no denying, but it won't do for epic scores in the same way it'll suffice for a puzzler or shooter on a handheld.

One of them is habit: if you are used to lo-fi sound, it will take some time to really notice what is better. Hearing can be really hi-fi, but it needs habit/training...
This argument seems counterproductive to me. What you're saying is that Joe Public can't tell the difference between $200 and $20,000 hardware until they are trained. Then, once trained, the cheap stuff they used to be happy with now makes them miserable, and they have to invest in expensive stuff to get enjoyment from their audio. Isn't that an argument to keep everything lo-fi and everyone happy?

2) Time of gaming - it is proven that lo-fi sound gets tiresome much faster than the same sound in hi-fi, even if we don't notice it; that would mean a better gaming experience and would contribute to longer gaming sessions, even without people noticing it.
Aren't most core gamers able to play for hours at a time? I don't see the evidence for gamers getting tired of terrible sound. On the 8-bit machines with their infinitely repetitive tunes, young gamers still played for hours at a time!

If you upgrade the graphics even without needing to, then why not the sound too?
Because people can perceive the improvement in graphics. When graphics are at a point that people can't perceive any improvement (from 1 million polys a character to 10 million), we won't bother advancing them. When people can't perceive the difference between one audio quality and another (CD and SACD, or CD and MP3s), then there's no reason to progress them. You just consume more resources for no benefit.


As advice, I suggest people try listening only to hi-fi sound (lossless, or good HQ YouTube videos) for a while (e.g. two weeks) and then try to go back; you will notice, I am sure of that.
I can hear the difference when listening to the music, but when it's background audio to a game, I'm not that focussed on the audio.

I was playing Sniper Elite yesterday, and the real problem wasn't audio sample quality or the repetitious nature of the explosion samples playing in the background (I'm sure they're reusing the same sound over and over, but I haven't noticed, unlike repeating textures and models), but naff positional audio (on my monitor's cheap stereo speakers). It's not possible to identify where an enemy is, or how far away. I want positional audio, on headphones if necessary, to make immersion realistic. That's the glaring area that needs to be advanced, IMO. Higher audio quality seems a waste of processing and resources when it doesn't sound in any way bad with what we've got.
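For what it's worth, even a crude delay-plus-level approach over headphones gives some sense of direction. Here's a rough sketch (my own constants and angle convention, nowhere near a real HRTF):

```python
# Toy headphone positioning: interaural time delay (Woodworth-style estimate)
# plus a level difference for the far ear. A real engine would convolve with
# measured HRTFs instead.
import numpy as np

def position_mono_source(mono, azimuth_deg, sr=48000, head_radius=0.0875, c=343.0):
    az = np.radians(azimuth_deg)                       # 0 = front, +90 = right
    itd = head_radius / c * (abs(az) + np.sin(abs(az)))  # interaural time delay estimate
    delay = int(round(itd * sr))
    near = mono
    far = np.concatenate([np.zeros(delay), mono])[:len(mono)] * 0.6   # delayed + attenuated
    left, right = (far, near) if azimuth_deg > 0 else (near, far)
    return np.stack([left, right], axis=1)             # (samples, 2) stereo

sr = 48000
shot = np.random.randn(sr // 4) * np.exp(-np.arange(sr // 4) / (0.02 * sr))  # gunshot-ish burst
stereo = position_mono_source(shot, azimuth_deg=60)    # place the source to the right
```

Proper HRTF filtering and distance cues (attenuation, air absorption, early reflections) are what make it convincing, but even this toy version makes the left/right localisation complaint above very audible.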
 