Audio Processing on the PS3 and Xbox 360

Discussion in 'Console Technology' started by Asher, Nov 13, 2007.

  1. Phil

    Phil wipEout bastard
    Veteran

    Joined:
    Nov 19, 2002
    Messages:
    4,786
    Likes Received:
    377
    Location:
    127.0.0.1
    :shock: That's a pretty ignorant and arrogant statement!

    Differences in fidelity are most certainly better heard on a better system. You can't expect to compare both systems through a 20-year-old hi-fi system and still hear a difference. And no, the number of channels your system supports doesn't automatically make it expensive, qualified, or good for such comparisons.

    If an audiophile (according to your words) can distinguish a difference, I think the far more interesting question is why there is a difference (which leads to the question of the actual sound processing), rather than outright dismissing it and calling them detached from reality.

    I also find it strangely amusing how you come into this thread asking whether there are differences, then someone claims there is one and you dismiss them, and then you go on to answer your own question on the basis that you can't hear a difference, so there must not be much of a difference in the audio processing.
     
  2. MonkeyLicker

    Newcomer

    Joined:
    Feb 7, 2002
    Messages:
    129
    Likes Received:
    0
    The problem is getting an honest opinion. People who have just spent thousands on their audio setup are much more likely to claim there is a big difference.
    People can see and hear what they want to.
    That's not to say there isn't a difference, though.
     
  3. patsu

    Legend

    Joined:
    Jun 25, 2005
    Messages:
    27,709
    Likes Received:
    145
    It doesn't have to be that way. As an observer of my friend's system, I can still tell (him) whether it's worth it or not. :)
     
  4. -tkf-

    Legend

    Joined:
    Sep 4, 2002
    Messages:
    5,634
    Likes Received:
    37
    I guess it would be technically possible and could be used for a soundtrack, for example.

    For the actual sound effects, however, it's no use, and classic compression like MP3, WMA or AAC would be used instead.

    Why?

    Imagine you have an explosion in front of your character and a scream behind it. The game would place the explosion in the front speakers and the scream in the rear speakers. To get this through to your speakers, the game encodes the sound as Dolby Digital or DTS; your receiver then decodes the stream and routes the sound through the speakers.

    So DD and DTS are both compression and a way to get multichannel information across to your receiver/decoder over a bandwidth-limited connection.
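    To make the mixing-then-encoding split above concrete, here's a minimal sketch in plain Python. The channel names and gain values are purely illustrative assumptions, not any real encoder API: the game first mixes its mono effects into discrete 5.1 channels, and only the finished multichannel frame would be handed to a DD/DTS encoder for transport.

```python
# Sketch: mix mono effects into discrete 5.1 channels before encoding.
# Channel layout and gains are illustrative assumptions, not a real API.

CHANNELS = ["FL", "FR", "C", "LFE", "SL", "SR"]

def mix_into_channels(events, frame_len):
    """events: list of (mono_samples, {channel: gain}) pairs."""
    frame = {ch: [0.0] * frame_len for ch in CHANNELS}
    for samples, gains in events:
        for ch, g in gains.items():
            for i, s in enumerate(samples[:frame_len]):
                frame[ch][i] += g * s
    return frame

# An explosion in front of the character, a scream behind it:
explosion = [0.8, 0.6, 0.4, 0.2]
scream = [0.5, 0.5, 0.5, 0.5]
frame = mix_into_channels(
    [(explosion, {"FL": 0.7, "FR": 0.7}),
     (scream, {"SL": 1.0, "SR": 1.0})],
    frame_len=4,
)
# frame now holds one multichannel frame; a (hypothetical) DD/DTS
# encode step would then squeeze it down for the bandwidth-limited link.
```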
     
  5. Zaphod

    Zaphod Remember
    Veteran

    Joined:
    Aug 26, 2003
    Messages:
    2,267
    Likes Received:
    160
    Going off topic here, but (while I have no idea what's going on with that disc in particular) most of his 'explanation' made no sense (albeit no sense with buzzwords). Dialog normalization does not affect dynamic range, nor does it impact 'fidelity' (it just expresses the level of dialogue as how much lower it is than the peak). The dialog normalization value is also used as the reference (i.e. the 'center' of compression: sounds under it are raised while sounds above it are lowered) for Dynamic Range Control, but this is user-selectable on the decoder end. In that case, all he'd have to do is turn it off (the DRC setting wouldn't have affected the PCM track).
     
  6. -tkf-

    Legend

    Joined:
    Sep 4, 2002
    Messages:
    5,634
    Likes Received:
    37
    Did you read the complete audio review?

    He thinks that the decoder decodes the stream and then applies DRC as it's set in the stream. At first I would have thought the DRC would be applied as part of the decoding process (one of the reasons I wanted streams to be decoded on my receiver instead of the player).

    Whatever the reason is, he could clearly hear a difference.
     
  7. Zaphod

    Zaphod Remember
    Veteran

    Joined:
    Aug 26, 2003
    Messages:
    2,267
    Likes Received:
    160
    Yes, of course. That's why I said it makes no sense. Off topic ahead. Sorry everyone.

    DRC isn't 'decoded' as such. It's just some flags in the bitstream telling the decoder which volume levels to boost and which to clamp relative to the DN setting (depending on the preset used during encoding). Turn it off on the decoder side and you get the full dynamic range regardless of the DN setting. DRC is an entirely separate concept from DN (although it relies on the DN value). The DN adjustment is no different from turning the volume knob on your amp, so all that "rewriting every bit word" is just mumbo-jumbo.
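    As a rough illustration of that boost/clamp behaviour (the reference level and ratios below are invented for the example, not real Dolby presets): in this simplified model DRC is just a gain computed relative to the DN reference, and disabling it on the decoder side is a no-op.

```python
# Illustrative DRC model: levels below the dialnorm reference are boosted
# toward it, levels above are attenuated toward it. The 0.5 ratios and
# the -27 dBFS reference are made-up example values, not Dolby presets.

def drc_gain_db(level_db, dialnorm_db, boost_ratio=0.5, cut_ratio=0.5,
                enabled=True):
    """Return the gain (in dB) the decoder applies to a sound at level_db."""
    if not enabled:
        return 0.0          # DRC off: full dynamic range preserved
    diff = dialnorm_db - level_db
    if diff > 0:            # quieter than the reference -> boost
        return diff * boost_ratio
    else:                   # louder than the reference -> attenuate
        return diff * cut_ratio

ref = -27.0  # illustrative dialnorm reference (dBFS)
print(drc_gain_db(-47.0, ref))                # quiet sound boosted: +10.0 dB
print(drc_gain_db(-7.0, ref))                 # loud sound cut: -10.0 dB
print(drc_gain_db(-7.0, ref, enabled=False))  # DRC off: 0.0 dB
```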

    If he didn't want DRC, he should have turned it off for a level-playing-field comparison. If it's still applied, he should complain to the manufacturer of his hardware (player or amp), not to the studio or Dolby.

    Another possibility is that they (the studio) applied a screwy DN value when mastering the TrueHD track (or a correct one based on the actual audio levels) that the reviewer didn't anticipate. If that's the case, when he says that he "level compensated" he might not have, since the DN value wouldn't have been what he thought it would be. (I'm guessing he used 4 dB as the 'universal value', but some tracks are different because they should be, and others are different due to screwups.)

    To sum up: if he actually heard a qualitative difference in 'fidelity', then the reason he gives cannot be the cause, as that would only affect playback volume. Discounting placebo, a lower-fidelity track could be either 1) because the studio actually encoded it from an inferior master; 2) due to user error; or 3) due to hardware error.

    Either way, none of these reasons for the perceptual differences has anything to do with lossless compression in general (or TrueHD in particular).
     
  8. Asher

    Regular

    Joined:
    Jul 1, 2005
    Messages:
    976
    Likes Received:
    10
    Location:
    Seattle, WA
    Yes, but can't the AC3 files be stored individually for each sound, then muxed together?
     
  9. Asher

    Regular

    Joined:
    Jul 1, 2005
    Messages:
    976
    Likes Received:
    10
    Location:
    Seattle, WA
    It's a technical forum, in the Console Technology subforum. I'm looking for technical discussion, not what people claim they can hear since it's entirely subjective (to say the least). I've already clarified this several times.

    It's fantastic if you prefer Vinyl or you think you hear a difference on your 11,000 quid system, but what I'm interested in is how audio processing on the PS3 / Xbox 360 is implemented and not what audiophiles think sounds best on their systems.
     
  10. -tkf-

    Legend

    Joined:
    Sep 4, 2002
    Messages:
    5,634
    Likes Received:
    37
    It would be a stupid thing to do, since you would dismiss better compression technology, and you would still need to decode every file unless you made a super-custom engine that could mix compressed DD files directly.
     
  11. Asher

    Regular

    Joined:
    Jul 1, 2005
    Messages:
    976
    Likes Received:
    10
    Location:
    Seattle, WA
    I don't see why it's that outlandish to create such an engine.

    I'm also not convinced it's stupid, since I'm willing to bet a VAST majority of people use DD for surround sound in games. To me, it's less stupid than encoding sound twice.
     
  12. Zaphod

    Zaphod Remember
    Veteran

    Joined:
    Aug 26, 2003
    Messages:
    2,267
    Likes Received:
    160
    That would make sense if we were just talking about 'dumb' mixing of samples together. However, the audio will in the vast majority of cases be filtered and transformed by DSP effects and/or through X3DAudio. Thus, you need to 'encode sound twice' anyway, so it's better to store your original samples in a higher-quality/more efficient format than AC-3.
     
  13. Asher

    Regular

    Joined:
    Jul 1, 2005
    Messages:
    976
    Likes Received:
    10
    Location:
    Seattle, WA
    Ahhh, right. Forgot about DSP effects.
     
  14. kyleb

    Veteran

    Joined:
    Nov 21, 2002
    Messages:
    4,165
    Likes Received:
    52
    It would also make sounds like the pops of enemies shooting guns at you come from where the developer thought those enemies might be, rather than where they actually are. The sound samples used to create the surround mix are mono; the surround sound has to be calculated in real time based on the location of the sources.
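    A toy version of that real-time calculation, sketched in Python: each frame, speaker gains are derived from the source's actual azimuth relative to the listener. The equal-power quad panner below is a simplified assumption for illustration, not any console's real panning code.

```python
import math

def pan_quad(azimuth_deg):
    """Map a source azimuth (0 = dead ahead, 180 = behind) to FL/FR/RL/RR gains."""
    az = math.radians(azimuth_deg % 360.0)
    # Front/rear balance: cos^2 + sin^2 = 1 keeps total power constant.
    front = math.cos(az / 2.0) ** 2
    rear = math.sin(az / 2.0) ** 2
    # Left/right balance derived from the sine of the azimuth.
    right = (math.sin(az) + 1.0) / 2.0
    left = 1.0 - right
    return {"FL": front * left, "FR": front * right,
            "RL": rear * left, "RR": rear * right}

print(pan_quad(0))    # dead ahead: split between FL/FR, nothing in the rear
print(pan_quad(180))  # directly behind: rear speakers only
```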
     
  15. Hazuki Ryu

    Regular

    Joined:
    Nov 3, 2007
    Messages:
    373
    Likes Received:
    0
    When it comes to sound, it's really difficult to tell the differences unless you're testing on the exact same system in the exact same room, because sound systems differ a lot and so do room acoustics.

    I've never heard the PS3 on my sound system. I have an average-quality system: B&W speakers, two large floorstanders in the front and small ones in the back, which I carefully chose to go well with my receiver, plus a Yamaha subwoofer, which was the best thing I could buy for the price.

    The Xbox 360 to me sounds better than many, many movies. Don't ask me why, I don't know; I think the bitrate is the same but the sound is just better. Also, Xbox 360 games sound a lot better than the original Xbox games.

    Edit: (Xbox games' bitrate is equal to 360 games', right?)
     
    #55 Hazuki Ryu, Dec 1, 2007
    Last edited by a moderator: Dec 1, 2007
  16. kyleb

    Veteran

    Joined:
    Nov 21, 2002
    Messages:
    4,165
    Likes Received:
    52
    The consoles output at the same fixed bitrate, but the bitrate of the samples and processes used to create that output can vary greatly depending on how much emphasis a developer puts on sound quality.
     
  17. corysama

    Newcomer

    Joined:
    Jul 10, 2004
    Messages:
    190
    Likes Received:
    185
    Background: I know a lot about the internals of both consoles and a lot about implementing game audio, but I have not implemented game audio on either of these consoles and I have not spent much time listening to either of them.

    Here's the deal: both consoles have sufficient hardware and software ability to sound friggin' awesome. As far as hardware is concerned, game audio is a solved problem as of this generation, and the requirements are not going to increase significantly in the foreseeable future. Game "audio engines" and tools still have a long way to go, but the primary difference at this point is how much effort is put into the audio production of each game.

    If you insist that there must be a winner, I'll have to say that the PS3's SPUs, Blu-ray and HDMI each give it an edge. Just try to keep in mind that the clarity difference of HDMI is in the last few audiophile percentage points, the quantity difference of Blu-ray over DVD is likely to be bottlenecked by production budget before disk space, and the capability difference of the SPUs is analogous to listening to 100 conversations simultaneously vs. 200 conversations: when there is a difference, it's too busy for you to make out anyway.

    I expect the confusion around the PS3 using "uncompressed audio" likely comes from out-of-context quotes regarding the HDMI output. All of the consoles decompress to PCM before doing any audio processing. WMA/MP3 compression allows you to have 4-8x as many samples in memory and 4-8x less disk bandwidth when streaming. You always want to use it. Not using compression would be a sign of either processor weakness (not the case) or poor judgement on the part of the audio director (not likely).
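    That 4-8x figure is easy to sanity-check. The PCM baseline (48 kHz, 16-bit mono) and the lossy bitrates below are my own assumed values, not numbers from the post:

```python
# Back-of-the-envelope check of the 4-8x memory/bandwidth claim:
# one second of 48 kHz 16-bit mono PCM vs. common lossy bitrates.

PCM_BITS_PER_SEC = 48_000 * 16          # 768 kbit/s for mono PCM
for codec_kbps in (96, 128, 192):       # typical lossy bitrates (assumed)
    ratio = PCM_BITS_PER_SEC / (codec_kbps * 1000)
    print(f"{codec_kbps} kbit/s -> {ratio:.1f}x more audio in the same memory")
# 96-192 kbit/s lands you at 8x down to 4x, matching the quoted range.
```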

    Since I know that someone is going to try to call me on the "requirements are not going to increase" claim, I'll add some clarification:
    We can already play hundreds of sounds through 5.1 or more speakers at 48 kHz with heavy DSP work. You might dream of 10.2 at 96 kHz, but that's about as far as audiophiles can go without being completely absurd, and it's only a 4x multiplier. Consoles tend to do 10x CPU multipliers between generations.
    You could reasonably claim that a lot of sound should be generated in real time from the physics simulation or from text-to-speech or something similar, and that would up the requirements significantly. My claim is restricted to decompression, signal processing and playback.
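    For what it's worth, the "only a 4x multiplier" arithmetic above checks out if you count discrete channels times sample rate:

```python
# 5.1 at 48 kHz vs. a hypothetical 10.2 at 96 kHz: samples per second
# scale with (channel count) x (sample rate).

current = 6 * 48_000     # 5.1 -> 6 discrete channels at 48 kHz
dream = 12 * 96_000      # 10.2 -> 12 discrete channels at 96 kHz
print(dream / current)   # the 4x multiplier mentioned above
```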
     
  18. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    44,106
    Likes Received:
    16,898
    Location:
    Under my bridge
    On the subject of processing sound: what about wave traversal through the scene, taking materials into consideration? Ordinarily, broad hacks are used to simulate the environment. I imagine acoustic environment modelling will always be a back-seat choice, and so never more demanding than the devs care to allocate processing resources to. But even then, in your opinion, can a sophisticated audio processing system be applied now at a level that we could happily consider as not needing to be improved on, and without eating too much processing resources?
     
  19. ShootMyMonkey

    Veteran

    Joined:
    Mar 21, 2005
    Messages:
    1,177
    Likes Received:
    72
    That's an odd place to put the blame. Normalization is at fault? You'd think PCM tracks would also be normalized. If not, then the difference he'd be hearing would be akin to the loss/masking you might get by turning the volume knob a little too much one way. Short of certain processing software/hardware having odd definitions of volume adjustment and normalization, it just seems like he's reaching for some totally mechanical cause in order to take his own subjectivity out of the picture.

    Moreover, there are no small differences, apparently. Everything is night and day even if the reality is more like night and... 60 seconds later that night. It's not really all that different from fanboyism. Hearing a difference is the same as justifying their purchase, so it's unacceptable for there not to be a difference. Even when listening to the same track and being told that one listening is an MP3 and the other isn't, they'll hear a difference almost all the time.

    Thank GOD someone else said it. Unfortunately, you could say this a thousand times and not a single self-professed audiophile would listen. People seem to forget that there is a pretty wide range of audio applications where a wide range of bitrates still produces good or even perfect results. It's one thing to say that background music needs high bitrates and high quality. It's another thing to say that you must use uncompressed audio for bullet sound effects or voiceovers. They simply don't need it. A standalone voiceover of a single human voice hardly needs a high sample rate to sound near perfect. There are all sorts of sound effects where even a 16 kbps MP3 would be perceptually transparent. Moreover, when you play a dozen sound effects on top of each other with dialogue and music in the background at relatively low volume, things get muddled anyway. You can still argue that there's a difference, but what matters in the end is less how much of a difference there is in absolute terms than how much it *makes* a difference. What does it change? People notice audio, but it tends to disappear from the radar after a reasonable-length stint of playing the game.
     
  20. patsu

    Legend

    Joined:
    Jun 25, 2005
    Messages:
    27,709
    Likes Received:
    145
    While we are at it...

    What about voice input for gaming and general use? Does anyone know if that Eyedentify game is still alive? (I have my doubts, but some confirmation either way would be nice.)
     