Will the PS3 be able to decode H.264/AVC at 40 Mbps?

Will the PS3 decode H.264/AVC at 40 Mbps?

  • Yes, the PS3 will decode H.264/AVC at 40 Mbps

    Votes: 86 86.9%
  • No, the PS3 won't decode H.264/AVC at 40 Mbps

    Votes: 13 13.1%

  • Total voters
    99
Is sound processed by BD-J?
I thought BD-J/BD-Live only handles the interactivity and the network access? So for video decoding, BD-J/BD-Live is not really used unless there is interactive content.

No. I'm sorry about the bad wording on my part. The sound is of course separate for the main feature. There might be some correlation with the IME stuff, but I'm not too sure.

Correct about BD-J, but when it is called on, how will it affect the AVC HP encoded scene that could be getting decoded at its peak bit rate? Will you see stutter, will there be skipping, frames dropped? Or will it all be smooth as silk? We're all just speculating, but these are questions that are important.

For BD-J to be called on and having to respond instantly and mix in with the feature playing, I'd assume it'd take resources? It'll also sit in the background and gather info to keep track of progress per chapter (when/if that feature is implemented). Basically, you go into scene selection and can see a progress bar of the current chapter's scene time or that of the whole feature.
 
If J2ME can run on a cellphone, it should run on a PPE thread or a spare SPE core (to display a small UI for subtitle or audio-track changes on the fly). If it's displaying a full-screen menu, then I assume the movie would pause (store the playback time to resume later). So we can use more of the PPE "immediately".

If it's a full-blown interactive movie title, it can be handled at the design stage (by not encoding the movie at such a high bit rate).

So far, we have not seen any evidence why CABAC would run badly on Cell given that it can be parallelized (the paper in my link ran a similar algorithm on a 4096-CPU machine with each CPU assigned to one pixel at a time).

Furthermore, if SIMD can be used in the other stages of AVC HP decoding, Cell has even more potential to pull away from traditional CPUs.
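To illustrate the contrast (a sketch of mine, not any shipping decoder): H.264's luma half-pel interpolation uses the fixed 6-tap filter (1, -5, 20, 20, -5, 1)/32, and each output pixel depends only on a small input neighborhood, so it maps naturally onto SIMD:

```c
#include <stdint.h>

/* Illustrative sketch only. H.264 half-pel luma interpolation: each output
 * pixel is independent of the others, so a SIMD unit (AltiVec, SSE, or an
 * SPE) can compute 8-16 of them per instruction -- the opposite of CABAC,
 * where each symbol depends on the one before it.
 * Caller must provide 2 pixels of padding on the left, 3 on the right. */
static void halfpel_h(const uint8_t *src, uint8_t *dst, int width)
{
    for (int x = 0; x < width; x++) {        /* independent iterations */
        int v = src[x - 2] - 5 * src[x - 1] + 20 * src[x]
              + 20 * src[x + 1] - 5 * src[x + 2] + src[x + 3];
        v = (v + 16) >> 5;                   /* round, divide by 32 */
        dst[x] = (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));  /* clip */
    }
}
```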
 
Do you think if Amir was making stuff up that:

I think he uses the tactics of a very intelligent and smart man. He posts technical details and facts that are hard to counter. For example, he claims that VC-1 is better than h.264 by quoting his company's results when they tested VC-1 against h.264 :) He tries very hard to look "objective", but I have yet to find him saying anything but positive things about his own products (doh? :)).

He ignores that MPEG-2 @ 28 Mbit is "just as good" as his VC-1 codec, which it seems to be if you compare D-VHS releases with the same HD-DVD releases.

For me, VC-1 is something totally new for Microsoft; its first real commercial use (HD-DVD) is actually good. Not average or anything, but good. Of course, they invested a lot of time and money in the codec development leading up to VC-1.
 
The Cell design didn't have 1080p decoding as a primary focus. Sony knew that with MPEG-4 AVC they were going to encounter serious obstacles.

Are you saying that Sony bet the farm on MPEG-2 because they knew that the Cell processor couldn't handle h.264 HP?
 
I think he uses the tactics of a very intelligent and smart man. He posts technical details and facts that are hard to counter. For example, he claims that VC-1 is better than h.264 by quoting his company's results when they tested VC-1 against h.264 :) He tries very hard to look "objective", but I have yet to find him saying anything but positive things about his own products (doh? :)).

He'll be the first to admit his bias, but there are more than enough people on avsforum to challenge him and keep him in check. Even the slightest of slips turns into a 100+ post thread, but he does handle himself well, even through those. We've seen examples of AVC HP in the Japanese HD DVD titles, and they are not on equal footing with their VC-1 counterparts. Whether it's the encoder, the compressionists, the hardware in the Toshiba, who knows? But they simply are not. That's using the Toshiba AVC HP encoder. We'll see what the Panasonic AVC HP encoder can do at some point.

Getting back to character: on the other side you have "talkstr8" and "rio", who refuse to go on record about their affiliation when it's already quite clear. It's like going into a car dealership: some guy comes up and starts talking you up about the car you're standing next to but refuses to admit that he's a salesman for the dealership. Still, I respect their opinions, but affiliation disclosure would be fair.

He ignores that MPEG-2 @ 28 Mbit is "just as good" as his VC-1 codec, which it seems to be if you compare D-VHS releases with the same HD-DVD releases.

He counters it with: VC-1 can achieve the same thing at less than 15 Mbps (the upcoming Batman Begins being 13.xx ABR) and lower peaks. Remember, this isn't D-VHS. You don't have 50 GB of space, so you're forced to adapt. Not many will refute that, given enough bits, MPEG-2 can look great, but those bits take up space. Space which you might run out of real fast. VC-1 also has room to grow, so it'll likely keep getting better. Even the BR guys will admit that MPEG-2 has had its time. Of course, they believe in AVC HP, but they reach the same conclusion about MPEG-2. MPEG-2 support is, however, important to have, to bring over extras that don't need to be converted for HD. Smaller independent studios who don't have the time/money/personnel to help develop the new codecs might opt to stick with MPEG-2 for their films (HDNet comes to mind already) until a plug-and-play solution is available to them.

For me, VC-1 is something totally new for Microsoft; its first real commercial use (HD-DVD) is actually good. Not average or anything, but good. Of course, they invested a lot of time and money in the codec development leading up to VC-1.

There's a pretty detailed history of VC-1 in one of his posts. I'll let you look it up; I suck at the search function. Of course, read it with the implied bias. :)

I personally want the best for my money. I don't care if it's HD DVD or BR, AVC HP or VC-1. As long as it's good HD at a good price, I'll be happy.
 
I'm pretty sure it couldn't have been CABAC given the time-frame, but I need to ask. And besides, it was run on a 300 MHz CPU, which happens to suck at random memory accesses (albeit a bit less than the new IBM CPUs).

Which does bring me to a question though - are there any CABAC papers available that I could use to educate myself about why this algorithm should suck so badly on Cell? I've googled a bit and come up empty-handed so far.

Faf, dunno how helpful, but these are what I was using to try and understand CABAC... hope it's of some use.

(PDF) http://bs.hhi.de/~marpe/download/cabac_icip02.pdf#search="Context Adaptive Binary Arithmetic Coding"

(PDF) http://www.rgu.ac.uk/files/h264_cabac.pdf#search="Context Adaptive Binary Arithmetic Coding"

http://www.bilsen.com/index.htm?http://www.bilsen.com/aic/CABAC.htm

http://www.freepatentsonline.com/6876317.html

Also, Robert, could you actually link to these shootouts you keep referencing?
 
If J2ME can run on a cellphone, it should run on a PPE thread or a spare SPE core (to display a small UI for subtitle or audio-track changes on the fly). If it's displaying a full-screen menu, then I assume the movie would pause (store the playback time to resume later). So we can use more of the PPE "immediately".

One of the big features of the new interactivity standards is that menus don't have to pause the movie. Both interactivity standards allow for menus, animations, videos, and other objects that float over the movie, as well as picture-in-picture, all composited and rendered at 1080p.

So far, we have not seen any evidence why CABAC would run badly on Cell given that it can be parallelized (V3's linked paper ran a similar algorithm on a 4096-CPU machine with each CPU assigned to one pixel at a time).

Which paper would that be?

This one that VE3 posted? http://www.ac.usc.es/arquivos/articulos/2004/gac2004-c7.pdf.gz

No, it describes a hardware implementation of an encoder.

This one that you posted? http://www.cs.duke.edu/~jsv/Papers/HoV95.pdcfull.pdf

No, it doesn't apply.

First, Huffman coding is not CABAC; CABAC is Context Adaptive Binary Arithmetic Coding.

Second, what the paper proposes changes the results that come out. The parallel arithmetic coding algorithm it proposes basically takes an operation that generates one arithmetic-coded stream and changes it into one that outputs several arithmetic-coded streams instead, which means the decoder has to know this in order to successfully decode the result. I suspect this would render the output not bit compatible with the CABAC encoder defined in H.264, and hence no good for the PS3.

Third, I see no evidence that the parallel arithmetic coding algorithm proposed in the paper is Context Adaptive -- CABAC. If I'm understanding it correctly, in CABAC you have to update your probability model after each processed symbol in sequence -- the "adaptive" part. The algorithm in the paper uses a pre-computed probability model, and hence is not adaptive.

If you process them in a different order, your probability model at any given symbol will be different, and your output will be wrong. Hence the extreme difficulty in parallelizing CABAC.
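To make the serial dependency concrete, here's a toy adaptive binary arithmetic decoder (a simplified sketch of mine, not the real H.264 engine; the state layout and update rule are made up for illustration). Every call reads and rewrites the coder interval and the context's probability, so symbol N+1 cannot start until symbol N has finished:

```c
#include <stdint.h>
#include <stddef.h>

/* Toy adaptive binary arithmetic decoder -- a simplified stand-in for
 * CABAC, just to show the dependency chain. */
typedef struct { uint32_t prob; } Context;   /* P(bit==0) out of 1<<16 */

typedef struct {
    uint32_t range, code;    /* current coder interval and input window */
    const uint8_t *buf;      /* bitstream */
    size_t pos;              /* next bit to read */
} Decoder;

static int decode_bit(Decoder *d, Context *ctx)
{
    /* Split the interval according to the *current* context state. */
    uint32_t split = (uint32_t)(((uint64_t)d->range * ctx->prob) >> 16);
    int bit;
    if (d->code < split) {
        bit = 0;
        d->range = split;
        ctx->prob += (65536 - ctx->prob) >> 5;   /* adapt toward 0 */
    } else {
        bit = 1;
        d->code  -= split;
        d->range -= split;
        ctx->prob -= ctx->prob >> 5;             /* adapt toward 1 */
    }
    /* Renormalize: pull in more bits while the interval is too small. */
    while (d->range < (1u << 24)) {
        d->range <<= 1;
        d->code = (d->code << 1)
                | ((d->buf[d->pos >> 3] >> (7 - (d->pos & 7))) & 1);
        d->pos++;
    }
    /* Note: range, code, pos AND ctx->prob were all rewritten -- the next
     * call cannot begin until this one is done. That's the whole problem. */
    return bit;
}
```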

There was a paper floating around where someone managed to implement parallel CABAC hardware that could process *2* symbols at once, but it only worked by guessing what the result of one symbol would be before processing the next, and so it could at best speed things up by just a fraction, for a big cost in hardware.

And to repeat what I said earlier, I am not saying either way whether the PS3 will be able to decode full H.264 HP @ 40 Mbps, because I don't know. I'm sure Sony will get it to work.

All I am saying is that from what I heard, it is likely that it will take a fair amount of engineering effort to optimize the PS3 H.264 BD decoder.
 
Are you saying that Sony bet the farm on MPEG-2 because they knew that the Cell processor couldn't handle h.264 HP?

I'm suggesting that if you backtrack and look at how the codecs evolved, you can speculate why Sony went with such an aggressive blue-laser spec.

Things were murky as far as what would be achievable. MPEG-2 was a known quantity, and Sony was confident they could come up with a blue-laser standard that could provide a high enough bit rate to make MPEG-2 a guaranteed solution.


I'm questioning whether, when Cell was on the drawing board, IBM, Sony, and Toshiba anticipated what h.264 AVC would become.
 
One of the big features of the new interactivity standards is that menus don't have to pause the movie. Both interactivity standards allow for menus, animations, videos, and other objects that float over the movie, as well as picture-in-picture, all composited and rendered at 1080p.

Depends. I mentioned in my previous post that "on-the-fly" interactivity (usually with a small, simple UI and quick interaction, e.g. to change subtitles) should be possible. In specific cases, if I bring up a full-screen, complex menu, I may prefer to pause the movie. I remember someone mentioned that the current BD movies do not use BD-J to achieve menu interactions. So I suppose we will have both non-BD-J and BD-J approaches.

For a full-blown interactive movie (with dynamic, clickable movie objects), it would be addressed at design time (don't encode at such a high bit rate)... probably using BD-J or something else (make it into a game).

aaaaa00 said:
Which paper would that be?

This one that VE3 posted? http://www.ac.usc.es/arquivos/articulos/2004/gac2004-c7.pdf.gz

No, it describes a hardware implementation of an encoder.

This one that you posted? http://www.cs.duke.edu/~jsv/Papers/HoV95.pdcfull.pdf

No, it doesn't apply.

First, Huffman coding is not CABAC; CABAC is Context Adaptive Binary Arithmetic Coding.

Second, what the paper proposes changes the results that come out. The parallel arithmetic coding algorithm it proposes basically takes an operation that generates one arithmetic-coded stream and changes it into one that outputs several arithmetic-coded streams instead, which means the decoder has to know this in order to successfully decode the result. I suspect this would render the output not bit compatible with the CABAC encoder defined in H.264, and hence no good for the PS3.

Third, I see no evidence that the parallel arithmetic coding algorithm proposed in the paper is Context Adaptive -- CABAC. If I'm understanding it correctly, in CABAC you have to update your probability model after each processed symbol in sequence -- the "adaptive" part. The algorithm in the paper uses a pre-computed probability model, and hence is not adaptive.

If you process them in a different order, your probability model at any given symbol will be different, and your output will be wrong. Hence the extreme difficulty in parallelizing CABAC.

There was a paper floating around where someone managed to implement parallel CABAC hardware that could process *2* symbols at once, but it only worked by guessing what the result of one symbol would be before processing the next, and so it could at best speed things up by just a fraction, for a big cost in hardware.

And to repeat what I said earlier, I am not saying either way whether the PS3 will be able to decode full H.264 HP @ 40 Mbps, because I don't know. I'm sure Sony will get it to work.

All I am saying is that from what I heard, it is likely that it will take a fair amount of engineering effort to optimize the PS3 H.264 BD decoder.

Thanks for the details :)

I found the paper you mentioned: http://ics.kaist.ac.kr/ver2.2/publi...Symbol Prediction.pdf#search="parallel cabac"

...but I ran out of time. The above paper handles CABAC two symbols at a time in hardware for a 24% improvement. It also implies that CABAC is a problem for everyone (not just Cell) because of its sequential nature and high computation requirements.

If the algorithm is implemented in software, it seems that we could extend it further (I will have to check back later). A rough sketch of how I read the paper's trick follows below.
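Sketching it in software form (all names made up; decode_bit below is a trivial stand-in for the real CABAC engine, which this emphatically is not): speculatively decode symbol N+1 under both possible outcomes of symbol N, then commit whichever path the real symbol N selects. In the paper's hardware the three decodes share a cycle; in software this does three decodes for every two bits, so it would only pay off if the speculative paths ran on otherwise-idle units.

```c
#include <stdint.h>
#include <stddef.h>

/* Deliberately tiny stand-in state: one adaptive probability plus a bit
 * position. Real CABAC state (interval, many contexts) is much bigger,
 * but the speculate-and-commit structure below is the point. */
typedef struct { const uint8_t *buf; size_t pos; uint32_t prob; } State;

static void adapt(State *s, int bit)             /* model update */
{
    if (bit) s->prob -= s->prob >> 5;
    else     s->prob += (65536 - s->prob) >> 5;
}

static int decode_bit(State *s)                  /* stand-in decode */
{
    int bit = (s->buf[s->pos >> 3] >> (7 - (s->pos & 7))) & 1;
    s->pos++;
    adapt(s, bit);
    return bit;
}

/* Apply the state change decode_bit would make if it returned `bit`,
 * without looking at the stream -- this is the speculation. */
static void assume_bit(State *s, int bit) { s->pos++; adapt(s, bit); }

int decode_two(State *s, int *b0, int *b1)
{
    State if0 = *s, if1 = *s;

    assume_bit(&if0, 0);                 /* pretend bit 0 was a 0 ... */
    int b1_if0 = decode_bit(&if0);       /* ... and decode bit 1 there */
    assume_bit(&if1, 1);                 /* same, pretending it was a 1 */
    int b1_if1 = decode_bit(&if1);

    *b0 = decode_bit(s);                 /* the real bit 0 */

    /* Commit whichever speculative path guessed right. */
    if (*b0 == 0) { *b1 = b1_if0; *s = if0; }
    else          { *b1 = b1_if1; *s = if1; }
    return 2;
}
```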
 
As far as I know, the main CPUs in both the Toshiba HD DVD player (2.4 GHz P4) and the Samsung BD player just run the interactivity and the player's OS. The heavy lifting of the video decoding is all done by the Broadcom decoder chip.
OK thanks. Seems like huge overkill just for the OS and API :oops: but that would be an explanation.
 
I'm suggesting that if you backtrack and look at how the codecs evolved, you can speculate why Sony went with such an aggressive blue-laser spec.

Things were murky as far as what would be achievable. MPEG-2 was a known quantity, and Sony was confident they could come up with a blue-laser standard that could provide a high enough bit rate to make MPEG-2 a guaranteed solution.


I'm questioning whether, when Cell was on the drawing board, IBM, Sony, and Toshiba anticipated what h.264 AVC would become.

Ahh OK, I see. Well, 50 GB should be "enough" for MPEG-2 HD that is equal to h.264 HP or VC-1. But not in the future, when HiDef becomes the norm and extras, behind-the-scenes material, and just general fluff start eating into the space that is needed for the movie. It's an interesting thought you have, because the perceived BR advantage right now is space. If BR had never added the other codecs, they would be in the same ballpark as HD-DVD if you compare space/codecs.
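As a rough back-of-the-envelope check (my numbers): at the 28 Mbit MPEG-2 rate mentioned earlier, a two-hour feature alone takes

$$\frac{28\ \text{Mbit/s} \times 7200\ \text{s}}{8\ \text{bit/byte}} \approx 25\ \text{GB},$$

so it fills a single-layer 25 GB disc by itself, while a 50 GB disc leaves about the same again for lossless audio and extras. VC-1 at 13-15 Mbit would roughly halve that footprint.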
 
That reads like a typo

CABAC would run badly on Cell given that it can't be parallelized

or

CABAC wouldn't run badly on Cell given that it can be parallelized

As written it's nonsensical, and I guess the former was the point being made: that the method can't be parallelized and would be limited to one core, probably the PPE, which would maybe be no more effective than a standard CPU.
 
Ahh OK, I see. Well, 50 GB should be "enough" for MPEG-2 HD that is equal to h.264 HP or VC-1. But not in the future, when HiDef becomes the norm and extras, behind-the-scenes material, and just general fluff start eating into the space that is needed for the movie. It's an interesting thought you have, because the perceived BR advantage right now is space. If BR had never added the other codecs, they would be in the same ballpark as HD-DVD if you compare space/codecs.
In that future you'd probably either be looking at 100 GB BRD disks (are current players required to support this standard?) or an extras disk. It's not like DVDs have been compressed more to fit two disks' worth onto a single disk. LOTR didn't get really naff encoding to fit on one disk.
 
In that future you'd probably either be looking at 100 GB BRD disks (are current players required to support this standard?) or an extras disk. It's not like DVDs have been compressed more to fit two disks' worth onto a single disk. LOTR didn't get really naff encoding to fit on one disk.

There have been a fair share of DVDs where the quality was compromised because there were too many extras. It could be argued that any movie where the average bitrate was below 6-7 Mbit was compromised :)
 
He counters it with: VC-1 can achieve the same thing at less than 15 Mbps (the upcoming Batman Begins being 13.xx ABR) and lower peaks.

Be careful, this is pure PR talk; the result is not equal to something with an ideal high-bitrate situation. That's just my opinion.

Remember, this isn't D-VHS. You don't have 50 GB of space, so you're forced to adapt. Not many will refute that, given enough bits, MPEG-2 can look great, but those bits take up space.

The BR25 is just a scam, in my opinion.

Bye,
Ventresca.
 
I'm suggesting that if you backtrack and look at how the codecs evolved, you can speculate why Sony went with such an aggressive blue-laser spec.

Things were murky as far as what would be achievable. MPEG-2 was a known quantity, and Sony was confident they could come up with a blue-laser standard that could provide a high enough bit rate to make MPEG-2 a guaranteed solution.


I'm questioning whether, when Cell was on the drawing board, IBM, Sony, and Toshiba anticipated what h.264 AVC would become.



Mmm... Standalone players were planned to be released with the same chipset as HD-DVD for a long time. In fact, Sony's "thing" of putting Cell everywhere hardly matters, since there are a lot of other manufacturers who will release Blu-ray players with no intention of putting a Cell chip in there.
Under the circumstances, the rumour that Sony went for MPEG-2 only because Cell apparently has issues with other codecs is a bit much, until we have official word to the contrary.
 
OK thanks. Seems like huge overkill just for the OS and API :oops: but that would be an explanation.

Well, my guess is Linux + big rush to release + rapidly changing specifications = put a big fat overkill CPU and a GB of RAM in there and forget about optimizing for now.

I'm sure the CPUs will be cut down for later-generation players, when they have more time to optimize everything and need to start cost-reducing things.
 
Well, my guess is Linux + big rush to release + rapidly changing specifications = put a big fat overkill CPU and a GB of RAM in there and forget about optimizing for now.

I'm sure the CPUs will be cut down for later-generation players, when they have more time to optimize everything and need to start cost-reducing things.

Didn't they "just" base it on their old top-notch DVD player (which was $200 more :))?
 
Mmm... Standalone players were planned to be released with the same chipset as HD-DVD for a long time. In fact, Sony's "thing" of putting Cell everywhere hardly matters, since there are a lot of other manufacturers who will release Blu-ray players with no intention of putting a Cell chip in there.
Under the circumstances, the rumour that Sony went for MPEG-2 only because Cell apparently has issues with other codecs is a bit much, until we have official word to the contrary.


The ability of Cell to decode in the PS3 is important. It won't matter what standalone players decode, because the standard is only as strong as its weakest link. If a high-bit-rate h.264 AVC movie can't run on the PS3, then movie studios will never release a movie encoded at that bit rate. Sony is going to be selling tens of millions of PS3s.
 