Will the PS3 be able to decode H.264/AVC at 40 Mbps?

Will the PS3 decode H.264/AVC at 40 Mbps?

  • Yes, the PS3 will decode H.264/AVC at 40 Mbps (86 votes, 86.9%)
  • No, the PS3 won't decode H.264/AVC at 40 Mbps (13 votes, 13.1%)
  • Total voters: 99
Sis said:
I thought the AVC flavor was different enough that comparing the two wasn't meaningful.
Perhaps, but we're talking about a CPU that's two orders of magnitude faster as well.

Brimstone said:
On the RSX the pixel pipelines will just sit idle, while only the vertex shaders can assist CELL.
Given that the pixel shaders are the ones with output to memory, I'd say the exact reverse (not that I really think they'd bother using shaders to assist decoding anyway).
 
At any rate, if my memory serves me correctly, a certain member of this forum (he'll correct me if I'm remembering wrong) got 1080i H.264 running on PS2 without frame drops.

Who would this be, and which profile?

Because 1080p High Profile @ 30-40 megabits/s is no cakewalk. And I'd find it extremely hard to believe he pulled it off on the PS2. At the present time, probably not even on a dual core PC. ;-)

(Don't point to Quicktime, it doesn't even fully support Main Profile and doesn't have CABAC.)

Perhaps, but we're talking about a CPU that's two orders of magnitude faster as well.

And we're talking about bitrates and algorithmic complexity increases to match.

For this reason, none of today's standalone players use a general-purpose CPU or even a GPU to decode the video; it's all cast in silicon, for very good reasons.
 
Brimstone, I'd think that if they're planning to use the RSX, there would be some sort of licensing deal for PureVideo from NVIDIA, similar to MS using Avivo for Xenos to handle AVC.

I haven't heard of such a deal, or perhaps it's coming down the line?


At CES, they specified that PS3's software decoding would leverage Cell+RSX. They didn't elaborate to what extent RSX would be used, though.
 
At CES, they specified that PS3's software decoding would leverage Cell+RSX. They didn't elaborate to what extent RSX would be used, though.

Probably the same split as on current PC GPUs. Most likely RSX will do the inverse transform (IT), motion compensation (MC) and deblocking, while Cell does the CABAC and whatever is left over. That's roughly a 50/50 split in terms of the workload needed.
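
For context, a rough sketch of the standard H.264 decode stages and where that hypothetical split would fall; the function names are placeholders and the Cell/RSX assignment in the comments is just the guess from the post above, not anything Sony has confirmed:

Code:
/* Rough outline of per-macroblock H.264 decoding and the hypothetical
 * Cell/RSX split from the post above.  Function names are placeholders,
 * not a real API; the comments mark where each stage might land. */

typedef struct macroblock macroblock_t;    /* opaque for this sketch */

void entropy_decode(macroblock_t *mb);     /* CABAC/CAVLC: serial, branchy -> Cell (SPE/PPE)? */
void inverse_transform(macroblock_t *mb);  /* IQ + inverse transform (IT)  -> RSX? */
void motion_compensate(macroblock_t *mb);  /* motion compensation (MC)     -> RSX? */
void deblock(macroblock_t *mb);            /* in-loop deblocking filter    -> RSX? */

void decode_macroblock(macroblock_t *mb)
{
    entropy_decode(mb);
    inverse_transform(mb);
    motion_compensate(mb);
    deblock(mb);
}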
 
Rumor has it there's a part of the H.264 algorithm called CABAC that is really SPE unfriendly.
CABAC is unfriendly to everything - even dedicated HW :) - but it does do a nice job of squeezing the data down further. <shrug>

Rumours also said that bilinear filtering cut PS2 fillrate by a factor of 4, and god knows what else.

At any rate, if my memory serves me correctly, a certain member of this forum (he'll correct me if I'm remembering wrong) got 1080i H.264 running on PS2 without frame drops. If PS3 can't do this I would be worried about a lot more than its video capabilities.
Was that a profile that used CAVLC rather than CABAC for the entropy encoding/decoding?
 
I can't for the life of me imagine why the PS3 wouldn't be able to decode H.264/AVC at 40 Mbps, provided of course it can get that stream in the first place from its Blu-ray drive ... But maybe that's just my lack of imagination?
 
So, I've been reading through some of the papers on CABAC and my feeble brain is sorta at a loss as to why it's such a bad thing... Aside from being pretty computationally heavy, I don't really see anything that's particularly unfriendly to CELL, so I turn to the infinitely more versed B3D and say, "What? What? WHAAAAT?!!"
 
So, I've been reading through some of the papers on CABAC and my feeble brain is sorta at a loss as to why it's such a bad thing... Aside from being pretty computationally heavy, I don't really see anything that's particularly unfriendly to CELL, so I turn to the infinitely more versed B3D and say, "What? What? WHAAAAT?!!"

From what I gather:

- CABAC is extremely difficult to parallelize. You basically process from the first symbol to the last in sequential order, and the algorithm depends on them being processed in order. So splitting it across multiple SPEs is out.

- CABAC uses a lot of data dependent branching. SPEs aren't that good at branching.

- CABAC uses a lot of memory accesses. SPEs aren't good at lots of memory accesses, if what they're accessing doesn't fit into LS.

To be 100% clear, I'm not saying that it can't be done, just saying that it's not a cakewalk, even for CELL.
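
For anyone wondering what exactly is so serial about it, here's a rough structural sketch of CABAC's per-bin decode step. The table contents are dummies (the real values come from the H.264 spec), so treat it as an illustration of the control flow rather than a working decoder:

Code:
/* Structural sketch of CABAC's per-bin decode step, only to show why it is
 * hard to parallelize: each bin updates the range/offset and the context
 * state that the very next bin depends on, and the branches depend on the
 * data being decoded.  The tables below are zero-filled dummies. */

#include <stdint.h>

typedef struct {
    uint8_t state;              /* probability state index            */
    uint8_t mps;                /* value of the most probable symbol  */
} cabac_ctx_t;

typedef struct {
    uint32_t range;             /* current arithmetic-coding interval */
    uint32_t offset;            /* value read from the bitstream      */
    const uint8_t *buf, *end;   /* compressed input                   */
    int bitpos;
} cabac_dec_t;

/* Dummy stand-ins for the spec's rangeTabLPS / state-transition tables. */
static uint8_t range_lps[64][4];
static uint8_t next_state_mps[64];
static uint8_t next_state_lps[64];

static int read_bit(cabac_dec_t *d)
{
    if (d->buf >= d->end)
        return 0;
    int bit = (d->buf[0] >> (7 - d->bitpos)) & 1;
    if (++d->bitpos == 8) { d->bitpos = 0; d->buf++; }
    return bit;
}

int decode_bin(cabac_dec_t *d, cabac_ctx_t *ctx)
{
    uint32_t rlps = range_lps[ctx->state][(d->range >> 6) & 3];
    int bin;

    if (rlps < 2)
        rlps = 2;               /* floor so the dummy table can't degenerate;
                                   the real table never yields 0 */
    d->range -= rlps;
    if (d->offset >= d->range) {            /* data-dependent branch: LPS path */
        bin = !ctx->mps;
        d->offset -= d->range;
        d->range = rlps;
        if (ctx->state == 0)
            ctx->mps = !ctx->mps;
        ctx->state = next_state_lps[ctx->state];
    } else {                                /* MPS path */
        bin = ctx->mps;
        ctx->state = next_state_mps[ctx->state];
    }

    /* Renormalization: how many iterations run depends on the bin just
     * decoded, so the next bin cannot start until this one finishes. */
    while (d->range < 0x100) {
        d->range <<= 1;
        d->offset = (d->offset << 1) | (uint32_t)read_bit(d);
    }
    return bin;
}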
 
Just so we're clear on the matter: Sony announced in its E3 2006 press release that the PS3 can achieve the maximum bitrate Blu-ray offers, so the burden of proof here is to find evidence why it can't achieve that.

http://www.us.playstation.com/News/PressReleases/341
With the overwhelming computational power of the Cell processor, PS3 is capable of playing back content from Blu-ray (BD) disc at a bit rate of multiplex 48Mbps with ease, the maximum bit rate defined in BD standards.

And on the subject of PR, ATI's announcement of AVIVO inclusion for the 360 did not have a single mention of the CPU, which is still a fundamental part of their decode process. It's just about what you want to talk up, not necessarily a verbatim description of the processes involved.
 
From what I gather:

- CABAC is extremely difficult to parallelize. You basically process from the first symbol to the last in sequential order, and the algorithm depends on them being processed in order. So splitting it across multiple SPEs is out.

Do you need to split it over multiple SPEs? From what I gather of Cell, a single SPE and the PPE should be able to do the job. The others can go do some other stuff, or go to sleep.
 
Just so we're clear on the matter: Sony announced in its E3 2006 press release that the PS3 can achieve the maximum bitrate Blu-ray offers, so the burden of proof here is to find evidence why it can't achieve that.

With the overwhelming computational power of the Cell processor, PS3 is capable of playing back content from Blu-ray (BD) disc at a bit rate of multiplex 48Mbps with ease, the maximum bit rate defined in BD standards.
http://www.us.playstation.com/News/PressReleases/341


Great find! And you are spot on with

burden of proof here is to find evidence why it can't achieve that

Archy
 
Why an SPE?

The PS3 has the PPE for this type of task, and given that people aren't going to play games and watch movies at the same time, we have the entire Cell at hand for heavy H.264 work, not only the SPEs.

It's sad that people see the PPE as trash.
 
Just so we're clear on the matter: Sony announced in its E3 2006 press release that the PS3 can achieve the maximum bitrate Blu-ray offers, so the burden of proof here is to find evidence why it can't achieve that.

Basing anything off a company's PR release is very shaky ground at best! I mean, no company would ever stretch the truth in a PR release at a big trade show, would they? :) The burden of proof lies with both camps, at least until it's released; then we will know for sure.
 
Basing anything off a company's PR release is very shaky ground at best! I mean, no company would ever stretch the truth in a PR release at a big trade show, would they? :) The burden of proof lies with both camps, at least until it's released; then we will know for sure.

I'm sorry, but I find this line of reasoning utter nonsense. You don't get to pick and choose what is PR disingenuity and what isn't. Until proven otherwise, we have to accept that the information released by a company is fact. If we're allowed to call this out, then why stop there? What makes this more unlikely than the console actually existing at all? It again comes down to "Why do we not believe this?", which invokes the burden of proof.

I think you're just being overly cynical.
 
From what I gather:

- CABAC is extremely difficult to parallelize. You basically process from the first symbol to the last in sequential order, and the algorithm depends on them being processed in order. So splitting it across multiple SPEs is out.

- CABAC uses a lot of data dependent branching. SPEs aren't that good at branching.

I don't have the details, just some more questions. Is CABAC the only stage in H.264/AVC decoding? Can you organize the SPEs in a streaming fashion, so that one SPE or the PPE focuses on CABAC while the rest perform the other pre- and post-processing stages?

If done on an SPE, the local store can still help if processing can be chunked, loops can be unrolled and memory latency can be hidden.

Depending on how bad the situation is... you can still throw multiple SPEs at a problem (if you have spare ones); you get less of a speedup, but the overall effect can still be faster than just one SPE. Some of this work can include speculative calculation, though that is wasteful (and generally not recommended).

- CABAC uses a lot of memory accesses. SPEs aren't good at lots of memory accesses, if what they're accessing doesn't fit into LS.

You mean random memory accesses? I thought the SPE's async DMA is great for fetching consecutive memory. As long as the SPE can perform other work while waiting for its data to arrive, it should do fine.
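
Right, that's the classic double-buffering pattern. A minimal sketch, assuming the Cell SDK's spu_mfcio.h; the chunk size and the process_chunk() stand-in are made up for illustration:

Code:
/* Minimal double-buffering sketch for streaming data through an SPE's
 * local store: while one chunk is being processed, the MFC asynchronously
 * fetches the next.  CHUNK and process_chunk() are placeholders. */

#include <spu_mfcio.h>
#include <stdint.h>

#define CHUNK 16384   /* 16 KiB = the largest single MFC transfer */

static uint8_t buf[2][CHUNK] __attribute__((aligned(128)));

/* Stand-in for whatever per-chunk work the decoder would actually do. */
static void process_chunk(const uint8_t *data, unsigned size)
{
    (void)data;
    (void)size;
}

void stream_chunks(uint64_t ea, unsigned nchunks)
{
    unsigned cur = 0;

    /* Kick off the first transfer (tag 0). */
    mfc_get(buf[cur], ea, CHUNK, cur, 0, 0);

    for (unsigned i = 0; i < nchunks; i++) {
        unsigned next = cur ^ 1;

        /* Start fetching chunk i+1 into the other buffer while chunk i
         * is still in flight or being processed. */
        if (i + 1 < nchunks)
            mfc_get(buf[next], ea + (uint64_t)(i + 1) * CHUNK, CHUNK, next, 0, 0);

        /* Block only on the current buffer's tag, then compute on it. */
        mfc_write_tag_mask(1u << cur);
        mfc_read_tag_status_all();
        process_chunk(buf[cur], CHUNK);

        cur = next;
    }
}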
 
Cell has been demoed decoding 12 HD streams simultaneously, so why should one give it so much difficulty?

There were DSPs around 3 years ago which could decode H.264 and I doubt they'd even match a single SPE.

Even if there is a section that gives Cell problems, the code can always be modified to make it more Cell-friendly.


BTW, H.264 is another name for AVC; the standard seems to have a small army of different names (including MPEG-4 Part 10).
 
The damage that certain people have done to H.264's image is just impossible to count.

From my understanding, a lot of the anti-H.264 propaganda comes from Microsoft's Windows Media division.
 