Video codecs??

Polarbear53

Newcomer
MPEG works by splitting the screen into something like 16 by 16 squares, then checking whether any of the squares need updating. It does that for a few frames, then it sends a keyframe. So instead of keyframing everything, it only takes info from part of the frame, to save on bandwidth.
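
For illustration, here's a minimal Python/NumPy sketch of that block splitting: cutting a grayscale frame into 16x16 macroblocks. This is a toy, not any real codec's code, and the function name is made up.

```python
import numpy as np

def split_into_macroblocks(frame, block_size=16):
    """Yield (row, col, block) for each block_size x block_size square.
    Assumes the frame dimensions are multiples of block_size."""
    height, width = frame.shape
    for y in range(0, height, block_size):
        for x in range(0, width, block_size):
            yield y, x, frame[y:y + block_size, x:x + block_size]

frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
blocks = list(split_into_macroblocks(frame))
print(len(blocks))  # (240/16) * (320/16) = 300 macroblocks
```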

Well, say you are making a file into a smaller size for the internet (so you do have a really high quality version), and say a square of the video isn't changed for a while. Like say it's a Halo 2 video :D in downgraded quality, and the person is staring at the wall. Why not give the square more information to make the picture better, instead of just holding the same square there? Say it has a bitrate of 1 Mbps, and a frame only took up a tiny bit of that, and the frame is still there for another second; you could send more info to sharpen the picture.

The best analogy I can think of is FPrime for LightWave http://www.worley.com/fprime.html. That is a rendering program for LightWave. Only instead of processing power being the limit like with that program, here it's the bandwidth: if not much changes in a scene, the spare bandwidth goes into putting more detail into the picture.

Is that understandable or feasible?

And if you didn't understand it at all, are there any other approaches besides MPEG? I find this stuff kind of interesting.

The only drawback I could see is that if someone moves a lot, then stays still, then moves again, the quality difference would get annoying.

ADD ON:
Maybe it could just go by vectors: it splits the frame into 16 areas rather than going by resolution, and when it has extra bandwidth it could send higher resolution squares or upgrade the squares' resolution.
 
You are not quite there with your description, but certainly heading along the right track.

Rather than checking whether one of the squares (they are called macroblocks) needs updating, what actually happens is that you find a block in the previous frame that looks like the one you are drawing in this frame. What is then transmitted is a motion vector that says "copy this block from over there". This means that even when there is motion you still get reasonable compression, because instead of trying to compress a whole block of data you are simply sending a vector.
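
To make "copy this block from over there" concrete, here is a hypothetical brute-force sketch in Python/NumPy: for one 16x16 block of the current frame, exhaustively search a small window of the previous frame for the offset with the lowest sum of absolute differences (SAD). Real encoders use much smarter search strategies; all names and parameters here are made up for illustration.

```python
import numpy as np

def find_motion_vector(prev_frame, cur_frame, by, bx, block_size=16, search=8):
    """Full search: return the (dy, dx) into the previous frame that
    best matches the block at (by, bx) in the current frame."""
    block = cur_frame[by:by + block_size, bx:bx + block_size].astype(np.int32)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if (y < 0 or x < 0
                    or y + block_size > prev_frame.shape[0]
                    or x + block_size > prev_frame.shape[1]):
                continue
            ref = prev_frame[y:y + block_size, x:x + block_size].astype(np.int32)
            sad = np.abs(block - ref).sum()  # how different the two blocks are
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv  # "copy the block from over there"

prev = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
cur = np.roll(prev, shift=(2, 3), axis=(0, 1))  # fake a small camera pan
print(find_motion_vector(prev, cur, 64, 64))    # (-2, -3): 2 rows up, 3 left
```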

Now this alone would still not give very good quality. What happens if, for example, the person steps out of a shadow? The copied block will look roughly right but too light or too dark, or some of the detail in the reference block will differ slightly from the block you are predicting. So along with the vector, a block of data is transmitted containing the difference between the reference block and the predicted block. This block is called the 'residual difference', and it gets compressed in the same way as the keyframe ('I-frame') you mentioned. (This compression method is one of the things that tends to vary between codecs.)
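
Continuing that hypothetical sketch, this shows why the residual makes the prediction exact: the encoder sends the difference between the current block and the motion-compensated reference, and the decoder copies the reference and adds the difference back. (Before quantization this round trip is lossless.)

```python
import numpy as np

def encode_residual(prev_frame, cur_block, by, bx, mv, block_size=16):
    dy, dx = mv
    ref = prev_frame[by + dy:by + dy + block_size,
                     bx + dx:bx + dx + block_size].astype(np.int16)
    return cur_block.astype(np.int16) - ref  # transform-coded like an I-frame

def decode_block(prev_frame, residual, by, bx, mv, block_size=16):
    dy, dx = mv
    ref = prev_frame[by + dy:by + dy + block_size,
                     bx + dx:bx + dx + block_size].astype(np.int16)
    return np.clip(ref + residual, 0, 255).astype(np.uint8)

prev = np.random.randint(0, 200, (64, 64), dtype=np.uint8)
by, bx, mv = 16, 16, (-2, -3)
# Simulate a block that moved by mv and got brighter (stepped out of a
# shadow), so the motion vector alone is not quite right.
cur_block = (prev[14:30, 13:29].astype(np.int16) + 20).astype(np.uint8)
residual = encode_residual(prev, cur_block, by, bx, mv)
rebuilt = decode_block(prev, residual, by, bx, mv)
assert np.array_equal(rebuilt, cur_block)
```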

On top of all this, different codecs do other things. MPEG-2, for example, can do motion-compensated prediction from two different frames: by fiddling with the order in which frames are transmitted, it can predict from the previous and the next frame at the same time, blending the predictions to get a better result. Lots of other compression techniques go into video codecs too, such as quantization, run-length encoding and Huffman coding.
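
As a rough illustration of two of those techniques (made-up numbers, not MPEG's actual quantization tables or codes): uniform quantization turns most small transform coefficients into zeros, and run-length encoding then represents the zero runs compactly. Huffman coding would go one step further and give the most common (run, value) pairs the shortest bit patterns.

```python
import numpy as np

def quantize(coeffs, step=16):
    """Divide by a step size and round; small values collapse to zero."""
    return np.round(coeffs / step).astype(np.int32)

def run_length_encode(values):
    """Encode as (zero_run_length, nonzero_value) pairs."""
    pairs, run = [], 0
    for v in values:
        if v == 0:
            run += 1
        else:
            pairs.append((run, int(v)))
            run = 0
    pairs.append((run, 0))  # end-of-block marker
    return pairs

coeffs = np.array([310, -42, 7, 0, 3, 0, 0, 0, -1, 0, 0, 0, 0, 0, 2, 0])
print(run_length_encode(quantize(coeffs)))  # [(0, 19), (0, -3), (14, 0)]
```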

It is quite amazing, when you step through the compression and decompression process, to see how a 16x16 block gets squashed down to a fraction of its size and then reconstructed again.


CC
 
You never know when it is going to change. You might be adding quality for nothing if it changes next frame. If you did know how long it would stay static, by looking ahead during coding, it would be much more efficient to just code it at high detail the first time. (Lookahead coding is going to be the next big thing in rate-distortion optimization, BTW: slow, but it will make all codecs quite a bit more efficient.)

I doubt it would look good to have detail creep into the picture across multiple frames if it is static, and any bits you do not spend on that can be used to make other parts of the video look better.
 
MfA said:
I doubt it would look good to have detail creep into the picture across multiple frames if it is static

This happens routinely with most video codecs (MPEG etc). It does look bad but is better than it staying blurry.
 