Polarbear53
Newcomer
MPEG works by splitting the screen into something like 16 by 16 pixel squares, then checking which squares need updating. For a run of frames it only updates the changed squares, then it sends a full keyframe again. So instead of keyframing everything, it only takes info from part of the frame, to save on bandwidth.
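A rough sketch of that square-by-square comparison in Python (the 16x16 block size matches the description above; the difference threshold and block labels are made up for illustration):

```python
import numpy as np

BLOCK = 16  # macroblock size, as described above


def changed_blocks(prev, curr, threshold=5.0):
    """Return (row, col) indices of BLOCK x BLOCK squares whose mean
    absolute difference from the previous frame exceeds the threshold.
    Only these squares would need to be re-sent."""
    h, w = curr.shape
    changed = []
    for by in range(0, h, BLOCK):
        for bx in range(0, w, BLOCK):
            diff = np.abs(
                curr[by:by + BLOCK, bx:bx + BLOCK].astype(int)
                - prev[by:by + BLOCK, bx:bx + BLOCK].astype(int)
            )
            if diff.mean() > threshold:
                changed.append((by // BLOCK, bx // BLOCK))
    return changed


# Example: two 32x32 grayscale frames where only the top-left square moves
prev = np.zeros((32, 32), dtype=np.uint8)
curr = prev.copy()
curr[:16, :16] = 200
print(changed_blocks(prev, curr))  # [(0, 0)]
```

Real encoders do something fancier (motion vectors, DCT residuals), but the "only resend what changed" idea is the same.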
Well, say you are compressing a file down to a smaller size for the internet (so you do have a really high quality source), and a square of the video isn't changing for a while. Say it's a Halo 2 video at downgraded quality and the player is staring at a wall: why not give that square more information to make the picture better, instead of just holding the same square there? Say the stream has a bitrate of 1 Mbps, and a frame only used a tiny bit of that, and the frame is still there for another second; you could send more info to sharpen the picture.
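Here's a toy version of that bit-allocation idea: spend the frame budget on the changed squares first, then split whatever is left over across the static squares as extra refinement data. All the numbers and block names here are made up just to show the idea, not taken from any real codec:

```python
def allocate_bits(frame_budget, changed_cost, static_blocks):
    """Toy bit allocation: changed blocks get paid for first; any
    leftover budget is shared evenly by static blocks as refinement."""
    leftover = max(frame_budget - changed_cost, 0)
    if not static_blocks or leftover == 0:
        return {}
    per_block = leftover // len(static_blocks)
    return {block: per_block for block in static_blocks}


# A 1000-bit frame budget where the moving parts only cost 300 bits:
# the remaining 700 bits get shared by 7 unchanged blocks.
print(allocate_bits(1000, 300, ["b0", "b1", "b2", "b3", "b4", "b5", "b6"]))
# each static block gets 100 extra bits of detail
```

So the less motion there is, the more of the budget goes into sharpening the squares that are just sitting there.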
The best analogy I can think of is FPrime for LightWave (http://www.worley.com/fprime.html), a rendering program for LightWave that progressively refines the image. Only instead of processing power being the limit like it is there, here it's bandwidth: if not much changes in a scene, the spare bandwidth goes into putting more detail into the picture.
Is that understandable or feasible?
And if you didn't understand it at all, are there any other approaches besides MPEG? I find this stuff kind of interesting.
The only drawback I can see is that if someone moves a lot, then stays still, then moves again, the quality difference would get annoying.
ADD ON:
Maybe it could work by vectors instead: split the frame into 16 areas that aren't tied to a fixed resolution, and when there's extra bandwidth, send higher-resolution squares or upgrade the resolution of the squares already on screen.
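That upgrade idea could look something like this sketch, where each tile tracks a quality level and any spare bandwidth goes to upgrading the worst tiles first (the tile names and quality numbers are invented for the example):

```python
def upgrade_tiles(quality, spare_upgrades):
    """Toy version of the resolution-upgrade idea: each tile has a
    quality level, and spare bandwidth bumps the worst tiles first."""
    q = dict(quality)
    for _ in range(spare_upgrades):
        worst = min(q, key=q.get)  # lowest-quality tile gets the upgrade
        q[worst] += 1
    return q


# 4 tiles, with enough spare bandwidth for 2 upgrades this frame
print(upgrade_tiles({"t0": 3, "t1": 1, "t2": 2, "t3": 1}, 2))
# {'t0': 3, 't1': 2, 't2': 2, 't3': 2}
```

This would also soften the drawback mentioned above, since quality climbs gradually instead of jumping all at once.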