DemoCoder said:
Improving television with larger screens and better resolution requires a huge increase in transmission bit rates. The bit rates are, however, limited by the available broadcast spectrum or network connection. The only recourse is lossy image compression, most commonly JPEG, MPEG-2, wavelets or fractals. “Lossy” by name and lossy by nature. The more the image is compressed, using lossy methods, the worse the image quality.
http://www.autosophy.com/videcomp.htm
If this is wrong, then why does the uncompressed master always have the best quality?
Why does D-Cinema use JPEG 2000 at a 250 Mbit/s bit rate instead of MPEG-2 at 80 Mbit/s?
Why is the new standard less compressed?
If the future and the quality lie in more compression and more advanced codecs, why did they choose a less compressed codec?
Simply because your statement is false, and the facts are on my side.
DemoCoder said:
Because no matter how high you boost the data rate, MPEG-2 has a minimal distortion floor that is worse than MPEG-4's. For example, if you give MPEG-2 an arbitrarily high bitrate, it will not give you lossless or near-lossless images, and its distortion will be higher than H.264 FRExt's. This is because of the inferior DCT/IDCT match of MPEG-2. Given that perceptual tests of MPEG-2 still show, experimentally, a perceivable difference at 24 Mbps, the extrapolation that this will be eliminated at higher data rates is not yet supportable.
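To unpack what that quoted "DCT/IDCT mismatch" refers to, here is a minimal toy sketch in Python (assumes NumPy and SciPy; illustrative only, not real MPEG-2): if the decoder's inverse transform is not bit-exact with the encoder's, a small error floor survives no matter how finely the coefficients are quantized, i.e. no matter how high the bitrate.

```python
import numpy as np
from scipy.fft import dctn, idctn  # assumes SciPy is available

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)  # one 8x8 pixel block

def fixed_point_idct(coeffs, frac_bits=2):
    # Hypothetical stand-in for a non-bit-exact decoder IDCT: same math,
    # but the output is kept in coarse fixed point (illustration only).
    scale = 1 << frac_bits
    return np.round(idctn(coeffs, norm="ortho") * scale) / scale

for q in (16.0, 4.0, 1.0, 0.25, 0.0625):  # finer step ~ higher bitrate
    quantized = np.round(dctn(block, norm="ortho") / q) * q
    exact = idctn(quantized, norm="ortho")       # bit-exact decode
    mismatched = fixed_point_idct(quantized)     # non-bit-exact decode
    print(f"q={q:7.4f}  exact MSE={np.mean((block - exact) ** 2):.6f}  "
          f"mismatched MSE={np.mean((block - mismatched) ** 2):.6f}")
```

The exact decode's error keeps shrinking as the quantizer gets finer, while the mismatched decode plateaus at a floor set by its own arithmetic.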
Wrong. The distortion is introduced by the DCT compression, and the less compression you use, the less distortion you get. If you use 40:1 compression you will surely have more distortion than if you use 7:1 compression. So what you say is wrong: the higher the bitrate, the lower the distortion.
You don't have any distortion in the original master; the distortion is an effect of the compression.
DVCPRO HD, like MPEG-2, also uses DCT compression less advanced than MPEG-4's, and everything you say about MPEG-2 is valid for DVCPRO HD.
So why does DVCPRO HD blow MPEG-4 out of the water?
Distortion IS an artifact introduced by the compression. The more compression you apply, the more error you have to fix; the less compression you apply, the less error there can be, and the less importance the error prediction and correction assume.
Compression techniques that allow this type of degradation are called lossy. This distinction is important because lossy techniques are much more effective at compression than lossless methods.
The higher the compression ratio, the more noise added to the data.
http://www.dspguide.com/datacomp.htm
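That ratio/noise relationship is easy to demonstrate on a toy DCT coder (a minimal sketch, not any real codec): coarsen the quantizer step and the number of coefficients worth coding (a crude stand-in for bitrate) drops, while the reconstruction noise grows.

```python
import numpy as np
from scipy.fft import dctn, idctn  # assumes SciPy is available

# A smooth 8x8 block, so most of the energy lands in a few coefficients.
x, y = np.meshgrid(np.arange(8), np.arange(8))
block = 128 + 50 * np.sin(x / 3) + 30 * np.cos(y / 4)

for q in (1, 4, 16, 64):  # coarser step -> higher compression ratio
    coeffs = np.round(dctn(block, norm="ortho") / q)
    kept = np.count_nonzero(coeffs)          # crude proxy for coded bits
    recon = idctn(coeffs * q, norm="ortho")
    mse = np.mean((block - recon) ** 2)
    print(f"step={q:3d}  nonzero coefficients={kept:2d}/64  MSE={mse:8.3f}")
```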
DemoCoder said:
Moreover, 40 Mb/s isn't enough for MPEG-2 if you want to use 4:2:2 or 4:4:4 with 10-12 bit samples. Going to 4:2:2 doubles the chroma data and 4:4:4 quadruples it.
What people defending MPEG-2 don't seem to realize is that H.264 contains many proposals that were slated for MPEG-2 but got dropped for time constraints. People here are all talking about solving quality problems with enormously expensive hardware upgrades (manufacturing 100 GB discs) when software solves the problem much better.
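For scale, a back-of-the-envelope sketch of the raw rates behind that 4:2:2/4:4:4 remark (1080p24 is my example format, not from the thread; relative to 4:2:0, the chroma samples double at 4:2:2 and quadruple at 4:4:4):

```python
# Samples per pixel: one luma plane plus two chroma planes at the
# stated subsampling.
SAMPLES_PER_PIXEL = {"4:2:0": 1.5, "4:2:2": 2.0, "4:4:4": 3.0}

def raw_rate_mbps(width, height, fps, bits, sampling):
    # Uncompressed video rate in Mbit/s.
    return width * height * fps * bits * SAMPLES_PER_PIXEL[sampling] / 1e6

for sampling in ("4:2:0", "4:2:2", "4:4:4"):
    for bits in (8, 10, 12):
        rate = raw_rate_mbps(1920, 1080, 24, bits, sampling)
        print(f"{sampling} {bits:2d}-bit 1080p24: {rate:7.0f} Mb/s uncompressed")
```

At these raw rates, a 40 Mb/s channel already implies roughly 15:1 (4:2:0, 8-bit) to 45:1 (4:4:4, 12-bit) compression.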
MPEG-4 introduces new and more advanced algorithms because it compresses the image more, and unless the compression is lossless, compressing more always loses more quality.
MPEG-4 needs better error prediction because it is made to use more compression, with more possible errors.
But the less compression you apply, the less error there can be, and the less importance the error prediction assumes.
IF you have no error to fix, having an algorithm that can fix more errors does nothing.
This is the point that you don't want to get.
The higher the compression ratio, the more noise added to the data.
http://www.dspguide.com/datacomp.htm
DemoCoder said:
This is simply fallacious. The idea that going beyond MPEG-2 requires a loss of quality assumes that MPEG-2 is already on the edge of the rate-distortion curve, which is unknown and most likely not true. The major difference in temporal coding between MPEG-2 and H.264 is the extension of the search space from 1-2 frames up to 32 reference frames. None of this throws away more data, any more than extending the window of a windowed lossless compression algorithm does. H.264 also allows block sizes to vary (which gives the predictor more opportunities) and offers more prediction modes (9 instead of 4). CABAC also adds efficiency with no loss of quality.
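The quoted "bigger search space" point can be seen in a toy experiment (a sketch with synthetic 1-D frames, no real motion search): letting the encoder choose its prediction from more reference frames can only shrink, never grow, the residual it has to code.

```python
import numpy as np

rng = np.random.default_rng(2)
frames = [rng.normal(size=64) for _ in range(8)]
# The current frame happens to resemble a frame far back in the past
# (e.g. a recurring scene).
current = frames[2] + rng.normal(scale=0.05, size=64)
frames.append(current)

def best_residual_energy(cur, references):
    # Pick whichever reference minimizes the residual the encoder must code.
    return min(np.sum((cur - ref) ** 2) for ref in references)

for n in (1, 2, 4, 8):  # MPEG-2-like window vs. H.264-like window
    energy = best_residual_energy(current, frames[-1 - n:-1])
    print(f"{n} reference frames -> best residual energy {energy:10.4f}")
```

Because each wider window contains the narrower one, the best-match residual is non-increasing in the window size; here it collapses once the lookalike frame enters the window.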
And here is where it becomes clear you don't have any clue.
MPEG-4 AVC also has superior interframe prediction compared to DVCPRO HD.
Does this make MPEG-4 look better than DVCPRO HD at the same bit rate?
No, DVCPRO HD blows MPEG-4 out of the water.
Interframe prediction produces a 'prediction error' picture.
It has to be more advanced in MPEG-4 because MPEG-4 needs to apply bigger compression.
But the less compression you apply, the less error there can be, and the less importance interframe prediction assumes.
IF you have no error to fix, having an algorithm that can fix more errors does nothing.
SO the more advanced interframe prediction is useless at 20:1 compression, and doesn't make MPEG-4 better than MPEG-2 at high bitrate with low compression.
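For concreteness, a minimal sketch (toy numbers, no real codec) of what a 'prediction error' picture is: the encoder codes the quantized difference between the current block and its prediction, and the quantizer step is where the loss enters.

```python
import numpy as np

prediction = np.array([100.0, 102.0, 104.0, 106.0])      # block copied from a reference frame
current = prediction + np.array([2.0, 3.0, -2.0, 5.0])   # what the camera actually saw

q = 2.0                                           # quantizer step: the lossy part
residual = np.round((current - prediction) / q)   # the coded "prediction error"
reconstructed = prediction + residual * q         # what the decoder displays

print("residual sent:       ", residual)
print("reconstruction error:", current - reconstructed)
```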
You provide no link, no proof, nothing that states that what you say is true. You are pulling it all out of your ass without providing any proof that confirms your claim.
I provided links that state that what I am saying is true:
"The more the image is compressed, using lossy methods, the worse the image quality"
http://www.autosophy.com/videcomp.htm
The higher the compression ratio, the more noise added to the data.
http://www.dspguide.com/datacomp.htm
You need to provide a link that clearly states that at 80 Mbit/s MPEG-4 has better image quality.
And another link that states that the 40:1 compression used by MPEG-4 doesn't destroy more video information than a 20:1 compression using MPEG-2.
I already provided links that clearly state that what I am saying is true, but you never did.
Unless you provide links that prove what you are saying, what you are saying is bullshit.