Will Warner support Blu-ray?

iknowall said:
Improving television with larger screens and better resolution requires a huge increase in transmission bit rates. The bit rates are, however, limited by the available broadcast spectrum or network connection. The only recourse is lossy image compression, most commonly JPEG, MPEG-2, wavelets or fractals. “Lossy” by name and lossy by nature. The more the image is compressed, using lossy methods, the worse the image quality.

http://www.autosophy.com/videcomp.htm

If this is wrong, then why does the uncompressed master always have the best quality?

I told you 20 pages ago that argument doesn't hold water when comparing different codecs, and nothing has changed since then. An uncompressed master will always have the best quality, but that has nothing to do with the argument, which is comparing one lossy format to another.
 
AlphaWolf said:
I told you 20 pages ago that argument doesn't hold water when comparing different codecs, and nothing has changed since then. An uncompressed master will always have the best quality, but that has nothing to do with the argument, which is comparing one lossy format to another.

It has everything to do with the argument, since the point being called wrong is precisely that more compression destroys more video information.

It has to do with the fact that a 40:1 MPEG-4 compression always destroys more video information than a 20:1 MPEG-2 compression, so you won't have the same video quality.
 
iknowall said:
It has everything to do with the argument, since the point being called wrong is precisely that more compression destroys more video information.

It has to do with the fact that a 40:1 MPEG-4 compression always destroys more video information than a 20:1 MPEG-2 compression, so you won't have the same video quality.

Just because something is more compressed doesn't mean it is more lossy. You have numerous examples proving it in this thread, yet you choose to ignore them and keep trying to steer around the facts.
 
AlphaWolf said:
Just because something is more compressed doesn't mean it is more lossy. You have numerous examples proving it in this thread, yet you choose to ignore them and keep trying to steer around the facts.

I don't see any link that proves that a 40:1 MPEG-4 compression doesn't destroy more video information than a 20:1 MPEG-2 compression.
 
Someone should kill this thread; it's worthless. The obvious has been stated so many times that I now dream about codecs chasing me every night.. :)
 
What planet are you from, iknowall?

Compression is a balance between space and time/power (space on disc and time/power to decode). With a given balance between those two you will get the same quality results -- you trade space for time/power, or time/power for space.

MPEG-2 trades space for time/power -- you're going to eat up more space, but it's going to be easier to decode. MPEG-4/H.264/etc. are on the other end of the spectrum -- they take less space but more time/power.

H.264 came about because we have more powerful DSPs and such to decode the stream, so more aggressive compression can be used (at the cost of requiring more powerful hardware to decode it). H.264 is very much superior at the same bit rate (40 Mbit H.264 is going to wipe the floor with 40 Mbit MPEG-2); I don't know how you can argue this -- in fact, the only proof you seem to be giving is links to random places that say they use MPEG-2 over H.264, and for all we know the reasons they use it are purely cost-related (no need to upgrade their software, etc.).
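For what it's worth, this can be measured instead of argued. Here is a minimal PSNR sketch in Python (the frame variables at the end are placeholders for decoded output, not a real decoder API):

import numpy as np

def psnr(reference: np.ndarray, decoded: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the source."""
    mse = np.mean((reference.astype(float) - decoded.astype(float)) ** 2)
    return float("inf") if mse == 0 else float(10 * np.log10(peak ** 2 / mse))

# Hypothetical usage: encode the same master with each codec at 40 Mbit/s,
# decode matching frames into arrays, then compare:
# psnr(master_frame, mpeg2_frame) vs. psnr(master_frame, h264_frame)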

This debate is flat-out dumb. Any time a thread turns into a point-by-point, often semantics-related debate (where you have 10+ one-or-two-line quotes being answered with similarly short one-or-two-line responses per post), it has hit its low and it's usually pointless to continue -- maybe some of you are having fun with it, but it seems people are more frustrated than anything ;)
 
I love this thread. Every time something new about Blu-ray or HD-DVD comes out, it gets posted in here instead of new threads being made.
 
iknowall said:
I don't see any link that proves that a 40:1 MPEG-4 compression doesn't destroy more video information than a 20:1 MPEG-2 compression.
No, it depends on the resolution of the source material and the resolution of the compressed material. If the ratio is 1:1 you would be correct, but for higher ratios one will be able to preserve more detail with the better compression schemes because you can't get resolution (detail) back.


To make an audio comparison: which will sound better, 128 VBR MP3 or 128 VBR AAC?

With video you might get more artifacts with MPEG-4 (they probably won't be noticeable), but you will also get a lot more detail.
 
Bobbler said:
What planet are you from, iknowall?

Compression is a balance between space and time/power (space on disc and time/power to decode). With a given balance between those two you will get the same quality results -- you trade space for time/power, or time/power for space.

What planet did you come from?

Even if you have a 100 GHz CPU and 100 TB of memory, the compression ratio itself is limited by Shannon's law, i.e. rate-distortion theory.
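For reference, the textbook form of that limit for a memoryless Gaussian source with variance \(\sigma^2\) and mean-squared distortion \(D\) (a general information-theory result, not specific to any codec):

\[
R(D) =
\begin{cases}
\tfrac{1}{2}\log_2\!\frac{\sigma^2}{D}, & 0 < D \le \sigma^2,\\
0, & D > \sigma^2.
\end{cases}
\]

No encoder, no matter how much computing power it has, can reach a distortion below \(D\) at a rate under \(R(D)\); extra CPU only lets a codec get closer to the curve.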


MPEG-2 trades space for time/power -- you're going to eat up more space, but it's going to be easier to decode. MPEG-4/H.264/etc. are on the other end of the spectrum -- they take less space but more time/power.

You are stating the obvious, but the reason it is easier to decode is that the algorithm itself becomes less complicated when using less compression, so you have to do less work to decode it.

H.264 came about because we have more powerful DSPs and such to decode the stream, so more aggressive compression can be used (at the cost of requiring more powerful hardware to decode it). H.264 is very much superior at the same bit rate (40 Mbit H.264 is going to wipe the floor with 40 Mbit MPEG-2),

You are talking out of your ass:

wco81 said:
Even the MS VP said at AVS that VC-1 bitrates over the mid to high teens don't yield better results.

I don't think wco81 lied, so this proves you wrong, and it comes from MS, which has no interest in saying it.


I don't know how you can argue this -- in fact, the only proof you seem to be giving is links to random places that say they use MPEG-2 over H.264, and for all we know the reasons they use it are purely cost-related (no need to upgrade their software, etc.).

:rolleyes:


I gave this quote, but someone said it was only PR bullshit because he has to push MPEG-2.

Don Eklund apparently said:

“Advanced (formats) don’t necessarily improve picture quality.”

Now wco81 has provided another confirmation, not coming from someone who wants to push MPEG-2:

Even the MS VP said at AVS that VC-1 bitrates over the mid to high teens don't yield better results.


This debate is flat-out dumb. Any time a thread turns into a point-by-point, often semantics-related debate (where you have 10+ one-or-two-line quotes being answered with similarly short one-or-two-line responses per post), it has hit its low and it's usually pointless to continue -- maybe some of you are having fun with it, but it seems people are more frustrated than anything ;)

It's not my fault if some people think they have more credibility than Sony Pictures' senior vice president of advanced technology and the MS VP, while talking only out of their ass.
 
iknowall said:
It's not my fault if some people think they have more credibility than Sony Pictures' senior vice president of advanced technology and the MS VP, while talking only out of their ass.

Ehmm, read the doc that was posted; besides being obvious on a pure technology level, it's also proven in tests...
 
iknowall said:
So why does DVCPRO HD blow MPEG-4 out of the water?

I think it might be the other way around...

DVCPRO HD, also known as DVCPRO100, uses four parallel codecs and a coded video bitrate of 100 Mbit/s. Despite the HD in its name, DVCPRO HD downsamples native 720p/1080i signals to a lower resolution: 720p is downsampled from 1280x720 to 960x720, and 1080i is downsampled from 1920x1080 to 1280x1080 for 59.94i and 1440x1080 for 50i. The compression ratio is approximately 7:1. To maintain compatibility with HD-SDI, DVCPRO100 equipment internally downsamples video during recording and subsequently upsamples it during playback. A camcorder using a special variable-framerate (from 4 to 60 frame/s) variant of DVCPRO HD, called VariCam, is also available. All these variants are backward compatible but not forward compatible.

It downsamples the picture!?
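Putting numbers on it, using the resolutions from the quote above (a quick Python sketch):

# Fraction of luma samples DVCPRO HD keeps after its internal downsampling,
# per the resolutions quoted above.
modes = {
    "720p    1280x720  -> 960x720":   (960 * 720) / (1280 * 720),
    "1080i60 1920x1080 -> 1280x1080": (1280 * 1080) / (1920 * 1080),
    "1080i50 1920x1080 -> 1440x1080": (1440 * 1080) / (1920 * 1080),
}
for mode, kept in modes.items():
    print(f"{mode}: keeps {kept:.0%}")
# 75%, 67%, 75% -- a quarter to a third of the pixels are gone before
# the DV compression even starts.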
 
ninelven said:
No, it depends on the resolution of the source material and the resolution of the compressed material. If the ratio is 1:1 you would be correct, but for higher ratios one will be able to preserve more detail with the better compression schemes because you can't get resolution (detail) back.

I saw for myself a 1080p d-cinema master encoded with MPEG-2 at 80 Mbit/s with a 20:1 compression ratio, and it didn't have any type of artifact.

The quality is outstanding. If you don't believe me, go see a digital movie for yourself.

No distortion at 80 Mbit/s.

MPEG-4 has what you call a "better compression scheme" that is able to fix more distortion errors than MPEG-2; that is a stated fact.

OK, but if I have no errors to fix, what quality advantage does MPEG-4 give me?

If MPEG-4, for example, can fix 50 errors and MPEG-2 only 20, but at this bitrate I have only 10 errors to fix, what advantage is there in having an algorithm that can fix 50 errors instead of one that can only fix 20?

It won't give me any advantage.

I understand that at a low bitrate with 60 errors to fix MPEG-4 is more powerful, but not if I have 10 errors; it is useless to be able to fix more than 10.


This is the concept that DemoCoder and the others don't want to get.

This is also the sense of the MS VP's statement:

"Even the MS VP said at AVS that VC-1 bitrates over the mid to high teens don't yield better results."



To make an audio comparison: which will sound better, 128 VBR MP3 or 128 VBR AAC?

With video you might get more artifacts with MPEG-4 (they probably won't be noticeable), but you will also get a lot more detail.

No more pointless audio comparisons, please; we are talking about a specific bitrate and compression ratio.
 
-tkf- said:
I think it might be the other way around...
:LOL:


It downsamples the picture!?

Are you surprised? You really have very little clue about professional standards...

I will tell you another secret:

"HDCAM downsamples from 1920 down to 1440"

HDCAM also downsamples the resolution, from 1920x1080 to 1440x1080.

If a professional digital format like HDCAM downsamples the resolution while using just a 3:1 compression, I can't imagine how much nasty downsampling of the video data you get with a 60:1 MPEG-4 compression.
 
-tkf- said:
Ehmm, read the doc that was posted; besides being obvious on a pure technology level, it's also proven in tests...

So you are stating that what the MS VP says is bullshit as well?

Provide me a link to a doc with an exact test at a high bitrate like 80 Mbit/s, where MPEG-2 is in a situation with absolutely no visible video artifacts.
 
iknowall said:
Provide me a link to a doc with an exact test at a high bitrate like 80 Mbit/s, where MPEG-2 is in a situation with absolutely no visible video artifacts.

I think DemoCoder summed it up nicely here (which I guess you didn't read either?)

Not true, because no matter how high you boost the datarate, MPEG-2 has a minimal distortion floor that is worse than MPEG-4's. For example, if you give MPEG-2 an arbitrarily high bitrate, it will not give you lossless or near-lossless images, and its distortion will be higher than H.264 FRext's. This is because of the inferior DCT/IDCT match of MPEG-2.

And your 80 Mbit can't be done anyway, not with Blu-ray at least...
 
-tkf- said:
I think DemoCoder summed it up nicely here

What he says goes against what the MS VP said:

"Even the MS VP said at AVS that VC-1 bitrates over the mid to high teens don't yield better results."

And against what Don Eklund apparently said:

“Advanced (formats) don’t necessarily improve picture quality.”

:LOL:
So I assume your position is that DemoCoder, who talks out of his ass, is absolutely right, but the MS VP and the Sony vice president, who do this for a living, are all wrong.
:rolleyes:
Nice discussion, really.

(which I guess you didn't read either?)

I guess you did not read my reply to it:

"Wrong, the distiorsion is introducted with the compression, and less bitrare you use less distorsion you have, if you use a 40 : 1 compression you sure have more distorsion than if you use a 7:1 compression, so what you say is wrong, more is the bitrate, less is the distorsion.

You don't have any distorsion on the original master, the distorsion is an effect of the compression.

Dvcprohd like mpeg2 also has a Dtc compression less advanced than MPEG-4 and all what you say about mpeg2 is valid for Dvcprohd.


So why does DVCPRO HD blow MPEG-4 out of the water?

Distortion IS an artifact introduced by the compression. The more compression you have, the more errors you have to fix; the less compression you apply, the fewer errors you have, and the less important error prediction and correction become.

Compression techniques that allow this type of degradation are called lossy. This distinction is important because lossy techniques are much more effective at compression than lossless methods. The higher the compression ratio, the more noise added to the data.

http://www.dspguide.com/datacomp.htm


And your 80 Mbit can't be done anyway, not with Blu-ray at least...

Yes it can. You always talk out of your ass.

80 Mbit/s takes about 36 GB for 1 hour of video, so on a 100 GB Blu-ray disc you can store more than 2 hours of 1080p 80 Mbit/s MPEG-2 HD video.
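The arithmetic, as a quick Python check (video stream only, ignoring audio and container overhead):

BITRATE_MBPS = 80          # video bitrate
SECONDS = 3600             # one hour

gb_per_hour = BITRATE_MBPS * 1e6 * SECONDS / 8 / 1e9
print(gb_per_hour)         # 36.0 GB per hour of video
print(100 / gb_per_hour)   # ~2.8 hours on a 100 GB disc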
 
iknowall said:
Why does D-Cinema use a JPEG 2000 bit rate of 250 Mbit/s instead of MPEG-2 at 80 Mbit/s?

Well, according to your theory, they should have used JPEG instead of JPEG-2000, because according to your idiotic theory, JPEG-2000 is more lossy than JPEG because JPEG-2000 has a higher compression ratio (about 20% higher) for a given PSNR.

So, Mr. Know It All, why did D-Cinema use JPEG-2000? They didn't mandate lossless JPEG-2000, which they could have done. Instead, they mandate only the 9/7 (lossy) wavelet filter.

Why is the new standard less compressed?

Could be several reasons. If bandwidth is cheaper than CPU, then a more computationally intensive decoder imposes a higher overall cost. Simplicity of workflow, because authoring MPEG-2 requires more intervention than non-temporal/predictive coders. Features: intraframe-based coding is more error resilient and supports seekable streams.

Like I said, you claim that more compression = more loss and inferior quality, and then you trot out DCI, which uses JPEG-2000 instead of JPEG. If they were interested in the minimum compression ratio, why didn't they pick JPEG over JPEG-2000 then?

I'll tell you why: your thesis that more compression must mean more loss is wrong.

Frankly, I'm not very impressed by DCI. They spend 80% of their specification on security, a few pages on compression/image quality, and the rest on infrastructure/transport, which shows that the spec is not concerned with image quality, but with security and the cost savings that come from digital distribution.

That is, DCI is about saving studios money and bumping up their margins.


Wrong, the distortion is introduced by the DCT compression

DCT doesn't do ANY compression, you stupid moron. The FFT, DCT, and DWT are just transformations into frequency/scale domains. Why do you persist in pretending to know what you're talking about, referencing Shannon's information and rate-distortion theories, when you don't have a clue how those theories work? Google fishing expeditions are not helping your case.

MPEG-2 DCT introduces distortions because the DCT and IDCT introduce inverse transform mismatch errors. H.264's pseudo-DCT was designed to avoid these problems.

The compression step for spatial coding in MPEG is not DCT, it's quantization followed by entropy encoding.
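The distinction is easy to demonstrate. A minimal NumPy/SciPy sketch (the 8x8 block and the quantizer step q are made up for illustration):

import numpy as np
from scipy.fftpack import dct, idct

# 2-D DCT-II and its inverse on a block, as in JPEG/MPEG spatial coding.
def dct2(b):  return dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
def idct2(b): return idct(idct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)

# The transform alone is perfectly invertible: no information is lost here.
coeffs = dct2(block)
print(np.allclose(idct2(coeffs), block))        # True

# Quantization is the lossy step: rounding the coefficients discards detail.
q = 16                                          # illustrative quantizer step
print(np.abs(idct2(np.round(coeffs / q) * q) - block).max())  # nonzero error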

So why does DVCPRO HD blow MPEG-4 out of the water?

Number 1, it doesn't. Tests conducted by SMPTE and ISO/ITU show that MPEG-4 AVC/FRext @ 16Mbps is nearly indistinguishable from the uncompressed master in viewing tests.

Number 2, the situations aren't comparable. DVCPRO HD is 4:2:2. Try comparing DVCPRO 10-bit 4:2:2 to 10-bit 4:2:2 H.264 FRext.

The whole reason that DV formats exist is to provide non-linear editing support. This means interframe coding can't be used.

Not only that, but your ignorant worship of DVCPRO HD ignores a fundamental flaw of it: DVCPRO HD throws away a third of the pixels, downsampling 1920x1080 frames to 1280x1080. H.264 does not throw away any of the luma information in the signal.


You provide no link, no proof, nothing that states that what you say is true; you are pulling it all out of your ass without providing any proof to confirm your claim.

I provided two links: one showing that OBJECTIVE PSNR tests have H.264 beating MPEG-2 handily, and another showing that SUBJECTIVE visual tests with human subjects have H.264 beating MPEG-2.

The higher the compression ratio, the more noise added to the data.
http://www.dspguide.com/datacomp.htm

Look moron, google fishing doesn't help you. Your statement is trivially wrong. Take JPEG with Huffman entropy encoding vs. JPEG with arithmetic entropy encoding, using the same quantization. The two decompressed images will be bit-for-bit IDENTICAL, but the compression ratio will be greater for JPEG with arithmetic entropy encoding.
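Same idea in a few lines of Python, with two general-purpose lossless coders standing in for JPEG's Huffman and arithmetic back ends (zlib and bz2 are illustrations, not JPEG internals):

import bz2, zlib

data = bytes(range(64)) * 1000   # stand-in for the same quantized coefficients

a = zlib.compress(data)
b = bz2.compress(data)
print(len(a), len(b))                                    # different sizes
print(zlib.decompress(a) == bz2.decompress(b) == data)   # True: identical output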

You need to provide a link that states clearly that at 80 Mbit/s MPEG-4 has better image quality.

No, YOU need to provide the link that says the opposite, because YOU are the one making the claim that MPEG-2 is better than MPEG-4.
 
The design point must be 3 hrs + extras, not 2 hrs. 80 Mbps = 108 GB for 3 hrs, not counting extras. Moreover, BD is designed for a disc transfer rate of 54 Mbps.
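Checking those numbers (a quick sketch; the 54 Mbps figure is the disc rate cited above):

def gigabytes(mbps, hours):
    return mbps * 1e6 * hours * 3600 / 8 / 1e9

print(gigabytes(80, 3))   # 108.0 GB for 3 hours at 80 Mbps, before extras
print(80 > 54)            # True: an 80 Mbps stream exceeds the 54 Mbps disc rate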
 