Will Warner support Blu-ray, part 2?

scooby_dooby said:
So, all I can say to that point is: here's Forza, which has damage modelling, higher-polygon models, better AI, an online mode, and is technically graphically superior, yet takes up less than 1/3 of that space. So it's obviously possible to make a game of GT4's calibre with MUCH less than 9GB. Maybe not on PS2... but that's a different issue.

Damage modeling, yes. Better AI, yes. Online mode, yes. High-polygon models? Not that I can see. Technically graphically superior? Not that I can see. Links?

IIRC, both IGN and GameSpot have commented that GT4 looks better and runs at twice the frame rate... not to mention that it runs at 1080i.
 
Amazing how the guy talks about HD-DVD/BR given the fact that it's quite clear he's never seen them in person running H.264 or VC-1.

p.s. The Barco DP100 is not the highest-end projector on the market. It's got lumens, lumens, lumens, but it's still only 2000:1 CR, its ANSI CR is inferior, and 1080p is unimpressive when you look at, for example, 1080p HT projectors on the market, which have better CR and colors. In fact, Sony is selling SXRD large-venue projectors with 4x the resolution of the Barco.

At CES'05, I saw a triple Qualia stack that blew away the Barco (which I have seen). Oh, and I also saw VC-1 up against D-Theatre, and the VC-1 version looked better.

The fact of the matter is, MPEG-2 underperforms in low-light situations, or scenes with transitions and transparency. Go pick up a dark movie DVD with fog, and look for color banding. Or watch scenes that 'fade to black' or transition with A/B blends to see what I mean.
 
DemoCoder said:
Amazing how the guy talks about HD-DVD/BR given the fact that it's quite clear he's never seen them in person running H.264 or VC-1.

p.s. The Barco DP100 is not the highest-end projector on the market. It's got lumens, lumens, lumens, but it's still only 2000:1 CR, its ANSI CR is inferior, and 1080p is unimpressive when you look at, for example, 1080p HT projectors on the market, which have better CR and colors. In fact, Sony is selling SXRD large-venue projectors with 4x the resolution of the Barco.

At CES'05, I saw a triple Qualia stack that blew away the Barco (which I have seen). Oh, and I also saw VC-1 up against D-Theatre, and the VC-1 version looked better.

The fact of the matter is, MPEG-2 underperforms in low-light situations, or scenes with transitions and transparency. Go pick up a dark movie DVD with fog, and look for color banding. Or watch scenes that 'fade to black' or transition with A/B blends to see what I mean.

Man, I gave up on "Iknowall"... In the previous thread, I pointed out to him that H.264 beat MPEG2 in both PSNR and subjective testing (at high bitrate) in studies... Trust me, you're banging your head against the wall. He has not shown that MPEG2 beats H.264 in any study. The only thing he managed to do is recite some generalization about compression bitrate (without fully understanding the nature of video compression).

Totally agree with you with regard to low-light situations with MPEG2 at DVD bitrates. Sony has a hundred or so MPEG2-related licenses... so sticking with an MPEG2 variant is beneficial to them. So you really can't fault them for it. But you have to call it what it is... So when Sony calls MPEG2 visually the best on the market... it's pure PR talk. And if I were the head of Sony... I might have done the same...

The funny thing is that H.264 demands more power to decode... it's something that Cell would be good at. ;) But I guess Sony wants to have their cake and eat it too.
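For anyone following along: PSNR is just mean squared error on a log scale. A minimal Python sketch (numpy assumed; 8-bit samples, so the peak value is 255; the frames are made up purely for illustration):

Code:
import numpy as np

def psnr(reference, encoded, peak=255.0):
    # Peak signal-to-noise ratio in dB between two 8-bit frames.
    mse = np.mean((reference.astype(float) - encoded.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

# Toy check: a frame plus mild quantization-like noise.
frame = np.random.randint(0, 256, (480, 640))
noisy = np.clip(frame + np.random.randint(-2, 3, frame.shape), 0, 255)
print(round(psnr(frame, noisy), 1), "dB")  # roughly mid-40s dB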
 
rounin said:
Damage modeling, yes. Better AI, yes. Online mode, yes. High-polygon models? Not that I can see. Technically graphically superior? Not that I can see. Links?

IIRC, both IGN and GameSpot have commented that GT4 looks better and runs at twice the frame rate... not to mention that it runs at 1080i.

I was referring to things like the reflections, the dynamic shadows, the 3D wheels, the glowing calipers. GT4 looks really good, better even, but technically Forza seems to have more going on.

Anyway, the discussion is really over since GT4 was only 5.6GB, and that was with the PS2's horrible compression techniques.
 
scooby_dooby said:
I was referring to things like the reflections, the dynamic shadows, the 3D wheels, the glowing calipers. GT4 looks really good, better even, but technically Forza seems to have more going on.

Anyway, the discussion is really over since GT4 was only 5.6GB, and that was with the PS2's horrible compression techniques.

But the reflections in GT4 are realtime, and it uses 3D wheels and has glowing calipers and dynamic shadows as well :LOL: The lighting, I think, looks way better than Forza's too. Maybe to best Forza in graphics, there needed to be a lot of optimization coding that traded compression for performance, hence taking up that much space? I dunno, I'm a noob that's here to learn. Plus nAo already said the PS2 wasn't as bad as you say in terms of compression (quite the opposite).

From looking at past games, I think the harder you push the hardware overall in a game, the more space it's going to require on the media. E.g. look at early games like Tekken Tag Tournament, which was less than 500 megs, and then Tekken 5 at over 4 gigs. This may or may not be an extreme example, but games have usually only gotten bigger in terms of the media they take up (with CON2 as an exception I thought of off the top of my head). I'm looking at the sizes of games like Condemned, PDZ, PGR, and Amped 3, and they all weigh in at ~7 gigs. I don't think the Xbox360 has much room for growth without it being a pain. (OK, that was offtopic :oops: )
 
DemoCoder said:
Amazing how the guy talks about HD-DVD/BR given the fact that it's quite clear he's never seen them in person running H.264 or VC-1.

What's amazing is your arrogance, and how you pretend to have any credibility after being proven wrong.

No, I am not a liar, sorry. You are the one here who has never dealt with the process of getting an HD-D5 master encoded in MPEG2 HD and in MPEG4.

p.s. The Barco DP100 is not the highest-end projector on the market.

Name all the official D-cinema projectors selected for the D-theaters.

If you don't know which models have been selected, please leave this discussion now, since it's clear I am talking with someone who pretends to talk about D-cinema but doesn't have enough of a clue about it.

It's got lumens, lumens, lumens, but it's still only 2000:1 CR, its ANSI CR is inferior, and 1080p is unimpressive when you look at, for example, 1080p HT projectors on the market, which have better CR and colors. In fact, Sony is selling SXRD large-venue projectors with 4x the resolution of the Barco.


No, 2K resolution is not 1080p; it is 2048 x 1080.

How many D-theaters do you know?

2K is the standard for D-cinema right now, so this is the max resolution you need at the moment.

First, the Barco DP100 is the first official 2K projector used for D-cinema.

Second, it is the most used projector in all the D-theaters.

Third, I clearly remember the Barco engineer telling me that the least important thing for this projector is a higher contrast ratio, and that there are other Barco projectors with a higher contrast ratio than this one, but this is the best one designed for D-cinema.

How much knowledge do you have about the conditions of a theater projection?

This is not an HT projector; this is an $80,000 digital theater projector.

If the Barco engineers chose to make some specs higher and others lower, they had their reasons.

At CES'05, I saw a triple Qualia stack that blew away the Barco (which I have seen). Oh, and I also saw VC-1 up against D-Theatre, and the VC-1 version looked better.


I hate generic statements. Which Barco model did you see? Which D-cinema film did you see? Under what conditions? What was the bitrate of the MPEG2 version and of the VC-1 version? Who encoded the video, and with what hardware?

You aren't comparing the standards here; you are comparing two different ENCODER designs against a piece of content. What happens with fast motion or film grain can be very different in one versus the other.

That's why you typically use so many different clips for comparison tests: a single test sequence doesn't tell you much.

The fact of the matter is, MPEG-2 underperforms in low-light situations, or scenes with transitions and transparency. Go pick up a dark movie DVD with fog, and look for color banding. Or watch scenes that 'fade to black' or transition with A/B blends to see what I mean.

Like I said, you are comparing two different ENCODER designs against a piece of content.
 
TrungGap said:
Totally agree with you with regard to low-light situations with MPEG2 at DVD bitrates. Sony has a hundred or so MPEG2-related licenses... so sticking with an MPEG2 variant is beneficial to them. So you really can't fault them for it.
Are you sure that MPEG2 is more beneficial to Sony than H.264 in terms of license revenue?

The owners of MPEG LA's patent pool for MPEG2 are Columbia University, Fujitsu, General Instrument/NextLevel, Kokusai Denshin Denwa Co., Ltd. (KDD), Matsushita, Mitsubishi, Philips, Samsung, Scientific Atlanta, Sony, Toshiba, and Victor Company of Japan (JVC).
http://www.looksmartmac.com/p/articles/mi_m0FXG/is_n9_v11/ai_21041392

The owners of MPEG LA's patent pool for H.264/MPEG-4 AVC are Columbia University, Electronics and Telecommunications Research Institute of Korea (ETRI), France Telecom, Fujitsu, LG Electronics, Matsushita, Mitsubishi, Microsoft, Motorola, Nokia, Philips, Robert Bosch GmbH, Samsung, Sharp, Sony, Toshiba, and Victor Company of Japan (JVC).
http://62.210.150.98/newswire.asp?code=1726

The VC-1 patent pool seems not to have reached an agreement yet, but if you follow this speculation,
http://www.theregister.co.uk/2005/01/24/ms_codec_patents/
it's likely that Sony is one of them too.
 
one said:
Are you sure that MPEG2 is more beneficial to Sony than H.264 in terms of license revenue?

The owners of MPEG LA's patent pool for MPEG2 are Columbia University, Fujitsu, General Instrument/NextLevel, Kokusai Denshin Denwa Co., Ltd. (KDD), Matsushita, Mitsubishi, Philips, Samsung, Scientific Atlanta, Sony, Toshiba, and Victor Company of Japan (JVC).
http://www.looksmartmac.com/p/articles/mi_m0FXG/is_n9_v11/ai_21041392

The owners of MPEG LA's patent pool for H.264/MPEG-4 AVC are Columbia University, Electronics and Telecommunications Research Institute of Korea (ETRI), France Telecom, Fujitsu, LG Electronics, Matsushita, Mitsubishi, Microsoft, Motorola, Nokia, Philips, Robert Bosch GmbH, Samsung, Sharp, Sony, Toshiba, and Victor Company of Japan (JVC).
http://62.210.150.98/newswire.asp?code=1726

The VC-1 patent pool seems not to have reached an agreement yet, but if you follow this speculation,
http://www.theregister.co.uk/2005/01/24/ms_codec_patents/
it's likely that Sony is one of them too.

It is a question of a company having a stronger patent position in MPEG-2.
 
TrungGap said:
Man, I gave up on "Iknowall"... In the previous thread, I pointed out to him that H.264 beat MPEG2 in both PSNR and subjective testing (at high bitrate) in studies...

No.

This is the doc you gave me:

http://www.fastvdo.com/spie04/spie04-h264OverviewPaper.pdf

This doc only states that MPEG4 at 8 Mbit/sec can outperform MPEG2 at 20 Mbit/sec.
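Just to put those two bitrates in perspective, here is a back-of-the-envelope sketch (video stream only, ignoring audio and container overhead):

Code:
# Stream size for a 2-hour film at the two bitrates mentioned above.
SECONDS = 2 * 60 * 60  # 7200 s
for codec, mbps in [("MPEG2", 20), ("MPEG4/H.264", 8)]:
    gigabytes = mbps * 1e6 * SECONDS / 8 / 1e9
    print(f"{codec} at {mbps} Mbit/sec -> {gigabytes:.1f} GB")
# MPEG2 at 20 Mbit/sec -> 18.0 GB
# MPEG4/H.264 at 8 Mbit/sec -> 7.2 GB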

This doc never states that MPEG4 outperforms MPEG2 at high bitrates.

I am well aware that at a low bitrate MPEG4 can outperform MPEG2.

The problem is that a good MPEG2 encoding can outperform a not-so-good MPEG4 encoding, and vice versa.

This doc ran its test with only one clip, named "Bighisp".

A single test sequence doesn't tell you much. You need a lot of clips tested to make a valid comparison. The ENCODER performs differently with different sources, and the engineer may be more skilled with MPEG2, so that with some types of clips he can easily get excellent quality out of MPEG2 but fail with MPEG4.

There are so many variables that can influence the quality of the result that you simply can't take one general test and say "this proves that MPEG4 looks better than MPEG2".

Because I could do a very bad MPEG4 encoding and a very good MPEG2 encoding at the same bitrate and state that "MPEG2 always outperforms MPEG4".

This is where you NEED practical experience to tell how the result looks; no general test means a damn when I am in the lab and my master doesn't look good enough using MPEG4.

You don't want to get this point.

There are a lot of advantages of MPEG2 that you can't forget:

First, real-time encoding capability.

Real-time encoding capability gives me the ability to go back and tweak scene by scene, or even frame by frame. This segment re-encode capability is very well integrated with MPEG2 because it has mature authoring/multiplexing tools, and it gives me the ability to equal or exceed the performance of other, less mature codecs like MPEG4 and VC-1.

Second, HD MPEG-2 encoders are cheaper, easier, and better.
 
iknowall said:
What's amazing is your arrogance, and how you pretend to have any credibility after being proven wrong.

You're the one who has zero credibility in this forum. Care to explain how Discrete Cosine Transform (DCT) compresses data? Come on, I'm waiting. While you're at it, why don't you describe the difference between DFT, DST, and DCT.

iknowall about nothing said:
DemoCoder said:
DCT doesn't do ANY compression, you stupid moron.
Yes it does, stupid moron.

The purpose of DCT is reducing the redundancy of the information; DCT deletes video information that is redundant, and you won't notice the difference if it is deleted, so it makes the video smaller, and this is compression.

Also, as proof that you can call DCT compression, I give you this quote:

"At the heart of JPEG2000 is a new wavelet-based compression methodology that imparts a number of benefits over the discrete cosine transform (DCT) compression methods used in JPEG."
http://www.us.design-reuse.com/artic...ticle4595.html

Once again, you are tripped up by your reliance on search engines instead of on knowledge. DCT alone does not perform compression any more than converting from the RGB to the YUV colorspace performs compression. All it does is convert an nxn block of spatial-domain values into the frequency domain. The output of a DCT is actually larger than the input.

It is the quantization and entropy-encoding steps that are responsible for compression. Entropy encoding is what reduces redundancy.

Maybe if you actually spent some time reading about the DCT instead of quoting PR text written by MBAs, you wouldn't look like such a fool. Try to understand the difference between something that actually *DOES COMPRESSION* and something which is a preparatory step, but does no compression itself.

The first step in compressing JPEG or MPEG is to convert RGB to YUV via a colorspace transform. Then U and V are subsampled (decimated). It is the subsampling which throws away the information, not the RGB->YUV conversion.
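To see that concretely, here is a minimal numpy sketch of the subsampling step (a hypothetical 4:2:0 decimation for illustration, not any particular encoder's code). The YUV conversion itself preserves the sample count; it is the 2x2 averaging of U and V that halves the data:

Code:
import numpy as np

# rgb: H x W x 3 array of 8-bit-ish values, H and W even.
rgb = np.random.rand(8, 8, 3) * 255
R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]

# RGB -> YUV (JPEG-style): same number of samples, no compression yet.
Y = 0.299 * R + 0.587 * G + 0.114 * B
U = -0.1687 * R - 0.3313 * G + 0.5 * B + 128
V = 0.5 * R - 0.4187 * G - 0.0813 * B + 128

# 4:2:0 subsampling: average each 2x2 block of U and V.
# THIS is the step that throws information away.
def subsample(c):
    return c.reshape(c.shape[0] // 2, 2, c.shape[1] // 2, 2).mean(axis=(1, 3))

U420, V420 = subsample(U), subsample(V)
kept = Y.size + U420.size + V420.size  # 64 + 16 + 16 = 96
print(kept / rgb.size)                 # 0.5 -> half the samples of full RGB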
 
DemoCoder said:
You're the one who has zero credibility in this forum. Care to explain how Discrete Cosine Transform (DCT) compresses data? Come on, I'm waiting. While you're at it, why don't you describe the difference between DFT, DST, and DCT.

Are you dense? I'm going to stop talking to you; I think I would get better results smashing my head in with a hammer :LOL:

The whole point that you don't want to get is that you typically talk like someone who has only read things on the internet but has NO practical experience at all in the subject of encoding, and says nonsensical things like claiming that MPEG4 outperforms MPEG2 at high bitrates based on an internet .doc.

The fact is that you took this doc to make an assumption about the quality of the codecs, MPEG2 vs MPEG4:

http://www.fastvdo.com/spie04/spie04-h264OverviewPaper.pdf

I can assure you that no one with practical experience in professional MPEG2 encoding would ever take a test like this into consideration to say that MPEG4 outperforms MPEG2. They would consider using a test like this ridiculous, knowing that in the real world there are so many variables that can influence the quality of the result that you simply can't take one general test and say "this proves that MPEG4 looks better than MPEG2".

Because I could myself do a quick, bad MPEG4 encoding and a very good MPEG2 encoding at the same bitrate and state that "MPEG2 always outperforms MPEG4".


You aren't comparing the standards here; you are comparing two different ENCODER designs against a piece of content. What happens with fast motion or film grain can be very different in one versus the other.

This error alone means you absolutely have no clue about how MPEG2/MPEG4/VC-1 industrial encoding works and, most importantly, performs in the real world.

No expert would make an error like this.

This is where you NEED practical experience to tell how the result looks; no general test means a damn when I am in the lab and my master doesn't look good enough using MPEG4.

So if you are the one who pretends to predict what the average quality of movies encoded with MPEG2 HD will be versus the ones encoded with VC-1, well, you would be the last person to listen to.

The fact that you are so arrogant is also proof that you feel inferior for not having any practical experience, and it makes you need to show how good you are at reading things on the internet.

Sorry, but you can ask me to explain every definition of any compression term you want; it doesn't matter, since anyone can just read the exact definition of DCT and repeat it.

Does this make him an expert? No. Does this give him real-world experience? NO.

Does this make him a person to trust for a prediction of which standard will give better quality? Absolutely not.


Now, I am well aware of what the Discrete Cosine Transform is for, and what I said is that at the intraframe level it is used as a better, more compact way to represent the data.

This is what I meant by saying it does compression: it represents the data in a more efficient way. To me this is a form of compression.

What you miss is that the output of the DCT also gets compressed using a statistical method that follows Huffman's rules, so saying that the output of a DCT is actually larger than the input is not true at all, because it immediately gets compressed.

The DCT transforms the data from the spatial domain to the frequency domain.
The spatial domain shows the amplitude of the color as you move through space.
The frequency domain shows how quickly the amplitude of the color is changing from one pixel to the next in an image file.

You miss that "The frequency domain" is a better representation for the data because it makes it possible for you to separate out, and throw away, information that isn't very important to human perception.

The human eye is not very sensitive to high-frequency changes, especially in photographic images, so the high-frequency data can, to some extent, be discarded.

And this is why I said that the DCT throws away redundant information.

But like I said, this is not enough to make anyone an expert, since a real expert is one who actually spends a month in the lab with the encoding engineers trying to get excellent video quality, not one who repeats exactly every definition he can read on the internet.
 
iknowall said:
Now, I am well aware of what the Discrete Cosine Transform is for, and what I said is that at the intraframe level it is used as a better, more compact way to represent the data.

This is what I meant by saying it does compression: it represents the data in a more efficient way. To me this is a form of compression.

Doesn't matter what it is to you, it's just a transform. The data holds the exact same information as the original image data, just in a different domain.


iknowall said:
What you miss is that the output of the DCT also gets compressed using a statistical method that follows Huffman's rules, so saying that the output of a DCT is actually larger than the input is not true at all, because it immediately gets compressed.

First of all: DC didn't miss it; Huffman encoding = entropy encoding. Second: you're wrong, doing entropy encoding on the DCT-ed data would yield fuck all compared to doing it directly on the image data. The compression comes from the quantization of the DCT-ed data.
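To make that concrete, here is a minimal sketch (numpy and scipy assumed; the table is the standard JPEG luminance quantization matrix, used purely for illustration). The DCT output is still 64 full-precision values; it is the divide-and-round that produces the long runs of zeros the entropy coder then exploits:

Code:
import numpy as np
from scipy.fftpack import dct

# Standard JPEG luminance quantization matrix (illustrative).
Q = np.array([[16, 11, 10, 16, 24, 40, 51, 61],
              [12, 12, 14, 19, 26, 58, 60, 55],
              [14, 13, 16, 24, 40, 57, 69, 56],
              [14, 17, 22, 29, 51, 87, 80, 62],
              [18, 22, 37, 56, 68, 109, 103, 77],
              [24, 35, 55, 64, 81, 104, 113, 92],
              [49, 64, 78, 87, 103, 121, 120, 101],
              [72, 92, 95, 98, 112, 100, 103, 99]])

# A smooth, natural-image-like 8x8 block (low frequencies plus faint noise).
x = np.arange(8)
block = (60 * np.cos(np.pi * x / 8)[None, :]
         + 40 * np.cos(np.pi * x / 8)[:, None]
         + np.random.randn(8, 8))

# 2-D DCT-II: 64 values in, 64 values out -- no size reduction here.
coeffs = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

# Quantization: THIS is the lossy step that creates the zeros.
quantized = np.round(coeffs / Q).astype(int)
print(coeffs.size, "DCT values")                                  # 64
print(np.count_nonzero(quantized), "nonzero after quantization")  # only a few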

iknowall said:
The DCT transforms the data from the spatial domain to the frequency domain.
The spatial domain shows the amplitude of the color as you move through space.
The frequency domain shows how quickly the amplitude of the color is changing from one pixel to the next in an image file.

You also miss that "The frequency domain" is a better representation for the data because it makes it possible for you to separate out, and throw away, information that isn't very important to human perception.

The human eye is not very sensitive to high-frequency changes, especially in photographic images, so the high-frequency data can, to some extent, be discarded.

And this is why I said that the DCT throws away redundant information.

But it's not the DCT that throws away the data, it's the quantization.

iknowall said:
But like I said, this is not enough to make anyone an expert, since a real expert is one who actually spends a month in the lab with the encoding engineers trying to get excellent video quality, not one who repeats exactly every definition he can read on the internet.

More comedy gold.

Cheers
 
iknowall said:
Are you dense? I'm going to stop talking to you; I think I would get better results smashing my head in with a hammer :LOL:

The whole point that you don't want to get is that you typically talk like someone who has only read things on the internet but has NO practical experience

Practical experience is irrelevant in this regard; it's mathematics. Amazing that a guy who tosses around references to Shannon rate-distortion theory now wants to try and bring experience into a realm where it is irrelevant. Live by the sword, die by the sword. Don't bring mathematical theories into the debate if you can't handle them.

The DCT does not perform compression *by definition*, period. There is no refuting this basic mathematical truth. No matter how you slice it, it doesn't do what you claim.

You've been proven wrong. Deal with it, bozo.
 
scooby_dooby said:
Actually, GT4 recycles car models much more heavily than Forza does, so how many more UNIQUE car models GT4 has is the question; 20 variations of every model don't count.

So? The point is, there are 3 times as many cars in GT4, which will naturally take up more space, regardless of whether the car models have various pieces of data duplicated from similar cars. Another thing which I'm sure you're missing is that perhaps the GT4 car models have a bit more [static] data behind them because of the immense possibilities with tuning. I wouldn't be surprised if they built in some shortcuts to pack in all the different variations one can achieve by tuning the car differently and feeding the game engine with it.

scooby_dooby said:
And you missed the point entirely. The question isn't whether GT4 will fit on DVD, it's whether a game of that calibre requires 9GB of space. Forza proves it does not, simple as that.

Wake me up when Forza delivers that calibre, okay? :LOL: Seriously, don't get ahead of yourself. Forza is a great game, no doubt, but given the immense content of GT4 (25 courses, music tracks, 700+ cars), it doesn't quite stack up in terms of data requirements.

scooby_dooby said:
Someone threw out 9GB for GT4 as an example of current-gen disc sizes; not only is that completely non-representative of the vast majority of games out today, but we don't even have a breakdown of the disc content to make any sort of assessment of where that disc space was spent.

Without knowing the exact breakdown myself, I can tell you that it's due to a.) content and b.) redundant datablocks.

scooby_dooby said:
So, all I can say to that point is: here's Forza, which has damage modelling, higher-polygon models, better AI, an online mode, and is technically graphically superior, yet takes up less than 1/3 of that space. So it's obviously possible to make a game of GT4's calibre with MUCH less than 9GB. Maybe not on PS2... but that's a different issue.

I know you like having things repeated over and over again, so let's try one more time:

You are comparing two different sets of games, one of which has over 3 times the content (cars, tracks), which would roughly account for the 1/3 less space used in Forza. On the other hand, as I already pointed out, it could very well be that Forza uses the Xbox's built-in hard drive to its advantage. As I already noted, the PS2 doesn't have this luxury, since data travels directly from DVD into memory; thus, if you wanted to reduce expensive seeks (longer loading times), they would have to make sure that a lot of data comes in redundant blocks that can be easily copied into memory in one go. As a reminder, the Xbox360 will not have this advantage, and neither will the PS3. So whether you like it or not, redundant datablocks are something that will be used when possible, and that takes up a lot of space.

Also, I'm still waiting for those exact model sizes on GT4 cars... ;)

Scooby_dooby said:
The car and track models do not take up a lot of room, so that does not work as an excuse for why GT4 was bigger, and I can give you exact sizes for these models when I get home if you insist (hint: 3-5MB each).

If what you say is correct about 3-5MB for each model, then it's pretty easy to calculate what 700 cars would roughly amount to... *hint* 4MB * 700 cars = ~2.8GB. Given your numbers, it would seem that the cars alone would already use as much space as all of Forza, and that's even excluding the track data and the music data (GT4 has quite a lot of music, though no idea to what degree it's compressed). Given this data, I'm not really sure what you're arguing.

As I already posted in one of these threads:

- Blu-Ray has the potential advantage - FACT

<snip>
here's the other post:
phil said:
http://www.beyond3d.com/forum/showpost.php?p=655141&postcount=75

No seriously, I think this whole argument of DVD vs Blu-Ray is kind of stupid, really. Obviously, Blu-Ray holds the greater potential thanks to its much larger storage, which can either reduce expensive seeks or offer more content. Whether it's used is a whole different argument. Developers will be fine for the most part with less space because they'll find ways to deal with it. The only problem I see, and this is Blu-Ray's biggest strength, is that if the PS3 ends up the dominant console as the PS2 has been this generation, I could see a large part of 3rd-party developers making the PS3 their primary platform and, with that, using its potential advantages (which would be, at the very least, the storage space). The problem is when they actually port their game over to the Xbox360: they could end up with the added problem of having less storage space, which would either a.) add porting costs or b.) eliminate the chance of a port altogether if it happens to be too challenging. The alternative situation would be that many 3rd-party developers would not take advantage of the Blu-Ray medium, to make their games more "port-friendly" and avoid those problems. If the PS3 ends up being the very dominant console, though, coupled with the fact that programmers in general tend to be lazy when it comes to space, I could see them using quite a bit of the storage Blu-Ray offers.
 
Gubbi said:
Doesn't matter what it is to you, it's just a transform. The data holds the exact same information as the original image data, just in a different domain.

Nope.

The YUV color model stores color in terms of its luminance (brightness) and chrominance (hue). The human eye is less sensitive to chrominance than to luminance.
YUV gives a better compression rate.

And it is easily demonstrable with a practical example.

You are practically saying that an RGB stream holds the same information as a YUV stream.

Anyone knows that an RGB signal is better than a YUV signal, and that if you convert an RGB signal to a YUV signal you lose quality.

First of all: DC didn't miss it; Huffman encoding = entropy encoding. Second: you're wrong, doing entropy encoding on the DCT-ed data would yield fuck all compared to doing it directly on the image data. The compression comes from the quantization of the DCT-ed data.


I did not lay out all the steps clearly; since you are so anal, here they all are:


Divide the file into 8 X 8 blocks.

Transform the pixel information from the spatial domain to the frequency domain with the Discrete Cosine Transform.

Quantize the resulting values by dividing each coefficient by an integer value and rounding off to the nearest integer.

Look at the resulting coefficients in a zigzag order. Do a run-length encoding of the coefficients ordered in this manner. Follow by Huffman coding.
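As a rough illustration of that last step (the zigzag scan and the run-length pass that feed the Huffman coder; a hypothetical sketch, not any real encoder's code):

Code:
import numpy as np

def zigzag_order(n=8):
    # Walk the anti-diagonals of an n x n block, alternating direction.
    order = []
    for s in range(2 * n - 1):
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        order.extend(diag if s % 2 else diag[::-1])
    return order

def run_length(values):
    # (zero_run, value) pairs, the form handed to the Huffman coder.
    pairs, run = [], 0
    for v in values:
        if v == 0:
            run += 1
        else:
            pairs.append((run, v))
            run = 0
    pairs.append((run, 0))  # end-of-block marker
    return pairs

quantized = np.zeros((8, 8), int)
quantized[0, 0], quantized[0, 1], quantized[1, 0] = -26, 3, 2  # toy values
scan = [quantized[i, j] for i, j in zigzag_order()]
print(run_length(scan))  # [(0, -26), (0, 3), (0, 2), (61, 0)]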



But it's not the DCT that throws away the data, it's the quantization.

Explain to me why, if I convert a video from RGB to YUV, it takes less space.

The DCT ALSO helps to throw away some useless data.

The YUV color model stores color in terms of its luminance (brightness) and chrominance (hue). The human eye is less sensitive to chrominance than to luminance.

What I said is absolutely correct.

"The frequency domain" is a better representation for the data because it makes it possible for you to separate out , and throw away , information that isn’t very important to human perception.

The human eye is not very sensitive to high frequency changes , especially in photographic images, so the high frequency data can, to some extent, be discarded.


More comedy gold.

Cheers


No, this is pure comedy gold; this is the offensive PM I got from DemoCoder just now:


Edited at the request of the moderator.


:rolleyes: How old are you? 7? I really hope the mods take care of this.


Btw, it's simple arithmetic to convert RGB to YUV. The formulas are based on the relative contributions that red, green, and blue make to the luminance and chrominance factors.
There are several different formulas in use depending on the target monitor. For example:
Y = 0.299 * R + 0.587 * G + 0.114 * B
U = -0.1687 * R - 0.3313 * G + 0.5 * B + 128
V = 0.5 * R - 0.4187 * G - 0.0813 * B + 128
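Those formulas are easy to try. A small sketch (numpy assumed; the inverse matrix here is derived numerically rather than quoted from any spec): one RGB -> YUV -> RGB round trip through 8-bit rounding already shifts some pixels by a code value or two, which is exactly the roundoff loss at issue:

Code:
import numpy as np

# Forward transform, straight from the formulas above.
M = np.array([[ 0.299,   0.587,   0.114 ],
              [-0.1687, -0.3313,  0.5   ],
              [ 0.5,    -0.4187, -0.0813]])
offset = np.array([0.0, 128.0, 128.0])

rgb = np.random.randint(0, 256, (1000, 3)).astype(float)
yuv = np.round(rgb @ M.T + offset).clip(0, 255)   # rounding to 8 bits here
back = np.round((yuv - offset) @ np.linalg.inv(M).T).clip(0, 255)

print(np.abs(back - rgb).max())  # typically 1-2 code values, not 0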
 
iknowall said:
Anyone knows that an RGB signal is better than a YUV signal, and that if you convert an RGB signal to a component signal you lose quality.

YUV is not the same as component, any more than RGB is the same as VGA. YUV is a colorspace (incidentally, YUV != YCbCr); sRGB is a colorspace. "Component" is a modulation technique. Conversion from RGB->YUV involves loss depending on how many bits of precision one is willing to burn to avoid roundoff error. One would be hard pressed to call this compression, because what's being added is noise (error).


I did not lay out all the steps clearly; since you are so anal, here they all are:

Well, you finally seem to have looked up how the spatial compression works. But it's too late to repair your loss of credibility. If you were so full of knowledge, why did it take you so many messages to finally turn up a description of how DCT+Quantization+EntropyEncoding works, instead of quoting PR news documents on the web?


Yes, but the DCT ALSO throws away some useless data.

DCT doesn't "throw away" useless data, that's quantization that does that. I suppose one could argue that floating point error accumulation is a form of quantization that "throws away" information, unfortunately, it doesn't necessarily throw away the right information.

No, I'm sorry, but no one views DCT as the mechanism that throws away "useless data". In fact, compression experts would like the DCT to have as little error as possible, since that error is not under the control of the end user, whereas the quantization matrix used IS user-selectable in MPEG. As I explained in another message, one of the design points of the integer pseudo-DCT in H.264 was to avoid the problems with error accumulation and inverse-transform mismatch that plague MPEG-2. But of course, you don't seem to understand that H.264 was designed to have less unintentional error.

If you want to see how much information DCT throws away, I dare you to take a 4:4:4 dataset, perform DCT, perform IDCT, and compare to the original. Also, try to compress the output of DCT with a Huffman utility and see how much compression you get.
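That dare is easy to run. A minimal sketch (numpy and scipy assumed; any image-like array will do) shows the DCT/IDCT round trip coming back to the original to within floating-point noise, i.e. the transform itself discards nothing:

Code:
import numpy as np
from scipy.fftpack import dct, idct

# Arbitrary 4:4:4-style dataset: random 8-bit samples.
data = np.random.randint(0, 256, (8, 8)).astype(float)

forward = dct(dct(data, axis=0, norm='ortho'), axis=1, norm='ortho')
restored = idct(idct(forward, axis=0, norm='ortho'), axis=1, norm='ortho')

# Exact up to floating-point noise: the DCT alone throws nothing away.
print(np.max(np.abs(restored - data)))  # ~1e-13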


Btw, it's simple arithmetic to convert RGB to YUV.

Wow, I am so impressed. How many Google searches did you run? Why are we getting all of this from you now, whereas before, all we got were attempts to appeal to authority? Instead of trying to explain WHY you were right, you tried to quote other people.

I think it's safe to say that it's too late for you to resurrect your credibility as an expert on any subject now.
 
DemoCoder said:
YUV is not the same as component, anymore than RGB is the same as VGA.

A component cable, like an SDI cable and many others, can also carry a video signal in the YUV colorspace, and it is well known that with almost every HD camera on the market, the first thing it does is convert the raw RGB data coming off the CCD to a YUV stream when storing it to tape, to use less space.



YUV is a colorspace (incidently, YUV != YCrCb) sRGB is a colorspace. "component" is a modulation technique. Conversion from RGB->YUV involves loss depending on how many bits of precision one is willing to burn to avoid roundoff error. One would be hard pressed to call this compression, because what's being added is noise (error).

:rolleyes: You miss the fact that YUV stores more relevant data at a lower accuracy than RGB; when you convert between the two colorspaces, you either lose some data, or assumptions have to be made and data must be guessed at or interpolated.




Well, you finally seem to have looked up how the spatial compression works. But it's too late to repair your loss of credibility.


Dude, after your childish and offensive PM, I think it's clear to everyone how sad you are :rolleyes:


If you were so full of knowledge, why did it take you so many messages to finally turn up a description of how DCT+Quantization+EntropyEncoding works, instead of quoting PR news documents on the web?

You are the one who made the ridiculous statement that at any high bitrate MPEG4 looks better, based on a generic .doc you found on the net that doesn't even say this.

Your desperate damage control will not change this fact.

DCT doesn't "throw away" useless data, that's quantization that does that. I suppose one could argue that floating point error accumulation is a form of quantization that "throws away" information, unfortunately, it doesn't necessarily throw away the right information.

:rolleyes: But YUV stores more relevant data at a lower accuracy than RGB, and when you convert between the two colorspaces, you either lose some data, or assumptions have to be made and data must be guessed at or interpolated.


No, I'm sorry, but no one views DCT as the mechanism that throws away "useless data". In fact, compression experts would like the DCT to have as little error as possible, since that error is not under the control of the end user, whereas the quantization matrix used IS user-selectable in MPEG. As I explained in another message, one of the design points of the integer pseudo-DCT in H.264 was to avoid the problems with error accumulation and inverse-transform mismatch that plague MPEG-2. But of course, you don't seem to understand that H.264 was designed to have less unintentional error.

See above


If you want to see how much information DCT throws away, I dare you to take a 4:4:4 dataset, perform DCT, perform IDCT, and compare to the original. Also, try to compress the output of DCT with a Huffman utility and see how much compression you get.

See above.

Wow, I am so impressed.
Wow, I am unimpressed; how could you become so sad and childish as to send an offensive PM like that?
How many Google searches did you run? Why are we getting all of this from you now, whereas before, all we got were attempts to appeal to authority? Instead of trying to explain WHY you were right, you tried to quote other people.

:rolleyes: Are you dense? I spent, what, 20 pages trying to explain WHY I was right in the other thread? No matter what I said, every time I tried to explain myself, all I got in those 20 pages were flames and trolling. It was a total waste of time.

It's obvious why: if it's impossible to have a normal discussion, then instead of making the same mistake again, I took a quote from a TRUSTED and RELIABLE source like Amirm that confirms I was right and that you can't deny.

Because you CAN'T say that he is also wrong, since he is a more knowledgeable person than you and you have no credibility against him.

It's obvious that if you don't want to believe me no matter whether I am right, I look for a source that you can't refuse.

And wow, look now; look at how sad and childish you've become, to send an offensive PM like that:

Edited at the request of the moderator.

This really defines how sad a person you are.

But I guess I could not expect any better from you...

I think it's safe to say that it's too late for you to resurrect your credibility as an expert on any subject now.

Do all the damage control you want; this "too late" business is simply ridiculous and is proof of why it's a waste of time to argue with a troll. And it's only you here who takes an internet forum so seriously as to care so much about your credibility. Like I said, I could not care less about what you or any other troll thinks or says.

I think you had better stfu for being wrong, but in particular for that offensive and sad PM, which I will report to the mods.
 
The Report button is there for a reason. More people should use it. It's the red triangle thing next to the Reputation button.
 
iknowall is a plagiarist. I thought his "explanation" of compression looked funny, because it was too clean. It lacked his usual dyslexic spelling errors and bad grammar, and the English was structured differently. He claimed to have explained how DCT-based compression worked, but the reality is, he cut-and-pasted the following, letter for letter, word for word, from slide 3 of the presentation PDF/PPT at

http://www.ws.binghamton.edu/fridrich/580A/JPEGCompression01.pdf

misrepresenting it as his own explanation.

iknowall said:
I did not lay out all the steps clearly; since you are so anal, here they all are:

Divide the file into 8 X 8 blocks.

Transform the pixel information from the spatial domain to the frequency domain with the Discrete Cosine Transform.

Quantize the resulting values by dividing each coefficient by an integer value and rounding off to the nearest integer.

Look at the resulting coefficients in a zigzag order. Do a run-length encoding of the coefficients ordered in this manner. Follow by Huffman coding.

Here is a cut-and-paste from "Slide 3" of that PDF:

Divide the file into 8×8 blocks.

Transform the pixel information from the spatial domain to the frequency domain with the Discrete Cosine Transform.

Quantize the resulting values by dividing each coefficient by an integer value and rounding off to the nearest integer.

Look at the resulting coefficients in a zigzag order. Do a run-length encoding of the coefficients ordered in this manner. Follow by Huffman coding.

He lacks even the competence to rewrite those sentences in his own voice. You are now officially on my ignore list; it's pointless to debate a hardheaded intellectual poseur who plagiarizes other people's work and passes it off as his own.
 