10M HDTVs sold, 15.5M by end of 2005

Brimstone said:
Long term the future looks to be FED if they can keep the price low, which by most accounts they think they will. FED is a huge threat and I'm sure around the time they start shipping TVs, competing technologies will drop in price drastically in order to survive.

Is that the same as the SED?

Manufacturing costs are expected to be better than PDP or LCD, but I think Toshiba and Canon have expressed that the first SED products will carry a premium, regardless of the cost advantage.
 
london-boy said:
PC-Engine said:
If it is, why would manufacturers give two numbers, one for CR and another for brightness? :p

Well that's why I've always been confused. So, next question, what exactly is the contrast ratio? Why is an 800:1 monitor better than a 400:1 one?
PC-Engine did a good job of explaining ANSI contrast ratio, which is simply the display's ability to show a range of brightness levels within a single image. That's why the X-ray picture is a good example. The bright portions of the image overpower the dark, making it difficult to see the grayscale detail.

There is another way to measure contrast ratio, generally referred to as "on/off" CR, that is becoming more and more common. It is measured by sending the display a pure white signal and measuring the ANSI lumen output (the brightness), then sending a corresponding black signal and measuring the brightness again. The ratio is then the lumen output of the white screen divided by that of the all-black screen.

http://www.hometheatermag.com/frontprojectors/704dlp/index2.html
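
For anyone who wants the arithmetic spelled out, here's a minimal Python sketch of the on/off calculation described above. The lumen readings are made-up example numbers, not measurements from any particular display.

Code:
# Minimal sketch of "on/off" contrast ratio: the luminance of a full-white
# field divided by the luminance of a full-black field.
# The readings below are made-up example numbers.
white_lumens = 550.0   # full-white field reading
black_lumens = 0.25    # full-black field reading

on_off_cr = white_lumens / black_lumens
print(f"on/off CR = {on_off_cr:.0f}:1")   # -> on/off CR = 2200:1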
 
KnightBreed said:
The ratio is then the lumen output of the white screen divided by that of the all-black screen.
Where, if the black screen produced no light at all, the ratio would be infinite. As such the best strategy for a high "on/off" CR as you've described is to get as close to pure black as possible and sod the brightness!
 
Shifty Geezer said:
KnightBreed said:
The ratio is then the lumen output of the white screen divided by that of the all-black screen.
Where, if the black screen produced no light at all, the ratio would be infinite. As such the best strategy for a high "on/off" CR as you've described is to get as close to pure black as possible and sod the brightness!
That is the theory, of course, but in practice there isn't a display on this earth that can produce 0 lumen output with a black signal (notice that I'm careful not to say "no signal"). When calibrated properly, even a CRT won't reach infinite on/off CR.

The on/off contrast ratio is very useful if you know the lumen output of the display. Knowing one without the other obviously doesn't give you the whole story, but that is always the case with product specs. You don't buy on a single spec, you look at the whole package. Right?
 
Also this isn't for cinema use, so the reflectiveness of the screen itself also influences black level (and hence contrast ratio).
 
PC-Engine said:
Screen wipes are useless because you're basically intentionally burning the screen even more and thus shortening the lifespan of the PDP even faster.

Yes, absolutely true. Screen wipes work by aging the phosphors to an even level across the screen (because burn-in is when one portion of the screen has aged unevenly compared to the rest). Regardless, burn-in on current gen plasmas is not really an issue for the vast majority of consumers. In fact Panasonic did some crazy test where they displayed the same image for over a hundred hours and there still wasn't burn-in. It was an informal test though.

PC-Engine said:
Sure if you're talking about conventional LCDs. HDR LCDs have higher CR (higher dynamic range) by default because of the way the LED backlights work with the LCDs. We've already talked about this before, for example Sunnybrook Tech's HDR LCDs at SIGGRAPH.

Yes, but the cost is quite high for now. By that time SEDs might be here and FEDs/NEDs right around the corner.

PC-Engine said:
I've only seen wide color gamut display technology on CRTs and LCDs. When used with CRTs it's less than 100%, when used with tricolor LED backlit LCDs, it's over 100%. I've never heard of any wide color gamut developments for PDPs so it's probably too difficult or too expensive.

A negative does not prove a positive. In other words, just because you haven't heard of it does not imply that it's too difficult or too expensive. You might be correct, but not by that logic. Or it could simply be that the drive for high-end commercial panels is for LCDs.


Sharp has some of the best (imo) LCD panels - I mentioned the Aquos before. That said, they are more expensive than PDPs by a fair margin.

PC-Engine said:
Sure but you have 1:1 pixel mapping for 1920x1080 HD sources so no upscaling needed.

Sure, but that's not the point. The point is that Alexlux's comment about Plasmas being expensive is way off the mark as decent PDPs (compared to decent LCDs) are cheaper.

I personally can't stand RP-DLP. Uneven brightness is horrible and the vertical viewing angles are atrocious.

PC-Engine said:
That's not a RP-DLP. It's a NEC FP-DLP for industrial/cinema use. It's the one used during Spiderman 2 post production. If you had read the article you'd realize this. Anyway the point is LCDs have made major technological progress compared to PDPs.

Sorry, my point was about DLP in the consumer space. I go to the nearby theater that uses FP DLP all the time. The overall point, however, is that LCDs have also been around longer than Plasmas and yet Plasmas are generally 'better' - at least for now.

PC-Engine said:
The eye can perceive much higher CRs than those numbers. CRTs have a CR of about 500 btw

Incorrect. Contrast Sensitivity (for any given scene) of our Eyes is around 300:1 where C = (LMAX - LMIN) / (LMAX + LMIN). If I remember anyhow. ;)

And there are other factors that go into this, such as ambient lighting (which is why bias lighting can be a HUGE improvement) and, of course, source material.

http://www.hdtvexpert.com/pages/shmontrastsan.htm

Putnam reiterates another # I've seen though, 100:1, not the 300:1 I've mentioned.
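
For what it's worth, the formula quoted above is the Michelson form, which gives a value between 0 and 1; figures like 100:1 or 300:1 express the simple Lmax/Lmin ratio instead. A quick Python sketch with made-up luminance values shows the two side by side:

Code:
# Two common ways of expressing contrast, using made-up luminance values
# (arbitrary units, not measurements of any real scene or display).
l_max = 300.0
l_min = 1.0

simple_ratio = l_max / l_min                     # the "300:1" style figure
michelson = (l_max - l_min) / (l_max + l_min)    # always between 0 and 1

print(f"simple ratio: {simple_ratio:.0f}:1")     # -> simple ratio: 300:1
print(f"Michelson contrast: {michelson:.3f}")    # -> Michelson contrast: 0.993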

MfA said:
Also this isn't for cinema use, so the reflectiveness of the screen itself also influences black level (and hence contrast ratio).

Yes, absolutely.
 
I'm dubious of 'expert opinions'. The human eye can't see more than 16,777,216 shades, but I still notice banding in some gradients, and the human ear can't hear above 22 kHz or whatever it is, but I've certainly heard higher frequencies.
 
Shifty Geezer said:
I'm dubious of 'expert opinions'. The human eye can't see more than 16,777,216 shades, but I still notice banding in some gradients, and the human ear can't hear above 22 kHz or whatever it is, but I've certainly heard higher frequencies.

You should be wary. I never implied that experts can't be wrong. However, I do trust them more than the average person, as FUD and internal biases do not make for accurate representations of reality.

As to your particular abilities, have you actually tested them?
 
Incorrect. Contrast Sensitivity (for any given scene) of our Eyes is around 300:1 where C = (LMAX - LMIN) / (LMAX + LMIN). If I remember anyhow.

Putnam reiterates another # I've seen though, 100:1, not the 300:1 I've mentioned.

So - just how much contrast do you need to see in an image? Empirical data suggests the human eye is limited to a dynamic range of 100:1 at any given instant. That means that if you look at a "scene" with objects of different luminance values, you won't be able to discern more than a 100:1 difference between the darkest and lightest objects. Of course, the instant your eye moves, its built-in auto iris function raises and lowers the grayscale boundaries. That's what allows you to perceive shadow detail and also pick out a white cat scurrying along in a field of snow.

You are aware that that number is only true for static images right? ;)

Ever wonder why you can't see a white rabbit in the snow if it's sitting still like the example above? :p

Anyway we're able to perceive contrast levels in the tens of thousands range in moving scenes. BTW the 100 number is due to the optic nerve's limited dynamic range. So yes, in any given still screen the optic nerve is the limiting factor, not the photoreceptors in the pupil.

Screen wipes work by aging the phosphors to an even level across the screen (because burn-in is when one portion of the screen has aged unevenly compared to the rest). Regardless, burn-in on current gen plasmas is not really an issue for the vast majority of consumers.

If burn-in is not an issue then why would manufacturers build screen wipe functions into the PDP? By including that function they're admitting to the PDP's fundamental burn-in problem. By using the screen wipes you're making the PDP dimmer each time you use that function.
 
PC-Engine said:
You are aware that that number is only true for static images right? ;)

Ever wonder why you can't see a white rabbit in the snow if it's sitting still like the example above? :p

Anyway we're able to perceive contrast levels in the tens of thousands range in moving scenes. BTW the 100 number is due to the optic nerve's limited dynamic range. So yes, in any given still screen the optic nerve is the limiting factor, not the photoreceptors in the pupil.

Nope, not aware of that. Got a paper that explains the difference? I constantly read HT journalists referring to #s around that range and I think I have even seen a still photographer referring to the larger range you quote (so I thought it was reversed).

Our eyes (brain) are very good for catching movement. Finding a white rabbit against a white background is difficult because of several reasons. One is that the TV is a 2D version of a 3D environment, so we already are at a disadvantage. Another is that the color range of the panel may not be wide enough to show the true differences in color between the snow and the rabbit's fur.

Here is the stuff I've read. Not saying it's the bible or anything but just so you know where I'm getting my info from.

http://www.gimlay.org/~andoh/cg/faq/GammaFAQ.html#RTFToC12

And if you read #13 below you'll see the numbers (100:1) Putnam refers to as well. So if it's wrong, by all means, show me papers or some other article so I can learn the whys.

Btw, the photoreceptors are in the retina, not the pupil (which is nothing more than a hole that light passes through). :p

PC-Engine said:
If burn-in is not an issue then why would manufacturers build screen wipe functions into the PDP? By including that function they're admitting to the PDP's fundamental burn-in problem. By using the screen wipes you're making the PDP dimmer each time you use that function.

As I said, for the vast majority of consumers. In addition, the first couple of hundred hours of a new PDP are when it is most susceptible to burn-in. After that, image retention (IR) fades away. Therefore the screen wipe can be a useful function, though personally I wouldn't want to rely on it either.
 
I don't think you're understanding this concept. Let me propose a simple example. Let's assume humans can only perceive 100 different levels of brightness at any given instant (static images).

Let's compare a 5000:1 PDP to a 40,000:1 HDR display. On a PDP, you can display 5000 different brightness levels. On an HDR you can display 40,000. Now even if we assume humans can only see 100 different levels of brightness at any one time, the HDR display will be more accurate in displaying the source image because it will have 40,000 levels to choose from to form said image. It's like comparing an image using an 8-bit color palette vs one using a 24-bit color palette. It doesn't matter that we can only see 100 colors or brightness levels at any given instant. The 24-bit image will look better than the 8-bit image simply because there are just more colors to choose from to form that image.

Now switch to moving images and the situation is even worse, because you are seeing 30 images a second and each image has access to that same 24-bit color palette that changes from one frame to the next. With an HDR display you have a palette from 0-40,000 brightness levels; with a PDP you have a palette from 0-5000.
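
To make that argument concrete, here's a tiny Python sketch under PC-Engine's framing that a higher contrast ratio means more discrete levels to snap a source brightness to. The source value and level counts are arbitrary examples, and whether CR actually equals level count is exactly what gets debated further down the thread.

Code:
# Quantize the same source brightness onto a coarse and a fine scale and
# compare the rounding error, illustrating the "more levels to choose from"
# argument. Level counts and the source value are arbitrary examples.
def quantize(value, levels):
    """Snap a 0.0-1.0 brightness to the nearest of `levels` evenly spaced steps."""
    step = 1.0 / (levels - 1)
    return round(value / step) * step

source = 0.31337                    # arbitrary source brightness, 0.0-1.0
coarse = quantize(source, 5000)     # "5000 levels" palette
fine   = quantize(source, 40000)    # "40,000 levels" palette

print(abs(source - coarse))   # larger quantization error
print(abs(source - fine))     # smaller quantization error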


Our eyes (brain) are very good for catching movement. Finding a white rabbit against a white background is difficult because of several reasons. One is that the TV is a 2D version of a 3D environment, so we already are at a disadvantage. Another is that the color range of the panel may not be wide enough to show the true differences in color between the snow and the rabbit's fur.

Yeah but aren't we talking about 2D devices? ;)

If our 2D display devices were a window into the outside world, I'd take a display with a 40,000:1 instead of a 5000:1 CR/DR any day.

BTW white is not a color so the panel doesn't need to have a wide enough color palette, it just needs to have a wide enough CR/DR. :p

HDR is the future, whether it takes the form of LCD or some other technology. Of course we will need source material that has HDR to make use of these displays. The DR of 35mm film can be as high as 30,000:1, but I don't know what happens during the digitization process. :oops:
 
PC-Engine said:
I don't think you're understanding this concept.

Probably not so bear with me.

PC-Engine said:
Let me propose a simple example. Let's assume humans can only perceive 100 different levels of brightness at any given instant (static images).

Ah, that's interesting. I've never read that it's 'levels of brightness' we can detect, just that the absolute ratio from lowest to brightest is 300:1. Does that 300 represent 300 levels of brightness? I believe you are basically saying that contrast ratios represent steps of contrast, right?

PC-Engine said:
Let's compare a 5000:1 PDP to a 40,000:1 HDR display. On a PDP, you can display 5000 different brightness levels. On an HDR you can display 40,000. Now even if we assume humans can only see 100 different levels of brightness at any one time, the HDR display will be more accurate in displaying the source image because it will have 40,000 levels to choose from to form said image. It's like comparing an image using an 8-bit color palette vs one using a 24-bit color palette. It doesn't matter that we can only see 100 colors or brightness levels at any given instant. The 24-bit image will look better than the 8-bit image simply because there are just more colors to choose from to form that image.

I understand the example of more bits for color as there will be less interpolation from the source data. No problem. But the concept that contrast works this way is new to me.

PC-Engine said:
Now switch to moving images and the situation is even worse, because you are seeing 30 images a second and each image has access to that same 24-bit color palette that changes from one frame to the next.

With an HDR display you have a palette from 0-40,000 brightness levels; with a PDP you have a palette from 0-5000.

Ok, this seems to follow logically from what you've said above.

PC-Engine said:
Yeah but aren't we talking about 2D devices? ;)

I thought you might be referring to finding a real rabbit in the real world. Anyhow my point was that moving objects are much easier to detect than a static version of the same object for many reasons, not just contrast related.

PC-Engine said:
If our 2D display devices were a window into the outside world, I'd take a display with a 40,000:1 instead of a 5000:1 CR/DR any day.

Assuming the #s are valid, I would generally agree.

PC-Engine said:
BTW white is not a color so the panel doesn't need to have a wide enough color palette, it just needs to have a wide enough CR/DR. :p

White is given the RGB value of 255,255,255, so it's a color in that sense. :)

PC-Engine said:
HDR is the future, whether it takes the form of LCD or some other technology. Of course we will need source material that has HDR to make use of these displays. The DR of 35mm film can be as high as 30,000:1, but I don't know what happens during the digitization process. :oops:

Yes, if the source material isn't up to snuff, then most of it is just overkill.
 
Black and white are not colors. A B&W television is not a color television. Every color is made up of the 3 primary colors: Red Yellow and Blue. You cannot make black or white out of those 3 colors. Notice it's not Red Green Blue since green is made up of blue and yellow. For all practical purposes, contrast ratios usually equal contrast levels.
 
PC-Engine said:
Black and white are not colors. Every color is made up of the 3 primary colors: Red Yellow and Blue. You cannot make black or white out of those 3 colors. Notice it's not Red Green Blue since green is made up of blue and yellow.

Well I studied this while I was in Italy, so forgive me if my translation sounds off, but isn't that the difference between what Italians call "additive" and "subtractive" colour methods? Or something along those lines?
 
london-boy said:
PC-Engine said:
Black and white are not colors. Every color is made up of the 3 primary colors: Red Yellow and Blue. You cannot make black or white out of those 3 colors. Notice it's not Red Green Blue since green is made up of blue and yellow.

Well I studied this while I was in Italy, so forgive me if my translation sounds off, but isn't that the difference between what Italians call "additive" and "subtractive" colour methods? Or something along those lines?

I'm not really sure, sorry. I was always taught that black and white are not colors because you cannot create them from the 3 primary colors. If you have 0 level of red, 0 level of yellow, and 0 level of blue, you essentially have white + white + white = of course white, but white is not a color so you're basically creating white from white.
 
PC-Engine said:
london-boy said:
PC-Engine said:
Black and white are not colors. Every color is made up of the 3 primary colors: Red Yellow and Blue. You cannot make black or white out of those 3 colors. Notice it's not Red Green Blue since green is made up of blue and yellow.

Well I studied this while I was in Italy, so forgive me if my translation sounds off, but isn't that the difference between what Italians call "additive" and "subtractive" colour methods? Or something along those lines?

I'm not really sure, sorry. I was always taught that black and white are not colors because you cannot create them from the 3 primary colors. If you have 0 level of red, 0 level of yellow, and 0 level of blue, you essentially have white + white + white = of course white, but white is not a color so you're basically creating white from white.


No, I meant that there are 2 systems to form all the colours, one additive and one subtractive (or whatever English people call them). One is made of R G B and the other is made of R Y B.
One forms all other colours by adding the primary colours together to make shades, the other "subtracts" them. It's the difference between paint colours (where the primary colours are R Y B and are added up to make other shades) and the "film" colour where the primary colours are R G B and are "subtracted" (not sure it's the right term) to achieve all other colours.
I'm sure someone who studied physics in the US or England will be able to explain this much better than I can.

Black and white are just black and white.
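
For reference, here's a tiny Python sketch of the additive (light-based) mixing london-boy is describing, using the usual 0-255 RGB convention; the helper function is purely illustrative.

Code:
# Additive mixing: light sources add together, so full red + green + blue
# sums to white, and the absence of any light is black.
def add_light(*colors):
    """Add RGB light sources, clamping each channel to 255."""
    return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(add_light(red, green, blue))   # -> (255, 255, 255): white
print(add_light())                   # -> (0, 0, 0): black (no light at all)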
 
From a photographer's perspective, the idea that humans can only perceive a contrast ratio of 100:1/300:1 or whatever doesn't make sense.

Ansel Adams developed a test for film that showed a film has a dynamic range of 2^11 brightnesses. That is, where a light intensity is rendered black on film, detail will be apparent in brightnesses above that level until the brightness gets over 2^11 times the brightness level in the shadow, above which all is white.

E.g. if the intensity rendered black on film = 100 units, intensities above 204,800 units are rendered white; intensities in between are differentiated.

The human eye is much more sensitive. You can stand in a shadowed doorway looking onto a bright scene and see details in the shadowed and bright areas, where the difference in intensity could be maybe 10,000 times.
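
Spelling out the arithmetic in that example (just the numbers quoted above, not a claim about any particular film stock):

Code:
# The 2^11 film-latitude example from the post above, spelled out.
black_point = 100        # intensity (arbitrary units) that renders as black
film_range = 2 ** 11     # claimed usable dynamic range of the film (2048)

white_point = black_point * film_range
print(white_point)       # -> 204800: intensities above this render as white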
 
Our eyes have a very high dynamic range, on the order of 10^14; however, what the brain sees at any given instant is only 10^2, because the signals have to go through the optic nerve for processing and it only has a dynamic range of 100 at any given instant. Our eyes need to adapt to the conditions before we can see, so for example if you were outside on a sunny day and then suddenly walked into a theater, you would not be able to see anything, let alone shadow details. It's kinda analogous to color palette vs. simultaneously displayed colors.
 
Where did this information come from, that a contrast ratio of 10,000:1 has 10,000 brightness gradients to "choose from"? The contrast ratio is a measured value that is determined in one of two ways:

1) A ratio of the brightness (in lumens, nits, whatever) of a fully white screen to a fully black screen. A display that measures 1000 lumens at full white and 0.5 lumens at black will have an on/off CR of 2000:1.

2) Display a checkerboard pattern of black and white squares. The CR is then a ratio of the average brightness of the white squares to the average brightness of the black squares. This is generally referred to as ANSI CR.

A monitor with a contrast ratio of 10,000:1 might display only two brightness levels, black and white, and have a CR of 10,000:1. A monitor might also display an infinite number of intensities between 1000 lumens and 0.1 lumens, and the CR would still be 10,000:1.
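
A quick Python sketch of the ANSI checkerboard measurement described in (2), using made-up luminance readings for the squares (arbitrary units):

Code:
# ANSI-style contrast: average luminance of the white squares divided by
# the average luminance of the black squares. Readings are made up.
white_squares = [310.0, 298.0, 305.0, 301.0, 296.0, 308.0, 300.0, 302.0]
black_squares = [0.62, 0.71, 0.66, 0.64, 0.69, 0.65, 0.70, 0.63]

white_avg = sum(white_squares) / len(white_squares)
black_avg = sum(black_squares) / len(black_squares)

print(f"ANSI CR = {white_avg / black_avg:.0f}:1")   # -> roughly 457:1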
 