Glad to see there are some people here who have not been marketing-hyped into the whole 40,000:1 scene (no pun intended). I had never heard of the 100:1 thing, but it makes a lot of sense. Once you sit down in a single viewing "setup," your pupil is going to acclimate to a particular level, after which the rods and cones in the retina will only be able to distinguish so many levels between full-on and off. A display that can wildly blow past this 100:1 range from one scene to the next will end up causing just as many problems, as impressive as 40,000:1 may sound: the pupil has to readjust to scale to the higher levels, and if the next scene drops back down to the lower range, you "blind out" until your pupil can adjust again and your retina can re-sensitize. So what good is that? Now, you can argue that maybe a scene contains whiteness and darkness levels that contrast to 40,000:1 (as an example) in and of itself. Your pupil is still going to adjust to an average brightness range for the entire picture. You'll get a 100:1 contrast range somewhere in the middle, and stuff outside of that range will either wash out to full white or crush to full black. So, again, this capability ends up being a rather dubious claim, taking into account what human eyes are capable of and how they work.
To add to this, consider the 24-bit color system: 16 million colors, or so? Yeah, but that's a "marketing" number. For any one primary, the most it can achieve is 256 possible shades between full-on and off. That goes for white, for red, for blue, for green, and for any combined color you could achieve in an additive color system. So realistically, the most tonal resolution you can genuinely get out of that 24-bit color system is 256 shades of any specific color. It's not a matter of being able to discern 16.7 million colors; it's a matter of being able to distinguish a gradient of colors that only has a resolution of 256 unique shades. That is often what you are seeing when you see banding in digital video: the image had to jump from one shade to the next adjacent shade, where it could have used 3 or 4 finer shades in between to really make a seamless transition. You don't have those in-between shades, because you only have the 256 shades supported by the system.
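To put actual numbers on the above, here's a minimal sketch (pure Python, no libraries) of where the 16.7 million figure comes from versus the 256-shade ceiling per primary, and how quantizing a smooth ramp to those 256 codes is exactly the step-jumping that shows up as banding:

```python
# Sketch: 24-bit color gives 2^8 = 256 shades per primary, and 256^3
# combined colors -- but any single-color ramp only ever has 256 steps.

BITS_PER_CHANNEL = 8
levels = 2 ** BITS_PER_CHANNEL          # 256 shades per primary
total_colors = levels ** 3              # 16,777,216 "marketing" colors

# Simulate a smooth horizontal gray ramp across 1024 pixels,
# then quantize it to 8 bits per channel.
width = 1024
smooth = [x / (width - 1) for x in range(width)]        # ideal 0.0..1.0 ramp
quantized = [round(v * (levels - 1)) for v in smooth]   # snapped to 0..255 codes

distinct = len(set(quantized))
print(levels, total_colors, distinct)   # 256 16777216 256
```

Even though the source ramp has 1024 unique intensities, the output can only land on 256 of them; the 3-or-4-finer-shades it "wanted" in between simply don't exist in the format.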
Now, this is not to say that you cannot achieve contrast ratios higher than 256:1 on these "high contrast" displays (say, anything that claims over 300:1). You can certainly goose the contrast control. However, you then enter the realm of making the picture look more unnatural than natural, not to mention less accurate/faithful to the original signal the digital stream has delivered to you. 24-bit digital video was intended to deliver roughly 256:1-type images accurately, so boosting it to 40,000:1 after the fact is really begging for additional problems. Remember how banding on certain 24-bit video content is already just on the verge of perceptibility? That 256-shades-per-color resolution is falling short. Now imagine arbitrarily blowing out the contrast control on a scene like that, and you will have a real problem with banding becoming far more than just barely perceptible. Is that quality, high-performance video? The CR is mind-boggling, right? But it also succeeded in pushing the capabilities of 24-bit video into a zone where it does not belong.
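Here's a rough sketch of why goosing the contrast makes banding worse. I'm modeling the contrast control as a simple linear stretch around mid-gray (an assumption for illustration; real displays apply fancier curves, and the `gain` of 4x is deliberately aggressive), but the mechanism is the same: stretching spaces the surviving 8-bit codes further apart, so each visible band gets wider.

```python
# Sketch: an aggressive contrast boost on 8-bit codes. Assumed model:
# linear stretch around mid-gray, clipped at the ends (crushed blacks
# and blown-out whites). gain and pivot are illustrative values.

levels = 256
gain = 4.0                      # illustrative, aggressive contrast boost
pivot = 0.5                     # stretch around mid-gray

def boost(code):
    v = code / (levels - 1)                    # 0.0..1.0
    v = (v - pivot) * gain + pivot             # stretch the contrast
    v = min(1.0, max(0.0, v))                  # clip out-of-range values
    return round(v * (levels - 1))             # back to an 8-bit code

out = [boost(c) for c in range(levels)]

# In the un-clipped middle of the range, adjacent input shades now land
# roughly 'gain' codes apart instead of 1 code apart -- wider bands.
mid = out[120:136]
steps = [b - a for a, b in zip(mid, mid[1:])]
print(steps)  # each step is ~4 codes wide instead of 1
```

The shades that used to sit between those codes are gone entirely, and everything outside the stretched window piles up at 0 or 255 — which is the "wash out / crush" behavior described above, just self-inflicted.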
A 400/500/600/1 million:1 CR system sounds great, but no existing digital video format we use today can drive it to its full potential. It's just not up to the task.
...and as LB correctly surmised, yes, we most certainly would have to worry about "tanning" and snow blindness if we had TVs that actually displayed the full contrast level of real life (it's why we have adjusting pupils, so we are not terminally white-blinded or black-blinded by mere daily life on Earth). That could really break the bounds of what is considered enjoyable viewing. Suffice it to say, "real" is good, but there is also a point where it is "too real" for its own good.
Where we could really find some benefit is a new digital video model that supports greater color shade resolution than 256. That would address the banding, and hopefully alleviate artifacting in low-light scenes. For example, if we could have a 32/33-bit color system and then use a contrast-ratio filter that squeezes the CR back down to what we already have with the 24-bit model (i.e., same ultimate CR, just more/finer shades), banding should be nearly (if not completely) seamless. Oh, that, and perfect a way to do artifact-free lossy compression. That would handle the lion's share of what ails current digital video, rather than worrying about 40,000:1 CR displays.
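The "same CR, finer shades" idea above can be sketched numerically. Using 10 bits per channel as a stand-in for the hypothetical deeper format (the 32/33-bit system mentioned above works out to roughly 10-11 bits per primary), the total range stays 0.0 to 1.0, but the worst-case quantization error shrinks by about 4x:

```python
# Sketch: same overall contrast range, carved into finer shades.
# Comparing 8 bits per channel against a hypothetical 10-bit channel.

def quantize(value, bits):
    """Snap a 0.0..1.0 value to the nearest representable shade."""
    levels = 2 ** bits
    return round(value * (levels - 1)) / (levels - 1)

# Worst-case error is half a step; range is identical for both depths.
err_8  = 1 / (2 * (2**8  - 1))   # ~0.00196 of full range
err_10 = 1 / (2 * (2**10 - 1))   # ~0.00049 of full range

print(err_8 / err_10)  # ≈ 4.01 -- same CR, roughly 4x finer shades
```

Those 3-or-4 missing in-between shades from the banding example now exist, which is exactly the fix being argued for, without touching the display's contrast range at all.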