No amount of "postprocessing at the decoder" is going to fix the problems stemming from dissolves, fade in/out, dark scenes approaching black level, and detail corruption in high motion scenes.
Actually, post-processing is a necessity anyway (as is pre-processing at the CCD/CMOS capture level). But all these problems you complain about have been, or are currently being, solved and addressed. In fact, today these are largely human decision problems (outside the constraints imposed by the storage medium).
FWIW, my impression is that the new techniques implemented in MPEG4 (tracking moving entities and such) are simply addressing additional ways to compress data. I am unaware of any refinements which specifically address video quality issues under demanding/extreme circumstances (specifically, conditions that give the current formats trouble).
That's because we're talking about compression, not filtering. You're talking about filtering. The moment you go digital, it immediately becomes a problem of how much data you can handle (upwardly limited by how much data you can sample). Sure, it'd be nice if we could watch uncompressed sources from something like a Thomson Viper rig, but let's be reasonable. The amount of data generated simply makes it unfeasible, so invariably it becomes a problem of how much data you can toss without degrading the quality enough to be disruptive (which is a difficult boundary to define).
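To put a number on "the amount of data generated", here's a back-of-the-envelope calculation. The parameters are illustrative assumptions (a generic 1080p 4:2:2 10-bit stream, not the Viper's actual specs):

```python
# Rough data rate for uncompressed HD video.
# All parameters are illustrative assumptions, not any camera's exact specs.
width, height = 1920, 1080      # frame dimensions in pixels
bits_per_pixel = 20             # 10-bit 4:2:2 sampling: 10 (luma) + 5 + 5 (chroma avg)
fps = 30                        # frames per second

bits_per_second = width * height * bits_per_pixel * fps
mbps = bits_per_second / 1e6
gb_per_hour = bits_per_second * 3600 / 8 / 1e9

print(f"{mbps:.0f} Mbit/s uncompressed, about {gb_per_hour:.0f} GB per hour")
```

Over a gigabit per second, and more than half a terabyte per hour of footage, which is why tossing data is not optional.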
"Special conditions" that you bring up are special because they're difficult to compress. If it's difficult to compress, then chances are it's going to become a data problem, and the storage format is going to determine how much headroom you have to deal with that problem. Unfortunately, the feature set of a particular decoding device imposes another limit on the language you can use to describe your problematic data (or scene, if you will).
Obviously you're going to have to define a feature set of techniques that a data decoder is going to be able to understand and stick with it if you want to set anything resembling a standard. What MPEG4 gives you (say for instance over MPEG2) is a much more verbose language to describe the data in question. Special conditions start becoming less special because you now have a wider range of options in attacking the data problem.
Well, getting to the root of the problem is easier said than done, wouldn't you say? I mean, what alternatives are there? Laserdisc Ver. 2?
That still wouldn't get to the root of the problem; that lies further upstream. To a point, you can argue whether there even *is* "a problem". Fundamentally you're dealing with the process of reproduction. With reproduction comes interpretation... and that is truly the problem (that is, if you believe we should all see things the same way).
The problem now is that the digital video paradigm has been a "race for more compression" where it should be a "race for better quality".
Actually it's been more a race for tradeoffs, not compression... Compression is easy; deciding and managing tradeoffs is hard.
One is only asking that digital be called upon to fulfill its full potential as a truly superior successor, not just a get-some/lose-some/but-it's-newer proposition.
Well, for the most part it *has* proven to be so. But there is no truly superior solution, since new formats always involve trade-offs... However, the benefits have truly been greater than the drawbacks...
It's the marketing department which determines the final quality more than the codec developers; they don't determine what bitrates to use.

Actually, most *marketing* people wouldn't understand bit-rate beyond "more == better quality" speak... Bit-rate is determined more on a practicality basis. Now if we're talking broadcast, then it's a bit different because available bandwidth is somewhat more variable.
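"Practicality basis" is easy to see with some disc arithmetic. Assuming a nominal dual-layer DVD-9 and a two-hour feature (both illustrative figures), the average video bitrate more or less falls out of the format rather than being anyone's quality decision:

```python
# Rough average-bitrate budget for a DVD title.
# Numbers are illustrative assumptions: nominal DVD-9 capacity, 2-hour feature.
disc_bytes = 8.5e9          # dual-layer DVD-9, nominal capacity
runtime_s = 2 * 3600        # two-hour feature
audio_mbps = 0.448          # one 448 kbit/s AC-3 track, a common DVD rate

total_mbps = disc_bytes * 8 / runtime_s / 1e6
video_mbps = total_mbps - audio_mbps
print(f"average video budget: {video_mbps:.1f} Mbit/s")
```

Add extras, more audio tracks, or a longer runtime, and the video budget shrinks accordingly; that's the practicality the encoder has to live inside.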
I keep repeating this, but no one seems to get it. The problem isn't necessarily bit rates.
Yeah and it's you!
The problem isn't necessarily bit rates. You can throw as much bit rate as you like at an MPEG-based video, and there will still be problems in certain basic situations. Pixelation under inadequate bitrates is only one of those problems, and an easy one to fix at that.
No, but bit-rate gives you a basic window to work within the problem. However, since encoders aren't defined by the standard (only the decoder is), quality can vary from encoder to encoder...
The choice of MPEG2 vs. MPEG4 is irrelevant,
Ah, well this is finally something you *are* spot on about... (although since MPEG4 is a logical improvement/evolution over MPEG2, it does bring more benefits specifically from a feature headroom point of view)
As an industry spec, do you honestly think they will use the extra data headroom in MPEG4 to ease back on compression? Of course not! They will just compress stuff twice as much as MPEG2, and you end up with a result not that much better in quality than what you had before MPEG4.
Answer? YES!!! and NO!!! It all really depends on the situation... MPEG2 encoders *have* improved over the past 5 years, and their effect on DVD has been noticeable. Some of this has come from improved mastering processes as well, but DVDs mastered today have the benefit of encoders that give you more headroom in difficult scenes than they had even just 2 years ago.
As far as production goes, improvements in digital compression have allowed video cameras (e.g. DVCAM/DVCPro, IMX, HDCAM) to capture higher and higher quality scenes while still maintaining manageable volumes of data. And it's trickled down into consumer space (e.g. DV, MicroMV)...
OTOH, it can also be a tool that producers use to broadcast more, often tilting in the direction of more channels at the cost of quality...
It's a vicious cycle until people wake up and realize the "race for more compression" should not be the primary issue.
People aren't going to wake up, because there is no race for compression... Ever since somebody figured out how to quantize an image, it's been a problem of tradeoffs (arguably one of quality)...
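The quantization tradeoff is easy to sketch. Here's a minimal, purely illustrative example using scalar quantization on a synthetic gradient (real codecs quantize DCT coefficients, but the bits-versus-error tradeoff is the same in spirit):

```python
# Minimal sketch of the quantization tradeoff: fewer bits per sample
# means coarser levels and larger reconstruction error.
# The "image" here is just a synthetic ramp, purely illustrative.
def quantize(samples, bits):
    levels = 2 ** bits
    step = 256 / levels                      # assume an 8-bit source range
    return [round(s / step) * step for s in samples]

ramp = list(range(0, 256, 8))                # synthetic gradient
for bits in (8, 4, 2):
    recon = quantize(ramp, bits)
    max_err = max(abs(a - b) for a, b in zip(ramp, recon))
    print(f"{bits} bits/sample -> max error {max_err:g}")
```

Cut the bits and you save data but lose fidelity; spend the bits and you keep fidelity but blow the budget. Every digital format lives somewhere on that curve, which is the whole point about tradeoffs.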