(Emphasis mine)
You do exactly that in your visual comparison. You compare one image generated with nearest neighbour sampling to another generated with linear interpolation, then you claim that the latter "blurs" the image because it doesn't produce hard edges like the former. But both are guesses. None of them is inherently more correct than the other – the information of what is supposed to be in between the samples just isn't there!
Absolute nonsense. Really, this makes no sense at all.
Are you trying to argue the semantics of the word "hard"? Well, let me repeat since you seem to have skipped it. In this discussion, "hard" and "soft" were defined according to the following:
Xalion said:
Frequencies here are defined as the intensity difference between pixels. So a "high" frequency produces "sharp" lines. In other words, a high frequency corresponds to a sudden change from one intensity to another much higher or lower. Those definitions just give context to the discussion.
Do I need to make it bigger or perhaps repeat it a 4th time? There is no guessing involved in the definition. When I say something is "blurred", I am specifically referring to the fact that high frequencies are suppressed. You may take issue with that definition, but it has been the definition the entire time. "Hard" lines are lines with high frequencies. "Soft" lines are those with low frequencies. There is no question about that.
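To put that working definition in a couple of lines (a minimal sketch only; numpy and the two illustrative arrays are my own choices, not something from the thread):

```python
# Minimal sketch of the working definitions used in this discussion.
import numpy as np

hard_edge = np.array([1, 1, 5, 5])   # sudden jump in intensity
soft_edge = np.array([1, 2, 4, 5])   # gradual ramp between the same endpoints

# "Frequency" here: the intensity difference between neighbouring pixels.
# A large difference = a high frequency = a hard line.
print(np.diff(hard_edge))  # [0 4 0] -> one large jump, a hard edge
print(np.diff(soft_edge))  # [1 2 1] -> the change is spread out, a soft edge
```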
There is absolutely NO guessing in what I did. I spelled out the EXACT conditions for the experiment. I show visually and mathematically exactly what happens. Several different ways. There is no guessing.
Let us go over what an example is. An example can be defined as "an instance (as a problem to be solved) serving to illustrate a rule or precept or to act as an exercise in the application of a rule". Now, I wanted to illustrate what happens under linear interpolation, so I set up an example. Like most examples, I looked for some key qualities:
1) Simple. The example should not complicate matters beyond what is needed to demonstrate the effect. You can always make things more complicated. However, you rarely should - especially when you are trying to demonstrate an effect.
2) Repeatable. Someone else should be able to take your calculation and repeat it for themselves. Note that this requires that you not use any special tools, and that you generally keep the assumed knowledge to a minimum.
3) Easy to interpret. You don't want people guessing at what your example means.
So let us review my example. First, I chose a 1 dimensional set of points: two color fields with a line between them, defined by the following values:
{1,1,5,2,2}
That is simple. The small number of points makes it repeatable. It is very easy to interpret. Each number corresponds to the intensity of a pixel. So this suits the need for an example very well. Note that the example is independent of representation.
I chose to examine this example using 3 separate methods. The first was visual. You MUST make some assumptions to represent this visually. I spelled those assumptions out. To make it large enough to see, I used square pixels and expanded it from each number representing 1 point to each number representing a 5x5 grid. This is simple. It is easy to repeat. It is easy to interpret. For upscaling, I chose linear interpolation. Once again, this is simple. You can perform it with a piece of paper if you need to.
Then there is no guessing whatsoever involved. You can step through the example yourself if you need to. It isn't hard. Visually, there is absolutely no guessing. Mathematically, there is absolutely no guessing. You can calculate for yourself the exact moment things happen and why they happen.
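If it helps, here is a minimal sketch of that step-through (numpy, the 5x factor, and the nearest-neighbour expansion used for comparison are my own choices for illustration; the signal is the {1,1,5,2,2} example above):

```python
# Minimal sketch: upscale the example signal 5x two ways and compare the edges.
import numpy as np

signal = np.array([1, 1, 5, 2, 2], dtype=float)
factor = 5

# Original pixels sit at positions 0..4; the upscaled pixels are spread
# evenly across the same range.
x_in = np.arange(len(signal))
x_out = np.linspace(0, len(signal) - 1, len(signal) * factor)

nearest = signal[np.round(x_out).astype(int)]  # nearest-neighbour expansion
linear = np.interp(x_out, x_in, signal)        # linear-interpolation upscale

# The hard edge (1 -> 5) survives the nearest-neighbour expansion as a single
# jump of 4, but linear interpolation spreads it into a ramp of small steps.
print(np.diff(nearest).max())  # 4.0
print(np.diff(linear).max())   # ~0.67
```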
Now, the visual comparison does assume square pixels. I realized people might take issue with that. So I did a purely mathematical comparison. Note that the mathematical comparison depends on only 2 assumptions. A: That the picture can be represented by a series of pixels. B: That those pixels are arranged in a grid. That is it. From there, you may need a calculator to actually perform the DFT, but you can repeat the example exactly. There is no guessing. There is no gray area. You can do the math yourself and you will discover the exact same result.
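And here is a minimal sketch of the DFT side (again numpy; pairing the linear upscale with a nearest-neighbour expansion of the same length is my choice here, purely so that both spectra share the same frequency bins):

```python
# Minimal sketch: compare the magnitude spectra of the two 5x upscales.
import numpy as np

signal = np.array([1, 1, 5, 2, 2], dtype=float)
x_in = np.arange(len(signal))
x_out = np.linspace(0, len(signal) - 1, len(signal) * 5)

nearest = signal[np.round(x_out).astype(int)]
linear = np.interp(x_out, x_in, signal)

# Magnitude spectra with the DC term removed, so only the pixel-to-pixel
# changes (the "frequencies" as defined above) are compared.
spec_nn = np.abs(np.fft.rfft(nearest - nearest.mean()))
spec_li = np.abs(np.fft.rfft(linear - linear.mean()))

# Upper half of the bins = the high-frequency region in this discussion.
high = slice(len(spec_nn) // 2, None)
print(spec_nn[high].sum())  # appreciable high-frequency content
print(spec_li[high].sum())  # much smaller: high frequencies are suppressed
```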
I then gave a method for visually comparing large samples using the above mathematical comparison. Once again, no guessing. Once again, everything comes from exactly where it comes from.
Perfect reconstruction of the original signal is therefore impossible, that information is lost.
No kidding? I mean, it's almost as if everyone involved in this thread hasn't been saying this from the very beginning, right? Oh wait, we have.
Thus I will ask again: your original claim was that upscaling "destroys hard lines" – compared to what?
And I have answered you 3 times now. We have been discussing frequency shifts caused by upscaling - specifically linear upscaling. So we have been comparing relative frequencies between an image and its upscaled counterpart.
You are trying to turn this discussion into a subjective wasteland. The problem is that there are objective definitions involved. Frequency has been defined as the difference in intensities between pixels. Hard and soft are defined as high and low frequencies respectively. Blur has been defined as a suppression of high frequencies.
All of these definitions are independent of pixel shape and monitor. That should explain why we can define line art as hard edged regardless of the monitor we show it on: the definition does not involve pixel shape or the monitor.
What you are trying to argue is basically equivalent to saying "certain broken monitors cannot show red; therefore, red content in pictures can never be compared because it might not always show up!".
The definitions are there. They are independent of pixel shape. They are independent of monitor. I have shown that at least with linear upscaling, my original statement is correct. As a matter of fact, because of the general nature of the DFT proof, to claim my original statement was incorrect would require you to disprove one of the two assumptions. Here they are:
A) Pictures can be represented by a series of pixels
B) Those pixels are arranged in a grid
For the first, I think you are going to have a hard time disproving it. For the second, there are indeed some applications where you can't make this assumption. Unfortunately, computer monitors and televisions don't count themselves among these applications.