BenSkywalker said:
Demalion-
You were serious with your prior examples? Ok then.
Ben, it is possible to hold a conversation without adding in gratuitous condescension, FYI.
By your usage, it seems a 16-bit texture with a gradient is more detailed than a 32-bit texture trying to achieve the same thing, because the contrast is greater and the number of distinct colors you can count is higher.
It depends on the particular image that is trying to be displayed.
No, under your proposals the intended image data does not matter, only the contrast and the number of distinct colors presented. Hence the examples that prompted the 16-bit versus 32-bit comparison, which fits your propositions and which your rewording does nothing to address.
If you take an image that is designed to have the high levels of contrast that a 16-bit texture would, upsample it, and then downsample it using a 24-bit AA downsample, you will lose the contrast it is supposed to have.
Again, the contrast isn't the only data. That is why your proposal equates "detail" with "aliasing" when in actuality they are capable of diverging.
If an image has less contrast, it has less contrast...that does not mean it therefore has less detail than another image that happens to only have black and white pixels.
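The gradient point can be made concrete. A minimal sketch (the bit depths and the two metrics are my own illustrative choices, not anything proposed in the thread): quantizing a smooth ramp to fewer levels produces larger adjacent-pixel steps, i.e. more local contrast, while carrying fewer distinct values and a larger error against the ideal gradient.

```python
# Hypothetical illustration: a smooth ramp quantized to fewer levels shows
# MORE local contrast (bigger adjacent-pixel steps) while carrying LESS
# information about the original signal.
ramp = [i / 255 for i in range(256)]          # ideal smooth gradient

def quantize(x, bits):
    levels = (1 << bits) - 1
    return round(x * levels) / levels

hi = [quantize(x, 8) for x in ramp]           # 8 bits/channel ("32-bit")
lo = [quantize(x, 5) for x in ramp]           # 5 bits/channel ("16-bit")

def max_step(img):                            # crude "contrast" metric
    return max(abs(a - b) for a, b in zip(img, img[1:]))

def rms_error(img):                           # fidelity vs the ideal ramp
    return (sum((a - b) ** 2 for a, b in zip(img, ramp)) / len(img)) ** 0.5

print(len(set(lo)), len(set(hi)))             # fewer distinct levels in lo
print(max_step(lo), max_step(hi))             # but larger local contrast
print(rms_error(lo), rms_error(hi))           # and larger error
```

By a contrast-only standard the coarser ramp "wins", which is exactly the problem with equating contrast and detail.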
The issue at hand is taking an image that already has a given level of contrast and then altering that: comparing the image that you see at raster time to the native texture.
Now you're comparing the full resolution texture specifically to the supersampled rendering?
You're not making sense...the context of comparison isn't the original texture, it is the sampling of that texture that would have happened without supersampling, compared to the same thing with supersampling.
You do this again with your Mona Lisa example.
More samples-> more data.
You seem to find it consistent to say both that a reduced-resolution presentation of a higher-resolution texture is better when it is not supersampled than when it is, and that more samples are not more detail.
Graphical aid to a prior example you decided to dismiss as not "serious":
x=black
_=white
_x
x_
This is a 2x2 block in the texture, repeated throughout. We need to render it at 1/4 its native resolution. What you propose is that to maintain contrast, we sample black or white and use that. What happens to the white that was there if we pick black? The black if we pick white? It is thrown away...the white or black that might appear next to it to provide the high contrast you want is different data, even though its color value of white would be the same.
The problem is that because you are proposing contrast as the whole of "detail", you are contradicting your premise by actually deliberately dropping detail present in the texture. This is gross error introduction.
What should instead be done is to produce 50% black and 50% white (grey). However...you maintain that contrast was lost, so supersampling resulting in this did not add detail. I maintain that the contrast cannot be expressed at the same time you represent the original texture at the reduced resolution...you lost the contrast by the reduction in resolution, not by supersampling. The contrast you are producing by your approach is artificial...it is not detail, it is not the "particulars" of the texture, it is error.
However, the grey contains the data from both the white and black...their luma and chroma, and their positional relevance for the pixel being output. Just because the same shade of grey somewhere else might be insignificant doesn't matter...this particular grey is a representation of the information that should be there, and more information than black or white would have been in its place.
If the luma and chroma and position of the white and black are not detail, what are they?
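The 2x2 block example can be written out directly; a sketch in plain Python (the variable names are mine):

```python
# The repeating 2x2 texel block from the example: 0 = black, 1 = white.
tile = [[0.0, 1.0],
        [1.0, 0.0]]

# Rendering at 1/4 resolution, one output pixel covers all four texels.

# Contrast-preserving point sample: pick a single texel (here top-left).
point_sampled = tile[0][0]        # black; the white texels are discarded

# 2x2 box-filtered supersample: average everything the pixel covers.
box_filtered = sum(sum(row) for row in tile) / 4   # 0.5, mid grey

print(point_sampled, box_filtered)
```

The grey output carries luma from all four source texels; the point sample keeps the pattern's contrast only by throwing half of the texture's data away.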
You seem to be ignoring what the texture is trying to portray...a texture isn't just a bunch of colors and contrasts, it is the position of those colors and contrasts as well. Even when supersampling isn't introducing new colors in screen rendering (does your analysis recognize that part?), it is possible that it will still introduce "positional data" (a term I'm making up; I don't know of a better name yet).
My analysis is based on what actually happens. I've cited numerous examples of circumstances that reduce detail.
No, your examples focus on contrast.
Anyway, there are two discussions...one where you maintain supersampling can't introduce texture detail, and one where you disagree with my "positional data" proposition. I'll cover the latter next.
Color positioning is only properly done on an anisotropic basis; other than that, you are simply amplifying errors by taking an isotropic sampling pattern and then running it over another isotropic sampling pattern. You are compounding the problem in terms of positional data.
Another example:
Does a 1600x1200 image have more detail in each group of 2x2 pixels than an 800x600 image has in each one? Yes. Does it have more error? If your criterion is the point sample at the very center of the 800x600 pixel, then yes. If your criterion is the representative information of the screen area covered by that pixel, then no. If you supersampled those pixels and represented the color in an 800x600 pixel, both the data and the error (depending on your evaluation) are included.
If the entire reason you have a point sample at the center of the 800x600 pixel is to be representative of the area covered by the pixel (which is true for screen rendering), then you would be missing the point by focusing on the "error" of deviation from that sample introduced by using the samples representative of the four 1600x1200 pixels.
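A toy numerical version of that argument (the test signal, sample positions, and fineness of the reference sum are my own assumptions): measure a single centre point sample and a 4x sub-sample average against the true area mean of a pixel containing sub-pixel-frequency detail.

```python
import math

def signal(x):
    # Some detail that varies faster than one pixel, over the pixel [0, 1].
    return 0.5 + 0.5 * math.sin(40 * x)

# "Ground truth": the mean of the signal over the pixel's area,
# approximated with a very fine midpoint sum.
N = 10_000
true_mean = sum(signal((i + 0.5) / N) for i in range(N)) / N

# One point sample at the pixel centre.
centre = signal(0.5)

# Four evenly spaced sub-samples, averaged (a 4x "supersample").
supersampled = sum(signal((i + 0.5) / 4) for i in range(4)) / 4

# The supersampled value lands closer to the representative area mean.
print(abs(centre - true_mean), abs(supersampled - true_mean))
```

If "error" means deviation from the centre point sample, the average loses by construction; if it means deviation from what the pixel's area actually contains, the average wins.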
Are you comparing SSAA to a higher resolution, rather than to the image of the same resolution without SSAA applied?
No, there is no need to.
So then you are comparing SSAA to the image at the same resolution without SSAA and saying texture detail can't be introduced?
No, I don't need a different line of reasoning, since all supersampling is not trilinear filtering.
It is simply a redundant isotropic filter; SSAA just adds yet another redundant, inaccurate isotropic filter on top of it.
You are fixated on error introduction inherent in isotropic sampling, yet have no problem with error introduction from what you've proposed?
Mip map levels are additional detail, but if the sampling and mip map selection is introducing error, by sampling them more, you might be sampling more error instead of more detail.
Which, outside of staring at a wall, isotropic filtering is introducing anyway, by your own positional-data standard.
The problem with isotropic bilinear texture filtering is failing to sample sufficiently to the resolution of the screen output due to sample pattern determination...but we are talking about supersampling beyond the resolution of the screen. They are not the same thing, despite your attempt to equate them by describing them both as isotropic.
No, AF is the better way to increase the accuracy and detail of textures, not the only way. Are you now saying supersampling doesn't increase accuracy?
In the particular instance of textures, no, it doesn't. It does in terms of image integrity (pixel popping most notably) and in terms of edges in particular.
You're equating the shortcomings of isotropic texture filtering with screen supersampling. You still haven't established how that makes sense.
Why are you making this "trilinear OR AF", and "SSAA OR AF"?
MSAA and AF are overall by far superior in terms of AA because they are based on what the image is actually supposed to be. SSAA and trilinear (by default) are isotropic and inaccurate; SSAA in particular is a 2D filter.
You're ignoring the factor of the resolution for the sampling. Bilinear isotropic texture filtering fails because the resolution of sampling is below that of the representation relative to the screen. SSAA does not have that problem.
Isotropic sampling patterns work because the representation is isotropic...the screen. When the representation is of something 3 dimensional, they are inferior to anisotropic sampling (of sufficient degree). However, they are still better than no sampling of extra detail at all.
Not when you are sampling data in a manner that is contrary to what the image is supposed to portray. If you were trying to take a picture of the Mona Lisa, would you do so with the camera 18" off the wall it is hanging on, from ten feet away? If you took four of those pictures and blended them together, do you think it would look better than one taken head-on, standing a few feet back?
The fallacy of your example is that the picture is supposed to be of the Mona Lisa, 18" off the wall, 10 feet away. That picture is the screen. What you are pointing out is that orienting the screen like that is not the best way to view the Mona Lisa in full detail, but viewing the Mona Lisa in full detail wasn't the point; orienting the screen was.
For detail introduction, anisotropic > supersampling for a texture oriented away from the plane of the screen. However, anisotropic + no extra sampling at all < anisotropic + supersampling. Why? Because the supersampling is sampling extra anisotropically sampled data.
For the last case you listed, you are performing an isotropic filter on top of the anisotropic thus reducing the detail of the texture.
Reducing detail? How? You are not answering that; you are providing examples that don't seem to hold together well, and proposing that contrast and detail are synonymous, and therefore that aliasing is also synonymous with detail.