Could activation of SSAA improve texture quality?

BenSkywalker said:
You're getting sampling confused with blur. You seem to have the idea that, if you are looking at a very detailed texture with only red and green texels, you should only see red and green pixels on your screen. In reality, brown is the correct thing to see when you are limited by resolution.

You are talking about what is tolerated as acceptable error, not what is correct.

Actually, he is talking about what is correct. Imagine you set up a similar real world case and snapped a photo ... you'd get brown pixels in between there too. Heck, even your eyes work that way.
 
BenSkywalker said:
Demalion-

You were serious with your prior examples? Ok then.

Ben, it is possible to hold a conversation without adding in gratuitous condescension, FYI.

By your usage, it seems a 16-bit texture with a gradient is more detailed than a 32-bit texture trying to achieve the same thing, because the contrast is greater and the number of distinct colors you can count is higher.

It depends on the particular image that is trying to be displayed.

No, for your proposals, the intended image data does not matter, only the contrast and distinct colors presented. Hence the examples that prompted the 16-bit and 32-bit comparison that fits your propositions, which your rewording does nothing to address.

If you take an image that is designed to have the high levels of contrast that a 16-bit texture would, upsample it, and then downsample it utilizing a 24-bit AA downsample, then you will lose the contrast it is supposed to have.

Again, the contrast isn't the only data. That is why what you are proposing is equating "detail" to "aliasing" when they are in actuality capable of diverging.

If an image has less contrast, it has less contrast...that does not mean it therefore has less detail than another image that happens to only have black and white pixels.

The issue at hand is taking an image that already has a given level of contrast and then altering that: comparing the image that you are seeing at raster time to the native texture.

Now you're comparing the full resolution texture specifically to the supersampled rendering?

You're not making sense...the context of comparison isn't the original texture, it is the sampling of that texture that would have happened without supersampling, compared to the same thing with supersampling.
You do this again with your Mona Lisa example.

More samples-> more data.
You seem to find it consistent to say that a reduced resolution presentation of a higher resolution texture is better when it is not supersampled from a higher resolution than when it is, and that more samples is not more detail.

Graphical aid to a prior example you decided to dismiss as not "serious" :-?:
x=black
_=white

_x
x_

This is a 2x2 block in the texture, repeated throughout. We need to render it at 1/4 its native resolution. What you propose is that to maintain contrast, we sample black or white and use that. What happens to the white that was there if we pick black? The black if we pick white? It is thrown away...the white or black that might appear next to it to provide the high contrast you want is different data, though its color value of white would be the same.
The problem is that because you are proposing contrast as the whole of "detail", you are contradicting your premise by actually deliberately dropping detail present in the texture. This is gross error introduction.
What should instead be done is to produce 50% black and 50% white (grey). However...you maintain that contrast was lost, so supersampling resulting in this did not add detail. I maintain that the contrast cannot be expressed at the same time you represent the original texture in the reduced resolution....you lost the contrast by the reduction in resolution, not supersampling. The contrast you are producing by your approach is artificial...it is not detail, it is not the "particulars" of the texture, it is error.
However, the grey contains the data from both the white and black...their luma and chroma, and their positional relevance for the pixel being output. Just because the same shade of grey somewhere else might be insignificant doesn't matter...this particular grey is a representation of the information that should be there, and more information than black or white would have been in its place.
If the luma and chroma and position of the white and black are not detail, what are they?
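Since this keeps coming up, here is a minimal sketch (plain Python/NumPy, all names my own) of the two downsampling strategies being argued over, applied to that repeated 2x2 block:

import numpy as np

# The repeating 2x2 block from the diagram above (0 = white, 1 = black):
#   _x
#   x_
tile = np.array([[0.0, 1.0],
                 [1.0, 0.0]])
texture = np.tile(tile, (4, 4))  # an 8x8 texture built from that block

def point_sample(tex, factor):
    # Keep one texel per output pixel: maximum contrast, but the
    # remaining texels are simply thrown away.
    return tex[::factor, ::factor]

def box_downsample(tex, factor):
    # Average every factor-by-factor block: the supersampled downfilter.
    h, w = tex.shape
    return tex.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

print(point_sample(texture, 2))    # all 0.0 (white): every black texel is discarded
print(box_downsample(texture, 2))  # all 0.5 (grey): both texels represented in each pixel

The point-sampled result keeps the contrast but discards half the texture's data; the averaged result loses the contrast but carries information from every texel. That is the disagreement in a nutshell.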

You seem to be ignoring the idea of what the texture is trying to portray...a texture isn't just a bunch of colors and contrast, it is the position of those colors and contrasts as well. Even when supersampling isn't introducing new colors in screen rendering (does your analysis recognize that part?), it is possible that it will still introduce "positional data" (the term I'm making up, don't know of a better name yet).

My analysis is based on what actually happens. I've cited numerous examples of circumstances that reduce detail.
No, your examples focus on contrast.

Anyways, there are 2 discussions...one where you are maintaining super sampling can't introduce texture detail, and one where you are disagreeing with my "positional data" proposition. I'll cover the latter next.
Color positioning is only properly done on an anisotropic basis; other than that, you are simply amplifying errors by taking an isotropic sampling pattern and then running it over another isotropic sampling pattern. You are compounding the problem in terms of positional data.

Another example:

Does a 1600x1200 image have more detail in each group of 2x2 pixels than a 800x600 image has in each one? Yes. Does it have more error? If your criteria is the point sample at the very center of the 800x600 pixel, then yes. If your criteria is the representative information of the screen covered by that pixel, then no. If you supersampled those pixels and represented the color in a 800x600 pixel, both the data and the error (depending on your evaluation) is included.

If the entire reason you have a point sample in the center of the 800x600 pixel is to be representative of the area covered by the pixel (which is true for screen rendering), then you would be missing the point by focusing on the "error" of deviation from that sample introduced by using the samples representative of the 4 1600x1200 pixels.

Are you comparing SSAA to a higher resolution, rather than to the image of the same resolution without SSAA applied?

No, there is no need to.

So, then you are comparing SSAA to the image at the same resolution without SSAA and saying texture detail can't be introduced?

No I don't need a different line of reasoning, since all supersampling is not trilinear filtering.

It is simply a redundant isotropic filter; SSAA just adds yet another redundant inaccurate isotropic filter on top of it.

You are fixated on error introduction inherent in isotropic sampling, yet have no problem with error introduction from what you've proposed?

Mip map levels are additional detail, but if the sampling and mip map selection is introducing error, by sampling them more, you might be sampling more error instead of more detail.

Outside of staring at a wall, isotropic filtering is introducing error anyway by your positional data standard.

The problem with isotropic bilinear texture filtering is failing to sample sufficiently to the resolution of the screen output due to sample pattern determination...but we are talking about supersampling beyond the resolution of the screen. They are not the same thing, despite your attempt to equate them by describing them both as isotropic.

No, AF is the better way to increase the accuracy and detail of textures, not the only way. Are you now saying supersampling doesn't increase accuracy?

In the particular instance of textures no it doesn't. It does in terms of image integrity(pixel popping most notably) and in terms of edges in particular.

You're equating the shortcomings of isotropic texture filtering with screen supersampling. You still haven't established how that makes sense.

Why are you making this "trilinear OR AF", and "SSAA OR AF"?

MSAA and AF are overall by far superior in terms of AA because they are based on what the image is actually supposed to be. SSAA and trilinear(by default) are isotropic and are inaccurate, SSAA in particular is a 2D filter.

You're ignoring the factor of the resolution for the sampling. Bilinear isotropic texture filtering fails because the resolution of sampling is below that of the representation relative to the screen. SSAA does not have that problem.

Isotropic sampling patterns work because the representation is isotropic...the screen. When the representation is of something 3 dimensional, they are inferior to anisotropic sampling (of sufficient degree). However, they are still better than no sampling of extra detail at all.

Not when you are sampling data in a manner that is contrary to what the image is supposed to portray. If you were trying to take a picture of the Mona Lisa, would you do so with the camera 18" off the wall it is hanging on, from ten feet away? If you took four of those pictures and blended them together, do you think it would look better than one taken from head on, standing a few feet back?

The fallacy of your example is that the picture is supposed to be of the Mona Lisa, 18" off the wall, 10 feet away. That picture is the screen. What you are pointing out is that orienting the screen like that is not the best way to view the Mona Lisa in full detail, but viewing the Mona Lisa in full detail wasn't the point; orienting the screen was.

For detail introduction, anisotropic > super sampling for a texture oriented away from the plane of the screen. However, anisotropic + no sampling at all < anisotropic + super sampling. Why? Because the supersampling is sampling extra anisotropically sampled data.

For the last case you listed, you are performing an isotropic filter on top of the anisotropic thus reducing the detail of the texture.

Reducing detail? How? You are not answering that, you are providing examples that don't seem to hold together well, and proposing that contrast and detail are synonymous, and therefore aliasing is also synonymous with detail.
 
Three comments and a question, all to Ben:

2048xAF or not, you still need mipmaps for a correct image.

Look at a distant checkerboard at an angle, with a perfect AF, you still get gray pixels. (How would you get anything else?)

To be consistent with the idea that a correctly rendered checkerboard at any distance/size never should show any gray pixels, you also need to say that NO AA or AF should be done.

In what way should z-value be used in AA?
 
Humus-

Since I know how much you like all the quoting and replying, I'll get to your post first to avoid you having to read through the whole thing ;) Your eyes will, at some point, blend the colors together. I sit 3' away from my monitor, my eyes don't have issues with it at that distance. The absolute correct image would be infinite resolution. Since that is not available, we have hacks in place to deal with it. When you look at something, you see it based on the angle it is at. Details don't get removed because you are looking at it on an angle.

Joe-

Decrease detail relative to what? I'm saying he meant relative to not having mip-mapping and filtering.

That's what I assumed he meant.

Anisotropic filtering and "full scene" AA techniques are complementary. I'm not really sure at this point what your main argument is.

They can be used in conjunction. My main point in this discussion is that SSAA reduces detail and sharpness in textures. Although I would without a doubt rather have AF over SS.

Sure it does. Every pixel in a (for example) 4x (2x2) supersampled image (thus every sub-sample of the resultant pixel) is calculated using x,y, and z.

Rendering at a higher resolution uses z, the AA filter is a 2D operation. Without AF, you aren't factoring z in your sampling of the texture giving you improper weighting in relation to how the image should look.

The result of applying SSAA is not any less accurate than not applying SS. (Apply SS to a point sample, bi-linear, tri-linear, or aniso filtered texture, and it's not less accurate than NOT applying SSAA to the texture.) Do you agree with that?

In terms of accuracy I can see your point, but I'm talking about detail.

I don't see anyone arguing that aniso doesn't provide more accurate results than supersampling AA on top of trilinear. It does appear that you are arguing that applying SSAA on top of filtered textures can cause a lack of accuracy. Please clarify.

Lack of detail is what I'm saying. This discussion is going very much like the last time we had it on the boards. Pretty much it was myself and one or two other people saying the same thing with almost everyone else in disagreement. Then Rev posted his article on the Pulpit about needing a LOD bias slider that allowed for adjustment beyond that which the increased resolution would use for the V5. Then 3dfx agreed and released the LOD bias adjuster in the drivers.

I've given examples of artifacts that SSAA can introduce; no one seems to have touched the sphere example yet.

Demalion-

Ben, it is possible to hold a conversation without adding in gratuitous condescension, FYI.

When did you figure that out? I haven't seen you demonstrate it yet, but I'll keep my eyes open and watch for it ;)

No, for your proposals, the intended image data does not matter

The intended image is everything, the only thing. All of the examples I have given have been based on the intended image.

Again, the contrast isn't the only data. That is why what you are proposing is equating "detail" to "aliasing" when they are in actuality capable of diverging.

Increasing contrast increases both as long as it is what is intended, as decreasing reduces both. You take two setups all else equal with one having twice the contrast that the other does. You will notice more detail and more aliasing with the higher contrast image.

You're not making sense...the context of comparison isn't the original texture, it is the sampling of that texture that would have happened without supersampling, compared to the same thing with supersampling.

It does make sense. If you take a texture that is off at an angle and compare the sampling pattern for an isotropic filtering implementation, you are sampling a wider area than you would be without it. The samples that are taken for a given pixel could be pushed over a few pixels' worth compared with the base texture. If both levels of filtering were weighted based on the angle that the texture takes up in 3D space to properly adjust, then this wouldn't be an issue. Running anisotropic, you weight the first portion correctly, then improperly weight the values for the actual AA.

You seem to find it consistent to say that a reduced resolution presentation of a higher resolution texture is better when it is not supersampled from a higher resolution than when it is, and that more samples is not more detail.

More samples taken the proper way is more detail. More samples taken with a disregard for how the texture should be sampled in no way assures you of having more detail, simply more data.

This is a 2x2 block in the texture, repeated throughout. We need to render it at 1/4 its native resolution. What you propose is that to maintain contrast, we sample black or white and use that. What happens to the white that was there if we pick black? The black if we pick white? It is thrown away...the white or black that might appear next to it to provide the high contrast you want is different data, though its color value of white would be the same.

In that example some of the pixels are accurate at least. If they were all gray, none of them would be.

I maintain that the contrast cannot be expressed at the same time you represent the original texture in the reduced resolution....

I agree if you are speaking in terms of 100% accuracy, there is no chance of that happening. If you blend all the colors, you have no pixels based on what the image is supposed to look like.

If the luma and chroma and position of the white and black are not detail, what are they?

Summation. If I listen to the closing arguments in a court case and hear nothing else, do you think I have the details? If you were to take an image that was, on a sub-pixel basis, alternating black and white and put it through a sampling filter, you end up with gray. That doesn't give you any detail at all; it simply gives you a summary of what the pixel space contains as it relates to the sampling pattern. There isn't a perfect solution around this outside of infinite res, but just saying to blur it ASAP is far from the best we can do. By using summation there is always the possibility that there will be no detail of what is actually inside the sampling area.

Does a 1600x1200 image have more detail in each group of 2x2 pixels than a 800x600 image has in each one? Yes. Does it have more error? If your criteria is the point sample at the very center of the 800x600 pixel, then yes.

And in that exact instance, running 3200x2400 you would have four pixels correct running bilinear filtering, assuming you are talking about perfect alignment.

If you supersampled those pixels and represented the color in a 800x600 pixel, both the data and the error (depending on your evaluation) is included.

A summation of what is in the area. How accurate it would be depends on the angle.

If the entire reason you have a point sample in the center of the 800x600 pixel is to be representative of the area covered by the pixel (which is true for screen rendering), then you would be missing the point by focusing on the "error" of deviation from that sample introduced by using the samples representative of the 4 1600x1200 pixels.

If it is aligned properly you can easily run bilinear or trilinear; the results for that pixel would be weighted over four pixels in the 1600x1200 example, so you would lose detail in that particular instance. Although, as I mentioned previously, running 3200x2400 would have the accuracy displayed quite nicely.

So, then you are comparing SSAA to the image at the same resolution without SSAA and saying texture detail can't be introduced?

I'm saying that detail is reduced. Give an example of where more detail, not just data, is added.

You are fixated on error introduction inherent in isotropic sampling, yet have no problem with error introduction from what you've proposed?

What I've proposed is better in that aspect, it isn't perfect by any stretch of the imagination.

The problem with isotropic bilinear texture filtering is failing to sample sufficiently to the resolution of the screen output due to sample pattern determination...but we are talking about supersampling beyond the resolution of the screen. They are not the same thing, despite your attempt to equate them by describing them both as isotropic.

Bilinear filtering is sub-pixel in accuracy, so I'm not sure what exactly you are implying on that front. SSAA is just as bad as bilinear filtering in terms of its isotropic nature, worse in some instances (which is why it creates haloing, although it is a necessary evil to eliminate edge aliasing). No matter what resolution you sample at, unless you are sampling the source texture based on the angle at which it should be sampled, you are creating a very imbalanced image in relation to how the human eye works. We have full depth perception; SSAA does not take that into account.

The fallacy of your example is that the picture is supposed to be of the Mona Lisa, 18" off the wall, 10 feet away. That picture is the screen. What you are pointing out is that orienting the screen like that is not the best way to view the Mona Lisa in full detail, but viewing the Mona Lisa in full detail wasn't the point; orienting the screen was.

Exactly. With isotropic filtering you are creating an image like you are looking at it from dead ahead no matter what. If you want a picture from an angle, you take it from an angle.

Reducing detail? How? You are not answering that, you are providing examples that don't seem to hold together well, and proposing that contrast and detail are synonymous, and therefore aliasing is also synonymous with detail.

If there is more contrast, there is more detail. If someone writes a 1000 word post with one word repeated over and over you can summarize it in one word with accuracy. If the post contains 1,000 different words then you can't. My assumption is that when talking about the possibility of losing detail, we also assume that there is detail there to lose. If we assume there is no detail, which is the only way summation is ideal, then my points would fall apart rather quickly.

Basic-

2048xAF or not, you still need mipmaps for a correct image.

Look at a distant checkerboard at an angle, with a perfect AF, you still get gray pixels. (How would you get anything else?)

Those two contradict each other. I realize that there is no perfect solution, SSAA is regressive however.

To be consistent with the idea that a correctly rendered checkerboard at any distance/size never should show any gray pixels, you also need to say that NO AA or AF should be done.

Or the alternative and best solution (although impossible): infinite resolution.

In what way should z-value be used in AA?

MSAA+AF. Both z based, and excluding old games with their alpha texture issues, vastly superior to SSAA.
 
BenSkywalker said:
Since I know how much you like all the quoting and replying, I'll get to your post first to avoid you having to read through the whole thing ;) Your eyes will, at some point, blend the colors together. I sit 3' away from my monitor, my eyes don't have issues with it at that distance. The absolute correct image would be infinite resolution. Since that is not available, we have hacks in place to deal with it. When you look at something, you see it based on the angle it is at. Details don't get removed because you are looking at it on an angle.

Um... the distance they've been talking about is the simulated distance within the game. Or do you think an object 10 miles away IN GAME SPACE should be perfectly defined, just because your head is 3' from your monitor?
 
BenSkywalker said:
They can be used in conjunction. My main point in this discussion is that SSAA reduces detail and sharpness in textures. Although I would without a doubt rather have AF over SS.
No, it doesn't. At the same resolution, SSAA (with proper LOD adjustment) will always improve texture quality.

If you're attempting to compare SSAA to increased resolution without AA, then that's a very outdated argument. Today's hardware is more than powerful enough for nearly any game to run at the highest resolutions available with at least 2x SSAA.

Additionally, AA and increased resolution tackle different issues, most particularly with sampling patterns that are not ordered-grid.
 
Tag-

Um... the distance they've been talking about is the simulated distance within the game. Or do you think an object 10 miles away IN GAME SPACE should be perfectly defined, just because your head is 3' from your monitor?

The point was in comparison to how the human eye works.

Chalnoth-

No, it doesn't. At the same resolution, SSAA (with proper LOD adjustment) will always improve texture quality.

What is your definition of proper LOD adjustment? If you set it to -6 running 4x then obviously you can increase detail.

If you're attempting to compare SSAA to increased resolution without AA, then that's a very outdated argument.

That was the question posed in this thread. I didn't start the thread nor determine the conversation, that was someone else :)

Additionally, AA and increased resolution tackle different issues, most particularly with sampling patterns that are not ordered-grid.

:?: Huh? The only thing I can deduce by this comment is that you are limiting your comments to the tiny fraction of DX7 boards on the market that support RGSSAA. If not, I'm not understanding what you are saying.
 
BenSkywalker said:
No, it doesn't. At the same resolution, SSAA (with proper LOD adjustment) will always improve texture quality.
What is your definition of proper LOD adjustment? If you set it to -6 running 4x then obviously you can increase detail.
Make the LOD more aggressive such that the maximum amount of aliasing is the same as with no AA. This is incredibly simple for ordered-grid SSAA: just use the LOD for the higher resolution.
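To put a rough number on "use the LOD for the higher resolution": an s x s ordered grid renders at s times the linear resolution, and derivative-based mip selection shifts by log2 of the resolution ratio. A tiny sketch in Python (function name mine; this is just the standard derivative-based selection arithmetic, not any vendor's driver logic):

from math import log2

def ogssaa_lod_shift(samples_per_pixel):
    # An s x s ordered grid renders at s times the linear resolution, so
    # mip selection lands log2(s) levels more detailed than at the base
    # resolution. Returning the shift as a (negative) LOD bias.
    s = samples_per_pixel ** 0.5
    return -log2(s)

print(ogssaa_lod_shift(4))   # -1.0: 4x OGSSAA samples one mip level sharper
print(ogssaa_lod_shift(16))  # -2.0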

And if you seriously believe that supersampling makes textures more blurry, you have certainly had your head in the sand for quite some time. The only video card generation that made textures more blurry (by default) with supersampling AA enabled was the Voodoo4/5 series. Every other video card, including the GeForce, GeForce2, Radeon 7500 and Radeon 8500, has shown a marked increase in texture clarity with supersampling enabled (as do the GeForce3/4/FX with their partial-supersampling modes).

All you need do is bother to look at a few reviews over the years of any of these video cards. The results are clear.

:?: Huh? The only thing I can deduce by this comment is that you are limiting your comments to the tiny fraction of DX7 boards on the market that support RGSSAA. If not, I'm not understanding what you are saying.
The comment still holds for ordered-grid AA, but it's more effective for those cards that don't use ordered-grid techniques. In any situation, AA helps to break up ordered patterns that are very ugly to the eye. Unless you are running at such high resolution that the display is blurry (i.e. either the monitor, monitor cable, or video card can't handle the resolution), "jaggies" will always be visible, and will always be very ugly for specific game situations. For these situations where edge aliasing is at its worst, even the worst form of edge AA (OGSSAA - worst when it comes to edge AA as far as performance/image quality is concerned) will be preferable to the equivalent performance at higher resolution without AA.
 
Make the LOD more aggressive such that the maximum amount of aliasing is the same as with no AA. This is incredibly simple for ordered-grid SSAA: just use the LOD for the higher resolution.

Using the default LOD for back buffer resolution in an OGSSAA implementation clearly leaves textures with a reduction in detail.

And if you seriously believe that supersampling makes textures more blurry, you have certainly had your head in the sand for quite some time. The only video card generation that made textures more blurry (by default) with supersampling AA enabled was the Voodoo4/5 series. Every other video card, including the GeForce, GeForce2, Radeon 7500 and Radeon 8500, has shown a marked increase in texture clarity with supersampling enabled (as do the GeForce3/4/FX with their partial-supersampling modes).

No, I'm not talking just about the new kids on the block in terms of SSAA, nor am I limiting it to hardware rasterizers either, for that matter. I worked with 3D viz for several years, including years prior to the consumer-level hardware implementations. As far as the boards you list, most of the hardware rasterization examples I used when looking at blurring textures were on a GeForce DDR, although I also used a GF2 Pro and a Kyro2 to demonstrate the problem. The xS modes on the NV2X boards also very clearly have issues with blurring textures; it is still useful to have for HL-powered games though.

All you need do is bother to look at a few reviews over the years of any of these video cards. The results are clear.

The same people who say the R300 core boards have great AF :rolleyes: I wouldn't trust most reviewers to comment on a billboard's IQ from ten feet away. They miss texture aliasing on the R300 core boards, along with out-of-place filtering on textures that are blurred considerably vs the textures adjacent to them. Hell, Anand couldn't tell the difference between bilinear and trilinear when running AF on the ATi boards. On the nV side of things, SeriousSam requires a modification to the cfg files running under GF4s to get the game to run the proper Z depth. There is constant Z fighting in the game without modifying the configuration file. These are major rendering artifacts that reviewers completely miss; I'm supposed to trust them on analyzing per-pixel output? Another example of the great methods some of these reviewers use is attempting to demonstrate AF quality by zooming in on a particular area of the screen, blowing it up, and judging AF quality by that without looking at the entirety of what it is supposed to be doing.

The comment still holds for ordered-grid AA, but it's more effective for those cards that don't use ordered-grid techniques.

Effectively that comment is limited to the V5.

In any situation, AA helps to break up ordered patterns that are very ugly to the eye.

By running an ordered pattern of their own for the most part. If we were talking about stochastic AA I could see your point here, but I don't know of too many people who are running that in hardware for games.

Unless you are running at such high resolution that the display is blurry (i.e. either the monitor, monitor cable, or video card can't handle the resolution), "jaggies" will always be visible

Well, around 8,000x8,000 res should clear it up for most people. Of course, your point about blurring taking care of aliasing is something I agree quite strongly with. :)
 
Not sure that it will help to have one more voice chime in, but...

Ben, I think that you're not fully considering the possible outcomes from combining additional samples. You seem to be placing a lot of emphasis on SSAA reducing texture detail by reducing contrast in the final image, without giving SSAA any credit for the times that it *increases* the apparent contrast.

The manner in which SSAA alters the result of a textured surface is entirely dependent on the nature of the information exposed by the improved sample coverage of the pixel-area. The additional samples are just as likely to reveal that a particular color was being under-represented (relative to non-SSAA rendering) within the pixel area as it is to reveal that a particular color was being over-represented.

In the black-and-white checkerboard texture example, a pixel with a texture value that had resolved via bilinear filtering to 50% white and 50% black without SSAA could very easily turn out to be 18.75% black and 81.25% white when 4x SSAA is used, with the resulting visual effect being a faster (higher contrast) transition between white and black in the final image.
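For the curious, those percentages fall straight out of averaging four bilinear subsample results. A toy sketch with illustrative per-subsample black fractions (my own numbers, not taken from any particular hardware):

# Each 4x SSAA subsample is itself a bilinear blend of black (1.0) and
# white (0.0) texels; the resolve then averages the four results.
subsamples = [0.00, 0.25, 0.25, 0.25]  # illustrative bilinear results

black = sum(subsamples) / len(subsamples)
print(black, 1.0 - black)  # 0.1875 0.8125 -> 18.75% black, 81.25% white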

Since anisotropic filtering only increases the number of texture samples along a particular axis of anisotropy, there is no assurance that anisotropic filtering would have generated the same higher-contrast texture value for the pixel in question. It's possible, for instance, that the axis of anisotropy falls along the line of transition between black and white in the texture, such that all the extra samples add an equal amount of black and white texels, retaining a 50% mix.
 
The manner in which SSAA alters the result of a textured surface is entirely dependent on the nature of the information exposed by the improved sample coverage of the pixel-area. The additional samples are just as likely to reveal that a particular color was being under-represented (relative to non-SSAA rendering) within the pixel area as it is to reveal that a particular color was being over-represented.

SSAA makes no attempt to sample the image in a manner that would enable it to offer proper representation. If it did end up doing so, it would be by blind luck. Any isotropic filtering method, unless applied to a wall you are staring at head on, is going to weight the pixel values improperly in relation to object space.

In the black-and-white checkerboard texture example, a pixel with a texture value that had resolved via bilinear filtering to 50% white and 50% black without SSAA could very easily turn out to be 18.75% black and 81.25% white when 4x SSAA is used, with the resulting visual effect being a faster (higher contrast) transition between white and black in the final image.

That is true, but you would still have gray pixels, although differing shades of gray would be an improvement.

Since anisotropic filtering only increases the number of texture samples along a particular axis of anisotropy, there is no assurance that anisotropic filtering would have generated the same higher-contrast texture value for the pixel in question.

There is no assurance, but there is a significantly greater probability. AF at least makes an attempt to sample in object space instead of using a straight screen space down filter with equal weighting given solely based on the x y coordinates.
 
madshi said:
Of course AF is perspective correct and SSAA is not.
What on earth gave you that idea? The sum of the sub samples will give a closer approximation of the correct sampling region of the texture - this takes into account the perspective transformation.
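A simplified one-dimensional sketch of that point (my own construction, not anyone's actual rasterizer): every subsample gets its own perspective-correct interpolation and divide before the flat 2D downsample ever runs, so the set of subsamples covers the perspective-transformed footprint of the pixel.

def perspective_u(u0, u1, w0, w1, t):
    # Perspective-correct interpolation of a texture coordinate across a
    # screen-space span: interpolate u/w and 1/w linearly in screen
    # space, then divide.
    inv_w = (1 - t) / w0 + t / w1
    u_over_w = (1 - t) * (u0 / w0) + t * (u1 / w1)
    return u_over_w / inv_w

# Four horizontal subsample positions inside one pixel of a span whose
# endpoints differ strongly in depth (w0 = 1 near, w1 = 10 far):
for t in (0.40, 0.45, 0.50, 0.55):
    print(round(perspective_u(0.0, 1.0, 1.0, 10.0, t), 4))
# 0.0625, 0.0756, 0.0909, 0.1089 -- unevenly spaced in texture space,
# because perspective is baked into each subsample individually.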
 
MfA said:
Watching your monitor would be a real bitch otherwise.
Perhaps Ben's eyes work in a different way to everyone else's? Perhaps his rods and cones are of infinitesimal area with a pin-hole lens? :) (Actually, I think there are theories that some eyes evolved via a pin-hole lens.)

Seriously though, for anyone still interested, IIRC the lens of the human eye acts as an (analogue) low-pass filter. If it didn't, the limited resolution of the sensory cells would result in us seeing aliasing. (Actually this only applies to the centre regions of the eye where the cells are densely packed - in the outer regions, the human visual system relies on jittered placement of the cells so that aliasing is converted into high frequency noise).

As for all the arguments on "blur" filters, these are probably really only blurring if you are only feeding in about the same number of pixels as you expect to get out. I won't go into details as Tagrineth has already complained (in another thread) of me not using English :)
 
BenSkywalker said:
Using the default LOD for back buffer resolution in an OGSSAA implementation clearly leaves textures with a reduction in detail.
Clearly? For my eyes it's a substantial increase in detail.
 
Ben, trading jibes is not something I find entertaining enough to motivate me to keep repeating concepts that you seem unwilling to listen to and have now failed to address several times running.

On a related note: thinking someone is wrong is not condescension. But going further and saying "oh, you were serious with those examples?" while failing to address several such examples, is. That is my opinion.

For brevity, I'll restrict my repetition of my statements to address your Mona Lisa example for now:
Your isotropic filtering stipulation is what is wrong with the sampling of bilinear isotropic filtering...supersampling AA is sampling isotropically from the camera's viewpoint, not from looking at the painting/texture dead ahead. The perspective of the isotropic filtering is the problem with bilinear filtering, not that it is isotropic filtering...anisotropic filtering corrects the perspective.

The screen is what is being sampled "dead ahead" by supersampling AA, not the texture itself...the screen samples represent perspective oriented samples of the texture already, and you seem dedicated to proposing that it is equivalent to bilinear sampling regardless of that.

It is possible to sample the textures better than isotropically...sampling 4 times "n" representative samples anisotropically would be better, but sampling 4 isotropically representative samples of "n" anisotropic samples is not worse than just 1 representative sample of "n" anisotropic samples from the perspective of the screen. It is not automatically worse because bilinear texture filtering is also isotropic filtering...the problem with bilinear filtering is the selection of the samples, and anisotropic filtering fixes that. It fixes it for SSAA too...SSAA doesn't cancel it out.

Bilinear filtering is dropping data when the texture is at an angle away from the screen, because it is sampling less data than the screen resolution. Supersampling AA is not dropping data, because it is sampling more data.

Instead of trying to illustrate why this is not the case, your argument seems to consist of stating that all isotropic filtering is bad, with support that still does not appear to make sense or to recognize this observation.

We can't progress with a fundamental disagreement on that point, and I don't think I've failed to provide sufficient illustration of why I disagree.

...

I've read your input on my positional data proposition, and find it uninformative due to the above issue. On that note, does anyone else have input specifically on the "positional data" part of my proposition? Some comments others have stated seem to agree with it, but it is something that is still jelling for me. I'd be interested in some other viewpoints on whether there are contrasts in your thinking with what I've proposed in association with it, while we're holding this discussion.
 
BenSkywalker said:
Basic-

2048xAF or not, you still need mipmaps for a correct image.

Look at a distant checkerboard at an angle, with a perfect AF, you still get gray pixels. (How would you get anything else?)

Those two contradict each other. I realize that there is no perfect solution, SSAA is regressive however.

Nope, no contradiction. I'll elaborate.

It seems as though you think anisotropy is the only reason to choose a mipmap other than miplevel 0 when doing isotropic filtering. It isn't.

The footprint of a pixel onto a texture can be anisotropic. Two important parameters of this footprint are the lengths of its short and long axes. To get the best IQ, you want the final pixel to get its color from this whole area, and nothing more. If you leave out a part of the footprint, you'll get aliasing. And if you blend in parts outside the footprint, you'll get blur.

Since aliasing textures removes more information than blur, the standard way is to use the long axis to determine the mipmap level when doing bi-/tri-linear filtering. You get blur, but no aliasing.

When doing anisotropic filtering, you'll have to look at both axes. The short axis determines the mipmap level, and the long/short ratio is the needed level of anisotropy. The anisotropic filtering is typically done by doing bi-/tri-linear samples along the long axis, number of samples = level of anisotropy.
But if the needed level of anisotropy is higher than the currently enabled, you need a different mipmap selection rule:
Used level of anisotropy = max enabled.
Mipmap level is determined by long axis divided by max anisotropy, instead of the short axis.

So there are two reasons to use mipmaps, and the first one can happen even with infinite AF.
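Those two selection rules reduce to a few lines; a sketch assuming the footprint axis lengths are measured in texels at mip 0 (names and the clamp to mip 0 are mine, not any specific hardware's implementation):

from math import log2

def select_mip_and_aniso(long_axis, short_axis, max_aniso):
    # Footprint axis lengths are in texels at mip 0.
    needed = long_axis / short_axis
    if needed <= max_aniso:
        # Enough anisotropy available: the short axis picks the mip.
        return max(0.0, log2(short_axis)), needed
    # Clamped: derive the mip from long_axis / max_aniso instead,
    # trading sharpness for freedom from aliasing.
    return max(0.0, log2(long_axis / max_aniso)), max_aniso

print(select_mip_and_aniso(32, 2, 16))  # (1.0, 16.0): fully anisotropic
print(select_mip_and_aniso(64, 2, 16))  # (2.0, 16): clamped to a blurrier mip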

Since anisotropic filtering is done by blending a lot of samples along a line, and this line can cross a lot of black and white squares, the result will be gray.

So there's no contradiction between those two statements.

BenSkywalker said:
To be consistent with the idea that a correctly rendered checkerboard at any distance/size never should show any gray pixels, you also need to say that NO AA or AF should be done.

Or the alternative and best solution(although impossible)- infinite resolution.

Well infinite resolution is of course best, that's not the question here. The question here is how to make the most out of a given final resolution. More specifically, can SSAA make textures look sharper (at the same resolution).

But it seems as though you tried to avoid the point there. The point was that your opinion against SSAA (that gray pixels are bad if the source only consists of black and white) could be applied against MSAA and AF too.

BenSkywalker said:
In what way should z-value be used in AA?

MSAA+AF. Both z based, and excluding old games with their alpha texture issues, vastly superior to SSAA.

Yes, MSAA uses z-values in its calculations to determine if subpixels are hidden (in the exact same way as SSAA, so no difference there).

In what way is AF z-based?
Are you referring to the "keystone correction"? That's not needed, since the effect is insignificant at reasonable resolutions.
Or do you simply associate textures needing anisotropic filtering with "z-based" since they often occur at surfaces with a high z-gradient?



So will SSAA give sharper textures?
Yes, and no.
SSAA will give sharper looking textures when the LOD is changed accordingly and when this means that a more detailed LOD is used.

It will not give sharper looking textures, and will even give slightly blurrier ones, if you don't use a more detailed LOD. (Either because the LOD is bad, like a default V5, or because there simply isn't any higher LOD to sample from.)


PS
What's your (Ben) opinion on madshi's fractals? Which image shows the most detail in, say, the top right corner?

PPS
If AA gives you "halos" with a color that doesn't fit in, then that's a sign that your gamma is incorrect.
 
BenSkywalker said:
Using the default LOD for back buffer resolution in an OGSSAA implementation clearly leaves textures with a reduction in detail.
Again, you're only considering FSAA vs. higher resolution. This is a moot argument with today's GPUs (though not many have any SS modes anymore, and for good reason).

The comment still holds for ordered-grid AA, but it's more effective for those cards that don't use ordered-grid techniques.

Effectively that comment is limited to the V5.
The Radeon 8500 also had a non-ordered-grid technique, though it was always fairly buggy.
 
Simon-

Perhaps Ben's eyes work in a different way to everyone else's?

I have depth perception; it would appear that most people here don't? My eyes do see things differently dependent on the angle I view them at.

What on earth gave you that idea? The sum of the sub samples will give a closer approximation of the correct sampling region of the texture - this takes into account the perspective transformation.

Are the samples weighted in relation to their Z depth? What calculations are being run on a SSAA filter to compensate for depth perception?

As for all the arguments on "blur" filters, these are probably really only blurring if you are only feeding in about the same number of pixels as you expect to get out. I won't go into details as Tagrineth has already complained (in another thread) of me not using English :)

This makes me curious, do you not consider Quincunx a blur filter?

Demalion-

If you look at something on an angle, does your eye weight the colors equally across the entire object? Or, looked at from a different angle: would it be possible to create something that could view an object at an angle in the real world and weight the colors the same across the entire object? If SSAA used, say, an arctangent trig calc and took the results to weight the sampling, then it would offer up better detail instead of summation. I can't think of a better way right now to do it, but that would certainly be a lot better than simply giving equal weighting to all the samples (it wouldn't be ideal). Not sure on this, but you may be able to even pull it off on current hardware, although I'm not sure what the speed of it would be.
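If I follow the suggestion, something like the following heavily hypothetical sketch is the shape of an angle-weighted resolve: the subsamples are blended with weights derived from the local depth slope through an arctangent, instead of equally. Purely illustrative of the idea; no shipping SSAA resolve works this way.

from math import atan, cos

def angle_weighted_resolve(samples, depths):
    # Hypothetical: use each subsample's local depth slope as a stand-in
    # for viewing angle, fold it through arctangent to get a bounded
    # angle, and weight steeply angled subsamples less.
    slopes = [abs(depths[i + 1] - depths[i]) for i in range(len(depths) - 1)]
    slopes.append(slopes[-1])  # pad so every sample has a slope
    weights = [cos(atan(s)) for s in slopes]
    return sum(v * w for v, w in zip(samples, weights)) / sum(weights)

print(angle_weighted_resolve([0.0, 1.0, 1.0, 0.0], [1.0, 1.2, 1.8, 2.9]))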
 