Could activation of SSAA improve texture quality?

Basic-

Since anisotropic filtering is done by blending a lot of samples along a line, and this line can cross a lot of black and white squares, the result will be gray.

So there's no contradiction between those two statements.

This is actually what I was referring to in terms of contradiction. Anisotropic will turn the pixels gray on its own. Running 2048x AF, your long axis won't be exceeding the anisotropy supported on current displays. I'm not seeing your first point: utilizing a different mip level isn't going to change your sampling positions, so how does this relate? Not trying to be obtuse here, I'm just not seeing the relation.

But it seems as if you tried to avoid the point there. The point was that your objection to SSAA (that gray pixels are bad if the source consists only of black and white) could be applied against MSAA and AF too.

Of course it can be used against it. The issue is how many gray pixels you will end up with versus non-AF SSAA.
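As a minimal sketch of the mechanism both posters agree on (Python; the checkerboard function and the sample line below are invented for illustration): averaging many point samples taken along a line across a black/white checkerboard tends toward mid-gray, whichever filter took the samples.

    # Averaging point samples taken along a line that crosses a
    # black/white checkerboard: the mean tends toward 0.5, i.e. gray.
    def checker(u, v):
        return 1.0 if (int(u) + int(v)) % 2 == 0 else 0.0

    samples = [checker(0.13 * i, 0.07 * i) for i in range(256)]
    print(sum(samples) / len(samples))  # roughly 0.5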

Are you referring to the "keystone correction"? That's not needed, since the effect is insignificant at reasonable resolutions.
Or do you simply associate textures needing anisotropic filtering with "z-based" since they often occur at surfaces with a high z-gradient?

You compare the axes of the texture; how is that determined, both in terms of initial orientation and then calculation?

SSAA will give sharper looking textures when the LOD is changed accordingly and when this means that a more detailed LOD is used.

It will not give sharper looking textures, and even slightly blurrier ones if you don't use a more detailed LOD. (Either because the LOD is bad, like a default V5, or because there simply isn't any higher LOD to sample from.)

If you adjust the LOD bias enough then yes.

PS
What's your (Ben) opinion on madshi's fractals. Which image show most details in, say, the top right corner?

First, are they updated? Comparing an 8-bit vs. a 24-bit downsampled image, there is no way the one will be close to the other. Then there is the issue of it being at the perfect angle for SSAA.

If AA gives you "halos" with a color that doesn't fit in, then that's a sign that your gamma is incorrect.

With red and green you get brown; how would gamma correction fix that? (Honest question, I've never tried it out with my R9500 Pro.)

Chalnoth-

Again, you're only considering FSAA vs. higher resolution. This is a moot argument with today's GPUs (though not many have any SS modes anymore, and for good reason).

I've checked it out on a GF4, although it's a bit hard to figure out how to compensate for the LOD changes on that (the xS modes); would 1024x768 run a comparable LOD bias to 1536x768..?

The Radeon 8500 also had a non-ordered-grid technique, though it was always fairly buggy.

Did it ever work right? I honestly forgot about it until you just mentioned it :oops:
 
BenSkywalker said:
Are the samples weighted in relation to their Z depth? What calculations are being run on an SSAA filter to compensate for depth perception?

There is no need; if something is closer to the camera it will cover a greater number of samples and use a higher resolution mipmap (which means that the texels from the original texture indirectly used for that sample, through filtering, give a larger contribution ... because fewer of them are used by the texture filter, while the resulting colour is still equally weighted with those from other samples further away).

Talking about LOD bias when talking about supersampling is wrong ... it is not about biasing LOD, it is about selecting the right LOD. If the hardware thinks for some reason that each of the subpixel samples is actually a sample at screen resolution, and therefore uses the wrong LOD and needs to be biased, that is just an implementation detail. Strictly speaking, for non-uniform sampling with a "low" number of samples, the general footprint of the subpixel sample should be taken into account not only for LOD, but also for orientation and anisotropy. Just dealing with LOD is probably good enough for a rotated grid though.
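As a sketch of that point (textbook LOD selection from the footprint, not any particular IHV's hardware; the derivative values below are invented): the mip level follows directly from the sample's footprint in texture space, so a subpixel sample with a half-size footprint simply selects the next finer level; nothing is being "biased".

    import math

    # Classic LOD selection: level = log2 of the footprint's extent in
    # texels.  du_dx etc. are texture-coordinate derivatives describing
    # THIS sample's footprint; subpixel samples have smaller ones.
    def select_lod(du_dx, dv_dx, du_dy, dv_dy):
        rho = max(math.hypot(du_dx, dv_dx), math.hypot(du_dy, dv_dy))
        return max(0.0, math.log2(rho))

    print(select_lod(4.0, 0.0, 0.0, 4.0))  # full-pixel footprint -> 2.0
    print(select_lod(2.0, 0.0, 0.0, 2.0))  # half-size footprint  -> 1.0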
 
BenSkywalker said:
Simon-

Perhaps Ben's eyes work in a different way to everyone else's?

I have depth perception; it would appear that most people here don't?
?!
My eyes do see things differently dependent on the angle I view them at.
What on earth gave you that idea? The sum of the sub samples will give a closer approximation of the correct sampling region of the texture - this takes into account the perspective transformation.

Are the samples weighted in relation to their Z depth? What calculations are being run on an SSAA filter to compensate for depth perception?
Do you mind defining what you mean by Depth Perception? I've been in the industry for *cough* years and I don't recall that term. Do you mean depth cueing/fog?

As for all the arguments on "blur" filters, these are probably really only blurring if you are only feeding in about the same number of pixels as you expect to get out. I won't go into details as Tagrineth has already complained (in another thread) of me not using English :)

This makes me curious, do you not consider Quincunx a blur filter?
Ignoring NV's use of multisampling (which I don't particularly like), the "Quincunx filter" is a Quincunx supersampling pattern (i.e. 5 on a die) combined with a Bartlett/tent filter***. The Bartlett filter is a valid way of downsampling the image and is better than a box filter but not as good, say, as a windowed sinc function. Note that it's only 2x supersampling so it's not going to work miracles!


(*** although perhaps I should check the weights they used again.)
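For what it's worth, the commonly quoted weights (Simon's footnote above applies; these are the numbers usually given for NV's filter, not verified here) make the downfilter look something like this in Python:

    # Quincunx downfilter sketch: one centre sample plus four corner
    # samples (shared with neighbouring pixels), tent-style weights.
    # The 1/2 centre and 1/8 corner weights are the commonly quoted
    # ones, not confirmed from NVIDIA documentation.
    def quincunx(centre, corners):
        assert len(corners) == 4
        return 0.5 * centre + sum(0.125 * c for c in corners)

    print(quincunx(1.0, [0.0, 0.0, 1.0, 1.0]))  # 0.75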
 
Do you mind defining what you mean by Depth Perception? I've been in the industry for *cough* years and I don't recall that term. Do you mean depth cueing/fog?

I said to compensate for depth perception; I didn't say an actual depth perception implementation. I already brought up one way in which it could work: use arctangent.

So you don't consider Quincunx a blur filter? I know what it does, and without a doubt consider it a blur.

Marco-

There is no need, if something is closer to the camera it will cover a greater number of samples and use a higher resolution mipmap

I'm talking per pixel, so the mip map section of the equation is out (ignoring pixels that fall within mip boundary areas, anyway). With the sampling utilized currently for SSAA, all samples are weighted equally within a given pixel; no compensation is implemented for the weighting each should have.

which means that the texels from the original texture indirectly used for that sample, through filtering, give a larger contribution ... because fewer of them are used by the texture filter, while the resulting colour is still equally weighted with those from other samples further away

That contribution is determined by screen space though, which isn't proper based on how our eyes, or light, work.

Talking about LOD bias when talking about supersampling is wrong ... it is not about biasing LOD, it is about selecting the right LOD.

And who does it 'right'? I know how it should be selected, but the IHVs all seem to have their own peculiarities when it comes to LOD selection. Judging by refrast I'd say nV is the closest, with ATi more aggressive, and 3dfx was always too conservative, so who does it right IYO? (I know this is off the conversation; I just want to know what your perspective is.) And who scales their LOD 'properly' for SSAA?
 
Vitall said:
If I turn on Super Sampling AA (lets say 8x), could it make textures look more sharp, more detailed. I'm not talking about transparent textures, just ordinary ones. Thank u very much for your reply!
The answer of Clinton's graphics expert advisor: "That depends..." :D

Traditional expert: Supersampling = supersamples, oversamples... Yes, the extra input data allows better and more accurate signal processing, and better quality is produced at a higher processing cost. Though if the pixel pipeline's dynamic range was not sufficient, the greater number of processing steps caused a loss of gradient colors through cumulative rounding errors, resulting in banding of gradient colors and more contrasty final pixels - which for some meant even more sharp, even more detailed.

Nouveau expert: Supersampling = the V5's RGSS... No, the input data in this case matched what the Traditional expert would call multisampling data, i.e. non-supersampled, non-oversampled data. For this Nouveau Supersampling case, the more samples, the more blurriness.

Example of superior Supersampling = Kyro.
Example of Supersampling with inferior dynamic range = GF2.
Example of Multisampling with more blurriness from more samples = GF3's 4X.
Example of Nouveau Supersampling with more blurriness from more samples = V5.

Other extra data sampling features cause extra variations in the resulting output: grid rotation, adaptive, sparse, randomized, temporal... They can be applied to both Supersampling and Multisampling.
 
Ben:

By showing that the two statements are right, you also show that there is no contradiction between them. (Because otherwise one of them would have been wrong.)
See if you can find anything wrong in either of the two descriptions of the two statements you said were contradicting each other.

But two comments anyway:

Anisotropic will "turn pixels gray"; I mentioned that because I got the impression that you thought it was a bad thing inherent in SSAA that wouldn't be there with MSAA+AF. (But in fact, when it occurs it is a good thing, because that's the best it can do. Both for AF and SSAA.)

You may need mipmapping even when you can do infinite AF (read my last post about why). When using a different mipmap in AF you don't necessarily change the sample positions (though you could throw out every second sample when using a lower res mip). The lower res mip is already filtered down from the base mip, so even though you're just sampling at points in a straight line in the center of the pixel footprint, the texture you're filtering already has values from "far away" in it.

If you would add, say, 2x2 OGSS on top of this (it's easiest to describe), the sub-samples could be at two positions along the footprint, and two positions along the short axis of the footprint. - Oh no, aren't we sampling too wide an area now? No, all the sub-samples are within the footprint, and with 2x2 OGSS you should use exactly one step higher res mipmap than with no AA. It's a matter of doing the downfiltering when generating the mipmap, or when filtering the sub-samples to the final image. The end result is that with the latter, you'll get a better fit between the texels from the base mip that contributed and the pixel footprint.

(This is related to what demalion calls "positional data". "Position information" that could be lost in mipmap generation, can be saved if part of the filtering is postponed to the subsample blending.)
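In sketch form (Python; this just restates the relation above and is not any specific driver's behaviour), an NxN ordered grid shrinks each subsample's footprint by N in both axes, so the selected mip level moves log2(N) steps finer:

    import math

    # NxN OGSS: each subsample covers 1/N of the pixel in each axis, so
    # the correct mip level is log2(N) steps finer than the no-AA level.
    def ogss_lod(no_aa_lod, grid_n):
        return no_aa_lod - math.log2(grid_n)

    print(ogss_lod(3.0, 2))  # 2x2 OGSS: level 3 -> level 2, one step finer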


About "z-based" sampling and LOD:
I'll refer to what MfA said. And then comment on what you said to him.

Let's say you take a lot of samples within a pixel (be it for AF or SSAA); how should these samples be weighted? - They should be weighted according to how much the colour from the (texture) area they represent will affect the pixel. How much will a texture area affect a pixel? - In proportion to the screenspace area it takes up. So screenspace supersampling is a method that doesn't need any compensation. You got it the wrong way around. It's when you do AF with samples placed equidistant in texture space that you need to compensate with different weights.
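A sketch of that weighting rule (Python, illustrative numbers only): each sample counts in proportion to the screen-space area it stands for, which for a regular supersampling grid collapses to a plain average.

    # Weight each sample by the screen-space area it represents.  For a
    # regular SSAA grid all areas are equal, so this is just the mean;
    # for AF taps placed equidistant in texture space the areas differ
    # and the weights must follow them.
    def weighted_pixel(colours, areas):
        return sum(c * a for c, a in zip(colours, areas)) / sum(areas)

    print(weighted_pixel([1.0, 0.0], [0.5, 0.5]))    # equal areas: 0.5
    print(weighted_pixel([1.0, 0.0], [0.75, 0.25]))  # unequal: 0.75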


madshi's fractals.
They don't need to be updated; they are already in their correct format.
Fractals like this (Mandelbrot?) are index images that have been colorized by a palette. In such cases, a paletted 8-bit image is an exact representation. Or see it as lossless compression: images with <=256 colors can be compressed losslessly with an 8-bit palette. The SSAAed image gets more colors and can't be compressed to 8 bit.
(You'd know if you read his reply.)

Fractals are mathematical functions with infinite resolution (except for calculation errors). There is no equivalent to mipmaps, and there's nothing that is "straight on". You could see it as a point sampled texture of "infinite" resolution. So you can't blame "perfect angle for SSAA".


Gamma correction:
Even after changing gamma, you'll still have a color that isn't red and isn't green. But with the right gamma, you'll have a color that fits in perfectly as "right between red and green", and you'll just get a smooth contour.

If you have the wrong gamma and AA a contour that is mainly an intensity change, you'll get kind of a staircase, as if the AA isn't working as it should. But since this is still an enhancement over no AA, it doesn't stick out that badly. (It smooths out the steps somewhat, but not optimally.)

Wrong gamma when AAing an edge that is mainly a hue change is more visible, since it will give colors that aren't straight between the two end points. It can also give changes in intensity, even if the two colours you're interpolating between have the same intensity.
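A sketch of what "right gamma" means in practice (Python, assuming a plain 2.2 power curve; real sRGB differs slightly): blend the coverage in linear light, then re-encode, and the 50/50 colour lands perceptually "right between" the two endpoints.

    # Gamma-correct edge blending: decode to linear light, average,
    # re-encode.  A plain 2.2 power curve is assumed for illustration.
    GAMMA = 2.2

    def blend(c0, c1, coverage):
        lin = (c0 ** GAMMA) * (1 - coverage) + (c1 ** GAMMA) * coverage
        return lin ** (1 / GAMMA)

    # 50/50 red/green edge, per channel (red = (1,0,0), green = (0,1,0)):
    print([blend(a, b, 0.5) for a, b in zip((1, 0, 0), (0, 1, 0))])
    # Each half-covered channel comes out ~0.73 rather than 0.5, so the
    # blend keeps its perceived intensity instead of looking dark.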
 
BenSkywalker said:
With the sampling utilized currently for SSAA, all samples are weighted equally within a given pixel; no compensation is implemented for the weighting each should have.
<snip>
That contribution is determined by screen space though, which isn't proper based on how our eyes, or light, work.

Let's say we have a nice white square 20 meters away, and another nice white square 25 meters away with its size corrected to match up exactly with the previous one in perspective ... let's further say we close one of our eyes. Let's further say that they look pretty much the same ...

Our retinas are 2D surfaces, the greater number of receptors a colour gets mapped to the greater its contribution to a perceived image. Replace receptors with samples.
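A back-of-the-envelope sketch of this (Python, pinhole projection with made-up focal length and numbers): the projection already scales a surface's sample count by 1/z², so the two matched squares receive the same number of samples and no extra z weighting is needed.

    # Under pinhole projection a square of side s at distance z covers
    # roughly (f*s/z)^2 pixels, so nearer surfaces automatically get
    # proportionally more samples -- the "z weighting" is built in.
    def covered_samples(side, z, focal=1000.0, samples_per_pixel=4):
        return (focal * side / z) ** 2 * samples_per_pixel

    print(covered_samples(1.0, 20.0))    # white square at 20 m
    print(covered_samples(1.25, 25.0))   # matched-in-perspective at 25 m
    # Both print 10000.0: perspective-matched squares, equal contribution.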

And who scales their LOD 'properly' for SSAA?

You are right, there is no correct way ... I should have phrased that differently.

What I meant to say is that when you say biasing, it shows an implicit assumption about the supersampling implementation which has no real basis. When you implement supersampling on hardware without specific support, by rendering at a higher resolution and resizing, you don't really need to do anything different about LOD selection at all, for instance.

My point was that it is about selecting the LOD, not biasing/scaling/whatevering it. The way you should do texture filtering for a sample purely depends on that sample's footprint (which is dependent on the sampling pattern) in texture space (after backward mapping). As far as the weighting of samples is concerned, if they have equal footprints in image space they should have equal contributions to the pixel colours.

I have quite often dug myself in very deep against strong opposition, I know how it goes ... but at some point you have to face that there is a good possibility that you are wrong, no matter how strongly you believe your reasoning is correct, given the overwhelming opposition. At least I would ... after a good while :) Consider it and fight your way through some cognitive dissonance for a bit. It is nigh impossible to understand from here how you are reaching your conclusions, so it is hard to reason with you constructively apart from stating the same arguments over and over.
 
Basic-

See if you can find anything wrong in either of the two descriptions of the two statements you said were contradicting each other.

Here is my issue, actually; apologies for not replying to this particular point in my last post-

Since aliasing textures removes more information than blur, the standard way is to use the long axis to determine the mipmap level when doing bi-/tri-linear filtering. You get blur, but no aliasing.

This is based on the removal and addition of data, and is not necessarily the same as definition. That comes back to summarization vs. detail.

The lower res mip is already filtered down from the base mip, so even though you're just sampling at points in a straight line in the center of the pixel footprint, the texture you're filtering already has values from "far away" in it.

If mip maps were generated on the fly, with proper weighting given to adjust the filtering based on the angle they are currently at in relation to the viewer, then they wouldn't be nearly as bad.

Let's say you take a lot of samples within a pixel (be it for AF or SSAA); how should these samples be weighted? - They should be weighted according to how much the colour from the (texture) area they represent will affect the pixel. How much will a texture area affect a pixel? - In proportion to the screenspace area it takes up.

We aren't determining the relation between the total texture data and what screen space it will occupy, however; we are only seeing the exact point where the samples are being taken from. Even then, the samples are taken based on x, y, ignoring the focus that should be in place based on how light and eyes work. Create a filter using arctangent based on the viewpoint and run a per-pixel check to see exactly how each sample should be weighted. Even that is less than ideal, as it doesn't take into account the differing reflective properties of differing colors in terms of light, but that is a bit outside of this discussion.

Fractals like this (Mandelbrot?) are index images that have been colorized by a palette. In such cases, a paletted 8-bit image is an exact representation. Or see it as lossless compression: images with <=256 colors can be compressed losslessly with an 8-bit palette. The SSAAed image gets more colors and can't be compressed to 8 bit.
(You'd know if you read his reply.)

What if I were to compare the Nature scene in Game4 of 3DMark2K3 running under FP32 without AA against it running 16-bit with SSAA; how do you think they would compare? Until he posts the AA'd image as 8-bit, the showing is invalid. We aren't talking about enabling FP32 with SSAA vs. 32-bit with AA. If this is a conversation worth having, then it should be held on a level playing field. I have not advocated making SSAA run in 8-bit color to compensate for its additional bandwidth while we allow non-AA to remain 32-bit.

Even after changeing gamma, you'll still have a color that isn't red, and isn't green. But with the right gamma, you'll have a color that fits in perfectly as "right between red and green" and you'll just get a smooth contour.

In other words you are stuck with a brown line running down the middle in the example I gave. From what I have read on how gamma correction is implemented, it would do nothing at all to help here, and you backed that up. You still have haloing issues (although they are reduced in the majority of instances). I think the tradeoff is worth it, as most of the time the artifacts aren't that bad; red and green are the worst case, and brown doesn't appear like either of them. Actually, yellow and purple do the same, but at least purple appears closer to brown than the others.
 
BenSkywalker said:
In other words you are stuck with a brown line running down the middle in the example I gave. ... Red and green are the worst case, and brown doesn't appear like either of them.

The thing is, Ben, a "brown line" is exactly the most accurate representation. Red is not more correct or accurate. Green is not more correct or accurate. It is not more accurate to display a brown pixel that "should be" 50% green and 50% red as 100% green or 100% red. If the resolution is not sufficient to "display the contrast" you seem to be looking for, then, simply put, the resolution is not sufficient.

In the end, your entire line of reasoning seems to be based on a false premise that "loss of contrast" is actually less correct, because it is subjectively (or perhaps intuitively) less correct to you.
 
Also, aliasing of edges inside textures is no more or less desirable than aliasing of polygon edges ... they are 100% equivalent, so to reason that brown should not be present is to reason that edges shouldn't be AA'd, and even further that all textures should be point sampled. Since none of that stands to reason, the original premise can't be correct.

Marco

Actually, now that I look at that part of your argument (I had chosen a part which looked at least slightly interesting to deal with, i.e. not patently obvious in its absurdity, and ignored the rest), it is hard not to consider the possibility that you are trolling.
 
Vitall said:
If I turn on Super Sampling AA (lets say 8x), could it make textures look more sharp, more detailed. I'm not talking about transparent textures, just ordinary ones. Thank u very much for your reply!

Yes, of course. Why else do you think everybody was asking for the possibility of using SSAA as an additional option in the R3xx drivers?
 
Marco-

lets further say we close one of our eyes.

So you are half blind? If that is the case then I guess you can forget most of my disagreement with you. The same goes for anyone else who has lost the use of one eye; I am not arguing that my discussion holds up when you remove parts of the human anatomy.

Our retinas are 2D surfaces

Placed on the front of our heads to facilitate depth perception, which is nature's way of evolving predators. If we had been herbivores throughout our evolution, our eyes would be on the sides of our heads and my point here would be negated :)

What I meant to say is that when you say biasing, it shows an implicit assumption about the supersampling implementation which has no real basis.

I tend to look at it versus refrast whenever possible. I think nVidia ends up the closest, but they aren't perfect either. So I tend to refer to IHVs' LOD selection as LOD bias.

The way you should do texture filtering for a sample purely depends on that sample's footprint (which is dependent on the sampling pattern) in texture space (after backward mapping). As far as the weighting of samples is concerned, if they have equal footprints in image space they should have equal contributions to the pixel colours.

Isotropic or anisotropic?

I have quite often dug myself in very deep against strong opposition, I know how it goes ... but at some point you have to face that there is a good possibility that you are wrong, no matter how strongly you believe your reasoning is correct,

We have had this discussion here previously, back in the early 2K timeframe IIRC. At that point I had plenty of screenshots and server space to back my assertions up when people were attempting to deny what SSAA did (haloing is one point I recall in particular that was denied; a simple wireframe screenshot using SSAA was all it took to end that).

It is nigh impossible to understand from here how you are reaching your conclusions, so it is hard to reason with you constructively apart from stating the same arguments over and over.

Working with 3D viz: having to stare at final output frames, observing how differing filtering implementations worked and what was wrong with them, and using trial and error to try to rectify them. It's easy to ignore small rendering errors when it doesn't mean you have to wait ten minutes to an hour to get a single frame back before you can start rendering out an animation. I heard over and over again, here and in many other places, how great the R300's AF implementation was; then I had a board for a month and found out it actually pretty much sucked (well, it was very fast, and better than no AF to be sure, so perhaps sucked is too strong a word). When it comes to IQ issues, I believe my own eyes far more than I do anyone else that I can think of.

I could launch into a critique of Doom3's lighting. Compared to what else is available in real time 3D it looks great, without a doubt. But it is still a joke comparatively.
 
MfA said:
(I had chosen a part which looked at least slightly interesting to deal with, i.e. not patently obvious in its absurdity, and ignored the rest)

On the flip side, I technically know just enough to only be able to deal with the "patently obvious in its absurdity", so I guess we complement each other nicely here. :p
 
Joe-

The thing is, Ben, a "brown line" is exactly the most accurate representation.

OK, major rendering artifacts are accurate by your standards; I can appreciate your viewpoint now. That particular example is a no-win demonstration (ignoring resolution); there is no good way to deal with it. I use it because the brown line ends up being a lot more noticeable than the aliasing it replaces.

Marco-

Also, aliasing of edges inside textures is no more or less desirable than aliasing of polygon edges ... they are 100% equivalent, so to reason that brown should not be present is to reason that edges shouldn't be AA'd, and even further that all textures should be point sampled. Since none of that stands to reason, the original premise can't be correct.

I've already covered this previously. Haloing sucks but overall it is worth the tradeoff.

Actually, now that I look at that part of your argument (I had chosen a part which looked at least slightly interesting to deal with, i.e. not patently obvious in its absurdity, and ignored the rest), it is hard not to consider the possibility that you are trolling.

I provide repeatable examples to demonstrate what I am talking about. I'm met with "8-bit vs. 32-bit color" and "close one eye", and I'm trolling? I have already stated how you could make the sampling in SSAA a much better implementation than what we have seen. I am met with a rehashing of the SOS about acceptable trade-offs. Last time we had this discussion, numerous people tried to deny that haloing happened with SSAA and said I wasn't listening; now we have gamma corrected AA to deal with the situation that didn't exist. Last time we had this discussion I went on a rant about textures being way too blurry on 3dfx hardware, and supposedly I didn't know what I was talking about; and then 3dfx came out with the LOD bias slider after Rev's article was posted. I've been in a very similar discussion previously, and it went a lot like this one.
 
BenSkywalker said:
Joe-

The thing is, Ben, a "brown line" is exactly the most accurate representation.

OK, major rendering artifacts are accurate by your standards; I can appreciate your viewpoint now.

Huh?

Where did that come from? The brown line is the most accurate. It's the best you can do. It is LESS of an "artifact" (artifact defined as non-accurate) than not doing any AA at all.

That particular example is a no-win demonstration (ignoring resolution); there is no good way to deal with it.

Anti-Aliasing is exactly the way to "deal" with the no-win situation that is not having enough resolution. This is not a question of some situation in which there IS a "win." It's a question of which solution is better: AA or NO AA.

I use it because the brown line ends up being a lot more noticeable than the aliasing it replaces.

So then (as per Marco's suggestion), why bother AAing at all? All it does is create a "bunch of brown lines" that are, according to you, less noticeable than contrasting jaggies.
 
Where did that come from? The brown line is the most accurate. It's the best you can do. It is LESS of an "artifact" (artifact defined as non-accurate) than not doing any AA at all.

The brown line is the most noticeable artifact you can end up with (hands down). On screen it doesn't look close to the green or the red, while at least the aliasing does. The brown line represents an average, not accuracy.

Anti-Aliasing is exactly the way to "deal" with the no-win situation that is not having enough resolution. This is not a question of some situation in which there IS a "win." It's a question of which solution is better: AA or NO AA.

And could you find me the artist who created those spheres that wants them to have brown lines running down the middle?

So then (as per Marco's suggestion), why bother AAing at all? All it does is create a "bunch of brown lines" that are, according to you, less noticeable than contrasting jaggies.

I probably forgot to mention before (it was easier last time when I had the screenshots up): they were neon green and neon red spheres, and it was a dark brown line. Is that supposed to be less noticeable than aliasing to you?
 
BenSkywalker said:
The brown line is the most noticeable artifact you can end up with (hands down). On screen it doesn't look close to the green or the red, while at least the aliasing does. The brown line represents an average, not accuracy.

I guess there is just no way you will be convinced of the facts, no matter how many times it's explained to you.

The average is the most accurate.

And could you find me the artist who created those spheres that wants them to have brown lines running down the middle?

Or the artist that wants thicker / jagged lines?

The problem is, the artist doesn't have enough resolution.

Again, you are confusing your subjective opinion about what you believe is most "appealing", vs. what is actually most accurate.

I probably forgot to mention before (it was easier last time when I had the screenshots up): they were neon green and neon red spheres, and it was a dark brown line. Is that supposed to be less noticeable than aliasing to you?

Subjectively less noticeable to you does not equate to less accurate.

There is no way of getting around the fact that with too little resolution, the resulting image will be DIFFERENT from the source. In any given situation, one method may be more or less subjectively noticeable to any given person.

Take a screen, 4x4 pixel grid.

Now, imagine "the artist" wants a white background with a "black line" that is supposed to be 0.9 pixels wide and goes vertically straight down the middle.

What's more "acceptable" to the artist?

1) a completely white screen (no line at all)
2) a 1-pixel-wide black line, not down the middle, but in the 2nd of 4 columns
3) a 1-pixel-wide black line, not down the middle, but in the 3rd of 4 columns
4) a 2-pixel-wide gray line, down the middle.

They all pretty much suck, don't they? Yes, it sucks to lack some minimum resolution that you really need. But who are you to say that number 1 is most "correct" (point sampling), or that number 2 or 3 is most correct (some other algorithm that "preserves contrast"), to any given individual? Who cares what you, as an individual, have a preference for?

Does the artist have a preference to preserve the "contrast"? Or to preserve the "location"? He can't do both... not enough resolution.

All we know for a fact is that number 4 (AA) is statistically the most accurate. It is a compromise of BOTH location and contrast.

So unless you suggest that there should be some "mind reading" algorithm that can magically determine which is "most subjectively pleasing" for any given situation for any given viewer, I suggest we settle on the most accurate.
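Joe's 4x4 example in sketch form (Python; a box filter and the stated 0.9-pixel line, with positions assumed): the coverage-accurate result is exactly option 4, two mid-gray pixels down the middle.

    # Box-filter coverage of a 0.9-px-wide black line centred at x = 2.0
    # on a 4-pixel row (pixel i spans [i, i+1); 1.0 = white).
    def overlap(px_l, px_r, ln_l, ln_r):
        return max(0.0, min(px_r, ln_r) - max(px_l, ln_l))

    line = (2.0 - 0.45, 2.0 + 0.45)
    row = [1.0 - overlap(i, i + 1.0, *line) for i in range(4)]
    print(row)  # approx [1.0, 0.55, 0.55, 1.0]: two gray pixels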
 
BenSkywalker said:
That particular example is a no-win demonstration (ignoring resolution); there is no good way to deal with it.

Yes there is: add some motion ... you can take edge creep, I'll take brown. It is the best looking, as well as the most accurate in an MSE sense.

Marco-

Also, aliasing of edges inside textures is no more or less desirable than aliasing of polygon edges ... they are 100% equivalent, so to reason that brown should not be present is to reason that edges shouldn't be AA'd, and even further that all textures should be point sampled. Since none of that stands to reason, the original premise can't be correct.

I've already covered this previously. Haloing sucks but overall it is worth the tradeoff.

Oh, OK; I was under the impression that you thought brown was unacceptable as a colour to be used for anti-aliasing edges when the colours opposing the edge were red and green.

Actually now I look at that part of your arguement (I had chosen a part which looked at least slightly interesting to deal with, ie. not patently obvious in its absurdity, and ignored the rest) it is hard not to consider the possibility that you are trolling.

I provide repeatable examples to demonstrate what I am talking about.

You provide examples which show that limited-resolution representations are not ideal (well, DUH), not examples of your method getting around those (in our eyes) fundamental and unavoidable limitations. There is some hand waving going on, but I don't see any animations.

I'm met with "8-bit vs. 32-bit color" and "close one eye", and I'm trolling?

Hey, no need to get offended; really, considering the alternative, it is more of a compliment ;)

I have already stated how you could make the sampling in SSAA a much better implementation than what we have seen.

We have stated you are wrong ... it is up to you to prove us wrong in that statement with positive proof. We are in no position to prove you wrong; it is hard to prove a negative.

Extraordinary claims require extraordinary proof. Attacking some straw men who said you were wrong in the past when you weren't is far from that.

Last time we had this discussion, numerous people tried to deny that haloing happened with SSAA and said I wasn't listening; now we have gamma corrected AA to deal with the situation that didn't exist.

That can hardly be referring to me; I haven't in my life even used the term haloing in this context ... and I don't plan to start making a habit of it either. Lack of gamma correct AA mostly has the effect of shifting the perceived edge location inside pixels in a nonlinear way depending on coverage and colors (i.e. it causes a form of staircasing of its own). I don't see how haloing describes that.

Last time we had this discussion I went on a rant about textures being way too blurry on 3dfx hardware, and supposedly I didn't know what I was talking about; and then 3dfx came out with the LOD bias slider after Rev's article was posted. I've been in a very similar discussion previously, and it went a lot like this one.

I for one was questioning the performance hit that using the correct LOD for supersampling would give on the VSA-100, when all we had was the white paper.

Using the wrong LOD was never really so much a question of a lack of biasing, although it does work like that at the hardware level, but more so that needing much more external memory bandwidth for textures than other supersampling approaches would have hurt their benchmarks. They chose to do it wrong (all assuming they were not incompetent).

Marco

PS. Didn't notice the other reply ... sorry.

BenSkywalker said:
Marco-

let's further say we close one of our eyes.

So you are half blind?

No, but I don't have a stereoscopic monitor.

The way you should do texture filtering for a sample purely depends on that sample's footprint (which is dependent on the sampling pattern) in texture space (after backward mapping). As far as the weighting of samples is concerned, if they have equal footprints in image space they should have equal contributions to the pixel colours.

Isotropic or anisotropic?

What? The footprints of the samples in image space, or the texture filtering? The texture filtering should be anisotropic, to approximate the footprint in texture space, obviously. As for the image space samples ... since selecting optimal sampling locations based on the visible surfaces inside a pixel isn't possible, for obvious reasons, yes, they will usually have an isotropic footprint.

When it comes to IQ issues, I believe my own eyes far more than I do anyone else that I can think of.

Then how can you have such rock-solid faith in a method which you have not proven in practice?
 
Ben,

A lot of your arguments at this point appear to be unfairly criticizing SSAA for what is in reality a problem with sampling in general. Returning to your response to me...

SSAA makes no attempt to sample the image in a manner that would enable it to offer proper representation. If it did end up doing so, it would be by blind luck. Any isotropic filtering method, unless applied to a wall you are staring at head on, is going to weight the pixel values improperly in relation to object space.

The entire goal here is to generate a 2D image (the raster), very much as if you were staring head-on at a picture up on a wall!

The process of building that 2D image involves taking samples for each raster position, generating the samples by mathematically projecting a three-dimensional data set onto a two-dimensional plane. By nature, these samples do not cover an area, but instead are color values that are accurate only for a particular infinitesimal point.

With "normal" rendering, the final color value for each raster element is determined on the basis of a single infinitesimal-point sample, taken from somewhere within the raster element's area, a process which is clearly prone to a variety of aliasing effects... you could say that if this single sample reveals anything accurate about the ideal color value for the raster element, it happened to "be by blind luck". With SSAA, the situation is improved by taking multiple infinitesimal-point samples from various points across the raster element's area, and having each of these sub-samples contribute to the final determination of the raster element's color value.

Granted, while SSAA improves the situation, it doesn't instantly solve the fundamental challenges of the sampling process: taking 4 infinitesimal-point samples per raster element in 4x SSAA still leaves a pretty poor approximation of the raster element's entire surface, and the raster element is still going to be too large to allow representation of all the detail that is desirable in the 2D image. However, how can one fault SSAA, specifically, on these points?

A key thing to keep in mind is that SSAA is anti-aliasing the *raster*, not the objects within the three-dimensional data set. Details like anisotropy, z-coordinates, and perspective are unimportant to the raster, which is simply a two-dimensional collection of color samples.
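A sketch of the process as described (Python; shade() is a hypothetical stand-in for the whole projection pipeline): each raster element's colour is just the average of several point samples taken across its area.

    # 4x SSAA over one raster element: average four point samples taken
    # at fixed subpixel offsets.  shade(x, y) stands in for projecting
    # the 3D data set to a colour at one infinitesimal point.
    def shade(x, y):
        return 1.0 if x + y < 1.2 else 0.0  # toy scene: a diagonal edge

    OFFSETS = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]

    def pixel(px, py):
        return sum(shade(px + dx, py + dy) for dx, dy in OFFSETS) / 4.0

    print(pixel(0.0, 0.0))  # 0.75: the edge pixel becomes a partial blend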
 
BenSkywalker said:
Where did that come from? The brown line is the most accurate. It's the best you can do. It is LESS of an "artifact" (artifact defined as non-accurate) than not doing any AA at all.

The brown line is the most noticeable artifact you can end up with (hands down). On screen it doesn't look close to the green or the red, while at least the aliasing does. The brown line represents an average, not accuracy.
I'm really confused trying to understand you. What do you think you should see? (BTW, the "brown" colour should really be said to be more yellow (non-linear behaviour etc. etc.), but that's getting a bit picky.)


Anti-Aliasing is exactly the way to "deal" with the no-win situation that is not having enough resolution. This is not a question of some situation in which there IS a "win." It's a question of which solution is better: AA or NO AA.


And could you find me the artist who created those spheres that wants them to have brown lines running down the middle?
You do realise, don't you, that your eyes are of limited resolution (in fact, of varying resolution depending on the angle of the incoming light)? If an artist physically constructed these spheres and stuck them on a pedestal in a gallery, and you looked at them from a distance, you wouldn't see his desired effect either.

Hmm, perhaps you are sitting too close to your monitor for the given rendering resolution. Also, you can't enlarge an image by pixel replication and expect to see the correct result. Pixels need to be approximated by (at least) the central lobe of the sinc function, and a square pulse is far from ideal.

To get back to the Quincunx filter:
  • I said it was only doing 2x supersampling and you can't expect miracles! It's a case of "Sow's ear/silk purse" or "you can't polish a tu..." etc.
  • A Bartlett filter doesn't have a terribly good frequency cut-off, and so, yes, it will be tuned so that it starts removing some frequencies too early (i.e. slightly blurring) while still letting through some that will generate aliasing, but it's still better than a box filter and, as I said, there are much better filters that could be used. One "ideal" approach, given a fixed supersampling rate, would be to render the entire scene at the higher resolution, transform it with an FFT into the frequency domain, cull out the illegal frequencies that would generate aliasing, transform back into the spatial domain, and then subsample. I can't see that being implemented (in the near future) in real-time hardware though :)
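Offline, that "ideal" pipeline is easy to sketch with numpy (a 1D signal for brevity; a real image would use fft2, and the test signal below is invented):

    import numpy as np

    # The "ideal" downsample, 1D sketch: FFT, zero every bin above the
    # target Nyquist rate, inverse FFT, then subsample.
    def ideal_downsample(signal, factor):
        spectrum = np.fft.fft(signal)
        n = len(signal)
        cutoff = n // (2 * factor)            # new Nyquist bin
        spectrum[cutoff:n - cutoff] = 0       # cull aliasing frequencies
        return np.fft.ifft(spectrum).real[::factor]

    hi_res = np.sin(np.linspace(0.0, 8.0 * np.pi, 64))  # toy supersampled row
    print(ideal_downsample(hi_res, 2)[:4])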
 