Could activation of SSAA improve texture quality?

"you can't polish a tu..."

Not strictly true, Simon. But then who wants a shiny tu..!

I've only skim-read this whole topic, but it appears to me that Ben should probably take the hint that some of the most knowledgeable posters on these boards disagree with what he is saying. If it's an argument with one or two people, fair enough, but when nobody else is supporting your side of the argument, it does call into question what you are saying.
 
I'll try this since the other end of the conversation is going in circles.

If you were to weight all of the samples for a given pixel based on its relation to the viewpoint, use a vector from the viewpoint and measured the intersection point of each sampling point and also checked the exact center of the pixel to calculate out the falloff you should use for each weighting how would it no be superior to the isotropic filtering we have now?
 
BenSkywalker said:
...how would it no be superior to the...
Och! You're sounding a bit Scottish there...

If you were to weight all of the samples for a given pixel based on its relation to the viewpoint...
Which view point? The camera location?
... use a vector from the viewpoint and measured the intersection point of each sampling point and also checked the exact center of the pixel to calculate out the falloff you should use for each weighting
That sounds like you are simply applying a weighted filter. That's what I was saying - the subpixel position of each sample can determine the weight applied to that sample. Anyway, there's no need to go mucking about with finding individual vectors - the overall projection of the scene at a higher rendering resolution will take care of it all.
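To make that concrete, here's a rough Python sketch of such a weighted filter (purely illustrative, not any actual hardware implementation): each subsample's weight falls off with its distance from the pixel centre via a simple "tent" function. With a centre sample plus four corner samples, this happens to reproduce the familiar 50%/12.5% Quincunx weights.

```python
# Illustrative sketch: weight each subsample by its subpixel position.
# Weight = product of (1 - distance from pixel centre) along each axis,
# then normalised so the weights sum to 1. All names are made up.

def tent_weights(offsets):
    """offsets: subpixel positions in [0, 1] x [0, 1] relative to the pixel."""
    raw = [(1.0 - abs(x - 0.5)) * (1.0 - abs(y - 0.5)) for x, y in offsets]
    total = sum(raw)
    return [w / total for w in raw]  # normalise to sum to 1

def filter_pixel(samples, offsets):
    """Combine subsamples into one pixel value using the tent weights."""
    return sum(s * w for s, w in zip(samples, tent_weights(offsets)))

# One centre sample plus four corner samples:
offsets = [(0.5, 0.5), (0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(tent_weights(offsets))  # [0.5, 0.125, 0.125, 0.125, 0.125] -> Quincunx weights
print(filter_pixel([1.0, 0.0, 0.0, 0.0, 0.0], offsets))  # 0.5
```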
how would it no be superior to the isotropic filtering we have now?
I'm possibly being a bit thick but what exactly are you asking?
 
Which view point? The camera location?

Yes.

That sounds like you are simply applying a weighted filter. That's what I was saying - the subpixel position of each sample can determine the weight applied to that sample.

I'm not talking about a fixed weighting such as 50%/12.5% ala Quincunx, I'm talking about utilizing a weighted sampling based on the relative intensity of focus in relation to the camera angle.

Anyway, there's no need to go mucking about with finding individual vectors - the overall projection of the scene at a higher rendering resolution will take care of it all.

I'm not asking if you think it is good enough, I'm asking if you are saying it wouldn't be any better.

I'm possibly being a bit thick but what exactly are you asking?

Do you think using the current isotropic filtering implementation as it relates to AA would be equal to or greater than a sampling implementation that was weighted based on angle in relation to the camera?
 
BenSkywalker said:
Do you think using the current isotropic filtering implementation as it relates to AA would be equal to or greater than a sampling implementation that was weighted based on angle in relation to the camera?
Ben, we all agree that infinite resolution would be the perfect solution, right? So I think we also agree that a 1600x1200 resolution screenshot is better than an 800x600 resolution screenshot, correct? So let's have the graphics card do just that, namely render a 1600x1200 screenshot, preferably with anisotropic filtering. That would be nice, would it not? Is there anything wrong with that 1600x1200 screenshot compared to the 800x600 screenshot? I think we both agree that there's nothing wrong? Do you also agree that this 1600x1200 image, after all the rendering, is now a simple 2D image, painted on a 2D screen? Just like a photo or a digicam snapshot would be? A photo also contains 3D elements, but the photo itself is just a 2D canvas, and resampling a photo has nothing to do with angles or the camera, do you agree?

Now let's say we have an LCD with 1600x1200 pixels. That 1600x1200 screenshot on our 2D LCD screen would look very nice, right? Now let's say we have an LCD with only 800x600 pixels. What would be the best possible way to downsample the 1600x1200 screenshot to 800x600? There's nothing angle-specific anymore. So the only question left is which downsampling algorithm to use. So which algorithm is the best one in your opinion?

Let's further suppose that there's a green/red raster in that screenshot which can be shown correctly at 1600x1200, but which is too fine for 800x600.

According to what you said in all your previous posts, you seem to think that we should now take 1 pixel out of each 2x2 pixel raster of the 1600x1200 screenshot and drop the other 3 pixels for the final 800x600 image. This way we would only get green and red pixels, but no brown pixels. According to your previous comments, this would give the most accurate/detailed image, because we get no brown pixels this way. Is that your opinion? Do you understand that the result of this operation would be (almost) identical to directly rendering at 800x600?

Please understand that most SSAA implementations do what I described here, namely rendering a frame at e.g. 1600x1200 instead of 800x600 and then downsampling the final 2D "photo". The downsampling algorithm used for SSAA is usually bilinear interpolation (I think). Here's a page which shows an image comparison for different downsampling algorithms:

http://www.smalleranimals.com/isource/isreduce.htm
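To make the 2x2 choice concrete, here's a toy Python sketch (the checkerboard raster and colours are my own illustration, not taken from any actual screenshot): averaging all four subpixels, which is roughly what SSAA's downsample does, versus keeping one pixel and dropping three.

```python
# Toy 2x2 downsample comparison on a 1-pixel-fine green/red checkerboard.
# box_downsample: average each 2x2 block (what an SSAA-style downsample does).
# point_downsample: keep 1 pixel per 2x2 block, drop 3 (equivalent to no SSAA).

def box_downsample(img):
    """Average each 2x2 block of (r, g, b) pixels."""
    out = []
    for y in range(0, len(img), 2):
        row = []
        for x in range(0, len(img[0]), 2):
            block = [img[y][x], img[y][x + 1], img[y + 1][x], img[y + 1][x + 1]]
            row.append(tuple(sum(p[i] for p in block) // 4 for i in range(3)))
        out.append(row)
    return out

def point_downsample(img):
    """Keep the top-left pixel of each 2x2 block, drop the other three."""
    return [row[::2] for row in img[::2]]

GREEN, RED = (0, 255, 0), (255, 0, 0)
# A raster too fine for the target resolution: alternating single pixels.
hi_res = [[GREEN, RED] * 2 if y % 2 == 0 else [RED, GREEN] * 2 for y in range(4)]

print(box_downsample(hi_res))    # every pixel becomes the "brown" average (127, 127, 0)
print(point_downsample(hi_res))  # all (0, 255, 0): the red half of the raster simply vanishes
```

Note how dropping 3 of 4 pixels doesn't preserve the raster at all; it aliases it into a solid colour, while the averaged version at least preserves the overall brightness.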
 
A photo also contains 3D elements

It doesn't though, and that is pretty much the basis of my end of the discussion. The downsampling itself is entirely a 2D function, I'm saying that it would be superior if we were to sample actual 3D data so that the sample elements could be based on their proper perspective in 3D instead of the pure 2D solution we have now.

Out of the current solutions available the best downsample would be a stochastic MSAA implementation with AF.
 
BenSkywalker said:
The downsampling itself is entirely a 2D function, I'm saying that it would be superior if we were to sample actual 3D data so that the sample elements could be based on their proper perspective in 3D instead of the pure 2D solution we have now.

Out of the current solutions available the best downsample would be a stochastic MSAA implementation with AF.
This is NOT what you were arguing before - namely, that SSAA makes an image worse, not better... so what's the deal?
 
This is NOT what you were arguing before - namely, that SSAA makes an image worse, not better... so what's the deal?

Could you find where I said that? I said you lose texture detail using SSAA, and it makes it worse in certain aspects. In my last post I was talking in particular about the current discussion with Simon, the other one just keeps going in circles. I want to know if anyone thinks that a weighted sampling based on depth would be equal to or inferior to a straight 2D down sample.
 
BenSkywalker said:
Out of the current solutions available the best downsample would be a stochastic MSAA implementation with AF.
We are talking about textures, not about edges. MSAA does nothing to textures. So MSAA is as good as doing no AA at all (to textures).

You say: There might be something (even) better for texture quality than SSAA, namely downsampling that takes into account the additional depth information we have. This *might* be true, I'm not sure. But just because there might be something (even) better than SSAA, that doesn't mean we shouldn't use SSAA at all. So let's get back to the topic: Are SSAAed textures more or less accurate/detailed than non-SSAAed textures? Please reread my previous big comment and tell me: Do you still think that using 1 pixel and dropping 3 pixels out of a 2x2 pixel raster is your preferred downsampling method (which would equate to MSAA, or to directly rendering at the destination resolution)? Can I get a clear yes or no to this question, please?
 
There might be something (even) better for texture quality than SSAA

AF is, hands down.

Do you still think that using 1 pixel and dropping 3 pixels out of a 2x2 pixel raster is your preferred downsampling method

What you are suggesting is to place all of the weighting into one corner of the downsampled pixel; improper weighting is my main argument against SSAA. So the answer is absolutely not. Your conditions are a bit odd: you are proposing that 1) SSAA must be done and 2) you can't do it the way I've been suggesting. Under those conditions nothing will work properly.
 
BenSkywalker said:
AF is, hands down.
AF is no downsampling algorithm. It's used *during* rendering of the 3D scene, when drawing a texture at a specific angle. SSAA is applied *after* rendering the frame, on the final rendered 2D "photo". That is also the reason why SSAA does not need to care about angles.
BenSkywalker said:
improper weighting is my main argument against SSAA
Improper? When you downsample a photo, is that also improper weighting? You don't seem to be *willing* to understand that after the rendering is done we have a 2D image, just like a photo. Okay, in addition to that we also have depth information for each pixel. By making use of that it would maybe (but I'm not even sure about that) be possible to find an even better downsampling algorithm than those usually used for photos. But there's nothing "improper" about the weighting.

Say you take a photo of a building with a digicam, and then step back a few paces to take another shot, with the building half as big as before. This is something like downsampling, isn't it? How do you think the pixels are weighted during that "downsampling"? Do you think they are weighted according to the "z value" of the objects!? You're not serious, are you?
 
AF is no downsampling algorithm. It's used *during* rendering of the 3D scene, when drawing a texture at a specific angle.

It's used to sample the texture. If you take a texture that is 512x512 and it is only occupying 120x84 pixels on your monitor, then AF acts as a downsample of sorts.

Improper? When you downsample a photo, is that also improper weighting?

Depends on the downsample applied. That said, this discussion isn't about 2D graphics. Right now, due to display limitations, we are stuck with 2D output, but as few operations as possible should be done in 2D. It is inferior.

By making use of that it would maybe (but I'm not even sure about that) be possible to find an even better downsampling algorythm than those usually used for photos. But there's nothing "improper" with the weighting.

How many different downsamples have you seen in your life done improperly? I've seen plenty.

Say you take a photo of a building with a digicam, and then step back a few paces to take another shot, with the building half as big as before. This is something like downsampling, isn't it? How do you think the pixels are weighted during that "downsampling"? Do you think they are weighted according to the "z value" of the objects!?

Yes, that is exactly what is happening. When you step back from the building, the photo you are taking will by default be adjusted for the interaction of light in relation to the camera, and that interaction creates the new image. I bring up Z simply to get across the point that depth should be taken into account. The example you just gave does exactly that. SSAA does not.
 
BenSkywalker said:
Could you find where I said that? I said you lose texture detail using SSAA
And that's what most people here disagree with.

As for the weighting: How do you think this weighting should work? Increasing with the actual (3D) area covered by that subsample, or decreasing?
 
2x2 supersampling using bilinear texture filtering gives you approximately the same effect as 2x anisotropic filtering inside textures (in general it will approximately provide the equivalent of texture filtering with 2x the max anisotropy of the actual texture filtering used). The MSAA+AF versus SSAA+AF argument is pretty bogus; apart from higher max anisotropy and higher accuracy along polygon edges (internal and external) they are pretty much the same.

As for focus, we are working with a pinhole camera here ... everything is in focus. Even if it wasn't, it is useless to try to deal with focus inside a pixel; DOF effects have a much longer range than that ...
 
Back to red-green ...

Ben, which of these images do you think looks best?
redgreen.png
 
Colourless said:
Ah the wonders of Gamma corrected AA, even makes the example 'bad' cases of AA look good.

What do you think is a bigger factor in ATI's superior AA: gamma correction or sampling pattern? Just curious whether one aspect dominates or whether they both provide an equal step up.
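For anyone following along, "gamma corrected" AA means blending the edge samples in linear light rather than on the stored gamma-encoded values. A rough Python sketch (assuming a simple gamma of 2.2; real displays use sRGB, which is close but not identical):

```python
# Sketch of gamma-correct vs naive AA blending of two edge samples.
# Assumes a pure power-law gamma of 2.2 for illustration.

GAMMA = 2.2

def blend_gamma_space(a, b):
    """Naive AA blend: average the stored (gamma-encoded) 0-255 values."""
    return round((a + b) / 2)

def blend_linear_light(a, b):
    """Gamma-correct blend: decode to linear light, average, re-encode."""
    lin = ((a / 255) ** GAMMA + (b / 255) ** GAMMA) / 2
    return round(255 * lin ** (1 / GAMMA))

# A 50%-covered edge pixel between black (0) and white (255):
print(blend_gamma_space(0, 255))   # 128 -> displays noticeably too dark
print(blend_linear_light(0, 255))  # 186 -> perceptually halfway
```

The naive blend makes near-50% edge pixels come out too dark on a gamma-2.2 display, which is why gamma-correct AA makes edge gradients look smoother.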
 
Deflection said:
What do you think is a bigger factor in ATI's superior AA: gamma correction or sampling pattern? Just curious whether one aspect dominates or whether they both provide an equal step up.
If you compare NVidia's 2x AA with ATI's 2x AA, the difference is there, but not too big. But if you compare NVidia's 4x AA with ATI's 4x AA, the difference is very evident, especially at near-90° angles. So I think the sampling pattern is the bigger plus.
 
Deflection said:
What do you think is a bigger factor in ATI's superior AA: gamma correction or sampling pattern? Just curious whether one aspect dominates or whether they both provide an equal step up.
The sampling pattern is the bigger plus, if only because ATI's gamma-correct FSAA is not adjustable (each monitor is different, and so each monitor should use a different gamma correction value...).
 
But gamma-corrected AA only works on the edges rather than the whole screen, so would it matter that much from monitor to monitor? Can't say I have noticed the differences on several different monitors (old and new).

But I would definitely think it was the sample patterns used.
 