9700 benchmarks

OpenGL guy said:
Chalnoth said:
To my knowledge, game developers usually don't ever touch LOD settings, and the hardware/drivers absolutely must set some sort of LOD.
This is absolutely incorrect. Hardware and drivers provide a baseline LOD, but applications can tweak that to suit their needs.
I concur. Just take a look at most driving and sports sims and see how aggressive an LOD the developers of these games use.
 
Chalnoth said:
To my knowledge, game developers usually don't ever touch LOD settings, and the hardware/drivers absolutely must set some sort of LOD.

Not entirely true. LOD bias is part of the APIs. True, LOD must be set somewhere, and that is calculated in HW, but the bias is normally 0.
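For reference, this is roughly where that bias lives in each API. It's only a minimal sketch, assuming OpenGL's EXT_texture_lod_bias extension and D3D8's D3DTSS_MIPMAPLODBIAS texture stage state; the helper functions are my own, not from any game:

```cpp
#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h>   // GL_TEXTURE_FILTER_CONTROL_EXT, GL_TEXTURE_LOD_BIAS_EXT
#include <d3d8.h>

// OpenGL: EXT_texture_lod_bias adds an offset to the LOD computed for the
// currently active texture unit. Negative = sharper, positive = blurrier.
void SetLodBiasGL(float bias)
{
    glTexEnvf(GL_TEXTURE_FILTER_CONTROL_EXT, GL_TEXTURE_LOD_BIAS_EXT, bias);
}

// Direct3D 8: the bias is a per-stage texture stage state, passed as the
// bit pattern of a float.
void SetLodBiasD3D8(IDirect3DDevice8* device, DWORD stage, float bias)
{
    device->SetTextureStageState(stage, D3DTSS_MIPMAPLODBIAS,
                                 *reinterpret_cast<DWORD*>(&bias));
}
```

Both of these default to a bias of 0 if the application never touches them, which is the point: the hardware's own LOD selection is what you get unless the developer asks for something else.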

Hehe, getting off topic with the rest of your argument there. I understand that algorithms vary, and am quite aware of the visual variances. What I was getting at is that developers design for the lowest common denominator, and using a card with drivers that push the LOD bias for performance can explain why other cards may look too aliased. Whilst the complaint of the Radeon 9700 having too aggressive an LOD may be justifiable (personal preference and all), it may potentially be explained by a human factors approach, and nothing more. You get used to seeing a blurry image, so you think the blurriness is normal.
 
misae said:
Dave in your next article can we see some of the videoshader effects?

Me is most interested in that aspect of the card assuming it is enabled in the drivers :)

So am I.

I am very, very interested in the video processing engine on the 9700 PRO, although just the 3D capability (playable 6xFSAA at 1024x768x32, ha ha ha) is already attractive enough to get me to pay for it.
 
OpenGL guy said:
This is absolutely incorrect. Hardware and drivers provide a baseline LOD, but applications can tweak that to suit their needs.

Yes, they absolutely can adjust the LOD in software. My question is, do they?

Another problem is that there are generally performance benefits to making things blurrier.

Yes, there are, obviously.

Why don't you make one? Then you'll see that what is pleasing to you isn't pleasing to everyone. Also, when you write yours, you can run it on the D3D ref rast and compare results to the output of different video cards and different drivers.

I have definitely been thinking about it...and I may just do that soon. And yes, such a test certainly would not test blurriness. For example, it would completely fail if the hardware decided to just generate its own MIP maps instead of using the ones supplied by the program. I'd have to find a way to test for this.

What I'm talking about is having a program to test for aliasing, and only aliasing. Once aliasing is tested for, and shown to be nearly identical between cards, only then should the image quality be compared, because only then can you accurately assess which one has better filtering quality (because, as others have said, the LOD BIAS settings are up to the user).
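One way I can imagine attacking the "did the driver swap in its own MIP maps" problem (just a sketch on my part, not a finished test) is to upload a hand-built chain where every level is a distinct solid colour. If the colours that show up on screen aren't the ones I uploaded, the driver regenerated the chain, and the colour bands also show exactly where each card places its MIP transitions:

```cpp
#include <GL/gl.h>
#include <vector>

// Upload a colour-coded mip chain by hand, so nothing is left for the driver
// to auto-generate. baseSize is assumed to be a power of two.
void UploadColourCodedMipChain(GLuint texture, int baseSize)
{
    static const GLubyte colours[][3] = {
        {255,   0,   0}, {  0, 255,   0}, {  0,   0, 255},
        {255, 255,   0}, {255,   0, 255}, {  0, 255, 255},
        {255, 255, 255}, {128, 128, 128}, { 64,  64,  64}, {  0,   0,   0}
    };

    glBindTexture(GL_TEXTURE_2D, texture);
    // Nearest mip selection gives hard colour bands at each LOD boundary.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);

    int level = 0;
    for (int size = baseSize; size >= 1; size /= 2, ++level) {
        const GLubyte* c = colours[level % 10];
        std::vector<GLubyte> texels(size * size * 3);
        for (int i = 0; i < size * size; ++i) {
            texels[i * 3 + 0] = c[0];
            texels[i * 3 + 1] = c[1];
            texels[i * 3 + 2] = c[2];
        }
        // Every level is supplied explicitly.
        glTexImage2D(GL_TEXTURE_2D, level, GL_RGB, size, size, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, texels.data());
    }
}
```

Rendering a long textured tunnel with that texture and grabbing screenshots would then give a common baseline before any filtering-quality comparison.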
 
Chalnoth said:
OpenGL guy said:
This is absolutely incorrect. Hardware and drivers provide a baseline LOD, but applications can tweak that to suit their needs.

Yes, they absolutely can adjust the LOD in software. My question is, do they?
Yes. But, as someone else stated earlier, if your development platform has an LOD that defaults to slightly blurry, then you might believe there are no aliasing problems.
Another problem is that there are generally performance benefits to making things blurrier.

Yes, there are, obviously.
This is why some drivers tend to make things blurrier by default.
For example, it would completely fail if the hardware decided to just generate its own MIP maps instead of using the ones supplied by the program. I'd have to find a way to test for this.
Generally it's the driver that does mipmap generation. However, mipmaps tend to reduce aliasing, so I don't see why you would care if this happened.
What I'm talking about is having a program to test for aliasing, and only aliasing. Once aliasing is tested for, and shown to be nearly identical between cards, only then should the image quality be compared, because only then can you accurately assess which one has better filtering quality (because, as others have said, the LOD BIAS settings are up to the user).
I think you first need to sit down with the OpenGL/D3D specs and figure out what the default LOD should be. Then you can see who obeys and who doesn't.
 
OpenGL guy said:
Generally it's the driver that does mipmap generation. However, mipmaps tend to reduce aliasing, so I don't see why you would care if this happened.
If Chalnoth could restrict the automatic mipmap generation, he'd be able to control the mip level and LOD bias the engine would use. It would provide an accurate baseline to test against when comparing the "blurriness" of different filtering algorithms by the various IHVs.

Chalnoth, correct me if I'm off base.
 
OpenGL guy said:
Generally it's the driver that does mipmap generation. However, mipmaps tend to reduce aliasing, so I don't see why you would care if this happened.

No, it wouldn't be a problem in normal rendering situations. It would just not work with the program I suggested.

I think you first need to sit down with the OpenGL/D3D specs and figure out what the default LOD should be. Then you can see who obeys and who doesn't.

Well, if you want to put it that way, and you're talking about plain bilinear/trilinear, it's obvious that the Radeons don't meet the specs (as they don't use a proper semicircular MIP boundary). Whether or not the GeForce cards meet spec, I'm not certain.

As far as anisotropic filtering goes, I don't know how the various cards can adhere to a spec other than one based on the final visual result, as the optimal MIP map levels could easily change with the anisotropic filtering algorithm used (which is not defined in any spec).
 
Chalnoth said:
Well, if you want to put it that way, and you're talking about plain bilinear/trilinear, it's obvious that the Radeons don't meet the specs (as they don't use a proper semicircular MIP boundary). Whether or not the GeForce cards meet spec, I'm not certain.

As far as anisotropic filtering goes, I don't know how the various cards can adhere to a spec other than one based on the final visual result, as the optimal MIP map levels could easily change with the anisotropic filtering algorithm used (which is not defined in any spec).
Why does everything have to be a damn contest over ATI with you?
Anyway, LOD selection != mipmap boundary shape.
 
Althornin said:
Anyway, LOD selection != mipmap boundary shape.

It most certainly is. The LOD selection algorithm is what determines the shape of the MIP map boundaries. LOD BIAS just moves the boundaries around.

I don't see it as being remotely possible to set a standard LOD BIAS without setting a standard boundary shape, at least not in a simple sense. The only way, to my knowledge, is to set something like, "The maximum range that MIP map 1 can be away from the viewpoint in scene X is Y." You can't say something as simple as, "A LOD Bias of 0 is standard." That's just a ridiculous statement, as the LOD Bias is always relative to the driver's inherent MIP map selection algorithm.

By the same token, saying that ATI's LOD selection is too aggressive is absolutely no different from saying that nVidia's LOD selection is too conservative. The root of the problem is a lack of standardization.

And I do have to say that, after skimming the OpenGL specs, I have yet to see any reference to such a thing, and the DX8 SDK that I am currently using doesn't appear to go into the hardware side of things.
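Just to pin down what I mean by "relative": the level actually sampled is the log of the implementation's own scale-factor estimate plus the bias, so the same bias value lands on different levels on different hardware. A rough sketch of the relationship (my own, not from any spec or driver):

```cpp
#include <cmath>

// lambda is the mip level the hardware ends up sampling. rho is the
// implementation's own estimate of how much the texture is being minified,
// and that estimate is where the vendors differ; the bias is only an
// additive offset on top of it.
float MipLevel(float rho, float lodBias)
{
    return std::log2(rho) + lodBias;
}
```

So "a LOD bias of 0 is standard" only pins anything down if rho itself is pinned down first.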
 
LOD selection should simply be 'clear', be that -0.5, -1, etc. You should see the detail on the walls and ground; it should not be blurred. If I want blurred ground textures I will play with my kids' N64. This does not automatically mean texture aliasing either.

Having a lower LOD and then running benchmarks is one of the main reasons why screenshots should be mandatory in video card reviews.
I refer to the Comanche 4 shots: what would the frame rate increase be if the 9700 was as low on the LOD setting as the Ti4600, or even the Parhelia, which has an LOD similar to the 9700's?

Is that really a 'fair comparison'? I would guess the 9700 or Parhelia would gain possibly as much as 20-30 fps.
Artists spend a lot of time making games look real, and I for one like to see it.
 
Chalnoth said:
Well, if you want to put it that way, and you're talking about plain bilinear/trilinear, it's obvious that the Radeons don't meet the specs (as they don't use a proper semicircular MIP boundary).
Sorry, but you are incorrect.
http://www.opengl.org/developers/do...ec1.1/node83.html#SECTION00681000000000000000

From www.opengl.org:
Therefore, an implementation may approximate the ideal rho with a function f(x,y) subject to these conditions:

1. f(x,y) is continuous and monotonically increasing in each of |du/dx|, |du/dy|, |dv/dx|, and |dv/dy|,
2. Let m_u = max { |du/dx|, |du/dy|} and m_v = max {|dv/dx|, |dv/dy|}. Then max {m_u, m_v} <= f(x,y) <= m_u + m_v.
The spec is somewhat flexible on this point.
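To make that latitude concrete, here is a small sketch (my own code, not lifted from the spec) of the two bounds the quoted passage allows a conforming approximation of rho to sit between; anything inside that band is legal, which is why two conforming implementations can still place their mip boundaries differently:

```cpp
#include <algorithm>
#include <cmath>

// Texture coordinate derivatives at a pixel, in texel units.
struct TexDerivs { float dudx, dudy, dvdx, dvdy; };

// Ideal rho for 2D texturing: the larger of the two pixel-edge scale factors.
float IdealRho(const TexDerivs& d)
{
    float rx = std::sqrt(d.dudx * d.dudx + d.dvdx * d.dvdx);
    float ry = std::sqrt(d.dudy * d.dudy + d.dvdy * d.dvdy);
    return std::max(rx, ry);
}

// The band the quoted conditions allow an approximation f(x,y) to fall in.
void RhoBounds(const TexDerivs& d, float& lower, float& upper)
{
    float mu = std::max(std::fabs(d.dudx), std::fabs(d.dudy));
    float mv = std::max(std::fabs(d.dvdx), std::fabs(d.dvdy));
    lower = std::max(mu, mv);   // max {m_u, m_v}
    upper = mu + mv;            // m_u + m_v
}
```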
 
Doomtrooper said:
LOD selection should simply be 'clear', be that -0.5, -1, etc. You should see the detail on the walls and ground; it should not be blurred. If I want blurred ground textures I will play with my kids' N64. This does not automatically mean texture aliasing either.

No, not automatically. You need to have detailed textures to get texture aliasing. If all of your textures are a single color, you won't get any texture aliasing at all.

Also, the aliasing will be less noticeable if the textures in the game aren't very high-contrast. But that doesn't mean it isn't there. If you have more than one color per texture, then any LOD setting (assuming the default is a proper setting) less than 0 will result in texture aliasing.

You see, there are two types of texture aliasing that can result from texture filtering. One is inherent in the filtering methods used, and cannot be fixed by adjusting the LOD. The other type results from lack of pixel coverage: if there are texels from a texture that are missed between two pixels, there will be aliasing. A proper default LOD will never fail to sample a texel that is in the MIP map being displayed (and only just barely; I am talking about bilinear filtering here, and trilinear is a natural extension of this).
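A quick back-of-the-envelope example of the "missed texels" case, with numbers I made up rather than measured from any card:

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    const float rho = 4.0f;  // the texture is being minified 4:1 at this pixel
    const float biases[] = { 0.0f, -0.5f, -1.0f, -2.0f };

    for (float bias : biases) {
        float lambda = std::log2(rho) + bias;                  // selected LOD
        int   level  = static_cast<int>(std::floor(lambda));   // finer of the two levels blended
        float texelsPerPixel = rho / static_cast<float>(1 << level);
        std::printf("bias %+.1f -> level %d, %.2f texels per pixel%s\n",
                    bias, level, texelsPerPixel,
                    texelsPerPixel > 2.0f ? " (texels skipped -> shimmer)" : "");
    }
    return 0;
}
```

With the default bias, each one-pixel step covers about one texel of the selected level, so a 2x2 bilinear footprint catches everything; push the bias negative far enough and texels start falling between adjacent pixels, which is the shimmer.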
 
Chalnoth,

ATI does have a higher default LOD. I've owned Nvidia-based cards before, and that was one of the main things that stood out to me: CLEAR.
You seem to know a lot about 'texture aliasing on ATI hardware' without ever owning one; you state screenshots show your evidence...
If you could go through the 9700 reviews and point me to some to justify your argument... :-?

All I'm stating is I like to see 'clear'... not blurred.
 
Doomtrooper said:
If you could go through the 9700 reviews and point me to some to justify your argument... :-?

All I'm stating is I like to see 'clear'... not blurred.

If you really want me to, it's easy.

Just on that previous shot with the helicopter, look at the flag. Notice how some stars are visible, while others are not. This is a clear indication of texture aliasing. The reason? As I stated before, texture aliasing occurs when the video card "misses" some texels. In other words, for no aliasing to occur, either all of those stars should be there, or none of them should be. If only some of them are there in a screenshot, then that means that they will shimmer, i.e. texture aliasing.
 
Chalnoth said:
Just on that previous shot with the helicopter, look at the flag. Notice how some stars are visible, while others are not. This is a clear indication of texture aliasing.
It's not as simple as that. In this case, the flag has ripples in it which can cause different mipmaps to be used for different parts of the flag.
 
Chalnoth said:
Doomtrooper said:
If you could go through the 9700 reviews and point me to some to justify your argument... :-?

All I'm stating is I like to see 'clear'... not blurred.

If you really want me to, it's easy.

Just on that previous shot with the helicopter, look at the flag. Notice how some stars are visible, while others are not. This is a clear indication of texture aliasing. The reason? As I stated before, texture aliasing occurs when the video card "misses" some texels. In other words, for no aliasing to occur, either all of those stars should be there, or none of them should be. If only some of them are there in a screenshot, then that means that they will shimmer, i.e. texture aliasing.
Sorry, wrong.
It doesn't mean they will shimmer. Maybe you can see some of them, and maybe those that you can see are the ones you continue to see.
Either way, aliasing only really shows up in MOVING images (texture aliasing), at least for the most part.
Now assume you take the camera view in those pics and slowly move TOWARDS the flag.
Your precious GF4 will have worse aliasing than ATI's because of the sudden change to seeing all the stars (according to you, only ATI's method is flawed enough to show only some of the stars). This sudden change, I would say, is definitely noticeable, whereas with the Radeon the effect of stars suddenly popping into being would not be there.

I mean, come on. How ridiculous do you want to get, Chalnoth?
You are extrapolating huge flaws from no actual experience with the hardware, so no experience of it in motion, yet you make claims about how it will look worse in motion, etc.
Get real.
 
DaveBaumann said:
I can't say I've had any aliasing leap out at me...

Of course this is subjective. If you had Chalnoth sitting next to you at the same time then I am sure he would spot all that nasty aliasing.

I guess that is the reason you switch anti-aliasing on? And let me guess... AA doesn't remove texture aliasing now? Well, subjectively speaking, and after playing many games, I conclude it does help.

Can texture aliasing be exaggerated by other factors too? For example, colour settings, gamma, saturation, etc.?
 