Has Nvidia Found a Way to get a Free Pass in AA comparisons

RussSchultz said:
While I agree there is a potential for subterfuge, and we essentially have nothing to rely on but to trust NVIDIA that their snapshots are doing the right thing, your post just struck me as an unfortunate diversion from the usual modus operandi.

yet the way i read his post, the issue was directed generally at trusting any company that claims the snapshots are not doing the right thing, and the confusion that can arise from that. hence, this is why i raised the question to your comment above. i did understand that the request was not in response to what i said; however, from where i stand it looks as if you are trying to draw up some new protocols, which i take to be nonsensical. since your position was stated in a public forum, i found it within my rights as a member of the public to take interest in it. i still don't understand what you were getting at though, RussSchultz; perhaps it was just a matter of confusion?
 
There are plenty of web sites to hang out at, so i have no fear of the consequences of saying this (if any).

Edited by JR: This post serves no purpose.
 
OpenGL guy said:
Next we'll see SIS claim that screenshots don't accurately reflect what their "turbo texture" modes and their "jitter-free jittered AA" do.

Could mention the 8500's FSAA here :p ;)

Is it not possible to capture the output image if it's blended later?

Last time I took some screenshots on the 8500 in Q3, the Q3 screenshot button would only show the top-left quarter of the screen with 4xAA, but the Prt Sc button on the keyboard followed by a paste into Paint shows the full AA'ed screenshot. It won't show gl_overbrightbits settings though, which the Q3 screenshot key does. Suppose what I'm trying to ask is how these methods differ, and if it's possible to actually grab the image that's being displayed? :)
 
yet there is no comment from you on the insinuative remarks in the recent interview that is posted on this site. should that not strike me as odd?
 
When was that interview posted?

Oh, could it have been on the 3rd?

Hmmm, could I have been out of town skiing for 5 days to return to a fully dissected interview?

No, I was simply keeping quiet as the dark lord commanded.
 
Bambers said:
Suppose what I'm trying to ask is how these methods differ and if its possible to actually grab the image thats being displayed? :)
Don't know and don't know. I don't work on the 8500, nor the OpenGL driver, so I have no idea how captures are done. I would guess that "screenshot" in Quake 3 is using glReadPixels() and that "Print Screen" does a GDI lock on the surface.

I can say that the results captured on the 9700, barring driver bugs, accurately reflect the images displayed on the screen. But you can see that for yourself.
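For what it's worth, the difference Bambers is poking at can be sketched with a toy model (pure illustration, not real driver or API code): if a screenshot path reads back only one of several sample buffers, while the display hardware combines them at scanout, the capture and the screen will disagree wherever the samples differ.

```python
# Toy model: a capture path that reads one sample plane vs. what a
# combine-at-scanout stage actually puts on screen. Values are made up.

# Two sample planes for a 1x4 scanline; an edge crosses the second pixel,
# so the planes disagree there (0.0 = dark geometry, 1.0 = background).
plane_a = [0.0, 0.0, 1.0, 1.0]
plane_b = [0.0, 1.0, 1.0, 1.0]

def naive_capture(a, b):
    """Grab just one sample plane, as a buggy screenshot path might."""
    return list(a)

def displayed(a, b):
    """Average the planes per pixel, as combining at scanout would."""
    return [(x + y) / 2 for x, y in zip(a, b)]

shot = naive_capture(plane_a, plane_b)
screen = displayed(plane_a, plane_b)
print(shot)    # [0.0, 0.0, 1.0, 1.0] -- hard edge, no AA visible
print(screen)  # [0.0, 0.5, 1.0, 1.0] -- the edge pixel is blended
```

So a screenshot tool that only sees one buffer would show an un-AA'ed edge even though the monitor shows a blended one.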
 
Seems like a comparison could be made once it has been verified that it is an accurate representation of the on-screen image. I don't see a problem here if there is software to do this. Now, is there or isn't there a way to capture the Nvidia GF FX's output?
 
RussSchultz said:
When was that interview posted?

Oh, could it have been on the 3rd?

Hmmm, could I have been out of town skiing for 5 days to return to a fully dissected interview?

No, I was simply keeping quiet as the dark lord commanded.

there is no reason to get defensive, as i was just pointing out inconsistencies, not looking for excuses or insinuating your involvement in some mephistophelian plot. ;)
 
well, one of the hardware sites (i forget which one, maybe [H]?) that reviewed the gffx said that in games the fsaa looked better than it did in the screen shots, but not by much. they said the 9700 still had much better quality.

i am actually very disappointed with the gffx fsaa. i've used my voodoo5 for years, and that spoiled me with great fsaa quality. my gf4 doesn't even hold a candle to the quality of the v5's fsaa (quality, not perf). i was hoping they would use some tech that 3dfx was working on for rampage (a free fsaa method) in the nv30, but it looks like the nv30 barely uses any 3dfx tech, and the only real thing nvidia got from 3dfx was the engineers. it sucks to see so much great technology buried.
 
Yeah: it's so sad that five years after the introduction of FSAA for graphics cards, NVIDIA is still using the same methods with the same results.

As for recreating the GF FX's (real?) screens: at least you can verify them at the analog level. This is NOT easy, but there are parties that could (and perhaps would?) do this, mainly the technical magazines, for example. The real problem is that they are not much more trustworthy than any other source; they can very well be partial.

However, I think it quite outrageous of NVIDIA to start claiming their output is unverifiable. Together with forbidding reviewers to compare their products to the competition or to use certain testing tools (3DMark?), this sounds like they are really losing the competition and using their "strong" marketing department to try to counter it...
 
I think one thing missing from every single review so far is comments on gaming (with AA and Aniso enabled) on the GeForce FX.

Sure, they can post screens of the GF FX's AA and analyze them to death, but do they play the games those screens come from? And if the AA is added by a post-filter, then they're not analyzing what people will actually be seeing, are they? The real question is whether what the reviewer sees on their screen while playing is comparable to ATi's methods. After all, if I had a GeForce FX, I'd care what it looked like on the screen I'm looking at right now, not what it does on someone else's screen. If it looks good while gaming, I don't see what the big deal would be. I suppose we're just waiting for consumers to give us that information, then, aren't we?
 
Re: Has Nvidia Found a Way to get a Free Pass in AA comparisons

Hellbinder[CE] said:
Anyone with eyes can see that the 2x and 4x modes are clearly behind the power curve when directly compared against the R300's AA modes. Yet Nvidia is claiming a post-process filter effect that "improves" the IQ for the final product but cannot be seen in screen shots. Quincunx has used a post-filter blur for years, yet it is visible in screen shots. How is it that Nvidia has a new, mysterious method of AA filtering that is supposed to enhance the quality of the AA? What could they be doing in the final few steps before the frame hits the screen that will significantly, or noticeably, improve the AA?
No matter what is done while the samples are being combined in the DAC, the sample positions still cannot be changed, meaning that 4x is still quite inferior to ATI's. The only possibility is that perhaps nVidia is also doing gamma-correct combination in the DAC (which would only apply to fullscreen apps), and this would not be apparent in screenshots. This might be verified by comparing windowed and fullscreen output in a game that both has a wireframe mode and can be run in a window (since we know that there is no gamma-correct combination in windowed apps).

Of course, given that nVidia has not announced any sort of gamma-correct FSAA, this is highly unlikely.
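To illustrate the gamma-correct combination Chalnoth describes, here is a quick sketch (the 2.2 gamma value is just an illustrative assumption, not a claim about any particular card): averaging samples directly in gamma space gives a different, darker edge pixel than linearizing first, averaging, and converting back.

```python
# Averaging two AA samples (a black/white edge pixel) two ways.
GAMMA = 2.2  # illustrative display gamma, an assumption for this sketch

samples = [0.0, 1.0]  # gamma-space sample values at the edge pixel

# Naive: average directly in gamma space.
naive = sum(samples) / len(samples)

# Gamma-correct: linearize, average in linear light, convert back.
linear = [s ** GAMMA for s in samples]
correct = (sum(linear) / len(linear)) ** (1 / GAMMA)

print(round(naive, 3))    # 0.5
print(round(correct, 2))  # 0.73 -- a visibly lighter edge pixel
```

The sample positions are untouched either way; only the blend weights in perceptual terms change, which is why such a difference could exist on screen without changing the AA pattern itself.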
 
TKorho said:
However, I think it quite outrageous of NVIDIA to start claiming their output is unverifiable.

I don't think that they have claimed at any stage that the results are 'unverifiable' - just that the initial results were incorrect. I think that now with the appropriate tools we should be seeing the real output, and hopefully IQ comparisons going forward will be accurate.

It is down to the online journalists to do appropriate comparisons and analysis and to work with IHVs such as ourselves and nVidia to ensure that they are getting accurate information - reviews are a collaboration to some extent because journalists rely on IHVs to answer their questions, and IHVs rely on journalists to bring issues to their attention (preferably before publishing so they can get an informed response).

- Andy.
 
RussSchultz said:
3dfx had the same thing with the V5.

There are two physical buffers that contain the two sets of samples. The RAMDAC grabs samples from each and combines them before outputting to the monitor.

This avoids having the GPU do an explicit downfilter operation and saves that bandwidth.

Why they don't have some method of providing screenshots, I don't know, but the method of FSAA is valid and if capturing a screenshot only returns one of the buffers then it will not be an accurate representation of whats on the screen.

I don't believe 3dfx ever used this method for FSAA, and in fact they started using the post filter with the V3 for their 16/22-bit mode--not for FSAA, of course, since the V3 didn't have any. The V5 used the T-buffer for all of its FSAA if you'll recall, both 2x and 4x.
 
RussSchultz said:
The t-buffer method is exactly that: using the ramdac to blend the samples on the output stage.


Sigh, wrong--I see I am speaking to someone who knew nothing about it when it shipped. The T-buffer employed hardware jittering of pixels--the only one of its type ever to have done that--just ask Carmack, who wrote up a nice little ditty on it when the V5 shipped. Unlike companies like nVidia, which later attempted to copy 3dfx's introduction of FSAA into the popular 3D marketplace, 3dfx used multiple copies of pixels *at the same resolution* and blended them for output--and the V5 was a true 32-bit product--unlike the V3--and therefore needed no post-filter blending to approximate higher color accuracy. The only time the V5 used the post filter for blending was in its 16/22-bit output mode, which was a display option carried forward from the V3. In 32-bit mode the V5 had no need to use the post filter for blending, as was done for the V3's 16/22-bit display mode (no FSAA).

FYI, it was originally impossible to grab screen shots from the V3--which again had no FSAA at all--that were accurate representations of what you saw on the screen, until screen shot software was used which captured the post-filter blending. The difference was far more dramatic than what you see with nVidia's 2x and QC FSAA modes, and it affected the entire image, which again was not AA'ed at all. No, 3dfx did not use the post filter for FSAA, and the post filter was *not* the T-buffer.

Essentially, what the V5 did was take a 640x480 image (for instance) in local memory, double it at 640x480, jitter it in hardware, and then blend the pixels together in the T-buffer (*not* the post filter at all)--and it took two VSA-100s to do that. Working at a full 32 bits internally and capable of a 32-bit display, again, it did not need to use the post filter for blending like the V3, which had no 32-bit display capability.
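The jitter-and-blend scheme described above can be sketched as a toy model (the sample offsets below are made up for illustration, not 3dfx's actual jitter pattern): render the same pixel several times at slightly offset sub-pixel positions, then average the copies.

```python
# Toy jittered supersampling of one pixel straddling a vertical edge.
# Geometry covers everything left of x = 0.5; the offsets are invented
# jitter positions within the pixel, not any real hardware's pattern.
jitter = [(0.125, 0.375), (0.375, 0.875), (0.625, 0.125), (0.875, 0.625)]

def shade(x, y):
    """1.0 if the sample position lands on the geometry, else 0.0."""
    return 1.0 if x < 0.5 else 0.0

# Render the pixel once per jittered offset, then blend the copies,
# which is roughly what combining the same-resolution copies amounts to.
pixel = sum(shade(x, y) for x, y in jitter) / len(jitter)
print(pixel)  # 0.5 -- two of the four samples fall inside the edge
```

The edge pixel comes out as a coverage-weighted blend of the jittered copies, which is why the V5's edges looked smoothed at the same nominal resolution.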
 
I'll have to back Russ on this one. ;)

Walt....much of what you said (multiple buffers, etc.) is true, but it doesn't exclude the way that the buffers are combined.

AFAIK, the V5 worked exactly how Russ said it did. Note that it's not a "blur" filter, and that's not what Russ is saying. You can use the RAMDAC to do a blur or other post filter, but with the VSA-100 running in AA mode, it was also used to combine the samples from the separate chips.
 