What was that about Cg *Not* favoring Nvidia Hardware?

Chalnoth said:
DaveBaumann said:
I doubt it's using two-sided stenciling either.

Rev - was this done with Trilinear or Bilinear?

That's true, but even without two-sided stencil, if the test were fillrate-limited, you would think that the R9700 would be doing far better.
Why is that, exactly?
Is the game CPU-limited?
System memory bandwidth limited?
Do you know?
Have you tested it?
No?
Then do not be so quick to assume.
 
Interestingly enough, in Nvidia's own released benchmarks showing Doom 3 performance, they *claim* that the NV30 is only about 10 FPS faster than the 9700 Pro. Their tests were done without the benefit of double-sided stencil and the other Doom 3 optimizations... I happen to know this for a fact. It's going to be a little while before the OpenGL driver with these things is released.
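To make concrete what that double-sided stencil optimization buys a stencil-shadow renderer, here is a minimal OpenGL sketch in C. It uses the OpenGL 2.0 entry point glStencilOpSeparate as a stand-in for the NV30/R300-era EXT_stencil_two_side and ATI_separate_stencil extensions (a real program would load it through an extension loader), and draw_shadow_volumes() is a hypothetical placeholder for the application's own geometry submission:

#include <GL/gl.h>

/* Placeholder: a real renderer submits its extruded shadow-volume
   geometry here. */
static void draw_shadow_volumes(void) { }

/* z-fail stencil pass ("Carmack's reverse") with two-sided stencil:
   back faces increment and front faces decrement on depth-test
   failure, so both windings are handled in one geometry pass. */
void stencil_pass_two_sided(void)
{
    glEnable(GL_STENCIL_TEST);
    glDepthMask(GL_FALSE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glStencilFunc(GL_ALWAYS, 0, ~0u);

    glStencilOpSeparate(GL_BACK,  GL_KEEP, GL_INCR_WRAP, GL_KEEP);
    glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP, GL_KEEP);

    glDisable(GL_CULL_FACE);       /* process both windings at once */
    draw_shadow_volumes();         /* one pass instead of two       */
    glEnable(GL_CULL_FACE);
}

Without separate front/back stencil operations, the same volumes have to be drawn twice, once per winding, which is exactly the extra geometry pass the optimization removes.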

Given the GFFX's simply gigantic fillrate advantage, and the lack of ATI optimizations, it pretty much completely discounts Chalnoth's entire point, imo.

I also feel that there is every indication that Nvidia's color compression works very much the same as ATI's. There is simply no evidence at all to support any other position, especially since Nvidia has publicly stated that their color compression does NOT work full time, but only during FSAA. Just like the R300.

At any rate, this thread bears no resemblance at all to the original post...

[personal thought]
I saw a couple of posts about how I and two other people have totally ruined the board... how do you figure? In this thread alone I have only about 4 posts out of roughly 400. It is true that I go after some of Chalnoth's statements, and a few others', but it is usually from a defensive posture: not looking to favor ATI, but simply to counter the blatant one-sided overstatements some people make. If I make one post and then 50 people follow with comments of their own, it's not like I had the biggest amount of input, is it? I am certainly not spamming the threads with ATI propaganda. And yes, I have been wrong a few times... Sorry, Rev...
[/end]
 
DaveBaumann said:
Rev - was this done with Trilinear or Bilinear?
Trilinear. The following are the relevant Tenebrae cvars (the italicised ones are those that control the use of stencil; the values shown are from the stencil-enabled tests):

gl_picmip 0
gl_playermip 0
gl_texturemode GL_LINEAR_MIPMAP_LINEAR
gl_finish 0
gl_flashblend 0
gl_polyblend 1
gl_triplebuffer 1
r_shadows 1
r_wateralpha 0.5
sh_entityshadows 1
sh_infinitevolumes 0
sh_visiblevolumes 0
sh_worldshadows 1
gl_watershader 1
sh_playershadow 1
sh_noscissor 1
mir_detail 0
mir_forcewater 0
sh_glares 0
 
If the Z-buffer optimisations cause problems then it's a faulty implementation. A card with HyperZ or equivalent technology should generate exactly the same picture as a card without these optimisations; otherwise it's non-conformant and faulty.

... or it's an optimization that trades off IQ for performance. :) 3dfx cleverly named this "depth precision", which basically let you adjust the Z-buffer precision/performance trade-off with a slider: the faster you set it, the more Z-buffer errors you got.

But I do consider such a system "faulty" if it breaks depth precision by default and that behaviour isn't sold or adjustable as some sort of customizable "feature"...

Now, I don't play a whole lot of games, but I don't see why 16-bit Z-buffers would cause any problems, unless you have a huge world, that is, in which case you shouldn't be using a 16-bit Z-buffer in the first place.
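For a rough sense of why a huge world is what runs a 16-bit Z-buffer out of precision, here is a back-of-the-envelope sketch in C; the near/far values are made up for illustration, not taken from any particular game:

/* With a standard perspective projection, window depth is
     d(z) = ((f+n)/(f-n) - 2*f*n/((f-n)*z) + 1) / 2
   so one least-significant depth step corresponds to an eye-space
   distance of roughly z*z*(f-n)/(f*n*2^bits). */
#include <math.h>
#include <stdio.h>

static double depth_step(double n, double f, double z, int bits)
{
    return z * z * (f - n) / (f * n * pow(2.0, bits));
}

int main(void)
{
    /* Made-up near/far for a "huge world": near = 1, far = 10000. */
    double n = 1.0, f = 10000.0;

    printf("16-bit step at the far plane: ~%.0f world units\n",
           depth_step(n, f, f, 16));
    printf("24-bit step at the far plane: ~%.1f world units\n",
           depth_step(n, f, f, 24));
    return 0;
}

With a 10000:1 far/near ratio, the last 16-bit depth step near the far plane spans on the order of 1500 world units, while 24 bits shrink that by a factor of 256, which is why large depth ranges (or a carelessly small near plane) are where 16 bits run out first.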

Try just about ANY game on the 9700 Pro with a 16-bit Z-buffer (i.e. a 16-bit framebuffer). The Z-buffer problems are very visible. A good example is Half-Life: set 16-bit color and start the single-player game, and you can see through most walls in the game. I'd say about 60% of my software library, when set to 16-bit, exhibits very obvious Z-buffer failures. These don't occur on the 8500 or GF4.

It's really a non-issue for me, though, as 32-bit is almost entirely "free" on the card, and for the handful of games that only support 16-bit, ATI has been kludging the drivers to fix those games and their Z issues (DAoC and zig-zagged shorelines, etc.).
 
Why not just stick to the DX specification or the OGL specification?

Weren't DX and OGL created to alleviate all this programming-for-a-specific-card crap?

We do not need yet another standard to adhere to now, do we?

Didn't think so.
 
Darn, Dave... I'm looking for that link. It was in one of the later tech previews, stating that Nvidia had *clarified* to them that the color compression was not used except with FSAA...

I'll get back to you... :?
 
I personally don't recall ever seeing any article where nVidia said color compression was not enabled "full time."

I do believe I recall at least one article or interview where nVidia themselves explained/admitted that it was only of significant advantage when AA was enabled.
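For what it's worth, here is a toy sketch in C of why lossless framebuffer color compression mostly pays off once multisampling is enabled; it illustrates only the general "all samples equal" idea and is not a description of ATI's or NVIDIA's actual hardware format. Away from triangle edges, every sample in a multisampled pixel carries the same shaded color, so storing that color once collapses most of the extra bandwidth, whereas without AA there is only one sample per pixel and nothing to collapse.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SAMPLES 4                       /* 4x multisampling */

typedef struct {
    uint32_t sample[SAMPLES];           /* per-sample RGBA8 colors */
} msaa_pixel;

/* Returns true (and the single stored color) when every sample in the
   pixel matches, the common interior-pixel case; edge pixels return
   false and keep all samples uncompressed. */
static bool compress_pixel(const msaa_pixel *p, uint32_t *stored)
{
    for (int i = 1; i < SAMPLES; ++i)
        if (p->sample[i] != p->sample[0])
            return false;
    *stored = p->sample[0];
    return true;
}

int main(void)
{
    msaa_pixel interior = {{0xff336699, 0xff336699, 0xff336699, 0xff336699}};
    msaa_pixel edge     = {{0xff336699, 0xff336699, 0xff000000, 0xff000000}};
    uint32_t c;

    printf("interior pixel compresses: %d\n", compress_pixel(&interior, &c));
    printf("edge pixel compresses:     %d\n", compress_pixel(&edge, &c));
    return 0;
}

Edge pixels fall back to storing all the samples, which is why the savings are statistical rather than guaranteed.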
 