What was that about Cg *Not* favoring Nvidia Hardware?

Reverend said:
You're mentioning the ifs and the ifs... until we know (and if NVIDIA will let us know) the full details of the GeForceFX in this regard, you're speculating more than you really should be. Furthermore, tell us what the effects (and differences) would be if the GeForceFX can do what you said and the R300 can't. No, really!

You know nothing about the GeForceFX's performance in such situations, and you know nothing about DOOM3's performance on either piece of hardware.

Right. All that we know for sure is that ATI has a problem here, and it is a significant one.

But, for a few reasons, I would be incredibly surprised if the GeForce FX's memory bandwidth savings techniques failed to take into account JC's shadow volume algorithm. Why? Well, his algorithm was posted at nVidia's developer site quite some time ago. That and the GeForce4 doesn't appear to have problems with its z-buffer bandwidth savings (though it does have less-sophisticated techniques) in my games that use stencil shadows.
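
For reference, here's roughly what the depth-fail part of that algorithm looks like. This is just my own OpenGL sketch of the well-known technique, not JC's actual code, and drawShadowVolume() is a placeholder for submitting the volume geometry:

Code:
// Stencil shadow volumes, depth-fail variant ("Carmack's reverse").
// Color and depth writes off; the depth test itself stays on.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
glDepthFunc(GL_LESS);

glEnable(GL_CULL_FACE);
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 0, 0xFFFFFFFF);

// Pass 1: draw the volume's back faces, incrementing stencil wherever
// the depth test FAILS. This z-fail stencil op is the whole issue here.
glCullFace(GL_FRONT);                    // cull front faces -> back faces drawn
glStencilOp(GL_KEEP, GL_INCR, GL_KEEP);  // (stencil fail, depth fail, depth pass)
drawShadowVolume();

// Pass 2: front faces, decrementing on depth fail.
glCullFace(GL_BACK);                     // cull back faces -> front faces drawn
glStencilOp(GL_KEEP, GL_DECR, GL_KEEP);
drawShadowVolume();

// Afterwards, pixels with stencil != 0 are in shadow for this light.

The point is that all the stencil counting happens on pixels that fail the depth test, so the hardware has to resolve those failures exactly, per pixel; a coarse hierarchical reject of a whole block isn't good enough there.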
 
Chalnoth said:
Right. All that we know for sure is that ATI has a problem here, and it is a significant one.

In your head it may be of massive significance, but we are only talking about one element of HyperZ, being the hier-Z culling. And, as Rev points out, there's a total absence of knowledge of how the application being talked about will handle this scenario, or how other hardware will.

Chalnoth said:
That and the GeForce4 doesn't appear to have problems with its z-buffer bandwidth savings (though it does have less-sophisticated techniques) in my games that use stencil shadows.

We don't know that GeForce 4 has anything similar to Hier-Z. All we do know is that it has early Z reject.
 
DaveBaumann said:
In your head it may be of massive significance, but we are only talking about one element of HyperZ, being the hier-Z culling. And, as Rev points out, there's a total absence of knowledge of how the application being talked about will handle this scenario, or how other hardware will.

It is significant for anybody that wants to play DOOM3. But you're right, once we have fuller knowledge of the GeForce FX, there will be a better idea as to just how significant the problem is.
 
Chalnoth said:
But, for a few reasons, I would be incredibly surprised if the GeForce FX's memory bandwidth savings techniques failed to take into account JC's shadow volume algorithm.
This is not the first time you've mentioned "GeForceFX's memory bandwidth savings techniques".

Pray enlighten me on such techniques. My guess is that there really isn't much (of an advancement over the GF4's LMA), based on NV's need to go 0.13µm to attain high clockspeeds, but I could be mistaken and you may be "in the know" about this, so I'd appreciate any clarification or inside knowledge that you can impart.
 
Chalnoth said:
It is significant for anybody that wants to play DOOM3.

So, what sort of performance will DoomIII see:

a.) Without any HyperZIII elements
b.) With HyperZ bar HierZ
c.) Full HyperZ

Until you can quantify them I don't see how you can place a measure of significance on it [assuming there is an issue with this application, and assuming that other hardware doesn't display similar tendencies].
 
For everybody else, this is what we're talking about from the paper:

By far the most important part of HYPER Z is the hierarchical depth testing. This technique allows culling of large pixel blocks very efficiently based on the hierarchical view of the depth buffer that is stored on the chip. Unlike previous chips, the RADEON 9500/9700 performs multiple hierarchical depth tests at various places in the pipeline, making this technique even more effective.

There are a couple of rules that have to be followed to reap the benefits of the Hierarchical Z.
First, and most important, do not change the sense of the depth comparison function in the course of rendering a frame. That is, if using the D3DCMP_LESS depth function, do not change it to D3DCMP_GREATER for some part of a frame.
Second, the D3DCMP_EQUAL and D3DCMP_NOTEQUAL depth comparison functions are not very compatible with Hierarchical Z operation, so avoid them if possible or replace them with other depth comparisons such as D3DCMP_LESSEQUAL.
In addition, a few other things interfere with hierarchical culling: outputting depth values from pixel shaders, and using stencil fail and stencil depth fail operations.
Last but not least, for the highest Hierarchical Z efficiency, place the near and far clipping planes to enclose the scene geometry as tightly as possible, and of course render everything front to back.
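
To make those rules concrete, here's a quick Direct3D-style illustration. It's my own sketch (dev standing in for an IDirect3DDevice8 pointer), not code from the paper:

Code:
// Rule 1: pick one depth-comparison direction and keep it for the whole frame.
dev->SetRenderState(D3DRS_ZFUNC, D3DCMP_LESSEQUAL);
// (i.e. never flip over to D3DCMP_GREATER partway through the frame)

// Rule 2: when re-rendering the same geometry in a later pass,
// D3DCMP_LESSEQUAL accepts the same pixels as D3DCMP_EQUAL would,
// but leaves the hierarchical culling intact.

// Rule 3: the case this thread is arguing about. A stencil operation
// keyed to a depth-test failure means even rejected pixels have side
// effects, so (per the paper) hierarchical culling is off:
dev->SetRenderState(D3DRS_STENCILENABLE, TRUE);
dev->SetRenderState(D3DRS_STENCILZFAIL, D3DSTENCILOP_INCR);

And that last case is exactly what depth-fail shadow volumes depend on: the algorithm counts pixels that fail the depth test, which is precisely the work Hier-Z exists to throw away early.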
 
Why does EVERY freakin thread have to degenerate into this crud...

Same people spouting the same nonsense over and over. Chalnoth, is it so hard to THINK before you type? Or better yet, bite your tongue once in a while when you don't know what you're talking about. See what the thread topic is.

Better yet look at the top of the damn page once in awhile to refresh your memory as to what the thread is about.

Sorry for the slight rant, but I am seriously ready to start pulling my hair out (or somebody else's).
 
The contents of the thread have deviated so far from the subject. Can this thread be closed/locked already? Within the past 20 pages, there has been some valid content. Unfortunately it's buried between crap.

If you want to talk about potential limitations/issues with hier-Z on the ATI R300 or anything not remotely related to Nvidia's Cg, please start a new thread. You're likely to get more participation and have a more focused conversation.
 
Seems every thread with Nvidia or GeForceFX in the title gets locked; the least someone could do is find an ORIGINAL way to bash them.

Locking threads doesn't do much but cause the same crap to be continued in another existing open thread...

Create a bottom-feeders forum and throw threads like this in there; let the usual players chase their tails there instead of continually dragging these forums down.
 
Keep this up, Reverend, and either Doom or Hellbinder will make you an honorary ATI club member! ;)
 
Diespinnerz said:
Seems every thread with Nvidia or GeForceFX in the title gets locked; the least someone could do is find an ORIGINAL way to bash them.

Locking threads doesn't do much but cause the same crap to be continued in another existing open thread...

Create a bottom-feeders forum and throw threads like this in there; let the usual players chase their tails there instead of continually dragging these forums down.

Ok Mr. 5 posts ;)
 
I also think this thread derailed some time ago. The summary of this thread is that Cg will support PS 1.4.
 
Reverend said:
This is not the first time you've mentioned "GeForceFX's memory bandwidth savings techniques".

Pray enlighten me on such techniques. My guess is that there really isn't much (of an advancement over the GF4's LMA), based on NV's need to go 0.13µm to attain high clockspeeds, but I could be mistaken and you may be "in the know" about this, so I'd appreciate any clarification or inside knowledge that you can impart.

We know about the z-buffer compression (which is also in the GeForce3/4) and the color buffer compression. The early occlusion testing is also there.

Most likely, the GeForce FX has nothing more than the GeForce4 except for the color buffer compression, though it will have more efficient implementations of the GeForce4's memory bandwidth saving techniques. Anyway, I'm mostly being vague because nVidia has always been vague about what their savings techniques actually are.
 
DaveBaumann said:
So, what sort of performance will DoomIII see:

a.) Without any HyperZIII elements
b.) With HyperZ bar HierZ
c.) Full HyperZ

Until you can quantify them I don't see how you can place a measure of significance on it [assuming there is an issue with this application, and assuming that other hardware doesn't display similar tendencies].

I don't know. The only thing that I do know is that the only two games I currently have that use global shadowing with the stencil buffer, Neverwinter Nights and the Tenebrae mod for Quake, show relatively poor performance on the Radeon 9700 with respect to the GeForce4 (that is, its lead over the GeForce4 is smaller in these games than elsewhere... it still does perform better).

Given that I just dug up that the Radeon 9700 turns off some of its z-buffer bandwidth saving techniques when a stencil depth-fail operation is enabled, it seems highly likely that this performance deficiency will carry over to DOOM3.
 
Side note:

I don't get it. I expose a very real current problem with the Radeon 9700, uncover some evidence that it will likely also apply to DOOM3, and people get upset? It's a problem. Problems exist to be fixed. Hopefully ATI will fix it.

If you don't like the nVidia slant that I naturally tend to place in my comments, just ignore it. It doesn't make my observations any less accurate.
 
Cal, I'm only upset in that what you dug up should be in its own thread. That's all. The subject of this thread has outlived its usefulness.
 
"If you don't like the nVidia slant that I naturally tend to place in my comments, just ignore it. It doesn't make my observations any less accurate"


This has nothing to do with any slant, jeesh.

BTW, nothing could make your "observations" any less accurate. Heh. No, you obviously don't get it... sigh. The comments haven't had ANYTHING to do with the usual ATI vs. nVidia crap that goes around here, so why try to drag that into it?
 
Chalnoth said:
Side note:

If you don't like the nVidia slant that I naturally tend to place in my comments, just ignore it. It doesn't make my observations any less accurate.

How about you LEAVE IT OUT?
And if you don't think an obvious bias makes your observations less accurate, or at least affects the "conclusions" you draw from them, think again.


Chalnoth, you seem to delight in coming up with "problems" for ATI.
What makes you think this problem wouldn't exist in the GFFX, other than blatant "my favorite company will r0x0r" f@nboyism?
And who says it's a "bug"? Isn't it just a possible problem with Hier-Z implementations in general?
Your failure to consider anything other than the negative side for your company's enemies is what makes your posts less accurate. Your bias destroys any validity your posts have; no one can trust you, your info, or your conclusions. Instead of telling everyone else to ignore your bias, how about you do yourself a favor and drop it?
 