Tomb Raider AOD with Cg

cho


<pic: nocg.jpg>

<pic: cg1.1.jpg>


You need to install "CgSetup.exe" (downloaded from the NVIDIA site) to make the "Cg compiler" option active.
 
Ok, I have a question. Were TR:AoD's shaders hand-written, or was the DX9 HLSL compiler used? :?:

Performance difference = :oops: Good thing NVIDIA is letting up on pushing Cg. If we had lazy developers who only used Cg it would kill everyone else's performance.
 
Cho - is there any chance you could put up a couple of screenshots or details of the settings you are using, as I'm getting nowhere near the same as you with my 9800 Pro.

Thanks

Mark
 
cho said:
yes, they are almost the same, but... :D

RADEON 9800 PRO 256MB:

<pic>

<pic>
Ah... A clear indication that Cg prioritizes lower register usage over lower instruction counts. Heh... It doesn't even appear to have any positive effect for the GeforceFX in this case, but it screws over other video cards.
 
Cool, they wrote a shader to emulate memory errors on a manhole cover. :D

MuFu.
 
This clinches it... Cg is evil. :devilish:

Of course, some of us already knew that... ;)



edit: I've thought of an alternate explanation in which evil is not required... maybe Cg simply doesn't work. nVidia may be backing away from it because it's not worth the effort to try to fix it. :?:
 
This is exactly the type of example Russ needs to look at, where people who questioned Cg's use were told they were pushing conspiracy theories. :LOL:
 
Doomtrooper said:
This is exactly the type of example Russ needs to look at, where people who questioned Cg's use were told they were pushing conspiracy theories. :LOL:
You never quit, do you?

I mean, this is OBVIOUS PROOF that NVIDIA was out to screw competitors.

It couldn't be that the Cg backend optimizes for something different than what the R300 finds optimal.

Nope, proof that NVIDIA is evil.

It also couldn't be that the Cg compiler is somewhat sub-optimal in general?

Nope, it's proof that NVIDIA is evil.

:rolleyes:
 
Wait wait!
You forgot to argue that they were using FRAPS. According to Nvidia, this application is defective, right? :LOL:
 
RussSchultz said:
It also couldn't be that the Cg compiler is somewhat sub-optimal in general?
So what's the benefit of using Cg? If the compiler generates inferior code, then there seems to be little reason to select it over HLSL.

P.S. I'd call the results on R300 a bit more than "somewhat" sub-optimal.
 
RussSchultz said:
You never quit, do you?

I mean, this is OBVIOUS PROOF that NVIDIA was out to screw competitors.

It couldn't be that the Cg backend optimizes for something different than what the R300 finds optimal.

Nope, proof that NVIDIA is evil.

It also couldn't be that the Cg compiler is somewhat sub-optimal in general?

Nope, it's proof that NVIDIA is evil.

Russ, wasn't it you who started this thread:

http://www.beyond3d.com/forum/viewtopic.php?t=1764&highlight=

And I'll quote from that thread:

RussSchultz said:
Ok, put up or shut up. I'm tired of hearing the incessant bleating of "Cg is optimized for NVIDIA hardware" without any proof other than little smiley faces with eyes that roll upward.

Let's hear some good TECHNICAL arguments as to how Cg is somehow only good for NVIDIA hardware, and a detriment to others.

Moderators, please use a heavy hand in this thread and immediately delete any posts that are off topic. I don't want this thread turned into NV30 vs. R300, NVIDIA vs. ATI, my penis vs. yours. I want to discuss the merits or demerits of Cg as it relates to the field as a whole.

So, given that: concisely outline how Cg favors NVIDIA products while putting other products at a disadvantage.


Funny how you have no problem using "those little smiley faces with eyes that roll upward" that you so detest when someone actually shows you an example of Cg being a detriment to a card other than Nvidia's.

:rolleyes: :rolleyes: :rolleyes: :rolleyes: :rolleyes: :rolleyes: :rolleyes: :rolleyes: :rolleyes: :rolleyes:
 
Sigh. Are we going to have the same discussion every time?

1) No, this example, or the last one, or the next one, does not show the _language_ is somehow geared toward screwing anybody.
2) No, this example, or the last one, or the next one, does not show that the _idea_ of pluggable backends is geared toward screwing anybody.
3) No, this example, or the last one, or the next one, is not a technical argument as to how Cg (the language or the idea) is only good for NVIDIA and a detriment toward others.

It does show that the implementation (i.e. the backend) is geared toward NVIDIA hardware. How? We can presume it optimizes for the fewest registers required, rather than the fewest assembly instructions.

If ATI were to develop their own backend (though they won't), this problem wouldn't exist: their backend would optimize for whatever is best for their hardware. But they won't, so it does.
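
To make the register-scheduling point concrete, here is a minimal, made-up Cg fragment shader (not taken from TR:AoD; the sampler names and math are purely illustrative), with a comment sketching how the two scheduling goals pull in different directions:

Code:
// Hypothetical Cg fragment shader -- purely illustrative, not game code.
float4 main(float2 uv       : TEXCOORD0,
            float3 lightDir : TEXCOORD1,
            float3 normal   : TEXCOORD2,
            uniform sampler2D diffuseMap,
            uniform sampler2D detailMap) : COLOR
{
    // Two independent texture reads plus some arithmetic.
    float4 base   = tex2D(diffuseMap, uv);
    float4 detail = tex2D(detailMap, uv * 4.0);
    float  ndotl  = saturate(dot(normalize(normal), normalize(lightDir)));

    return base * detail * ndotl;
}

// A backend tuned for NV3x-style hardware would tend to schedule this so that
// as few temporaries as possible are live at once (fetch one texture, fold it
// into a running result, then fetch the next), even at the cost of extra
// instructions, because NV3x slows down as register pressure rises.  An
// R300-oriented schedule would happily keep base, detail and ndotl live at the
// same time and simply aim for the shortest instruction sequence, since R300
// pays no penalty for using more registers.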

So, please, just let it rest. It's apparent that Cg is dead on the vine, but that doesn't mean it was:
a) A bad technical idea
b) An idea born to control the market by putting others at a disadvantage
c) Inherently evil
 
nooneyouknow said:
CG was used for Nvidia boards and HLSL used for ATI boards.
If you're speaking from the developer's point of view, this is wrong.

D3DX is the default. Cg is an option for ALL boards. Core Design said (this is from the benchmarking readme I wrote for this game):

Core Design said:
The Cg compiler, as supplied with the game, is not as good as the D3DX one. But, when nVidia release a new compiler you can just drop the DLLs into the \bin\ directory and you get an instant upgrade. The D3DX compiler can only be upgraded with a new version of DirectX and a game patch.

I am at a loss as to why users need to download and install the Cg runtime package to enable the use of the Cg compiler (which is bundled with the game, although it is an outdated one). This shouldn't be the way it works. It looks like NVIDIA doesn't want this to work as intended for a reason.
 
So, please, just let it rest. It's apparent that Cg is dead on the vine, but that doesn't mean it was:
a) A bad technical idea
b) An idea born to control the market by putting others at a disadvantage
c) Inherently evil

As you like to say: Prove to me that it wasn't any of the things you've said above. You've asserted lots of things here, now prove them. ;)

The only thing I see from this particular example is that it does nothing but lower ATI's FPS. It doesn't change the FPS on the GFFX at all. I would think that the GFFX card would have at least seen some benefit from using Cg, yet that's not the case. So what we have here is that using Cg does nothing to help the GFFX card yet dramatically lowers the competition's cards' FPS. *Cues the X-Files theme music* :p
 
jjayb said:
As you like to say: Prove to me that it wasn't any of the things you've said above. You've asserted lots of things here, now prove them. ;)

The only thing I see from this particular example is that it does nothing but lower ATI's FPS. It doesn't change the FPS on the GFFX at all. I would think that the GFFX card would have at least seen some benefit from using Cg, yet that's not the case. So what we have here is that using Cg does nothing to help the GFFX card yet dramatically lowers the competition's cards' FPS. *Cues the X-Files theme music* :p
Well, as Russ said: the problem is simply that there is no profile for ATI cards. Of course he definitely knows that there's no way in hell that ATI will provide any support for Cg unless it becomes a standard accepted by a bunch of other vendors. Given that ATI and NVidia are the only ones who have DX9 cards out, that's unlikely. The only other reason ATI might support Cg is if either Microsoft or the ARB officially endorses it as a standard HLSL. Given that both APIs now have their own, though... again, support isn't ever likely.

Didn't someone say that DX HLSL will soon have the capability to prioritize lowering register usage for NVidia cards? If that happens, then the only purpose Cg will serve is to provide functionality to explicitly make use of the GeforceFX's multiple precisions -- specifically FX12.
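
For reference, here is a minimal sketch of what that explicit precision control looks like in Cg (a hypothetical shader, not from any game; the sampler and parameter names are made up):

Code:
// Cg's fixed / half / float types express precision explicitly. On NV3x
// fragment profiles (e.g. fp30), fixed corresponds to the FX12 fixed-point
// format, half to FP16 and float to FP32; profiles without those formats
// simply promote the types to whatever the hardware supports.
half4 main(float2 uv : TEXCOORD0,
           uniform sampler2D baseMap,
           uniform fixed4    tintColor) : COLOR
{
    fixed4 base = tex2D(baseMap, uv);  // low precision is enough for colour math here
    half4  lit  = base * tintColor;    // FP16 for the final result
    return lit;
}

DX9 HLSL only distinguishes full and partial precision (float vs. half), so FX12 is something only Cg's NV3x-specific profiles can express directly.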
 