Tomb Raider AOD with Cg

Reverend said:
I am at a loss as to why users need to download and install the Cg runtime package to enable the use of the Cg compiler (which is bundled with the game, although it is an outdated one). This shouldn't be the way it works. It looks like NVIDIA doesn't want this to work as intended for a reason.
Whoops, a combination of my misunderstanding and Core Design's mistake.

The game bundles the Cg compiler. Two DLLs are supplied - cg.dll and cgD3D8.dll... cgD3D8.dll was bundled by mistake; it should've been cgD3D9.dll. The next patch should fix this. No need to download/install NVIDIA's compiler/runtime package (after the next patch, of course).
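
(For anyone wondering why the D3D9 DLL matters: a Direct3D 9 game drives Cg through the cgD3D9 interface layer, roughly like the sketch below. This is only an illustrative outline, not Core Design's code - the shader file name and entry point are made up.)

Code:
// Minimal Cg-with-Direct3D9 setup. These calls live in cgD3D9.dll, which is
// why shipping cgD3D8.dll with a D3D9 game breaks shader creation.
// "water.cg" and "main" are placeholder names, not the game's real assets.
#include <d3d9.h>
#include <Cg/cg.h>
#include <Cg/cgD3D9.h>

void SetupCgPixelShader(IDirect3DDevice9* device)
{
    CGcontext ctx = cgCreateContext();
    cgD3D9SetDevice(device);                        // hand the D3D9 device to the Cg runtime

    // Ask the runtime for the best pixel profile the device supports (e.g. ps_2_0).
    CGprofile profile = cgD3D9GetLatestPixelProfile();

    CGprogram ps = cgCreateProgramFromFile(ctx, CG_SOURCE, "water.cg",
                                           profile, "main",
                                           cgD3D9GetOptimalOptions(profile));
    cgD3D9LoadProgram(ps, CG_FALSE, 0);             // creates the IDirect3DPixelShader9
    cgD3D9BindProgram(ps);                          // SetPixelShader() under the hood
}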
 
cho said:
yes, they are almost the same, but... :D

RADEON 9800 PRO 256MB:

[attached screenshot: nocg_9800p.jpg]

I don't know if it's the lack of 256 MB on my board, but I just tried the game with my 9800 Pro 128 MB and there's no chance in hell I ever reached 85 fps. I checked with Fraps and it was running at 35-55 fps; it never went above 60 fps.

Running the PS2.0 settings, 1024x768, no AA/AF, vsync off. Catalyst 3.6, nForce2, Athlon XP 2600+, 512 MB PC2700 etc.
 
Ostsol said:
Didn't someone say that DX HLSL will soon have the capability to prioritize lowering register usage for NVidia cards? If that happens, then the only purpose Cg will serve is to provide functionality to explicitly make use of the GeforceFX's multiple precisions -- specifically FX12.

Presently, with the current DX9 HLSL, FX cards are FASTER running that than Cg... as shown by the new Tenebrae mod author AND now this example.

I might add that Russ was wrong, but he is too stubborn to admit it, and that is a fact :D
 
Wrong about what, exactly?

That Cg is some sort of conspiracy? Well, no, I haven't seen any PROOF of that.

That Cg, _the language_, is designed to hurt competitors? Nope, no proof of that either.

That Cg, _the idea_, is designed to hurt competitors? Nope, no proof of that either.

Like I said, you can come up with a whole bunch of examples that show how by using the Cg system, competitors are disadvantaged, but it won't prove anything. They're disadvantaged by their own lack of work, not because of the design, the language, or the idea. Until somebody comes up with their own backend that is tuned for their own piece of hardware and shows that they're being screwed, all the examples in the world won't show what you're asserting it does.

But it's obvious that'll never happen, so it's all a complete circle of mental masturbation.
 
Like I said, you can come up with a whole bunch of examples that show how by using the Cg system, competitors are disadvantaged, but it won't prove anything. They're disadvantaged by their own lack of work, not because of the design, the language, or the idea. Until somebody comes up with their own backend that is tuned for their own piece of hardware and shows that they're being screwed, all the examples in the world won't show what you're asserting it does.

From the example above, Cg does not benefit the GFFX card. Does it not have its own backend? What makes you think an ATI backend would help ATI cards if Cg doesn't even help the GFFX with its own backend?
 
Why don't you just say "I hate NVIDIA", rather than coming up with these examples that don't prove what you're trying to assert?

Yes, the compiler seems to stink.

No, that doesn't show what you're asserting it does.
 
RussSchultz said:
Why don't you just say "I hate NVIDIA", rather than coming up with these examples that don't prove what you're trying to assert?

Yes, the compiler seems to stink.

No, that doesn't show what you're asserting it does.


Why don't you just say "I like to argue" rather than coming up with these lame ass excuses ??



Ok, put up or shut up. I'm tired of hearing the incessant bleating of "Cg is optimized for NVIDIA hardware" without any proof other than little smiley faces with eyes that roll upward.

Let's hear some good TECHNICAL arguments as to how Cg is somehow only good for NVIDIA hardware, and is a detriment to others. <-- Proven with evidence

Moderators, please use a heavy hand in this thread and immediately delete any posts that are off topic. I don't want this thread turned into NV30 vs. R300, NVIDIA vs. ATI, my penis vs. yours. I want to discuss the merits or de-merits of Cg as it relates to the field as a whole.

So, given that: concisely outline how Cg favors NVIDIA products while putting other products at a disadvantage. <-- Proven with evidence

Now let's stop playing this game and just say "hey, I was wrong" - it isn't that hard :!:
 
Never let logic get in the way...

You haven't shown through any technical argument or empirical evidence that Cg (the language, or the idea behind it) is somehow only good for NVIDIA hardware, and is a detriment to others.

You also haven't outlined how Cg (the language, or the idea behind it) favors NVIDIA products while putting others at a disadvantage.

These examples don't do that either.

All these examples have shown that some part of the Cg compiler isn't very good, and/or that the standard profile isn't optimal for the R3xx core, both of which have nothing to do with the language, or the idea behind it, favoring NVIDIA and handicapping competitors.

It does not offer any proof of your assertion, nor does it somehow make me wrong.
 
Why don't I just step in here and end this.

1) There's nothing "theoretically" that would make Cg a bad thing. In a non-competitive, altruistic environment, Cg could have even been "the best thing for everyone."

2) Practically speaking, in a competitive landscape, including one with 2 other "standard" HLSL development frameworks, Cg is just a bad, stinking pile of dog poo of an idea. To make matters worse, the state of the compiler suggests that the bad idea has been compounded with crap execution.

Everyone happy?
 
Why don't you just say "I hate NVIDIA", rather than coming up with these examples that don't prove what you're trying to assert?

People in glass houses shouldn't throw stones.
 
AnteP said:
I don't know if it's the lack of 256 MB on my board, but I just tried the game with my 9800 Pro 128 MB and there's no chance in hell I ever reached 85 fps. I checked with Fraps and it was running at 35-55 fps; it never went above 60 fps.

Running the PS2.0 settings, 1024x768, no AA/AF, vsync off. Catalyst 3.6, nForce2, Athlon XP 2600+, 512 MB PC2700 etc.

Same here. Frame rate never goes above 60. On my system Cg only reduces frame rates by 3-5 FPS on average. Running 35-50 FPS average, 60 FPS indoors at 1024x768x32 with 4x AA and 8x AF, vsync on (same with off). Something's fishy...

System:
P4 2.53@3.02 GHz
Asus P4P800
512 MB DDR 400 Dual Channel
Radeon 9800 Pro 128 MB, Omega 3.6a
 
DaWizeGuy said:
AnteP said:
I don't know if it's the lack of 256 MB on my board, but I just tried the game with my 9800 Pro 128 MB and there's no chance in hell I ever reached 85 fps. I checked with Fraps and it was running at 35-55 fps; it never went above 60 fps.

Running the PS2.0 settings, 1024x768, no AA/AF, vsync off. Catalyst 3.6, nForce2, Athlon XP 2600+, 512 MB PC2700 etc.

Same here. Frame rate never goes above 60. On my system Cg only reduces frame rates by 3-5 FPS on average. Running 35-50 FPS average, 60 FPS indoors at 1024x768x32 with 4x AA and 8x AF, vsync on (same with off). Something's fishy...

System:
P4 2.53@3.02 GHz
Asus P4P800
512 MB DDR 400 Dual Channel
Radeon 9800 Pro 128 MB, Omega 3.6a
What's really fishy is people making up conspiracies. Bored, I guess?

P.S. Maybe people should wait until someone with a 256 MB board posts results before jumping to conclusions?
P.P.S. Do you even know for certain that you're running the same settings as the reviewer?

Edited to fix typo.
 
OpenGL guy said:
DaWizeGuy said:
AnteP said:
I don't know if it's the lack of 256 MB on my board, but I just tried the game with my 9800 Pro 128 MB and there's no chance in hell I ever reached 85 fps. I checked with Fraps and it was running at 35-55 fps; it never went above 60 fps.

Running the PS2.0 settings, 1024x768, no AA/AF, vsync off. Catalyst 3.6, nForce2, Athlon XP 2600+, 512 MB PC2700 etc.

Same here. Frame rate never goes above 60. On my system Cg only reduces frame rates by 3-5 FPS on average. Running 35-50 FPS average, 60 FPS indoors at 1024x768x32 with 4x AA and 8x AF, vsync on (same with off). Something's fishy...

System:
P4 2.53@3.02 GHz
Asus P4P800
512 MB DDR 400 Dual Channel
Radeon 9800 Pro 128 MB, Omega 3.6a
What's really fishy is people making up conspiracies. Bored, I guess?

P.S. Maybe people should wait until someone with a 256 MB board posts results before jumping to conclusions?
P.P.S. Do you even know for certain that you're running the same settings as the reviewer?

Edited to fix typo.

OK, I found out why the FPS were limited to 60. It is because "frame rate compensation" is enabled by default. When you turn it off, frame rates go beyond 60. My apologies to the original poster for any innuendo :oops: I reckon it is an option similar to the frame rate limiter in Vice City, there to eliminate or alleviate visual artifacts. It is still rather odd that the Radeon 9800 Pro is over twice as fast as the 5900 Ultra. I guess nVidia's pixel shader 2.0 performance is even weaker than I thought.
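
(For reference, a cap like that is usually nothing more exotic than stalling until the next 1/60-second slot before presenting each frame. A generic sketch of the idea, not the game's actual "frame rate compensation" code:)

Code:
// Generic 60 fps frame limiter of the sort games use to hide timing-related
// artifacts; turning it off lets the renderer run as fast as it can.
#include <chrono>
#include <thread>

void RunGameLoop(bool limiterEnabled)
{
    using clock = std::chrono::steady_clock;
    const auto frameBudget = std::chrono::microseconds(16667);   // ~1/60 s
    auto nextDeadline = clock::now() + frameBudget;

    for (;;)
    {
        // UpdateAndRenderFrame();  // game logic + draw + Present() would go here

        if (limiterEnabled)
        {
            std::this_thread::sleep_until(nextDeadline);   // caps output at ~60 fps
            nextDeadline += frameBudget;
        }
        else
        {
            nextDeadline = clock::now() + frameBudget;     // no sleep: uncapped
        }
    }
}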
 
RussSchultz said:
Never let logic get in the way...

You haven't shown through any technical argument or empirical evidence that Cg (the language, or the idea behind it) is somehow only good for NVIDIA hardware, and is a detriment to others.

Where is PS1.4 support?
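
(For what it's worth, you can see that gap straight from the Cg core runtime. A quick sketch - the exact profile list depends on the toolkit version, but the toolkits of this era have no ps_1_4 target:)

Code:
// Look up D3D pixel profiles by name in the Cg runtime. "ps_1_4" is expected
// to come back as CG_PROFILE_UNKNOWN, since Cg never shipped a ps_1_4 backend.
#include <Cg/cg.h>
#include <cstdio>

int main()
{
    const char* names[] = { "ps_1_1", "ps_1_3", "ps_1_4", "ps_2_0" };
    for (int i = 0; i < 4; ++i)
    {
        CGprofile p = cgGetProfile(names[i]);   // CG_PROFILE_UNKNOWN if not a known profile
        std::printf("%s -> %s\n", names[i],
                    p == CG_PROFILE_UNKNOWN ? "not supported by Cg"
                                            : cgGetProfileString(p));
    }
    return 0;
}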
 
Joe DeFuria said:
Why don't I just step in here and end this.

1) There's nothing "theoretically" that would make Cg a bad thing. In a non-competitive, altruistic environment, Cg could have even been "the best thing for everyone."

2) Practically speaking, in a competitive landscape, including one with 2 other "standard" HLSL development frameworks, Cg is just a bad, stinking pile of dog poo of an idea. To make matters worse, the state of the compiler suggests that the bad idea has been compounded with crap execution.

Everyone happy?
Joe, what do you know about the "state of the compiler" that leads you to think what the compiler suggests to you?
 
Reverend said:
Joe, what do you know about the "state of the compiler" that leads you to think what the compiler suggests to you?

1) Doesn't support PS 1.4
2) Based on the evidence in this thread... it doesn't improve FX performance over the HLSL compiler in some cases, while severely crippling performance of other architectures.
3) General "bugginess" and "twitchiness" of the compiler I've read about (on Cg web boards) since Cg's release, especially compared to HLSL
4) Russ said "it seems to stink". ;)
 
I'd have more faith in the compiler iff:
1) It actually improved the FX benchmarks over HLSL (like it's supposed to)
2) It didn't degrade the R300 performance to such a degree (like it's not supposed to)

From those two facts, it APPEARS that the compiler isn't doing a terribly good job of generating tight code.
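
(One rough way to check that, assuming you have a Cg source shader to hand - the file name and entry point below are placeholders - is to dump what the compiler actually emits for ps_2_0 and compare the instruction count against fxc's output for the equivalent HLSL:)

Code:
// Dump the ps_2_0 assembly Cg generates for a shader so its length can be
// compared with what the DX9 HLSL compiler produces for the same effect.
// "water.cg" and "main" are placeholder names, not files from the game.
#include <Cg/cg.h>
#include <cstdio>

int main()
{
    CGcontext ctx = cgCreateContext();
    CGprogram prog = cgCreateProgramFromFile(ctx, CG_SOURCE, "water.cg",
                                             CG_PROFILE_PS_2_0, "main", 0);
    if (!prog)
    {
        std::printf("compile failed: %s\n", cgGetErrorString(cgGetError()));
        return 1;
    }
    // CG_COMPILED_PROGRAM returns the generated ps_2_0 assembly as text.
    std::printf("%s\n", cgGetProgramString(prog, CG_COMPILED_PROGRAM));
    cgDestroyContext(ctx);
    return 0;
}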

The PS1.4 thing I'm not sure is a compiler-being-bleh issue so much as a lack of impetus on the part of NVIDIA (no desire to support it in the backend, because none of their parts use it very well).
 
High-level shading languages in general are not perceived by the developers using them as providing any performance advantage.

High-level shading languages are generally perceived as a time-saving "feature" that lets developers flex their creative skills without spending as much time as they would writing assembly.
 