Using Cg for benchmarking

Would using Cg shaders be a reasonable option to benchmark shading hardware?

  • Yes, but don't use it too much, because it isn't very representative of future games.

    Votes: 0 0.0%
  • No, it is way too nVidia biased.

    Votes: 0 0.0%
  • No, there are better ways to benchmark shading capabilities (please specify)

    Votes: 0 0.0%

  • Total voters
    296
elchuppa said:
HLSL and Cg are really the same thing for all intents and purposes. The only difference is that one is marketed as the Microsoft DX9 shading language, and the other is nVidia's open shading language that can compile to OpenGL or DX9. I really don't see why people have a problem with Cg...
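
The "really the same thing" part is at least true at the source level: a trivial pixel shader like the one below (a made-up, untested sketch; the sampler and entry-point names are mine) should compile as-is with both Microsoft's fxc and NVIDIA's cgc.

    // Untested sketch: modulate a texture lookup by a constant tint.
    // The same source is accepted by DX9 HLSL and by Cg.
    sampler2D baseMap;

    float4 main(float2 uv : TEXCOORD0) : COLOR
    {
        float4 texel = tex2D(baseMap, uv);
        return texel * float4(1.0, 0.9, 0.8, 1.0);
    }
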
Where did you hear that Cg was an open shading language?
 
Doomtrooper said:
All the other IHVs are supporting HLSL and GLslang; that is what we consumers need to support as well.
PS 1.4 support is important to the hundreds of thousands of users of 8500/9000/9000M/9100/9200 cards, and now the integrated ATI chipset.

Nvidia isn't supporting HLSL? Erm, what is Cg again? I personally don't support glslang in its current form. Oh, and don't you have a problem with 3Dlabs designing it on their own (yes, yes, I know it's now up for vote by the ARB and is under a working group)?

Doomtrooper said:
Sorry, it is better and it should be used...not to mention being part of the DX 8.1 spec.

Well, it doesn't take much more effort to add PS 1.4 code anyway, and still use the Cg runtime (you don't even have to use that if you don't want to).

Doomtrooper said:
Cg is far from perfect.

Neither is the next closest solution.
 
AndrewM said:
Nvidia isn't supporting HLSL? Erm, what is Cg again? I personally don't support glslang in its current form. Oh, and don't you have a problem with 3Dlabs designing it on their own (yes, yes, I know it's now up for vote by the ARB and is under a working group)?

Cg is Nvidia's HLSL; no one else supports it...the entire industry is not backing it, or we'd have ATI writing a back end along with the other players.


I personally don't support glslang in its current form

That's nice, I'll be sure to keep an eye on your project so I can avoid it. ;)
 
Doomtrooper said:
Endorsing my opinion?? The graphs are there; try reading them...or do you always skip to the conclusion :D
So you think five graphs (or rather: four of the five graphs) are enough to form your opinion about HLSL compilers that are still in their infancy?

As for the topic, it's really hard to answer. It's a bit like asking: if a (shader) programmer has two ways of writing a piece of code, one known to run better on hardware A and the other known to run better on hardware B, which should he use?
The sanest answer, IMO, is both, if it doesn't result in too much extra work. But if it does, the programmer has to decide which outcome is more desirable and which performance drop is more acceptable.

And that applies to HLSL compilers, too.
 
The idea behind HLSL was to reduce work; so far Cg is not doing that...oh wait a minute, yes it is, for Nvidia cards. Optimizing for ATI cards would require more work, and since I only possess an nForce 2 board and don't have an Nvidia card powering my PC, what Cg is going to do for me is all I'm concerned about.

Ideally one HLSL/API should be enough; otherwise project lengths will grow as developers spend time on four different HLSLs.

IMO this discussion should not even be happening, and as I stated one year ago, the true reason Cg is here is not to better the PC game industry...in fact it is segregating it.
 
Doomtrooper said:
The idea behind HLSL was to reduce work; so far Cg is not doing that...oh wait a minute, yes it is, for Nvidia cards. Optimizing for ATI cards would require more work,
Until the advent of the PS_2_a compile target, you could've exchanged NVidia and ATI in this sentence and applied it to DX9 HLSL.
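
Roughly, the compile-target difference is just a matter of which profile you ask the compiler for; a sketch with the DX9 SDK's fxc (flag spellings from memory, so treat them as approximate):

    // Same HLSL source, two compile targets (illustrative invocations):
    //   fxc /T ps_2_0 /E main shader.hlsl    -> plain PS 2.0 output
    //   fxc /T ps_2_a /E main shader.hlsl    -> the newer profile tuned for NV3x
    // Only the instruction selection/scheduling of the output changes.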

and since I only possess an nForce 2 board and don't have an Nvidia card powering my PC, what Cg is going to do for me is all I'm concerned about.
Then why are you concerned about Cg not reducing work for developers?

Ideally one HLSL/API should be enough; otherwise project lengths will grow as developers spend time on four different HLSLs.
I disagree. It's good to have a choice, as there will never be a single solution that is optimal in every case. There is no "best" shading language or API, just like there is no "best" programming language.

IMO this discussion should not even be happening, and as I stated one year ago, the true reason Cg is here is not to better the PC game industry...in fact it is segregating it.
Of course you are the enlightened one able to see the one and only reason Cg is there, while all others are struck by blindness ;)
 
Nope, just an educated consumer :LOL:

I do appreciate your input, Xmas, always have.


Then why are you concerned about Cg not reducing work for developers?

It is not optimizing anything for my hardware, when I already know HLSL is, and is doing a better job of it.
I would really like to see an R300 running Cg vs. HLSL; let's take a look at the results...then once we see the difference, why would I be happy that a developer could have used the industry-standard HLSL and given me better performance?
 
The idea of using a vendor-controlled/biased shading language for benchmarking is laughable. Industry standards are set for a reason.
 
I have a question.
Say a game developer codes a cool "DX9 game!" with all the features and so on, and he codes it on an R3xx with HLSL and on an NV3x also with HLSL.
My question is: what benefits/performance etc. would the NV3x card have gained if he had coded it with Cg instead? And if he had coded it *just* with Cg on the NV3x and compiled it for the R3xx, how would that affect the ATI card?

I know there is a lot of wisdom on this board, and I may well be mistaken in saying this / not be getting the whole picture.

But why create two "scripts" doing the same thing, making it harder and more time-consuming for developers, when the idea is to make it as easy as possible to get new features out fast?
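
For reference, the way Cg handles this is one source and many target profiles; roughly like the following (file name and flags are from memory of the Cg 1.x toolkit, so treat it as a sketch, not gospel):

    // One Cg source, compiled per profile (illustrative invocations):
    //   cgc -profile fp30   -entry main water.cg   // NV3x-specific fragment profile
    //   cgc -profile arbfp1 -entry main water.cg   // generic ARB_fragment_program, runs on R3xx
    //   cgc -profile ps_2_0 -entry main water.cg   // DX9 PS 2.0 output
    // An R3xx can run the generic output, but nothing in the compiler is tuned
    // for it the way the fp30 path is tuned for NV3x, which is the complaint here.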
 
What would end the skepticism would be compiling shaders on a 9800 Pro with Cg and with HLSL; if the performance is the same, then 'hey, I'll eat my shoe'.

Since HLSL was outperforming Cg even on Nvidia hardware, I have a hard time believing Cg would perform better than standard HLSL on my present card.

I always get flamed on this subject, and have since the original thread about Cg was posted a year ago, but the first DX9 title that used Cg didn't even run on a 9700 Pro back in December, yet it ran on a GeForce 3.

Gunmetal Demo.
 
We buy a graphics card for gameplay.

If the game company says they have a benchmark that can be used to evaluate the graphics performance of the game, why not use it?

With the 7.90 driver, ATI cards run the Gunmetal benchmark fine, and ATI has not said this benchmark is unfair.

Cg and PS 1.4 are both used in real games; if you need absolute fairness, I think there are too many benchmarks that could not be used.
 
Coding in Cg and benching with FRAPS?...

Well, according to Nvidia, FRAPS is evil and should not be trusted as it does not report accurate performance :rolleyes:
 
cho said:
We buy a graphics card for gameplay.

Correct, and when I pay $75 for a damn game, it should work/look the same for me as for anyone else, if the hardware is capable.

Cg and PS 1.4 are both used in real games; if

Cg doesn't support PS 1.4, one of the great features of ATI's DX8.1 cards...so right there the compiler is not getting the best performance out of R200-class cards, while DX9 HLSL would.
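
In concrete terms (profile names from memory, so take this as a sketch):

    // DX9 HLSL can target PS 1.4 directly:
    //   fxc /T ps_1_4 /E main shader.hlsl
    // Cg's Direct3D pixel profiles stop at ps_1_3 (ps_1_1/ps_1_2/ps_1_3, then ps_2_0/ps_2_x),
    // so on an R200-class card Cg output falls back to the shorter PS 1.1-1.3 model
    // instead of the single-pass PS 1.4 path.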
 