Using Cg for benchmarking

Would using Cg shaders be a reasonable option to benchmark shading hardware?

  • Yes, but don't use it too much, because it isn't very representative of future games.
  • No, it is way too nVidia-biased.
  • No, there are better ways to benchmark shading capabilities (please specify).

  Total voters: 296
Cg doesn't support PS 1.4, one of the great features of ATI's DX8.1 cards... so right there the compiler is not getting the best performance out of R200-class cards, while DX9 HLSL would.

Yes, I know Cg cannot support PS 1.4 right now, although they have promised there would be a PS 1.4 profile in the next version of the compiler.

But Cg has been used for writing the shader code in real games, which means the relationship between some (many? :rolleyes: ) ISVs and NVIDIA is fine. If NVIDIA is fine with this, I (we?) think ATI should be fine with it too, no matter whether the game developer uses Cg, HLSL or GL shading. :rolleyes:
 
I think what needs to be questioned is what proportion of the game developer community will actually be using Cg off their own back. Futuremark did a survey on which HLSLs developers would be using, and very few came back with Cg as a reply, which is one of the reasons why they just didn't use it for 3DMark.
 
DaveBaumann said:
I think what needs to be questioned is what proportion of the game developer community will actually be using Cg off their own back. Futuremark did a survey on which HLSLs developers would be using, and very few came back with Cg as a reply, which is one of the reasons why they just didn't use it for 3DMark.

So can you tell me who chose Cg in their survey? :LOL:
 
DaveBaumann said:
I think what needs to be questioned is what proportion of the game developer community will actually be using Cg off their own back. Futuremark did a survey on which HLSLs developers would be using, and very few came back with Cg as a reply, which is one of the reasons why they just didn't use it for 3DMark.
Didn't use CG or HLSL ;)
 
Hmm, I too was under the impression FM only used assembly language. Eh...

Anyway, the idea of this benchmark was more to PROVE that the NV3x is inferior to the R3xx even when shown in its best light, but since you people don't seem to like it, I guess I won't try to do it :)

Thanks for the feedback,


Uttar
 
Doomtrooper said:
Why couldn't someone try Pocketmoon_'s shaders on an R3xx using Cg and HLSL?

The biggest issue with both Cg and HLSL at the time was the optimisation that the compilers were capable of. Some of the compiled output varied greatly in terms of instruction/register counts and in turn performance. IMO it's far better to test raw performance using low-level shaders - which I can't be arsed to write anymore :)

NB none of those shaders were particularly long or complex. I'd love to try my volumetric fog shader on ATI hardware though!

[Attached image: mroom.jpg]
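
To make the low-level suggestion concrete, here is a rough sketch (not the actual shaders discussed above) of assembling a hand-written ps_2_0 shader with D3DXAssembleShader. Because the assembler does no optimisation of its own, the instruction sequence you wrote is the one the hardware runs, so the HLSL/Cg compiler question drops out of the benchmark entirely. The shader itself is a made-up one-texture pass-through, purely for illustration.

Code:
#include <d3dx9.h>
#include <cstdio>
#include <cstring>

// Hand-written ps_2_0 assembly: a trivial one-texture fetch, just an example.
static const char* kAsm =
    "ps_2_0\n"
    "dcl t0.xy\n"          // incoming texture coordinate
    "dcl_2d s0\n"          // sampler 0
    "texld r0, t0, s0\n"   // fetch a texel
    "mov oC0, r0\n";       // write it out unchanged

int main()
{
    ID3DXBuffer* code = 0;
    ID3DXBuffer* errs = 0;
    // Assemble the shader; no high-level compiler (HLSL or Cg) is involved.
    HRESULT hr = D3DXAssembleShader(kAsm, (UINT)strlen(kAsm), 0, 0, 0,
                                    &code, &errs);
    if (FAILED(hr) && errs)
        printf("assembler said: %s\n", (const char*)errs->GetBufferPointer());
    else if (code)
        printf("assembled %lu bytes of shader tokens\n",
               (unsigned long)code->GetBufferSize());
    // In a real benchmark the buffer would go to IDirect3DDevice9::CreatePixelShader.
    if (code) code->Release();
    if (errs) errs->Release();
    return 0;
}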
 
Sorry to repeat myself, but I think people have failed to fully take in the statement I made earlier: HLSL and Cg are the same thing, and porting a shader to HLSL involves doing nothing. They are the same... the same language (just wanted to drill that in :) ). If a developer chooses to write an HLSL shader, in effect they have written a Cg shader; the obvious benefit for them is that on an NVIDIA card that shader will be OpenGL-compatible as well.
There is nothing I see that prevents ATI from developing a compiler for it; what the hell, they could just call it an HLSL-to-OpenGL compiler (assuming M$ is cool with that) and be done with it.
I'll be perfectly honest and mention that I like Cg/HLSL: it is logical and very easy to learn because it is basically C and very similar to RenderMan.
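
As a rough illustration of the "same language" point, here is a minimal sketch that pushes one and the same source string through both the DX9 HLSL compiler (D3DX) and NVIDIA's Cg runtime. The shader, the entry-point name "main" and the ps_2_0 targets are just made-up examples, and it assumes the D3DX and Cg libraries are installed and linked.

Code:
#include <d3dx9.h>
#include <Cg/cg.h>
#include <cstdio>
#include <cstring>

// One source string, valid as HLSL *and* as Cg: a texture modulated by a colour.
static const char* kSrc =
    "sampler2D baseMap;\n"
    "float4    tint;\n"
    "float4 main(float2 uv : TEXCOORD0) : COLOR\n"
    "{\n"
    "    return tex2D(baseMap, uv) * tint;\n"
    "}\n";

int main()
{
    // Compile with the Microsoft HLSL compiler, targeting ps_2_0.
    ID3DXBuffer* code = 0;
    ID3DXBuffer* errs = 0;
    HRESULT hr = D3DXCompileShader(kSrc, (UINT)strlen(kSrc), 0, 0,
                                   "main", "ps_2_0", 0, &code, &errs, 0);
    printf("HLSL compile: %s\n", SUCCEEDED(hr) ? "ok" : "failed");

    // Compile the very same text with the Cg compiler, ps_2_0 profile.
    CGcontext ctx  = cgCreateContext();
    CGprogram prog = cgCreateProgram(ctx, CG_SOURCE, kSrc,
                                     CG_PROFILE_PS_2_0, "main", 0);
    printf("Cg compile:   %s\n", cgIsProgram(prog) ? "ok" : "failed");

    cgDestroyContext(ctx);
    if (code) code->Release();
    if (errs) errs->Release();
    return 0;
}

Whether the two compilers then emit equally good ps_2_0 code is a separate question, which is exactly what the rest of this thread is arguing about.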
 
I only read 2 pages of this and stopped reading because it was too funny reading it.

Here is the bottom line folks:

The HLSL compiler was written with input from all IHVs (Nvidia and ATI at the very least). The CG compiler was written with input ONLY from Nvidia.

MS compiler guys are FAR better than Nvidia compiler guys.

MS is an independent controller of HLSL; Nvidia, a competing IHV, is the SOLE controller of CG.

HLSL supports ALL versions of Pixel and Vertex Shaders. CG does not.

HLSL will supply better shader code than CG.

AND FOR THE LOVE OF GOD, HLSL is NOT the same as CG!!!!! Microsoft will tell you that any time you ask!!
 
nooneyouknow said:
I only read 2 pages of this and stopped reading because it was too funny reading it.

Here is the bottom line folks:

The HLSL compiler was written with input from all IHVs (Nvidia and ATI at the very least). The CG compiler was written with input ONLY from Nvidia.

MS compiler guys are FAR better than Nvidia compiler guys.

MS is an independent controller of HLSL; Nvidia, a competing IHV, is the SOLE controller of CG.

HLSL supports ALL versions of Pixel and Vertex Shaders. CG does not.

HLSL will supply better shader code than CG.

AND FOR THE LOVE OF GOD, HLSL is NOT the same as CG!!!!! Microsoft will tell you that any time you ask!!

See, I don't know much, but I can pretty much agree with all you have said here.
 
nooneyouknow said:
The HLSL compiler was written with input from all IHVs (Nvidia and ATI at the very least). The CG compiler was written with input ONLY from Nvidia.
So? I don't see how this means much of anything. Other companies can write their own back-ends, if they choose to do so.

MS compiler guys are FAR better than Nvidia compiler guys.
Why? Do you know who was on both teams? Are you aware of how many resources Microsoft and nVidia dedicated to their compilers? Do you know how much experience with compilers the relevant engineers had?

Just because Microsoft also develops other compilers doesn't mean that they'd do better at this one.

MS is an independent controller of HLSL; Nvidia, a competing IHV, is the SOLE controller of CG.
Cg is still a language, and any other IHV can make their own back-end. The front end is also open-source. Another IHV could add anything they wanted to.

HLSL supports ALL versions of Pixel and Vertex Shaders. CG does not.
Cg supports OpenGL.

HLSL will supply better shader code than CG.
Where did you get this from?

AND FOR THE LOVE OF GOD, HLSL is NOT the same as CG!!!!! Microsoft will tell you that any time you ask!!
The languages are close to identical.
 
nooneyouknow said:
I only read 2 pages of this and stopped reading because it was too funny reading it.

Here is the bottom line folks:

Don't you just hate it when people say that?

Cg and HLSL are, if not twins, close cousins. Cg was developed to be as close to HLSL as possible. Cg is capable of supporting OpenGL on all platforms that support the required ARB (note - not Nvidia) extensions. It gives you the added ability to squeeze the best out of Nvidia hardware and the ability to carry your shaders, unchanged, from OpenGL to DX and back again.

My findings when comparing the compiled output are that Cg and HLSL are very close in terms of optimisation. On some shaders HLSL nudged ahead, on others Cg did. There were one or two cases where Cg blew HLSL out of the water, and vice versa! Personally I'd like to see Intel write a back-end for Cg; they really know how to write a compiler :)
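
For what it's worth, the OpenGL path really is driven by whatever the driver exposes, not by anything NVIDIA-specific. Here is a rough sketch, assuming the cgGL runtime and an already-created GL context; the tiny pass-through shader and the entry name "main" are placeholders. cgGLGetLatestProfile picks the best fragment profile the context supports, which on ATI hardware is the ARB fragment program profile, and the same source can be compiled for the D3D ps_2_0 profile as well.

Code:
#include <Cg/cg.h>
#include <Cg/cgGL.h>
#include <cstdio>

// Placeholder shader: a pure colour pass-through, valid Cg (and HLSL) text.
static const char* kSrc =
    "float4 main(float4 col : COLOR0) : COLOR { return col; }\n";

void LoadFragmentShader()
{
    CGcontext ctx = cgCreateContext();

    // Ask the runtime for the best fragment profile the current GL context
    // exposes; on ATI this comes back as the ARB fragment program profile.
    CGprofile glProf = cgGLGetLatestProfile(CG_GL_FRAGMENT);
    CGprogram glProg = cgCreateProgram(ctx, CG_SOURCE, kSrc, glProf, "main", 0);
    cgGLLoadProgram(glProg);        // hand the compiled code to OpenGL
    cgGLEnableProfile(glProf);
    cgGLBindProgram(glProg);        // ready to render with it

    // The very same source also compiles for Direct3D, without any changes.
    CGprogram dxProg = cgCreateProgram(ctx, CG_SOURCE, kSrc,
                                       CG_PROFILE_PS_2_0, "main", 0);
    printf("GL profile used: %s\n", cgGetProfileString(glProf));
    (void)dxProg;
}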
 
Oh, I forgot to respond to the "open" thingy... I use "open" in the sense that other people can implement compilers for it, i.e. Nvidia has open-sourced the compiler technology.

As to HLSL and Cg not being the same language, for my purposes they are. My Cg shaders work as HLSL shaders. If my shaders compile on the HLSL compiler, forgive me for thinking they are written in HLSL. Perhaps it is a subset... but how many differences are there really? Semantically and syntactically they are very hard to differentiate.
 
Can we stop this from starting all over AGAIN?

With DX9 HLSL available, no developer will want to use Cg unless they are genuinely interested in exploring Cg or NVIDIA DevRel/PR did a good job. No developer in their right mind would stick with Cg if, while working with it on their title, they discover results/behaviour that is not what they expect and the results are negative.

If a developer uses Cg, the only things they must/should worry about are that it works, that it works on non-NVIDIA hardware, and that the IQ results are the same regardless of hardware. If using Cg results in noticeably better performance on NVIDIA hardware than on other hardware that is reasonably and logically the same spec-for-spec, then it is up to the developer to determine whether it is (more) important that using DX9 HLSL gives almost the same performance on all spec-for-spec hardware regardless of company. If they don't, that means they don't care - just as they think many users don't care either, except for the minority that sticks to "ethics" no matter what.
 
We already have a big problem with shader code on DX9 hardware.

NV3X chips do not run well with code that is built with the *s_2_0 profiles.
NV3X chips prefer code that is built with the *s_2_A profiles or Cg NV3X targets.
R(V)3XX chips prefer the *s_2_0 profiles.
R(V)3XX chips cannot run all code that is built with the *s_2_A profiles or Cg NV3X targets.

There will be other chips and other profiles (*s_2_B) right around the corner.

We will end up with a different shader set for each chip family. If you already have to compile a high-level shader for each chip family, there is no big difference whether you use one compiler with different profiles or different compilers to do this job.

Even if you want to compile at runtime, you can implement the use of different compilers in less than 100 lines of code.
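
Something along these lines is all it takes - a rough sketch, assuming D3DX for compilation and the adapter's PCI vendor ID (taken from GetAdapterIdentifier) as the way to tell the chip families apart; the profile choices simply mirror the preferences listed above, and the helper names are made up.

Code:
#include <d3dx9.h>
#include <cstring>

// Pick a compile target per chip family: NV3x parts prefer ps_2_a output,
// R(V)3xx parts prefer plain ps_2_0 (0x10DE = NVIDIA, 0x1002 = ATI).
static const char* PickPixelProfile(DWORD vendorId)
{
    if (vendorId == 0x10DE) return "ps_2_a";
    return "ps_2_0";
}

// vendorId would normally come from D3DADAPTER_IDENTIFIER9::VendorId.
ID3DXBuffer* CompileForAdapter(const char* src, DWORD vendorId)
{
    ID3DXBuffer* code = 0;
    ID3DXBuffer* errs = 0;
    HRESULT hr = D3DXCompileShader(src, (UINT)strlen(src), 0, 0, "main",
                                   PickPixelProfile(vendorId), 0,
                                   &code, &errs, 0);
    if (errs) errs->Release();
    if (FAILED(hr)) return 0;
    return code;   // token stream for IDirect3DDevice9::CreatePixelShader
}

Swapping the NV3x branch over to the Cg compiler with an NV3X target instead of the ps_2_a profile would only change those few lines.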

Last but not least: Cg is at the moment the only possible solution for writing an engine that supports DX and OpenGL and high-level shaders at the same time, without the need to write every shader twice.

PS: Excuse my bad English; it is not my first language. I am better with programming languages.
 