From the docs, it looks like it's compatible with Microsoft's D3DX HLSL and that NVidia and Microsoft collaborated on it. That is, Cg is an implementation of HLSL, but instead of only generating DirectX vertex and pixel shader code, it can generate OpenGL code as well. There are a few incompatibilities right now because HLSL is still evolving, but NVidia says that in the end the two will merge and be one and the same.
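To give an idea of what the shared syntax looks like, here's a trivial pass-through vertex shader in the style of the examples in the Cg docs (a rough sketch I haven't run through either compiler, so take the details with a grain of salt):

    // Output vertex: clip-space position plus a color, bound via semantics
    struct VertexOut {
        float4 position : POSITION;
        float4 color    : COLOR0;
    };

    VertexOut main(float4 position : POSITION,
                   float4 color    : COLOR0,
                   uniform float4x4 modelViewProj)
    {
        VertexOut OUT;
        // Transform the object-space position into clip space
        OUT.position = mul(modelViewProj, position);
        // Pass the vertex color straight through to the rasterizer
        OUT.color = color;
        return OUT;
    }

In principle the same source can be fed to the Cg toolchain for an NVidia-targeted profile or to Microsoft's D3DX HLSL compiler for the generic DirectX path.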
This seems similar to the NVASM tool they released: you can use MS's tools to compile your vertex/pixel shaders, or NVidia's.
The main difference, of course, is that Cg will probably output better code for NVidia cards than D3DX's HLSL compiler. That means that if you want an optimal code path for NVidia, you use Cg to generate the NVidia-specific code and the D3DX compiler to generate the default case.
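At runtime it would just be a matter of loading whichever precompiled shader matches the card, something like this (a pure sketch with made-up file names, not any real engine code):

    #include <string.h>

    /* Pick the precompiled vertex shader for the detected card:
       the Cg-generated binary on NVidia, the D3DX-compiled one otherwise.
       File names here are hypothetical, just to show the idea. */
    const char *pick_vertex_shader(const char *vendor)
    {
        if (strstr(vendor, "NVIDIA") != NULL)
            return "shaders/transform_cg_nv.vso";   /* Cg output, NVidia path */
        return "shaders/transform_d3dx.vso";        /* D3DX HLSL output, default path */
    }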
If ATI is smart, they will release their own HLSL compiler that generates optimal code for the R200/R300.
The only negative is that NVidia doesn't appear to have a pluggable compiler back-end, so ATI et al. can't reuse the Cg front-end; they'd have to write their own from scratch if they want optimal output. NVidia should release Cg under the LGPL with source. That would promote far more optimal implementations of HLSL. And of course, if the NV30 is really powerful, NVidia would do much better if people were writing very intensive shaders, since that would show off the NV30 and make it obvious that other cards are slower or need fallbacks.