Boy, this is sad. Not one of the Cg detractors can point out, concretely, how Cg differs from DX9 HLSL. It's all speculation in the absence of information.
The only publicly available information about a concrete difference is the introduction of "profiles," which NVidia added for backwards compatibility with DX8 and OpenGL fragment shaders. Microsoft might pick up these features and roll them back into DX9 HLSL proper, in which case Cg would simply be NVidia's implementation of DX9 HLSL.
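And for what it's worth, the profile mechanism is nothing mysterious: you feed the exact same source to the compiler and just tell it which target to emit. A rough sketch of what that looks like with NVidia's offline compiler (I'm writing the cgc command lines from memory, so treat the exact switches as approximate):

    // simple.cg -- trivial Cg / DX9-HLSL-style vertex shader; one source for every target
    struct VOut {
        float4 pos   : POSITION;
        float4 color : COLOR0;
    };

    VOut main(float4 pos   : POSITION,
              float4 color : COLOR0,
              uniform float4x4 modelViewProj)
    {
        VOut o;
        o.pos   = mul(modelViewProj, pos);  // transform into clip space
        o.color = color;                    // pass the vertex color straight through
        return o;
    }

    // Same source, different profiles:
    //   cgc -profile vs_1_1 -entry main simple.cg    (DX8-class vertex shader)
    //   cgc -profile arbvp1 -entry main simple.cg    (OpenGL ARB vertex program)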
No one in this forum can point to any NV30-specific features in Cg that don't already exist in DX9 HLSL, or that won't exist in its final version.
If Cg ends up being NVidia's trademark on their DX9 HLSL compiler and toolset, who the hell cares? As I have explained ample times in the past, if I am a developer I have the following choices:
1) Using Visual Studio, I can write vertex and pixel shaders for DX9 by hand
2) I can choose to write DX9 HLSL instead and use the tools provided by MS for the compilation/generation step
3) I can choose not to use MS's tools, but NVidia's instead, since they might be easier to use or generate more optimal code
4) I might want to use DX9 HLSL but generate code for OpenGL, so I use NVidia's tool
5) I use RenderMonkey instead of MS's tool to compile DX9 HLSL
6) If I am targeting both the NV30 and the R300, then I will use Cg to generate optimal code from my DX9 HLSL for the NV30 code path, and ATI's compiler to generate optimal code for the R300 path. I will then use MS's "generic" compiler to handle everything else (a sketch of what that build step might look like follows this list).
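To put option 6 in concrete terms, the build step I have in mind is nothing more than running the same HLSL file through three compilers. The command lines below are purely illustrative: the cgc and fxc switches are from memory, and the ATI tool is a placeholder name, since nobody knows yet exactly what they will ship:

    rem build_shaders.bat -- one DX9 HLSL source, three back ends (hypothetical pipeline step)
    cgc -profile vs_2_0 -entry main lighting.hlsl -o lighting_nv30.vso
    rem ati_hlsl_compiler lighting.hlsl -o lighting_r300.vso      (placeholder, tool TBD)
    fxc /T vs_2_0 /E main /Fo lighting_generic.vso lighting.hlsl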
This issue is being blown way out of proportion. Even if NVidia had invented a completely different language, I might still want to use their tool, the same way that many developers avoid talking to DirectX directly and instead use higher-level third-party libraries like RenderWare.
Finally, RenderMonkey appears to have its own proprietary XML syntax for describing shaders and shader parameters, so would you be aggressively going after them as well, since they are not going through the ARB to standardize it?