NVIDIA Stepping Back from Cg?

I wonder if MS finally agreed to pluggable backends to their HLSL compiler?

That, and I wonder who's going to be spearheading common HLSL between DX and OGL?
 
RussSchultz said:
I wonder if MS finally agreed to pluggable backends to their HLSL compiler?

That, and I wonder who's going to be spearheading common HLSL between DX and OGL?
I'd bet on a compiler that can compile DX9 HLSL to OpenGL shader assembler and OpenGL HLSL to DX9 shader assembler. Perhaps that'll eventually get into RenderMonkey (which hasn't been updated in a while).
 
OpenGL doesn't use assembly as its high-level language, or as its low-level language, so you'd have to develop a "translator" from DX HLSL to OpenGL HLSL.
 
RussSchultz said:
OpenGL doesn't use assembly as its high-level language, or as its low-level language, so you'd have to develop a "translator" from DX HLSL to OpenGL HLSL.
True, but it could still potentially be compiled down to an assembler language.

On reflection, however, I think I might be missing your original point about an API-independent HLSL entirely... I'm guessing that you were implying an API-independent HLSL that does not compile down to any assembler code, but to the binary machine code that the GPU utilizes directly (like what OpenGL HLSL does and D3D HLSL can do).

Is this correct? If so, then I'm guessing that it won't happen any time soon -- especially if OpenGL and DirectX shader capabilities start to differ significantly. An example would be if one API starts putting depth and stencil operations in the programmable pipeline, or unifies vertex and fragment shading, significantly before the other API does.
 
I can't imagine them diverging too wildly in architecture, since the same hardware has to run both. Or at least it does in 99% of the hardware out there. (Excepting a few crazy super-high-end professional cards, everything needs to be DX capable.)
 
WaltC said:
I would rather think nVidia might be relieved to shed the burden and the expense. (Kind of like 3dfx shedding GLIDE, in a way, except in the degree of market penetration.) At the time nVidia started work on Cg, D3d/OpenGL HLSLs were not even apparent on the horizon. Now that they are becoming a reality there isn't much need for nVidia to continue to invest in supporting Cg, IMO.

I for one would welcome nVidia shifting back to working with M$ along with other IHVs to formulate the APIs and the HLSLs as well. Consumers and developers both will benefit, and the irony is, so will nVidia.

IMHO the whole Glide thing is exactly where [N] was going, too. 3dfx did well right up until they moved away from their proprietary API, but if 3dfx hadn't done it, the developers would have made them. Devs program for the widest audience, but here I'm not telling you anything you didn't already know.
The developers smacked 'em before they even got started this time. I just don't know how [N] can swing it with HLSL when their PS 2.0 performance is so dismal. I hear the Vidiots saying how the Det 50s will work miracles with PS 2.0, whatever... You can't make up that much of a performance delta with drivers.

/end Rant. :oops:
 
Perhaps this means that NV40 is going to be a little bit less 'unique' (read 'hard') to program for? I always got the impression that one of the main reasons nVidia pushed CG so hard was because it was pretty much the only way to wring decent performance out of NV3x cards.
 
Dio said:
RussSchultz said:
OpenGL doesn't use assembly...
ARB_fragment_program is just about assembly-level...

Yes, but in OpenGL 2.0 ARB_fragment_program will be legacy stuff; it won't support the things that GLSlang will.
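
To make the contrast concrete, ARB_fragment_program really is fed an assembly-style text string. A minimal sketch, assuming the extension's entry points are already available; the shader itself is a made-up example that just modulates a texture by the interpolated colour:

Code:
// Sketch only: loading an ARB_fragment_program assembly string.
// Assumes a GL context exposing ARB_fragment_program and that the
// prototypes are available (GL_GLEXT_PROTOTYPES or an extension
// loader); error checking omitted.
#include <GL/gl.h>
#include <GL/glext.h>
#include <cstring>

static const char* kFragProg =
    "!!ARBfp1.0\n"
    "TEMP texel;\n"
    "TEX texel, fragment.texcoord[0], texture[0], 2D;\n"  // sample texture unit 0
    "MUL result.color, texel, fragment.color;\n"          // modulate by vertex colour
    "END\n";

void BindAssemblyFragmentProgram()
{
    GLuint prog = 0;
    glGenProgramsARB(1, &prog);
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
    glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)std::strlen(kFragProg), kFragProg);
    glEnable(GL_FRAGMENT_PROGRAM_ARB);
}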


Okay, now here is a question: does Cg support PS_3.0 and VS_3.0? If not, then the Cg back end would have to be upgraded for those targets, and I doubt Cg itself currently has all the features of PS_3.0 exposed, so new functions would have to be added, etc. I could be wrong, and then of course there is simply the fact that it's going to perform like *#$@ when the compile target is GLSlang.
 
There has been talk that the DX9-SDK Summer Update (Beta2 out now) has a new back-end profile which optimises for low register usage and allows ps_2_x (including predication etc.).

Sound familiar?
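
If that's accurate, targeting the new profile from D3DX should just be a matter of passing a different profile string to the HLSL compiler. A rough sketch; the profile name "ps_2_a", the source file, and the entry point are my assumptions for illustration:

Code:
// Sketch: compiling one HLSL entry point against a chosen profile.
// "ps_2_a" is assumed to be the rumoured low-register-usage profile;
// "lighting.hlsl" and "MainPS" are hypothetical names.
#include <windows.h>
#include <d3dx9.h>

ID3DXBuffer* CompileForProfile(const char* profile)
{
    ID3DXBuffer* code   = NULL;
    ID3DXBuffer* errors = NULL;

    HRESULT hr = D3DXCompileShaderFromFile(
        "lighting.hlsl",      // hypothetical HLSL source file
        NULL, NULL,           // no #defines, no custom include handler
        "MainPS",             // hypothetical pixel shader entry point
        profile,              // e.g. "ps_2_0", or "ps_2_a" if the rumour is right
        0,                    // default compile flags
        &code, &errors, NULL);

    if (FAILED(hr))
    {
        if (errors)
        {
            OutputDebugStringA((const char*)errors->GetBufferPointer());
            errors->Release();
        }
        return NULL;
    }
    if (errors) errors->Release();
    return code;              // token stream for IDirect3DDevice9::CreatePixelShader
}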
 
maven said:
There has been talk that the DX9-SDK Summer Update (Beta2 out now) has a new back-end profile which optimises for low register usage and allows ps_2_x (including predication etc.).

Sound familiar?
So that's it then. nVidia is willing to dump Cg because Microsoft is going to do what Cg was invented for, to overcome the shortcomings of the nV3x architecture.
 
Ratchet said:
maven said:
There has been talk that the DX9-SDK Summer Update (Beta2 out now) has a new back-end profile which optimises for low register usage and allows ps_2_x (including predication etc.).

Sound familiar?
So that's it then. nVidia is willing to dump Cg because Microsoft is going to do what Cg was invented for, to overcome the shortcomings of the nV3x architecture.
Yup.

<sigh>

Well, it's better than making developers code two paths at least....

<sigh>
 
Well, if developers pre-compile the HLSL code, then two paths will definitely be used. If, however, it is not pre-compiled, then all the devs have to do is make sure the low-register-usage profile is used for NVIDIA cards; otherwise, the HLSL code itself remains the same for both cards. If the devs want to support PS 2.x as well, that's a reasonable argument for coding an additional render path, but in that case it's more of a technology-level-specific path than a vendor-specific path.
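
In other words, for the runtime-compile case the "NVIDIA path" could shrink to a profile check at startup. Something like the sketch below; the vendor-ID test and the "ps_2_a" profile name are illustrative assumptions, not a claim about how any shipping engine does it:

Code:
// Sketch: pick an HLSL compile profile per adapter instead of writing
// a separate render path. The "ps_2_a" name is an assumption.
#include <windows.h>
#include <d3d9.h>

const char* PickPixelShaderProfile(IDirect3D9* d3d)
{
    D3DADAPTER_IDENTIFIER9 id;
    if (FAILED(d3d->GetAdapterIdentifier(D3DADAPTER_DEFAULT, 0, &id)))
        return "ps_2_0";                    // safe default

    const DWORD kVendorNVIDIA = 0x10DE;     // NVIDIA's PCI vendor ID
    if (id.VendorId == kVendorNVIDIA)
        return "ps_2_a";                    // assumed low-register-usage profile

    return "ps_2_0";                        // standard profile elsewhere
}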
 
Ratchet said:
maven said:
There has been talk that the DX9-SDK Summer Update (Beta2 out now) has a new back-end profile which optimises for low register usage and allows ps_2_x (including predication etc.).

Sound familiar?
So that's it then. nVidia is willing to dump Cg because Microsoft is going to do what Cg was invented for, to overcome the shortcomings of the nV3x architecture.

MS is doing its job - they want their code to run best across a wide range of hardware. It's not a good advert for HLSL if it runs exceptionally well on one architecture and very poorly on another.

I have heard what maven is saying; however, I've heard that they have thus far done it in a fashion that doesn't impact R300 performance, but does make register usage more NV30-friendly.
 
Ratchet said:
So that's it then. nVidia is willing to dump Cg because Microsoft is going to do what Cg was invented for, to overcome the shortcomings of the nV3x architecture.
You sound like you think that's bad. That's why we have high-level languages!
 
History of Cg vs. glslang

At the time nVidia started work on Cg, D3d/OpenGL HLSLs were not even apparent on the horizon.
There was a discussion about this recently on the OpenGL.org boards. The OpenGL HLSL proposal predates Cg.

That is not correct. glslang was first presented at the OpenGL BOF at SIGGRAPH 2001. At that time Bill Mark (Cg's lead designer) was still working at Stanford as a researcher on the Stanford Real-Time Programmable Shading Project; it wasn't until October 2001 that he joined NVIDIA (in his own words, "From Oct 2001 - Oct 2002, I worked at NVIDIA as the lead designer of the Cg language").

The original "GL2" whitepapers were presented at the ARB meeting in September of the same year and made public in December 2001.

Cg wasn't offered to the ARB until a year or so later:


quote (June 2002 ARB meeting):
"Cg" discussion
NVIDIA wanted to discuss their goals with Cg (although they are not offering Cg to the ARB).
 
Myrmecophagavir said:
Ratchet said:
So that's it then. nVidia is willing to dump Cg because Microsoft is going to do what Cg was invented for, to overcome the shortcomings of the nV3x architecture.
You sound like you think that's bad. That's why we have high-level languages!
no no, not bad. I just find it difficult to digest. Hardware should be fixed to meet specifications, not the other way around. That's why they're called specifications, after all.
 
Not to really delve into this again, but we have C compilers that compile down to different machine code for different processors.

Why? Because there's lots of ways to skin a cat. This is for the betterment of all hardware--it will give IHVs more latitude to make better designs.
 
RussSchultz said:
Not to really delve into this again, but we have C compilers that compile down to different machine code for different processors.

Why? Because there's lots of ways to skin a cat. This is for the betterment of all hardware--it will give IHVs more latitude to make better designs.

The problem is that we are compiling to the same machine code, because it all compiles into pixel shader assembly. Now, it's all right if the compiler optimises the HLSL into pixel shader assembly that runs better on a targeted platform. BUT HLSL IS OPTIONAL! If users choose to write their pixel shaders in assembly, that's where we get a problem, because it will affect the speed greatly in some cases on some platforms. If NVIDIA made their cards run well on 90% of pixel shader instruction combinations, then users who choose to use assembly would be a lot better off, but Microsoft is making it easier for NVIDIA not to do this by producing a new back end for the HLSL compiler.
 
The idea is that you don't compile into the arbitrary machine code that MS has devised, but something that fits your hardware more appropriately.

The OpenGL HLSL mechanism does it this way, and I, for one, think it's the right thing to do.
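
For comparison, here is a rough sketch of what the glslang route looks like through the ARB_shader_objects entry points; the trivial shader is made up, and the point is just that the driver gets the high-level source itself, with no intermediate assembly step:

Code:
// Sketch: the GLSL path. The driver receives the high-level source and
// compiles it straight to whatever the hardware natively runs.
// Assumes ARB_shader_objects / ARB_fragment_shader are available with
// prototypes loaded; error checking omitted.
#include <GL/gl.h>
#include <GL/glext.h>

static const char* kGlslSource =
    "void main()\n"
    "{\n"
    "    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);\n"  // plain red, for illustration
    "}\n";

void UseGlslFragmentShader()
{
    GLhandleARB frag = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
    glShaderSourceARB(frag, 1, &kGlslSource, NULL);
    glCompileShaderARB(frag);                // driver: source -> native microcode

    GLhandleARB prog = glCreateProgramObjectARB();
    glAttachObjectARB(prog, frag);
    glLinkProgramARB(prog);
    glUseProgramObjectARB(prog);
}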

This, of course, doesn't alleviate the need to have a DX9-assembly-to-whatever-micro-op compiler, but it's a great step forward in allowing creativity in the design for the forward-looking case.

Edit: Speaking speculatively, of course. I'm not sure whatever changes are in the works for DX9 allow this.
 