What was that about Cg *Not* favoring Nvidia Hardware?

RussSchultz said:
And how could they do that? Suddenly changing the language specification and every lemming would just go along with them?

You're being awfully naive if you can't see what relying on a competitor's tools means. In fact I think you're being purposely facetious, so I'm not going to follow this line of argument any further.

Tell me, if you think the language has so little purpose or control, why did NVIDIA create Cg instead of following the DX and OGL HLSLs (albeit with their own compiler/tools), which would have been better for everyone in the industry?
 
RussSchultz said:
I think you're wrong. NVIDIA is the only one who has control over the language specification, not the implementation.

Though this argument has been beaten to death, do you really think the language syntax can be biased?

Cg is more than a language syntax; it is a software ecosystem with different implementations, sets of tools, and a constituency of developers and their code. By controlling the language syntax, NVidia controls to a limited extent the entire ecosystem--it can make portions of it obsolete at will. It is the only entity in the ecosystem with that kind of power.
 
Because it offers things that neither DX-HLSL nor OGL2.0 offers.

And NO, I'm not being purposely facetious. I don't see the over-reliance that you do.

And, sorry, I've got to run, so no more comments for a while from me.
 
Because DX9 HLSL was not yet available, and OpenGL 2.0 is still not available. Cg is currently the only HLSL available for OpenGL, and now should work properly on all hardware (haven't yet tried it on my Radeon 9700 in OpenGL, though the DX targets do work).

And Cg will only be "controlled by nVidia" as long as no other hardware vendor supports it.

One last thing: it is in nVidia's interest to make the ARB-extension and generic DX9 targets as optimal as possible. For example, if the generic DX9 targets are not at least as optimal as Microsoft's own HLSL, there will be little reason for developers to use Cg. But if nVidia manages to do a better job... well, then developers will want to use Cg.
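To make the multi-target point concrete, here is a minimal Cg fragment program (a hypothetical example of mine, not from any SDK). The same source should compile with cgc under the arbfp1, ps_2_0, or fp30 profiles listed further down this thread:

Code:
// minimal Cg fragment program (hypothetical example)
// modulates a texture lookup by the interpolated vertex colour
float4 main(float2 uv    : TEXCOORD0,
            float4 color : COLOR0,
            uniform sampler2D tex) : COLOR
{
    return color * tex2D(tex, uv);
}

Whether the generic ps_2_0 output for something like this is as tight as what Microsoft's HLSL compiler produces is exactly the question that will decide whether developers bother.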
 
Chalnoth said:
No, I don't think PS 2.0 is that much different than PS 1.1 (Though it does, of course, have hugely more instructions and resources). From what I can tell, every instruction in PS 1.1 is directly correlated to one in PS 2.0. Please let me know if I'm wrong (A very specific example would be best).

Here's a comparison between ps1.4 and ps2.0. The difference between 1.1 and 2.0 is MUCH bigger, but I had this comparison handy and don't feel like writing out a few shaders just for the sake of argument.

Compare

Not only have the syntax and instructions changed, but the way everything is done has too. Considering that there's only so much that CAN change at the assembly level when you have only basic operations, I think the recent hardware changes from 1.1 to 2.0 have just about covered the spectrum.
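To give a flavour of the shift, here is a tiny fabricated fragment of my own (not the linked comparison) showing the same dependent texture read in both models. Note how ps1.4 implies the sampler from the register number, while ps2.0 declares coordinates and samplers explicitly:

Code:
; ps1.4 -- the phase marker splits the shader; samplers are implied
ps.1.4
texld r0, t0        ; fetch from texture 0 using coordinate set 0
phase
texld r1, r0        ; dependent read: r0's colour becomes the coordinates
mov r0, r1          ; ps1.4 outputs whatever ends up in r0

; ps2.0 -- inputs and samplers are declared up front
ps_2_0
dcl t0.xy
dcl_2d s0
dcl_2d s1
texld r0, t0, s0    ; fetch through declared sampler s0
texld r1, r0, s1    ; dependent read through sampler s1
mov oC0, r1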
 
Chalnoth said:
No, I don't think PS 2.0 is that much different than PS 1.1 (Though it does, of course, have hugely more instructions and resources). From what I can tell, every instruction in PS 1.1 is directly correlated to one in PS 2.0. Please let me know if I'm wrong (A very specific example would be best).

Please find me the directly correlated PS2.0 instruction to TEXM3X3VSPEC.
 
andypski said:
Please find me the directly correlated PS2.0 instruction to TEXM3X3VSPEC.

Okay, thanks. There apparently isn't one, though it appears that this instruction can be easily emulated with two PS2.0 instructions (m3x3 and one of the texture lookup instructions). And this does show my point rather well, actually. This change means that some programs made for DX8 hardware will have to run with emulation in DX9 hardware. Today's DX9 hardware is vastly more powerful than that which came before, so it doesn't make much difference. This may not be the case in the future, however, as the assembly gets more and more complex.

To say it again: it's better to have a low-level assembly that takes into account things like cache sizes, memory management, and other very low-level details, all handled by the compiler, than to have a higher-level assembly that must be interpreted by the driver every time the program is run.
 
790 said:
Bjorn said:
And, i think that for MS, the consistent part comes first and optimizations later.

You're wrong. The HLSL compiler generates optimal assembly, and optimal assembly is just that; there's only one optimal path for both IHVs, and that's the fewer instructions the better. Furthermore, Microsoft works very closely with NVIDIA, ATI, and everyone else to get everything working at maximum performance. If NVIDIA felt HLSL was not fast enough, they could have put their resources into helping Microsoft with the backend, _OR_ at the extreme they could have written their own backend. But they didn't do that; they created their own language, not just their own compiler.

Yep. And the question is, why? Even nVidia doesn't have the answer to that one. I suppose it just seemed like a good idea at the start. ;)
 
Chalnoth said:
andypski said:
Please find me the directly correlated PS2.0 instruction to TEXM3X3VSPEC.

Okay, thanks. There apparently isn't one, though it appears that this instruction can be easily emulated with two PS2.0 instructions (m3x3 and one of the texture lookup instructions). And this does show my point rather well, actually. This change means that some programs made for DX8 hardware will have to run with emulation in DX9 hardware. Today's DX9 hardware is vastly more powerful than that which came before, so it doesn't make much difference. This may not be the case in the future, however, as the assembly gets more and more complex.

Actually, the equivalent of this instruction on a PS2.0 architecture would be rather more complex than this - texm3x3vspec is the third instruction in a matrix operation, and does an additional reflection calculation. The whole PS1.1 chain for this instruction is -

Code:
texm3x3pad t(m), t(n)
texm3x3pad t(m+1), t(n)
texm3x3vspec t(m+2), t(n)

The two texm3x3pad instructions are just dot products, so they map 1:1 onto PS2.0 ALU operations (we'll ignore these).

The vspec instruction does the third row of the matrix multiply followed by:

2 * ((N.E) / (N.N)) * N - E

where E is the eye vector, and N is the normal.

So this one instruction expands overall to something like 4-5 instructions in a PS2.0 shader.
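For the curious, a naive ps_2_0 expansion might look something like the sketch below. The register assignments are my own assumptions: r0 holds the normal fetched earlier, r5 holds the eye vector E (already assembled from the w components of the coordinate sets), and s3 is the cube map. A real compiler would no doubt schedule this differently:

Code:
ps_2_0
dcl t0.xyz                ; rows of the 3x3 matrix (assumed layout)
dcl t1.xyz
dcl t2.xyz
dcl_cube s3
; r0 and r5 assumed to have been set up earlier in the shader
dp3 r1.x, t0, r0          ; was the first texm3x3pad
dp3 r1.y, t1, r0          ; was the second texm3x3pad
dp3 r1.z, t2, r0          ; third row -> N (start of texm3x3vspec)
dp3 r2.x, r1, r1          ; N.N
dp3 r3.x, r1, r5          ; N.E
rcp r2.x, r2.x            ; 1 / (N.N)
mul r3.x, r3.x, r2.x      ; (N.E) / (N.N)
add r3.x, r3.x, r3.x      ; 2 * (N.E) / (N.N)
mad r4.xyz, r1, r3.x, -r5 ; 2 * ((N.E)/(N.N)) * N - E
texld r0, r4, s3          ; dependent cube-map lookup

Ignoring the two pad rows, the vspec portion comes to six ALU instructions plus the lookup in this naive form, roughly in line with the 4-5 figure above once a compiler tightens it up.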
 
RussSchultz said:
790 said:
It's just nuts, because no matter how harmless it seems, it's a dog eat dog world, and if ATI promoted Cg, their eggs would all be in NVIDIA's basket, and NVIDIA is free to tip it over whenever they want.

And how could they do that? Suddenly changing the language specification and every lemming would just go along with them?

Even the small differences that exist between DX9 HLSL and Cg now make things difficult for developers that have tried Cg but don't want to be wedded to nVidia tools. nVidia could make things difficult for rival IHVs trying to support Cg, without even seeming to try.
 
# Support for new profiles
* vs_1_1 for DirectX 8 and DirectX 9
* vs_2_0 and vs_2_x for DirectX 9
* ps_1_1, ps_1_2 and ps_1_3 for DirectX 8 and DirectX 9 <---- Doh!! PS 1.4 Radeon 8500/9000/9100 owners robbed... no support yet!! It's as if DX 8.1 didn't exist :)
* ps_2_0 and ps_2_x for DirectX 9
* arbvp1 [OpenGL ARB_vertex_program]
* arbfp1 [OpenGL ARB_fragment_program]
* vp20, vp30 [NV_vertex_program 1.0 and NV_vertex_program 2.0]
* fp30 [NV30 OpenGL fragment programs]
* fp20 [NV_register_combiners and NV_texture_shader]

So if a developer cross-codes with Cg, all the 8500/9000/9100 owners would not be supported... sure, these cards can do the older pixel shader versions, yet that would completely eliminate one of the nice features of ATI's DX8 cards... so then a developer would have to write some form of hand-written code if they wanted to support PS 1.4... where is the time savings there?

Let's not forget the market penetration of the 8500/9000 cards now.
 
Sometimes evaluating the merits of a tool can be aided by a history of the tool’s creation. Here is my guess. I remind you that no actual facts were used in the formulation of this guess…

How did Cg come to be?

(Speculation Mode)

1. nVidia is designing a highly programmable vertex and pixel shader hardware engine. They recognize the need for an HLSL.

2. Microsoft recognizes the need for an HLSL.

3. NVidia offers to create the DX9 HLSL for Microsoft, knowing that without a viable HLSL there will be very little market (differentiating game development) for the NV30 hardware. According to the original target release date for NV30, it would be the first DX9 hardware available. Microsoft says: “go ahead, but no guarantees.”

4. NVidia works feverishly to finish both NV30 and DX9 HLSL. Design/manufacturing tradeoffs for the hardware take their toll and prevent the HLSL from being completed until the design is frozen (or exists as testable hardware).

5. Spring 02 ends… Summer 02 arrives and drags on…

6. ATI shows up at Microsoft’s door with working, testable hardware. ATI looks at the current state of the HLSL from the viewpoint of ATI’s hardware and makes a few suggestions.

7. This event, coupled with Microsoft’s lack of desire to make OGL programming any easier, leads Microsoft to release an HLSL that is largely identical to nVidia’s but has several smallish but important differences.

8. nVidia now holds the result of a large amount of effort and intellectual property that has just been orphaned. They have already sunk the effort and cost… so what to do?

9. Push Cg at developers anyway. They have already built it so why not?

(/Speculation Mode)

Regards, Chris.

p.s. This is not a jab at nVidia. I see nothing nefarious in a company pushing developers to develop in a way that would generate demand for that company’s products. Corporations are self-serving and act accordingly. No animals were harmed in the writing of this post.
 
This is NOT like the x86 situation, because if you didn't notice, ps/vs specs are completely changing in every revision. ps2.0 is nothing like ps1.1, nor is it required to be. So how are IHVs constrained, when they are free to reinvent the spec in every hardware revision?

Yes it is... Vertex shaders and pixel shaders are largely optional rendering paths within DX, just like MMX, SSE, SSE2, 3DNow!, 3DNow! Enhanced and 3DNow! Professional (read: SSE) are optional extensions to x86.

NO IHVs at all will EVER write backends for Cg, just like no IHVs wrote backends for Glide.

How would one write a "back-end" for Glide? It's an API, not a compiler. Also, unlike Glide, which was closed to 3Dfx, Cg is open to others... You can write a profile yourself; you don't necessarily need somebody else to do it for you.

No, the crux of the argument is NOT that it's vendor-biased; it's that it's vendor-controlled (at least the language & tools are). Don't you realise what it would mean for ATI to put the future of their development into NVIDIA's tools? It's like Apple switching to Visual Basic or C# if Microsoft gave them a backend.

Bad analogy (aside from VB and C# sucking): Apple has its own internal language support and API investments; ATi doesn't. Nor are they switching from anything, because they're not leaving an existing language, just endorsing a new one (DX9 HLSL).

It's just nuts, because no matter how harmless it seems, it's a dog eat dog world, and if ATI promoted Cg, their eggs would all be in NVIDIA's basket, and NVIDIA is free to tip it over whenever they want.

If it is, it's just as nuts as endorsing two 3D APIs. Supporting Cg doesn't entail "putting all their eggs into one basket"; at worst it's just distributing them around to hedge your bets.

Likewise, they could build their own Cg compiler with support for their own hardware, even go one step further by preserving the Nvidia profiles, and possibly wrest control of Cg from Nvidia through a better implementation (they couldn't use the Cg name, though). It'd be interesting to see what Nvidia would do under such circumstances. It's not unheard of; take a look at what's happened with Java...

Cg is currently the only HLSL available for OpenGL

That's not quite true. SGI's ISL has been around for several years now (although AFAIK it's specific to SGI hardware). Actually, I think ESMTL also compiled to OpenGL as well, but that's a little older and more limited.

Really, all this hullabaloo is kind of funny, since HLSLs have been around for years (hell, RenderMan has been around for ages, although it doesn't emphasize real-time performance). The only reason it's getting all this attention lately is simply that consumer-level hardware is becoming capable enough to run them...

* ps_1_1, ps_1_2 and ps_1_3 for DirectX 8 and DirectX 9 <---- Doh!! PS 1.4 Radeon 8500/9000/9100 owners robbed... no support yet!! It's as if DX 8.1 didn't exist

Well, 8.1 does exist. It's PS1.4 that doesn't exist... :p You don't exactly see 3Dlabs, Matrox, SiS, et al. cranking out PS1.4 parts, do you?
 
Well, 8.1 does exist. It's PS1.4 that doesn't exist... You don't exactly see 3Dlabs, Matrox, SiS, et al. cranking out PS1.4 parts, do you?

There is enough OEM demand for DX8.1 parts from ATI that OEMs asked them not to discontinue the 8500 and to clock it higher as the 9100...
I know locally here in Canada the 8500 OEM cards can be had for $100 Canadian or $60 US, and they sell well...
So as I said before, the market penetration is there to justify support (not to mention the ever-popular Mobility 9000)... it should not be left out, and it won't be with MS HLSL.
 
3Dlabs P9 & P10 are/could be.

Heh, yeah. The operative word here is could be. AFAIK the current drivers only support up to PS1.2, and I don't get the feeling they're going to be putting a whole lot of effort into going beyond that with DX. I imagine their driver teams are more focused on OpenGL, ISV certification, and eventual OpenGL 2.0 support.

There is enough OEM demand for DX8.1 parts from ATI that OEMs asked them not to discontinue the 8500 and to clock it higher as the 9100...
I know locally here in Canada the 8500 OEM cards can be had for $100 Canadian or $60 US, and they sell well...
So as I said before, the market penetration is there to justify support (not to mention the ever-popular Mobility 9000)... it should not be left out, and it won't be with MS HLSL.

Of course not in DX9; it's in MS's best interest to support as many vendors as possible. The burden of PS1.4 support in Cg, however, does fall upon ATi. It's not Nvidia's responsibility to write support for other vendors' hardware (although it would sure be nice). Besides, what would you do if the ARB had ratified Cg tomorrow instead of 3Dlabs' efforts? And it's not like you can't use the PS 1.1, 1.2, 1.3 profiles with ATi hardware. You can, of course, easily write a demo in Cg that'll run on a 9700/9500 but bork on any currently available Nvidia hardware...
 
Well this conversation surely has reduced itself to the 20+ page arguments over Pro-Cg/Anti-Cg, as expected. :)

People have to understand that they aren't going to convince some people out of their loyal notions one way or another, so just call it a moot point.

I look at Cg as a tool: a good tool, but with questionable need (at this juncture) and a conflict of interest at its source. The same reason courthouses have a 3rd-party, unbiased judge make decisions in trials, versus letting one side's attorney make the judgement, is the same reason compilers, languages, or programming syntax should be owned/managed by a 3rd party. People can defend all day long how "unbiased" a particular prosecuting attorney may be, and maybe even pull out a couple of examples where the "unbiased" party made a fair judgement, but it doesn't speak to the conflict of interest at the heart of the issue... nor to what situations may prevail at a later date.

I think the only information the consumer has to date is:
1) Cg doesn't have any form of PS1.4 profile available
2) The only Cg game demo code we have doesn't run on all IHVs' hardware.

Although #2 has little to do with Cg itself, one might question what perception something touting NVidia features, in an NVidia shader language, that only runs natively on NVidia hardware could be suggesting. It definitely builds enough of a case to see what direction things can be taken in the future, even if this example is purely for the uneducated (as those in the know are quite aware of what this is truly about).

I'll embrace Cg the moment it is owned/managed and maintained by a 3rd party, or an accumulation of 3rd parties (like the ARB), where peer review and syntax changes require a majority vote to ensure there is no compromise made for a particular architecture. I can't see a reasonable unbiased argument against such a structure, since it would remove any doubt of favoritism in the language. The only negative here would be the instant recognition that we already have such a thing with DX9's HLSL, and OGL's as well.

Those that are against such a stipulation can retain their individualist opinions, but it surely won't buy any subjective consideration. To desire a language/compiler managed and maintained only by an involved partner can be destructive in the end, no matter how ethical and objective one may insist the initial design may be.
 
DX is going to get far more interest within 3Dlabs from now on. The OGL side of things is good; now that they are consumer-oriented as well (with support for Creative products), DX will have more attention than it has had previously.
 
DaveBaumann said:
DX is going to get far more interest within 3Dlabs from now on. The OGL side of things is good; now that they are consumer-oriented as well (with support for Creative products), DX will have more attention than it has had previously.

I get the feeling, reading between the lines and from your mention of 3dlabs in this thread and some other threads, that you, DaveBaumann, know something of the future of 3dlabs' direction. Something that is not really known to the rest of us mere mortals.

Any idea of an ETA when the rest of the world will be let on? :D

Or am I reading the tea leaves completely and utterly wrong again? :-?
 
archie4oz said:
It's not Nvidia's responsibility to write support for other vendors' hardware (although it would sure be nice).

This has nothing to do with ATI hardware... Cg is supposed to support DX8 and DX9... DX9 support means PS 1.1-3.0 AFAIK... of course ATI is the only DX8-class hardware vendor supporting it, yet it's still a part of DX 8.1.

Besides, what would you do if the ARB had ratified Cg tomorrow instead of 3Dlabs' efforts? And it's not like you can't use the PS 1.1, 1.2, 1.3 profiles with ATi hardware. You can, of course, easily write a demo in Cg that'll run on a 9700/9500 but bork on any currently available Nvidia hardware...

With ATI and 3Dlabs both full members of the OpenGL ARB, I didn't think that was going to happen. I also think Nvidia burned some bridges with their licensing fees for extensions that could easily be generic ARB ones.
 