How Cg favors NVIDIA products (at the expense of others)

Chalk one up for the 'good guys' :LOL:

If it didn't favor one company, performance would be the same... even HLSL is better than Cg on a 5600.
Your reply was typical, considering you would never just say 'maybe I was wrong'.
 
RussSchultz said:
-The idea of pluggable backends is inherently "fair" to anybody who wants to make a backend.

This isn't true--or rather, it's true only if you don't consider development costs. A pluggable back-end imposes a development and support cost on an IHV, and only benefits those IHVs that aren't well-served by the "standard" HLSL. Other IHVs would incur this development cost to their own detriment, since it would serve to promote a technology that is of benefit only to its competitors.
 
Doomtrooper said:
Chalk one up for the 'good guys' :LOL:

If it didn't favor one company, performance would be the same... even HLSL is better than Cg on a 5600.
Your reply was typical, considering you would never just say 'maybe I was wrong'.
Christ. Please reread what I posted.

I'm talking about the language.

You're talking about the backend implementation.

They are not one and the same. The crappy C compiler for the DSP I use does not mean that C is crap, it just means that the compiler is crap.

Yes, this implementation seems to be detrimental to non-GF-FX platforms. I agree with you. It certainly didn't occur to me that their generic 2.0 implementation would optimize for register usage rather than instruction count.

But that has nothing to do with the language or the concept.

However, this data does show that a vendor-specific back end for the shader compiler is a valuable addition (particularly for NVIDIA).
 
antlers said:
RussSchultz said:
-The idea of pluggable backends is inherently "fair" to anybody who wants to make a backend.

This isn't true--or rather, it's true only if you don't consider development costs. A pluggable back-end imposes a development and support cost on an IHV, and only benefits those IHVs that aren't well-served by the "standard" HLSL. Other IHVs would incur this development cost to their own detriment, since it would serve to promote a technology that is of benefit only to its competitors.
I don't understand what you're saying. Why would ATI's backend for <unnamed HLSL> benefit Trident, for example?

Or are you simply retreading the whole "why would any IHV support Cg, since it obviously helps NVIDIA" argument?

Ignore that it's Cg, and talk about the idea of a pluggable backend for the shader compiler.

Is it a good thing or not?
 
RussSchultz said:
Ignore that it's Cg, and talk about the idea of a pluggable backend for the shader compiler.

Is it a good thing or not?

Are you talking from the developers' point of view or the consumers'? Those are remarkably different things. :) Assuming consumers, then:

If we're speaking about pluggable backends in general, then yes, probably a good thing. (Pluggable backends? Why does this make me think of sodomy all of a sudden? :D)

If controlled by one specific IHV, as in the case of Cg, then no. Why? Well, obviously Nvidia's not going to put 2 bits together to support other manufacturers' hardware. If those other manufacturers don't care either, any title which uses Cg would suffer from a performance standpoint, and suffer in a way that would be unnecessary if the title had used a standard HLSL instead.

Also, even if other IHVs provide their own rear-ends to be plugged (oops! :)), titles released BEFORE the hardware gained specific Cg support would not benefit from any optimization.

From the consumer POV, it's almost exclusively a losing proposition unless one owns Nvidia hardware. As there is talk from more than one software house about adding specific features only for Nvidia cards, it seems we're moving back into the non-standardized dark ages, and Cg can only serve to speed this up. Decidedly ungood for us actual game players.


*G*
 
The ideal situation would be for Microsoft to provide an API, as GCC does, for writing the backend. Microsoft would take care of parsing, construction of the abstract syntax tree, and even intermediate-level optimizations, while the vendors would handle instruction scheduling, peephole optimizations, register allocation, and so on. Then, in addition to the DX9 device driver and ICD, you'd also install a backend, and games would invoke compilation once during install, at startup, or at runtime.
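
Something like this is what I have in mind; a minimal C++ sketch, where every name (ShaderIR, IShaderBackend, GenericBackend, buildShader) is invented for illustration and is not any real DX9 or driver interface:

```cpp
// Minimal sketch of a vendor-pluggable shader backend. Every name here
// (ShaderIR, IShaderBackend, GenericBackend, buildShader) is invented for
// illustration; this is not a real DirectX or driver API.
#include <cstdint>
#include <iostream>
#include <vector>

// What the runtime's front end would hand over after parsing, AST
// construction and intermediate-level optimization.
struct ShaderIR {
    std::vector<uint32_t> ops;          // some generic intermediate encoding
};

struct CompiledShader {
    std::vector<uint32_t> nativeCode;   // vendor-specific machine code
};

// Each IHV ships one of these alongside its driver/ICD and does its own
// instruction scheduling, peephole optimization and register allocation.
class IShaderBackend {
public:
    virtual ~IShaderBackend() = default;
    virtual CompiledShader compile(const ShaderIR& ir) = 0;
};

// Fallback backend: just passes the generic encoding through untouched.
class GenericBackend : public IShaderBackend {
public:
    CompiledShader compile(const ShaderIR& ir) override {
        return CompiledShader{ir.ops};
    }
};

// A game would trigger this once at install, startup or load time.
CompiledShader buildShader(IShaderBackend& backend, const ShaderIR& ir) {
    return backend.compile(ir);
}

int main() {
    GenericBackend backend;
    ShaderIR ir{{0x01, 0x02, 0x03}};
    std::cout << "compiled " << buildShader(backend, ir).nativeCode.size()
              << " words\n";
}
```

The point is just that the front end stays common and only the last mile is vendor code; whether Microsoft would ever expose such a seam is another question.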
 
When NVIDIA introduced it, I thought Cg was really a good tool. Now I don't think so :(

Cg is mainly a tactical move by NVIDIA first, and a tool designed to help developers second. If the GeForce FX line had been a great success, Cg could have been nice. But in practice, NVIDIA uses it to hide the FX's problems with DX9 and DX8.1. Why doesn't NVIDIA allow ps_1_4 in Cg? Because the FX units in every GeForce FX can't work with ps_1_4, so when developers use ps_1_4, the GeForce FX has to fall back on its rather slow FP units… NVIDIA wants developers to use ps_1_1 and just a little ps_2_0, à la Cg.

I think Cg should have been an advanced framework designed to help developers. Cg should have used the HLSL compiler for DX9, and NVIDIA could have worked with Microsoft on an optional HLSL profile designed to optimise register usage ahead of instruction count. Every compiler has many profiles; why can't HLSL have a profile optimised to save registers?

Why didn't NVIDIA optimise its shader engine (in the driver) to be more efficient even with HLSL code?

I've run many shader performance tests on many graphics cards, and I've seen that ATI's shader engine is really good: it copes well with many different shading styles and optimises every shader for co-issue. NVIDIA's shader engine doesn't optimise even the most trivial case: MUL + ADD always stays MUL + ADD, even for a * b = c followed by c + d = e. Of course, any good developer will write the MAD form himself. There are also some good points in NVIDIA's shader engine. For example, Microsoft defines SINCOS as a macro of 8 instructions (8 cycles), while the GeForce FX can do it in only 1 cycle (I think Radeons do it in 6 cycles).
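
To show what I mean by that trivial MUL + ADD case, here is a toy peephole pass in C++ (the instruction format and register model are my own invention, not anything from a real driver): a MUL whose result only feeds the following ADD gets folded into one MAD.

```cpp
// Toy peephole pass: fold "c = a * b; e = c + d" into "MAD e, a, b, d".
// The instruction format and register model are invented for illustration.
#include <iostream>
#include <string>
#include <vector>

struct Instr {
    std::string op;              // "MUL", "ADD" or "MAD"
    std::string dst;
    std::string src0, src1, src2;
};

// True if 'reg' is read by any instruction at index 'from' or later.
static bool readLater(const std::vector<Instr>& code, size_t from, const std::string& reg) {
    for (size_t i = from; i < code.size(); ++i)
        if (code[i].src0 == reg || code[i].src1 == reg || code[i].src2 == reg)
            return true;
    return false;
}

static std::vector<Instr> foldMulAdd(const std::vector<Instr>& in) {
    std::vector<Instr> out;
    for (size_t i = 0; i < in.size(); ++i) {
        const Instr& cur = in[i];
        if (cur.op == "MUL" && i + 1 < in.size()) {
            const Instr& next = in[i + 1];
            // The ADD consumes the MUL result and nothing reads it afterwards.
            if (next.op == "ADD" && next.src0 == cur.dst && !readLater(in, i + 2, cur.dst)) {
                out.push_back({"MAD", next.dst, cur.src0, cur.src1, next.src1});
                ++i;   // skip the ADD we just absorbed
                continue;
            }
        }
        out.push_back(cur);
    }
    return out;
}

int main() {
    // c = a * b; e = c + d   -->   MAD e, a, b, d
    std::vector<Instr> code = {{"MUL", "c", "a", "b", ""},
                               {"ADD", "e", "c", "d", ""}};
    for (const Instr& ins : foldMulAdd(code))
        std::cout << ins.op << ' ' << ins.dst << ", " << ins.src0 << ", "
                  << ins.src1 << (ins.src2.empty() ? std::string() : ", " + ins.src2) << '\n';
}
```

A real shader engine would of course also match the commuted form (d + c) and work on a proper dependence graph rather than adjacent instructions; this only shows the idea.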

There is another problem with Cg: NVIDIA can break some DX9 rules. I don't know whether they do, or whether they will, but it's entirely possible. The DX rules are very good for 3D; they are the main guarantee of stability and fair competition in the 3D market.

NVIDIA wants to control, alone, the evolution of 3D, what people want, and what developers can do in their engines. That can't be good for the majority of people. I think (and have always thought) that Microsoft and the OpenGL ARB are the two policemen who should drive the evolution of 3D. I don't like seeing 3D partially driven by the money of one IHV.
 
Tridam said:
Why didn't NVIDIA optimise its shader engine (in the driver) to be more efficient even with HLSL code?

Because writing good compilers is hard. As soon as NVidia released the source code to Cg, I saw it had a rather trivial architecture for a compiler, like it had been hacked together quickly. Recognizing a MAD in all cases where it can be used requires the compiler to be really good at instruction selection, which in a tree-based intermediate representation typically relies on optimal tree covering algorithms.
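
To be concrete about what instruction selection on a tree means, here is a greedy maximal-munch sketch in C++ (all types and mnemonics invented, nothing to do with Cg's actual source, and far from an optimal tree-covering algorithm): working on the expression tree rather than a flat instruction list, the ADD-over-MUL pattern is covered by one MAD tile before any MUL/ADD pair is ever emitted.

```cpp
// Greedy tile matching over an expression tree: an ADD whose operand is a
// MUL gets covered by a single MAD. All types and mnemonics are invented.
#include <iostream>
#include <memory>
#include <string>

struct Node {
    std::string op;                    // "ADD", "MUL", or a leaf register name
    std::shared_ptr<Node> lhs, rhs;    // null for leaves
};

static std::shared_ptr<Node> leaf(const std::string& r) {
    return std::make_shared<Node>(Node{r, nullptr, nullptr});
}
static std::shared_ptr<Node> node(const std::string& op,
                                  std::shared_ptr<Node> l, std::shared_ptr<Node> r) {
    return std::make_shared<Node>(Node{op, std::move(l), std::move(r)});
}

static int nextTemp = 0;

// Emits instructions for 'n' and returns the register holding its value.
static std::string select(const std::shared_ptr<Node>& n) {
    if (!n->lhs) return n->op;                     // leaf: already in a register
    // Tile: ADD(MUL(x, y), z) or ADD(z, MUL(x, y))  ->  MAD dst, x, y, z
    const Node* mul = nullptr;
    if (n->op == "ADD") {
        if (n->lhs->op == "MUL" && n->lhs->lhs) mul = n->lhs.get();
        else if (n->rhs->op == "MUL" && n->rhs->lhs) mul = n->rhs.get();
    }
    std::string dst = "t" + std::to_string(nextTemp++);
    if (mul) {
        const std::shared_ptr<Node>& other = (mul == n->lhs.get()) ? n->rhs : n->lhs;
        std::string x = select(mul->lhs), y = select(mul->rhs), z = select(other);
        std::cout << "MAD " << dst << ", " << x << ", " << y << ", " << z << '\n';
    } else {
        std::string a = select(n->lhs), b = select(n->rhs);
        std::cout << n->op << ' ' << dst << ", " << a << ", " << b << '\n';
    }
    return dst;
}

int main() {
    // (a * b) + d  becomes one MAD instead of a MUL followed by an ADD.
    select(node("ADD", node("MUL", leaf("a"), leaf("b")), leaf("d")));
}
```

Doing this reliably for nested and shared subexpressions and commuted operands is where real tree-covering machinery earns its keep; the greedy version above is only the simplest approximation.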

No need to infer malfeasance where incompetence will do. In NVidia's case, I don't even think it has to do with incompetence; it's more about rushing Cg to market before it was ready.

Microsoft was able to do a better job because they have a huge team dedicated to writing compilers and have been doing it for far longer.
 
RussSchultz said:
I'm not going to retread these waters.

14 pages haven't brought up any architectural reasons why the language favors one architecture over another, but keep in mind:

-The language syntax is essentially identical to HLSL.
-The idea of pluggable backends is inherently "fair" to anybody who wants to make a backend.

Those two items lend heavy credence, for me, to the idea that the language and the concept cannot favor one platform over another on technical terms.

OK, so where in these 14 pages do you see a reason to use Cg? I can't think of a single reason to choose it over M$ HLSL or OpenGL. As you say, nVidia has nothing on these other approaches in terms of ideas or language. So....hard to see any reason at all to use Cg. Right?

If I were nVidia, I wouldn't waste my time on something like Cg unless I could tie it to my hardware--indeed, that would kind of make the whole effort worthwhile. Obviously, nVidia has no monopoly on ideas or on "moving the industry forward", so Cg offers nothing unique from that perspective.

I think it's maybe a little off center to ask people to show you "technical reasons" Cg shouldn't be used, while you manage to avoid the question of why anyone would want to use it in the first place ahead of the other approaches...;) How about some technical reasons why it should be used instead of the other approaches? That would make for an interesting discussion. In fact, when the DX9 and Ogl2 HLSL's come down--heck, nVidia could probably bow out of the whole HLSL picture, right? Cg would at best seem redundant at this stage of the game, IMO.
 
Well, NVidia could still produce compilers for DX9 HLSL and OpenGL2.0, since there is no way a single compiler is going to produce optimal code for all architectures.

Right now, MS's FXC produces nearly optimal code for a generic DX9 virtual machine, but the assumption that the driver can make the best backend optimization choices given only the DX9 assembly is somewhat suspect.

In fact, almost all vendors for OGL2.0 will have to produce their own compilers.
 
DemoCoder said:
Tridam said:
Why didn't NVIDIA optimise its shader engine (in the driver) to be more efficient even with HLSL code?

Because writing good compilers is hard. As soon as NVidia released the source code to Cg, I saw it had a rather trivial architecture for a compiler, like it had been hacked together quickly. Recognizing a MAD in all cases where it can be used requires the compiler to be really good at instruction selection, which in a tree-based intermediate representation typically relies on optimal tree covering algorithms.

No need to infer malfeasance where incompetence will do. In NVidia's case, I don't even think it has to do with incompetence; it's more about rushing Cg to market before it was ready.

Microsoft was able to do a better job because they have a huge team dedicated to writing compilers and have been doing it for far longer.

Of course writing good compilers is really hard. That's why NVIDIA should have worked with Microsoft and ATI on HLSL. Instead, NVIDIA chose to work alone on the Cg compiler without ever having done it before.

I think NVIDIA's driver team should have worked on optimising the shader engine for HLSL and assembly code instead of working on Cg and benchmark 'optimisations'. Of course NVIDIA's team isn't incompetent ;) I just think they had the wrong priorities. Unfortunately, NVIDIA's priority was to hide some GeForce FX weaknesses with marketing and strategic moves. That's why I think that if the GeForce FX had been a great success, we wouldn't be discussing Cg.
 
RussSchultz said:
I don't understand what you're saying. Why would ATI's backend for <unnamed HLSL> benefit Trident, for example?

Or are you simply retreading the whole "why would any IHV support Cg, since it obviously helps NVIDIA" argument?

OK, so answer the question: "Why would they?" Sure, other IHVs might see some benefit from creating their own backends, but would those benefits outweigh the benefit NVIDIA gets from developers using Cg? I don't see why any sensible IHV would want to help NVIDIA out.

Ignore that it's Cg, and talk about the idea of a pluggable backend for the shader compiler.

Is it a good thing or not?

Yes, it is a good thing. But I don't see how you can ignore that NVIDIA has basically tied the knot between the backend and the language. NVIDIA made Cg for one reason: to create shader code that runs fast on NVIDIA cards. They obviously don't care about anyone else, and obviously don't want to help other IHVs. Hence the lack of PS1.4 support in Cg. Now we know that even GeForce FX cards run PS1.4 code faster than they run PS1.1 code... so why no PS1.4 support? Easy answer: ATI cards gain more from PS1.4 than NVIDIA cards do. So I guess NVIDIA would rather gain nothing than help its competitors gain more. Which is basically the reason I don't see any IHVs jumping into the Cg boat.

Having one IHV control a language is not a good thing.
 
StealthHawk said:
Hence the lack of PS1.4 support in Cg. Now we know that even GeForce FX cards run PS1.4 code faster than they run PS1.1 code...

????
I think you're wrong about ps_1_4 performance. The GeForce FX runs ps_1_4 as slowly as it runs ps_2_0. That's why NVIDIA doesn't like ps_1_4.
 
WaltC said:
OK, so where in these 14 pages do you see a reason to use Cg? I can't think of a single reason to choose it over M$ HLSL or OpenGL. ...would at best seem redundant at this stage of the game, IMO.

Walt, you're being just plain obstinate. A language that can span OpenGL's and DirectX's shader languages is a useful tool and is not redundant.

But that doesn't matter, because you'll throw the same bullshit up about "Cg isn't about that, it's about controlling the industry", or whatever other paranoid detractions you can think of (you packed quite a few into your 3 paragraphs), rather than accepting that as a tool it has a use that people may find attractive.

I'm sorry it came from Nvidia. Maybe if it came from ATI you'd lap it up and be just as pro as you currently are anti?
 
RussSchultz said:
Walt, you're being just plain obstinate. A language that can span OpenGL's and DirectX's shader languages is a useful tool and is not redundant.

This is true, and probably Cg's big saving grace. The next question is: how many developers actually feel the need to compile to both DirectX and OpenGL? Not many, I would wager.

RussSchultz said:
14 pages haven't brought up any architectural reasons why the language favors one architecture over another, but keep in mind:

-The language syntax is essentially identical to HLSL.
-The idea of pluggable backends is inherently "fair" to anybody who wants to make a backend.

Let's be honest here - how many other IHVs are going to want to write a backend for Cg and publicly show themselves to be 'supporting' it in any shape or form? Again, not many is my guess. If it were the only HLSL available, I'm sure they would, but with the DirectX 9 HLSL available they have no need to (unless there is a massive shift in the industry towards abandoning Microsoft's HLSL for Cg).


As far as I can see, Cg is a valuable tool, but that becomes a moot point given that it was created by a major IHV and not a third party, which will probably stunt its growth and potential to a large extent.
 
Like I said: it's a shame it ends up being IHV versus IHV on technology such as this. It's ideally beneficial to all IHVs (ignoring the politics).

I think Microsoft needs to open up their HLSL compiler for companies who feel the need to be able to re-order and re-schedule instructions to better suit their hardware.

At that point, Cg would be essentially moot because, as other threads have mentioned, not a whole lot of people are really targeting both DirectX and OpenGL at the same time.
 
RussSchultz said:
Walt, you're being just plain obstinate. A language that can span OpenGL's and DirectX's shader languages is a useful tool and is not redundant.

I would think that statement needs a good deal of qualification, such as "useful to whom?" and "how 'efficiently' does it span the APIs?", before it has any significant meaning. If it turns out to fill a genuine developer niche, there will no doubt be people who use it to cross APIs.

But that doesn't matter, because you'll throw the same bullshit up about "Cg isn't about that, it's about controlling the industry", or whatever other paranoid detractions you can think of (you packed quite a few into your 3 paragraphs), rather than accepting that as a tool it has a use that people may find attractive.

Aren't you assuming an opinion of mine not expressed in that post?...;) But my opinion is that nVidia started work on Cg at a time when it thought it had the 3D chip market "sewn up"--that period between the absorption of 3dfx and the advent of R3xx. Times have changed, and I'd say that if nVidia had to start Cg all over again, the company probably wouldn't bother. Just IMO. I just think it is naive in the extreme to think that nVidia started Cg with an eye towards steering the industry in any direction other than the one it wanted to travel.

I'm sorry it came from Nvidia. Maybe if it came from ATI you'd lap it up and be just as pro as you currently are anti?

With M$ HLSL and OpenGL HLSL coming around, why should ATi bother? That's the point.

I think Microsoft needs to open up their HLSL compiler for companies who feel the need to be able to re-order and re-schedule instructions to better suit their hardware.

See, this is the kind of comment I find puzzling. I was under the impression that M$ actively solicited the participation of all the major IHVs, as well as a consensus of software developers, when it formulated things like D3D versions and its HLSL. The chief beneficiary of HLSL is the software developer, and the IHV benefits indirectly according to how well it supports the agreed-upon standards. nVidia has certainly had as much opportunity to participate in the formulation of DX9 and M$'s HLSL as ATi has. Indeed, M$ has had a much stronger commercial relationship with nVidia through the Xbox. So I don't see M$'s HLSL compiler as any more of a "problem" for nVidia than it is for ATi. The problem, I think, comes when your hardware as an IHV can't keep pace with the rest of the industry. That's when an IHV starts taking the "workaround" route. Again, just IMO.
 
Hi WaltC,
WaltC said:
OK, so where in these 14 pages do you see a reason to use Cg? I can't think of a single reason to choose it over M$ HLSL or OpenGL. As you say, nVidia has nothing on these other approaches in terms of ideas or language. So....hard to see any reason at all to use Cg. Right?
Perhaps not in these 14 pages, but of the roughly 10 dev studios and publishers I have personal ties to, 6 have been using Cg for quite some time already. Apparently there's something interesting in Cg for them, no?

Incidentally, doesn't TR6 use Cg as well?

93,
-Sascha.rb
 
Someone here already posted the logic behind ATI not supporting Cg (hell, any IHV for that matter). How would ATI approach Nvidia for changes in the HLSL? Would ATI release code to a competing IHV showing how their VPUs work, or other classified information?

All of these things would have to be done for optimal performance, and no company is giving away its IP. :LOL:
 