My thread, between Demalian and me

RussSchultz

Professional Malcontent
Veteran
The rest of you keep your f'n posts out of here. Grrrr! (Of course, that might make the conversation really boring, since I don't know enough of the syntax of Cg to know where the language starts and stops. Don't know about Demalian.)

OK, let's discuss the idea of NVIDIA controlling the syntax of Cg and how they might change the specification to either corner the market or benefit themselves over anybody else.

Could you provide some samples of things that could change the specification of Cg into something that would favor one architecture over another?

If I look at C, which is ostensibly unchanged for more than a decade or two, I really can't imagine anything about the language itself that favors one architecture over another.

If we take its evolution to C++, how did the changes to the language favor one architecture over another?

Sure, you could possibly say that C++ might favor architectures that have prefetch, or certain indirect addressing modes because of the vtable lookups, but (in general), I think the language preceded the optimization.

But back to the point: can you come up with any examples of how a change to the language specification would favor one architecture over another?
 
RussSchultz said:
The rest of you keep your f'n posts out of here. Grrrr! (Of course, that might make the conversation really boring, since I don't know enough of the syntax of Cg to know where the language starts and stops. Don't know about Demalian.)

OK, let's discuss the idea of NVIDIA controlling the syntax of Cg and how they might change the specification to either corner the market or benefit themselves over anybody else.

OK, though I still think it is backwards. ;)

Could you provide some samples of things that could change the specification of Cg into something that would favor one architecture over another?

Sure. They control the specification of such things as the arguments passed to particular functions, the expected behavior of such functions, the data types allowed in the language, and, possibly (not sure here, depends on how the "back end" works) the behavior for handling unsupported functions. These are rough guesses as I most certainly don't "know" Cg, and am just basing this on things other languages specify.

Changing or preventing change in any one of these things could disadvantage other architectures, whereas an "open" standard might evolve or add functions to the specification by looking forward. nVidia would have a vested interest in doing so only AFTER their hardware has the functionality.

If I look at C, which is ostensibly unchanged for more than a decade or two, I really can't imagine anything about the language itself that favors one architecture over another.

Well, there are low-level things in C that can make "endian" order matter, but it is unclear whether Cg will have such low-level operations, and its datatypes will likely be abstract enough to prevent this.

Also, I programmed C on SAS/C, which had non-ANSI functions that differed from the ANSI functions. They were very appealing to use because in some cases the principles of a non-ANSI function fit better into what I was trying to do. Doing it the ANSI way worked as well, but it wasn't as efficient for my purposes. Mostly I remember this from working with memory handling.

If we take its evolution to C++, how did the changes to the language favor one architecture over another?

It was a profound change (in some ways not even the same language), but the tasks built into the C++ standard are very general in their demands. Graphics is not the same: it doesn't hurt to have a "printf" function work a particular way, because CPUs aren't as tied to the implementation of such functions, and it doesn't really matter if an implementation expectation makes one CPU take twice as long as another. What GPUs/VPUs are intended to achieve is a moving target, whereas for CPUs it is pretty much always the same things, done faster.

Sure, you could possibly say that C++ might favor architectures that have prefetch, or certain indirect addressing modes because of the vtable lookups, but (in general), I think the language preceded the optimization.

I would offer Java as a counterexample. While it, too, is a general language, the evolution of the way it interfaces, its protocols, and its functions and their behavior are the difference between a Java program that runs quickly on Microsoft's Java VM and one that works on Sun's (if it works at all). And that is in a setting where running at half speed doesn't matter as much as it will for a GPU.

But back to the point: can you come up with any examples of how a change to the language specification would favor one architecture over another?

It is pretty tough to go through the entire language specification and address every method another vendor might come up with that could be limited by the specification, since I'd have to invent the hypothetical method. The problem is that as long as it is possible for such a method to be hampered by the Cg spec, the fact that only nVidia has a say in the standard becomes a negative.

That stated, I have downloaded the spec and I'll continue to peruse it to see if anything in the specifications, or in the restrictions on profile overloading, jumps out at me. :LOL:

I wouldn't have anticipated what happened to the HTML standard, for example, such that I can't view most of the ATi site on my supposedly standards-compliant browser :-? (they've fixed their main page).
 
Sorry, but using C/C++ is not a good example. There is a committee behind both of those languages, and there is an ISO standard. Show me the Cg ISO standard and then you can compare them. C/C++ is so widely used because it is vendor neutral. Anybody can get the official ISO draft for a few bucks and implement their own compiler for Hardware-X.

Cg's specification may be open, but it is based on NVidia's current and future hardware. I'm not saying it will favour NVidia over ATI, but certain design decisions of the language may not work out as well on other IHV's hardware.

I'm all for OPEN standards. Most of the hardware you see today is thanks to OpenGL and DirectX. From Voodoo all the way up to GeForce, you've just been witnessing OpenGL 1.2 being implemented in hardware. The IHV's had an OPEN model and standard to follow.
 
Try again, Fresh. K&R C was first. It was used for years as an industry standard before it was "standardized" as ANSI C. Likewise, AT&T also controlled C++ when it first came out, and for years they controlled the spec until ISO handed down the standard. STL came out of HP, and the same process occurred. I would go much further and say that the standards committees fucked up (bloated) C++ bigtime, and that Java, Eiffel, Perl, and Python evolved much better under tight control by a single group/entity.


Standards committees don't invent new languages. They formalize the specifications and fix problems in existing ones. I sit on two standards committees right now and have participated in the process for the last 5 years, including several W3C and IETF working groups. A vendor usually develops a new specification; then, once several vendors have competing specifications, there is an attempt to "harmonize" the proprietary specs into a single language.


But why are we getting into this political discussion? Russ asked for hard facts, such as how Cg differs from DX9 HLSL and how Cg (THE LANGUAGE, not the RUNTIME LIBRARY) will favor NV30 hardware.


It seems the people who continue to engage in useless political arguments just aren't technical enough to ferret out the real issues in the Cg spec and post them.
 
Language design by committee is just a bad idea.
The only one I can think of is Ada, and it shows.
Hideously overcomplicated, with no consistency.
 
ERP said:
Language design by committee is just a bad idea.
The only one I can think of is Ada, and it shows.
Hideously overcomplicated, with no consistency.

OK, had some sleep, let me answer this briefly since it is off topic at this point:

What language is being designed by committee? AFAIK, it is only committee approval at "worst", and hasn't that happened to C/C++, as someone has stated? Perhaps an answer via PM, as my more detailed response will be sent to you, ERP, or a brief link to the other Cg thread where you could post a long answer.
 