What was that about Cg *Not* favoring Nvidia Hardware?

Joe DeFuria said:
nVidia has one interest: getting code as optimal on nVidia hardware as possible. MS has one interest: having DirectX as a whole be as optimal as it can be across multiple hardware platforms while staying as consistent as possible.

I prefer the latter approach.

Exactly. And I cannot think of any plausible reason why nVidia didn't just allocate those Cg resources to making HLSL better, building on the de facto standard that DX already is for the majority of games.
 
Randell said:
Doomtrooper said:
Ok Randell, I stand corrected... 8 months :p ...

pfft, still can't count, DT - mid-June to mid-December I still make 6 months (and a few days) :)

Alright... you win... but I will use my backdoor excuse and include the playable demo time prior to release.
 
RussSchultz said:
Ignore NVIDIA; ignore the marketing aspect. What, technically, is bad about a shader language that can target OpenGL and DirectX, and has extensions for the IHV to add their own backend compiler for optimization at runtime?

Now Russ, this is a fallacy.
One cannot ignore nVidia and the effect they will have upon their language, Cg.

Technically, I don't think people are against any HLSL.
But you are BLIND if you cannot see the possibilities for abuse here.
And if you cannot see that this game is an INDICATOR of how Cg might be used, I can only tell you to use your brain.

You say "there is nothing wrong with the tool."
I say I don't trust the hand that wields it. And I have a perfect example of why.
 
Doomtrooper said:
The two NV30 profiles in the Cg compiler compile to two new, NVIDIA-proprietary OpenGL extensions, NV_vertex_program2 and NV_fragment_program. You cannot use them on a 9700.


http://www.cgshaders.org/forums/viewtopic.php?t=419&highlight=9700

Yep, compiles to two NEW PROPRIETARY EXTENSIONS :LOL:

http://developer.nvidia.com/view.asp?IO=cg_toolkit

# Support for new profiles
* vs_1_1 for DirectX 8 and DirectX 9
* vs_2_0 and vs_2_x for DirectX 9
* ps_1_1, ps_1_2 and ps_1_3 for DirectX 8 and DirectX 9
* ps_2_0 and ps_2_x for DirectX 9
* arbvp1 [OpenGL ARB_vertex_program]
* arbfp1 [OpenGL ARB_fragment_program]
* vp20, vp30 [NV_vertex_program 1.0 and NV_vertex_program 2.0]
* fp30 [NV30 OpenGL fragment programs]
* fp20 [NV_register_combiners and NV_texture_shader]
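To make the difference concrete, here's a minimal sketch of how a developer picks one of those profiles through the Cg runtime (call names as I remember them from the toolkit docs above, so treat the details as illustrative rather than gospel):

    // Sketch: the profile argument decides what the Cg compiler emits.
    #include <Cg/cg.h>

    CGprogram loadFragmentShader(CGcontext ctx, bool nvOnlyPath)
    {
        // arbfp1 targets the cross-vendor ARB_fragment_program path;
        // fp30 targets the NVIDIA-only NV_fragment_program extension,
        // which a 9700 cannot run.
        CGprofile profile = nvOnlyPath ? CG_PROFILE_FP30
                                       : CG_PROFILE_ARBFP1;

        // Same source file either way; only the backend output changes.
        return cgCreateProgramFromFile(ctx, CG_SOURCE, "shader.cg",
                                       profile, "main", NULL);
    }

One source file, one flag, and the output is either portable or proprietary.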
 
Gah. Cg does not create vendor-specific games. Idiots do. Those same idiots could use OpenGL to do the same thing.

It's obvious you CAN make a vendor-specific title using Cg, but it's just as obvious that such an endeavor would be folly in a business sense.

All you have is a perfect example of an idiot making a vendor-specific program.

So, I'll respectfully disagree that I'm BLIND to the possibilities of abuse. I see them quite clearly--and believe none of those possibilities has a chance of being realized in a manner that will cause problems for anybody except those attempting to abuse them.
 
Bjorn said:
And I think that, for MS, consistency comes first and optimization later.

You're wrong. The HLSL compiler generates optimal assembly, and optimal assembly is just that: there's only one optimal path for both IHVs -- and that's the fewer instructions the better. Furthermore, Microsoft works very closely with NVIDIA, ATI, and everyone else to get everything working at maximum performance. If NVIDIA felt HLSL was not fast enough, they could have put their resources into helping Microsoft with the backend. _OR_ at the extreme they could have written their own backend, but they didn't do that; they created their own language, not just their own compiler.
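For reference, feeding HLSL through Microsoft's backend is a single runtime call; here's a rough sketch against the DX9 D3DX library (signature from memory, so verify against d3dx9shader.h):

    // Sketch: compile HLSL source to shader bytecode at runtime.
    #include <d3dx9shader.h>

    LPD3DXBUFFER compilePixelShader(const char *src, UINT len)
    {
        LPD3DXBUFFER code = NULL, errors = NULL;
        D3DXCompileShader(src, len,
                          NULL, NULL,        // no #defines, no includes
                          "main", "ps_2_0",  // entry point, target profile
                          0, &code, &errors, NULL);
        return code;   // bytecode the driver then maps to its hardware
    }

Any vendor-specific scheduling happens below this call, inside the driver -- which is exactly where the IHV input that Microsoft collects ends up.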
 
RussSchultz said:
Gah. Cg does not create vendor-specific games. Idiots do. Those same idiots could use OpenGL to do the same thing.

Don't be silly. It's not black-and-white. Vendor-specific games, no, Cg isn't to blame there. But with vendor-biased games, Cg gives you no choice in the matter.
 
Joe DeFuria said:
2) Why is having multiple IHVs each writing their own compilers "technically" good? Is it better for each IHV to reinvent the wheel?

Yes, this is much better. It is, in fact, what I believe GLSlang is supposed to do for OpenGL 2.0.

The reason it is good is simply this:
By standardizing a high-level language instead of the assembly, hardware vendors are free to change much, much more in hardware than they would be if the assembly had to stay backwards-compatible, and without losing speed to emulation.

Another way of looking at it is to look at the PC market. Why are we still bound to x86? The reason is obvious: programs were designed too closely around that architecture. If, instead, a higher-level language had been standardized, we wouldn't be tied down to the x86 architecture, and today's PCs would be much higher-performance than they are.

This is why I want to see a standardized HLSL that is made to work with compilers developed by the hardware vendors for optimal performance. Heck, it doesn't even need to be an actual language that's standardized; it could be, say, half-compiled code that is fully compiled at runtime by the drivers.
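A rough sketch of what that model looks like through the proposed OpenGL 2.0 interface, where the vendor's own compiler sits inside the driver (entry points per the GLslang proposals; the final names may differ):

    // Sketch: source is handed to the driver, and each vendor's own
    // compiler does the optimization for its hardware.
    #include <GL/gl.h>

    GLuint compileFragmentShader(const char *src)
    {
        GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(shader, 1, &src, NULL); // raw high-level source
        glCompileShader(shader);               // vendor compiler runs here

        GLint ok = 0;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
        return ok ? shader : 0;
    }

No assembly ever crosses the API boundary, so the hardware underneath is free to look like anything.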

And one last thing:
While not all that many developers go for support of both DX and OpenGL, a few do. Croteam and Epic are two that I'm aware of that do it right now, and others have in the past.
 
RussSchultz said:
And why would that be?

Because NVIDIA are the only ones who have input on it. Do you think NVIDIA know, or care, about performance on other IHVs' hardware? Of course not. So unless your plan is to use Cg for NVIDIA cards and HLSL for other cards, your game WILL favour NVIDIA. And I'm not even touching the subject of compatibility.
 
Cg is not just a *shader language*... it's a *trademarked marketing tool*...

And what do you think DX is? A bouquet of sweet-smelling roses?

If the OGL camp wants to use Cg, that's their problem. My discussion is solely limited to DX9 and Cg. I don't care about OGL. I don't want to use OGL. I'm never going to use OGL. PERIOD.

That's all fine and dandy, just realize that DX9 is not an option for all of us, as it's not available on non-Windows platforms...

I DO IGNORE the fact that Cg "is nVidia's." I don't care if it's nVidia's, ATI's, SiS', or any other single IHV's. I see no good TECHNICAL reason for its existence. And I only see the potential to temporarily fragment the market and cause short-term pain and inconvenience.

Ummm, just like C, C++, and Java have? I could understand your complaints if we were talking about an API, but we're talking about a language here... I mean, hey, with PRman I shouldn't need another Renderman renderer, but of course that doesn't stop others from making them (RenderDotC, AIR, 3Delight, AQSIS), and PRman isn't available for my platform...
 
790 said:
Because NVIDIA are the only ones who have input on it. Do you think NVIDIA know, or care, about performance on other IHVs' hardware? Of course not. So unless your plan is to use Cg for NVIDIA cards and HLSL for other cards, your game WILL favour NVIDIA. And I'm not even touching the subject of compatibility.

I think you're wrong. NVIDIA is the only one who has control over the language specification, not the implementation.

Though this argument has been beaten to death, do you really think the language syntax can be biased?
 
Chalnoth said:
By standardizing a high-level language instead of the assembly

I think this is perhaps an "ideal", but in reality it's too complicated to pull off (evidently). For one, you are taking away developers' power to work at the assembly level, which is still extremely important.

This is NOT like the x86 situation, because if you didn't notice, the ps/vs specs change completely in every revision. ps2.0 is nothing like ps1.1, nor is it required to be. So how are IHVs constrained, when they are free to reinvent the spec in every hardware revision?
 
RussSchultz said:
I think you're wrong. NVIDIA is the only one who has control over the language specification, not the implementation.

You've gotta be kidding. NO IHVs at all will EVER write backends for Cg, just like no IHVs wrote backends for Glide. So it doesn't matter what is "possible", only the reality of the situation, which is that NVIDIA will be the only ones ever writing the compiler.
 
790 said:
You've gotta be kidding. NO IHVs at all will EVER write backends for Cg, just like no IHVs wrote backends for Glide. So it doesn't matter what is "possible", only the reality of the situation, which is that NVIDIA will be the only ones ever writing the compiler.

Well, if you kneejerks would give it a chance...
 
RussSchultz said:
Well, if you kneejerks would give it a chance...

Nothing to do with us. We don't run IHVs. Do you honestly think ATI or any other IHV would write a backend for Cg? That wouldn't happen in your wildest dreams, because it would be technological and economic suicide.

BTW, I'm not a kneejerk, I've worked with both Cg and HLSL -- Cg long before I got into HLSL -- now I've made my choice, and it wasn't a hard one.
 
790 said:
Nothing to do with us. We don't run IHVs. Do you honestly think ATI or any other IHV would write a backend for Cg? That wouldn't happen in your wildest dreams, because it would be technological and economic suicide.

You're running a circular argument by saying Cg is vendor-biased because nobody will write backends for it because it is vendor-biased.
 
RussSchultz said:
You're running a circular argument by saying Cg is vendor-biased because nobody will write backends for it because it is vendor-biased.

No, the crux of the argument is NOT that it's vendor-biased; it's that it's vendor-controlled (at least the language and tools are). Don't you realise what it would mean for ATI to put the future of their development into NVIDIA's tools? It's like Apple switching to Visual Basic or C# if Microsoft gave them a backend.

It's just nuts, because no matter how harmless it seems, it's a dog-eat-dog world, and if ATI promoted Cg, their eggs would all be in NVIDIA's basket, and NVIDIA is free to tip it over whenever they want.
 
790 said:
It's just nuts, because no matter how harmless it seems, it's a dog-eat-dog world, and if ATI promoted Cg, their eggs would all be in NVIDIA's basket, and NVIDIA is free to tip it over whenever they want.

And how could they do that? By suddenly changing the language specification, with every lemming just going along with them?
 
790 said:
I think this is perhaps an "ideal", but in reality it's too complicated to pull off (evidently). For one, you are taking away developers' power to work at the assembly level, which is still extremely important.

No, working at the assembly level is not important for developers. What it does do is allow very small performance improvements. But those performance improvements will not translate to future hardware, whereas a runtime-compiled HLSL will continue to increase in speed with new hardware.
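That is the whole point of compiling at runtime: ask the driver what the best target is today and compile for that, something like this with the D3DX helpers (names from memory; treat it as a sketch):

    // Sketch: target whatever the installed card supports best.
    #include <d3dx9shader.h>

    LPD3DXBUFFER compileForThisCard(LPDIRECT3DDEVICE9 dev,
                                    const char *src, UINT len)
    {
        // e.g. "ps_1_1" on an older card, "ps_2_0" on a current one,
        // and whatever comes next on future hardware -- the game
        // never needs recompiling.
        LPCSTR profile = D3DXGetPixelShaderProfile(dev);

        LPD3DXBUFFER code = NULL;
        D3DXCompileShader(src, len, NULL, NULL, "main", profile,
                          0, &code, NULL, NULL);
        return code;
    }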

This is NOT like the x86 situation, because if you didn't notice, the ps/vs specs change completely in every revision. ps2.0 is nothing like ps1.1, nor is it required to be. So how are IHVs constrained, when they are free to reinvent the spec in every hardware revision?

No, I don't think PS 2.0 is that much different from PS 1.1 (though it does, of course, have hugely more instructions and resources). From what I can tell, every instruction in PS 1.1 correlates directly to one in PS 2.0. Please let me know if I'm wrong (a very specific example would be best).
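To show the kind of one-to-one mapping I mean (written from memory of the two specs, so correct me if it's off), a typical PS 1.1 texture-modulate:

    ps_1_1
    tex t0            ; sample texture stage 0
    mul r0, t0, v0    ; modulate by the diffuse color

and its direct PS 2.0 counterpart:

    ps_2_0
    dcl t0.xy         ; inputs and samplers are now declared explicitly
    dcl_2d s0
    dcl v0
    texld r0, t0, s0  ; tex becomes texld, with an explicit sampler
    mul r0, r0, v0    ; the mul carries over unchanged
    mov oC0, r0       ; the output register is now explicit

More declarations and more resources, but every old instruction still has an obvious new home.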

And as the language gets more and more complex, there will be substantial overhead in properly supporting older hardware. There is also the possibility of assembly-level "hints" that give the hardware a better idea of what to do (i.e. when and how the pipelines can avoid stalls, which branches are more likely to be taken, how to handle multiple branches, etc.). Such hints would be untenable to manage at the developer level, for a variety of reasons.

Anyway, I guess what I'm trying to say is that a higher level of standardization would not only give hardware developers more freedom, but would also allow the assembly to take much lower-level control of the hardware, since it would be hidden from the developer.
 