What was that about Cg *Not* favoring Nvidia Hardware?

So, just as an interesting aside: if Cg weren't made by NVIDIA, how much argument would there be?
Isn't a cross-platform, high-level shader language that can target OpenGL and multiple flavors of DX via a runtime back end a good thing? It boggles my mind to see the violent protest over such a useful idea.

What is with you guys not getting the big picture here???

Cg is not just a *shader language*... it's a *trademark marketing tool*...

I really honestly think some of you guys are intentionally ignoring the obvious reason this is a *bad* thing. We already have one clear-cut case of Nvidia's *true* intentions with the Cg name and the application thereof. And you guys have freaking giant blinders on... :rolleyes:
 
archie4oz said:
Whooptie f$cking doo... Anybody who's paid even a smidgen of attention to Cg and the NV30 has known about the fp30 and vp30 profiles... :rolleyes: Guess what? fp20 and vp20 don't work on ATi hardware either, wanna guess why? Of course there's arbvp1 and arbfp1 if you want cross-platform support. The only real gripe I have about this is that older programmable hardware (NV20, NV25, R200, RV250, Parhelia, dunno about P10/9 hardware) doesn't support ARB_fragment_program (just R300 and NV30), so for those devices you're stuck with vendor-specific extensions (NV_texture_shader, ATI_fragment_shader, MTX_fragment_shader), for which, of course, Nvidia has provided Cg profiles only for their own...

BTW, You don't need to YELL! :devilish:

:LOL: ...relax.

Doesn't make for a good argument when a developer taking a title cross-platform would be limited to proprietary extensions, does it... Vendor-specific extensions don't need to be proprietary...

How many proprietary ATI extensions are there...

Nobody is yelling...Yet :devilish:
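For concreteness: the profile business archie4oz describes is exactly what the Cg runtime automates. A minimal sketch of the OpenGL path, assuming the Cg 1.x runtime headers (the shader file name and "main" entry point are hypothetical):

```cpp
#include <Cg/cg.h>
#include <Cg/cgGL.h>

// Ask the runtime for the best fragment profile the driver exposes:
// fp30 on an NV30, the cross-vendor arbfp1 (ARB_fragment_program)
// on an R300 - nothing hard-coded to one IHV.
CGprogram load_fragment_program(CGcontext ctx, const char *file)
{
    CGprofile profile = cgGLGetLatestProfile(CG_GL_FRAGMENT);
    cgGLSetOptimalOptions(profile);
    return cgCreateProgramFromFile(ctx, CG_SOURCE, file,
                                   profile, "main", NULL);
}
```

Of course, on the pre-R300/NV30 parts archie lists there is no ARB_fragment_program profile to fall back to, which is where the vendor-extension problem bites.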
 
The discussion in the DirectX dev forum thread was very illuminating for me. nVidia promotes Cg by touting cross-platform support; Microsoft promotes HLSL by touting a hardware-neutral solution. Both are trying to tie developers to their tools: Microsoft in order to promote and enhance its OS and XBox; nVidia in the hope that developers using Cg tools will see nVidia cards as the natural development platform, producing games that show nVidia cards in the best light.

ATI is in the Microsoft camp because they don't want to cede any developer mindshare to nVidia-controlled tools. They are in an uncomfortable position: if they endorse Cg, they give nVidia added power in the marketplace; if they resist Cg obstinately but developers become attached to it, their cards will never be shown to their best advantage.
 
I guess the "The way it's meant to be played" logo on the Gun Metal section of nvidia.com must be referring to "you're supposed to turn off FSAA and Aniso".

Runs pretty great on my 9700 Pro except for a small bug with the skybox (no biggie)

http://w1.461.telia.com/~u46115957/GunMetal.png

Each time I see that stupid logo I laugh my ass off. I have a handful of video cards here, and there's no way I'd use an nvidia card for any of these "The way it's meant to be played" titles.
btw that logo seems to find its way into everything nowadays
personally I don't like advertising in my games thank-you-very-much

what's next, "Coca Cola: the stuff you're supposed to drink while you play like it's meant to be played without FSAA and Aniso because the card is too slow"? ;)
 
Hellbinder[CE] said:
What is with you guys not getting the big picture here???

Cg is not just a *shader language*... it's a *trademark marketing tool*...

I really honestly think some of you guys are intentionally ignoring the obvious reason this is a *bad* thing. We already have one clear-cut case of Nvidia's *true* intentions with the Cg name and the application thereof. And you guys have freaking giant blinders on... :rolleyes:

Again, emotional arguments that boil down to FUD. (And yes, I wholeheartedly believe that 90% of the arguments are meant to spread fear, uncertainty, and doubt about the intentions and the evilness of NVIDIA.)

Ignore NVIDIA; ignore the marketing aspect. What, technically, is bad about a shader language that can target OpenGL and DirectX, and that has extensions for each IHV to add their own backend compiler for optimization at runtime?
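To make the question concrete, the runtime back-end idea looks roughly like this on the Direct3D side; a sketch assuming the Cg D3D9 runtime (cgD3D9.h), again with a hypothetical shader file:

```cpp
#include <d3d9.h>
#include <Cg/cg.h>
#include <Cg/cgD3D9.h>

// The same Cg source can be fed here as to the OpenGL path;
// only the profile query and the loader call change.
CGprogram load_pixel_program(CGcontext ctx, IDirect3DDevice9 *dev,
                             const char *file)
{
    cgD3D9SetDevice(dev);
    // Best pixel profile the device supports: ps_2_0 on a
    // DX9-class part, ps_1_x on a DX8-class one.
    CGprofile profile = cgD3D9GetLatestPixelProfile();
    CGprogram prog = cgCreateProgramFromFile(ctx, CG_SOURCE, file,
                                             profile, "main", NULL);
    cgD3D9LoadProgram(prog, CG_FALSE, 0);
    return prog;
}
```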
 
Chalnoth said:
Derek Smart [3000AD] said:
It is useless (except for nVidia hardware)

Why, Derek? nVidia's most recent release has full support for ARB-extension headers, as well as generic DX9 profiles.

It will die - a horrible and glorified death

Again, why? There are developers out there who want to produce cross-platform engines. Cg is perfect for this. Additionally, Cg allows the possibility of producing IHV-specific compilers, which is a bonus. If ATI doesn't want to capitalize on this, it's their own fault. Cg will still be better than Microsoft's HLSL, as long as the generic compilers are optimized well enough.

100% bollocks

Have you NOT been paying attention?

I'll let the others educate you on the merits of this as I can't be bothered to repeat what others have.
 
What, technically, is bad about a shader language that can target OpenGL and DirectX, and that has extensions for each IHV to add their own backend compiler for optimization at runtime?

I touched on this earlier.

1) Are there really broad-reaching benefits to targeting both GL and DirectX? (What "technically" is good about that?) Is cross-platform shading really an issue, compared with all the other cross-platform coding problems?

2) Why is having multiple IHVs each writing their own compilers "technically" good? Is it better for each IHV to reinvent the wheel? Why was there so much pressure from game developers for MS to support the MCD platform for GL, rather than have IHVs each code a complete ICD? I presume your argument is that each IHV "knows his hardware the best" and could therefore write the most optimal compiler. Why is that a better "technical" answer than having one party concentrate the resources to write a single standard compiler (so that developers can depend on the compiler's output being consistent)?

Do you think that the MS compiler would be that much more inefficient than individual compilers? To the point where the consistency and stability of a standard compiler are outweighed?

I DO IGNORE the fact that Cg "is nVidia's." I don't care if it's nVidia's, ATI's, SiS's, or any other single IHV's. I see no good TECHNICAL reason for its existence. And I only see the potential to temporarily fragment the market and cause short-term pain and inconvenience.
 
Hellbinder[CE] said:
Derek Smart,

Awesome. I agree with you on the needlessness of Cg, and on its intent.

hehe, thanks.

I've been trying real hard to steer clear of these Cg discussions - but after I got an email inviting me to this thread, I simply had to.

I said the same thing about Glide: once DX became a viable alternative, it would die. Everyone called me crazy. Glide was perfect back then, especially since there was no other API like it available.

Then MS got their act in gear. And Glide died long before 3Dfx, as a company, followed suit.

However, this whole Cg thing is a farce. And to me - no matter how much I love the nVidia guys - it seems to be some vast conspiracy to undermine ATI and the other video board manufacturers and control that aspect of the market.

The end result will be that if any developer in their right mind - and I'm not talking about smaller, obscure, we-need-the-money-and-attention developers - decides to support Cg outright, nVidia thinks it will catch on. And if that happens, it will establish Cg as the de facto standard. That will spell disparity and problems for everyone. Except for nVidia.

I swear when Cg fails, I hope someone gets fired for it.

This, my friends, may be the downfall of nVidia. Nobody saw the demise of 3Dfx until it was upon us. This fiasco, and the delays over the release of the GeForceFX - allowing ATI to take the lead and thoroughly embarrass nVidia with the release of the 9700 series (notwithstanding the dismal release and drivers) - is just nail #1 in the coffin. Delaying the production of the GeForceFX for micron fab reasons is #2. Cg is #3. We're just going to hold fast for a few more nails before nVidia slips even further.

It never fails. You get too big, too fast and some jackass is likely to throw a spanner in the works and bring down the house of cards.

It is inevitable

It can and will happen

And it's not even a matter of time - as the clock is already running out

Even if Cg does succeed in working properly with other HW (last time I checked, it didn't - and I have the latest version, which I've been pissing around with), it still won't justify its existence - especially with DX9's HLSL.

Now hurry up and get that totally awesome BC Generations game out... I am REALLY looking forward to it

hehe, am on it! I guess you haven't heard the latest industry-stirring news then?

Joe DeFuria said:
I see no good TECHNICAL reason for its existence. And I only see the potential to temporarily fragment the market and cause short-term pain and inconvenience.

Absolutely - especially since, unlike the days of Glide, we don't need Cg. PERIOD.
 
Umm, initially I said that I would wait for the verdict to be out on nvidia's Cg. But my problem here is that the demo in question does come from nvidia's site. AFAIK nvidia has always produced demos that won't work on competitors' hardware. (Correct me if I am wrong here.) It does seem, however, from what I have read, that Cg is proprietary, in which case I am not surprised, just disappointed. On the plus side, MS HLSL will come out on top, and for good reason. I think that now that ATi has been able to steal the top end from nvidia for nearly half a year with DX9-compliant hardware, developers are thinking twice before they create software with one IHV in mind.
 
antlers4 said:
The discussion in the DirectX dev forum thread was very illuminating for me. nVidia promotes Cg by touting cross-platform support; Microsoft promotes HLSL by touting a hardware-neutral solution. Both are trying to tie developers to their tools: Microsoft in order to promote and enhance its OS and XBox; nVidia in the hope that developers using Cg tools will see nVidia cards as the natural development platform, producing games that show nVidia cards in the best light.

ATI is in the Microsoft camp because they don't want to cede any developer mindshare to nVidia-controlled tools. They are in an uncomfortable position: if they endorse Cg, they give nVidia added power in the marketplace; if they resist Cg obstinately but developers become attached to it, their cards will never be shown to their best advantage.

100% correct

And don't forget, there are probably nefarious happenings going on in the background between MS and nVidia - mostly stemming from that whole XBox graphics chip fiasco.

I cannot think of ANY one reason why ANY developer would want to use Cg over HLSL. Unless they have a short-term career or they are part of the nVidia marketing machine.

We already use DX tools, and with familiarity. As such, HLSL is the best choice - REGARDLESS of Microsoft's intentions. After all, MS did give us DX to begin with, which made developing high-level graphics games possible. So them pushing HLSL is not such a bad thing. It just makes sense.

If you didn't like MS before, well then, you should've been coding for OGL.
But if you're already coding for DX, the choice between using HLSL or Cg is clear. Unless you're an uninformed fool or part of the nVidia marketing machine.
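(For the lurkers: the DX path being described is the stock D3DX route from the DX9 SDK. A minimal sketch, with a hypothetical shader file and entry point:)

```cpp
#include <d3dx9.h>

// Compile an HLSL pixel shader with the stock D3DX compiler and hand
// it to the device - no vendor toolkit anywhere in the tool chain.
HRESULT CompilePixelShader(IDirect3DDevice9 *dev, IDirect3DPixelShader9 **out)
{
    LPD3DXBUFFER code = NULL, errors = NULL;
    HRESULT hr = D3DXCompileShaderFromFile("tint.psh", NULL, NULL,
                                           "main", "ps_2_0", 0,
                                           &code, &errors, NULL);
    if (errors) errors->Release();   // compiler messages, if any
    if (FAILED(hr))
        return hr;
    hr = dev->CreatePixelShader((const DWORD *)code->GetBufferPointer(), out);
    code->Release();
    return hr;
}
```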

Don't get me wrong, I love nVidia, even ATI (when they're not pissing around and making more work for me), but this Cg farce is just that: A FARCE.

RussSchultz said:
Ignore NVIDIA; ignore the marketing aspect. What, technically, is bad about a shader language that can target OpenGL and DirectX, and that has extensions for each IHV to add their own backend compiler for optimization at runtime?

Oh, I dunno.... Everything maybe? :rolleyes:

If the OGL camp wants to use Cg, that's their problem. My discussions are solely limited to DX9 and Cg. I don't care about OGL. I don't want to use OGL. I'm never going to use OGL. PERIOD.
 
Joe DeFuria said:
What, technically, is bad about a shader language that can target OpenGL and DirectX, and that has extensions for each IHV to add their own backend compiler for optimization at runtime?

I touched on this earlier.

1) Are there really broad-reaching benefits to targeting both GL and DirectX? (What "technically" is good about that?) Is cross-platform shading really an issue, compared with all the other cross-platform coding problems?

You can't simply ignore one problem "because we have so many others". If you did that, you'd never solve anything. Regardless, this is NOT a technical argument. If it has no place to exist, it will cease to exist on its own.

And yes, a single code tree is greatly preferred to a bifurcated one. We're forced into having two SDKs for our products because one product is just different enough that the code base doesn't work well on both. It means one doesn't work nearly as well as the other.

That being said, I have no idea how big a problem this actually is today, but I can only imagine that it will get worse as time goes on. Porting 8 instructions is no big deal, but there's talk of 32k instructions, or even unlimited instruction counts. That is a heinous port.

2) Why is having multiple IHVs each writing their own compilers "technically" good? Is it better for each IHV to reinvent the wheel? ... I presume your argument is that each IHV "knows his hardware the best" and could therefore write the most optimal compiler. Why is that a better "technical" answer than having one party concentrate the resources to write a single standard compiler (so that developers can depend on the compiler's output being consistent)?

You asked and answered the question yourself--the IHV will do a better job because they know their hardware the best. If Microsoft does it, you'll have compliance, but beyond that, there's no impetus for Microsoft to improve the implementations. "Different" (i.e. non-standard) architectures will get poorly performing implementations.

If the IHV does it, you won't have a stability issue because you can still require compliance and test for it, but they have the desire and the knowledge to make sure the compiler makes the best use of their architecture.

Do you think that the MS compiler would be that much more inefficient than individual compilers? To the point where the consistency and stability of a standard compiler are outweighed?

Yes. My experience with MS is that they have the same problems as everybody else: a limited amount of personnel resources. We (Sigmatel) could have done a much better implementation of DRM on our processor than they managed. Theirs works, but ours would have cost us less and performed better.
 
You asked and answered the question yourself--the IHV will do a better job because they know their hardware the best.

I asked, answered, and then questioned whether that answer was valid. (Read: I don't think that answer is valid.) Again... what was all that fuss about Microsoft not supporting the OpenGL MCD architecture in Windows?

Possibly it will be better, but that doesn't address my question. If it's better, how much?

Has anyone written HLSL code, compiled it under DX9 HLSL (with PS target 1.3), and compared the output to Cg's? Running on both nVidia and non-nVidia hardware?

Different" (i.e. non-standard) architectures will get poor performing implementations.

Ahh...more FUD...just what we need.

If the IHV does it, you won't have a stability issue because you can still require compliance and test for it

Right. See WHQL. :rolleyes:

but they have the desire and the knowledge to make sure the compiler makes the best use of their architecture.

You seem to imply that Microsoft doesn't have the desire to keep developers using their platform of Windows and DirectX? That MS doesn't have any drive to make their product the best it can be? That this only applies to IHVs?

nVidia has one interest: getting code as optimal on nVidia hardware as possible. MS has one interest: having DirectX as a whole be as optimal as it can be across multiple hardware platforms while being as consistent as possible.

I prefer the latter approach.

Yes. My experience with MS is that they have the same problems as everybody else: a limited amount of personnel resources.

From what I've heard, MS has perhaps the best compiler coders in the industry. What's wrong with nVidia using "its resources" to work WITH Microsoft to ensure the compilers work "well" with their hardware? (Which I believe is exactly what they did.)

Again, I would really like to see some comparisons of the output of the HLSL compiler vs. the Cg compiler... on both nVidia and non-nVidia hardware.
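That comparison is easy to set up, because a trivial shader is legal source for both compilers. A hypothetical test case (tool names and flags from the Cg toolkit and the DX9 SDK, to the best of my knowledge):

```hlsl
// tint.cg - compiles unmodified as both Cg and DX9 HLSL source.
// Diff the ps_1_3 assembly each compiler emits:
//   cgc -profile ps_1_3 -entry main tint.cg
//   fxc /T ps_1_3 /E main tint.cg
float4 main(float2 uv : TEXCOORD0,
            uniform sampler2D tex,
            uniform float4 tint) : COLOR
{
    return tex2D(tex, uv) * tint;
}
```

Time both outputs on an NV30 and an R300 and you'd have the apples-to-apples numbers everyone keeps hand-waving about.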
 
nVidia has one interest: getting code as optimal on nVidia hardware as possible. MS has one interest: having DirectX as a whole be as optimal as it can be across multiple hardware platforms while being as consistent as possible.

I prefer the latter approach.

If I own Ati hardware, I want it to be used as optimally as possible. And the same if I own Nvidia hardware.

And I think that for MS, the consistency part comes first and optimizations come later.

Let's take an example: Ati's and Nvidia's latest cards are just as fast at one type of operation. Then they (MS) discover that they can optimize the Ati version so that it becomes 20% faster. Do you think that they would make that optimization a high priority?
 
Joe DeFuria said:
I asked, answered, and then questioned whether that answer was valid. (Read: I don't think that answer is valid.)

Possibly it will be better, but that doesn't address my question. If it's better, how much?

"Different" (i.e. non-standard) architectures will get poor performing implementations.

Ahh...more FUD...just what we need.

I see. Did you even read where I stated that our DRM implementation (as done by Microsoft) was a very, very sub-optimal implementation on our platform? I find it very apropos, since: a) our platform is "different" (DSP vs. RISC architecture), and b) as far as I know, the only criterion for them being done was bitwise correspondence with their expected output.

By the way, why are you so hung up on the MCD vs. ICD thing? I don't see it being terribly applicable.
 
I think there are good technical reasons for Cg. IHVs should be able to provide an HLSL backend (by HLSL I mean a generic high-level shader language, not necessarily Microsoft's HLSL) that produces the best possible shaders for their cards with their drivers. And ideally, one HLSL works with both OpenGL and DirectX.

Furthermore, NVidia had good technical reasons to bring out Cg--they wanted something that would target all the capabilities of their cards and make sophisticated shaders easier to write, and they didn't want developers to have to wait for OpenGL or DirectX to get around to it.

The arguments against Cg are more economic than technical. But they are good reasons nonetheless. None of NVidia's competitors wants NVidia to control the shader language. It's akin to giving them a piece of Direct3D to control. ("We've added X feature to Cg. It's so cool you'll never want to write another shader without it. You can use it to produce shaders that run great on NVidia cards right now. We've documented the language extension so the other IHVs can update their own card-specific profiles whenever they get around to it...")

Microsoft has its own nefarious reasons for not wanting a common shader language for DirectX and OpenGL.
 