How Cg favors NVIDIA products (at the expense of others)

nggalai said:
Perhaps not in these 14 pages, but of the roughly 10 dev studios and publishers I have personal ties to, 6 have been using Cg for quite some time already. Apparently, there's something interesting in Cg for them, no?

Incidentally, doesn't TR6 use Cg as well?

93,
-Sascha.rb

As STALKER developers put it to me, they had no choice BUT to use CG for Nvidia cards, as they were having all kinds of issues with HLSL...including speed.
 
DemoCoder said:
Well, since Cg generates code slower than assembly, I find that hard to believe.

Hi,

In fact, the graphics engine is developed on Radeon9700 :)
We demand special support from the FX driver guys, 'cause in pure/standard
DX9 it's impossible to run the engine on FX at the moment.
Yes, FX will be slightly faster, but within the margin of a few
percentage points...

So, don't worry, your card will be an excellent performer in
S.T.A.L.K.E.R.

--
Best regards,
Oles V. Shishkovtsov
GSC-Game World
oles@gsc-game.kiev.ua

Going by what I'm told, DemoCoder ;)
 
First some broken thoughts, then a logical examination of the endless possibilities.
As it stands now, Cg is beneficial to NVIDIA because the back end was optimized for NVIDIA-designed chips.
Development costs, company resources, time, effort; code that is marginally faster than DX HLSL in some cases and actually slower in others; how many games use Cg right now? I don't really see how Cg is that beneficial to NVIDIA right now.

Moreover, Cg is going to be identical to DirectX 9's built-in HLSL. Therefore, if you're an IHV and you DON'T PRODUCE A BACKEND FOR DX9 HLSL, YOUR CARD WILL DEFAULT TO THE GENERIC DX9 HLSL COMPILER IN THE D3DX LIBRARY.
If this is true, Cg will most likely become the standard unless it has competition that produces a greater net gain in performance.
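
For concreteness, the two paths look roughly like this in code (just a sketch: the "main" entry point, the ps_2_0 profiles, and the lack of error handling are all simplifications, not anyone's actual engine code):
Code:
// Sketch: the same shader source text fed to the generic D3DX HLSL
// compiler and to the Cg runtime with an explicitly chosen profile.
// Entry point and profiles are placeholder choices; cleanup is omitted.
#include <cstring>
#include <d3dx9.h>
#include <Cg/cg.h>

void CompileBothWays(const char* src)
{
    // Generic path: Microsoft's HLSL compiler shipped in the D3DX library.
    LPD3DXBUFFER code = NULL, errors = NULL;
    D3DXCompileShader(src, (UINT)strlen(src), NULL, NULL,
                      "main", "ps_2_0", 0, &code, &errors, NULL);

    // Cg path: NVIDIA's compiler, targeting whichever profile you pick.
    CGcontext ctx  = cgCreateContext();
    CGprogram prog = cgCreateProgram(ctx, CG_SOURCE, src,
                                     CG_PROFILE_PS_2_0, "main", NULL);

    (void)code; (void)errors; (void)prog; // hand the results to the API/driver
}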

I've got some words for you to ponder... 'ATI WILL NOT SUPPORT CG', so you can forget this 'profile BS' as ATI is not going down that road... maybe after that is driven into your skull twenty times you will see that only NVIDIA will be using this
If the previous statement were true, then this hardly matters. ATI wouldn't have much to fault in Cg, because I'm pretty sure they stated that DX HLSL already performs optimally on the R3XX. The only complaint they could have is that it increases the performance of their competition, but this is hardly valid unless companies are expected to roll over for their competition in a capitalist system. That's not right, is it?

So you think Cg HLSL will be more popular than Microsoft's official DX9 HLSL... I don't
Code once: overall increase in performance = better gaming experience = more games sold = greater developer support... At least that is how I see it.

*modified* Either NVIDIA (or ATI) will make it really good at generating code for R300 and other cards (in which case, what's the problem?) or developers will refuse to use it and it will die in the marketplace
This is pretty much true... let us examine a few cases.


First let's consider something that resembles the current situation in shader performance (very roughly). Let the left column be standard DX9 performance and the right column Cg.
Code:
       NV3X                       R3XX
       150%                       100%
    0.70 | 1.05               1.00 | 1.00 (max 1.10)
Clearly this benefits NVIDIA; however, it also benefits the game developer as he now has a game that runs better on a potentially significantly greater number of machines. We will guess that ATI's potential performance increase of 10%, while it would give them back the performance crown, is hardly worth their time if the performance threshold of the game is already being met (say 120+ fps through all settings).

The previous example might be more closely related to overall performance than to shader performance... the following is just to try to keep complaints down:
Code:
       NV3X                       R3XX
       150%                       100%
    0.50 | 0.75               1.00 | 1.00 (max 1.10)
Here, there is even less motivation for ATI to develop a back end as they are winning by a large margin either way. Let's say in the next generation the picture changes a bit though....
Code:
       NV4X                       R4XX
       150%                       100%
    0.90 | 1.35              1.00 | 1.00 (max 1.40)
Now, it is certainly more valuable to ATI to acknowledge Cg. (If you're wondering why the shader performance didn't increase for the R4XX, I am letting 1 represent the score of the performance leader and displaying the lesser card's score as a percentage of that score.) This works out well for all parties concerned, as the developer and ATI get increased performance (retaining competitiveness vs. losing it) and NV cuts the ATI lead in half. Perhaps, as some fear, NV would try to limit the implementation of Cg so that competitors' products can only use the default profile. However, if ATI develops a backend for Cg, there is really nothing NV can do to stop game developers from using it (suing the people that are going to make the games that will [or won't!] run on your hardware does not seem like a viable plan if NV wants to stay in business for any length of time).
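
To make the normalization explicit, a throwaway snippet (the raw frame rates are invented purely to reproduce the 0.90 | 1.35 row above):
Code:
#include <cstdio>

// Normalize two cards' scores so the leader reads 1.00 and the trailer
// reads as a fraction of the leader (the convention used in the tables).
// The raw numbers and the +50% Cg gain are invented for illustration.
int main()
{
    double r4xx_fps = 100.0;           // performance leader
    double nv4x_fps = 90.0;            // trailing card
    double nv4x_cg  = nv4x_fps * 1.5;  // hypothetical +50% from a Cg backend

    printf("NV4X: %.2f | %.2f\n", nv4x_fps / r4xx_fps, nv4x_cg / r4xx_fps);
    printf("R4XX: %.2f | %.2f\n", 1.0, 1.0);  // no Cg backend assumed
    return 0;  // prints 0.90 | 1.35 and 1.00 | 1.00
}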

Game genre: FPS
Performance threshold (60 fps):
Code:
       NV4X                       R4XX
       150%                       155%
    0.75 | 1.125              1.00 | 1.55   (of 60)
       45 | 67.5                  60 | 93
Here we can see another possibility with regard to performance: Cg provides NV with the valuable service of boosting the performance of their card to acceptable levels, despite giving an even greater increase to ATI.
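
And to make the threshold argument concrete (again using the made-up relative scores from the table above, not measurements):
Code:
#include <cstdio>

// Turn the relative scores above into frame rates against a 60 fps
// threshold and flag which configurations clear it.
int main()
{
    const double threshold = 60.0;
    const double nv4x[2] = { 0.75, 1.125 };  // without Cg | with Cg backend
    const double r4xx[2] = { 1.00, 1.55  };

    for (int cg = 0; cg < 2; ++cg) {
        double nv = nv4x[cg] * threshold;    // 45.0, then 67.5
        double r  = r4xx[cg] * threshold;    // 60.0, then 93.0
        printf("%s Cg: NV4X %.1f fps (%s), R4XX %.1f fps (%s)\n",
               cg ? "with   " : "without",
               nv, nv >= threshold ? "playable" : "below threshold",
               r,  r  >= threshold ? "playable" : "below threshold");
    }
    return 0;
}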

Thinking about the situation analytically, Cg in my eyes does not directly benefit NV but rather all the companies that trail the performance leader. Thus, Cg currently benefits NV more than ATI; however, the results of Cg do seem odd if NV is indeed intent on reclaiming the performance crown. Somewhat ironically, NV may be counting on Cg to aid them in this quest, but at the same time, it might aid their adversaries enough to push them across the competitive threshold.

So what we have in Cg is a proposal to:
1. Improve overall performance
2. Increase capabilities
3. Maintain an equal or reduced workload

In general, I have no fear of Cg succeeding or failing, as there are really only two possibilities: the industry advances faster because of Cg, or at the same rate as it would have without Cg. From my viewpoint, I consider it a win/win situation. Of course, I probably just wasted a large amount of time trying to understand something that will only have one result. Well, future reference I suppose...
 
Evildeus said:
Hmmm, shouldn't this date make you wonder why :?:

No, as the same issues affect the entire FX line-up with the exception of the NV35. I assume you feel the hardware issues exposed here and on many other sites are just magically disappearing?
 
Just saying that at the time the FX was not available, and the drivers were... well, worse than they are today, and today they are still not great... So, if you asked the same question today, are you sure you would receive the same answer?
 
They switched to an NV35 about 2 months ago to start 'optimizing', being a technical partner and all.
Bottom line is, not everyone will have a 5900 to play Stalker with; in fact, approximately 3% of the people buying the title will have one.
 
Tridam said:
StealthHawk said:
Hence the lack of PS1.4 support in Cg. Now we know that even gfFX cards run PS1.4 code faster than they run PS1.1 code...

????
I think you're wrong about ps_1_4 performance. GeForce FX cards run ps_1_4 as slowly as they run ps_2_0. That's why NVIDIA doesn't like ps_1_4.

No, AFAIK I am correct. Someone ran 3dmark03 with a gfFX5800 with PS1.4 vs PS1.1. There is no reason why his results should be inaccurate, and unless someone more trustworthy contradicts said results I'll believe them.

http://www.nvnews.net/vbulletin/showthread.php?s=&threadid=13945

5800 Ultra and 44.65 drivers...

3DMark03 (330):

GT2 + PS1.4 - 37.1 fps
GT2 + PS1.1 - 29.5 fps

GT3 + PS1.4 - 31.5 fps
GT3 + PS1.1 - 25.4 fps
 
StealthHawk said:
Tridam said:
StealthHawk said:
Hence the lack of PS1.4 support in Cg. Now we know that even gfFX cards run PS1.4 code faster than they run PS1.1 code...

????
I think you're wrong about ps_1_4 performance. GeForce FX cards run ps_1_4 as slowly as they run ps_2_0. That's why NVIDIA doesn't like ps_1_4.

No, AFAIK I am correct. Someone ran 3dmark03 with a gfFX5800 with PS1.4 vs PS1.1. There is no reason why his results should be inaccurate, and unless someone more trustworthy contradicts said results I'll believe them.

http://www.nvnews.net/vbulletin/showthread.php?s=&threadid=13945

5800 Ultra and 44.65 drivers...

3DMark03 (330):

GT2 + PS1.4 - 37.1 fps
GT2 + PS1.1 - 29.5 fps

GT3 + PS1.4 - 31.5 fps
GT3 + PS1.1 - 25.4 fps

Actually you're right and I'm right ;)

In fact, I checked it with an FX5200 and assumed it was true for the FX5800 and FX5600. I was wrong. I've just finished some more tests (not with 3DMark):

FX5600 shading power:
PS 1.1 : 1300 Mops/s (285 MPix/s)
PS 1.4 : 1300 Mops/s (281 MPix/s)
PS 2.0 : 650 Mops/s – texturing ops (81 MPix/s)

FX5200 Ultra shading power:
PS 1.1 : 1300 Mops/s (283 MPix/s)
PS 1.4 : 650 Mops/s – texturing ops (105 MPix/s)
PS 2.0 : 650 Mops/s – texturing ops (80 MPix/s)

With the FX5200, ps_1_4 is really slow, but it looks fine on the FX5600.
 
ninelven said:
....
So what we have in Cg is a proposal to:
1. Improve overall performance
2. Increase capabilities
3. Maintain an equal or reduced workload

In general, I have no fear of Cg succeeding or failing, as there are really only two possibilities: the industry advances faster because of Cg, or at the same rate as it would have without Cg. From my viewpoint, I consider it a win/win situation. Of course, I probably just wasted a large amount of time trying to understand something that will only have one result. Well, future reference I suppose...

I rather thought the "proposal" of Cg was to utilize a high-level language coupled with an efficient compiler in order to avoid the mind-numbing tedium of hand-coding shaders in assembly.....?

I have no fear of Cg whatsoever, from any perspective. I agree that since we're getting HLSL from M$ and for OpenGL, Cg isn't going to impact the industry to any great degree. Those who find it useful will no doubt use it, just as those who don't will use the alternatives.
 
Typedef Enum said:
The thing I find most ironic is that people don't seem to have a problem with a company like Microsoft dictating things... a company that has already been ruled a monopoly...

Does anybody get that warm/fuzzy kind of feeling that the issues surrounding the OpenGL patents are just the beginning?
Thank you for saying this.

Joe DeFuria said:
Microsoft doesn't sell 3D hardware. It does not have a direct interest in seeing one IHV's hardware succeed over another.

I think there is this thing called the Xbox, and Microsoft has to get some company to make the 3D chipset. If you don't think this affects their stance towards Nvidia and ATI, you are incorrect.
 
ninelven said:
Clearly this benefits NVIDIA; however, it also benefits the game developer as he now has a game that runs better on a potentially significantly greater number of machines. We will guess that ATI's potential performance increase of 10%, while it would give them back the performance crown, is hardly worth their time if the performance threshold of the game is already being met (say 120+ fps through all settings).

Hardly; HLSL is outperforming Cg on a 5600, so how can the above statement be correct? A developer has a much better compiler to use, one that has already been optimized for all hardware rather than just one vendor's.

The evidence is in this thread, and even on Nvidia hardware.
Radeon 9800 Pro HLSL : 125 MPix/s
Radeon 9800 Pro Cg : 100 MPix/s

GeForce FX 5600 HLSL : 11.2 MPix/s
GeForce FX 5600 Cg : 12.4 MPix/s

GeForce FX 5600 HLSL_pp : 14.8 MPix/s
GeForce FX 5600 Cg_pp : 13.8 MPix/s

GeForce FX 5600 HLSL AA/AF : 7.0 MPix/s
GeForce FX 5600 Cg AA/AF : 6.1 MPix/s

http://www.beyond3d.com/forum/viewtopic.php?t=6864&postdays=0&postorder=asc&start=20

HLSL is faster in 2 out of 3 tests on Nvidia hardware, so what is the incentive again?
it also benefits the game developer as he now has a game that runs better on a potentially significantly greater number of machines

That statement is wrong.
 
I was under the impression that Cg was a work in progress (i.e. improving). Please forgive me if I was mistaken.

I rather thought the "proposal" of Cg was to utilize a high-level language coupled with an efficient compiler in order to avoid the mind-numbing tedium of hand-coding shaders in assembly.....?

This would be an advantage of Cg over assembly, sure. However, I was addressing some of the potential issues regarding why a person might choose one HLSL over another.
 
Doomtrooper said:
HLSL is faster in 2 out of 3 tests on Nvidia hardware, so what is the incentive again?
Those tests were essentially the same, and I see no reason to believe that they are indicative of a large number of shaders used in-game.

Additionally, Cg is currently the only HLSL that works in OpenGL.
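
For reference, using Cg under OpenGL looks roughly like this (a sketch only; the entry name is a placeholder and error checking is omitted):
Code:
#include <Cg/cg.h>
#include <Cg/cgGL.h>

// Minimal Cg-under-OpenGL setup: ask the runtime for the best fragment
// profile the driver exposes, compile the source at runtime, and bind it.
CGprogram LoadFragmentShader(const char* src)
{
    CGcontext ctx     = cgCreateContext();
    CGprofile profile = cgGLGetLatestProfile(CG_GL_FRAGMENT); // e.g. arbfp1/fp30
    CGprogram prog    = cgCreateProgram(ctx, CG_SOURCE, src,
                                        profile, "main", NULL);
    cgGLLoadProgram(prog);
    cgGLEnableProfile(profile);
    cgGLBindProgram(prog);
    return prog;
}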
 
Chalnoth said:
Doomtrooper said:
HLSL is faster in 2 out of 3 tests on Nvidia hardware, so what is the incentive again?
Those tests were essentially the same, and I see no reason to believe that they are indicative of a large number of shaders used in-game.

Of course those tests are based on a single shader and can't show that "Cg is always better" or "HLSL is always better". They just show how graphics cards deal with Cg-minded (fewer registers) and HLSL-minded (fewer instructions) shaders.
 
Doomtrooper said:
Hardly; HLSL is outperforming Cg on a 5600, so how can the above statement be correct? A developer has a much better compiler to use, one that has already been optimized for all hardware rather than just one vendor's.

Doomtrooper, there is no way to be optimal for all hardware. You wouldn't want a single device driver for all video cards and require register-level compatibility to make it work, and you don't want a single compiler for all video cards.

You want the compiler to be part of the device driver, and you want each vendor to ship their own optimizer that optimizes it at runtime.

Your crusade against Nvidia is blinding you to the real issue, which is to have multiple backend optimizers as part of DX9. You won't admit it because you perceive it as an Nvidia-originated idea, and no one can admit that Nvidia has any good ideas.

All Microsoft's compiler can do is perform generic optimizations like common subexpression elimination, copy propagation, and dead code elimination. It cannot schedule instructions (reorder them to take advantage of each card's architecture) nor alter its register allocation strategy based on card architecture.
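
To make "generic optimizations" concrete, here is a toy dead-code elimination pass over a made-up straight-line IR. Nothing in it needs to know anything about the target chip, which is exactly the point, and exactly why per-card scheduling and register allocation cannot be handled this way:
Code:
#include <cstdio>
#include <set>
#include <string>
#include <vector>

// Toy straight-line IR: each instruction writes dst from two sources.
// A target-independent pass can drop instructions whose result is never
// read; picking registers or an issue order for a specific chip is
// exactly what it cannot do without knowing the hardware.
struct Instr { std::string dst, src0, src1; };

std::vector<Instr> DeadCodeEliminate(std::vector<Instr> code,
                                     const std::set<std::string>& outputs)
{
    std::set<std::string> live = outputs;
    std::vector<Instr> kept;
    for (int i = (int)code.size() - 1; i >= 0; --i) {  // walk backwards
        if (live.count(code[i].dst)) {                 // result is needed
            live.insert(code[i].src0);
            live.insert(code[i].src1);
            kept.insert(kept.begin(), code[i]);
        }                                              // else: dead, drop it
    }
    return kept;
}

int main()
{
    std::vector<Instr> ir = {
        {"r0",  "tex0", "diffuse"},   // used below, kept
        {"r1",  "r0",   "specular"},  // never read, dropped
        {"oC0", "r0",   "ambient"},   // shader output
    };
    std::vector<Instr> out = DeadCodeEliminate(ir, {"oC0"});
    printf("%zu of %zu instructions survive\n", out.size(), ir.size());
    return 0;
}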

Some people expect the device driver to do this on the shader assembly, but that is essentially trying to have the device driver reverse-engineer the full semantics of the source code out of a compiled binary at runtime, and you won't get the same level of optimization as if you start with the original source.

Since both HLSL and GLSLANG contain semantics which are not expressible in the assembly-level shader languages (e.g. loops, branching, etc.), some of that information will be erased by the time it reaches the device (e.g. branches turned into predication), which means that if the hardware actually contains *real branches* it will have to "infer" this intent somehow from a batch of predicated instructions.

I say the device driver should contain the backend of the compiler, and the front end merely does generic optimizations and stores the result as a serialization of the internal representation of the compiler.
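
A bare-bones sketch of that split, with entirely hypothetical interface names (nothing here is a real DX9 or driver API):
Code:
#include <cstdint>
#include <vector>

// Hypothetical interfaces only, for illustration. The front end (shipped
// with the runtime) parses the HLSL, runs generic passes (CSE, copy
// propagation, dead code elimination) and serializes its internal
// representation instead of fixed shader assembly.
struct SerializedIR { std::vector<uint8_t> bytes; };

class ShaderFrontEnd {
public:
    virtual ~ShaderFrontEnd() {}
    virtual SerializedIR CompileGeneric(const char* hlslSource) = 0;
};

// The back end lives in each vendor's driver: it still sees loops and
// branches in the IR, so it can schedule instructions, allocate registers,
// and choose between real branching and predication for its own chip.
class DriverBackEnd {
public:
    virtual ~DriverBackEnd() {}
    virtual std::vector<uint32_t> CompileForThisChip(const SerializedIR& ir) = 0;
};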
 