Devs Speak on PS3.0

As Walt explained, we just do not know the state of SM3.0 on the NV40 because we have nothing to compare it to. Either andypski or sireric once said about the R3xx that SM2.0 code just works. That is an important consideration even today. As Dave's preview has shown, the same can now be said of Nvidia's SM2.0 handling on the NV4x: there does not seem to be any need for hand tuning of shaders. This in turn may leave developers looking at a very sizable market (all R3xx and NV4x derivatives) which can be addressed with relative ease. To think that ATI will be at a disadvantage because developers will be coding SM3.0 games on NV hardware is, perhaps, a bit premature.
 
geo said:
reever said:
So, now it appears that ATI may be about to invite a recurrence of the DX8 situation with PS3.0, to their own detriment and the detriment of future ATI owners.

So do you think more people are going to be buying an R420 variant (before the PS3.0 ATI cards come out, which probably won't be long) than people who bought an FX, which is going to get worse and worse as developers add more PS3.0 code AND 2.0 code for the R420? All while NV most likely isn't going to be babying it around with replacement shaders?

The problem will be, if one develops, for people who buy the R500 and then find that its PS3.0 drivers are what they will call (very loudly, whether fair or not) "buggy". Some portion of that will be because all the development for PS3.0 to that point will have been done on NV40 hardware and drivers. Some of it will be the months of experience with PS3.0 drivers that ATI will be ceding to NV, and some will be that, rightly or wrongly, the NV40 implementation will become the de facto standard that developers program around. Some NV "bugs" will become standard, and owners of the ATI R500 will bitch that ATI's correct implementation is the "bug" since "it worked fine" on NV40.

This is happening now with the nVidia GLSL driver/compiler: it defaults to a syntax and extensions that only work on nVidia hardware. So, if a developer uses nVidia hardware to develop an OpenGL application, there is a large probability it won't run on cards from other vendors.

If I were a developer and saw the track record of nVidia from the last year or so, I would use an ATi card for development. But then I might need to spend additional time to get it to run on (older) nVidia. Either way I lose and have to spend extra time and money.

I think it is a very bad thing when one vendor actively tries in a number of ways to get applications created that only run on their hardware. And it definitely holds the use of new hardware features back.

I don't think that they can get away with it very easily, but they seem to think otherwise (and they are BIG). Would they actually try to create a situation in which you need graphics cards from a number of vendors to run all your software? Hm. Can that be done with PCIe? I hope not!
 
jimmyjames123 said:
As I said, it's completely dependent on what the application is doing.
And as I said, apparently certain effects can be coded more efficiently for PS 3.0 than for PS 2.0. Why else would any developers even be interested in embracing PS 3.0 at all?
That wasn't what you said at all.
Apparently, PS 3.0 is supposed to be a more efficient way of processing instructions than PS 2.0, even if image quality is unchanged.

-FUDie
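The efficiency argument usually comes down to flow control. A rough, hypothetical sketch (GLSL-style source held in C strings; the samplers, the 0.5 threshold and the eight-tap loop are invented for illustration, and the same idea applies to ps_2_0 versus ps_3_0 HLSL targets): with real per-fragment branching the expensive path only runs where it is needed, while a 2.0-class target generally has to evaluate both sides and select the result, or split the effect across passes.

/* Hypothetical "3.0-style" shader: the per-fragment branch lets fully
 * shadowed fragments skip the expensive loop entirely. */
static const char *branching_style =
    "uniform sampler2D shadow_map;\n"
    "uniform sampler2D detail;\n"
    "varying vec2 uv;\n"
    "void main() {\n"
    "    float lit = texture2D(shadow_map, uv).r;\n"
    "    if (lit > 0.5) {\n"
    "        vec4 c = vec4(0.0);\n"
    "        for (int i = 0; i < 8; i++)\n"
    "            c += texture2D(detail, uv * float(i + 1));\n"
    "        gl_FragColor = c / 8.0;\n"
    "    } else {\n"
    "        gl_FragColor = vec4(0.0);\n"
    "    }\n"
    "}\n";

/* Hypothetical "2.0-style" equivalent: no dynamic branching, so the loop is
 * evaluated for every fragment and the result is selected afterwards. */
static const char *flattened_style =
    "uniform sampler2D shadow_map;\n"
    "uniform sampler2D detail;\n"
    "varying vec2 uv;\n"
    "void main() {\n"
    "    float lit = texture2D(shadow_map, uv).r;\n"
    "    vec4 c = vec4(0.0);\n"
    "    for (int i = 0; i < 8; i++)\n"
    "        c += texture2D(detail, uv * float(i + 1));\n"
    "    gl_FragColor = mix(vec4(0.0), c / 8.0, step(0.5, lit));\n"
    "}\n";

Whether the branch actually pays off depends on how coherent it is across the screen, which is why the answer keeps coming back to "it depends on what the application is doing."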
 
nelg said:
As Walt explained, we just do not know the state of SM3.0 on the NV40 because we have nothing to compare it to. Either andypski or sireric once said about the R3xx that SM2.0 code just works. That is an important consideration even today. As Dave's preview has shown, the same can now be said of Nvidia's SM2.0 handling on the NV4x: there does not seem to be any need for hand tuning of shaders. This in turn may leave developers looking at a very sizable market (all R3xx and NV4x derivatives) which can be addressed with relative ease. To think that ATI will be at a disadvantage because developers will be coding SM3.0 games on NV hardware is, perhaps, a bit premature.

I prefer "forward looking" :) And no doubt it will turn out that way eventually (R550? R600?), as it now appears to have done with PS2.0, as you note. I'm just pointing at the recent history (the last two generations prior to this one), the lessons of which seem clear on this point, at least to me. As Damon Runyon said, "The race is not always to the swift, nor the battle to the strong -- but that is the way to bet."
 
jimmyjames123 said:
That wasn't what you said at all.
Now we are just arguing semantics. The implication should have been clear.
No, it sounded to me like you were arguing that PS 3.0 magically makes everything more efficient. If that is not what you meant, fine.

-FUDie
 
DiGuru said:
This is happening now with the nVidia GLSL driver/compiler: it defaults to a syntax and extensions that only work on nVidia hardware. So, if a developer uses nVidia hardware to develop an OpenGL application, there is a large probability it won't run on cards from other vendors.
Very wrong.

GLSL is not statically compiled. It is always runtime compiled. This will never happen (the only possibility would be a programmer writing a shader that is simply too long to compile on PS 2.0, or uses too many resources...but that's an issue that developers have been working with for a very long time).
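For readers less familiar with the pipeline being argued about, this is the runtime path in question: the application hands GLSL source text to whatever driver is installed, and that driver's compiler accepts or rejects it on the spot. A minimal sketch, assuming a current GL 2.0 context whose entry points have already been resolved by the application (the shader string is a trivial placeholder):

#include <stdio.h>
#include <GL/gl.h>  /* assumes the GL 2.0 functions used below have been loaded */

/* Placeholder fragment shader; in a shipping title this is whatever the
 * developer wrote -- the driver only ever sees it at run time. */
static const char *frag_src =
    "void main() {\n"
    "    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);\n"
    "}\n";

GLuint compile_fragment_shader(void)
{
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    GLint ok = 0;
    char log[1024];

    /* The source text is compiled by the installed driver, so each vendor's
     * compiler gets to accept or reject it independently. */
    glShaderSource(shader, 1, &frag_src, NULL);
    glCompileShader(shader);

    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        /* A shader that one vendor's compiler accepted can land here on another's. */
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        fprintf(stderr, "shader compile failed:\n%s\n", log);
    }
    return shader;
}

The disagreement that follows is about what happens when two drivers' compilers accept different subsets of the language at this step.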
 
Chalnoth said:
DiGuru said:
This is happening now with the nVidia GLSL driver/compiler: it defaults to a syntax and extensions that only work on nVidia hardware. So, if a developer uses nVidia hardware to develop an OpenGL application, there is a large probability it won't run on cards from other vendors.
Very wrong.

GLSL is not statically compiled. It is always runtime compiled.

What difference does that make?

If nVidia's compilers:

1) Allow things they shouldn't, based on proper GL behavior

and/or

2) Don't allow things they should

then any given app that has been developed using nVidia drivers/compilers as its baseline can end up NOT working on other cards... even if it compiles without errors on nVidia hardware.
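A concrete, hypothetical example of case 1: the Cg-derived front end of that era accepted C-style conveniences such as implicit int-to-float conversion, which strict GLSL 1.10 does not allow, so source written and tested only against it could fail to compile elsewhere. Both shaders below are invented for illustration.

/* Accepted by a lax, Cg-flavoured compiler, but a strict GLSL 1.10
 * compiler must reject the marked line. */
static const char *lax_only_src =
    "uniform sampler2D tex;\n"
    "varying vec2 uv;\n"
    "void main() {\n"
    "    float scale = 2;  /* implicit int -> float: not legal in GLSL 1.10 */\n"
    "    gl_FragColor = texture2D(tex, uv) * scale;\n"
    "}\n";

/* The portable version spells the literal out, so any conforming compiler accepts it. */
static const char *portable_src =
    "uniform sampler2D tex;\n"
    "varying vec2 uv;\n"
    "void main() {\n"
    "    float scale = 2.0;\n"
    "    gl_FragColor = texture2D(tex, uv) * scale;\n"
    "}\n";

The difference looks trivial, which is exactly the trap: nothing warns the developer until the code reaches a compiler that follows the spec.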
 
Chalnoth said:
DiGuru said:
This is happening now with the nVidia GLSL driver/compiler: it defaults to a syntax and extensions that only work on nVidia hardware. So, if a developer uses nVidia hardware to develop an OpenGL application, there is a large probability it won't run on cards from other vendors.
Very wrong.

GLSL is not statically compiled. It is always runtime compiled. This will never happen (the only possibility would be a programmer writing a shader that is simply too long to compile on PS 2.0, or uses too many resources...but that's an issue that developers have been working with for a very long time).

Others feel the same way:
I thought my comment on nVidia's implementation being "deficient" would raise an eyebrow or two, but Mark Kilgard himself... wow


quote:
Wouldn't it seem the implementation upon which the shader does not work is the deficient one?

No. Since the shader violates the spec in several places, properly compiling it without error is the wrong behavior.

The typecast-operator alone should have immediately thrown up a syntax error. But, like a good Cg compiler, it just took it.

It's like having an implementation of ARB_fragment_program that, when shadow textures are bound, does the depth compare operation in clear defiance of the spec... oh wait, nVidia's GL implementation does that too...

The point is that it is perfectly acceptable to call an implementation of an extension that does not follow the spec deficient.
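For reference, my own illustration of the typecast complaint (not taken from the linked thread): GLSL performs conversions through constructor syntax and has no C-style cast operator, so a cast that a Cg-flavoured compiler quietly accepts is a syntax error to a conforming one.

/* The shader below mixes the two forms; a conforming GLSL compiler must
 * reject the C-style cast, while the constructor form is the legal spelling. */
static const char *cast_example =
    "void main() {\n"
    "    int i = 3;\n"
    "    float a = (float)i;  /* C-style cast: not GLSL, but Cg heritage lets it through */\n"
    "    float b = float(i);  /* constructor syntax: what the spec actually defines */\n"
    "    gl_FragColor = vec4(a, b, 0.0, 1.0);\n"
    "}\n";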


quote:
Strict GLSL has a lot of deficiencies that will frustrate anyone used to C-style languages.

Which is both true and a perfectly legitimate thing to bring up when the language was being defined. And I'm pretty sure you guys did. However, you lost.

The correct decision at that point is to accept the loss and do what the spec says. It is not acceptable to violate parts of the spec just because you just don't agree with them, even if the disagreement is perfectly reasonable and rational. This confuses shader writers who need cross-platform portability. Suddenly, what seemed like a perfectly valid shader on one card fails to even compile on another.

Look, I agree that there's a lot of nonsense in glslang. I can't say I'm happy with the language; there's lots of stuff in there that looks like it was added solely to be different and weird. I probably would have preferred that Cg became the OpenGL shading language, or something similar to it. But we have to adhere to specs, even those we disagree with. If we don't, we create chaos and further weaken OpenGL.


quote:
inability to override standard library functions

Wait. It has that. I forget what you have to do, but I definitely remember reading about precisely how to do it in my OpenGL Shading Language book.


quote:
NVIDIA's GLSL implementation has a lot of Cg heritage so that constructs that make sense in C and C++ typically "just work as you'd expect" in GLSL.

But it doesn't have to. 3DLabs was "nice" (read: desperate for attention) enough to provide a full parser for glslang that would catch the vast majority of errors that nVidia's compiler lets through. The point of releasing it was to get some conformity between compilers. Apparently, you just decided to shoehorn glslang into nVidia-glslang instead.

If you're having a meeting with your lead programmer, and he comes to a decision you don't agree with, then you argue with him. Either you convince him that he's wrong or you don't. However, when the meeting is over and a decision is made, you either follow through or quit. Back-dooring the language like this is just unprofessional.

I was getting pretty stoked for an NV40-based card. But this complete and total lack of willingness to adhere to a spec, more than anything, even the news of ATi upping the number of pipes in their new chips, is sufficient reason to keep a Radeon in my computer. At least, I can be sure that any shaders I write will work anywhere...

The correct response to, "Your compiler is in violation of the spec" is not, "We don't agree with the spec because it's silly." The correct response is, "We recognise this to be an error, and we will fix the problem at our earliest convenience." I would have accepted, "Our glslang compiler was built by shoehorning our Cg compiler to accept the language. Doing this, however, did leave language constructs that Cg provides open to the glslang input path. We intend to correct this as our glslang implementation matures."


quote:
Our extended features are there just for the convenience of developers.

Extending the language is one thing, and perfectly reasonable with valid extension strings/specs. Changing its syntax, accepting syntax that isn't valid, is quite another.
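On the "valid extension strings" point, the conventional guard looks something like the sketch below (the extension name and the empty render paths are placeholders): query GL_EXTENSIONS at run time and only take the vendor-specific path when the extension is actually advertised, instead of treating vendor-only behaviour as the baseline.

#include <string.h>
#include <GL/gl.h>

/* Returns non-zero if 'name' appears in the GL_EXTENSIONS string. A production
 * version should match whole tokens rather than substrings, so that a check
 * for "GL_EXT_foo" is not satisfied by "GL_EXT_foo2". */
static int has_extension(const char *name)
{
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    return exts != NULL && strstr(exts, name) != NULL;
}

void choose_render_path(void)
{
    if (has_extension("GL_NV_fragment_program2")) {
        /* vendor-specific path, guarded by an advertised extension */
    } else {
        /* portable ARB / GLSL path */
    }
}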
 