Which API is better?

Poll: Which API is Better?

  • DirectX9 is more elegant, easier to program
  • Both about the same
  • I use DirectX mainly because of market size and MS is behind it

Total voters: 329
DemoCoder said:
Apparently you are not aware how most software developer divisions are run. Do you work or have you worked at any software company?

Not at a software company...but I develop software for my group.

Let me put it this way: Do you think each and every API feature is analyzed for ROI?

Not exactly, no. But something as significant as the structure of the API / Compiler / Driver? Of course.

I might not examine the ROI for every feature that I code, but you can bet there is an examination of that for the platform.

At best, TIME constraints come into it: e.g. "We need to deliver this product by Q4, and we must sort features by most important, and which must be delayed to a following release"

But DC, TIME is a factor for ROI. The faster you can get it out, the more "benefit" you get for it.

If you're saying one reason why MS did what they did might be due to time, I'd say that's a completely valid possibility. (Though I'm not sure how supplying an "intermediate compiler" is faster than not supplying any compiler at all.) Now you'd have to argue that consumers would be better off with delaying DX9 HLSL support. (And yes, there are of course arguments for and against that, but it's certainly not a given, and it's certainly arguable that IHVs that have DX9 hardware on the market want a platform available ASAP that supports it.)
 
Joe DeFuria said:
Chalnoth said:
I'd be willing to bet that the reason is simple: I don't think MS did most of the original work on HLSL. I think nVidia offered Cg to Microsoft for use in DX9, and Microsoft said, "sure," and tweaked a couple of things about the language.

So then CG compiles to an intermediate assembly as well?
Yes, with multiple targets:

http://developer.nvidia.com/object/cg_toolkit.html
* vs_1_1 for DirectX 8 and DirectX 9
* vs_2_0 and vs_2_x for DirectX 9
* ps_1_1, ps_1_2 and ps_1_3 for DirectX 8 and DirectX 9
* ps_2_0 and ps_2_x for DirectX 9
* arbvp1 [OpenGL ARB_vertex_program]
* arbfp1 [OpenGL ARB_fragment_program]
* vp20, vp30 [NV_Vertex_program 1.0 and NV_Vertex_program 2.0]
* fp30 [NV30 OpenGL fragment programs]
* fp20 [NV_register_combiners and NV_Texture_shader)
Anyway, Cg always compiles to one of a number of different intermediate assembly languages, based upon the profile selected. There's no direct compiling to the hardware. Cg also supports runtime compiling of programs.
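The layering Chalnoth describes can be sketched in a few lines. This is a toy model in Python, not the Cg toolkit's actual API: the profile names come from the list above, the target descriptions are mine, and the real lowering step is elided entirely.

```python
# Toy model of profile-based compilation: the front end never emits
# hardware instructions; it only lowers source to whichever published
# intermediate language the chosen profile names. The driver (or GL
# extension) behind that intermediate language handles the metal.
PROFILE_TARGETS = {
    "vs_1_1": "DirectX 8/9 vertex shader assembly",
    "ps_2_0": "DirectX 9 pixel shader assembly",
    "arbfp1": "OpenGL ARB_fragment_program text",
    "fp30":   "NV30 OpenGL fragment program text",
}

def compile_shader(source: str, profile: str) -> str:
    """Lower `source` to the intermediate language of `profile`.
    Raises ValueError for profiles the compiler doesn't know."""
    if profile not in PROFILE_TARGETS:
        raise ValueError(f"unknown profile: {profile}")
    # Real lowering elided; we just tag the output with its target.
    return f"[{PROFILE_TARGETS[profile]}] {source}"

print(compile_shader("out = tex2D(s, uv);", "ps_2_0"))
```

The point of the model: swapping the profile string changes which already-defined interface the output targets, but never produces raw hardware code.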
 
Bingo. Joe, the first DX9 SDK didn't even have any documentation for HLSL at all! It was clear that MS tacked on HLSL near the very end of the cycle and rushed it out. It was clearly not part of DX9 from the beginning, but added late in the devrel cycle. None of the early DX9 slides even mentioned HLSL, and MS didn't start talking about it until after NVidia started pushing Cg.
 
Chalnoth said:
Anyway, Cg always compiles to one of a number of different intermediate assembly languages, based upon the profile selected. There's no direct compiling to the hardware.

That's true, but the NV30 fragment programs are pretty much directly mapped to the metal so in this case, the intermediate language is the destination language.
 
DemoCoder said:
Chalnoth said:
Anyway, Cg always compiles to one of a number of different intermediate assembly languages, based upon the profile selected. There's no direct compiling to the hardware.

That's true, but the NV30 fragment programs are pretty much directly mapped to the metal so in this case, the intermediate language is the destination language.
With the only caveat being that this is OpenGL-only.
 
DemoCoder said:
BTW Joe, I am not "anti"MS. I love many of MS's products. But I am a software developer, and despite the fact that I like to use their products, I am not "impressed" by the quality of MS's APIs, architecture, or documentation. A word comes to mind when looking at much of the MS Win32 APIs: HACK.

I am a software developer too.

And one thing I know is that, as a developer, when trying to maintain backward compatibility (implementing new changes / features while not breaking already existing code or data that relies on past infrastructure which didn't take into account the new features they are now asking for), you end up doing lots of "hack" type things.

Why?

Because the customer wants as little "impact" to him as possible. The customer doesn't want to hear how "elegant" the code is, how "robust" it is, etc. The customer wants it now, and wants to spend as little up front to get it.

Yes, I know all about educating the customer and trying to convince them and illustrate that "a little patience and a little more up front cost will save more in the long run...." As I'm sure you know, that doesn't always, or even usually, work. At least not with my customers.

Think of it this way: You might enjoy riding a certain sportscar, but when you lift the hood, you see that the internals of the car are a mess, and it works in spite of itself, but it is not something you, as a mechanic, would respect.

Believe me...I understand!

Now think of it this way.

I enjoy riding in a certain sports car...and I don't give a rat's ass what's under the hood. :)

I have no reason to dispute that from a developer point of view, the GL model and API is a better solution.

This doesn't mean that it's better for consumers.
 
Chalnoth said:
DemoCoder said:
Chalnoth said:
Anyway, Cg always compiles to one of a number of different intermediate assembly languages, based upon the profile selected. There's no direct compiling to the hardware.

That's true, but the NV30 fragment programs are pretty much directly mapped to the metal so in this case, the intermediate language is the destination language.
With the only caveat being that this is OpenGL-only.

OK...so what's going on here.

DC seems to be claiming that the "intermediate" language that CG compiles to is pretty much "to the metal."

Chalnoth seems to be saying that's only the case for GL.

If it's "to the metal", then CG's compiler doesn't seem to be doing a particularly better job than HLSL with, for example, Tomb Raider. So where's all the performance gains?
 
Colourless said:
Multipass behind your back + Alpha Blending = One way trip to the hot place
It can be done, given the 'right' circumstances. Deterministic stuff, the driver has all the knowledge it needs for that (current blend mode, depth/stencil state and buffer masks methinks). If it can't be guaranteed to work, shader compilation would fail (which is okay given the trial & error mechanism that's already provided).

I'm actually not too fond of automated multi-pass splitting and IIRC GLslang doesn't even allow it. Anyone care to clear this up?
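The "right circumstances" above can be phrased as a small decision procedure. The sketch below is an illustrative Python toy, not any real driver's logic — the limit check, the scratch-target rule, and the compile-failure path are all assumptions modeling the trial & error mechanism mentioned in the post:

```python
# Toy model: a driver may split one shader into several passes behind
# the application's back only if it can park intermediate results in a
# scratch render target, keeping the user-visible framebuffer (and any
# blending against it) untouched until the final pass.
def compile_with_splitting(num_instructions, hw_limit, has_scratch_target):
    """Return the number of passes used, or raise if no safe split exists."""
    if num_instructions <= hw_limit:
        return 1  # fits in one pass; nothing to split
    if not has_scratch_target:
        # Splitting would have to stage intermediates in the visible
        # framebuffer, corrupting blending: fail compilation instead.
        raise RuntimeError("cannot split safely: no scratch render target")
    return -(-num_instructions // hw_limit)  # ceiling division
```

Under this model a too-long shader with blending enabled and no spare target fails deterministically at compile time, which is the behavior the post argues is acceptable.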
 
Joe DeFuria said:
Humus said:
As have been said a number of times already on this forum, compiler technology is a standard course taught at every university. Finding good compiler people is simpler than finding good driver developers.

So, XGI, PowerVR, and S3 should have no problem then, and should have full OpenGL 2.0 and GLSL supported compilers along with everyone else..and at the same time they have DX9 support. Gotcha.

Care to wager on that?
They'll most certainly be late to the party, just as they were (are) late with shader support on the DirectX Graphics front. You really need the hardware first to start fiddling around with the compilers. No surprise here.

I'd really like to see the "small guys" collaborating on that, to a certain degree (say, parser, syntax check, dead code elimination; stop there and do the rest alone). Or they could just buy some know how from somewhere. You know, there are many ways to get some piece of software, you don't have to do everything in house if you lack the expertise.

3Dlabs already has a 'reference' compiler online with source code, and I'm sure they're willing to license it. It's a bit clunky AFAICS but it's available.
 
zeckensack said:
Care to wager on that?
They'll most certainly be late to the party, just as they were (are) late with shader support on the DirectX Graphics front. You really need the hardware first to start fiddling around with the compilers. No surprise here.

Agree...the question is, will the small guys have "full" DX9 HLSL support before or after "full GLSLang" support?

I'd really like to see the "small guys" collaborating on that, to a certain degree (say, parser, syntax check, dead code elimination; stop there and do the rest alone).

To some (certainly not extensive) degree, isn't some of this what MS gets you with the intermediate compiler?
 
Joe DeFuria said:
Humus said:
Huh? Wtf are you talking about? So the driver can accept it, then freely do something other than what the shader says?

Of course...the driver has the final say in everything. (Not limited to GL).

But if it does something else than what the spec says it's non-conformant.

Joe DeFuria said:
Then your driver is flawed and you will have to live with dissatisfied customers if it becomes a problem.

Wrong. Then the software dev / other IHV is wrong, yet the "correct" IHV has to live with dissatisfied customers because of it.

This is just cynical. If you as a software developer point out a flaw in a IHV's driver they are going to fix it rather than losing developer mindshare. There's no advantage of having non-conformant drivers.

Joe DeFuria said:
Should we also standardize exact sample positions for AA too? I mean, otherwise a vendor might implement it "wrong" and we get inconsistent results?

Why? Do software devs expect something other than "reduced aliasing" when AA is enabled?

No, which is my point. Small acceptable variance within the spec. Different precisions across vendors is another "inconsistency". I think you should post an example of any "inconsistency" that's going to matter and which the DX model would do better.

Joe DeFuria said:
I'd love to see you take any Matrox card, a 3dfx 5500, a Kyro II, S3's shipping chips, Intel's solutions.... and run it on a variety of GL apps and games vs. the same chips on a variety of DX games.

Out of these cards I would only distrust the 5500 when it comes to OGL, since its support was always subpar. Matrox isn't excellent, but ok. I have no experience with, or heard any word about, bad Kyro GL drivers. According to what I hear on opengl.org, most people seem to be for the most part quite ok with Intel's drivers too.
 
Joe DeFuria said:
DemoCoder said:
MS has lots of money to waste....

Wrong. No company has money to waste. This doesn't mean that companies don't waste money, of course. In other words, no company invests resources without at least some idea that there is a return for it.

So what is the (perceived) benefit to Microsoft for supporting and developing a DX9 HLSL to intermediate?
Same guaranteed pass/fail criteria for all drivers supporting the specific shader profile. Not that this requires a layered compiler approach, mind you. And the driver can still mess it all up.

I don't like it. But that's what I see as the intended benefit of the whole idea. One developer I've had lots of personal API wars with named this as the single biggest reason for him to use DXG over GL.
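The "same guaranteed pass/fail criteria" point can be made concrete with a toy validator. The ps_2_0 caps below (64 arithmetic instructions, 32 texture instructions, 4 levels of dependent texture reads) are the commonly documented profile limits; everything else is an illustrative Python sketch, not Microsoft's actual validator:

```python
# Because HLSL compiles against a fixed profile, a shader that passes
# these checks passes them identically everywhere -- the driver only
# ever receives already-validated ps_2_0 assembly.
PS_2_0_LIMITS = {"arithmetic": 64, "texture": 32, "dependent_reads": 4}

def validates_for_ps_2_0(arith, tex, dep_reads):
    """True iff the instruction counts fit the ps_2_0 profile caps."""
    return (arith <= PS_2_0_LIMITS["arithmetic"]
            and tex <= PS_2_0_LIMITS["texture"]
            and dep_reads <= PS_2_0_LIMITS["dependent_reads"])
```

Note this says nothing about performance or driver correctness downstream, which is exactly the caveat in the post: the driver can still mess it all up.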
 
JohnH said:
What F-Buffer support on 9700? Or is it really there and they just don't publish the fact? I think you might be able to manage without it but...

And naturally I could supply an arbitrarily complex shader and it's guaranteed to work without bugs and at acceptable performance on all HW irrespective of its level of shader support? If you think that's the case then you might want to check the tint you've had applied to your glasses!

For example, just consider dynamic flow control: I could write an instruction sequence which would cause current ATi HW to have to break the shader down into an instruction per pass. Could it do it? Yes. Will it be usably fast? No.

Start mixing subroutine calls and looping and it gets a whole lot messier.

On top of this, the suggestion that GLSlang is supposed to be a simpler option that reduces bugs is not compatible with having to compile an arbitrary shader onto a piece of HW that has few of the features required to support it, i.e. ALL current hardware.

John.

Who said anything about F-buffer on 9700? I didn't even mention the F-buffer at all. Ashli solved the problem with additional render targets.

Of course they aren't going to be able to support everything in hardware on the R300 chip, but a good subset. I would guess they won't even try accelerating dynamic flow control. It will be done in software. But supporting long shaders shouldn't be a problem.
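A minimal sketch of the "long shaders via render targets" idea — cutting an instruction stream into hardware-sized chunks, with temporaries parked in off-screen targets between passes. This is illustrative Python, not ATI's Ashli; the 64-instruction default is just a plausible per-pass budget:

```python
# Split a too-long shader into passes that each fit the hardware's
# instruction limit. Conceptually, each pass writes its live temporary
# values to an off-screen render target, which the next pass reads back.
def split_into_passes(instructions, per_pass_limit=64):
    """Return the instruction list cut into per-pass chunks, in order."""
    return [instructions[i:i + per_pass_limit]
            for i in range(0, len(instructions), per_pass_limit)]

long_shader = [f"op{i}" for i in range(150)]
passes = split_into_passes(long_shader)
# 150 instructions at 64 per pass -> 3 passes (64 + 64 + 22)
```

The mechanics are simple; the hard part (which the surrounding posts argue about) is whether the split result runs at acceptable speed, especially once flow control enters the picture.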
 
Joe DeFuria said:
Wrong. No company has money to waste. This doesn't mean that companies don't waste money, of course. In other words, no company invests resources without at least some idea that there is a return for it.

So what is the (perceived) benefit to Microsoft for supporting and developing a DX9 HLSL to intermediate?

MS only needs some engineers saying "we'll do it this way" and it will be done that way. The management isn't going to question design decisions in the API.
 
Joe DeFuria said:
DemoCoder said:
He's talking about D3DX and MS's compiler. If MS updates the compiler to handle newer hardware in a better fashion, it won't do jack for all those games you statically linked it into.

Correct. But it also won't f*ck up the game on existing hardware that it already runs on with no problem.

The IHV has some burden to not only take advantage of new code, but to run old code faster.

But a regular driver update can just as well f*ck up the game. There's no cure for that. We're not gaining anything.
 
Joe DeFuria said:
Chalnoth said:
I'd be willing to bet that the reason is simple: I don't think MS did most of the original work on HLSL. I think nVidia offered Cg to Microsoft for use in DX9, and Microsoft said, "sure," and tweaked a couple of things about the language.

So then CG compiles to an intermediate assembly as well?
Yes. Cg has a lot of common ground with the DirectX thingy. It compiles to already defined API interfaces (PS1.x/PS2.x assembly, NV_register_combiners&NV_texture_shader or ARB_fragment_program on the GL side of things). It's also rather 'fixed version' in nature, ie it isn't updated by drivers. A Cg DLL of a given version ships with the game and stays.

In some document NVIDIA also more or less clearly stated that runtime compilation (which, to me, implies hardware detection and automatic target profile/optimization selection), is not the intended usage model for Cg. Lost the link ;)

edit: I forgot to mention the NV_fragment_program target on OpenGL. This one's GeforceFX specific, of course, but it's still an already exposed API programmers can opt to work with directly.
 
zeckensack said:
I'm actually not too fond of automated multi-pass splitting and IIRC GLslang doesn't even allow it. Anyone care to clear this up?

It's of course allowed so long as it doesn't have side-effects, such as destroying framebuffer contents for blending. You can use separate render targets.
 
Humus said:
This is just cynical. If you as a software developer point out a flaw in a IHV's driver they are going to fix it rather than losing developer mindshare. There's no advantage of having non-conformant drivers.

This is just unrealistic.

If you're nVidia, and joe schmoe developer tells you: "Hey, I found out that your driver is non-conformant. My original code (which works on your drivers) is wrong, and ATI's drivers, which do conform, don't accept it. I want to change my code to conform, but I find that it then breaks on your drivers."

Does nVidia tell joe schmoe "OK, we'll change it", or do they say "put a code path in your game...we'll break lots of stuff if we change our behavior now...we'll get around to it at some point"?
 
Joe DeFuria said:
Agree...the question is, will the small guys have "full" DX9 HLSL support before or after "full GLSLang" support?

To some (certainly not extensive) degree, isn't some of this what MS gets you with the intermediate compiler?

Well, support for HLSL is just support for assembly pixel and vertex shaders. Sure, it may be quicker to write naive support for those than naive support for GLSL, but this advantage goes away with time since you'll still have to write an optimizing compiler if you want performance.
 
Humus said:
Well, support for HLSL is just support for assembly pixel and vertex shaders. Sure, it may be quicker to write naive support for those than naive support for GLSL, but this advantage goes away with time since you'll still have to write an optimizing compiler if you want performance.

Hmmm.....That is more or less my entire point. ;)
 