Which API is better?

Poll: Which API is Better?

  • DirectX9 is more elegant, easier to program
  • Both about the same
  • I use DirectX mainly because of market size and MS is behind it

Total voters: 329
DemoCoder said:
You are operating under the delusion that the DX9 assembly is so close to the underlying hardware that perhaps you just need to rename some of the instructions and it will work.

I am under no such delusion. Perhaps you can try to explain it better. For what reason does Microsoft compile HLSL into an intermediate representation at all?

Why does MS "waste" the money and resources compiling into this "intermediate"? Why not just hand the HLSL source over to the driver and wash their hands of it? It would be all win-win for Microsoft to do it that way, right? The IHVs would be happy, and MS wouldn't spend its own money doing it... so why don't they?
 
JohnH said:
Arbitrary depth dynamic flow control, arbitrary depth call stack ?? NV cannot do these, they will need to resort to brute force multi-passing, maybe even with host interaction to do the conditionals properly.

John.

It's only needed for loop conditionals. Arbitrary-depth dynamic flow control can be done quite easily for regular branches, and ditto for an arbitrary-depth call stack: as long as the number of "live" registers and the instruction count stay within your limits, you can inline.

Function calls are likely to be composed of lots of small functions (e.g. smoothstep, faceforward, etc.) which don't add to the overall register load. Moreover, because all functions in HLSL are pure (F(X) always returns the same thing no matter how many times it is called), it is likely that many calls will be eliminated outright. (E.g. if anyone is careless enough to call normalize(N) more than once on the same N, it won't result in another call.)
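A minimal sketch of what that elimination looks like (the shader and names are made up for illustration, written the way a D3D9 app would ship it):

[code]
// Illustrative HLSL embedded as a C++ string literal: because HLSL
// functions are pure, the compiler can treat two calls with the same
// argument as one computation (classic common-subexpression
// elimination), so the second normalize costs nothing.
const char* hlslSrc =
    "float4 main(float3 N : TEXCOORD0, float3 L : TEXCOORD1) : COLOR \n"
    "{                                                               \n"
    "    float3 a = normalize(N);  // dp3/rsq/mul emitted once       \n"
    "    float3 b = normalize(N);  // folded into 'a', no extra cost \n"
    "    return float4(a * saturate(dot(b, L)), 1.0);                \n"
    "}                                                               \n";
[/code]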
 
zeckensack said:
JohnH said:
Err, but this whole thread originated from OGL2.0 vs DX9, i.e. <something for which no HW is available and which no one ships in driver form yet> vs <the here and now of DX9 SW and HW>. Given that perspective you could say this whole thread is irrelevant!
Agreed. I'd feel much more comfortable with a "DX9 vs OpenGL" poll (no version attached to OpenGL at all), that would be much more real worldish.
But it's still a stupid poll; as has been stated before, people tend to have a favourite, and there's generally not much point trying to turn them around. However, a discussion of the two approaches the APIs use for compilation is actually useful for the advancement of both.

JohnH said:
Chalnoth, I think you continue to miss the point here. The fact is, with current HW the information loss when using the current DX profiles doesn't matter, as the HW couldn't take advantage of it anyway (ignoring the bug with expansion of some macros).
There will be a new generation of hardware, sometime. Requiring old applications be patched up to a new interface to perform well on new hardware is backwards, and not an attractive business proposal anyway.
The responsibility for getting optimum performance out of any given piece of silicon should IMO rest with the people that have a vested interest in this undertaking: IHVs; or more precisely their driver teams. This is the only way to make sure it gets done.
This is not correct. The old "interfaces" do not require you to patch up your code when new HW is released; it just works, and the likelihood is that if you haven't gone and used "complex" defined functions everywhere, it will run faster anyway, as there will always be a push to improve things at this level. RISC vs CISC might be an interesting parallel here.

JohnH said:
wrt the normalisation: if the HW can't do an rsq and a mul for the normalisation, then what do you think it's going to do?
It doesn't matter. I could just as well ask, "If the hardware can't draw a full-screen quad, how is it going to clear buffers?" Semantics are good.

Sorry, I don't understand this comment.

John.
 
Joe DeFuria said:
I am under no such delusion. Perhaps you can try and explain it better. For what reason does Microsoft compile HLSL into an intermediate representation at all?

The intermediate is DX9 assembly language. DX8's assembly made sense because people needed a textual language for programming register combiners. MS continued with DX9 by offering an assembly language first, then designed a compiler to generate that assembly, because even assembly programming is made easier by a procedural language on top.

MS also has a .NET compiler which compiles C# into MSIL (Microsoft Intermediate Language), which is then COMPILED a second time by the CLR at runtime. Ditto for Sun's Java: javac compiles to Java bytecode, then the JRE compiles that to 80x86 on your PC.
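The DX9 pipeline has the same two-stage shape. A minimal sketch using the standard D3DX entry points (error handling trimmed):

[code]
#include <d3dx9.h>   // DX9 SDK; d3dx9.lib is the compiler the app links in
#include <string.h>

// Stage 1 runs inside the application via the statically linked D3DX
// compiler; stage 2 runs inside the driver, like the CLR recompiling MSIL.
IDirect3DPixelShader9* CreateFromHLSL(IDirect3DDevice9* dev, const char* src)
{
    LPD3DXBUFFER tokens = 0;
    // Stage 1: HLSL -> ps_2_0 token stream (the "DX9 assembly" intermediate)
    if (FAILED(D3DXCompileShader(src, (UINT)strlen(src), 0, 0, "main",
                                 "ps_2_0", 0, &tokens, 0, 0)))
        return 0;

    // Stage 2: behind this call, the driver compiles the tokens again,
    // this time into its own native microcode.
    IDirect3DPixelShader9* ps = 0;
    dev->CreatePixelShader((DWORD*)tokens->GetBufferPointer(), &ps);
    tokens->Release();
    return ps;
}
[/code]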



Why does MS "Waste" the money and resources compiling into this "intermediate"? Why not just hand the HLSL source over to the driver, and wipe their hands free of it? It's all win-win for Microsoft to do it that way, right? IHVs would be happy? MS doesn't spend their own money doing it....Why don't they then?

MS has lots of money to waste, and not getting things right the first N times doesn't hurt them. We're up to, hmm, the ninth revision of DirectX. Why did they waste money on the first eight?
 
JohnH said:
This is not correct. The old "interfaces" do not require you to patch up you code when new HW is released, it just

He's talking about D3DX and MS's compiler. If MS updates the compiler to handle newer hardware in a better fashion, it won't do jack for all those games you statically linked it into.

JohnH said:
wrt the normalisation: if the HW can't do an rsq and a mul for the normalisation, then what do you think it's going to do?
It doesn't matter. I could just as well ask, "If the hardware can't draw a full-screen quad, how is it going to clear buffers?" Semantics are good.

Sorry, I don't understand this comment.

He means the hardware should receive high-level instructions with semantic meaning, i.e. "CLEAR THE SCREEN", and the HW figures out how to do it, versus "do it exactly like this". Imagine hardware that can't draw full-screen quads: it could still clear the screen if it had a special "HW-accelerated screen clear" function.

The analogy is: today, HW might not have accelerated normalization. So what? Doesn't matter. Just tell the HW to normalize. Today it will normalize using RSQ; tomorrow it will do it using whatever technique it wants. By removing the specific information that you are "intending to normalize X", you make it much harder for the driver to utilize specialized HW in the future to do the normalize. (Hell, you might even get ANGRY if it replaced your DOT/RSQ sequence with a HW normalize that has different error characteristics.)
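You can watch the intent disappear yourself. A small sketch using the stock D3DX compile/disassemble calls (the printed listing is whatever fxc emits; expect a dp3/rsq/mul or nrm-macro pattern, not the word "normalize"):

[code]
#include <d3dx9.h>   // DX9 SDK; link d3dx9.lib
#include <stdio.h>
#include <string.h>

// Compile a one-line normalize and print the ps_2_0 stream the driver
// will actually see: a three-instruction pattern is all that survives
// of the programmer's intent to normalize.
int main()
{
    const char* src =
        "float4 main(float3 N : TEXCOORD0) : COLOR "
        "{ return float4(normalize(N), 1.0); }";

    LPD3DXBUFFER code = 0, dis = 0;
    if (SUCCEEDED(D3DXCompileShader(src, (UINT)strlen(src), 0, 0,
                                    "main", "ps_2_0", 0, &code, 0, 0)))
    {
        D3DXDisassembleShader((DWORD*)code->GetBufferPointer(), FALSE, 0, &dis);
        printf("%s\n", (const char*)dis->GetBufferPointer());
    }
    return 0;
}
[/code]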



 
DemoCoder said:
MS has lots of money to waste....

Wrong. No company has money to waste. This doesn't mean that companies don't waste money, of course. In other words, no company invests resources without at least some idea that there is a return for it.

So what is the (perceived) benefit to Microsoft for supporting and developing a DX9 HLSL to intermediate?
 
Colourless said:
Multipass behind your back + Alpha Blending = One way trip to the hot place
Why? All the passes before the final one wouldn't even bother writing to the framebuffer (since you'd want an FP intermediate format), so the alpha blending would only be done on the final pass, once the final color value is found.
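In D3D9 terms, something like this (resource names are assumptions for illustration):

[code]
#include <d3d9.h>

// Sketch: intermediate passes go to a float render target with blending
// off; only the final pass blends against the real back buffer, so the
// one real alpha blend still happens exactly once.
void RenderSplitShader(IDirect3DDevice9* dev,
                       IDirect3DSurface9* fpTarget,    // e.g. D3DFMT_A16B16G16R16F
                       IDirect3DSurface9* backBuffer)
{
    // Passes 1..N-1: accumulate partial results, no framebuffer writes.
    dev->SetRenderTarget(0, fpTarget);
    dev->SetRenderState(D3DRS_ALPHABLENDENABLE, FALSE);
    // ... draw geometry with the first portions of the split shader ...

    // Final pass: read fpTarget as a texture, finish the computation,
    // and let the blender combine the finished color with the back buffer.
    dev->SetRenderTarget(0, backBuffer);
    dev->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    // ... draw the final pass ...
}
[/code]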
 
DemoCoder said:
He's talking about D3DX and MS's compiler. If MS updates the compiler to handle newer hardware in a better fashion, it won't do jack for all those games you statically linked it into.

Correct. But it also won't f*ck up the game on the existing hardware it already runs on without problems.

The IHV bears some of the burden here: not only taking advantage of new code, but running old code faster.
 
Xmas said:
JohnH said:
But you haven't proven the converse either, and no one has successfully argued against my point on the profiles.
I'll try :)

JohnH said:
Xmas said:
JohnH said:
Many of the claimed benefits do exist. Tell me how you guarantee that something written for GLSlang and tested on one driver/HW is guaranteed to work on any piece of HW in the field? The profiles used for the intermediate target are a good step towards fixing this; they need beefing up a bit by inclusion of a few "external" caps, but as I said to Humus, the 3.0 profile does this.
One of the easiest ways to solve this would be that each IHV offers a download of small "validation tools" that share their code with the shader validation mechanism in the driver. Then a developer doesn't need the hardware to know whether it will run or not. It has the implicit assumption that upcoming hardware is at least as capable as its predecessor, but I don't think that's a problem.
Having a standard is supposed to help you avoid having to do things like that.

The fact is you have to provide paths which run on lesser variants of HW anyway; having a set of profiles that defines those variants should make it a lot less hit and miss. It's either that or back to detecting vendor and device IDs (not that we've managed to get away from that yet).

John.
I think validation tools from every IHV would be a far better way of defining the limits of the hardware than a profile. Profiles are simply not able to express the kinds of limitations and capabilities of today's hardware, so it comes down to the lowest common denominator. You could even end up with a shader that would run on any available and forthcoming DX9-level hardware, but simply can't be used because the profiles are too restrictive.

From the development perspective, the two variants aren't that much different. But the validation tool approach could potentially yield better shaders, because it gives the IHVs the ability to express the hardware's limitations and capabilities in the most accurate way.

With profiles, you decide on your target profile, write a high-level shader, and simplify it until it compiles using the profile.

With validation tools, you can do the same: decide which tools (representing certain hardware) you want to target, then write the shader and simplify it until all the validation tools report success. BUT you could also keep all the "side-product" shaders you get while simplifying, and JIT-compile them in your application if the hardware proves capable of running them.
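A sketch of what I mean (all names hypothetical; no such IHV validators actually ship today):

[code]
#include <algorithm>
#include <string>
#include <vector>

struct ShaderVariant { std::string source; int quality; };

// Stand-in for an IHV-supplied validator that shares its code with the
// driver's own shader validation (hypothetical interface).
bool VendorValidate(const std::string& source);

static bool MoreCapable(const ShaderVariant& a, const ShaderVariant& b)
{
    return a.quality > b.quality;
}

// Try the most capable "side-product" variant first and fall back until
// one validates, instead of pre-shrinking everything to a profile's LCD.
std::string PickBestShader(std::vector<ShaderVariant> variants)
{
    std::sort(variants.begin(), variants.end(), MoreCapable);

    for (size_t i = 0; i < variants.size(); ++i)
        if (VendorValidate(variants[i].source))  // driver-accurate limits check
            return variants[i].source;           // JIT-compile this one at runtime

    return variants.back().source;               // last resort: simplest fallback
}
[/code]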

Hmm, although you still need to provide the fallbacks for HW and compilers/validators that you've never seen; e.g. you could end up excluding a piece of HW from running something unnecessarily for the most trivial of reasons/bugs. Basically I think this still only works well/reliably given a profile-like framework, as you end up being able to minimise your targets. I'd suggest validation levels at DX-equivalent levels: 2.0, 3.0, full GLSlang, DX10 generic (or whatever), with no expectation of HLSL support on HW prior to DX9 VS/PS 2.0, just to keep things simple. But then again, I'm starting to babble a bit now.

Must go and do some work..
John.
 
JohnH said:
Arbitrary depth dynamic flow control, arbitrary depth call stack ?? NV cannot do these, they will need to resort to brute force multi-passing, maybe even with host interaction to do the conditionals properly.

More in Response to Humus below..

John.
Obviously nVidia doesn't yet have these. The point was that nVidia supports programs long enough that going multipass won't be a huge performance hit (unless lots of stuff needs to be reprogrammed in the second pass... but by that point, we're not talking about realtime rendering anyway).
 
Joe DeFuria said:
"if there's no difference between compiling from DX9 'intermediate representation' vs. compiling from GL 'high level representation', then why are we having this discussion

says who, pardon me
 
Joe DeFuria said:
DemoCoder said:
MS has lots of money to waste....

Wrong. No company has money to waste. This doesn't mean that companies don't waste money, of course. In other words, no company invests resources without at least some idea that there is a return for it.

So what is the (perceived) benefit to Microsoft for supporting and developing a DX9 HLSL to intermediate?

Why do you even think any alternative was on the table? What's the perceived benefit of MS using COM interfaces, Hungarian Notation, execute buffers, and locks? If you've already got an assembly language, why not produce a compiler for it?

So what's this coming to, Joe? You have no technical backup for your argument, so you appeal to authority? MS did X, ergo, it must be the best option? ARB is composed of IHVs, and they voted for the OGL2.0 architecture. Obviously, if they thought it had huge implications for their ICD teams, they would have voiced them in the resolutions and minutes, right? Oh, but ARB sucks, and MS gets everything right the first time?
 
Joe DeFuria said:
DemoCoder said:
He's talking about D3DX and MS's compiler. If MS updates the compiler to handle newer hardware in a better fashion, it won't do jack for all those games you statically linked it into.

Correct. But it also won't f*ck up the game on the existing hardware it already runs on without problems.

The IHV bears some of the burden here: not only taking advantage of new code, but running old code faster.

Then they should never ship new device drivers. Ever heard of regressions? Boy, here we are again, at the same point we were a few days ago.

Besides the fact that you are just plain wrong (MS could simply ship both the new and the old compiler, and keep running the old one on older HW for compatibility if bugs were detected), MS's compiler produces DX9 PS 2.0, and therefore any legal PS 2.0 program that runs on new hardware should also run on old HW; otherwise the old HW is not WHQL compliant.

Of course, you don't seem to care that any updates to a DX9 driver are far more likely to f*ck up old games.
 
DemoCoder said:
Why do you even think any alternative was on the table?

There are always alternatives on the table. Why do you even think there was no alternative?

What's the perceived benefit of MS using COM interfaces, Hungarian Notation, execute buffers, and locks? If you've already got an assembly language, why not produce a compiler for it?

You're making no sense. Why doesn't MS do nothing? Just define the HLSL language spec and tell the IHVs: "OK... have at it!"

So what's this coming to, Joe? You have no technical backup for your argument...

No, I have common sense. Companies don't invest money without an eye for a return on their investment.

MS did X, ergo, it must be the best option?

Sigh.

No, MS did X for a reason. What is that reason?

ARB is composed of IHVs, and they voted for the OGL2.0 architecture.

Therefore, it must be the best option?

Obviously, if they thought it had huge implications for their ICD teams, they would have voiced them in the resolutions and minutes, right? Oh, but ARB sucks, and MS gets everything right the first time?

Pardon me, but stop being a jerk-off. How many times have I said that there are pros and cons to each approach? I've never said that MS is god, or that the ARB sucks.

Far from it.

I said that MS does it for a reason.
 
Wow DC... someone seems to have pressed your buttons today. Are you really just that anti-MS?

DemoCoder said:
Then they should never ship new device drivers.

Of course they should.

Besides the fact that you are just plain wrong (MS could simply ship both the new and the old compiler, and keep running the old one on older HW for compatibility if bugs were detected),

Yes, that sounds like a support windfall. :rolleyes:

Of course, you don't seem to care that any updates to a DX9 driver are far more likely to f*ck up old games.

More so than new GL drivers?
 
Joe DeFuria said:
I said that MS does it for a reason.
I'd be willing to bet that the reason is simple: I don't think MS did most of the original work on HLSL. I think nVidia offered Cg to Microsoft for use in DX9, and Microsoft said, "sure," and tweaked a couple of things about the language.
 
oops, almost missed this one

JohnH said:
darkblu said:
<snip>
that without a doubt is a valid 'here'n'now' statement. but we must try looking in perspective -- after all GLslang is a perspective, and i believe that's what this whole thread is about - looking in perspective.
Err, but this whole thread originated from OGL2.0 vs DX9, i.e. <something for which no HW is available and which no one ships in driver form yet> vs <the here and now of DX9 SW and HW>. Given that perspective you could say this whole thread is irrelevant!

john, by 'in perspective' above i meant 'forward-looking'. could have worded it more clearly, i admit.
 
Chalnoth said:
I'd be willing to bet that the reason is simple: I don't think MS did most of the original work on HLSL. I think nVidia offered Cg to Microsoft for use in DX9, and Microsoft said, "sure," and tweaked a couple of things about the language.

So then Cg compiles to an intermediate assembly as well?
 
Joe DeFuria said:
No, I have common sense. Companies don't invest money without an eye for a return on their investment.

Apparently you are not aware of how most software development divisions are run. Do you work, or have you worked, at any software company?

Let me put it this way: do you think each and every API feature is analyzed for ROI? Software design decisions are not reviewed by the CFO, and decisions such as product architecture are usually left up to product development.

If any analysis happens, it is years later; that's why companies drop support for software products only after LONG periods, even if the software was not very successful. API feature support is deprecated only much, much later.

My experience has been that a company will usually have one or two competing groups vying to do similar projects. Sometimes the company ships both; sometimes, through internal politics, one group kills off the other's project. But in no sense have I ever seen ROI come into it.

At best, TIME constraints come into it: e.g. "We need to deliver this product by Q4, so we must sort the features by importance and decide which must be delayed to a following release."


BTW Joe, I am not anti-MS. I love many of MS's products. But I am a software developer, and despite the fact that I like to use their products, I am not "impressed" by the quality of MS's APIs, architecture, or documentation. A word comes to mind when looking at much of the Win32 API: HACK.

Think of it this way: you might enjoy driving a certain sports car, but when you lift the hood you see that the internals are a mess and the car works in spite of itself. It is not something you, as a mechanic, would respect.
 
Joe DeFuria said:
Chalnoth said:
I'd be willing to bet that the reason is simple: I don't think MS did most of the original work on HLSL. I think nVidia offered Cg to Microsoft for use in DX9, and Microsoft said, "sure," and tweaked a couple of things about the language.

So then Cg compiles to an intermediate assembly as well?

It supports multiple targets:

Target FX: generates code directly for the GFFX
Target PS: generates code for the DX intermediate
Target ARB_FP: generates code for the ARB intermediate
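Through the Cg runtime that looks roughly like this (a sketch; profile availability depends on the toolkit version, so treat the exact profile enums as assumptions):

[code]
#include <Cg/cg.h>   // NVIDIA Cg runtime; link against cg.lib

// One Cg source, three back ends: pick the profile, get the output.
const char* CompileForTarget(CGcontext ctx, const char* src, CGprofile prof)
{
    CGprogram prog = cgCreateProgram(ctx, CG_SOURCE, src, prof, "main", 0);
    return cgGetProgramString(prog, CG_COMPILED_PROGRAM);
}

// Usage (illustrative):
//   CompileForTarget(ctx, src, CG_PROFILE_FP30);    // native GFFX path
//   CompileForTarget(ctx, src, CG_PROFILE_PS_2_0);  // DX9 intermediate
//   CompileForTarget(ctx, src, CG_PROFILE_ARBFP1);  // ARB_fragment_program
[/code]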
 