Microsoft's approach to DX

Razor1

Veteran
I was just thinking about some things lately and wanted to ask everyone,

How do you feel about Microsoft setting very strict standards for DX, where anything outside those specs will not be accepted?

I see the good side of it, in that it will even the playing field and help overall stability, but I also see the "soup nazi" in there too: MS is creating a system that restricts free will and free thought, and as an end result causes innovation to slow down.
 
I see the good side of it, in that it will even the playing field and help overall stability, but I also see the "soup nazi" in there too: MS is creating a system that restricts free will and free thought, and as an end result causes innovation to slow down.
I don't know about innovation slowing down. GPUs are still in their infancy compared to CPUs, and there is no common instruction set for them or anything like that. Having a common API where developers can focus on writing software instead of writing workarounds for the quirks of various cards is definitely an advantage. Now, in ten years, we'll see. But for the moment, yes, I think it's probably a good thing.
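
To make that concrete, here's the kind of per-card guesswork a common API spares you from. A rough C++ sketch: the caps fields are real D3D9, but the three render-path functions are made up purely for illustration.

    #include <windows.h>
    #include <d3d9.h>

    // Hypothetical render paths, declared here just for illustration.
    bool UseFixedFunctionPath();
    bool UsePow2TexturesOnly();
    bool UseShaderModel2Path();

    // The D3D9-era "caps dance": ask what this particular card can do
    // and branch. Under D3D10 the full feature set is guaranteed instead.
    bool PickRenderPath(IDirect3D9* d3d)
    {
        D3DCAPS9 caps;
        if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
            return false;

        if (caps.PixelShaderVersion < D3DPS_VERSION(2, 0))
            return UseFixedFunctionPath();   // card lacks SM2.0 pixel shaders
        if (caps.TextureCaps & D3DPTEXTURECAPS_POW2)
            return UsePow2TexturesOnly();    // card wants power-of-two textures
        return UseShaderModel2Path();
    }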
 
I see the good side of it, in that it will even the playing field and help overall stability, but I also see the "soup nazi" in there too: MS is creating a system that restricts free will and free thought, and as an end result causes innovation to slow down.
Direct3D 10 offers a ton of programmability. You can use the GPU to do totally different stuff, like raytracing, sound processing, weather prediction, and several things we can't even imagine today. So I don't think innovation is slowed down at all.

Besides, there's always Direct3D 11... ;)
 
I definitely agree DX10 is a large step forward, but by limiting any vendor-specific approaches (well, vendor-specific because no one else will have another vendor's feature outside of the DX specs), it forces vendors to go through MS to get anything into DX, and it would never show up until a year or two down the road. It's like the OGL committee: they have to approve it at the end to get it into OGL, but at least the functions are available beforehand.
 
I actually think I like both OGL and DX:

nVidia and ATi can fully support a DX featureset with a chip.

They can also support functionality they add beyond that through the use of an OGL extension. (Giving devs time to play with it, etc.)
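
Something like this, roughly. The extension and entry-point names below are a real NVIDIA example (GL_NV_fragment_program), but any vendor string works the same way; assumes a current GL context on Windows.

    #include <windows.h>
    #include <GL/gl.h>
    #include <cstring>

    // Probe the driver's extension string at runtime. Real code should
    // tokenize the string rather than substring-match, to avoid prefix hits.
    bool HasExtension(const char* name)
    {
        const char* exts =
            reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
        return exts && std::strstr(exts, name) != nullptr;
    }

    void TryVendorPath()
    {
        if (HasExtension("GL_NV_fragment_program"))
        {
            // Extension entry points are fetched by name, not linked.
            PROC fn = wglGetProcAddress("glProgramNamedParameter4fNV");
            // ... cast fn to the extension's function-pointer type and call it.
            (void)fn;
        }
    }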

I think in that way, OGL will actually DRIVE innovation into the next DX.

(Not right now, though. Hopefully under Khronos...)
 
Considering the super-high featureset of D3D10, IMO there's actually not that much 'spontaneous innovation' left that GL's extension system would provide. Besides, MS is going to be doing those annual (or so) refreshes of D3D10 (e.g. D3D10.1); couldn't those include a couple of features now and then?
 
So, when will DX and GPUs evolve enough that there's no more need to add to the API, and it's flexible enough to do what you damn well want?

Surely by the time GPUs are built on 32nm with over 3 billion transistors, and after a couple more DX revisions, there will be enough flexibility and abstraction that the only way forward is just to keep adding more of the same units on the chip.

Theoretically you would then just be able to rig up arrays of these things to pools of memory and do your offline rendering with them.

Surely we are heading towards that?

Your thoughts please.
 
I think the Baron is mentioning something that many folks may overlook...

Where is the "innovation" in today's CPUs? They're still chewing away at the x86 instruction set from more than a decade ago. Is that to say that the individual vendors have no free will and can't develop and innovate new technologies and features? Absolutely not.

There's still plenty of room for innovation; however, a lot of that will be driven from the developers UP -- not from the vendors DOWN. It's the same way that CPUs have been going for eons. Math coprocessors came about because developers wanted the floating-point performance, and eventually they were no longer "co-processors" but integrated into the rest of the chip. Multiple math units, multiple threads per core, and even multiple cores are all driven by market demand. How they achieved those technologies was innovative, and there's still plenty of free will and interpretation behind each vendor's approach.

I think DX10's very strict requirements are the best thing that could happen right now. Maybe this decision will need to be revisited later, but right now there are too many "wishy-washy" things that both vendors do that aren't right. It drains developers' resources and it detracts from the users' experience with the final product.
 
If it were that easy, x86 wouldn't have been changed at all in the past 15 years.
x86 hasn't changed at all in the past 15 years.
There have been (OGL-style) vendor extensions, e.g. SSE/MMX/3DNow! and AMD64, but x86 is still x86.
 
x86 hasn't changed at all in the past 15 years.
There have been (OGL-style) vendor extensions, e.g. SSE/MMX/3DNow! and AMD64, but x86 is still x86.
Not quite fifteen: about ten years ago the Pentium Pro added a rather large number of conditional move (CMOV) instructions to the core ISA, and past that point AMD and Intel periodically added new instructions for data prefetch and system calls, among others.
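
And all of those additions show up as CPUID feature bits layered on the same base ISA. A quick sketch, assuming MSVC's <intrin.h> __cpuid intrinsic (the function and array names are mine, just for illustration):

    #include <intrin.h>

    // Every one of these "extensions" is discovered through the same x86
    // CPUID instruction; the base ISA never had to break to add them.
    // (Production code should verify leaf 0x80000001 exists first.)
    void DetectExtensions(bool out[5])
    {
        int r[4];                              // EAX, EBX, ECX, EDX

        __cpuid(r, 1);                         // standard feature flags
        out[0] = (r[3] & (1 << 15)) != 0;      // CMOV (Pentium Pro era)
        out[1] = (r[3] & (1 << 25)) != 0;      // SSE
        out[2] = (r[3] & (1 << 26)) != 0;      // SSE2

        __cpuid(r, 0x80000001);                // extended (vendor) flags
        out[3] = (r[3] & (1 << 29)) != 0;      // long mode / AMD64
        out[4] = (r[3] & (1u << 31)) != 0;     // 3DNow! (AMD)
    }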
 
Frankly, how innovative should you be allowed to be?
Microsoft created DX for gaming and digital entertainment.
If the DX specs were mostly open, I really wouldn't want a corner-cutting arms race between GPU makers and devs.
If I buy a graphics card, I want it to run every game that comes out with all of its features within a reasonable time frame. That means until MS actually updates the API with new functions, I don't want new features added by the GPU makers, since their only motivation would be to ship cards with new features as fast as possible so devs will use them, thus forcing you to buy a new card. And that's most easily accomplished by cutting corners and releasing those features six months later in the next card refresh.
 
FANTASTIC BLOODY FANTASTIC

Why?
I play old games and some of them don't work on later systems.
Why?
Nonstandard DirectX calls.
 