DX10 on XP, through OGL wrapper?

Should be possible with all the new Nvidia OpenGL extensions for the 8800, but would be _a lot_ of work.

Feels like there are better things to spend time on though :)
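
A quick aside on what that would involve in practice: a wrapper along these lines would first have to confirm that the G80-era extensions are actually exposed by the installed driver. The snippet below is only a rough sketch of that check; the extension names are my guess at what such a wrapper would lean on, not a verified or complete list.

    // Minimal sketch: probe the driver for G80-era OpenGL extensions that a
    // D3D10-style wrapper would likely depend on. The listed names are
    // assumptions based on NVIDIA's GeForce 8800 extension set, not a
    // verified requirement list.
    #include <windows.h>   // needed before GL/gl.h on Windows
    #include <GL/gl.h>
    #include <cstdio>
    #include <cstring>

    static bool HasExtension(const char* name)
    {
        // Classic (pre-GL3) query: one big space-separated string. A current
        // rendering context must already exist, otherwise this returns NULL.
        const char* all = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
        return all != NULL && std::strstr(all, name) != NULL;  // coarse substring match
    }

    int main()
    {
        const char* wanted[] = {
            "GL_EXT_geometry_shader4",   // geometry shaders
            "GL_EXT_texture_array",      // texture arrays
            "GL_EXT_texture_integer",    // integer texture formats
            "GL_NV_transform_feedback",  // stream-output equivalent
            "GL_EXT_draw_instanced"      // instancing
        };
        for (size_t i = 0; i < sizeof(wanted) / sizeof(wanted[0]); ++i)
            std::printf("%-28s %s\n", wanted[i], HasExtension(wanted[i]) ? "yes" : "MISSING");
        return 0;
    }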
 
The big problems are the undocumented binary formats for shaders and effects. Additionally, some D3D10 concepts don't map very well to OpenGL concepts.
 
No, the D3D9 format was always documented. D3D10 has a new format that even contains a hash key. The WDK is even missing the header file that contains the structures and enumerations.
I don't see anything on that page that says it's D3D9-only or that it excludes D3D10, but it's in the Windows Vista Display Driver Model Reference. And as far as I know this format has been used from Shader Model 1.0 to Shader Model 3.0. I see little reason to use anything completely different for Shader Model 4.0.

What's the hash for?
 
I don't see anything on that page that says it's D3D9-only or that it excludes D3D10, but it's in the Windows Vista Display Driver Model Reference. And as far as I know this format has been used from Shader Model 1.0 to Shader Model 3.0. I see little reason to use anything completely different for Shader Model 4.0.

Believe me, I have checked this myself with a hex editor and decoded some of it. The only thing that hasn't changed is that they still use 32-bit tokens. As they have changed the whole DDI for D3D10, it wasn't a big deal to break with the old format too.

What's the hash for?

Validation. The compiler signs the binary shader it generates and the runtime later checks it. If the hash doesn't match, it refuses to create the shader.
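
To make the idea concrete, here is a purely hypothetical sketch of such a validate-before-create step. The header layout, field names and the stand-in checksum (a 64-bit FNV-1a widened to 128 bits) are invented placeholders so the code is self-contained; they are not the real, undocumented D3D10 container format or hash algorithm.

    // Purely illustrative: NOT the real D3D10 blob layout or digest.
    #include <cstdint>
    #include <cstring>
    #include <vector>

    struct BlobHeader                // hypothetical container header
    {
        uint32_t magic;              // some fourcc identifying the container
        uint8_t  digest[16];         // 128-bit hash written by the compiler
        uint32_t totalSize;          // size of the whole blob, in bytes
        uint32_t tokenCount;         // number of 32-bit tokens that follow
    };

    // Stand-in checksum so the sketch compiles on its own (64-bit FNV-1a
    // repeated to fill 16 bytes); the real runtime would use a proper digest.
    static void FakeDigest(const uint8_t* data, size_t size, uint8_t out[16])
    {
        uint64_t h = 14695981039346656037ull;
        for (size_t i = 0; i < size; ++i) { h ^= data[i]; h *= 1099511628211ull; }
        std::memcpy(out, &h, 8);
        std::memcpy(out + 8, &h, 8);
    }

    // What a CreateXxxShader-style entry point could do before handing the
    // token stream to the driver: recompute the digest and refuse on mismatch.
    bool ValidateBlob(const std::vector<uint8_t>& blob)
    {
        if (blob.size() < sizeof(BlobHeader)) return false;
        BlobHeader hdr;
        std::memcpy(&hdr, &blob[0], sizeof(hdr));
        if (hdr.totalSize != blob.size()) return false;

        uint8_t digest[16];
        FakeDigest(&blob[0] + sizeof(BlobHeader),
                   blob.size() - sizeof(BlobHeader), digest);
        return std::memcmp(digest, hdr.digest, sizeof(digest)) == 0;
    }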
 
Believe me, I have checked this myself with a hex editor and decoded some of it.
I believe you. :)

Anyway this doesn't seem like a hard reverse engineering problem.
Validation. The compiler signs the binary shader it generates and the runtime later checks it. If the hash doesn't match, it refuses to create the shader.
Sorry I don't fully see the point in that. Is it a form of authentication (avoiding third-party tools) or maybe threading related?
 
Anyway this doesn't seem like a hard reverse engineering problem.

Sure, it can be done. The disassembler that is part of D3D10 is very helpful. One problem with this job is that there is no public documentation for the assembly code either.

Sorry I don't fully see the point in that. Is it a form of authentication (avoiding third-party tools) or maybe threading related?

I don't know the real reason, but I believe it was done to make shader creation faster. The D3D9 runtime runs some expensive validation on shaders before they are passed to the driver. Calculating the hash (I believe it is an MD5) is much faster. This makes it possible to remove the expensive checks, as the runtime knows that the shader code is error-free.

Keep in mind that the preferred model for shader handling is still to compile shaders on the developer systems and distribute the binary code.
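
A rough illustration of that offline model, and of the disassembler mentioned above: compile once on the developer machine, ship the bytecode, and hand it to the runtime unchanged. The fxc.exe flags and the D3D10DisassembleShader / CreateVertexShader signatures below are quoted from memory of the D3D10 SDK, so double-check them against the actual headers before relying on them.

    // Offline workflow sketch:
    //   fxc.exe /T vs_4_0 /E main /Fo shader.vso shader.hlsl   (developer machine)
    // then ship shader.vso and feed the bytecode to the runtime as-is.
    // Link against d3d10.lib.
    #include <d3d10.h>
    #include <d3d10shader.h>
    #include <cstdio>
    #include <vector>

    static std::vector<unsigned char> ReadFileBytes(const char* path)
    {
        std::vector<unsigned char> blob;
        std::FILE* f = std::fopen(path, "rb");
        if (!f) return blob;
        std::fseek(f, 0, SEEK_END);
        long size = std::ftell(f);
        std::fseek(f, 0, SEEK_SET);
        blob.resize(size > 0 ? static_cast<size_t>(size) : 0);
        if (!blob.empty())
            std::fread(&blob[0], 1, blob.size(), f);
        std::fclose(f);
        return blob;
    }

    ID3D10VertexShader* LoadPrecompiledVS(ID3D10Device* device, const char* path)
    {
        std::vector<unsigned char> blob = ReadFileBytes(path);
        if (blob.empty()) return NULL;

        // Optional: dump the disassembly. There are no public ISA docs, but
        // this at least shows the token stream in readable form.
        ID3D10Blob* text = NULL;
        if (SUCCEEDED(D3D10DisassembleShader(&blob[0], blob.size(),
                                             FALSE /* no color codes */,
                                             NULL  /* no comment string */,
                                             &text)) && text)
        {
            std::printf("%.*s\n", static_cast<int>(text->GetBufferSize()),
                        static_cast<const char*>(text->GetBufferPointer()));
            text->Release();
        }

        // No runtime HLSL compilation: the precompiled bytecode goes straight in.
        ID3D10VertexShader* vs = NULL;
        if (FAILED(device->CreateVertexShader(&blob[0], blob.size(), &vs)))
            return NULL;
        return vs;
    }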
 
The problem actually lies somewhere else -- in the driver. Especially once DX10.1 comes out, the driver will need to support context switching on the GPU. In other words, the GPU will have to be capable of continuing some other work instead of waiting for that lengthy transfer from system memory to finish.
I don't see how something like that could be emulated without driver support.
 
The problem actually lies somewhere else -- in the driver. Especially once DX10.1 comes out, the driver will need to support context switching on the GPU. In other words, the GPU will have to be capable of continuing some other work instead of waiting for that lengthy transfer from system memory to finish.
I don't see how something like that could be emulated without driver support.

I don't see why this would need to be emulated if you don't use it. I guess most people would just want to run DX10 games (no context switch required), and not hack/patch WinXP to use a 3D desktop.
 
Just use the "nvemulate" tool provided by NVIDIA. However, it'll only give you D3D10 functionality (features) under OpenGL...
 
Well, the early Crysis DX10 footage was run on an X1900 XTX with some form of DX10 emulation, though I'm not certain if they were using XP or Vista. I wonder if something like that could be implemented for all games.
 
There continues to be some obvious confusion as to what DX10 is really giving you. It isn't just about a few extra features, it's also about a large number of efficiency enhancements. Sure, there is now geometry shader stuff and other pixel formats you can access (which wouldn't be "emulatable"), but there are also new memory access methods and driver state changes and a ton of other things that you simply could never emulate.

Most of what current devs are doing with DX10 is adding extra graphics goodies, because the efficiency enhancements provide acceptable performance with them enabled. By trying to emulate DX10, you'd be shooting yourself in the foot TWICE: running the code that needs the extra efficiency in an emulation layer that will be FAR less efficient than even native DX9.

Again, DX10 doesn't just bring a few new PS, VS and GS functions, it also brings very-low level performance enhancements that simply cannot be "emulated". And when a dev builds a DX10 render mode into their game, the very last thing you'll want to do is somehow build an emulator for it.

Let alone the nightmare of the DX -> OGL instructions conversion, since a ton of them simply don't map logically 1:1 like that anyway.
 
Again, DX10 doesn't just bring a few new PS, VS and GS functions, it also brings very-low level performance enhancements that simply cannot be "emulated". And when a dev builds a DX10 render mode into their game, the very last thing you'll want to do is somehow build an emulator for it.

Let alone the nightmare of the DX -> OGL instructions conversion, since a ton of them simply don't map logically 1:1 like that anyway.
It won't be that easy to map D3D10 to current OpenGL (+ extensions), right. But that doesn't mean OpenGL Mt Evans can't implement those same improvements D3D10 does. In fact we already know the new object model will be much more tuned for performance.
 
There continues to be some obvious confusion as to what DX10 is really giving you. It isn't just about a few extra features, it's also about a large number of efficiency enhancements. Sure, there is now geometry shader stuff and other pixel formats you can access (which wouldn't be "emulatable"), but there are also new memory access methods and driver state changes and a ton of other things that you simply could never emulate.

Most of what current devs are doing with DX10 is adding extra graphics goodies, because the efficiency enhancements provide acceptable performance with them enabled. By trying to emulate DX10, you'd be shooting yourself in the foot TWICE: running the code that needs the extra efficiency in an emulation layer that will be FAR less efficient than even native DX9.

Again, DX10 doesn't just bring a few new PS, VS and GS functions, it also brings very-low level performance enhancements that simply cannot be "emulated". And when a dev builds a DX10 render mode into their game, the very last thing you'll want to do is somehow build an emulator for it.

Let alone the nightmare of the DX -> OGL instructions conversion, since a ton of them simply don't map logically 1:1 like that anyway.
Could you share the links to the tests showing that DX10 on Vista is faster than OGL on XP?

If, as you claim, there is such performance advantage, and it's even so big that it makes said emulation impractical, I'm sure nVidia demo team is now busy porting Adrianne to DX10.
 
I don't see why this would need to be emulated if you don't use it. I guess most people would just want to run DX10 games (no context switch required), and not hack/patch WinXP to use a 3D desktop.

Then you didn't get the point. The GPU doesn't need to do multithreading between applications, but between rendering tasks. In simple terms -- say that in order to render the next frame it needs a texture with a higher level of detail which is not in local memory, and it stalls waiting for the DMA transfer to finish.

In the meantime you turn a sharp 180° in your shooter because a zombie is attacking you from behind. With DX10.1, the GPU will not wait for the above-mentioned texture to load but will jump to rendering what is behind you, because it already has all the textures/data in memory for that view.

Without DX10.1 your game will freeze for a split second and you will be dead.

Others have already said that DX10 doesn't bring that many new features, so I can just add that those features are already exposed in the latest OpenGL ICDs. No need to worry: OpenGL games will be able to use all those features on Windows XP too, and I personally prefer OpenGL games because they can be ported to Linux and Mac.

EDIT:
Let's not forget that game developers want to sell their games, and they can't expect to sell many copies if a game is DirectX 10 only. So they will have to maintain two code paths -- one for DX9 and one for DX10. Of course, the only sane alternative to that madness is to ditch DX completely and start writing OpenGL game engines with portability in mind. Once games start appearing for Mac and Linux, those platforms will gain wider support both from IHVs and from end users.
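
As a small illustration of those features already being reachable through OpenGL on XP (see the point above about the latest ICDs), the sketch below compiles a trivial pass-through geometry shader via GL_EXT_geometry_shader4. It assumes GLEW for the extension entry points and an already-created GL context; treat it as a sketch rather than engine-ready code.

    // Sketch: a D3D10-class feature (geometry shaders) used through the
    // GL_EXT_geometry_shader4 extension. Requires glewInit() to have been
    // called with a current GL context.
    #include <GL/glew.h>
    #include <cstdio>

    static const char* kGeomSrc =
        "#version 120\n"
        "#extension GL_EXT_geometry_shader4 : enable\n"
        "void main() {\n"
        "    for (int i = 0; i < gl_VerticesIn; ++i) {\n"
        "        gl_Position = gl_PositionIn[i];\n"   // pass input vertices through
        "        EmitVertex();\n"
        "    }\n"
        "    EndPrimitive();\n"
        "}\n";

    GLuint BuildPassThroughGeometryShader()
    {
        if (!GLEW_EXT_geometry_shader4) {
            std::printf("GL_EXT_geometry_shader4 not available\n");
            return 0;
        }
        GLuint gs = glCreateShader(GL_GEOMETRY_SHADER_EXT);
        glShaderSource(gs, 1, &kGeomSrc, NULL);
        glCompileShader(gs);

        GLint ok = GL_FALSE;
        glGetShaderiv(gs, GL_COMPILE_STATUS, &ok);
        if (!ok) { glDeleteShader(gs); return 0; }
        return gs;   // attach to a program, set geometry parameters, then link
    }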
 