Direct3D 10 rocks :)

Andrew Lauritzen

Moderator
Veteran
Not only is it a much cleaner, nicer API that eliminates a lot of nonsense that used to exist at the application level, the performance rocks, even with totally immature drivers!

I've been porting the shadows demo and I've got just simple shadow mapping up and running... and it's literally twice as fast in DirectX 10 as in DirectX 9. Granted, one probably won't see that sort of increase once the bottleneck is shifted significantly back onto the GPU, but conversely this efficiency allows asymptotically simpler algorithms that used to be too slow due to abnormally large per-pass overhead. Should be interesting to see how much performance can be squeezed out of D3D10 :)

Anyone else have similar results, or is it just me? I'm really surprised and happy to be seeing this so early as I've done *no* optimization in D3D10 yet... not even putting stuff into constant buffers, etc. which is pretty easy. Hopefully there are even more speed improvements to come!
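
To give an idea of what that last bit involves, here's a minimal sketch of moving loose shader constants into a D3D10 constant buffer. The struct layout and the pPerFrameCB/pDevice names are just illustrative, not the demo's actual code; the matching HLSL side simply groups the same variables into a cbuffer declaration.

Code:
#include <d3d10.h>
#include <d3dx10.h>

// Hypothetical per-frame constants; the layout is illustrative only.
// Constant buffer sizes must be a multiple of 16 bytes (64 + 16 = 80 here).
struct PerFrameConstants
{
    D3DXMATRIX  WorldViewProj;
    D3DXVECTOR4 LightDir;
};

// One-time creation (pDevice is an already-created ID3D10Device*).
D3D10_BUFFER_DESC desc;
desc.ByteWidth      = sizeof(PerFrameConstants);
desc.Usage          = D3D10_USAGE_DEFAULT;
desc.BindFlags      = D3D10_BIND_CONSTANT_BUFFER;
desc.CPUAccessFlags = 0;
desc.MiscFlags      = 0;

ID3D10Buffer* pPerFrameCB = NULL;
pDevice->CreateBuffer(&desc, NULL, &pPerFrameCB);

// Per frame: upload the whole struct in one call, then bind the buffer once,
// instead of setting each constant individually.
PerFrameConstants constants;
D3DXMatrixIdentity(&constants.WorldViewProj);
constants.LightDir = D3DXVECTOR4(0.0f, -1.0f, 0.0f, 0.0f);

pDevice->UpdateSubresource(pPerFrameCB, 0, NULL, &constants, 0, 0);
pDevice->VSSetConstantBuffers(0, 1, &pPerFrameCB);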
 
Wow, you sure went from "I think I've got stuff displaying now" to "omg shadow maps that kick D3D9's ass" in a jiffy :p

OOC, how simple is your implementation? I mean, do you just set up the objects to render, draw them into a depth surface, then draw the objects into the frame buffer with a lighting/shadowing shader?
 
D3D10 is made right !
What an achievement, 10 versions to do something right... better late than never I guess.
 
OOC, how simple is your implementation? I mean, do you just set up the objects to render, draw them into a depth surface, then draw the objects into the frame buffer with a lighting/shadowing shader?
Yeah it's all pretty simple stuff, even the SAVSM part (which I'm still porting). Yet another reason to use it ;)
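
For the curious, that flow boils down to something like the sketch below. This is not the demo's actual code; pShadowMapDSV, pShadowMapSRV, pBackBufferRTV, pSceneDSV, the effect variables, and DrawScene() are all assumed/hypothetical names, and viewport changes are omitted for brevity.

Code:
// Pass 1: render scene depth from the light's point of view into the shadow map.
pDevice->OMSetRenderTargets(0, NULL, pShadowMapDSV);   // depth-only target
pDevice->ClearDepthStencilView(pShadowMapDSV, D3D10_CLEAR_DEPTH, 1.0f, 0);
DrawScene(pDepthOnlyTechnique);                        // hypothetical helper

// Pass 2: render the scene normally, sampling the shadow map in the pixel shader.
pDevice->OMSetRenderTargets(1, &pBackBufferRTV, pSceneDSV);
pShadowMapVar->SetResource(pShadowMapSRV);             // bind shadow map as an SRV
DrawScene(pShadingTechnique);

// Unbind the shadow map before it becomes a depth target again next frame,
// to avoid a read/write hazard.
pShadowMapVar->SetResource(NULL);
pShadingTechnique->GetPassByIndex(0)->Apply(0);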

Ingenu said:
What an achievement, 10 versions to do something right... better late than never I guess.
Hehe, you're not wrong, but on the other hand I'm still waiting for OpenGL to clean up their act. Arguably DX9 was already ahead of GL in that regard and there's no question that DX10 is a much cleaner, safer, more powerful, and more efficient API. It's still way too easy and implicit to "fall off the fast path" in GL as it is just too far away from what the hardware is actually doing now.
 
D3D10 is made right !
What an achievement, 10 versions to do something right... better late than never I guess.

You should count again. ;)
Don’t get me wrong: as a user of every single version (even the ugly command buffer interface of the early days), I know that they have come a very long way. I am mostly happy with the new API, but IMHO there is one point where Microsoft has gone too far.

As a maintainer of the core render system of a game engine, Direct3D 10 makes my life harder, not easier, for a long time to come. Until we finally cut support for D3D9 hardware we have to support two APIs. A pain that everyone in the same situation will share.

I know that sometimes it is necessary to cut down old trees, but supporting hardware with at least SM 2.0 through D3D10 would have been a nice bonus.

Anyway, it would be an interesting job to add D3D10 to our engine.
 
As a maintainer of the core render system of a game engine, Direct3D 10 makes my life harder, not easier, for a long time to come. Until we finally cut support for D3D9 hardware we have to support two APIs. A pain that everyone in the same situation will share.
Yeah I certainly understand that, but IMHO it was at the point where it's better to cut the slack and drop backwards compatibility than to try and paste new features in. The latter gets you the state of OpenGL right now ;)

Still I wouldn't want to support both, even in small apps. I suspect that most of my stuff will be either D3D9 or D3D10 which is unfortunate, but practical.
 
Yeah I certainly understand that, but IMHO it was at the point where it's better to cut the slack and drop backwards compatibility than to try and paste new features in. The latter gets you the state of OpenGL right now ;)

I don’t want another D3D8-to-D3D9 step. A fresh new API, like the D3D7-to-D3D8 transition, is fine, and I am fine with dropping all the fixed-function stuff. But it would not be that hard (from the technical point of view) to build a runtime that uses a D3D9 driver and provides D3D10 interfaces to the application. Since a D3D9 driver could not support everything, we would need some kind of caps interface, but that and some additional flags are primarily all that would be needed.
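
Just to illustrate the idea (nothing like this exists in the real D3D10 runtime; every name in the sketch below is hypothetical):

Code:
// Entirely hypothetical caps structure for a D3D10-style runtime layered over
// D3D9-class hardware; the real ID3D10Device deliberately has no caps interface.
struct HypotheticalCaps
{
    UINT shaderModelMajor;              // e.g. 2, 3 or 4
    BOOL geometryShadersSupported;      // FALSE on D3D9-class parts
    UINT maxSimultaneousRenderTargets;
};

// The application would query once at startup, much like D3D9 caps bits,
// and choose its code paths accordingly.
HypotheticalCaps caps = QueryRuntimeCaps();   // hypothetical function
if (caps.shaderModelMajor < 4)
    UseSm2FallbackPaths();                    // hypothetical engine fallback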

Still I wouldn't want to support both, even in small apps. I suspect that most of my stuff will be either D3D9 or D3D10 which is unfortunate, but practical.

Well, we have a large app and need to support both: a one-size-fits-all (OK, most) systems approach.
 
But it would not be that hard (from the technical point of view) to build a runtime that uses a D3D9 driver and provides D3D10 interfaces to the application.
For the purposes of allowing applications to just convert to D3D10, perhaps, but I think you'd lose all advantages of D3D10 by doing that, and I suspect the differences in API would cause some significant overhead similar to what the D3D9 drivers have to do anyways.

Still, someone developing something like that might provide a good option for people looking to upgrade applications.
 
I came in here expecting a nicely rendered picture of some rocks. (Done in DX10, natch)

*looks at shoes*

I'll ... I'll just be going now.


:D
 
I'm having plenty of fun with D3D10 as well - but I've had almost 18 months to appreciate the 'cleanliness' of the API. It is definitely a pleasure to be working with D3D10 over D3D9 :smile:

Having said that, there still seem to be quite a few rough edges with the SDK that are annoying me. I'm not up to Feb '07 yet, but the Dec '06 SDK seems to have some questionable HLSL compilations, and the D3DX10 library is extremely weak (to the point that I wrote a D3DX9-to-D3DX10 conversion). Documentation on a few things could be improved as well.

But such things I'll be hoping to chase up when I drop by Seattle next month :D

Anyway, this thread needs pictures:

[Attached image: Gf8800_BrokenRayTracing.png]

Seems like an added bonus of Nvidia's 100.59 and 100.64 drivers - it automagically draws contour lines in your relief mapping implementations. Wonder if they're trying to market the 8800's to the Ordnance Survey? :LOL:

Jack
 
Well, I disagree about performance or features not being available in OpenGL, but I agree that a new, clean OpenGL would be welcome, and it's planned for release this year. A bit like both the existing OpenGL|ES 2.0 and the OpenGL 2.0 "Pure" 3DLabs proposal, OpenGL 3.0 Lean & Mean is supposedly coming this summer (AFAIR).

As I said, D3D10 is the first version of that API done right, but I most likely won't touch D3D10 for a good while at work (well, depending on the project).
Let's not forget the market out there is still made of GF2-4 in huge numbers.
(And don't throw the Steam Survey at me, as it's only representative of HL players.)

Anyway, I did change my engine to have a much more D3D10-like renderer interface, and I like it a lot that way. I haven't played with the API yet, but it's definitely on my task list!
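
For context, the "D3D10 style" here largely means immutable state objects that are created up front and simply bound at draw time; a minimal sketch (pDevice is assumed to be an existing ID3D10Device*):

Code:
// Describe the rasterizer state once...
D3D10_RASTERIZER_DESC rsDesc = {};          // zero everything, then set what we need
rsDesc.FillMode        = D3D10_FILL_SOLID;
rsDesc.CullMode        = D3D10_CULL_BACK;
rsDesc.DepthClipEnable = TRUE;

// ...bake it into an immutable state object at load time...
ID3D10RasterizerState* pRasterState = NULL;
pDevice->CreateRasterizerState(&rsDesc, &pRasterState);

// ...and bind the whole object per draw, instead of dozens of individual
// SetRenderState-style calls as in D3D9.
pDevice->RSSetState(pRasterState);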

(Ran some D3D10 SDK demos on my GF8, results were not that great, but I'll blame immature drivers)
 
OpenGL 3.0 Lean & Mean is supposedly coming this summer (AFAIR).
That would be awesome... I've heard nothing about it since the initial presentation, which is always disconcerting.

Let's not forget the market out there is still made of GF2-4 in huge numbers.
Being in academia right now I have the luxury of not caring about the current consumer market :)

(Ran some D3D10 SDK demos on my GF8, results were not that great, but I'll blame immature drivers)
Yeah, I wasn't terribly impressed with the performance of some of the SDK demos either, but it seems that Microsoft has cleverly not included much crossover between the available DX9 and DX10 demos, so it's hard to compare directly. In my particular case, even stuff that wasn't CPU-bound at all in DX9 is seeing a huge performance increase, but I suspect that will lessen somewhat when I ramp up the complexity.

Still I can't complain about significant performance increases together with a much cleaner API :)
 
Lots of new stuff about GL recently; see opengl.org:

OpenGL SDK now online
This is not a traditional SDK in the sense that it doesn’t arrive on CD-ROM, and it isn’t one monolithic download. Instead, it is a gathering of 3rd party contributions from many of the leaders in the community. In some cases the information and downloads are available directly from the SDK on opengl.org. In other cases, you’ll find links to the original materials elsewhere on the web. In all cases, the contributions have been hand selected and represent the best of what’s out there.

OpenGL Pipeline Newsletter Vol. 3 covers Longs Peak, the SDK, Vista and more
The third edition of OpenGL Pipeline, the quarterly newsletter covering all things the OpenGL standards body has “in the pipeline”, covers a bunch of exciting news and tips: from updates about OpenGL “Longs Peak” to a first glimpse of the new OpenGL 2.1 SDK to OpenGL running on Vista.
 
D3D10 and Vista have more efficient IPC that keeps more operations in user mode. Since it's a shadow map demo, I'm assuming it's just unoptimized and the gains you are getting in DX10 are just a reduction in kernel context switching.
 
D3D10 and Vista have more efficient IPC that keeps more operations in user mode. Since it's a shadow map demo, I'm assuming it's just unoptimized and the gains you are getting in DX10 are just a reduction in kernel context switching.
It's not exactly unoptimized... indeed much of the code is pretty simple and thus cannot be further optimized. In the more complex modes the performance difference is less pronounced, but it's still there. However, it should be noted that I've already been able to take advantage of a few features/fast paths in D3D10 that have given significant performance increases, and I suspect that there are more just waiting to be found :)
 
I'm trying to implement shadow maps in DX10 and am running into a problem. When I send my shadow map to the effect as a shader resource, I can't render to it again.

Code:
D3D10: WARNING: ID3D10Device::OMSetRenderTargets: Resource being set to OM RenderTarget slot 0 is still bound on input! [ STATE_SETTING WARNING #9: DEVICE_OMSETRENDERTARGETS_HAZARD ]
D3D10: WARNING: ID3D10Device::OMSetRenderTargets: Forcing PS shader resource slot 0 to NULL. [ STATE_SETTING WARNING #7: DEVICE_PSSETSHADERRESOURCES_HAZARD ]

Whatever will I do?
 
You have to make sure that you don't have it bound as a shader resource at the same time as a render target... The runtime is a bit silly about this and doesn't seem to be able to always properly determine when a "real" read/write hazard exists.

That said, the workaround for the error messages is to do a SetResource(0), and then an effect Apply(), and/or an OMSetRenderTargets(0, 0, 0) to explicitly remove the binding in one place or another.
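
In code, the two options look roughly like this (a sketch only; pShadowMapVar, pTechnique and pDevice are assumed to already exist under those names):

Code:
// Option 1: clear the effect binding, then Apply() so the NULL actually reaches
// the device before the shadow map is set as a render/depth target again.
pShadowMapVar->SetResource(NULL);
pTechnique->GetPassByIndex(0)->Apply(0);

// Option 2: unbind all render targets first, so the resource is no longer bound
// for output when it gets set as a pixel shader resource.
pDevice->OMSetRenderTargets(0, NULL, NULL);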
 
For some reason neither of those worked, but I found this code in the SDK's Cube Map sample:

Code:
// Explicitly bind a NULL view to pixel shader resource slot 0 to clear the binding.
ID3D10ShaderResourceView *const pSRV[1] = {NULL};
pDevice->PSSetShaderResources(0, 1, pSRV);
 