About Longhorn's graphics driver model...

AlNom

One of the most interesting things to us is the Longhorn Display Driver Model (LDDM). Under the new display driver model, Microsoft wants to more closely integrate the graphics hardware with the operating system. In order to do this, a couple of things are going to happen. First, graphics drivers will give up management of graphics memory to Windows. Windows will then handle the complete virtualization and management needs of graphics memory. This will have a large impact on the way graphics hardware vendors approach driver writing. In spite of the simplification Windows memory management will bring to the graphics subsystem, different management techniques may lend themselves more readily to one hardware architecture or another. Right now, we are hearing that ATI and NVIDIA are both playing nice with Microsoft over what will happen when they lose full control of their own RAM, but we will be sure to keep abreast of the situation.

From http://www.anandtech.com/tradeshows/showdoc.aspx?i=2399&p=7



So... what does giving up RAM control mean in the real world? What difference does it make (for games and professional graphics alike)?
 
I bet king and country it means more CPU overhead and lower performance overall, though it might make things more convenient for Microsoft.
 
It means drivers will need parts rewritten.
This is definitely dramatic, and personally I see as many bad things coming out of this as good ones.

For one, if done right, writing software that needs low-level access to your video card would be greatly simplified, with minimal overhead, and maybe eventually we will see some gains from it as M$'s implementation becomes more mature.

This will definitely speed up my development in most cases, as I may no longer have to manually free memory and mess around with bit-level optimisations just to alleviate texture bottlenecks.
 
It just occurred to me that when you are using your graphics card in normal 2D desktop mode, quite a large amount of graphics memory isn't used at all (with a 128MB card, something like 90% of the memory sits idle on the 2D desktop). Could it be that Microsoft wants to take that extra space for storing something that doesn't suffer from a slower access time? Swapping to graphics memory via PCI-E is most likely still faster than swapping to disk.

Plus of course, if Longhorn has the new 3D desktop running, they want better control over what's stored and where.
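
To put rough numbers on it, here's a toy C++ sketch of that tiering idea (entirely hypothetical; the names and figures are mine, not Microsoft's):

    #include <cstddef>

    // Hypothetical illustration of the spill-tier choice. Ballpark figures:
    // PCI-E x16 moves roughly 4 GB/s per direction, while a current hard
    // disk manages maybe 60 MB/s sequential, so idle video RAM is a far
    // cheaper place to page to than disk.
    enum class SpillTier { VideoRam, Disk };

    SpillTier pickSpillTarget(std::size_t freeVramBytes, std::size_t pageBytes) {
        // Prefer video RAM whenever the evicted page still fits there.
        return (freeVramBytes >= pageBytes) ? SpillTier::VideoRam : SpillTier::Disk;
    }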
 
My understanding of Longhorn's virtual memory model for GPUs is that it will no longer be necessary to load an entire texture (all mip levels) into GPU-local memory in order to use that texture. The texture is loaded as needed, freeing up space for other textures and easing situations where the GPU's local RAM can't hold all the texture data.

In other words it seems to me like a great way to manage 512MB+ of texture resources for rendering a frame. The vast majority of the texture data stays in system RAM while only the required portions are delivered to the GPU. Least recently used texture data will be paged back to system RAM, freeing GPU RAM for more immediate texturing.
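
Roughly, I picture something like this least-recently-used scheme (a minimal C++ sketch; the class and names are invented for illustration, not Microsoft's actual interface):

    #include <cstddef>
    #include <cstdint>
    #include <list>
    #include <unordered_map>
    #include <utility>

    using TextureId = std::uint64_t;

    // Tracks which textures are resident in GPU-local RAM and evicts the
    // least recently used ones back to system RAM when space runs out.
    class GpuTexturePager {
    public:
        explicit GpuTexturePager(std::size_t gpuRamBytes) : budget_(gpuRamBytes) {}

        // Called when the GPU needs a texture (or one of its mip levels).
        void touch(TextureId id, std::size_t bytes) {
            auto it = resident_.find(id);
            if (it != resident_.end()) {
                // Already resident: just move it to the front of the LRU list.
                lru_.splice(lru_.begin(), lru_, it->second.first);
                return;
            }
            // Make room by paging old textures back to system RAM.
            while (used_ + bytes > budget_ && !lru_.empty())
                evictOldest();
            lru_.push_front(id);
            resident_[id] = { lru_.begin(), bytes };
            used_ += bytes;
            // ...upload the data from system RAM over PCI-E here...
        }

    private:
        void evictOldest() {
            TextureId victim = lru_.back();
            used_ -= resident_[victim].second;
            resident_.erase(victim);
            lru_.pop_back();
            // ...write the data back to system RAM if it was modified...
        }

        std::size_t budget_;
        std::size_t used_ = 0;
        std::list<TextureId> lru_;  // front = most recently used
        std::unordered_map<TextureId,
                           std::pair<std::list<TextureId>::iterator, std::size_t>> resident_;
    };

The real thing would presumably work on GPU pages rather than whole textures, but the principle is the same.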

I imagine Windows will give the GPU's virtual memory high priority in gaming applications, simply because there are no other executing apps demanding huge amounts of memory.

But I don't program GPUs so what do I know.

Jawed
 
K.I.L.E.R said:
It means drivers will need parts rewritten.
This is definitely dramatic, and personally I see as many bad things coming out of this as good ones.

Well, so far I see more bad things, for example HD content only with strong DRM (article in German).
The article essentially describes how hardware and driver developers will have to completely follow Microsoft's wishes (or rather those of the MPAA and friends) with regard to tamper resistance and support for the Certified Output Protection Protocol (COPP), Protected Video Path Output Protection Management (PVP-OPM) and Protected Video Path User Accessible Bus (PVP-UAB).
All development will have to be completely stopped while hardware and drivers are certified by CableLabs (owned by the content producers).
This will increase costs, in particular for early adopters.

And the best thing is, if a driver / hardware programmer has fscked up, they can revoke the DRM key for that hardware, and you will have to buy a new graphics card to play the "premium HD content" you bought...

I do not like that future...
 
K.I.L.E.R, for Longhorn they need to write new drivers anyway, because the new model is very different from the current XP model. The new memory management is only a part of this. As far as I understand, LDDM drivers should be more stable and faster if they use this new model.
 
Why faster and more stable? Just because it's a part of Windows (like IE)?

Jawed said:
My understanding of Longhorn's virtual memory model for GPUs is that it will no longer be necessary to load an entire texture (all mip levels) into GPU-local memory in order to use that texture. The texture is loaded as needed, freeing up space for other textures and easing situations where the GPU's local RAM can't hold all the texture data.


So, if I'm understanding this right, it looks like a more "console" way of doing things? *Halo 2's frequent texture loading comes to mind* Sounds interesting, and by that I mean it sounds like there is a lot of space wasted in the current implementation. And basically, is this new way of memory management really an evolution (something better across the board), or could there be cases where it is really bad for a developer?
 
Btw (and this is more of a confirmation than anything), the WinHEC build has an LDDM driver for my 9800 Pro (allowing all the touchy-glassy-feely effects), and in dxdiag, instead of saying "DDI 9 (or higher)", it now says "DDI WGF 1.0".
 
management techniques may lend themselves more readily to one hardware architecture or another

Whose bow was that a shot across? How does this stuff integrate with previous discussions we've had on advanced memory interface designs? Obviously, MS didn't spring this on ATI & NV at the conference for the first time.
 
Mordenkainen said:
Btw (and this is more of a confirmation than anything), the WinHEC build has an LDDM driver for my 9800 Pro (allowing all the touchy-glassy-feely effects), and in dxdiag, instead of saying "DDI 9 (or higher)", it now says "DDI WGF 1.0".
The one that was made available to attendees at the 2005 WinHEC conference in Seattle :shifty eyes: ?
 
radeonic2 said:
Mordenkainen said:
Btw (and this is more of a confirmation than anything), the WinHEC build has an LDDM driver for my 9800 Pro (allowing all the touchy-glassy-feely effects), and in dxdiag, instead of saying "DDI 9 (or higher)", it now says "DDI WGF 1.0".
The one that was made available to attendees at the 2005 WinHEC conference in Seattle :shifty eyes: ?

And beta-signups, yes.
 
My understanding of this is that circa WinNT 3.x, all graphics drivers ran at Ring 1, outside the kernel proper; i.e. it was basically impossible for the driver to take down the core OS.

When MS added D3D to NT with 2000 (4.0), they moved the graphics drivers to Ring 0 for performance reasons (a ring transition is an expensive operation in the OS).

A lot of the reasons why this was originally done probably no longer hold, and MS wants to move the drivers back out of Ring 0. For efficiency reasons, this probably means MS has to do some of the graphics management in the kernel.
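
For what it's worth, the standard way to amortise that cost looks something like this sketch (my own illustration, not the actual LDDM interface): the user-mode half of the driver batches commands and pays for one ring transition per batch rather than one per call.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Command { std::uint32_t opcode; std::uint32_t args[3]; };

    // The user-mode driver accumulates commands here (Ring 3, no syscalls).
    class CommandBuffer {
    public:
        void emit(const Command& c) {
            pending_.push_back(c);
            if (pending_.size() >= kFlushThreshold)
                flush();
        }

        // One kernel transition submits the whole batch, so the ring-crossing
        // cost is paid once per thousands of commands rather than per command.
        void flush() {
            if (pending_.empty()) return;
            submitToKernel(pending_.data(), pending_.size());
            pending_.clear();
        }

    private:
        // Stand-in for the single syscall that hands the batch to the
        // kernel-mode scheduler; invented for illustration.
        static void submitToKernel(const Command*, std::size_t) {}

        static constexpr std::size_t kFlushThreshold = 4096;
        std::vector<Command> pending_;
    };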
 
maven said:
K.I.L.E.R said:
It means drivers will need parts rewritten.
This is definitely dramatic, and personally I see as many bad things coming out of this as good ones.

Well, so far I see more bad things, for example HD content only with strong DRM (article in German).
The article essentially describes how hardware and driver developers will have to completely follow Microsoft's wishes (or rather those of the MPAA and friends) with regard to tamper resistance and support for the Certified Output Protection Protocol (COPP), Protected Video Path Output Protection Management (PVP-OPM) and Protected Video Path User Accessible Bus (PVP-UAB).
All development will have to be completely stopped while hardware and drivers are certified by CableLabs (owned by the content producers).
This will increase costs, in particular for early adopters.

And the best thing is, if a driver / hardware programmer has fscked up, they can revoke the DRM key for that hardware, and you will have to buy a new graphics card to play the "premium HD content" you bought...

I do not like that future...

Same here, the heise news post sounds quite disturbing. But:
How is revoking the DRM key supposed to affect content you've already bought? Or will it be mandatory to be connected to the net to watch HD content? Otherwise, I have no idea how some consortium could interfere with my working hardware and software...
 
maven said:
K.I.L.E.R said:
It means drivers will need parts rewritten.
This is definitely dramatic, and personally I see as many bad things coming out of this as good ones.

Well, so far I see more bad things, for example HD content only with strong DRM (article in German).
The article essentially describes how hardware and driver developers will have to completely follow Microsoft's wishes (or rather those of the MPAA and friends) with regard to tamper resistance and support for the Certified Output Protection Protocol (COPP), Protected Video Path Output Protection Management (PVP-OPM) and Protected Video Path User Accessible Bus (PVP-UAB).
All development will have to be completely stopped while hardware and drivers are certified by CableLabs (owned by the content producers).
This will increase costs, in particular for early adopters.

And the best thing is, if a driver / hardware programmer has fscked up, they can revoke the DRM key for that hardware, and you will have to buy a new graphics card to play the "premium HD content" you bought...

I do not like that future...

That future is good, that future will be Linux :D
 
ERP said:
My understanding of this is that circa WinNT 3.x, all graphics drivers ran at Ring 1, outside the kernel proper; i.e. it was basically impossible for the driver to take down the core OS.
If my memory serves me right, the video driver ran in Ring 3 until NT 4. NT-based OSes (past and present) only use Ring 0 and Ring 3 on x86 systems.
 
Guden Oden said:
I bet king and country it means more CPU overhead and lower performance overall, though it might make things more convenient for Microsoft.
Wrong.
Microsoft said:
Design goal: 1/10th overhead of D3D9
Quoted from this PowerPoint.

More about the Longhorn display driver model.
 
I wish I knew more about this stuff, to get a better appreciation of the evolution of the API. There are so many changes that I imagine there is going to be a lot of divergence in the implementation details of WGF hardware from the different IHVs.
 