Will WGF 2.0 video cards be usable in XP/2000?

suryad said:
Thanks for the great explanation!

I've got another question, though. So when Vista ships, it will have both of these Direct3D versions, correct? Why two? Is it for backwards compatibility of the Aero interface on older, non-Direct3D 10 class hardware?

With Direct3D 9Ex you will have some benefits from the new driver model on older hardware too. This is primarily for Aero, but games can profit from it as well. Currently, Aero always uses D3D9Ex even if you have a D3D10 GPU.
 
Demirug said:
With Direct3D 9Ex you will have some benefits from the new driver model on older hardware too. This is primarily for Aero, but games can profit from it as well. Currently, Aero always uses D3D9Ex even if you have a D3D10 GPU.

Thanks. What kind of benefits would older hardware see? Performance increases because of the new lower-overhead driver model?
 
Armored_Spiderman said:
But... isn't DX10 full FP32, without any support for other precisions? I thought that in DX10 there wouldn't be FP16 or FP24, only FP32 :???:
Calculations have to be FP32, but render targets, the front and back buffers, depth and other surfaces don't have to be.
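To make that concrete, here is a minimal D3D10 sketch (the g_device pointer is an assumption): the shader ALUs compute at FP32 regardless, while the surface they write into is declared as FP16 storage.

#include <d3d10.h>

// Sketch: FP16 storage for a render target, even though all the shader
// arithmetic that writes into it runs at FP32.
extern ID3D10Device* g_device;  // assumed to exist already

void CreateHalfFloatTarget(ID3D10Texture2D** tex, ID3D10RenderTargetView** rtv)
{
    D3D10_TEXTURE2D_DESC desc = {};
    desc.Width            = 1280;
    desc.Height           = 720;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_R16G16B16A16_FLOAT; // FP16 per channel
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D10_USAGE_DEFAULT;
    desc.BindFlags        = D3D10_BIND_RENDER_TARGET | D3D10_BIND_SHADER_RESOURCE;

    g_device->CreateTexture2D(&desc, NULL, tex);
    g_device->CreateRenderTargetView(*tex, NULL, rtv);
}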
 
Armored_Spiderman said:
But... isn't DX10 full FP32, without any support for other precisions? I thought that in DX10 there wouldn't be FP16 or FP24, only FP32 :???:

Don't confuse internal precision with the framebuffer/render-target format (which is the output).
Surely no FP16 or FP24 internal precision is allowed in D3D10, but you certainly can't live without FP16 render targets, which are already bandwidth-hungry enough compared to FX8.
(I'd expect FX8 support as well.)
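Back-of-the-envelope, the bandwidth difference is just the pixel size (the resolution here is an arbitrary assumption):

// FX8 (R8G8B8A8) is 4 bytes/pixel; FP16 (R16G16B16A16) is 8 bytes/pixel.
const unsigned width  = 1600;
const unsigned height = 1200;
const unsigned fx8PerFrame  = width * height * 4; // ~7.3 MB per full-screen write
const unsigned fp16PerFrame = width * height * 8; // ~14.6 MB, twice the traffic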
 
FX8 will certainly be supported for display output. But I would like to see a move to 10-bit precision for display output. It would, in particular, be much better for gamma correction.
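The precision argument is easy to see with a rough sketch (the gamma of 2.2 and the 10% cutoff are arbitrary assumptions): count how many codes of a gamma-encoded output land in the darkest part of the linear range.

#include <math.h>
#include <stdio.h>

// Rough sketch: how many gamma-2.2-encoded codes fall in the darkest 10%
// of the linear range, for 8-bit vs 10-bit output.
int codesInDarkTenth(int bits)
{
    const int levels = 1 << bits;          // 256 for FX8, 1024 for FX10
    int count = 0;
    for (int i = 0; i < levels; ++i)
        if (pow((double)i / (levels - 1), 2.2) < 0.1)
            ++count;
    return count;
}

int main()
{
    printf("8-bit:  %d codes in the darkest tenth\n", codesInDarkTenth(8));
    printf("10-bit: %d codes in the darkest tenth\n", codesInDarkTenth(10));
    return 0;
}

The 10-bit output gets roughly four times as many codes to spend on the dark end, which is where gamma errors are most visible.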
 
suryad said:
Thanks. What kind of benefits would older hardware see? Performance increases because of the new lower-overhead driver model?

The lower overhead is something that all Direct3D versions will see. As one example, Direct3D 9Ex supports faster switching between fullscreen mode and the desktop. There are other small improvements, but I don't expect many games will make use of them, because doing so breaks Windows XP compatibility.
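The usual way to opt in without giving up XP entirely is to probe for the Vista-only entry point at runtime; a sketch (the fallback handling is simplified):

#include <windows.h>
#include <d3d9.h>

// Sketch: look up the Vista-only Direct3DCreate9Ex export at runtime, so
// the same binary still loads on Windows XP (where the symbol doesn't exist).
typedef HRESULT (WINAPI *PFN_Direct3DCreate9Ex)(UINT, IDirect3D9Ex**);

IDirect3D9* CreateD3D9(IDirect3D9Ex** outEx)
{
    *outEx = NULL;
    HMODULE dll = LoadLibraryA("d3d9.dll");
    PFN_Direct3DCreate9Ex create9Ex = dll
        ? (PFN_Direct3DCreate9Ex)GetProcAddress(dll, "Direct3DCreate9Ex")
        : NULL;

    if (create9Ex && SUCCEEDED(create9Ex(D3D_SDK_VERSION, outEx)))
        return *outEx;                        // Vista: 9Ex, WDDM extras available
    return Direct3DCreate9(D3D_SDK_VERSION);  // XP: plain D3D9
}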
 
Chalnoth said:
FX8 will certainly be supported for display output. But I would like to see a move to 10-bit precision for display output. It would, in particular, be much better for gamma correction.

D3D10 supports the classic FX8 mode, FX8 with sRGB encodings, FX10 and FP16 as display formats.
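In DXGI enum terms, those four would map to something like this (the mapping to these particular names is my reading, not an exhaustive list):

DXGI_FORMAT displayFormats[] = {
    DXGI_FORMAT_R8G8B8A8_UNORM,       // classic FX8
    DXGI_FORMAT_R8G8B8A8_UNORM_SRGB,  // FX8 with sRGB encoding
    DXGI_FORMAT_R10G10B10A2_UNORM,    // FX10
    DXGI_FORMAT_R16G16B16A16_FLOAT,   // FP16
};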
 
Demirug said:
D3D10 supports the classic FX8 mode, FX8 with sRGB encodings, FX10 and FP16 as display formats.
I suppose the latter would be nice for some HDR displays. But I doubt it will be used in games.

I do hope, however, that the FX10 format for display output is not an HDR format, but rather a high precision format.
 
Chalnoth said:
I suppose the latter would be nice for some HDR displays. But I doubt it will be used in games.

I do hope, however, that the FX10 format for display output is not an HDR format, but rather a high precision format.

Depends on who you ask. For ATI, anything beyond FX8 is HDR.
 
Chalnoth said:
I do hope, however, that the FX10 format for display output is not an HDR format, but rather a high precision format.
10-10-10-2 comes in either unorm or uint format, not float. 11-11-10 is the closest float format.
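Packed out, the unorm variant looks like this (a sketch; I'm assuming the DXGI convention of red in the least-significant bits):

#include <stdint.h>

// 10-10-10-2 UNORM: three 10-bit channels plus 2-bit alpha in one 32-bit word.
uint32_t PackRGB10A2(float r, float g, float b, float a)
{
    uint32_t ri = (uint32_t)(r * 1023.0f + 0.5f) & 0x3FF; // [0,1] -> 0..1023
    uint32_t gi = (uint32_t)(g * 1023.0f + 0.5f) & 0x3FF;
    uint32_t bi = (uint32_t)(b * 1023.0f + 0.5f) & 0x3FF;
    uint32_t ai = (uint32_t)(a * 3.0f    + 0.5f) & 0x3;   // only 4 alpha levels
    return ri | (gi << 10) | (bi << 20) | (ai << 30);
}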

hth
Jack
 
I thought D3D9Ex adds the graphics memory virtualization, which is one of the main benefits of WDDM drivers. Is that so, or is it reserved for D3D10?
Also, any chance that the virtualization can be done at the driver level without explicit application support?
(So that RAM usage in, say, Battlefield 2 goes down by not requiring all those hundreds of megs of textures in system RAM.)
 
Blazkowicz_ said:
I thought D3D9Ex adds the graphics memory virtualization, which is one of the main benefits of WDDM drivers. Is that so, or is it reserved for D3D10?
Also, any chance that the virtualization can be done at the driver level without explicit application support?
(So that RAM usage in, say, Battlefield 2 goes down by not requiring all those hundreds of megs of textures in system RAM.)
WDDM is responsible for this. There are two flavours, WDDM-Basic and WDDM-Advanced; the former is just a tidier version of the current resource handling (by the looks of it) and the latter is hardware-based page faulting and scheduling.

I've not seen anything to suggest that it won't "back port" to existing D3D 8 or 9 apps running through D3D9Ex.

Jack
 
JHoxley said:
WDDM is responsible for this. There are two flavours, WDDM-Basic and WDDM-Advanced; the former is just a tidier version of the current resource handling (by the looks of it) and the latter is hardware-based page faulting and scheduling.

I've not seen anything to suggest that it won't "back port" to existing D3D 8 or 9 apps running through D3D9Ex.

Jack

IIRC there was an announcement at the last WinHEC about this. You will get some benefits from the new WDDM memory model with older versions, but to get everything you have to use D3D9Ex.
 
JHoxley said:
10-10-10-2 comes in either unorm or uint format, not float. 11-11-10 is the closest float format.

hth
Jack
Well, it's obvious it wasn't a float format, hence calling it FX10. I imagine that the difference between unorm and uint is determined by the chosen range of the format (i.e. [0,1] vs. [0,4])?
 
Blazkowicz_ said:
Don't confuse internal precision with the framebuffer/render-target format (which is the output).
Surely no FP16 or FP24 internal precision is allowed in D3D10, but you certainly can't live without FP16 render targets, which are already bandwidth-hungry enough compared to FX8.
(I'd expect FX8 support as well.)


So the internal processing is all 100% FP32, but the output can be random.

Thanks for the explanation... ;)
 
Armored_Spiderman said:
So the internal processing is all 100% FP32, but the output can be random.
If by 'random', you mean 'up to the choice of the developer, not the driver', then you'd be right.
 
Yeah, I don't think developers (or users) would like it very much if the output format was actually random :)
 
Chalnoth said:
I imagine that the difference between unorm and uint is determined by the chosen range of the format (i.e. [0,1] vs. [0,4])?
Off the top of my head (haven't had a chance to check it), UINT is just an unsigned integer of the given width (a 10-bit UINT being 0..1023) and UNORM has the same resolution but is deemed to be in a [0,1] range. That is, for a 10-bit UNORM you'd get 1024 discrete values between 0 and 1...
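In other words, the same stored bits, read two ways (a one-liner sketch):

unsigned raw     = 512;             // the raw 10-bit pattern
unsigned asUINT  = raw;             // UINT:  plain integer, 0..1023 -> 512
float    asUNORM = raw / 1023.0f;   // UNORM: mapped into [0,1]      -> ~0.5005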

Jack
 
Yeah, the latter is what I'd prefer (and it's not currently available, except from Matrox on the Parhelia). But uint doesn't make any sense as a framebuffer format (so I'm sure it's purely meant for intermediate storage).
 
Chalnoth said:
But uint doesn't make any sense as a framebuffer format (so I'm sure it's purely meant for intermediate storage).
I'd have to give it some more thought (getting a bit late / tired now!) but it would probably be of most use as intermediate storage. Given the integer instruction set in SM4 it makes sense to allow normal ranges of integer values. Things like accumulators/counters and stencil-like operations come to mind...
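For the counter idea, the resource itself would just be an integer-format surface; a sketch along the lines of the earlier one (the g_device pointer is again an assumption):

// A UINT surface an SM4 shader could treat as per-pixel intermediate storage.
D3D10_TEXTURE2D_DESC counterDesc = {};
counterDesc.Width            = 1280;
counterDesc.Height           = 720;
counterDesc.MipLevels        = 1;
counterDesc.ArraySize        = 1;
counterDesc.Format           = DXGI_FORMAT_R32_UINT; // raw integers, no [0,1] mapping
counterDesc.SampleDesc.Count = 1;
counterDesc.Usage            = D3D10_USAGE_DEFAULT;
counterDesc.BindFlags        = D3D10_BIND_RENDER_TARGET | D3D10_BIND_SHADER_RESOURCE;

ID3D10Texture2D* counterTex = NULL;
g_device->CreateTexture2D(&counterDesc, NULL, &counterTex);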

Cheers,
Jack
 