Supersampling doesn't do edging very well.
> Supersampling doesn't do edging very well.
What do you mean?
Personally, I think nvidia or ati should make a DX5/6/7/8/9/10-->10.1 wrapper.
> If I were a wealthy young entrepreneur starting a vid card company, I'd make my new DX11 HW come with a DX10.1 wrapper, and I'd pay nvidia to license PhysX.
Yeah, Microsoft already does the "wrapping" for you. And PhysX uses CUDA, so I'm not sure how you'd "license" that for your non-NVIDIA hardware...
> ...and I'd more closely match the MS ref rasterizer than ATi and nvidia.
Huh? The ref rasterizer is not exactly an ideal or anything. It actually has some bugs or lower-precision cases that the real implementations don't.
> Microsoft provides the wrapper functionality already. Driver writers don't write any DX6 drivers for modern hardware. It's translated to at least DX9 level by the runtime / driver interface. Unless there are bugs in the MS implementation that they won't fix, I don't see what purpose this would serve.
Well, 16 bit color/textures suck on DX10+ HW, and it would be nice if someone could emulate the W-buffer, if they don't already. Other than that, their drivers are pretty good.
> Huh? The ref rasterizer is not exactly an ideal or anything. It actually has some bugs or lower-precision cases that the real implementations don't.
True, IHVs will sometimes use higher or lower precision than the ref rasterizer. A lower MSE doesn't always mean a better image.
> ...and it would be nice if someone could emulate the W-buffer...
Zero point in that - a complementary Z buffer with fp32 gives the same or better quality everywhere. For legacy apps (if that's what you care about?) the driver could just use this format, but maybe that's what you mean by "emulate".
Could anyone make an app that would emulate dithering and the W-buffer, or does it have to be written directly into the driver?
> Zero point in that - a complementary Z buffer with fp32 gives the same or better quality everywhere. For legacy apps (if that's what you care about?) the driver could just use this format, but maybe that's what you mean by "emulate".
That was indeed what I meant. They should just force better, higher-precision formats. Why they don't do that, I don't know.
> That was indeed what I meant. They should just force better, higher-precision formats. Why they don't do that, I don't know.
Fair enough, although in how many applications do you legitimately see Z-buffer precision fighting nowadays? 16-24 bit mostly solved that if you use reasonable near planes.
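For reference, here is a minimal sketch of what the "complementary Z buffer with fp32" idea looks like in D3D11 terms (a reversed-Z setup). The function name, parameters, and the assumption that the depth target was created as DXGI_FORMAT_D32_FLOAT are all illustrative, not something any driver or game actually exposes.

```cpp
// Minimal sketch of a reversed ("complementary") fp32 depth setup in D3D11.
// Assumes the depth texture/DSV were created with DXGI_FORMAT_D32_FLOAT (not shown),
// and that 'device', 'context' and 'dsv' are valid. Names here are illustrative only.
#include <d3d11.h>
#include <DirectXMath.h>

void SetupReversedDepth(ID3D11Device* device, ID3D11DeviceContext* context,
                        ID3D11DepthStencilView* dsv,
                        float fovY, float aspect, float nearZ, float farZ)
{
    // With reversed Z, nearer fragments have LARGER depth values, so flip the test.
    CD3D11_DEPTH_STENCIL_DESC dsDesc(D3D11_DEFAULT);   // depth on, write all, stencil off
    dsDesc.DepthFunc = D3D11_COMPARISON_GREATER;

    ID3D11DepthStencilState* dsState = nullptr;
    device->CreateDepthStencilState(&dsDesc, &dsState);
    context->OMSetDepthStencilState(dsState, 0);

    // Clear depth to 0.0 (the new "far" value) instead of the usual 1.0.
    context->ClearDepthStencilView(dsv, D3D11_CLEAR_DEPTH, 0.0f, 0);

    // Swap near/far in the projection so the near plane maps to z=1 and the far plane to z=0.
    DirectX::XMMATRIX proj =
        DirectX::XMMatrixPerspectiveFovLH(fovY, aspect, farZ, nearZ);
    (void)proj;            // upload to a constant buffer as usual

    dsState->Release();    // the context keeps its own reference
}
```

Clearing to 0 and flipping the comparison is what spreads the fp32 precision evenly across the depth range, which is why it matches or beats a W-buffer without needing one.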
> Well, 16 bit color/textures suck on DX10+ HW...
This is not exactly correct. 16 bit color textures work perfectly fine on DX10/11 hardware. The quality is identical to the older APIs.
> So, to conclude, there is no theoretically possible way to force MSAA in a DX9 game that uses deferred shading?
There is no way to force MSAA on all deferred-shaded DX10/DX11 games either, not without writing game-specific driver code and shaders for all the possible variations and updating the driver every time a new game is released. If the game developer doesn't code support for MSAA, the driver cannot add it later by any simple universal method.
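To make that concrete, here is a hypothetical D3D11 fragment (not from any actual game or driver) showing where the decision lives: the G-buffer is a plain texture the game creates and later reads in its lighting shader, so its sample count is out of the driver's hands.

```cpp
// Sketch: why a driver can't simply force MSAA onto a deferred renderer.
// Hypothetical G-buffer creation inside a game (D3D11); assumes a valid ID3D11Device*.
#include <d3d11.h>

ID3D11Texture2D* CreateGBufferTarget(ID3D11Device* device, UINT width, UINT height)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = width;
    desc.Height           = height;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_R16G16B16A16_FLOAT; // e.g. normals + depth
    desc.SampleDesc.Count = 1;   // the game decides this; 1 sample = no MSAA here
    desc.Usage            = D3D11_USAGE_DEFAULT;
    desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

    ID3D11Texture2D* tex = nullptr;
    device->CreateTexture2D(&desc, nullptr, &tex);
    return tex;
}

// Even if the driver silently bumped SampleDesc.Count here, the game's lighting pass
// still reads the G-buffer as an ordinary Texture2D (one sample per pixel), so the
// extra samples would never be shaded or resolved correctly. Making it work requires
// the game itself to use multisampled resources and shade per sample -- which is
// exactly "coding support for MSAA".
```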
> If you are rendering to a 565/1555 render target (back buffer), you no longer have the option to enable dithering on your color writes (regardless of the format of the textures you read your color data from). A simple wrapper that replaced the 565 back buffer with an 8888 back buffer would improve the alpha blending quality of old games nicely (and would also remove the 16 bit dithering artifacts completely, along with the banding). No need to go to DX11, since all the older DX versions also supported 32 bit back buffers. Just intercept the CreateDevice call and replace the back buffer format parameter.
It's not as simple as that. You would have to convert the buffers back to 565/1555 when they were locked as well. Many older games lock the back buffer during rendering.
How easy is it to intercept the calls to CreateDevice? Is there anything I could do (I don't have programming experience) or does nvidia have to do it?
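For a sense of scale, the interception itself is only a few lines; the sketch below assumes the hook is installed through a proxy d3d9.dll or a hooking library such as Microsoft Detours (not shown), and the names RealCreateDevice / HookedCreateDevice are made up for illustration. The caveat above about games locking the back buffer is not handled here.

```cpp
// Sketch: promoting a 16-bit back buffer to 32 bits by intercepting IDirect3D9::CreateDevice.
// The hook installation itself (proxy DLL, Detours, etc.) is assumed and not shown;
// RealCreateDevice is expected to point at the original vtable entry.
#include <d3d9.h>

typedef HRESULT (STDMETHODCALLTYPE *CreateDeviceFn)(
    IDirect3D9*, UINT, D3DDEVTYPE, HWND, DWORD,
    D3DPRESENT_PARAMETERS*, IDirect3DDevice9**);

static CreateDeviceFn RealCreateDevice = nullptr;   // filled in by the hooking code

HRESULT STDMETHODCALLTYPE HookedCreateDevice(
    IDirect3D9* self, UINT adapter, D3DDEVTYPE deviceType, HWND focusWindow,
    DWORD behaviorFlags, D3DPRESENT_PARAMETERS* pp, IDirect3DDevice9** outDevice)
{
    // Replace 16-bit back buffer formats with a 32-bit one before the real call.
    if (pp && (pp->BackBufferFormat == D3DFMT_R5G6B5 ||
               pp->BackBufferFormat == D3DFMT_X1R5G5B5))
    {
        pp->BackBufferFormat = D3DFMT_X8R8G8B8;
    }
    return RealCreateDevice(self, adapter, deviceType, focusWindow,
                            behaviorFlags, pp, outDevice);
}
```

The hard part is not this function but installing it reliably and dealing with everything the game does with the back buffer afterwards (locking, format-dependent reads), which is why it tends to end up as driver- or injector-level work rather than something an end user can put together.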
> Recently, I revived my Windows XP install and ran a lower-level benchmark which turned out to be significantly faster on XP than on 7/Vista (on both systems in the four-digit fps range).
Let me just call this out immediately and say that if they are both in the "four-digit fps range" then neither is "significantly faster". This is *precisely* why you should use milliseconds per frame to measure performance, not fps. You're measuring trivial overheads that are in the microsecond range at best, and concluding that something is "significantly faster" because your huge numbers are the reciprocals of the tiny numbers that you ought to be comparing!
> Well, one could be 1000 fps and the other 9999 fps; that would qualify as significantly faster.
No it wouldn't, and that's my entire point: that is a difference of a whole 0.9 milliseconds! The difference between 30 and 31 fps is more than that. Just use milliseconds.
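For anyone who wants to check the arithmetic, a trivial conversion from fps to frame time reproduces the numbers being argued about (purely illustrative code):

```cpp
// Frame-time comparison: fps differences that look huge can be tiny in milliseconds.
#include <cstdio>

int main()
{
    auto ms_per_frame = [](double fps) { return 1000.0 / fps; };

    // "1000 fps vs 9999 fps" -- a difference of about 0.9 ms per frame.
    std::printf("1000 fps = %.2f ms, 9999 fps = %.2f ms, delta = %.2f ms\n",
                ms_per_frame(1000.0), ms_per_frame(9999.0),
                ms_per_frame(1000.0) - ms_per_frame(9999.0));

    // "30 fps vs 31 fps" -- a larger gap (about 1.1 ms) than the one above.
    std::printf("30 fps = %.2f ms, 31 fps = %.2f ms, delta = %.2f ms\n",
                ms_per_frame(30.0), ms_per_frame(31.0),
                ms_per_frame(30.0) - ms_per_frame(31.0));
    return 0;
}
```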
While there were a couple of cases where WinXP outran Windows 7 and Vista, for the most part Windows 7 wins more than it loses, especially when you factor in multi-GPU technologies like SLI and CrossFire. In fact, if you’re a Windows XP gamer with SLI or CrossFire, I’d definitely urge you to upgrade to Windows 7 as soon as possible.