Vulkan/OpenGL Next Generation Initiative: unified API for mobile and non-mobile devices.

It's just sad.

Indeed. However, something that is lost in the current rah-rah about leaner / more exposing APIs is that a large reason for the huge cushions that "evil" people like the ARB, NVIDIA, AMD, Intel, Microsoft etc. provide has to do with the quality of developed SW. It is not exactly a story of throngs of awesome developers writing great code, being squashed by evil imperialists. It is more a story about parties that should not have gotten involved so deeply walking the extra mile to work around egregious failures.

In that context, I am somewhat less eager to claim that removing the cushion will yield a net benefit. It might do so for competent middleware developers (which explains why they are some of the main supporters of these initiatives), albeit even there I have some doubts. It will definitely make lots of people writhe in pain and cry for the maternal unit after the initial novelty sprinkled with toy scenarios wears off. Note that I am not saying that it is terribly hard to write Vulkan / DX12 versus OGL4.x/DX11.x, what I am saying is that I've seen such egregious failures in contexts that were much more fool-proof that I can't help being leery when I see rope being handed out freely around the gallows.
 
Indeed. However, something that is lost in the current rah-rah about leaner / more exposing APIs is that a large reason for the huge cushions that "evil" people like the ARB, NVIDIA, AMD, Intel, Microsoft etc. provide has to do with the quality of developed SW. It is not exactly a story of throngs of awesome developers writing great code, being squashed by evil imperialists. It is more a story about parties that should not have gotten involved so deeply walking the extra mile to work around egregious failures.
I agree. The writer of that excellent post claims that shaders get patched or even completely replaced all the time. That's not something that will be fixed in any shape or form: I don't think the shader language or capabilities will change in a significant way for either DX12 or Vulkan, so application programmers can (and will) write shaders that are just as crappy as before. The only difference is that they now have even less handholding from the driver and more rope to hang themselves with. In fact, it would be interesting to know whether it will reduce the ability of the driver to fix terminally broken code behind the scenes.
 
Alex, the cushion isn't being removed.

Also, I would say it's a story about developers being given a cushion puzzle they don't want to solve and being smothered on the way to somewhere that ends up ho-hum.

The consoles have long proven that there are plenty of developers out there who don't need the cushions.

silent_guy, shader quality is, fundamentally, not the issue. The problem with the legacy APIs is that developers are set a puzzle on how to construct state for a plethora of pipeline stages and connect all the dots to make that state work. They have to do this tens of times per frame, going through all the different states they want to use and juggle all the permutations of state change sequences, in order to make the GPU work efficiently. They're told they're working with API state, but in fact they're working with GPU state. The driver guys know how to make that work efficiently, but the gulf is so huge between driver state and performant GPU state, that ambitious and capable developers are basically thrown an intractable puzzle.

So getting it wrong is normal. Writing a shader that wires into the badly set-up state is normal. Shaders aren't the source of the problems developers have had, they're a symptom of the confusion and puzzle presented by the PC environment.
 
Dude exaggerates somewhat. Some things are definitely true, like driver validation (Vulkan will be a mess w/o a huge suite of tests to validate against). Some are definitely misrepresented, like the driver's involvement in validation. The D3D9 path in a WDDM driver is a dog, that's true. But that doesn't come from the WDDM 1.x model (like Dmitry suggests above) but from the fact that it supports the fixed pipeline. This adds a lot of complexity that is otherwise avoidable. Another source of complexity comes from the actual HW support for features required by a given API level. Some companies are working more closely with MS and have plenty of leeway to not implement stuff in HW and to overcome some basic cases with (sloooow) software hacks, or get features waived altogether. Some aren't and don't. If you happen to work for a company that doesn't live up to the API level and complains that perfectly valid shaders are slow or certain API calls blow, then yes, you end up with a broken or complex driver[1]. I'm not saying that's what Promit experienced, but that's something that could have happened somewhere, maybe.

[1] because honestly, if your HW supports the stuff the specs tell it to, the way it's supposed to, finding an incompatibility with a game should result in a simpler, leaner driver, not a more complex one
 
Dude exaggerates somewhat.

Just the thought of drivers having special code to handle broken games sounds to me like the big issue - is that not the case?

Are modern AMD/Nvidia drivers pure, free of game/engine optimizations?!

Or is this still true today - that drivers are filled with game/engine fixes?!
 
Just the thought of drivers having special code to handle broken games sounds to me like the big issue - is that not the case?

The sad state of PC development is that not only drivers but every piece of the OS compensates for the shit code people write. The whole idea of SxS is that you have several copies of the same DLL with different quirks (well, bugs) so that shit software that depends on those bugs (and trust me, that'd be the vast majority of software out there) can still work. This is the tax we pay for democratized development. Now, tell me, would you rather play your fav game from 5 years back, the one that doesn't work w/o driver workarounds (and that'd be any game from several years back), or not play it at all?

Are modern AMD/Nvidia drivers pure, free of game/engine optimizations?!

I don't work on those but my guess would be: no. You can easily find a set of two games of which one wouldn't work if the other did and there were no workarounds in the driver. Take Battlefield (one of them, don't remember which one) and Crysis (the second one, I think, but I'm not sure). One has shaders that depend on 0 == -0 and the other has shaders that depend on 0 != -0. There's no way you can get both of them working on a single driver w/o having some sort of shader detection/patching mechanism.
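To make the 0 vs -0 example concrete, here is a tiny C++ sketch (illustrative only, nothing to do with either game's actual shaders) of how IEEE-754 signed zero lets two programs rely on opposite assumptions - the comparison says the values are equal, while the bit pattern and anything like 1/x still see the sign:

Code:
#include <cstdio>
#include <cstring>
#include <cstdint>

int main() {
    float pz = 0.0f;
    float nz = -0.0f;

    // IEEE-754: +0 and -0 compare equal...
    std::printf("0.0f == -0.0f : %d\n", pz == nz);              // prints 1

    // ...but the bit patterns differ (sign bit set on -0).
    std::uint32_t pbits, nbits;
    std::memcpy(&pbits, &pz, sizeof pbits);
    std::memcpy(&nbits, &nz, sizeof nbits);
    std::printf("bits          : 0x%08X vs 0x%08X\n", pbits, nbits);

    // ...and the sign still leaks through arithmetic (IEEE-754 semantics).
    std::printf("1.0f /  0.0f  : %f\n", 1.0f / pz);             // inf
    std::printf("1.0f / -0.0f  : %f\n", 1.0f / nz);             // -inf
    return 0;
}

A shader compiler that folds x * 0.0 to 0.0 (or the reverse) flips that last behaviour, which is exactly the kind of thing a driver-side shader patch can either break or be forced to preserve per title.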

As much as people want to believe this, a new API won't change that. Driver validation is the key. Preferably a transparent one.
 
a large reason for the huge cushions that "evil" people like the ARB, NVIDIA, AMD, Intel, Microsoft etc. provide has to do with the quality of developed SW
But if practically no-one is able to develop "quality software", then something must have been wrong with the concept. Specifically, the OpenGL concept of vendor-controlled extensions with no formal compliance testing is terribly, terribly wrong.

The writer of that excellent post claims that shaders get patched or even completely replaced all the time. That's not something that will be fixed in any shape or form: I don't think the shader language or capabilities will change in a significant way for either DX12 or Vulkan, so application programmers can (and will) write shaders that are just as crappy as before.
Not exactly - he said that the D3D9 driver in the Vista timeframe contained complicated heuristics even for simple functions in order to work around buggy code.

Both Nvidia and ATI were replacing shader code back in those days to make some games and benchmarks look better on particularly problematic cards like the reviled GeForce FX series; however, that was the "assembly language" of shader models 2.x/3.0, and I don't think it's viable for HLSL written to SM 4.x/5.0 anymore. GPU computing power has increased like 100-fold, so it's easier to simply run the shader bytecode as is and rely on the optimizer stage in the driver.
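For reference, the SM 4.x/5.0 flow being described looks roughly like this: the application compiles HLSL to DXBC bytecode with the D3DCompiler, and the driver's own back-end compiler does the heavy optimization when the shader object is created. A minimal sketch (the shader and names are made up):

Code:
#include <d3dcompiler.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3dcompiler.lib")

using Microsoft::WRL::ComPtr;

// Hypothetical example shader, just to have something to compile.
static const char kPixelShader[] = R"(
float4 main(float4 color : COLOR0) : SV_Target
{
    return color * color.a;
})";

ComPtr<ID3DBlob> CompileExamplePixelShader()
{
    ComPtr<ID3DBlob> bytecode, errors;
    HRESULT hr = D3DCompile(kPixelShader, sizeof(kPixelShader) - 1,
                            "example.hlsl", nullptr, nullptr,
                            "main", "ps_5_0",
                            D3DCOMPILE_OPTIMIZATION_LEVEL3, 0,
                            &bytecode, &errors);
    if (FAILED(hr) && errors)
        std::printf("%s\n", (const char*)errors->GetBufferPointer());
    // 'bytecode' is the DXBC the runtime validates and the driver's
    // compiler lowers to GPU-specific ISA at shader-creation time.
    return bytecode;
}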
 
I'm not convinced that shader optimizations are a thing of the past. In 2006, we were already in the HLSL era. We're talking 8800 GTX here, not GeForce FX. I don't see why developers are much better at writing shaders now than they were 9 years ago, especially since shaders are getting more complex, not less.
 
Dude exaggerates somewhat. Some things are definitely true, like driver validation (Vulkan will be a mess w/o a huge suite of tests to validate against). Some are definitely misrepresented, like the driver's involvement in validation. The D3D9 path in a WDDM driver is a dog, that's true. But that doesn't come from the WDDM 1.x model (like Dmitry suggests above) but from the fact that it supports the fixed pipeline.
No, DDI9 in WDDM 1.x does NOT support fixed pipeline. Microsoft emulates all fixed-function paths with Direct3D 9 shader code when you run WDDM drivers, and also remaps all Direct3D 3-8 functionality to Direct3D 9.

My point was, WDDM driver is still required to support DDI9 and DDI11 alongside DDI12 for that exact reason - to maintain compatibility with old games, since DDI9 is still used by Direct3D 9 path and 10Level9 path in Direct3D 11. This way, compatibility problems remain the responsibility of IHVs, and these problems seem to be huge.

That is seemingly the reason why Microsoft are unwilling to repeat what they did in the Vista timeframe - i.e. either remap Direct3D 9-11 on top of Direct3D 12, or remap Direct3D 3-9 to Direct3D 11 and reimplement the latter on top of DDI12 in WDDM 2.0. This would be a huge task on its own, but they would also need to maintain compatibility logic - probably by way of the Direct3D compatibility layer mentioned above.

Another source of complexity comes from the actual HW support for features required by a given API level. Some companies are working more closely with MS and have plenty of leeway to not implement stuff in HW and to overcome some basic cases with (sloooow) software hacks, or get features waived altogether. Some aren't and don't.
"Software hack" is a myth of Direct3D 8 era when we still had things like "hardware fog". In this time and place, if your processing unit does not have an instruction code, operand, addressing mode, swizzle mode, or page table or TLB which is required for some feature, that is the end of the story. Trying to emulate these things will kill your performance and reduce the number of valuable resources ("slots") available to the applications.

If you happen to work for a company that doesn't live up to the API level and complains that perfectly valid shaders are slow or certain API calls blow, then yes, you end up with a broken or complex driver[1].
[1] because honestly, if your HW supports the stuff the specs tell it to, the way it's supposed to, finding an incompatibility with a game should result in a simpler, leaner driver, not a more complex one
What company specifically are you talking about? There are only three of them right now, and I don't think any of them is misrepresenting its hardware capabilities in Direct3D.

OpenGL is a whole other story though, since only Nvidia currently has an OpenGL 4.x driver that works.
 
Are modern AMD/Nvidia drivers pure, free of game/engine optimizations
Have you ever seen driver release notes? They announce a fix for a broken feature in a certain game with every driver release. I have trouble recalling a release without such an announcement in the last 15 years.

You can easily find a set of two games of which one wouldn't work if the other did and there were no workarounds in the driver.
As much as people want to believe this, a new API won't change that. Driver validation is the key. Preferably a transparent one.
I have a 2007 Direct3D 10 game that constantly froze at random intervals on Windows 7. The developer basically said "it's not our fault, it's bad Direct3D 10 drivers" and soon ceased any support, so it continued until recently. Then I finally upgraded to Windows 8.1, and the game became rock solid - all on the same hardware with the same Catalyst driver version. Doesn't really look like a driver validation problem to me...

The sad state of PC development is that not only drivers but every piece of the OS compensates for the shit code people write. The whole idea of SxS is that you have several copies of the same DLL with different quirks (well, bugs) so that shit software that depends on those bugs (and trust me, that'd be the vast majority of software out there) can still work.
No no no, WinSxS in Windows ME was a terrible idea that tried to solve the problems that Microsoft created mostly by themselves - that is, introducing "bugfixes" and "feature enhancements" to common libraries which break older applications! It was all their fault which they blamed on application developers.

First of all, they didn't really follow secure software design rules until the Vista reboot, hence Windows before XP SP3 was full of security exploits, buffer overruns, and other dreaded bugs. Secondly, their MSDN documentation was not clear enough because it was written by people who had access to the OS source code for developers who didn't, and was a source of many misunderstandings. Thirdly, they "solved" the problem of perceived "DLL hell" by requiring applications to install common libraries - which doubled OS storage requirements, multiplied the support matrix, and made further security updates and bugfixes complicated.

"Thankfully" they moved it even further and built the whole Windows Update system on top of this! Can you believe it - they store every version of every component that was included with every update installed since the OS release... well, who is actually writing "shit code" with "different quirks" here?
 
No no no, WinSxS in Windows ME was a terrible idea that tried to solve the problems that Microsoft created mostly by themselves - that is, introducing "bugfixes" and "feature enhancements" to common libraries which break older applications! It was all their fault which they blamed on application developers.

First of all, they didn't really follow secure software design rules until the Vista reboot, hence Windows before XP SP3 was full of security exploits, buffer overruns, and other dreaded bugs. Secondly, their MSDN documentation was not clear enough because it was written by people who had access to the OS source code for developers who didn't, and was a source of many misunderstandings. Thirdly, they "solved" the problem of perceived "DLL hell" by requiring applications to install common libraries - which doubled OS storage requirements, multiplied the support matrix, and made further security updates and bugfixes complicated.

"Thankfully" they moved it even further and built the whole Windows Update system on top of this! Can you believe it - they store every version of every component that was included with every update installed since the OS release... well, who is actually writing "shit code" with "different quirks" here?

At least with modern apps they seem to be on a path to fixing this, where your "containerized" store app can statically link all the libs it ever needs. All the code it doesn't need is tree-shaken out.

Well, that's the theory; we'll see how it pans out after \\build\
 
Alex, the cushion isn't being removed.

Also, I would say it's a story about developers being given a cushion puzzle they don't want to solve and being smothered on the way to somewhere that ends up ho-hum.

The consoles have long proven that there are plenty of developers out there who don't need the cushions.

silent_guy, shader quality is, fundamentally, not the issue. The problem with the legacy APIs is that developers are set a puzzle on how to construct state for a plethora of pipeline stages and connect all the dots to make that state work. They have to do this tens of times per frame, going through all the different states they want to use and juggle all the permutations of state change sequences, in order to make the GPU work efficiently. They're told they're working with API state, but in fact they're working with GPU state. The driver guys know how to make that work efficiently, but the gulf is so huge between driver state and performant GPU state, that ambitious and capable developers are basically thrown an intractable puzzle.

So getting it wrong is normal. Writing a shader that wires into the badly set-up state is normal. Shaders aren't the source of the problems developers have had, they're a symptom of the confusion and puzzle presented by the PC environment.
Couldn't have said this better myself.

The DirectX 11 resource update and binding model is a minefield for performance. You never know what kind of driver bottlenecks you will hit on each GPU model of each manufacturer. It feels like you are trying to optimize a Java application (no way to control memory layout, too many abstractions, random garbage collection stalls and different runtimes). A clean, explicit console-style API is so much better for performance-oriented programming. It feels more like C/C++. What you program is what you get. Much more productive.
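For anyone who hasn't seen both sides, here is a rough sketch of the contrast (resource creation, root signatures and PSO creation omitted; names are illustrative, not our engine's code). Under D3D11 the driver has to resolve a pile of piecewise state somewhere behind Draw; under D3D12 the same combination is baked once into a pipeline state object, so the cost is explicit and paid where you expect it:

Code:
#include <d3d11.h>
#include <d3d12.h>

// D3D11 style: many small state calls; the driver combines and validates
// them (and may recompile things) at draw time.
void DrawD3D11(ID3D11DeviceContext* ctx, ID3D11InputLayout* layout,
               ID3D11VertexShader* vs, ID3D11PixelShader* ps,
               ID3D11BlendState* blend, ID3D11RasterizerState* raster,
               ID3D11DepthStencilState* depth)
{
    const float blendFactor[4] = {};
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    ctx->IASetInputLayout(layout);
    ctx->VSSetShader(vs, nullptr, 0);
    ctx->PSSetShader(ps, nullptr, 0);
    ctx->OMSetBlendState(blend, blendFactor, 0xffffffff);
    ctx->RSSetState(raster);
    ctx->OMSetDepthStencilState(depth, 0);
    ctx->Draw(3, 0);   // the driver resolves the state combination here
}

// D3D12 style: the whole pipeline was compiled up front into 'pso'
// (root signature, descriptor heaps etc. left out of this sketch).
void DrawD3D12(ID3D12GraphicsCommandList* cmd, ID3D12PipelineState* pso)
{
    cmd->SetPipelineState(pso);
    cmd->DrawInstanced(3, 1, 0, 0);
}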

I also must comment on the discussion about broken software (illegal API usage). I don't understand how hard it can be to compile a program with maximum DirectX debug validation enabled and then fix all the warnings it outputs to the Visual Studio debug log. We have almost 100% unit test coverage for our low level graphics API and pretty good coverage for all of our rendering modules. Tests are executed several times every day with full DirectX debug validation active. Obviously we also run our final game builds with maximum validation quite frequently. For me it would be a completely alien idea to publish a game with any DirectX or PIX validation errors.
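For what it's worth, turning that validation on is only a handful of lines. A minimal sketch (D3D11 debug layer; error handling trimmed) that makes invalid API usage break into the debugger instead of scrolling past in the output window:

Code:
#include <d3d11.h>
#include <d3d11sdklayers.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d11.lib")

using Microsoft::WRL::ComPtr;

ComPtr<ID3D11Device> CreateValidatedDevice()
{
    ComPtr<ID3D11Device> device;
    ComPtr<ID3D11DeviceContext> context;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                      D3D11_CREATE_DEVICE_DEBUG,       // enable the debug layer
                      nullptr, 0, D3D11_SDK_VERSION,
                      &device, nullptr, &context);

    // Promote validation messages from log spam to hard breakpoints so
    // they can't slip into a shipped build unnoticed.
    ComPtr<ID3D11InfoQueue> infoQueue;
    if (device && SUCCEEDED(device.As(&infoQueue)))
    {
        infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_CORRUPTION, TRUE);
        infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_ERROR, TRUE);
        infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_WARNING, TRUE);
    }
    return device;
}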
 
"Software hack" is a myth of Direct3D 8 era when we still had things like "hardware fog". In this time and place, if your processing unit does not have an instruction code, operand, addressing mode, swizzle mode, or page table or TLB which is required for some feature, that is the end of the story. Trying to emulate these things will kill your performance and reduce the number of valuable resources ("slots") available to the applications.
Nowadays, emulation of both missing fixed-function features and missing instructions is the norm.

Pretty much every modern GPU emulates the DX9 alpha test by adding a clip instruction to the pixel shader. Intel and PowerVR GPUs emulate alpha blending by adding the blend instruction sequences and the back buffer read to the end of the pixel shader (allowing nice additional things such as pixelsync / programmable OpenGL blending extensions). AMD GCN emulates lots of fixed-function features. The interpolation and fetch of the vertex shader outputs is manual. VS outputs to LDS and the pixel shader begins with instructions to fetch the transformed vertices and ALU instructions to interpolate those. The same is true for vertex data fetching in the vertex shader. Complex input data can easily add 10+ instructions to either shader. Pixel shader output to 16-bit-per-channel RTs is also emulated (added ALU instructions to pack two 16-bit values into 32-bit outputs). Cube map sampling has been emulated for a long time on ATI/AMD GPUs at least: the GPU adds extra ALU instructions to normalize the source vector and to select the cube face (find the major axis). All the current GPUs emulate integer divide by a sequence of instructions (tens of instructions per divide). SAD is also emulated if missing. The new OpenCL cross-lane operations are a prime candidate for emulation on GPUs that lack the required vector lane swizzle instructions. Emulation is the key to making your GPU compatible with all the existing APIs, while allowing the GPU to be more general purpose and future proof.
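As an illustration of the simplest case above, this is roughly what the alpha test emulation amounts to (hypothetical SM4-style HLSL held in C++ strings, not actual driver output): the application's shader plus the DX9 ALPHATESTENABLE state becomes the same shader with a clip() and a driver-injected reference constant appended.

Code:
// What the application wrote (relies on fixed-function alpha test state):
const char* kShaderAsWritten = R"(
Texture2D    tex : register(t0);
SamplerState smp : register(s0);

float4 main(float2 uv : TEXCOORD0) : SV_Target
{
    return tex.Sample(smp, uv);
})";

// What effectively runs (alpha test folded into the pixel shader):
const char* kShaderAsRun = R"(
Texture2D    tex : register(t0);
SamplerState smp : register(s0);
cbuffer AlphaTest : register(b15) { float alphaRef; }  // injected constant

float4 main(float2 uv : TEXCOORD0) : SV_Target
{
    float4 c = tex.Sample(smp, uv);
    clip(c.a - alphaRef);   // injected: discard the pixel if a < ref
    return c;
})";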
 
No, DDI9 in WDDM 1.x does NOT support fixed pipeline. Microsoft emulates all fixed-function paths with Direct3D 9 shader code when you run WDDM drivers, and also remaps all Direct3D 3-8 functionality to Direct3D 9.

It does support the fixed pipeline. That's why you have to implement DDI calls for state changes like fog (you have to implement it) or color palette (I believe this one is a dummy). Some things are taken care of by the graphics stack, some aren't (or are handled terribly inefficiently).

My point was, WDDM driver is still required to support DDI9 and DDI11 alongside DDI12 for that exact reason - to maintain compatibility with old games, since DDI9 is still used by Direct3D 9 path and 10Level9 path in Direct3D 11. This way, compatibility problems remain the responsibility of IHVs, and these problems seem to be huge.

And the alternative is?

That is seemingly the reason why Microsoft are unwilling to repeat what they did in the Vista timeframe - i.e. either remap Direct3D 9-11 on top of Direct3D 12, or remap Direct3D 3-9 to Direct3D 11 and reimplement the latter on top of DDI12 in WDDM 2.0. This would be a huge task on its own, but they would also need to maintain compatibility logic - probably by way of the Direct3D compatibility layer mentioned above.

And they'd have to accept that only a fraction of machines would run Win10. Brilliant strategy, why haven't they thought about it?

"Software hack" is a myth of Direct3D 8 era when we still had things like "hardware fog". In this time and place, if your processing unit does not have an instruction code, operand, addressing mode, swizzle mode, or page table or TLB which is required for some feature, that is the end of the story. Trying to emulate these things will kill your performance and reduce the number of valuable resources ("slots") available to the applications.

Man, I wish you knew how many waivers some pieces of HW get for validation tests they can't handle. It obviously gets better, but it's a myth that software workarounds are a myth. Read sebbbi's response above.

Then I finally upgraded to Windows 8.1, and the game became rock solid - all on the same hardware with the same Catalyst driver version. Doesn't really look like a driver validation problem to me...
Sure, rearranging stuff in memory doesn't change the stability of a poorly written driver (or any piece of code for that matter). ;) There's a chance that what you've experienced was a DXGK problem, but it's much less likely since the same DXGK runs for everyone while your driver is run by a fraction of Windows users. Coverage matters - that's why AAA games ship with bugs that weren't experienced in QA (100 people, 40h/week, 3 months of stabilization is ~50k hours; 100k gamers playing 2h of the game on day one is 4x the time the code has run). I have to assume that you don't write systems code, correct?

No no no, WinSxS in Windows ME was a terrible idea that tried to solve the problems that Microsoft created mostly by themselves - that is, introducing "bugfixes" and "feature enhancements" to common libraries which break older applications! It was all their fault which they blamed on application developers.

Sure, it's their fault that they fixed bugs, but it'd also be their fault if said bug was exploited on your machine. Clever, but it doesn't work like that.

First of all, they didn't really follow secure software design rules until the Vista reboot, hence Windows before XP SP3 was full of security exploits, buffer overruns, and other dreaded bugs.

Dude, it's 2015 and you're arguing the quality of 10-year-old software. Which piece of code in 2005 (or 2001 if we're discussing XP) was of much higher quality? And it's not true that there was no focus on quality and security before the Vista reset. I know, because I interned before Vista shipped and there were tons of threat analysis docs from the pre-Vista timeframe, plus procedures and tools aiding development. MS has had static and dynamic analysis for ages. App Verifier and Driver Verifier existed in the XP timeframe. Prefix and PREfast (which was released as OACR) existed for some time too. I appreciate your opinion; the problem is it laughs in the face of the facts.

Secondly, their MSDN documentation was not clear enough because it was written by people who had access to the OS source code for developers who didn't, and was a source of many misunderstandings.

Here you're arguing the pre-2000 state of affairs, since MS was forced (and rightfully so) to document everything they themselves use, so that e.g. Office can't get an unfair advantage over other software. This was amplified in 2007, when the EU ordered the production of absurdly detailed documentation for everything created from then on. Sure, some pieces of code on MSDN were and are crap. This has nothing to do with access to source and everything to do with the structure: documentation is written by technical writers. Some of them are great, some... not so much. But if you have some experience with Win32 then you'll be able to easily spot problems with sample code and documentation. It doesn't matter how good the documentation is: if someone doesn't care about code quality, things will break. And they did.

On top of that, a lot of partners have communication channels you're not aware of. Most (if not all) Windows teams spend time with 3rd party developers and help them use APIs and whatnot correctly. This was definitely true from Vista onwards, but my guess is this wasn't new (the WinSE guys had these processes as well, so these contacts applied to XP sustained engineering too). And you didn't have to be a huge software shop, you just had to be smart. We've had 2-3 person companies mailing us with questions and visiting us on-site once or twice a year. Their code was being debugged and problems with API usage and documentation were identified and fixed. There's only so much you can do when EVERYONE can code for your platform. And as much as you want to hate, a lot was done.

Thirdly, they "solved" the problem of perceived "DLL hell" by requiring application to install commonn libraries - which doubled OS storage requirements, multiplicated support matrix, and made further security updates and bugfixes complicated.

I see, doomed if you don't, doomed if you do. Excellent. So if MS ships stuff - bad. If applications do - bad. If stuff doesn't work - bad. I guess the answer is either "you shouldn't have made any code mistakes in the last 20 years" or "just give up". Great outlook! :)
 
And they'd have to accept that only a fraction of machines would run Win10. Brilliant strategy, why haven't they thought about it?

They have: Windows 10 will be free for all Windows 7 and 8 users for the first year, so Windows 10 will certainly be on more than just a fraction of machines. Windows 7 and 8 make up about 70% of all Windows machines, and I'm sure it's much higher in the segment that actually cares about DX.
 
Intel and PowerVR GPUs emulate alpha blending by adding the blend instruction sequences and the back buffer read to the end of the pixel shader (allowing nice additional things such as pixelsync / programmable OpenGL blending extensions).
PowerVR GPUs do that, but Intel does not. They still have fixed-function blending.

The interpolation and fetch of the vertex shader outputs is manual. VS outputs to LDS and the pixel shader begins with instructions to fetch the transformed vertices and ALU instructions to interpolate those.
FWIW I believe something like this is done by all modern GPUs, because everybody nowadays uses barycentric coordinates. The hardware sets up the barycentrics, but the interpolation must be done in the shader (typically helped by some special instructions). One advantage of interpolation using barycentric coordinates is that they only need to be set up once, not per attribute - though if a shader uses multiple interpolation modes (linear, centroid, with and without perspective correction, plus per-sample modes) you'll get multiple sets of them; presumably those are easy for the hardware to set up, as they are tightly linked to rasterization itself.
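In plain C++ terms, the per-pixel ALU work being described is essentially this (a sketch of the math only, standing in for GCN's v_interp_p1/p2 pair; a0..a2 are the per-vertex outputs the vertex shader parked in LDS):

Code:
struct Barycentrics { float i, j; };  // set up by the rasterizer per pixel

// attr(i, j) = a0 + i * (a1 - a0) + j * (a2 - a0)
// For the perspective-correct attribute set, the correction is already
// folded into i and j by the hardware.
float InterpolateAttribute(float a0, float a1, float a2, Barycentrics b)
{
    return a0 + b.i * (a1 - a0) + b.j * (a2 - a0);
}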

All the current GPUs emulate integer divide by a sequence of instructions (tens of instructions per divide).
Intel actually has int div instructions (3 of them in fact, for getting quotient, remainder, and both). AMD does not, however (and I believe Nvidia neither).
 
PowerVR GPUs do that, but Intel does not. They still have fixed-function blending.
I have always assumed PowerVR GPUs do blending in the pixel shader, since they have had an extension for that purpose for a long time (GL_APPLE_shader_framebuffer_fetch). Obviously they could support both fixed-function blending and custom blending. However, with native (double speed) FP16 support, the extra (up to 2) blending instructions at the end of the shader should be practically free. The frame buffer fetch shouldn't cause any scheduling stalls on their GPU, since the back buffer tile is guaranteed to be in on-chip memory (no possibility of a cache miss on the read - less need to buffer multiple pixel shader instances to the same pixel simultaneously).

EDIT: read your reply again :). My mistake, you said Intel has fixed-function blending and PowerVR does not. This makes much more sense. Intel is also capable of programmable blending with their pixelsync extension. But unlike PowerVR, they have no guarantee that the existing back buffer pixel read doesn't stall because of a cache miss. Buffering a few fragments in the ROPs likely helps performance, meaning that blending needs to be separated from the pixel shader (as keeping the pixel shader stalled with all the GPRs allocated potentially slows down other work).
FWIW I believe something like this is done by all modern GPUs, because everybody nowadays uses barycentric coordinates. The hardware sets up the barycentrics, but the interpolation must be done in the shader (typically helped by some special instructions). One advantage of interpolation using barycentric coordinates is that they only need to be set up once, not per attribute - though if a shader uses multiple interpolation modes (linear, centroid, with and without perspective correction, plus per-sample modes) you'll get multiple sets of them; presumably those are easy for the hardware to set up, as they are tightly linked to rasterization itself.
DX9 hardware (including last gen consoles) already supported centroid interpolation. You could actually become interpolation (fixed function hardware) bound if you had too many interpolants (or used VPOS).

If all the modern PC/mobile GPUs do the interpolation in the pixel shader using barycentrics (like GCN does), it would mean that cross vendor SV_Barycentric pixel shader input semantic would be possible. This would be awesome, as it would allow analytical AA among other goodies on PC (assuming DX12 and/or Vulkan support it).

This presentation is a good example about how useful this feature would be: http://michaldrobot.files.wordpress.com/2014/08/hraa.pptx
Intel actually has int div instructions (3 of them in fact, for getting quotient, remainder, and both). AMD does not, however (and I believe Nvidia neither).
Not surprising that Intel is leading the pack. Do you know how fast their integer divider is (2 bits per cycle? = 16 cycles total?). That would be 3x+ faster than emulation. Still, I wouldn't use integer divides unless there's a very good reason.
 
They have: Windows 10 will be free for all Windows 7 and 8 users for the first year, so Windows 10 will certainly be on more than just a fraction of machines. Windows 7 and 8 make up about 70% of all Windows machines, and I'm sure it's much higher in the segment that actually cares about DX.
Discussion was about supporting D3D12 DDI. My bet is that plenty of HW out there won't get D3D12 drivers, regardless of Win10 install base.
 
Discussion was about supporting D3D12 DDI. My bet is that plenty of HW out there won't get D3D12 drivers, regardless of Win10 install base.

My take is Windows 10 will probably take both Win7 and Win8 drivers.
 