I already told you that the shader precision is higher on G965 than G35/GM965.
Which would also refute your own argument that the basic architecture is the same; different precision would be quite a significant change.
Then again, since you need FP32, and you claim that Intel added DX10 support for Vista certification, then I would assume that the chips can do FP32 precision. Else why bother adding DX10 support in the first place, if you already know your hardware will never meet the requirements?
First, you already told me a few details, and there are a couple of tests by individuals on the net. For gaming, which is the most relevant use of DX10, it's useless. I could understand it if the X3100 could at least play pre-DX9 games well, but for the most part it doesn't. Making a part that's otherwise GeForce 2-level, but making it DX10-compatible, isn't really nice.
About the article: it lists only some of the specifications, not all of them. So there might still be finer parts which are not done in hardware, but rather in software.
You think that Crysis running at ~4 fps in DX10 mode while it runs at ~6 fps in DX9 mode is bad?
On my 8800GTS the difference between DX9 and DX10 is roughly the same percentage.
Aside from that, if you had looked at the diagrams in the article you linked, you'd know that this architecture in fact *is* a DX10 architecture, built for unified shading and all that, and has nothing to do with GeForce 2-level hardware (the architecture is actually very similar to G80).
A DX9 architecture looks very different, let alone a DX7 architecture. A GF2 is barely programmable at all, and lacks many modern features, such as 3D textures or anisotropic texture filtering, and of course floating-point pixel shading/render targets.
There is no way you could make such a design DX10-compatible; the thought alone is ridiculous. The hardware needs to be designed for DX10 in the first place. If you cannot execute the pixel shaders in hardware, there's no way to get any kind of useful performance: you'd basically get SwiftShader-level performance, because all pixel processing would be done on the CPU. My X3100 is way too fast for me to assume it does anything like that on the CPU, so the hardware must be designed for it. Do you understand this?
On games, especially ones that are a couple of years old, software VS is much faster than hardware VS.
This is not true at all. In fact, many games were impossible to play before Intel enabled hardware vp. Games such as Far Cry leap to mind... games with high object and polygon counts, which therefore *require* hardware vp.
I think that either Intel has game-specific hardware vp optimizations in the driver at this point... or they use software vp as a fallback path for games that they have not yet verified to work with hardware vp.
But in games where hardware vp seems to work properly, such as Far Cry, the performance is actually quite good. It should only be a matter of time until hardware vp support is mature enough to run most software properly.