My Intel X3100 does DX10 :)

I already told you that the shader precision is higher on G965 than G35/GM965.

Which would also refute your own argument that the basic architecture is the same. Different precision would be quite a significant change.
Then again, since you need FP32, and you claim that Intel added DX10 support for Vista certification, I would assume that the chips can do FP32 precision. Otherwise, why bother adding DX10 support in the first place if you already know your hardware will never meet the requirements?

First, you already told me a few details, and there are a couple of tests by individuals on the net. For gaming, which is what DX10 is most relevant for, it's useless. I could understand it if the X3100 at least played pre-DX9 games well, but for the most part it doesn't. Making a part that's otherwise GeForce 2-level and then making it DX10-compatible isn't really worth much.

As for the article, it lists some of the specifications, not all of them. So there may still be finer details that are done not in hardware but in software.

You think that Crysis running at ~4 fps in DX10 mode while it runs at ~6 fps in DX9 mode is bad?
On my 8800GTS the percentage difference between DX9 and DX10 is about the same.

Aside from that, if you had looked at the diagrams in the article you linked, you'd know that this architecture in fact *is* a DX10 architecture, and built for unified shading and all that, and has nothing to do with GeForce 2-level hardware (the architecture is actually very similar to G80).
A DX9 architecture looks very different, let alone a DX7 architecture. A GF2 is barely programmable at all, and lacks many modern features, such as 3D textures or anisotropic texture filtering, and of course floating-point pixel shading/rendertargets.
There is no way you could make such a design DX10-compatible; the thought alone is just ridiculous. The hardware needs to be designed for DX10 in the first place. If you cannot execute the pixel shaders in hardware, there's no way you can get any kind of useful performance. You'd basically get SwiftShader-level performance, because all pixel processing would be on the CPU. My X3100 is way too fast to assume it does anything like that on the CPU, so the hardware must be designed for it. Do you understand this?

On games, especially ones that are a couple of years old, software VS is much faster than hardware VS.

This is not true at all. In fact, many games were impossible to play before Intel enabled hardware vp. Games such as Far Cry leap to mind... Games that have high object and polycount, and therefore *require* hardware vp.
I think that either Intel has specific hardware vp driver optimizations for certain games at this point... Or they use the software vp as a fallback path for games that they have not yet verified to work with hardware vp.
But in games where hardware vp seems to work properly, such as Far Cry, the performance is actually quite good. It should only be a matter of time until hardware vp support is mature enough to run most software properly.
 
Not nonsense. It was an article about the 15.9 driver, about how the GM965 will have DX10 support, but in software.

Oh whatever, I told you it was in a Microsoft presentation. Search for it if you do not believe me. There are still things in DX10 that Intel graphics do not fully enable but ATI/Nvidia do. (And about that Thai massage comment by Morgoth: you are just being a jerk... oh sorry, I meant idiot.)



Doesn't matter. The basic architecture of the G965 is identical to the GM965, and since I use it, driver development on both fronts is important to me. The 15.9 is a significant driver even for DX9, since from what I found it brings performance improvements that haven't been there since the first driver to support hardware VS.

The lack of performance is ENOUGH to call it a half-hearted implementation. Implementing it so that it doesn't run at optimal speed is half-hearted.

And there ARE optional features in DX10. DX10.1 implements those optional features: 32-bit FP filtering, 16-bit UNORM blending, RGB32 rendertargets.

http://translate.google.ca/translat...Intel+DX10+non+anamorphic&start=20&hl=en&sa=N

"New drive for desktop G35, notebook GM965/GL960 eventually bring the legends of the DX10 support, but because of structure, some specifications is still software support rather than hardware support,..."

I have an Intel presentation that lists some advancements of Gen 5 vs. Gen 4:
- Significant improvement in transcendental instruction performance
- Increased support for latency coverage at higher frequencies
- Support for shaders with longer instruction lengths

http://softwarecommunity.intel.com/articles/eng/1487.htm

Shader precision goes down from the G965's 32-bit to 24-bit for all the other IGPs. Things like the maximum 2D texture size sit between the DX9 (2k x 2k) and DX10 (8k x 8k) requirements, at 4k x 4k.

If you didn't understand my post, I'm not surprised that you don't understand the concept of the optional features and why they're mentioned in different slides. You know that SM4.0 support is optional for DX8 parts? As in, the option of implementing it exists... which means jack for being DX8-compliant. But anyway, carry on.
 
Which would also refute your own argument that the basic architecture is the same. Different precision would be quite a significant change.
Then again, since you need FP32, and you claim that Intel added DX10 support for Vista certification, I would assume that the chips can do FP32 precision. Otherwise, why bother adding DX10 support in the first place if you already know your hardware will never meet the requirements?

Uh, no. 32-bit FP filtering is not a requirement for DX10. Have you even gone to the link??

Pixel shader:
Shader precision: 24-bit

Texture sampler:
Compute precision: 16- and 32-bit

Color and Z filters:
32-bit

So I am guessing that for the pixel shader you can have 24-bit and still pass for DX10.

Aside from that, if you had looked at the diagrams in the article you linked, you'd know that this architecture in fact *is* a DX10 architecture, and built for unified shading and all that, and has nothing to do with GeForce 2-level hardware (the architecture is actually very similar to G80).

Performance is really bad in certain games. I'd say it's at best GeForce 2 level. Do you understand I was doing a performance comparison? There are really only a handful of games out there that run well on it. The rest run like crap.

This is not true at all. In fact, many games were impossible to play before Intel enabled hardware vp. Games such as Far Cry leap to mind... Games that have high object and polycount, and therefore *require* hardware vp.
I think that either Intel has specific hardware vp driver optimizations for certain games at this point... Or they use the software vp as a fallback path for games that they have not yet verified to work with hardware vp.
But in games where hardware vp seems to work properly, such as Far Cry, the performance is actually quite good. It should only be a matter of time until hardware vp support is mature enough to run most software properly.

Need for Speed: Underground, Guild Wars, FEAR and Civilization 4 run MUCH faster on software than on hardware. Also, if you run the vertex shader tests in 3DMark, software mode is 30% to 10x faster than hardware. There are LOTS of users out there who are surprised to find that some of their 5-6 year old games run REALLY slowly, and who then find that they run much faster on software vertex processing.
 
So I am guessing that for the pixel shader you can have 24-bit and still pass for DX10.
Uhhhh no. And that must be a mistake anyway - AFAIK every single X3000 derivative is FP32...

There are LOTS of users out there who are surprised to find that some of their 5-6 year old games run REALLY slowly, and who then find that they run much faster on software vertex processing.
Do you have any idea if that's normal (i.e. VS stealing a lot of ALU cycles for a chip that doesn't have that much ALU performance) or if it might actually be a bug?
 
Uh, no. 32-bit FP filtering is not a requirement for DX10. Have you even gone to the link??

Pixel shader:
Shader precision: 24-bit

Texture sampler:
Compute precision: 16- and 32-bit

Color and Z filters:
32-bit

So I am guessing that for the pixel shader you can have 24-bit and still pass for DX10.

I was talking about 32-bit shader precision, and I wasn't guessing, this paper from Microsoft says so: http://download.microsoft.com/download/f/2/d/f2d5ee2c-b7ba-4cd0-9686-b6508b5479a1/Direct3D10_web.pdf
32-bit FP filtering is optional, 32-bit shader processing is not (a 24-bit FP format doesn't even exist). 32-bit is a minimum requirement (in fact, wasn't 32-bit a minimum requirement for vertex shaders since day 1? With unified shaders that requirement of course trickles down to all types of shaders by default).
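
Just to illustrate the difference between a required and an optional DX10 feature: an application is expected to query optional capabilities like FP32 filtering at runtime rather than assume them. A minimal sketch of such a query (assuming an already created ID3D10Device; the helper name is just illustrative, error handling omitted):

```cpp
#include <d3d10.h>

// Illustrative helper: returns true when the driver reports that an FP32
// texture format can be sampled with filtering (optional in D3D10; point
// loads of this format are always available on a compliant D3D10 part).
bool SupportsFp32Filtering(ID3D10Device* device)
{
    UINT support = 0;
    if (FAILED(device->CheckFormatSupport(DXGI_FORMAT_R32G32B32A32_FLOAT,
                                          &support)))
        return false;

    // SHADER_SAMPLE means filtered sampling is supported for this format.
    return (support & D3D10_FORMAT_SUPPORT_SHADER_SAMPLE) != 0;
}
```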

Performance is really bad in certain games. I'd say it's at best GeForce 2 level. Do you understand I was doing a performance comparison? There are really only a handful of games out there that run well on it. The rest run like crap.

How is performance relevant to the discussion of whether or not it supports DX10 fully?
The reference rasterizer is incredibly slow as well, much slower than a GeForce 2. Are you telling me that refrast is not DX10?
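
For reference, refrast is exposed through exactly the same D3D10 API as real hardware. A minimal sketch of creating a reference device (assuming the DirectX SDK headers; function name just illustrative, error handling omitted):

```cpp
#include <d3d10.h>

// Create a D3D10 device that runs on the reference rasterizer instead of a
// hardware driver. Functionally it is a complete DX10 implementation; it
// just executes everything on the CPU and is therefore extremely slow.
ID3D10Device* CreateRefrastDevice()
{
    ID3D10Device* device = nullptr;
    D3D10CreateDevice(nullptr,                      // default adapter
                      D3D10_DRIVER_TYPE_REFERENCE,  // software reference device
                      nullptr,                      // no external software module
                      0,                            // no creation flags
                      D3D10_SDK_VERSION,
                      &device);
    return device;
}
```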

Need for Speed: Underground, Guild Wars, FEAR and Civilization 4 run MUCH faster on software than on hardware. Also, if you run the vertex shader tests in 3DMark, software mode is 30% to 10x faster than hardware. There are LOTS of users out there who are surprised to find that some of their 5-6 year old games run REALLY slowly, and who then find that they run much faster on software vertex processing.

What does that have to do with DX10?
And 3DMark05 and 06 are now actually faster with hardware vp than with software vp, with the 15.9 driver on my X3100.
I have already named other games, such as Half-Life 2 and Far Cry, which run much faster because of hardware vp. It seems that the hardware is capable of efficient vp; the problem is in the driver.

Regardless, I don't see why you try to use bad performance or flaky game support in DX9 games as some kind of excuse to claim that it's not a true DX10 part. That just doesn't make sense.
 
I was talking about 32-bit shader precision, and I wasn't guessing, this paper from Microsoft says so: (a 24-bit FP format doesn't even exist).
I wonder how MS could make a nonexistent format the minimum precision requirement for SM2.
 
I wonder how MS could make a nonexistent format the minimum precision requirement for SM2.

I'm talking about it not existing in SM4, obviously (I even link to an extensive article describing the DX10 pipeline specification).
In fact, even SM3 demands FP32.
Even the half (FP16) format is required to be processed with FP32 precision internally. The specifications clearly state that the data formats are for storage only, and all arithmetic is to be processed at 32-bit precision.

So if Intel's IGP can't do any of that then that would mean:
1) They will never be able to get their drivers/hardware validated for DX9+SM3 or DX10.
2) Because of the lack of precision, a lot of software will produce incorrect results.

Since Intel gets various certifications from Microsoft, 1) is unlikely.
It's hard to draw conclusions about 2) since there is a lot of software that produces incorrect results because of the poor driver compatibility. However, since a lot of relatively complex software such as 3DMark03/05/06 and Far Cry do render correctly, it seems likely that the incorrect results are caused by the drivers, not by the hardware itself.

In fact, the 15.9 beta drivers had a problem with a vertex shader example from the DX10 SDK, making some vertices jump out. This seemed to be a problem with insufficient precision. But with the final 15.9, the problem was fixed.
In general, rasterization on the X3100 is perfectly stable anyway, something that many older IGPs had problems with due to lack of precision (especially ones with software vp, and not just Intel's either... my Radeon IGP340M couldn't handle long, thin polys well).
 
I've found Intel IGPs' touted feature sets to be rather dubious, especially with the GMA 950 and below. For example, because these chips don't even have emulated vertex shaders, there are games that will not see them as even DirectX 8 GPUs. Morrowind, DOSBox* and Max Payne 2, for example, won't use shader effects at all. Of course, I'm just guessing here that this is why they don't detect hardware support for shader effects.


* DOSBox has shader-based scalers in some builds
 
Well, if that's the case then technically the problem is in the software, because AFAIK there's nothing in the DX8/DX9 spec that says the hardware has to support vertex shaders if you want to support pixel shaders (there are separate caps for VS and PS anyway), and the D3D API has its own software emulation built in. The developer just has to check for support and manually enable the software layer if he wants to use vertex shaders without hardware support (I used to do that on my GF2).
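
For what it's worth, this is roughly what that fallback looks like in D3D9. A minimal sketch (assuming the DirectX 9 SDK headers; the function name is just illustrative, error handling omitted):

```cpp
#include <d3d9.h>

// Check the device caps and fall back to the API's built-in software
// vertex processing when the hardware exposes no vertex shader support
// (pixel shader support is a separate cap and may still be present).
IDirect3DDevice9* CreateDeviceWithVpFallback(IDirect3D9* d3d, HWND hwnd,
                                             D3DPRESENT_PARAMETERS* pp)
{
    D3DCAPS9 caps = {};
    d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

    DWORD vpFlag = (caps.VertexShaderVersion >= D3DVS_VERSION(1, 1))
                       ? D3DCREATE_HARDWARE_VERTEXPROCESSING
                       : D3DCREATE_SOFTWARE_VERTEXPROCESSING;

    IDirect3DDevice9* device = nullptr;
    d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                      vpFlag, pp, &device);
    return device;
}
```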

Regardless, that sort of thing is in the past soon, because DX10 doesn't allow you to arbitrarily choose which features you want to implement and which you don't.
Even if they were to do software vertex processing, they still have to make it appear like a fully functional DX10 unit to the API. But Intel has hardware vp in its IGPs now anyway.
I really don't think the current generation of IGPs should be compared to the GMA950 and earlier at all, because the hardware is completely different. Sadly, the drivers weren't, until recently... Which meant that these DX10-capable parts initially still ran with only software vp and SM2.0 support.
I think with this current generation of IGPs, the hardware designers actually did a pretty nice job, but it seems the driver department has trouble keeping up. But we've had a few nice milestones recently... Like the addition of hardware vp last year, and now DX10. If they can iron out the bugs and make performance more consistent, then the X4000-range may actually become a competitor to the current line of nVidia IGPs.
 
Games such as Far Cry leap to mind... Games that have high object and polycount, and therefore *require* hardware vp.
Far Cry requires hardware vp because it can't handle software vp, not because the polycount is insanely high. But I second that it's highly unlikely that Intel implemented D3D10 vp on the CPU. It doesn't make any sense, because you would need to implement a full texture filter in software for it, which would be rather massive and incomprehensibly slow when doing full 128-tap AF samples ;)

Besides, I don't know if it's even possible, because of stream out and things like that.
 
Far Cry requires hardware vp because it can't handle software vp, not because the polycount is insanely high.

The problem wasn't that the game didn't work. The problem was that it was too slow with software vp.
Other games such as Half Life 2 didn't suffer that much from software vp because the polycount/object count was much lower in general.
That's what I meant by saying it *requires* hardware vp. There just isn't a CPU that's fast enough to process the geometry in realtime. Just try with SwiftShader on a fast CPU and a low resolution and everything low detail except for the geometry. It still dies completely.

Intel actually used Far Cry to demonstrate their new hardware vp driver and the extra performance it delivered, last year, comparing old and new driver performance (see the links in my second post).
Sadly that was a driver for XP only, and in Vista software vp stuck around far longer, at least on the X3100. But recently they included hardware vp for Vista as well, and now Far Cry runs pretty well on my X3100 at last.
 
And 3DMark05 and 06 are now actually faster with hardware vp than with software vp, with the 15.9 driver on my X3100.

So?? What do you get in each of the modes?? Have you tested the previous versions of the driver in each of the modes?? The hardware mode didn't get faster; the software mode likely went down :p.
Do you have any idea if that's normal (i.e. VS stealing a lot of ALU cycles for a chip that doesn't have that much ALU performance) or if it might actually be a bug?

From Intel's paper titled "Getting the most out of Intel graphics":

Under Vertex/Primitive Processing Capabilities:

- HWVP is enabled for all titles by default
- SWVP may offer performance enhancements on Core 2 Duo/Core 2 Quad CPUs
- For SWVP, the VS/GS/Clip stages behave as pass-through
  * Reallocate the compute resources back to pixel processing for overall performance gain
- Driver will always export full HWVP support
  * SWVP may be used based on configuration and workload
  * SWVP has optimizations beyond the current PGSP, including support for evolving CPU instructions
- Peak vertex throughput through the fixed-function pipe is defined by the cull rate
  * Gen 5 has ~2x the vertex processing throughput of Gen 4 HWVP
- Peak Early-Z reject rate is 4 pixels/clk

So Intel themselves do acknowledge that some games run faster on SWVP.
 
So?? What do you get in each of the modes?? Have you tested the previous versions of the driver in each of the modes?? The hardware mode didn't get faster; the software mode likely went down :p.

The new driver with hwvp gives me the highest score I've seen so far. Stop your silly conspiracy theories.

So Intel themselves do acknowledge that some games run faster on SWVP.

Depends on the game and the CPU used, obviously. My laptop only has a 1.5 GHz Core 2 Duo, which will have a completely different swvp performance level than, e.g., a 3.0 GHz Core 2 Quad, yet the X3100 will have the same hwvp performance with either CPU.
And of course, if the game already keeps the CPU busy with physics, AI and that sort of stuff, the CPU won't have time for swvp, but hwvp won't be affected.
(And of course, since the drivers develop and expand so quickly, a lot of info/benchmarks etc. get outdated really soon.)

You also seem to ignore the fact that even if *some* games run faster on swvp, all the *other* games apparently still run faster on hwvp (besides, how does that compare to other IGPs? I wouldn't be surprised if a fast Core 2 Quad could also outperform other IGPs' hwvp levels in isolated cases).
Also, since Intel's driver performance is constantly improving, the list of games that runs faster with swvp will get shorter and shorter.
So I don't see the problem really. The original GeForce also didn't always get better results with hwvp at the time. In fact, a lot of reviewers initially wrote hardware T&L off as just a useless technology. They will probably feel silly now when they read back their old reviews.
In the end, regardless of whether a game runs faster with hwvp or swvp, if it plays well... who cares? It's just nice to have hwvp for those games that didn't run well with swvp, like Far Cry.
If some games run better with swvp, then continue running them with swvp I say.
The point was that we now have the option to use hwvp at last, as well as having DX10 at last.
You might also want to read this: http://download.intel.com/design/chipsets/applnots/31888801.pdf
It puts things more in perspective and shows how much hwvp can gain in some titles like HL2:Ep1 and Far Cry.
It also confirms what I suspected: Intel has a list of applications in the driver, and their preferred vp path, so you get the best of both worlds...

I mean, what are you on about anyway? Nobody in his right mind should expect proper gaming from an IGP, especially not from Intel, so trying to argue that an IGP doesn't run games very well is just silly. It's not like any other IGP gets anywhere near a decent videocard or a gaming console either. So I don't see why people have to come down on Intel so hard.

I personally think nVidia should be more ashamed of itself. Its latest line of IGPs doesn't exactly perform well either (and X4000 may actually get reasonably close), but nVidia is actually the leading GPU company, they have all the technology in-house.
Intel just tries to build an IGP as cheap as possible, while still having the basic features required for Vista Aero and video playback and that sort of thing.

Now, in the past I wasn't happy with Intel either, because their hardware lacked features and their drivers were abominable. But with their X3000 series, I couldn't really fault the hardware... it's a nice, elegant design with DX10 support and even some basic video acceleration (ClearVideo, which actually delivers decent quality with e.g. PowerDVD... as far as I can tell, it looks as good as my 8800GTS's PureVideo). The drivers were still really poor at first, especially in Vista... but over the past 6 months or so, there has been a tremendous improvement. They've 'filled in the blanks' now, with full SM3.0 and DX10 support, and hwvp. At this stage the DX9 portion is quite acceptable... they just need to iron out some bugs, and perhaps they can improve performance some more (I still see bugs on my 8800GTS from time to time as well, which need to be fixed by driver upgrades; it's not like Intel is alone in this... in fact, in XP/XP x64 I got weird bugs in Crysis, where some polygons would mutate into really large strips that covered nearly the entire playing field... they only fixed that with the drivers released last week).
DX10 is still poor (although Crysis and BioShock run), but so what? It's only the first driver release with DX10 support... it's not like nVidia and ATi got it right on the first try either. Besides, nobody on this forum really expects the Intel IGPs to actually be *useful* with any DX10 games, right? It's not like DX10 games run well on AMD's and nVidia's IGPs either, they're still too slow.

So I'm just interested in the Intel IGP in a purely academic way... I know it's never going to be a good gaming chip, but it's interesting to see how Intel is slowly ramping up its drivers to reach the full potential, both feature-wise and performance-wise. It will be very interesting to see just how close Intel can get to nVidia's IGPs with the X4000.
I'm just anxiously awaiting the day that 3DMark Vantage runs properly on the Intel IGPs, so we can do some detailed comparisons.
ShaderMark 2.1 actually runs correctly on my X3100 by the way, some stuff even over 200 fps in 1024x768 :)
 
The new driver with hwvp gives me the highest score I've seen so far. Stop your silly conspiracy theories.

User report:

http://www.computerbase.de/forum/showthread.php?t=309411

From that report, you can see that ever since the 15.44 drivers the score has gone down. The 15.44 driver ran in software mode only, since it didn't support hardware, and the newer drivers run software by default. Yet from what I read, the hardware result didn't change too much. Most likely the software mode went down, so the hardware mode looks better than software by comparison. If you can call that an improvement, so be it. Still, for lots of games, especially older ones, they are just rising back up to the pre-hardware-driver performance.

This is true for the X3100. The desktop X3000 seems to be generally more positive. For the X3100 though, 15.9 is the first driver that showed improvement over predecessors.
 
The one being silly is YOU. Go to notebookreview.com and ask for the hardware and software mode results.

No, you're being silly, you're just ranting on about Intel's IGPs with all kinds of crackpot theories or outdated info, and ignoring any technical discussions (I mean, you even tried to claim that the drivers don't support DX10 based on some version number, despite the fact that I was actually *running* DX10 code on my X3100! Your credibility is 0 anyway).
Go to another forum, this is a forum dedicated to technical discussion, not to fanboys, conspiracy theories and other irrational nonsense.
 
No, you're being silly, you're just ranting on about Intel's IGPs with all kinds of crackpot theories or outdated info, and ignoring any technical discussions.
Go to another forum, this is a forum dedicated to technical discussion, not to fanboys, conspiracy theories and other irrational nonsense.

Whatever. You are just being a crackhead. See for yourself.

3DMark05
Driver (build) 3DMark score
15.4 (1255) 885 3DMarks
15.4.1 (1268) 858 3DMarks
15.4.3 (1283) 867 3DMarks
15.4.4 (1295) 868 3DMarks
15.6b (1288) 671 3DMarks
15.6 (1322) 882 3DMarks
15.6.1 (1329) 459 3DMarks
15.7 (1364) 866 3DMarks

3DMark06
Driver (build) 3DMark score
15.4 (1255) 532 3DMarks
15.4.1 (1268) 515 3DMarks
15.4.3 (1283) 529 3DMarks
15.4.4 (1295) 522 3DMarks
15.6b (1288) 425 3DMarks
15.6 (1322) 493 3DMarks
15.6.1 (1329) 359 3DMarks
15.7 (1364) 440 3DMarks

http://forums.vr-zone.com/showthread.php?t=129343&page=96

Absolutely NO progression.

Convenient graphs: http://forums.vr-zone.com/showthread.php?t=129343&page=96

Hardware is slower most of the time.

http://forums.vr-zone.com/showthread.php?t=129343&page=95

14.32 gives me 690 in HW mode and 868 in SW mode (3dmark05)

SLOW!!

You can also read the user claiming that Civ runs much faster in SW mode.
 
Uhhh, this topic is about driver 15.9, and your list of 3DMark06 stops at 15.7.
Aside from that, you constantly throw data around without any mention of the actual CPU, OS and IGP used.
You can't just compare X3000, X3100 and X3500... you can't compare Vista to XP, and you most certainly cannot compare processors, especially when clockspeeds can range from about 1.4 to 3.33 GHz, and from 1 to 4 cores.

Heck, even within the very threads you link to, there are quite varying results, with some people getting faster hw performance than sw performance in 3DMark2001SE or 3DMark03, for example.
Some also show increasing performance from newer drivers.
You're just trying to conveniently pick some results to support your negative views, and ignore the fact that there are plenty of results that dismiss your views.
 
Oh, sorry that you didn't ACTUALLY click on the link and bother to look at it. If you had, you'd find that the first results in my latest post were from Agent_J, who has an X3100, and that the guy testing Civ 4 was also using an X3100.

Agent_J: hmm noxxle and I are running the same vbios (1436), however he does have a T7300 and 2GB RAM vs me with a T7100 and 1GB RAM. can you think of any reasons why noxxle would not be suffering from decreased performance with the 15.6+ drivers vs me who does?

You can't just compare X3000, X3100 and X3500... you can't compare Vista to XP, and you most certainly cannot compare processors, especially when clockspeeds can range from about 1.4 to 3.33 GHz, and from 1 to 4 cores.

Such bullcrap. The fastest processor there is a desktop 2.66 GHz Core 2 Duo, but I didn't link his results to prove my point. And there are only dual-core processors there.

There are ONLY THREE MAIN USERS I linked to make my point in the last post: two with an X3100 and one with an X3000. And the two X3100 users don't have vastly different system specs. As for XP vs. Vista testing, Vista was generally slower, but most of the problems are consistent: slower hardware mode in 3DMark, the same image quality problems, the same slowdowns (enabling smoke makes it unplayable in NFS).

The first point is that you are ignoring me just as much as I am ignoring you, and the second is that the systems aren't different enough to produce vastly different results.

The X3100's hardware mode is slower than software in 3DMark and in lots of older games, and the X3000 performs about the same in 3DMark and slower in hardware in a few games.
 
I'm ignoring you because you're just ranting on.
This topic was solely about the 15.9 drivers which add DX10 support to X3100 (and X3500, although I've not seen reports from anyone with that IGP yet), which I think is quite a milestone for Intel IGPs. So I just wanted to report on what did and didn't work with DX10, and perhaps some short performance indications.
Aside from that, apparently these drivers also improve performance in various DX9 applications.

I don't know what your problem is, but you are obviously not a 3d programmer or hardware engineer like most people on this forum. You do not seem to understand many of the technical aspects of the discussion here, and you just throw random numbers and crackpot theories around, with no other interest than to put down Intel's IGPs any way you can. If you're just bothered by your X3000's poor gaming performance, here's a tip for you: buy a proper 3d card from AMD or nVidia. If you really expected an X3000 to be capable of playing games, then you just didn't inform yourself properly before buying, which I don't think anyone here really cares about.
This was supposed to be a positive thread about Intel's IGPs, because of the improvements in Intel's drivers and the milestone that is DX10 support.
 