My Intel X3100 does DX10 :)

Intel promised DX10 support on some of their IGPs long ago, but there has never been an actual DX10 driver...
Recently, rumours surfaced that driver version 15.9, due for release in late April to early May, would be the one to finally enable DX10 on selected IGPs.

Anyway, when I looked around, I found this site:
http://www.computerbase.de/downloads/treiber/grafikkarten/intel_grafiktreiber

These drivers are a 15.9 beta release from last month, and they actually do deliver DX10.

Not that it allows me to play Crysis even on low detail, but still (the game does actually start in DX10 mode and seems to render correctly, something that even the DX9 path didn't do until 15.8) :)

Anyway, at least it will allow me to use hardware acceleration when I'm developing my DX10 engine on my laptop, and that's great news for me... The DX10 tutorial with the textured cube used to spin at < 1 fps in software... Now I get 1500 fps, so that's quite an improvement :)
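
For anyone curious, this is roughly the device-creation logic involved; a minimal illustrative sketch (not my actual engine code): try the hardware driver first and only drop to the reference rasterizer if that fails, which also tells you whether the DX10 path is really hardware-accelerated.

// Minimal sketch: try to create a hardware D3D10 device first, and only fall
// back to the reference rasterizer (the '< 1 fps' software path) if that fails.
// Link against d3d10.lib; error handling kept to a minimum.
#include <d3d10.h>
#include <cstdio>

ID3D10Device* CreateDeviceHardwareFirst()
{
    ID3D10Device* device = NULL;

    // Preferred: the real hardware driver.
    if (SUCCEEDED(D3D10CreateDevice(NULL, D3D10_DRIVER_TYPE_HARDWARE,
                                    NULL, 0, D3D10_SDK_VERSION, &device)))
    {
        printf("Hardware D3D10 device created\n");
        return device;
    }

    // Fallback: the reference rasterizer, fully in software.
    if (SUCCEEDED(D3D10CreateDevice(NULL, D3D10_DRIVER_TYPE_REFERENCE,
                                    NULL, 0, D3D10_SDK_VERSION, &device)))
    {
        printf("No DX10 hardware/driver, using reference rasterizer\n");
        return device;
    }

    return NULL; // no D3D10 device available at all
}

With these betas the hardware path succeeds on the X3100, whereas before you'd always end up on the reference device.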

PS: it scores 910 3DMarks in 3DMark05 now.
 
Well, with these new drivers I also tried some DX9 games...
A while ago, Intel promoted their coming drivers with hardware vertex processing:
http://www.youtube.com/watch?v=phmypcn6b9Y
http://www.youtube.com/watch?v=OogVYWQLAhM

But these were for XP, and my laptop came with only Vista, so I still couldn't get this performance... in fact, a lot of games didn't even run, or didn't run properly.
I tried them with these new DX10 drivers, and indeed, Far Cry is now actually quite playable, and renders correctly even at the highest detail settings. It also looks great, with good texture filtering and all.
Half-Life 2 also works nicely and looks great, although it still has some minor glitches. It can even handle Episode 1 reasonably well, including the HDR effects.

So well, Intel just keeps on improving their hardware and drivers... They still have a ways to go, but if they continue improving, they may well have good quality drivers and a very nice GPU by the time Larrabee is introduced.
I also wonder what the X4000-series is going to do, considering that the X3100 is already capable of acceptable gaming now, and is DX10-compatible. X4000 may actually make Crysis doable.
 
I'll bite. While rendering quality might be nice and the chip draws the right pixels, even DX10 ones, I fail to see how X3100 is capable of "acceptable gaming". Full-screen gaming on the kinds of panels X3100 is likely to be driving (let's face it, it's a laptop IGP) isn't going to be anything other than a scaled slow mess *shrug*

Having to run at < 1024x768, scaled up to a notebook panel at low framerates, might be acceptable to you with games that are a couple of years old or more, but I don't buy it myself. Windowed mode is still something a bunch of game developers don't seem to realise exists.

Nice if you want to do a bit of DX10 dev on the move and you want more than software rendering, but it's not a gaming graphics processor. We both know that any cheap discrete GPU will just destroy it for the most part.

If X4000 or whatever it's called is 'doable' for Crysis at any sane resolution, I'll eat a tasty hat.
 
I'll bite. While rendering quality might be nice and the chip draws the right pixels, even DX10 ones, I fail to see how X3100 is capable of "acceptable gaming". Full-screen gaming on the kinds of panels X3100 is likely to be driving (let's face it, it's a laptop IGP) isn't going to be anything other than a scaled slow mess *shrug*

Well, my panel is 1280x800, and I ran the games at that resolution. It could take medium to high settings and still give 25-30 fps, which is 'acceptable'.
Sure, if you have a high-end laptop, you may have a higher resolution, but then again, you probably also have a higher-end graphics chip... e.g. my laptop at work, which has an nVidia Quadro.
Even so, scaling down to 1280x800-ish resolutions is not all that bad. At least it's not 640x480 or something, so you actually still see quite a bit of detail. Bugs of the past, like rasterizing not being accurate enough, or texture filtering being very poor, are all gone. I no longer see any weird discontinuities in models, and when I select 16xAF, I get very sharp texturing. So in short, the rendering/texturing quality can now compete with discrete GPUs (save for AA, which simply is not available, and considering the performance level, it would not be useful anyway).

On another note, PowerDVD 7 detects and uses the ClearVideo acceleration, which does a good job of giving a crisp image. So the IGP has that front covered as well now.

For all the rest, you just seem to want to rag on the Intel chips for not being as good as discrete GPUs. That's not really the point, is it? The point is that the lowest common denominator keeps being pulled up, which is good news for both end-users and developers.
As poor as these chips and their drivers are, you have to realize that a LOT of people are using them. I think considering the cost (about $4 according to Intel), it's impressive that this IGP can more or less match the level of the Radeon 9600XT that I had in my previous desktop PC as my primary development/gaming GPU.

I didn't even buy this laptop myself; I inherited it from my dad (obviously I do have a desktop PC with a full-blown discrete GeForce 8800 card in it). It's just fascinating to see how well this absolute bottom-end IGP is doing now that the drivers are starting to catch up.
 
I can sort of buy your lowest common denominator argument. There's a whole PC gaming industry that revolves around games designed to run well on that class of hardware. Which is fine. But then there's a whole class of games, some of which you mention, which just aren't designed for IGPs at all. Min-spec for those is somewhere else entirely, and for good reason. Devs actively don't care about an Intel IGP any more.

I'm all for affordable graphics, and it's good to see the basic perf and IQ level sitting some way above where it has historically been. But I don't think comparing IGPs to discrete is missing the point either. There's a certain class of device where the form factor and power budget mean a $4 IGP is the only thing that can ever be considered to push the pixels.

But then there's an enormous market that currently bundles nothing but IGPs, one that could add a discrete GPU to the system for little extra cost but never does, where the performance expectation is massively higher and the consumer would get a much better gaming experience.

That's what bugs me the most. The low-end discrete market is clobbered by IGP sales in laptops and desktops alike because OEMs don't see gaming as a value addition down there.
 
I can sort of buy your lowest common denominator argument. There's a whole PC gaming industry that revolves around games designed to run well on that class of hardware. Which is fine. But then there's a whole class of games, some of which you mention, which just aren't designed for IGPs at all. Min-spec for those is somewhere else entirely, and for good reason. Devs actively don't care about an Intel IGP any more.

I think you have it backwards. Games never cared for IGPs at all. In fact, IGPs are a relatively new development. Back in the day, video cards were nearly always discrete, and if they were integrated on the motherboard, they were usually nothing more than the discrete components (including video memory) built onto the same PCB.
So the earliest breed of PC games tended to run on Hercules and/or CGA graphics... EGA was mostly skipped, and then many games started to demand VGA graphics, which was usually a discrete ISA card.
Then we moved on to SVGA with VLB and PCI cards, and then the 3D revolution started, of course again with discrete cards like the Voodoo. Games started demanding 3D cards... and it wasn't until later that integrated video chips started offering 3D acceleration at all... very poorly at first, but at some point IGPs from ATi and nVidia started to adopt most of the basic technology of the discrete cards, meaning they were more or less on par feature-wise, just far slower.
So until recently, IGPs just weren't an option for gaming by default, because they were too far behind. Intel is the only one that hasn't quite caught up yet, but they are now finally making the effort, it seems. I don't think anyone took Intel seriously with graphics anyway. And deservedly so, because only a few years ago, their drivers were so horrible that even Office applications were not always rendered properly.

I'm all for affordable graphics, and it's good to see the basic perf and IQ level sitting some way above where it has historically been. But I don't think comparing IGPs to discrete is missing the point either. There's a certain class of device where the form factor and power budget mean a $4 IGP is the only thing that can ever be considered to push the pixels.

But then there's an enormous market that currently bundles nothing but IGPs, one that could add a discrete GPU to the system for little extra cost but never does, where the performance expectation is massively higher and the consumer would get a much better gaming experience.

That's what bugs me the most. The low-end discrete market is clobbered by IGP sales in laptops and desktops alike because OEMs don't see gaming as a value addition down there.

I wonder though... AMD is trying to start off an 'IGP war' with their 780G chipset, which tries to close the gap with low-end discrete cards like the Radeon 2400 or GeForce 8400.
Intel is now pretty much 'up to date' with its drivers as well (the X3100 hardware has actually been around for quite a while, but hardware vertex processing, DX10 features and some of the video acceleration were just never enabled until recently). And of course Intel is gearing up for Larrabee next year. You can think of X3000/X4000 as a 'test-run' for their graphics team. They are now building up experience with dynamically allocated unified shaders, and general DX9/DX10 driver compatibility/optimizations...
So on the one hand, Intel may produce a serious discrete card with Larrabee, and on the other hand, Intel's IGPs will probably make quite a leap in performance once the first Larrabee spin-offs are introduced.
So the feature gap between IGPs and discrete cards has already been closed, and it seems that the performance gap will be narrowed by ATi/Intel as well (and nVidia cannot remain behind, of course). DDR3 also delivers more bandwidth for IGPs, and Nehalem will have a triple-channel memory controller, for even more bandwidth goodness.

So I'm just saying: Intel may finally get serious with graphics... this X3100 is looking good so far. They seem to have the basic hardware going, and the drivers are starting to mature as well... X4000 should also bump up the performance with higher clock speeds, more memory bandwidth, and more processing units... And then we'll have to see what Larrabee is capable of.
 
I think you have it backwards. Games never cared for IGPs at all.
But you just told me that IGP is everywhere, so if you're a developer looking to make money, you clearly do care. Most of the best-selling PC games of all time have always considered Intel IGPs, precisely because of that. They're a relatively new development only if you look back before the advent of consumer accelerated 3D. I tend to use that dawn as a baseline around here, for somewhat obvious reasons :p

I wonder though... AMD is trying to start off an 'IGP war' with their 780G chipset, which tries to close the gap with low-end discrete cards like the Radeon 2400 or GeForce 8400.
When RS780 and RS790 are almost an order of magnitude faster than X3100 at the same IQ levels in modern games, you better believe it. An order of magnitude is sometimes being kind to X3100, too.

It's no test run for Larrabee either, no more than i740 was.

A quick grep of X3100 reviews shows me that it's still a huge embarrassment compared to competing IGP solutions in the games you say run acceptably (and the data says they really really don't run like that at all). G45/X4000 won't do much to change that, in my humble opinion.
 
But you just told me that IGP is everywhere, so if you're a developer looking to make money, you clearly do care.

No, because a large part of the market for IGPs is in office/workstation/server machines which will never ever see a game anyway.
The market for games is not the same as the market for IGPs, and never will be.

Most of the best-selling PC games of all time have always considered Intel IGPs, precisely because of that.

There is little evidence to support that, because until recently, many games didn't even run or render correctly on an Intel IGP in the first place, and even if they did, generally they weren't playable even at the lowest resolution and quality settings.

When RS780 and RS790 are almost an order of magnitude faster than X3100 at the same IQ levels in modern games, you better believe it. An order of magnitude is sometimes being kind to X3100, too.

Now you're just twisting the facts.
A 780G scores 1183 3DMarks in 3DMark06 (http://arstechnica.com/reviews/hardware/amd-780g-chipset-review.ars/3).
My X3100 scores 560 3DMarks in 3DMark06... and part of that is probably because of my modest 1.5 GHz T5250 processor (the IGP in that article is a 3100, incorrectly labeled as an X3100 by the article... note the huge performance difference, and the fact that that one doesn't even do SM3.0, while this one already does SM4.0).
So they're about a factor of 2 apart, nowhere near a factor of 10.
Besides, these are the extremes of the IGP market today... The 780G is the fastest IGP, and the X3100 is the slowest. nVidia is somewhere in between.
If Intel can get 20-40% extra performance out of the X4000 series (which is not that unreasonable considering it gets 10 processing units instead of 8, it will use DDR3, and it is clocked higher), then they'll be more or less in the same ballpark performance-wise.
This rumour includes an Intel slide claiming about 3x the performance of G33: http://www.fudzilla.com/index.php?option=com_content&task=view&id=3828&Itemid=1
So they should land somewhere in the 800-1000 3DMark range... a stone's throw away from the 1183 that the 780G gets.

It's no test run for Larrabee either, no more than i740 was.

It is, because the same driver codebase will be used for Larrabee, and obviously the experience in GPU design that Intel has gathered with this and previous IGPs will be taken into account when designing the Larrabee GPU.
The i740 has nothing to do with Larrabee, but the X3000/4000 series are actually the direct precursor to Larrabee, and like Larrabee they are DX10-parts.
 
It is, because the same driver codebase will be used for Larrabee
Uhm, no, the architectures are 100% different and the amount of sharing will be ~0%.
The i740 has nothing to do with Larrabee, but the X3000/4000 series are actually the direct precursor to Larrabee, and like Larrabee they are DX10-parts.
Larrabee is a DX11+ part... That is, it'll come out in the same timeframe as DX11 GPUs and will support DX11, but obviously is fully programmable except for texture sampling so it'll likely be able to support DX12 (which also means Intel *might* just keep refreshing the architecture for a couple of years if it's efficient enough).
 
A 780G scores 1183 3DMarks in 3DMark06 (http://arstechnica.com/reviews/hardware/amd-780g-chipset-review.ars/3).
My X3100 scores 560 3DMarks in 3DMark06... and part of that is probably because of my modest 1.5 GHz T5250 processor (the IGP in that article is a 3100, incorrectly labeled as an X3100 by the article... note the huge performance difference, and the fact that that one doesn't even do SM3.0, while this one already does SM4.0).
So they're about a factor of 2 apart, nowhere near a factor of 10.
3DMark really isn't particularly representative here; plus, the final score will be dominated by the CPU component of the scoring in these types of scenarios. If you really must look at 3DMark, then it's best to find something that has a breakdown of the individual tests and feature tests.

Alternatively, click through one page in the Ars review, or take a look at something like this to see gaming performance differences.
 
3DMark really isn't particularly representative here; plus, the final score will be dominated by the CPU component of the scoring in these types of scenarios. If you really must look at 3DMark, then it's best to find something that has a breakdown of the individual tests and feature tests.

I have seen such breakdowns, and the X3000 still performs quite well: http://techreport.com/articles.x/12195/9

Alternatively, click through one page in the Ars review

As I already said, that review does NOT contain an X3100 but rather a regular 3100, which is indeed slower and has far fewer features.

or take a look at something like this to see gaming performance differences.

There seems to be something wrong with those numbers. In the other review on the same site, the G965 was pretty much on par with the GeForce 6150, and not extremely far behind the 690G.
While the 780G obviously is faster, the difference shouldn't be THAT large (especially since the X3500 should also be at least as fast as the X3000).
I think I know exactly what it is: they tested on Vista, and the other test was on XP.
Which brings me back to the start of my thread: Vista drivers used to lag behind the XP ones, still using software vertex processing... These new drivers give a significant performance boost and put performance close to the XP level again. As you can see, the XP review scored 660 3DMarks. I used Vista and a slower CPU, which still gave me 560 3DMarks, pretty close. With the old Vista drivers, performance was more in the sub-100 range, and indeed gaming was completely impossible.
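
For completeness, this is roughly how a D3D9 application picks between the two vertex processing paths; a minimal illustrative sketch, and I'm assuming here that the old Vista drivers simply didn't report hardware T&L, which is what would push games onto the software path.

// Minimal sketch: check whether the driver reports hardware T&L and pick the
// vertex processing mode accordingly. A driver that can't do hardware T&L
// leaves this cap unset, so games end up on the slow software path.
// Link against d3d9.lib; error handling kept to a minimum.
#include <d3d9.h>

IDirect3DDevice9* CreateD3D9Device(HWND hwnd)
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d)
        return NULL;

    D3DCAPS9 caps;
    d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

    // Hardware vertex processing only if the driver exposes HW T&L.
    DWORD vp = (caps.DevCaps & D3DDEVCAPS_HWTRANSFORMANDLIGHT)
                   ? D3DCREATE_HARDWARE_VERTEXPROCESSING
                   : D3DCREATE_SOFTWARE_VERTEXPROCESSING;

    D3DPRESENT_PARAMETERS pp = { 0 };
    pp.Windowed      = TRUE;
    pp.SwapEffect    = D3DSWAPEFFECT_DISCARD;
    pp.hDeviceWindow = hwnd;

    IDirect3DDevice9* device = NULL;
    d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd, vp, &pp, &device);
    return device; // NULL if creation failed; cleanup of d3d omitted for brevity
}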
 
Larrabee uses x86 cores. X3100... is so far from this that it's not even worth considering. In what sense can they conceivably share a codebase?

You think there is nothing more to a GPU and its drivers than the type of machine code that a stream processor processes?
In actual fact, the instruction set is not that relevant. It only matters for the shader compiler. There is a WHOLE lot more to a D3D/OpenGL driver than just a shader compiler.
There is also a whole lot more to a GPU than just a bunch of stream processors slapped onto a piece of silicon.
Have a look at this overview for example: http://softwarecommunity.intel.com/articles/eng/1487.htm
An efficient thread dispatcher in particular is crucial to efficient rendering (this seems to be the main problem with the current Radeons: they are nowhere near as granular as the GeForce, even though technically the raw processing power is there). Note also how they kept the fixed-function hardware to a minimum and 'spawn threads' on the stream processors to perform parts of clipping and triangle attribute setup. So it already does quite a bit in 'software'.
 
You think there is nothing more to a GPU and its drivers than the type of machine code that a stream processor processes?
In actual fact, the instruction set is not that relevant. It only matters for the shader compiler. There is a WHOLE lot more to a D3D/OpenGL driver than just a shader compiler.
There is also a whole lot more to a GPU than just a bunch of stream processors slapped onto a piece of silicon.
Oh wow, you have no idea what you're talking about. Oh wow. So are you saying that X3100 is a stripped-down Larrabee with a different shader core and that they're otherwise identical? Because that's so wrong it went past not being funny and looped back to hilarious.

And the scheduler isn't the reason observed performance on R6xx isn't as close to theoretical peaks as it is on G8x; there are fundamental architectural differences that explain this perfectly. One is dependent on VLIW and superscalar scheduling limitations, while the other isn't. If you didn't realize that in the first place, you should really just get out before you dig yourself an even deeper hole.
 
Was writing a response at the same time as Tim so here goes:
You think there is nothing more to a GPU and its drivers than the type of machine code that a stream processor processes?
Considering that Larrabee's ONLY fixed-function unit, according to all public information and all non-public information we are privy to, is a texture sampling unit, I'd argue that unless the X3100 is x86, the amount of sharing you can do is basically zero, except for basic stuff you'd need in ANY DirectX driver no matter the arch, such as optimizing state changes etc... And that's really easy compared to the rest.
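
Just to illustrate the kind of arch-independent work I mean by 'optimizing state changes', here's a purely illustrative sketch (made-up names, nothing to do with any actual driver): remember the last value written for each state and drop redundant changes before they ever reach the hardware.

// Purely illustrative: filter out redundant state changes before they hit the
// hardware. This kind of bookkeeping is the same no matter what the chip
// underneath looks like.
#include <cstdint>
#include <unordered_map>

class StateFilter
{
public:
    // Returns true if the state change actually has to be forwarded.
    bool Set(uint32_t state, uint32_t value)
    {
        auto it = last_.find(state);
        if (it != last_.end() && it->second == value)
            return false;          // same value as last time: drop it
        last_[state] = value;
        return true;               // value changed: pass it on to the hardware
    }

    void Invalidate() { last_.clear(); }  // e.g. after a reset or context switch

private:
    std::unordered_map<uint32_t, uint32_t> last_;
};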
There is also a whole lot more to a GPU than just a bunch of stream processors slapped onto a piece of silicon.
So, Larrabee isn't a GPU? Then clearly you're either simply wrong, or arguing semantics for no good reason. Larrabee doesn't have a rasterizer, it doesn't have a global scheduler, it doesn't have texture filtering units, it doesn't have ROPs, it doesn't have an Input Assembly mechanism, it doesn't... do I need to go on?

No matter what you might have heard, the architectural heritage between X3100 and Larrabee is effectively zero. I'm sure the engineers talked to each other a bit and shared knowledge on a number of things, but it's about as near a tabula rasa as you can get. Can you share a little bit of driver code in such a case? Sure, but it's effectively negligible. So please stop claiming things that 95%+ of the public and 100% of the insiders know are completely false, k?
 
Words are being put into my mouth now; I am not going to defend those. I do however stick to what I already pointed out ("You can think of X3000/X4000 as a 'test-run' for their graphics team. They are now building up experience with dynamically allocated unified shaders, and general DX9/DX10 driver compatibility/optimizations..."). As for writing drivers, that's one of the hairiest things there are. Outside ATi and nVidia, nobody has ever managed to deliver drivers that worked properly on over 95% of all OpenGL/Direct3D software. There are plenty of examples of GPUs that weren't all that bad technically, but never became serious contenders because they simply couldn't run most software properly. The Kyro comes to mind, then there was Volari... and S3 is still struggling. And of course Intel itself. Even ATi was initially plagued with poor drivers, but they realized this in time and got serious about drivers with the Catalyst series.
 