My Intel X3100 does DX10 :)

Hm. I also don't see why they should rewrite the entire driver infrastructure for Larrabee.

Perhaps it's so badly designed that it isn't really usable for Intel's goals with it.
 
Scali said:
Words are being put into my mouth now, I am not going to defend those.
You obviously aren't supposed to defend the 'if this is really what you mean, then implicitly you are saying that [...]' claims - the point of those is to make you realize your base claim is false, not that you really mean what is implied.

Anyway, if your claim now is that the Larrabee team will merely look at the G3x/G4x codebase and take some influence from it, then duh, that's quite likely. But what you said clearly was that they would 'use the same codebase'; i.e. it's the differences you should count, not the similarities, which would be overwhelming:
Scali said:
It is, because the same driver codebase will be used for Larrabee
So yeah, you're either still wrong, or you're backpedaling, or it's simply all a misunderstanding because of the way you phrased your original claim, which didn't represent what you actually meant to say. It doesn't really matter; I don't care either way, as I don't really see the problem with any of these three scenarios.

Jawed said:
AFAICT, Tim was responding to the claim that the main reason why R6xx isn't faster is that they are 'not being anywhere near as granular as the GeForce is'. So clearly the subject was efficiency; clearly R6xx has less raw power in several respects, but Tim's point (or at least how I understood it) is merely that R6xx's global scheduler etc. are perfectly fine and that branching isn't the main problem - if we are talking about efficiency, the only real disadvantage R6xx has is VLIW. Of course, maybe Tim meant more than that, in which case I disagree with him... :)
 
Anyway, if your claim now is that the Larrabee team will merely look at the G3x/G4x codebase and take some influence from it, then duh, that's quite likely. But what you said clearly was that they would 'use the same codebase'; i.e. it's the differences you should count, not the similarities, which would be overwhelming:
So yeah, you're either still wrong, or you're backpedaling, or it's simply all a misunderstanding because of the way you phrased your original claim, which didn't represent what you actually meant to say. It doesn't really matter; I don't care either way, as I don't really see the problem with any of these three scenarios.

It seems to me to be a misunderstanding about the importance of drivers and scheduling routines (and possibly about the scope of the term 'codebase').
In the end it's all subjective anyway.
 
By Scali:

Now you're just twisting the facts.
A 780G scores 1183 3DMarks in 3DMark06 (http://arstechnica.com/reviews/hardw...t-review.ars/3).
My X3100 scores 560 3DMarks in 3DMark06... and part of that is probably because of my modest 1.5 GHz T5250 processor (the IGP in that article is a 3100, incorrectly labeled as an X3100 by the article... notice the huge performance difference... also the fact that that one doesn't even do SM3.0, while this one already does SM4.0).
So they're about a factor of 2 apart, nowhere near a factor of 10.
Besides, these are the extremes of the IGP market today... The 780G is the fastest IGP, and the X3100 is the slowest. nVidia is somewhere in between.
If Intel can get 20-40% extra performance out of the X4000 series (which is not that unreasonable considering it gets 10 processing units instead of 8, it will use DDR3, and it is clocked higher), then they'll be more or less in the same ballpark performance-wise.
This rumour includes an Intel slide claiming about 3x the performance of G33: http://www.fudzilla.com/index.php?op...=3828&Itemid=1
So they should land somewhere in 800-1000 3DMarks... A stone's throw away from the 1183 that the 780G gets.

1. Oh, the 780G actually gets 1500 in 3DMark06. It needs a HyperTransport 3-capable CPU for that though, so with a Phenom it can; with an Athlon X2 it'll get 1150-1200.
2. Intel's drivers run 3DMark on the software vertex pipeline. If you manually enable hardware vertex processing, X3100 users will lose a little performance. Probably not much, maybe 10%.
3. To Intel's credit, my G965 scores 660 in 3DMark06. Since the 780G is a desktop part, that's the more comparable number.
4. Unfortunately for Intel, the subpar drivers mean most games don't live up to what the 3DMark06 scores would suggest. We users are all hoping the 15.9 driver is the miracle driver that fixes that.

By Scali (2):
While the 780G obviously is faster, the difference shouldn't be THAT large (especially since the X3500 should also be at least as fast as the X3000).
I think I know exactly what it is: They tested on Vista, and the other test is on XP.

5. Actually, I find it weird, but desktop G965 (GMA X3000) users say Vista is better, while laptop GM965 (GMA X3100) users say XP is better. 15.9 seems to close the gap for some games, fortunately.
6. The X3500 is slightly faster than the X3000. Perhaps 10-15%, perhaps not. A similarly configured X3500 system got a 10% higher 3DMark05 score than my GMA X3000 system.


GMA X4500 will indeed offer improvements, but the amount will depend on the version.

Basic features:
-10 EUs
-More DX10 features
-2x faster Vertex Shader performance
-65nm chipset
-H.264/AVC/MPEG hardware acceleration*
*Only on certain versions
Mobile
GM47 (640 MHz clock, DDR3-1066): 1100 3DMark06
GM45 (533 MHz clock, DDR3-1066): ~950 3DMark06
GS45 (266 MHz clock, DDR3-1066): ?
GL40 (333 MHz clock, DDR2/3-667): ?

Desktop*
G45 (800 MHz clock, DDR3-1333): ? (Here I'm going to make an estimate from what I have observed: the score will be at least 1300)
G43 (800 MHz clock, DDR3-1066): ?
There are more versions, but I don't know enough and I think the differences will be mostly trivial.
*Intel claims 1.7x the 3DMark06 score of G35 for G45, but I don't believe it. 1.7x G35 is 1100, which is similar to GM47. Unless you want to tell me the faster core clock and memory are doing absolutely nothing. I believe they are talking about the "general" GMA X4500 family.
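To spell out the arithmetic behind that footnote (taking my ~1100 GM47 estimate and Intel's 1.7x claim at face value, so these are rough figures, not measurements):

1.7 × G35 ≈ 1100, so G35 ≈ 1100 / 1.7 ≈ 650 3DMarks.
My "at least 1300" estimate for G45 then amounts to expecting roughly 2x G35 (1300 / 650 ≈ 2.0), i.e. the higher core clock and DDR3-1333 buying a good chunk on top of Intel's own 1.7x figure.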

Have a nice day and happy flaming!!
 
Well, an X2 would be more comparable to my 1.5 GHz C2D anyway. I bet an X3100 or X3500 would score better than the ~500/600ish above with a fast quad-core. Either way it's nowhere near the claimed "order of magnitude", unless you want to be lame and claim that you were speaking in binary or something...
I'll have to check whether it still uses software vertex processing in 3DMark06. It might have changed; it seems to have changed for quite a few games anyway, making them play much better now.

As for the X3000... Intel does treat it differently in its drivers. They will not enable DX10 as far as I know, but it's not clear whether this is because it is actually an SM3.0 part, or if they just have decided not to enable it. All I know is that initially the Vista drivers for my X3100 were completely appalling. Even a 'simple' game like Portal was too slow. Now it can play Portal pretty well. I've played through a bit of HalfLife2 at the highest settings at 1280x800, and it was actually quite enjoyable.
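For what it's worth, the most direct way to see what a given driver actually exposes is just to ask it. A minimal sketch (plain Direct3D 10 device creation, nothing Intel-specific; error handling and adapter enumeration omitted) would look something like this:

```cpp
// Minimal check: does the installed driver expose a hardware D3D10 device?
// Sketch only -- no adapter enumeration, no caps inspection, no real error handling.
#include <d3d10.h>
#include <cstdio>
#pragma comment(lib, "d3d10.lib")

int main() {
    ID3D10Device* dev = nullptr;
    HRESULT hr = D3D10CreateDevice(nullptr,                     // default adapter
                                   D3D10_DRIVER_TYPE_HARDWARE,  // hardware only, no reference rasterizer
                                   nullptr, 0, D3D10_SDK_VERSION, &dev);
    if (SUCCEEDED(hr)) {
        printf("Driver exposes a hardware DX10 device.\n");
        dev->Release();
    } else {
        printf("No hardware DX10 device (hr = 0x%08lx).\n", static_cast<unsigned long>(hr));
    }
    return 0;
}
```

If that fails on an X3000 but succeeds on an X3100 with the same driver release, that would at least tell you the difference is in what the driver exposes, not in what the application asks for.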
 
Well, there's more to Intel IGPs than just their awful performance. Very frequently, they simply do not work right for games. The drivers are quite poor. These IGPs exist mainly to run Vista Aero, IMO.

A small sampling of Intel IGP "greatness"
http://techreport.com/articles.x/14261/8
http://techreport.com/articles.x/14261/9

I've never personally used a X3xxx series part but I've messed with a few GMA 950 IGPs and found quite a few compatibility issues. Morrowind doesn't detect pixel shader support, and Max Payne 2 doesn't see DX9 support. That was 2 out of the 3 games I tried. Guild Wars worked ok, but I'm not sure if it saw DX9 caps, and it was slower than a Radeon 8500. These are games from years before GMA 950 arrived and yet the drivers don't work right.
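For context on those detection failures: games of that era generally just read the D3D9 caps the driver reports and pick a code path from that. A rough sketch of that kind of check (illustrative only; this is not the actual detection code from Morrowind or Max Payne 2):

```cpp
// Roughly how a DX8/DX9-era game decides whether pixel shaders are available.
// Illustrative sketch only -- real games add fallbacks and vendor-specific workarounds.
#include <d3d9.h>
#include <cstdio>
#pragma comment(lib, "d3d9.lib")

int main() {
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    D3DCAPS9 caps = {};
    d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

    // If the driver reports a pixel shader version lower than the game expects,
    // the shader/DX9 code path never lights up -- consistent with the failures above.
    printf("Driver reports PS %lu.%lu\n",
           D3DSHADER_VERSION_MAJOR(caps.PixelShaderVersion),
           D3DSHADER_VERSION_MINOR(caps.PixelShaderVersion));
    if (caps.PixelShaderVersion < D3DPS_VERSION(2, 0))
        printf("No SM2.0 pixel shader support reported.\n");

    d3d->Release();
    return 0;
}
```

So whether the fault lies in the hardware or the driver, what the game sees is whatever the driver chooses to report in those caps.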

I think it's best to stick to AMD right now if you want an IGP that works and "performs" somewhat usefully.
 
Well, the whole point of this thread is that the drivers are slowly improving, giving more performance and better compatibility with games.
We all know what a poor track record Intel has, but that was then and this is now, so let's stop reiterating that.

The AMD IGP is nice, but it is only available for AMD motherboards.
If I were to go with an IGP, I'd go with an nVidia-powered Core2 system. Not that I'd ever use a desktop system with an IGP though.
 
Watch out: NV's IGPs for Intel only seem to have single-channel DRAM interfaces....
 
We all know what a poor track record Intel has, but that was then and this is now, so let's stop reiterating that.
Why? No offense, but what indications have they given that they're changing? Also, if I recall, ATi wore that "bad driver" albatross for quite a while after they got their driver act together....
 
Perhaps you missed the driver that I linked to at the start of this topic?

That driver is as poor as all other Intel drivers, given the fact that it doesn't manage to run all titles that are out there (run as in start without crashing, not run with adequate speed or anything). Of course, the G35 I have laying around here could be cursed or something.
 
That driver is as poor as all other Intel drivers, given the fact that it doesn't manage to run all titles that are out there (run as in start without crashing, not run with adequate speed or anything). Of course, the G35 I have laying around here could be cursed or something.

Well, I guess that rules out Nvidia's and ATI's drivers as well. That's a pretty lofty standard, not one that I think anyone will ever live up to.
 
Improvements are generally measured by what went before.
Sure, these drivers aren't perfect, but then again, no drivers are. They do allow me to run more games than before, and get better performance and fewer rendering bugs in a lot of games, so they are indeed an improvement. And of course the DX10 support is in itself an improvement: you now have access to features you didn't have before.

I don't have too many games, but these are the ones I tested and they all worked:
- Half Life 2 (colours seemed slightly off in some cases, e.g. some underwater objects)
- Half Life 2 Episode One
- Half Life 2 Episode Two
- Portal
- Far Cry
- Crysis (DX9 worked fine, DX10 worked, but had some rendering issues with certain trees, some white speckle)
- BioShock demo (DX9 and DX10 both appeared to render correctly)

And of course 3DMark03, 3DMark05 and 3DMark06, which all worked, and they no longer force software vertex processing, it seems (at least, the box is no longer ticked).
Doom 3 didn't seem to want to start though, but that's the only OpenGL application I tried.

I also tried to run all the DX10 examples from the SDK. There were only 2 or 3 that didn't work properly. Not bad for a first beta of a DX10 driver. With a bit of luck, the final version will run them all.
 
Watch out: NV's IGPs for Intel only seem to have single-channel DRAM interfaces....
The current ones (which are clearly aimed at taking market share from VIA and SiS, which also only have single-channel memory controllers), yes, but:
MCP73: Single-Channel DDR2 [September-October 2007]
MCP7A: Dual-Channel DDR2/DDR3 [June-July 2008]
MCP7C: Single-Channel DDR2/DDR3 [August-October 2008]
 
Scali, my mate is getting a laptop with an X3100. Do you have any game benchmarks?
Just a rough indication of performance will suffice.
Thanks...
 
Well, Half-Life 2 at 1280x800 with the highest detail settings runs at anywhere from 10 fps (big outdoor scenes) to 50 fps (small indoor scenes). In most areas it will do 20-30 fps, which is good enough for me to aim reasonably accurately, so it plays quite well. I don't think it played much better back when I originally played through it on my Radeon 9600 Pro (although I had 4xAA on at the time).
And as mentioned elsewhere, 3DMark05 does 910 3DMarks and 3DMark06 does 560 3DMarks.
If you want to game, you do NOT want an X3100 though :p
Apart from the fact that it's by far the slowest solution on the market, there is also the problem with driver compatibility. So pretty much every other IGP will give you a better gaming experience.
Half-Life 2 is playable, but it's a relatively light game, so I guess it's more or less a best-case scenario.
 
Not to mention Intel IGPs get absolutely murdered in video playback, i.e. H.264/VC-1 hardware acceleration (even with a Core 2 Duo).

Seriously though, I just can't believe Intel can get away with selling crap like this for so many years. Sorry for the rant :devilish:
 
Seriously though, I just can't believe Intel can get away with selling crap like this for so many years. Sorry for the rant :devilish:

Quite simple really: they're EXTREMELY cheap chips, excellent for a HUGE market of office PCs and small servers etc., which don't need gaming performance or video playback or anything. As long as you can use Office applications, read your email and browse the web, it's good enough.

Having said that, I still think it's a good sign that Intel is pushing forward, embracing DX10 technology and working on video acceleration. Technically they could have stopped at the DX9 SM2.0 level, because that would be good enough for Windows Vista Aero and would serve the office/OEM market for the next few years.
But instead they opted for a new design with unified shaders in a scalar stream processor arrangement much like the nVidia G80, and used these shaders not only for 3D graphics, but also for video acceleration.
In a way I think it's a fascinating design... it's cutting edge and minimalistic at the same time, reusing a lot of resources for shading, rasterizing and video acceleration.
Also, I don't think we've seen the full potential of this architecture yet. It started off with very immature drivers, basically using it in GMA950 emulation mode, with no hardware vertex processing, no video acceleration, and no SM3.0 support, let alone DX10. The drivers have already improved quite a bit since the earliest incarnations, and I think they still have a ways to go.

Since the X3000/X4000 architecture should be around until Larrabee, that gives Intel a lot of time to fix their drivers and get the maximum efficiency out of the current architecture. Let's hope they make use of that opportunity. I would love to see how far they can take this minimalist architecture. Also, a stable driver base for X3000/X4000 would make me more confident that Larrabee is actually going to be able to run software properly when it arrives. I've always been a fan of 'quirky' 3D accelerators, and over the years I've had some early Matrox cards, PowerVR, Kyro II, and the Radeon 8500... but of all those, only the Radeon 8500 eventually lived up to its full potential. The rest were let down by poor driver support, which often made games not run, not perform adequately, or not render properly, making them unplayable.
ATi and nVidia have had mature drivers for a while now, at last, and you really don't have to worry about whether games will run properly... it would be nice if that other big GPU manufacturer (that would be Intel) would finally join them.
 