No DirectX 10 drivers for Intel chipsets until Q1 2008

B3D News

Intel, which is looking at re-entering the middle-to-high-end GPU market in one form or another, will probably have to shift up a gear in its driver development schedule if it wants its desktop endeavour to take off: sources close to motherboard makers report, via DigiTimes, that the semiconductor giant plans on releasing its 2007 DX10 chipsets without final drivers available until early 2008.


Read the full news item
 
Interesting to consider that the other players will be clambering over 10.1 by the time we have usable low-end 10.0 hardware...
 
Why does it matter given their current solutions?

It's just standard human numerical fixation.

GMA950 is almost as fast as a Radeon 8500, btw. At least in 3DMark2001. I saw it with my own eyes! Too bad lots of games don't even see its shader capabilities because of all the emulation it does. Neither Morrowind nor Max Payne 2 would do their shader effects for me.

I'm sure that the poor little gimpy X3000 (or whatever they will stick DX10 onto) is ready to rock upcoming DX10 titles, like Crysis. For sure. ;)
 
GMA950 is almost as fast as a Radeon 8500, btw. At least in 3DMark2001. I saw it with my own eyes! Too bad lots of games don't even see its shader capabilities because of all the emulation it does. Neither Morrowind nor Max Payne 2 would do their shader effects for me.

Emulation would not be the problem. The problem is the lack of emulation. The driver simply reports no vertex shader, or even fixed-function vertex processing capability. If Intel implemented this in its drivers, games would make use of the pixel shaders. With a Core 2 Duo processor they could even offload the vertex processing work to the second core.

But wait, we're talking about "we do only as much driver work as needed to pass WHQL" Intel.
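For what it's worth, the offload idea is simple enough to sketch: hand the vertex transform to a worker thread so the submitting thread can keep doing other work. This is purely illustrative Python, nothing resembling a real driver's internals, and every name in it is made up for the example.

```python
# Sketch: vertex processing emulated on a second core, as the post above
# suggests Intel's driver could do. Illustrative only, not a driver API.
from concurrent.futures import ThreadPoolExecutor

def transform_vertex(v, m):
    """Multiply a 4-component vertex by a 4x4 row-major matrix."""
    return tuple(sum(m[row][col] * v[col] for col in range(4)) for row in range(4))

def transform_batch(pool, vertices, matrix):
    # Hand the whole batch to the worker thread; the submitting thread is
    # free to go on preparing pixel-shader state while the transform runs.
    return pool.submit(lambda: [transform_vertex(v, matrix) for v in vertices])

identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
with ThreadPoolExecutor(max_workers=1) as pool:
    future = transform_batch(pool, [(1.0, 2.0, 3.0, 1.0)], identity)
    print(future.result())  # [(1.0, 2.0, 3.0, 1.0)]
```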
 
GMA950 is almost as fast as a Radeon 8500, btw. At least in 3DMark2001. I saw it with my own eyes! Too bad lots of games don't even see its shader capabilities because of all the emulation it does. Neither Morrowind nor Max Payne 2 would do their shader effects for me.

Wow, that's dumb. Does 3dAnalyze let you work around that?
 
With a Core2 Duo processor they could even offload the vertex processing work to the second core.
That's exactly what Apple's OpenGL driver does on their machines using Intel's integrated solutions. If Apple's driver team can do that I don't see how Intel's one can't.
 
That's exactly what Apple's OpenGL driver does on their machines using Intel's integrated solutions. If Apple's driver team can do that I don't see how Intel's one can't.

The question is not whether they can do it. The question is whether they want to do it.
 
Even more so, is "Apple's driver" made by Apple or Intel?
Most of it is likely done by Intel with the rest being Apple's stuff. AFAIK Apple's OpenGL implementation sits on top of a lower level driver usually supplied by the IHV.
 
For anyone who would like benchmarks on the G965: I managed to get hold of an E6600 and a G965 board, along with various configs and drivers. Check it out :): http://forums.vr-zone.com/showthread.php?t=129343&page=18

I really hate to admit it, but the Tech Report's talk about G965 really seems to be right: http://www.techreport.com/reviews/2007q2/intel-g965/index.x?pg=1

The eight shader execution units in the GMA X3000 may sound like a lot, but those execution units are scalar—they can only operate on one pixel component at a time. A typical pixel has four components (red, green, blue, and alpha), so the GMA X3000 can really only process two complete pixels per clock cycle....

We expect, though, that not all of the GMA X3000 runs at 667MHz, as the strange numbers in the "pixels per clock" and "textures per clock" entries in the table above suggest. Intel says the G965 can compute two raster operations per clock maximum, but only for clears. For any other 3D raster op, it's limited to 1.6 pixels per clock. Similarly, it can process depth operations at 4 pixels per clock, but is limited to 3.2 pixels per clock for single, bilinear-filtered textures. What we may be seeing here is the result of different clock domains for the shader processors and the IGP's back end; the GeForce 8800 has a similar arrangement. Whatever the case, these numbers work out to theoretical fill rates of 1067 Mpixels/s and 2133 Mtexel/s. That puts the G965 ahead of the AMD 690G (1600 Mtexels/s) and the GeForce 6150 (950 Mtexels/s) in peak texturing capacity.
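As a sanity check on those numbers: the 667 MHz figure is presumably 2/3 GHz rounded, which is the only way the quoted fill rates come out to exactly 1067 and 2133. A quick back-of-envelope calculation (the rounding is my assumption, not something Intel has confirmed):

```python
# Tech Report's G965 fill-rate figures, reconstructed from the quoted
# per-clock limits. 667 MHz is assumed to really be 2/3 GHz.
core_mhz = 2000 / 3            # ~666.7 MHz
pixels_per_clock = 1.6         # 3D raster ops (non-clear)
texels_per_clock = 3.2         # single bilinear-filtered texture

pixel_fill = core_mhz * pixels_per_clock   # Mpixels/s
texel_fill = core_mhz * texels_per_clock   # Mtexels/s
print(round(pixel_fill), round(texel_fill))  # 1067 2133
```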

After looking at that, take a close look at the fillrate results I measured using 3DMark at the link (I know it's not the best way to show fillrate because it's memory bandwidth limited, blah blah, but I don't think fillrate at a theoretical 100% efficiency is really useful either, and the measurement reflects real life better).

My GMA 950 got 1500 MTexels/s in both the single- and multi-texturing results, very close to the theoretical limit of 400 MHz x 4 pixel pipelines with 1 texture unit per pipeline. And I did that on single-channel DDR2-533; you'll see around the web that different memory configs don't change fillrate results by more than the 2-3% margin of error. Zone Rendering at work here (I've read on 3DCenter that Zone Rendering's usefulness ends at fillrate tests and games that love fillrate).
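Running the same arithmetic on the GMA 950 numbers above, theoretical peak versus the measured 3DMark figure:

```python
# GMA 950: 400 MHz core, 4 pixel pipelines, 1 texture unit per pipeline.
theoretical_mtexels = 400 * 4 * 1   # = 1600 MTexels/s
measured_mtexels = 1500             # 3DMark fillrate result quoted above
efficiency = measured_mtexels / theoretical_mtexels
print(theoretical_mtexels, f"{efficiency:.0%}")  # 1600 94%
```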

In older games, it really is slower. I got 120 fps in Quake 3 with dual-channel DDR2-800 memory and a Core 2 Duo E6600; a quick search of GMA 950 results shows it can get 200 fps at the same settings with just an E6300.
 
It's an interesting little IGP, but it certainly is rather worthless for gaming. If it can't beat 945G, that's really disappointing considering 945G is slower and less capable than a Radeon 8500 or GF3 from 2001. But, it's also not surprising that it's slower than 945G; just look at what has happened with NV & ATI's value-oriented unified GPUs. They can't keep up with the refined previous generation stuff, either.

945G has its own share of issues too, even if it is faster. Compatibility with games isn't exactly great by any stretch.
 
Hopefully Intel can prove us all wrong and deliver something that truly shows what the G965 can do and what Intel can do.

If they fail to do so, I wonder if they'll still be relevant in 2010 when Larrabee comes out.
 
The likelihood of Intel becoming irrelevant because their IGP drivers fail to deliver is...ermm...slim.
 
I would expect that given Intel's goals to play with the big boys on the GPU side that there is going to have to be some improvement/more resources on the driver side as well. Wouldn't we think?
 
If they could offload a significant portion of the emulation to a quad core (instead of just a dual core), it would be even better. As it stands, it appears one could compute faster with paper and a pencil :D
 