Intel G965 to support SM4.0?

It would be nice to have Direct3D 10 support, though, and it partially makes sense since Intel typically supports the base Shader Models rather than the enhanced ones, and they sometimes skip a few generations.

They went from a DirectX 7 IGP to a DirectX 9 IGP, skipping Pixel Shader 1.1/1.4 entirely.

So it is quite possible that they will skip Shader Model 3.0 entirely as well. Then again, it would be pretty damn early, as not even ATI or Nvidia have released Shader Model 4.0 GPUs yet.
 
geo said:

So how far off their rocker is the article when they say the following:

One thing that is important to remember about the GMA X3000 family is that it is a completely programmable pipeline architecture, meaning Intel only needs to update the microcode to add support for features like SM 4.0. This opens the door to a few possibilities for where Intel can go with the architecture. For example, since the Santa Rosa notebook platform is based on G965 but will not launch until next year, Intel may take the opportunity to add better features to the core.

Methinks someone is mistaking the marketing buzzwords for something more than they are.
 
geo said:
http://www.dailytech.com/article.aspx?newsid=2837

SM4 with "microcode update".

Hmm. The article says 667MHz core, but the diagram suggests that's just the memory speed, and the core is 250MHz initially, 400MHz later...

Something else:

This time around the Vertex Shader 3.0 units are hardware based, instead of the software-based shaders found in the previous GMA 900/950 and Extreme Graphics cores. A hardware transform and lighting engine has also been integrated and is a significant improvement over the previous software T&L engine.

Meaning the author not only misinterpreted the clearly listed frequencies but also misread other things. Why on God's green earth would someone add a hypothetical T&L engine alongside Vertex Shaders? Such a move would be idiotic and redundant even with VS1.0-compliant units.
 
BRiT said:
So how far off their rocker is the article when they say the following:

Methinks someone is mistaking the marketing buzzwords for something more than they are.

I've no idea what the real capabilities of that IGP are, yet it could very well be SM4.0 compliant, with the beyond-SM3.0 features getting unlocked further down the road. I don't think it would be too hard, in a unified shader core, to have geometry shading capabilities besides pixel and vertex shading, for instance. Whether the GS capabilities then stink or not is less relevant, as long as they're there and X compliance can be claimed.

It's an IGP, for heaven's sake; if any vendor released a D3D10-compliant IGP today, the compliance would be there on paper yet the performance would just stink. What I mean is: who on earth cares anyway? Who in their right mind expects to play even UT2007 on an IGP? And yes, I'd call that one of the real DX9.0 games.
 
Ailuros said:
It's an IGP, for heaven's sake; if any vendor released a D3D10-compliant IGP today, the compliance would be there on paper yet the performance would just stink. What I mean is: who on earth cares anyway? Who in their right mind expects to play even UT2007 on an IGP? And yes, I'd call that one of the real DX9.0 games.

All the developers out there who were forced to build D3D7 tech-level paths for their games to make them run on IGPs. If a GPU/IGP is merely slow but fully featured, you can scale down the resolution; if it is missing features, you have to write additional code.

If we have one common feature base, game developers can make the step to the next-generation tech level much faster.
 
geo said:
Hmm. The article says 667MHz core, but the diagram suggests that's just the memory speed, and the core is 250MHz initially, 400MHz later...
I don't see that. If we go by the blue table then the speed is "~400MHz".

Also, it's clear from that table that it's got full SM4.0 support. The table lists the differences between the GMA 950 and GMA X3000, and it specifically points out that while the GMA 950 has SM2.0 support, the vertex elements of that were software based, whereas the GMA X3000 has full hardware SM4.0 support.
 
Dave Baumann said:
I don't see that. If we go by the blue table then the speed is "~400MHz".

Also, it's clear from that table that it's got full SM4.0 support. The table lists the differences between the GMA 950 and GMA X3000, and it specifically points out that while the GMA 950 has SM2.0 support, the vertex elements of that were software based, whereas the GMA X3000 has full hardware SM4.0 support.

Yeah, I misinterpreted what "Calistoga" is, based on "enhancements" in the chart, which translated in my brain to the same core tweaked left vs. right. A reread shows the left is actually 945, so my bad on the 250MHz. So ~400MHz from the chart.

The article does say 667 tho:

Lastly the GMA X3000 graphics core will be clocked up to 667 MHz -- quite a bit higher than current budget ATI and NVIDIA offerings.

Having gone back this morning to look, I see that down in the comments, Kubicki claims that the 667MHz is NOT a mistake, chart or no chart.

The slide is for GM965 -- for mobile devices. The desktop chipset is G965, which the article is referring to when it says up to 667MHz.
 
To me it's fairly clear that the 667MHz figure relates to the FSB and/or memory speeds.
 
I guess I have a rather hard time believing that Intel would do more than support DX9.L, considering that that's all the desktop uses.
 
Chalnoth said:
I guess I have a rather hard time believing that Intel would do more than support DX9.L, considering that that's all the desktop uses.

Right, but I have heard that Microsoft will award the highest Vista logos only to D3D10 solutions. Additionally, there are long-term plans to upgrade the desktop to D3D10.
 
Chalnoth said:
I guess I have a rather hard time believing that Intel would do more than support DX9.L, considering that that's all the desktop uses.

As Demirug points out, there don't seem to be any significant shortcuts to gaining D3D10 compliance this time around. Considering the hardware the requirements seem to need, I'm willing to believe that Intel truly has reached D3D10 compliance after all; what I cannot believe for the time being is that the result can be of any sensible use when it comes to future games.

Point taken about the lowest common denominator from a developer's perspective, but that still doesn't mean those IGPs will be any good for gaming.
 
caboosemoose said:
Anyone have hints vis-à-vis the number of functional units, i.e. shader units, texture units, etc.?

It's most likely a unified shader core with an unknown number of general-purpose ALUs. As for TMUs, I personally wouldn't expect more than 4.
 
Chalnoth said:
I guess I have a rather hard time believing that Intel would do more than support DX9.L, considering that that's all the desktop uses.

By that reasoning, Intel wouldn't have added any 3D acceleration until now.
I know people still on their Celeron 633 and i810 who play The Sims alongside using the web, Word, a digital camera, and a Chinese MP3 player. Certainly loads of people will want to play The Sims 4 on their "old" X3000 PC or laptop in 2011; Intel does what's necessary for the game to at least run rather than display a dumb error message.
 
Blazkowicz_ said:
Intel does what's necessary for the game to at least run rather than display a dumb error message.
Right, which is what DX9.L support would provide. It'll be some years yet before DX10 will have any reasonable amount of support.
 
These days games require SM2.0, and the 9700 Pro came out less than four years ago.

Still, you're right that legacy hardware will be supported for longer this time.
 
Chalnoth said:
Right, which is what DX9.L support would provide. It'll be some years yet before DX10 will have any reasonable amount of support.

Yes Chalnoth, but apparently Intel wants, for obvious reasons, to have the D3D10 compliance seal for its products, and this time it doesn't sound as easy as it was with DX9.0. With the latter they got away without any sort of geometry unit and just PS2.0 support (hence the nice joke from darkblu above).

Frankly, it makes more sense for Intel to have skipped SM3.0 entirely, coming from that previous castrated "SM2.0" compliance, and to go for SM4.0 straight away. If the hardware requirements of the latter are too high, it could easily mean fewer ALUs than an SM3.0 design would allow, so as not to overblow the transistor budget.
 