The Official IGP Rumours & Speculation Thread

Well, in a unified DX10 world, I'd think a unit is a unit. Tho the scheduling might indeed be interesting. I think the traditional IGP wouldn't be all that much help (10% to a high-end discrete? But what about a low-end discrete helper, hmm? Maybe turn a 7300 into a 7600? I dunno; do you?), but what if you have a Fusion (or Larrabee) world where you've got 8 or 16 (or 32 or 64. . pick a number) beefy cpu/gpu cores and now you slap a specialized gpu with a store of high-bw memory on it? What happens? Can they cooperate usefully?

I don't know. I'd love to know, and we've been asking in various corners, but no one's willing to say just yet.
 
Yeah, you got my main point (besides my usual and highly agreeable skepticism), that IGPs are relatively puny compared to discrete cards and so it seems like a lot of effort for little result, and I'm not sure the balance will change much in the near future. Who knows, SLI may be relatively simply adapted to allow for effective (and hidden to the developer) IGP-discrete communication, and it may let one (I'll be generous--OK, for real) double the power of something like a 7300. And Fusion/Larrabee can definitely be game-changers if they're significantly more powerful than current IGPs (rather than just allowing for significantly simpler and so cheaper MBs).

The idea's still cool, though, so keep working those angles.
 
:LOL:

How dare you, Inkster, how dare you!!

Hey, I can't keep up with everything at CeBIT. :p
Besides, most websites seem to have somehow "forgotten" the Intel versions, while spotlighting the MCP68 / 7050+630a for the AMD Socket AM2 platform.

Also, Xbit's piece looks confusing.
They were saying this was a DX10-ready IGP, although I knew it wasn't true... precisely because of B3D's earlier timetable (Q1'08).
So you see, I did cross-reference you, Arun. ;)


edit
Arun, I've been meaning to ask you.
What's with those ECS and Foxconn boards?
Intel Core 2 based motherboards, but not compatible with Core 2's standard cooler (in fact, the mounting holes resemble an AM2 socket's without the plastic frame...)?!?
 
Mar. 7:

"I'm in San Francisco at the Game Developers Conference 2007 and had an opportunity today to speak privately with Intel's PR Managers regarding the G965 and view demonstrations. When the specifications were released last year, they talked about Shader Model 3.0 support and most importantly Hardware Texture & Lighting (finally!), but the performance didn't bear those specifications out.
...
The G965 is essentially a completely new architecture, and when the driver team set out to develop it, they were interested mainly in Vista functionality and in improved video processing.
...
The G965 has up to this point been running software vertex rendering and texture and lighting. I was able to experience an early engineering beta driver that exposed these features in hardware, and the performance was impressive (especially considering it's an integrated part with the word "Intel" attached to it).

I had the opportunity to play Half-Life 2: Episode One using Intel's [new] G965 driver, and at 1024x768 with most settings on high, no HDR or AA, the game was surprisingly smooth and very playable. Given the nature of the driver, this bodes extremely well for the future.

Intel will be posting a beta driver within the next few months, releasing a finalized driver later this year."



http://www.notebookreview.com/default.asp?newsID=3555&article=Intel+G965
 
a bit off topic:

Intel stated that their Nehalem CPUs would have "optional integrated graphics"; rumour has it that this will take the form of an MCM package.
 
Nehalem seems to be a repeat of Intel's strategy for undermining AMD's initiatives.

Even though AMD stated their dual-core chips were "native" dual cores and that their design was planned to be so from the start, Intel stuck two die on a package and sold millions of the dual core chips before AMD's solution was released.

Fusion vs. Nehalem looks to be a similar contest. AMD's supposed head start in design and methodology can be undermined by Intel's package engineers.

If initial Fusion products turn out to be only as integrated internally as AMD's current multicore devices, then Fusion's supposed advantage over an MCM GPU+CPU will be the same advantage AMD's much later dual-cores had over Intel's inelegant, earlier-released products.

In other words, the gains will likely be mixed, and Intel would have marginalized Fusion through manufacturing superiority.
Given the much lower peak performance any kind of integrated solution will have, the gains may even be nonexistent, and Intel may release Nehalem before AMD releases Fusion, if Intel executes well and AMD doesn't.

Of course, Intel is no ATI when it comes to graphics. Then again, in the target markets, it's not like Intel can't get close enough.
Unless Fusion can completely embarrass a GPU-equipped Nehalem, Intel wins.

Did AMD ever really have a head start? Was the purchase of ATI made because AMD's execs suspected Intel's future platforms would allow the possibility of integration, and that AMD would not be able to design a solution itself?

Is all this just AMD treading water?
 
Interesting, I missed that announcement.

If Nehalem goes for an MCM with an on-package IGP, then Intel's Fusion competitor will require little software change, compared to an integrated motherboard.

AMD seems to have somewhat more planned for Fusion on the software front, perhaps for more general use of Fusion's resources.

Intel could probably do the same, with Nehalem's second iteration, but even the initial products will gut a big fraction of the market Fusion is trying to subsume. Fusion will have little to no advantage over an Intel platform targeting the low-end segment.
 
This is the kind of stuff I know nothing about, see, but: does a high-speed bus between two chips in the same package require less power (or cost?) for a given amount of bandwidth? Or is it really fundamentally identical? I'm sure it's fairly easy to guess why I'm asking that, heh!
 
Stupid question!

Is it bandwidth or latency that's important WRT IGP-CPU interconnects, considering the IGP will almost certainly be "budget" class?
 
"One using Intel's [new] G965 driver, and at 1024x768 with most settings on high, no HDR or AA, the game was surprisingly smooth and very playable. Given the nature of the driver, " is that supposed to be good ???

Heres what i always wondered, given all of intell's resources how come they are so crap at designing a graphics chip ?
 
This is the kind of stuff I know nothing about, see, but: does a high-speed bus between two chips in the same package require less power (or cost?) for a given amount of bandwidth? Or is it really fundamentally identical? I'm sure it's fairly easy to guess why I'm asking that, heh!

Not sure about cost: you only need one package, but it's a more complex one.
Less power, though, because you don't have to toggle as much external capacitance. In its simplest form, P = 0.5 * n * C * f * V^2.

Let's fill in some random numbers:
C = 5 pF (per line)
f = 1 GHz
V = 1.8 V
n = 32 lines
-> P = 0.259 W

I tried to Google a reasonable value for C. This Intel app note lists a trace load of 3.0 pF/inch.

So in that case, we're talking about a difference of a few hundred mW for a very busy 32-bit bus, just because of trace length.
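
To make that concrete, here's a quick Python version of the same back-of-the-envelope math. Only the 3.0 pF/inch trace load comes from the app note; the trace lengths (and thus the per-line capacitances) are my own illustrative guesses:

# Dynamic switching power of a bus: P = 0.5 * n * C * f * V^2
# (assumes every line toggles at full rate, i.e. a very busy bus)
def bus_power_watts(n_lines, c_per_line_farads, freq_hz, vdd_volts):
    return 0.5 * n_lines * c_per_line_farads * freq_hz * vdd_volts ** 2

PF = 1e-12
N, F, V = 32, 1e9, 1.8

# Guessed trace lengths at 3.0 pF/inch: ~1.5" on a motherboard,
# ~0.2" across a package substrate. Not measured values.
c_board = 1.5 * 3.0 * PF      # 4.5 pF per line
c_package = 0.2 * 3.0 * PF    # 0.6 pF per line

print(f"off-package: {bus_power_watts(N, c_board, F, V):.3f} W")    # ~0.233 W
print(f"on-package:  {bus_power_watts(N, c_package, F, V):.3f} W")  # ~0.031 W

Which lines up with the "few hundred mW" figure, for trace length alone.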
 
"One using Intel's [new] G965 driver, and at 1024x768 with most settings on high, no HDR or AA, the game was surprisingly smooth and very playable. Given the nature of the driver, " is that supposed to be good ???

Heres what i always wondered, given all of intell's resources how come they are so crap at designing a graphics chip ?

Considering the transistor and heat budgets, are they really that crappy, or is that just the impression people have of them?
I remember reading that the IGP portion of those northbridges is extremely small, and that the chip does quite well considering that.
 
So in that case, we're talking about a difference of a few hundred mW for a very busy 32-bit bus, just because of trace length.
Interesting, thanks! As for cost, what I was really thinking of is whether it'd be cheaper for a VERY fast bus (>= 100GB/s), since that'd likely open up possibilities in terms of CPU-GPU collaboration that could be nearly as good as what a single-chip approach could deliver. And I'm not thinking specifically about Nehalem here, of course.

Considering that formula, is it fair to say that much of the power advantage of single-chip integration can be achieved with a package-based solution? Or are there still other important factors?
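
Plugging guesses into that same formula to see what a >= 100GB/s bus might look like (the width, clock, and voltage here are completely made up by me, so take it for what it's worth):

# How much switching power would ~100 GB/s of raw bandwidth burn,
# using P = 0.5 * n * C * f * V^2? All parameters below are guesses.
PF = 1e-12
target_bytes_per_s = 100e9
f, v = 2e9, 1.2                        # assumed 2 GHz signalling at 1.2 V
n = int(target_bytes_per_s * 8 / f)    # 400 lines at 2 Gbit/s each

for label, c_line in [("board trace, 4.5 pF/line", 4.5 * PF),
                      ("package trace, 0.6 pF/line", 0.6 * PF)]:
    p = 0.5 * n * c_line * f * v ** 2
    print(f"{label}: {p:.1f} W")       # ~2.6 W vs ~0.3 W

At that kind of width and clock, the capacitance saving looks like watts rather than milliwatts, which I suppose is one reason such a bus seems far more plausible on-package than across a motherboard.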
 
I think AMD is currently doing its best to accelerate Fusion, increasing its performance while decreasing power consumption and costs. AMD knows that 'IF' Nehalem really beats K10, its future will depend on Fusion.
 
Interesting, thanks! As for cost, what I was really thinking of is whether it'd be cheaper for a VERY fast bus (>= 100GB/s), since that'd likely open up possibilities in terms of CPU-GPU collaboration that could be nearly as good as what a single-chip approach could deliver. And I'm not thinking specifically about Nehalem here, of course.

As long as the package isn't too complex (say, only two chips, and not a massive amount of IO going off-package), the situation would be similar to how Intel used MCMs to keep ahead of AMD in the core race without putting too much design effort into a "native" design too soon.

It's not directly applicable, though, because that solution didn't have a dedicated on-package bus; the two chips just hang off the same front-side bus.

Naive multicore, where each core could just as easily be on its own, can do well with an MCM solution if it doesn't involve too many die (dies, dice?).

In the case of CPU/GPU cooperation, it sounds like most near-term plans keep the GPU highly separate, with Fusion perhaps hanging the GPU off the internal crossbar.

At that level of separation, current paradigms already have a lot of latency compensation built-in. An MCM strikes me as being good enough, assuming something can be done about the hit it incurs on yields.

Considering the yield curve on overly large multicore single-die solutions, however, an MCM sounds like a good initial step. It seems worth at least one process node transition for a given doubling of the core count.

The equation changes if the performance level and the number of separate chips involved goes up. The closer one gets to a POWER4/5 MCM, the more costly and niche the product becomes.
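
To put a toy number on that yield argument, here's a classic Poisson yield model in Python; the defect density and die area are made-up values, not data for any real product:

import math

# Poisson yield model: Y = exp(-D * A). D and A are invented numbers.
D = 0.5          # defects per cm^2 (assumed)
A_big = 4.0      # cm^2 for the big monolithic die (assumed)

y_big = math.exp(-D * A_big)         # one large die
y_half = math.exp(-D * A_big / 2)    # one half-size die

print(f"monolithic die yield: {y_big:.1%}")   # ~13.5%
print(f"half-size die yield:  {y_half:.1%}")  # ~36.8%

# Two half-size die paired blindly would yield y_half**2 (~13.5%), so no win.
# But die are tested before packaging, so an MCM pairs known-good die and
# usable silicon tracks y_half, modulo packaging and test losses.

That known-good-die testing step is the whole trick, and it's also why the equation turns against you as the chip count (and packaging complexity) climbs toward POWER4/5 territory.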
 