Intel? Nvidia? Larrabee?

I don't know what to make of that, frankly. Though there are some interesting tidbits in there too.

After AMD bought ATI, one needs to be pretty careful about knee-jerk "never!" reactions to an Intel/Nvidia alliance of some fashion.

I'm just not sure what Intel would be willing to give Nvidia that would make up in revenue *at least as much* as the lost sales of Nvidia parts. Jen-Hsun doesn't make bad deals. He's going to get a good deal for his company or not play.
 
I'm going to put the likelihood of this somewhere between "Ron Paul becomes President" and "Microsoft is defeated on the desktop by Linux." If it's an add-in board, wouldn't this cannibalize NVIDIA's profits?
 
I can't speak to the likelihood of Intel buying Nvidia, or the formation of a strong partnership.

It would be helpful for Intel, however, if Nvidia's tech for ROPs and sampling hardware migrated to Larrabee's die space.

Any cachet AMD could have gained from having an ATI GPU in Fusion would be negated if Intel were to decide to put Larrabee Lite in the same package or die with a future Nehalem chip.

The next fun thing we can wonder about is what Intel has calculated as being sufficient for a 2009 discrete GPU.

The bandwidth numbers in that story are almost within reach of today's cards, though a lot is likely handled by the huge internal bus and the shared L2.

The supposed teraflop of fully programmable shading power brings up another point of speculation.

A 16-element SIMD unit per core can supply 16 flops per cycle if it stays within the x86 SSE convention of two-operand packed operations.

On the other hand, if Intel were to allow some kind of MADD, it would require a three-operand instruction.
Both AMD and Nvidia count a MADD as two flops, so Intel could try the same inflated flop counting.

Without a MADD instruction, at 16 cores with 16 elements per unit, we can expect 256 flops per cycle.
If there is a MADD and Intel counts it as 2 flops, then it's 512.

To hit one TF, Larrabee would have to clock just under 4 GHz with no MADD, and just under 2 GHz with MADD support.
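
For the curious, here's that arithmetic as a quick back-of-the-envelope script (a sketch only, assuming the 16-core, 16-wide configuration and the flop-counting conventions above; none of these are confirmed specs):

```python
# Speculative estimate of the clock Larrabee would need to reach 1 TFLOP,
# under the assumptions discussed above.
TARGET_FLOPS = 1e12    # rumored 1 TFLOP target
CORES = 16             # assumed core count
SIMD_WIDTH = 16        # assumed 16-element vector unit per core

for flops_per_lane, label in [(1, "no MADD"), (2, "MADD counted as 2 flops")]:
    flops_per_cycle = CORES * SIMD_WIDTH * flops_per_lane
    clock_ghz = TARGET_FLOPS / flops_per_cycle / 1e9
    print(f"{label}: {flops_per_cycle} flops/cycle -> {clock_ghz:.2f} GHz required")

# Output:
# no MADD: 256 flops/cycle -> 3.91 GHz required
# MADD counted as 2 flops: 512 flops/cycle -> 1.95 GHz required
```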

Other alternatives to MADD support would be to have Larrabee clock high or have more than 16 cores.
A chip that size at 4 GHz sounds crazy, and 32 cores may be too big. I don't see a reference for the estimated size of a single SIMD core.

There is also the unknown of what subset of x86 is supported. I don't see the utility of having too much support for x87 or standard SSE, but it is possible that a few flops might come from legacy support hardware if it is not rolled into the vector unit.

All that aside, Larrabee had better be significantly more efficient than the GPUs that will be around by then, because 2009 is a long way off and GPUs won't stagnate in the meantime.
R600 is close to halfway there, and Nvidia's next chip may hit nearly a TFlop this year.
 
My own opinion is that you'll see NVIDIA and Intel work closer and closer over the next 4-5 years, with an eventual merger/buyout coming after several years of successful and progressive cooperation...

Then again, I never saw the AMD/ATI deal coming.....so who knows.... lol
 
I'm just wondering: if Nvidia gives Intel something really important for Larrabee, would Intel consider giving Nvidia an x86 license? This deal is really weird.
 
Interesting, wonder how much Nvidia is being compensated for the help? This could be Nvidia's way of ceding the low end market while still making it profitable to do so.

/shrug
 
I missed the 1.7-2.5 GHz clock range given in the article.

If the dominant fp unit is the single vector unit per core, then 1 TF at that clock range would indicate some kind of MADD capability.
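
Running the same speculative 16-core, 16-wide numbers against that clock range (assumptions, not disclosed specs) shows why:

```python
# Peak flops across the quoted 1.7-2.5 GHz range, same speculative config.
CORES, SIMD_WIDTH = 16, 16

for clock_ghz in (1.7, 2.5):
    per_cycle = CORES * SIMD_WIDTH              # 256 flops/cycle without MADD
    no_madd_tf = per_cycle * clock_ghz / 1000   # flops/cycle * GHz -> GFLOPs, /1000 -> TF
    madd_tf = 2 * no_madd_tf                    # MADD counted as 2 flops per lane
    print(f"{clock_ghz} GHz: {no_madd_tf:.2f} TF without MADD, {madd_tf:.2f} TF with MADD")

# Output:
# 1.7 GHz: 0.44 TF without MADD, 0.87 TF with MADD
# 2.5 GHz: 0.64 TF without MADD, 1.28 TF with MADD
```

Only the MADD case at the top of that range clears 1 TF.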

That sounds like a funky x86 extension.
 
Interesting, wonder how much Nvidia is being compensated for the help? This could be Nvidia's way of ceding the low end market while still making it profitable to do so.

/shrug

Well, even for a new IGP part, Arun and I estimated they likely make a gross margin in the neighborhood of $10 each. Would Intel pay them that? And I'm not sure from that presentation that it really looks like a low-end part that Intel is describing there. . . gross margins go up as you move up the food chain.
 
My own opinion is that you'll see NVIDIA and Intel work closer and closer over the next 4-5 years, with an eventual merger/buyout coming after several years of successful and progressive cooperation...

I'll second that. Neither wants AMD to be [too] successful with Fusion, so it is almost a given IMHO.
 
I'll second that. Neither wants AMD to be [too] successful with Fusion, so it is almost a given IMHO.

Well, there's some reason to think that in fact AMD buying ATI was at least in part a reaction to where Intel was going. . . but then this doesn't mean that the action-reaction cycle can't accelerate as the chess moves pile up.
 
Jon Stokes over at Ars has posted his thoughts on Nvidia's future direction, based on recent investor talks by Mike Hara. Nvidia's competitive position vis-à-vis AMD and Intel gets mentioned along with some pooh-poohing (margins) of Fusion and the veiled threat of Larrabee.

Ars Technica - NVIDIA on the highwire: GeForce 8800 and beyond said:
[...]
Given the extensive discussion of oil and gas and medical imaging that Hara segued into, it's quite clear that NVIDIA is as serious about the HPC market as anyone out there. It was also clear from listening "between the lines" that right now NVIDIA see Intel's Larrabee as the number one potential threat in that space.

Hara talked a fair amount of smack about Intel's forthcoming Larrabee project, and I'm not going to summarize any of the criticisms that he had of Larrabee. However, I can't resist a quick response to them because I think one factor neutralizes pretty much all of what he had to say about Larrabee's lack of established drivers, codecs, and an overall software ecosystem: Larrabee is multicore x86, 'nuff said.
[...]
 
I don't think Stokes's short response that being multicore x86 makes up for the lack of graphics-specific code is valid.

By that logic, x86 would have done smashingly well in other more specialized fields it tried to expand into.

The truth is that x86's success in areas where it wasn't already a dominant presence is limited.
Larrabee has a leg up in some respects compared to other entrants into graphics, Intel's manufacturing capability among them.
That doesn't mean being x86 is all that helpful when graphics has done well without it.
Look at the attempts at pushing x86 into the mobile space, or into telecom.

That being said, Intel's manufacturing, software, and engineering pool is vast.

As an aside: I'm curious what TSMC's response will be. It has an interest in the success of its graphics customers, since Intel is not likely to take their place as a customer.
 