Larrabee: 16 Cores, 2GHz, 150W, and more...

I think the AIBs will be fine. They don't have to care too much about the internals of the chip.

The whiplash I'm speaking of is the establishment of an Intel GPU product line with drivers, tools, and an entire ecosystem and then junking it when Larrabee is released.

If Intel wants to establish an architecture that embraces alternative rendering strategies, why not get that started earlier, as opposed to building up even more inertia with a product that doesn't?
 
I think, if you want to use it as a GPU, it would depend on how you go about it. You have to use different algorithms to do things like texture lookups on a many-core CPU than a GPU, but both can do it pretty well, as long as they have enough local storage and/or threads to minimize the latency.

From that POV, a multi-core CPU that only has a bit of support for some GPU specific functions might do a lot worse if you treat it like a GPU, than a multi-purpose multi-core CPU that is pretty good at most GPU work but needs a different algorithm to make some of it work at all.
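
To make that a bit more concrete, here is a rough C++ sketch of the kind of "different algorithm" I mean (purely illustrative, nothing to do with any actual Intel code): instead of issuing one dependent texture fetch per pixel and eating the full memory latency every time, a many-core CPU implementation would batch up the texel addresses, software-prefetch the whole block, and only then do the reads, so the latencies overlap instead of serializing.

    // Illustrative sketch only: batch texture lookups and software-prefetch
    // the texels so the memory latency of one block overlaps with the work
    // on another, instead of stalling on every dependent load.
    #include <xmmintrin.h>   // _mm_prefetch
    #include <cstdint>
    #include <cstddef>

    struct Texture {
        const uint32_t* texels;  // packed RGBA8
        int width;
    };

    void sample_block(const Texture& tex,
                      const int* u, const int* v,   // precomputed texel coords
                      uint32_t* out, std::size_t count)
    {
        const std::size_t kBlock = 64;
        for (std::size_t base = 0; base < count; base += kBlock) {
            std::size_t n = (count - base < kBlock) ? count - base : kBlock;

            // Pass 1: kick off prefetches for every texel in the block.
            for (std::size_t i = 0; i < n; ++i) {
                const uint32_t* p = &tex.texels[v[base + i] * tex.width + u[base + i]];
                _mm_prefetch(reinterpret_cast<const char*>(p), _MM_HINT_T0);
            }
            // Pass 2: the actual reads, by now mostly cache hits.
            for (std::size_t i = 0; i < n; ++i)
                out[base + i] = tex.texels[v[base + i] * tex.width + u[base + i]];
        }
    }

It is the same lookup in the end; it just has to be expressed differently to keep the cores busy.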

There is no simple way to compare things that have such a different architecture and come up with a single number.

I think the most important thing would be the API.
 
I think the AIBs will be fine. They don't have to care too much about the internals of the chip.

The whiplash I'm speaking of is the establishment of an Intel GPU product line with drivers, tools, and an entire ecosystem and then junking it when Larrabee is released.

If Intel wants to establish an architecture that embraces alternative rendering strategies, why not get that started earlier, as opposed to building up even more inertia with a product that doesn't?

Because not everything is doable when you'd like it to be? Infinite flexibility is wonderful to have, but hard to achieve.

Launching a new architecture whose performance is underwhelming just to get something out isn't necessarily a grand idea either. You know what they say about only getting one chance to make a first impression. With different architectures they can launch different product lines while still getting the overall relationships going and ramped up.
 
Because not everything is doable when you'd like it to be? Infinite flexibility is wonderful to have, but hard to achieve.

Launching a new architecture whose performance is underwhelming just to get something out isn't necessarily a grand idea either. You know what they say about only getting one chance to make a first impression. With different architectures they can launch different product lines while still getting the overall relationships going and ramped up.

Intel's first impression is a product at the $300 price point, and it has little experience in the discrete graphics market.
I'm not expecting to be wowed.

On the other hand, I do expect Larrabee to have some significant growing pains, whenever it comes out.

Intel's GPU designers will most likely get something wrong.
The drivers will most likely get something(s) wrong.
Any ambitious first product will likely get something wrong.

I'm of the opinion that if Intel can bring Larrabee or a lower-end version out early, it would be better served to get exposure sooner at a modest price point.
A big splash that is hit with buggy drivers and inconsistent performance isn't helpful, and if it happens later, it delays Larrabee2.
 
Any ambitious first product will likely get something wrong.


I wouldn't underestimate Intel on that point. I think it might very well happen that they deliver exactly what they want. Their deal with nVidia might include some know-how transfer. They have quite some time to polish it. They do what they do best (x86 cores), so I think it is not that unlikely that they come out with a competitive $300 solution with solid drivers and - unlike the competition - the ability to be programmed in a familiar fashion. Remember they are selling a compiler, too - maybe they bundle it with Larrabee for HPC stuff and voilà you've got a complete toolbox for GPGPU programming, all running on an Intel board with an Intel Xeon as the host CPU and a complete Intel toolchain - giving you the best possible performance. This will also allow developers to take advantage of it more easily than a multi-teraflop G100 without tools, libraries and compilers.
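
To put the "familiar fashion" bit in more concrete terms, here is a hedged sketch of what I'm imagining (the workload is just a stand-in, and none of this is a real Larrabee API): a GPGPU-style kernel written as plain C++ with OpenMP, which Intel's existing compilers already support, so the tools story would be there from day one.

    // Hedged sketch: a GPGPU-style kernel as ordinary C++ with OpenMP,
    // the kind of code an existing x86 toolchain already compiles.
    // The workload is just an illustrative stand-in.
    void brighten(float* pixels, int count, float gain)
    {
        #pragma omp parallel for        // one chunk of the loop per core
        for (int i = 0; i < count; ++i)
            pixels[i] *= gain;          // inner loop is trivially vectorizable
    }

Build it with the compiler's OpenMP switch and it spreads across however many cores the chip has; no new language, no separate driver model to learn.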
 
For some examples of Intel first attempts that got something wrong:

Pentium Pro (the very first Pentium Pro with the expensive large L2),
P4 Willamette, IA64 Merced.

I hardly think I'm underestimating them when I say I doubt Intel's engineers can walk on water.
 
I wouldn't underestimate Intel on that point. I think it might very well happen that they deliver exactly what they want. Their deal with nVidia might include some know-how transfer.

Ummm... let me get this straight: you're suggesting that NVIDIA will willingly help Intel develop a high-performance GPU which would compete directly with NVIDIA products, without Intel buying out NVIDIA? TBH I think that hell freezing over and/or the second coming of JHC would happen before this. I can think of *zero* reasons why NVIDIA would consider doing this, given who they're playing chess with.

Remember they are selling a compiler, too - maybe they bundle it with Larrabee for HPC stuff and voilà you've got a complete toolbox for GPGPU programming, all running on an Intel board with an Intel Xeon as the host CPU and a complete Intel toolchain - giving you the best possible performance. This will also allow developers to take advantage of it more easily than a multi-teraflop G100 without tools, libraries and compilers.

Now if you're arguing that they want to compete against GPGPU then maybe you have a point. But if they want to do that they don't have to develop a GPU, they have to develop a vector-FPU-heavy x86. If Larrabee is supposed to be a mediocre-but-passable GPU and a hard-hitting competitor against GPGPU in the server market (which is cheap due to the volume afforded by mass production of cheap-but-crap GPUs) then yes Intel potentially has a story to tell here. Not convinced that that holds together as a script for a movie though.
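
Just to show what I mean by "vector-FPU-heavy x86", here's a small sketch using today's 4-wide SSE intrinsics (purely illustrative): the programming model is already there on current x86; the question is whether they put enough, and wide enough, units behind it.

    // Today's x86 vector FPU path: 4-wide single-precision SSE.
    // A "vector-FPU-heavy" part would keep this model but make the
    // units much wider and more numerous; this is just an illustration.
    #include <xmmintrin.h>

    void scale_add(float a, const float* x, float* y, int n)    // y = a*x + y
    {
        __m128 va = _mm_set1_ps(a);                  // broadcast the scalar
        int i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 vx = _mm_loadu_ps(x + i);
            __m128 vy = _mm_loadu_ps(y + i);
            vy = _mm_add_ps(_mm_mul_ps(va, vx), vy); // four lanes at once
            _mm_storeu_ps(y + i, vy);
        }
        for (; i < n; ++i)                           // scalar tail
            y[i] = a * x[i] + y[i];
    }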
 
I can think of *zero* reasons why NVIDIA would consider doing this, given who they're playing chess with.

Maybe nVidia wants an x86 license ... I know it's unlikely, but Intel buying nVidia is at least remotely possible, too. At the moment, we simply don't know how good their new graphics division is, and who works there, do we?

I hardly think I'm underestimating them when I say I doubt Intel's engineers can walk on water.

Yes, Intel did some things wrong, but after all, they did a few things right (Pentium M/Centrino, anyone?), too, otherwise they wouldn't be where they are.

Not convinced that that holds together as a script for a movie though.

I don't think they have much choice: I doubt they'll be able to have a competitive high-end graphics card on the first try, so the best for them is to have a good GPU with superior GPGPU capabilities and continue from this point on. It's surely much easier for them than the other way round.
 
Guess that begs the question whether they'd have given up on IA64 already had it not been for their contractual agreements with HP etc.
Exactly. And to show a profit on the R&D and investments made, of course. Who in their right mind buys an Itanium server over an Athlon one? Then again, most people still seem to buy Xeon servers, because that's the safe bet, and what they bought so far. It didn't let them down. Going AMD is scary.
 
Guess that begs the question whether they'd have given up on IA64 already had it not been for their contractual agreements with HP etc.

There's good evidence Intel is backing Itanium.
It's not conquering the world, but it is still growing and has been characterized as generating an operating profit, even if it hasn't recouped the massive early investment.

The amount of effort that went into Montecito looks (to me) to be above and beyond trying to run out the clock on contractual obligations.
 
so the best for them is to have a good GPU with superior GPGPU capabilities and continue from this point on. It's surely much easier for them than the other way round.
That'll only happen if Microsoft really likes it, because there is no way whatsoever that such a CPU can make a good GPU in the mainstream x86 market without serious D3D changes.

But they might just pull it off, if they make it interesting enough for Microsoft to support them. That doesn't only mean a lot of money, but also a very interesting new architecture. Like with AMD's x64.

If the heavy hitters think it's going to be the next best thing, they can succeed. And just about everyone wants a good alternative. Chicken and egg. They won't succeed on their own with something like this.
 
Btw, I think we all agree that CPUs and GPUs are converging slowly. The only question is: how? We need a new instruction set, for starters.
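
One concrete example of the gap, as I see it (my illustration, not a prediction of what such an ISA would actually contain): GPU-style gathers, i.e. per-lane indexed loads, are something shaders do all the time, and with current SSE you can only fake them one scalar load at a time.

    // Current SSE has no gather, so an indexed load across lanes has to
    // be emulated with four scalar reads. A converged CPU/GPU instruction
    // set would presumably want this as a single operation.
    #include <xmmintrin.h>

    // "Gather" table[idx[0..3]] into one 4-wide register, the slow way.
    __m128 gather4(const float* table, const int idx[4])
    {
        return _mm_set_ps(table[idx[3]], table[idx[2]],
                          table[idx[1]], table[idx[0]]);   // lane 0 = idx[0]
    }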
 
Maybe nVidia wants an x86 license ... I know it's unlikely, but Intel buying nVidia is at least remotely possible, too. At the moment, we simply don't know how good their new graphics division is, and who works there, do we?
I think Intel feels that the current number of x86 manufacturers being greater than 1 is at least 1 too many.

Yes, Intel did some things wrong, but after all, they did a few things right (Pentium M/Centrino, anyone?), too, otherwise they wouldn't be where they are.

Pentium M was descended from the Pentium Pro core. Its overall design philosophy and the techniques used were elaborations of something Intel had done for years.
It was not the major shift that Pentium Pro, P4, or Itanium was.

If I give you credit for your example, I will raise you Intel's early RDRAM chipsets and the first iteration of FB-DIMMs as additional counterexamples.
 
I think Intel feels that the current number of x86 manufacturers being greater than 1 is at least 1 too many.
If we turn that around, that would be why many people think that Microsoft being the single mainstream OS manufacturer is at least 1 too few.

Monopolies are the ultimate answer to the capitalistic question. Intel is just running behind schedule: they have one left to go.
 
If we turn that around, that would be why many people think that Microsoft being the single mainstream OS manufacturer is at least 1 too few.

Monopolies are the ultimate answer to the capitalistic question. Intel is just running behind schedule: they have one left to go.

There's also VIA; I'm sure Intel forgets about them too.
 
Nvidia providing patent protection is one thing. Nvidia providing actual know-how and IP blocks? I still haven't seen the deal that makes sense for that yet. Don't just count what they could get by having a license to do servers or some such; look at what they could lose too. And every time they enable either AMD or Intel to get better, they crank up Intel/AMD's bundling advantages another notch. If you can't compete on price and convenience, you have to compete on quality and performance. If you can't compete on either, you're f**ked.
 