I saw both Dave Orton and Jen-hsun being infidelitous with 14 midgets and a clown.
And that was only the stuff NOT covered by the NDA I signed to film it.
What about the ferrets? I got some video off of Paul Otellini's computer and it looks like he was hiding outside in the bushes doing some covert surveillance.
To be honest I barely understand what the Larrabee group is or why it matters to Intel, but now I want to frontpage it in about 28-point font. Sometimes trying to stifle the story is the story.
You know, I've really started to doubt whether Larrabee is even supposed to be a high-end, or even discrete, GPU. What if we consider a GPU as just another form of throughput computing, much like Niagara, Niagara2, Rock, Cell, and I'm sure many more to come? So what if Larrabee is actually intended to be Intel's new throughput server/workstation chip? A hypothetical lineup: 32 cores for the Xeon version, 16 cores for desktop, 8 for mobile, and 4 for ultra-mobile. Additionally, some money could be saved by leaving the IGP out and letting the CPU handle graphics itself, since it would now be sufficient to run Vista.
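A minimal sketch of what I mean by "throughput computing," assuming nothing about Larrabee's actual ISA -- just plain C with an OpenMP pragma standing in for lots of hardware threads:

/* SAXPY (y = a*x + y): the canonical shape of throughput work --
 * millions of independent iterations, almost no control flow.
 * A GPU hands each iteration to a shader thread; a many-core
 * throughput CPU (Niagara, Cell, hypothetically Larrabee) slices
 * the range across its cores. Same math, different packaging. */
void saxpy(long n, float a, const float *x, float *y)
{
    #pragma omp parallel for   /* stand-in for N hardware threads */
    for (long i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

Whether the iterations land on shader units or x86 cores is a packaging decision, which is exactly why the GPU/throughput-CPU label is so slippery.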
Calling it a GPU would then just be meant to misdirect IBM, Sun, and AMD about Intel's intentions. And after all, if XScale wasn't profitable enough to keep around, why would they ever bother with a huge high-end GPU? I don't think Nvidia or ATI have ever made the kind of money Intel is interested in.
After looking back over the two job postings from the news thread, I don't really see anything that rules out their using "GPU" as just another term for throughput-CPU. Especially given the level of programmability G80, and I assume R600, have, the two really aren't much different on the surface.