New blog post from Nvidia's new chief scientist.

Nothing too spicy here, though. It's basically just Mr. Dally criticizing Intel's use of the x86 architecture for heterogeneous computing.

This brings up an interesting question: why is Intel using x86 for Larrabee? After all, they designed new instructions, LRBni, for the bulk of Larrabee's work, so in theory they could easily pair those with another instruction set. Of course, market-wise there's no other option for Intel. Intel can't use ARM, even though it looks like a reasonable choice. IA-64 is probably a better candidate, but it's still quite complex, if not more complex than x86. Furthermore, sticking with x86 opens up the possibility of integrating LRBni into future x86 CPUs, which could make it easier for Intel to compete with highly integrated CPU+GPU solutions.

Of course, all of this is market-based reasoning. Is it technically better to use x86 for Larrabee? I don't know. I don't buy Intel's "it's x86 so it's easier to use" argument. After all, very few people will actually write Larrabee code in assembly, or run any "traditional" x86 code on Larrabee to enjoy the benefit of compatibility.

Personally, I've been a firm believer in heterogeneous computing for a long time. It's good to see that it's becoming a "mainstream" thing now. :)
 
Furthermore, sticking with x86 opens up the possibility of integrating LRBni into future x86 CPUs, which could make it easier for Intel to compete with highly integrated CPU+GPU solutions.
Fingers-crossed this happens sooner rather than later.

Jawed
 
The x86 model, with fully coherent caches, does lock Intel into a relatively coarse-grained architecture. At present GPUs are even more coarse-grained, but that's not necessarily where they will be in the future.
 
MMX + SSEx + AVX + LRBni ... well, that sounds a bit crowded for a CPU ;).

In some ways I am surprised Intel didn't make a power play and push LRBni forward to the CPUs. I am sure AVX has been in the works for a while, but the sooner Intel can get their foot in the door, especially at the baseline, the sooner their marketshare advantage can be leveraged. I am sure there are all sorts of technicalities, arguments for AVX over LRBni, and timelines that made it unreasonable, but if LRBni is their long-term target (big if), then yet another middle step that may not garner substantial support seems odd. Then again, I am not sure Intel has an exact target for Larrabee.
 
Some of the Larrabee slides have me sort of confused on this, as some describe the chip as having a vector pipe and a scalar integer pipe. I don't know if they just lumped the dual-issue x86 portion under the category of "scalar pipe", or if the vector unit has subsumed one half of the dual-issue P54 core.

If that is the case, then from the POV of x86 integer code, the thing is going to look like a single-issue processor, which pushes Larrabee even further back on the maturation curve, back to ye olden times with respect to what benefits any x86-specific knowledge will grant compilers.
 
Larrabee seems able to execute one scalar instruction or vector store in the first pipe, and one vector instruction (which might be a load or load+op instruction) in the second pipe. As you guessed, for purely scalar code it's single-issue x86.
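The pairing rule above can be sketched as a toy model — note this is just one reading of the slides, not official Intel documentation, and the instruction categories and function names here are made up for illustration:

```python
# Toy model of the Larrabee dual-issue pairing rule as described above
# (an outside reading of the slides, not official Intel documentation):
# pipe 0 takes one scalar x86 instruction or a vector store, while
# pipe 1 takes one vector instruction (including loads and load+op forms).

SCALAR = "scalar"
VECTOR = "vector"        # vector ALU op, possibly a load+op form
VECTOR_LOAD = "vload"
VECTOR_STORE = "vstore"

def can_pair(first, second):
    """Return True if the two instructions can dual-issue in one cycle."""
    pipe0_ok = first in (SCALAR, VECTOR_STORE)
    pipe1_ok = second in (VECTOR, VECTOR_LOAD)
    return pipe0_ok and pipe1_ok

def issue_cycles(stream):
    """Count cycles to issue an in-order instruction stream under the rule."""
    cycles = 0
    i = 0
    while i < len(stream):
        if i + 1 < len(stream) and can_pair(stream[i], stream[i + 1]):
            i += 2          # both pipes used this cycle
        else:
            i += 1          # single issue
        cycles += 1
    return cycles

# Purely scalar code never pairs: effectively a single-issue x86.
print(issue_cycles([SCALAR] * 4))            # 4 cycles
# Interleaved scalar + vector work can overlap.
print(issue_cycles([SCALAR, VECTOR] * 2))    # 2 cycles
```

The interesting consequence is the last two lines: scalar-only streams get no benefit from the second pipe, which is exactly why purely scalar code looks single-issue.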
 

Wow, so they had to pick up compilers from the attic ;). J/K :).
 
http://venturebeat.com/2009/05/22/i...-from-academia-to-the-computer-graphics-wars/

VB: Why did you decide to join Nvidia? You replaced David Kirk as chief scientist. Do you see the world differently than he does?

BD: David and I see the world similarly. That was a reason why it was natural for me to pick up where he had left off. He is more of a graphics person and I am more of a parallel computing person. In terms of Nvidia research, he built it with a strong graphics component. I’m trying to complement that by building strength in other areas.
Jawed
 
So does this mean we're going to see a new platform from Nvidia competing with x86 for parallel computing, maybe with x86 emulation, or do you think Nvidia is still happy to play second fiddle as an add-in card for PCs?
 
No, it used ultra-simplified cores; their ISA could be counted on your fingertips. And that chip had RAM mounted on top of it to reduce latency as well.
 