Larrabee delayed to 2011?

They're not bonkers decisions; they're very sensible decisions if they can pull them off without a meaningful negative impact on performance, however you might want to measure that (FLOPS per programmer dollar might be an interesting metric).
I would like to know how x86 increases the flops per programming dollar.

I would like to know of apps that scale to O(100) (and beyond) hw threads and absolutely need full cache coherency implemented in hw.

Unless I can find counterexamples, these two design choices will count as dodgy-at-best, bonkers-at-worst in my book.
 
Are they abandoning Larrabee version 1 (the 32-core version?) or the Larrabee architecture altogether, moving to something more traditional like an NV- or AMD-style GPU?
Dunno about traditional; GPUs are headed to something close to what Larrabee was anyway ... just not x86.

So if they made something more like that, Intel would once again be developing a high-performance non-x86 architecture which directly competes with their x86 line-up ... I think it would be a good idea in this case, but with i860 & Itanium in memory I don't think it's too likely.

The SC presentation fiasco shows there is a lot of political manoeuvring going on ... in that kind of environment the mythical Larrabee v3 can fall by the wayside quite regardless of technical merit.
 
With Larrabee fading into oblivion there is a chance that its vector instruction set will also disappear.
This is a pity, as it is so much better than AVX, and about the only thing I really liked about this chip.
 
With Larrabee fading into oblivion there is a chance that its vector instruction set will also disappear.
This is a pity, as it is so much better than AVX, and about the only thing I really liked about this chip.

Yeah, LRBni is interesting... My guess is that's one of the (few) purposes of the LRB1-as-dev-kit initiative: get people to work with LRBni, since that's going to stick around for potential future iterations.
 
Lux_ said:
AMD has cancelled an architecture that was "too radical".
Which one was that? Care to elaborate?
Bits of info here and here.
Overall I agree: the shift from GHz to manycore and the convergence of GPU and CPU are (although I'm a software guy) very interesting to live through ;).
But I'm not sure SCC and Larrabee (when it ships) will turn out to be very different.
 
I would like to know how x86 increases the flops per programming dollar.

From a programming perspective that's easy. x86 is a known quantity. It takes less time (and therefore costs less money) to develop (and optimize) for an ISA which is already well fleshed-out and known by all than it does otherwise.
 
From a programming perspective that's easy. x86 is a known quantity.

None of that "known quantity" BS helps when you write in anything that is at the level of C or above.

How many of today's programmers know about CPU architectures? Among user-space code, game devs are probably the last holdouts who still have to optimize their code at that level. And even they don't use assembly a lot of the time.

It takes less time (and therefore costs less money) to develop (and optimize) for an ISA which is already well fleshed-out and known by all than it does otherwise.

Are you telling me that it is harder to fix bugs if you compile code for anything that is not x86?

When was the last time you or anybody you know made an x86-specific change in your C (or higher-level) code to optimize it?


The optimizations are fairly ISA agnostic when you do them at C (or higher level).
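To put that in concrete terms, here's the kind of optimization I mean (a generic sketch, nothing x86-specific about it): blocking a transpose for cache locality pays off on any machine with a cache, and the instruction encoding is entirely the compiler's problem.

#include <stddef.h>

/* Naive transpose: strides through dst column-wise and thrashes the cache. */
void transpose_naive(float *dst, const float *src, size_t n)
{
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++)
            dst[j * n + i] = src[i * n + j];
}

/* Blocked transpose: works on BxB tiles that fit in cache.
   The speedup comes from the memory-access pattern, not from any ISA detail. */
#define B 32
void transpose_blocked(float *dst, const float *src, size_t n)
{
    for (size_t ii = 0; ii < n; ii += B)
        for (size_t jj = 0; jj < n; jj += B)
            for (size_t i = ii; i < ii + B && i < n; i++)
                for (size_t j = jj; j < jj + B && j < n; j++)
                    dst[j * n + i] = src[i * n + j];
}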

When was the last time you or anybody you know wrote an x86-specific piece of code in a higher-than-assembly language?


The x86-is-known-and-familiar FUD deserves to be done away with.
 
None of that "known quantity" BS helps when you write in anything that is at the level of C or above.

"Known quantity BS"? :LOL:

So it's just as simple to develop for a brand new ISA as it is one that's well known? Come on.

How many of today's programmers know about CPU architectures? Among user-space code, game devs are probably the last holdouts who still have to optimize their code at that level. And even they don't use assembly a lot of the time.

You just defeated your own argument. The people that need to write low-level code are the very ones we're discussing. Sure, the vast majority of the dev team doesn't need to know ASM, but those responsible for optimization are working at a much lower level than those doing level design.

Are you telling me that it is harder to fix bugs if you compile code for anything that is not x86?

I made no mention of debugging. I'm saying it simply takes less time to code for a known quantity than to have to reinvent the wheel.

When was the last time you or anybody you know made an x86-specific change in your C (or higher-level) code to optimize it?

I am not a programmer. I'm a hardware monkey.

The optimizations are fairly ISA agnostic when you do them at C (or higher level).

True. If you want the most from your code, you'll need to do some amount of low-level optimizations, however.

When was the last time you or anybody you know wrote an x86-specific piece of code in a higher-than-assembly language?

Hardware monkeys don't write code.

The x86-is-known-and-familiar FUD deserves to be done away with.

I don't see it as FUD.
 
"Known quantity BS"? :LOL:

So it's just as simple to develop for a brand new ISA as it is one that's well known? Come on.
I'd be surprised if it took someone who knows a C++ compiler framework inside and out more than a couple of months to get a new target up and running. Autovectorization for LRBni is a much bigger task.

In the big scheme of things, creating compilation/debug tools just isn't a big deal unless the architecture is truly unconventional (say GPUs, or something like TRIPS).
 
I'd be surprised if it took someone who knows a C++ compiler framework inside and out more than a couple of months to get a new target up and running. Autovectorization for LRBni is a much bigger task.

The LLVM guys claim it takes people with prior compiler experience about a week to get a new ISA up and running.

Anyway, his claim was about developing software, not dev tools.
 
If you are coding in assembly, the ISA is only your interface to the microarchitecture, which is very different from present x86 desktop chips. And compared with adapting to 16-wide vectors with vectorized loads/stores and software-managed context switching of fibers, using a slightly different ISA for the scalar code is hardly going to matter.
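To give a feel for what that adaptation looks like at the source level, here's a rough sketch in plain C (the function names are made up for illustration): the real porting effort is restructuring a scalar loop into 16-wide strips with a per-lane mask for the tail, and that work is the same whatever the scalar ISA underneath happens to be.

#include <stddef.h>

#define VLEN 16  /* LRB-style vector width: 16 floats per register */

/* Scalar version: one element at a time. */
void saxpy_scalar(float *y, const float *x, float a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] += a * x[i];
}

/* Strip-mined version: the shape a 16-wide machine wants.
   Each outer iteration is conceptually one vector operation; the per-lane
   predicate handles the ragged tail the way hardware write-masks would.
   A vectorizing compiler (or hand-written intrinsics) would turn the inner
   loop into a single masked vector multiply-add. */
void saxpy_strip_mined(float *y, const float *x, float a, size_t n)
{
    for (size_t i = 0; i < n; i += VLEN) {
        for (size_t lane = 0; lane < VLEN; lane++) {
            int active = (i + lane) < n;   /* per-lane mask bit */
            if (active)
                y[i + lane] += a * x[i + lane];
        }
    }
}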
 
Today Linux and the GNU libs can be compiled for almost any CPU.

It's possible the "x86 is known, etc." FUD that has been spread for ages has something to do with what the big boys (big market share) keep telling PC devs.
 
So it's just as simple to develop for a brand new ISA as it is one that's well known? Come on.
For developers, I don't see how it makes a practical difference.

There are two main uses for LRB:
* as a general GPU: here RPG's question should have been: "When was the last time anyone wrote a pixel or vertex shader in shader assembly language?" (The intermediate language, as defined by MS.) My guess is that this never happens.
* as a compute machine: in this case, you're still likely to use some portable standard, OpenCL, DX compute, whatever, in which case the instruction encoding of the ISA doesn't matter. But if some nutcase decided to write LRB assembler after all, he'd still have to learn the new LRB-specific ISA from scratch. LRB was never supposed to be fast at old-style x86 instructions.
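To illustrate the compute case, here's about the most boring OpenCL C kernel imaginable (host-side setup omitted): nothing in this source cares whether the device underneath decodes x86, LRBni or something else entirely; that is the compiler's and runtime's problem.

/* saxpy.cl - the same source builds for a CPU, a GPU or a LRB-style device;
   the instruction encoding of the target never shows up here. */
__kernel void saxpy(__global float *y,
                    __global const float *x,
                    const float a,
                    const uint n)
{
    size_t i = get_global_id(0);
    if (i < n)               /* guard the ragged end of the index range */
        y[i] += a * x[i];
}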

You just defeated your own argument. The people that need to write low-level code are the very ones we're discussing.
It would be interesting if some of the real game developers here gave some insight about this, but I thought the era of coding even small sections of code in ASM was long gone.
It made sense when there were extremely repetitive sections of code that took at least 50% of the run time. Read: software-based rendering routines in pre-GPU times. Those died with the introduction of Quake 2 (give or take)...

Other than being able to leverage some of the existing infrastructure wrt development tool reuse, I believe the main advantage of x86 is more psychological than anything else. It's a useful bullet on a PowerPoint slide to attract hesitant developers. And that by itself may be reason enough to take some area hit.
 
So apparently just the first iteration is being scrapped for graphics.

How is this going to affect Project Offset?
 
For developers, I don't see how it makes a practical difference.

There are two main uses for LRB:
* as a general GPU: here RPG's question should have been: "When was the last time anyone wrote a pixel or vertex shader in shader assembly language?" (The intermediate language, as defined by MS.) My guess is that this never happens.

Yup.
* as a compute machine: in this case, you're still likely to use some portable standard, OpenCL, DX compute, whatever, in which case the instruction encoding of the ISA doesn't matter.
Absolutely

But if some nutcase decided to write LRB assembler after all, he'd still have to learn the new LRB-specific ISA from scratch. LRB was never supposed to be fast at old-style x86 instructions.

I'll bet anything that he won't be able to beat well-written code in shading languages/OCL/DXCS on the flops/programming-$ metric.

It would be interesting if some of the real game developers here gave some insight about this, but I thought the era of coding even small sections of code in ASM was long gone.
It made sense when there were extremely repetitive sections of code that took at least 50% of the run time. Read: software-based rendering routines in pre-GPU times. Those died with the introduction of Quake 2 (give or take)...

Writing in assembly is about speed, not about flops/programming $, which was the original point of this discussion.
http://forum.beyond3d.com/showpost.php?p=1365903&postcount=260
Other than being able to leverage some of the existing infrastructure wrt development tool reuse, I believe the main advantage of x86 is more psychological than anything else. It's a useful bullet on a PowerPoint slide to attract hesitant developers. And that by itself may be reason enough to take some area hit.

My guess is that only hardware monkeys who don't code themselves will buy those slides.
 
With Larrabee fading into oblivion there is a chance that its vector instruction set will also disappear.
This is a pity, as it is so much better than AVX, and about the only thing I really liked about this chip.

LRBni was (and is) a beautiful and elegant ISA, perhaps the only one to come out of Intel in such good shape. Compared to it, SSEx (AVX too?) is just a set of glorified VLIW instructions.
 
LRBni was (and is) a beautiful and elegant ISA, perhaps the only one to come out of Intel in such good shape. Compared to it, SSEx (AVX too?) is just a set of glorified VLIW instructions.

AVX is basically just glorified SSE. Its major improvement is in the encoding of the instructions rather than the instructions themselves. Although the extended AVX has FMA, it still does not have what a real SIMD ISA should have, i.e. gather/scatter, masked execution, etc.
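For anyone who hasn't bumped into them, this is roughly the semantics a single masked gather gives you in one instruction, spelled out as a plain-C sketch of the idea rather than any particular intrinsic; without hardware support you end up emulating it with a string of scalar loads and shuffles.

#include <stddef.h>

#define VLEN 16  /* one LRB-style vector: 16 lanes */

/* One hardware masked gather, written out in scalar C: for every lane whose
   mask bit is set, load base[idx[lane]] into dst[lane]; masked-off lanes are
   left untouched. Scatter is the mirror image, with a store instead of a load.
   SSE/AVX of this era has no such instruction, so this loop has to be
   emulated element by element. */
void masked_gather(float dst[VLEN],
                   const float *base,
                   const int idx[VLEN],
                   unsigned short mask)   /* one bit per lane */
{
    for (int lane = 0; lane < VLEN; lane++) {
        if (mask & (1u << lane))
            dst[lane] = base[idx[lane]];
    }
}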
 