[Beyond3D Article] Intel presentation reveals the future of the CPU-GPU war

No graphics? So it all starts to make sense..

Says who? ;)
"It will be easily programmable using many existing software tools, and designed to scale to trillions of floating point operations per second (Teraflops) of performance. The Larrabee architecture will include enhancements to accelerate applications such as scientific computing, recognition, mining, synthesis, visualization, financial analytics and health applications."
It all fits the points Nvidia was trying to make with CUDA - except for the word "Larrabee", of course. ;)
 
Even if it scales to petaflops, that doesn't mean it's going to be good for graphics; you need much more than some zillion FLOPS to be good at that.
 
Even if it scales to petaflops, that doesn't mean it's going to be good for graphics; you need much more than some zillion FLOPS to be good at that.

Well, if you take a look at AnandTech from yesterday, it seems like Intel is suggesting that graphics isn't what they intend it to be good at. And certainly the slides we put up are very GPGPU-focused.

And yet Carmean is there, and the VCG page specifically mentions "extreme gaming" as one of their target areas. :???:

The entire picture of Intel's GPU efforts remain a riddle wrapped in a mystery inside an enigma. . . :LOL:
 
I think Intel isn't entirely sure what would benefit and be a good marketing target, either. Just like us.

I simply think they don't have many (good) plans for what to do with all that power, and are still trying to focus on x86 single-thread emulation. So I expect them to batch those cores together into smaller or wider virtual x86 cores, depending on the workload.

I am really curious how they are going to handle the required consistency. Cache memory isn't going to help there. Very fast local buffers and interconnects that can be allocated to the core collection and store the states of finished ops might. Hence, those uops.

Which does take it in the direction of Cell, but with a cache instead of relying on DMA, and with local storage that is invisible unless you run in native mode. Which you might be able to allocate according to your needs. That would be a good win, if it turns out to be fast enough.
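
Purely as a toy illustration of that contrast (memcpy standing in for a DMA engine, a static array for the local storage; none of the names or sizes here reflect real Cell or Larrabee interfaces):

```c
/* Toy sketch only: memcpy() stands in for a DMA transfer, TILE for a
 * software-managed local store.  Nothing here is an actual Cell or
 * Larrabee interface. */
#include <stdio.h>
#include <string.h>

#define N    4096
#define TILE 256

/* Cache model: just read main memory and let the hardware keep
 * whatever it decides to keep. */
static float sum_cached(const float *data, size_t n)
{
    float acc = 0.0f;
    for (size_t i = 0; i < n; ++i)
        acc += data[i];
    return acc;
}

/* Explicit local-store model: software stages each tile into a small
 * buffer it manages itself, then works only on that private copy. */
static float sum_local_store(const float *data, size_t n)
{
    static float local[TILE];                 /* the "invisible" local storage */
    float acc = 0.0f;
    for (size_t base = 0; base < n; base += TILE) {
        size_t chunk = (n - base < TILE) ? (n - base) : TILE;
        memcpy(local, data + base, chunk * sizeof(float));  /* "DMA" in */
        for (size_t i = 0; i < chunk; ++i)
            acc += local[i];
    }
    return acc;
}

int main(void)
{
    static float data[N];
    for (size_t i = 0; i < N; ++i)
        data[i] = 1.0f;
    printf("cached: %.0f  local-store: %.0f\n",
           sum_cached(data, N), sum_local_store(data, N));
    return 0;
}
```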
 
If the area/performance hit for snooping is significant, I'd rather they just not allow replication at all for writeable pages or cache lines ... sure, it puts a burden on the developer, but meh ... if you don't understand your program's dataflow, you shouldn't be expecting good performance anyway.
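
To illustrate the kind of burden that puts on the developer, here's a minimal sketch of the discipline involved: give every thread its own cache-line-aligned partition so no writeable line ever needs to be replicated. The line size and thread count are just assumptions, nothing Larrabee-specific. Build with -pthread.

```c
/* Each thread writes only into its own cache-line-sized slot; shared
 * reads happen once at the end.  LINE and NTHREADS are assumptions. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define LINE     64                      /* assumed cache-line size in bytes */

struct lane {
    long count;
    char pad[LINE - sizeof(long)];       /* keep each counter on its own line */
};

static struct lane lanes[NTHREADS];

static void *worker(void *arg)
{
    struct lane *mine = arg;             /* this thread writes only here */
    for (long i = 0; i < 10 * 1000 * 1000; ++i)
        mine->count++;
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    for (int i = 0; i < NTHREADS; ++i)
        pthread_create(&t[i], NULL, worker, &lanes[i]);
    long total = 0;
    for (int i = 0; i < NTHREADS; ++i) {
        pthread_join(t[i], NULL);
        total += lanes[i].count;         /* read-only aggregation at the end */
    }
    printf("total = %ld\n", total);
    return 0;
}
```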
 
It's clearly prefixed by 'Classic', and the description starts with 'just about one year ago'. The point is we haven't had any new articles since February, and there's no point constantly pointing at the APX 2500 and ATI Linux Drivers articles - so I changed that to three good articles that might remind some people of what happened in the last year or so. What's so bad about that? :)
 
Larrabee is definitely targeted at graphics. I think the confusion is that Intel hasn't figured out how to publicly set expectations.

I think they want to avoid saying "we're going to beat NV and ATI" since it's clear that if they can get within 20% of the performance of contemporary GPUs it'll be a huge success technically.

DK
 
What makes you think Larrabee is going to be handled with kid gloves?

Perceptions of other graphics products with slimmer performance shortfalls have been much less forgiving.

A 20% shortfall in games where contemporary cards struggle to maintain a consistently playable framerate would be enough to declare Larrabee unusable.

If it's a 20% shortfall on average, well even R600 didn't do that badly in most games.
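
Putting made-up framerates on that, just to spell the point out:

```c
/* Invented numbers: the same 20% deficit reads very differently
 * depending on where the baseline sits. */
#include <stdio.h>

int main(void)
{
    double shortfall  = 0.20;
    double heavy_game = 30.0;   /* contemporary card already struggling */
    double light_game = 90.0;   /* contemporary card with headroom      */

    printf("heavy: %.1f -> %.1f fps\n", heavy_game, heavy_game * (1.0 - shortfall));
    printf("light: %.1f -> %.1f fps\n", light_game, light_game * (1.0 - shortfall));
    return 0;
}
```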
 
20% is a lot but there's always room to learn from your own mistakes -> Larrabee 2 :)
 
That would be my expectation, but I'd hope for better results on the first outing.

Larrabee by that time would be a 32nm design, while the best the GPUs could hope for is possibly a 40nm half-node; whether with or without high-k and metal gates, I'm unsure.

Even failing that, we can already assume circuit performance will be something TSMC wouldn't be able to match for another couple of years afterwards, if all the stars align.
Power-wise, Larrabee won't be a lightweight, if some slides indicating something north of 150W are to be believed.

That's an embarrassment of riches I'd rather see produce more.
 
I read in one of the Larrabee articles that Intel aims for mid-range performance with the first generation, and if things go well, the successor should be able to get into the high-end.
I suppose that makes sense; after all, they ARE the newcomer in this market. It took ATi until the Radeon 9700 to really get on top in the high-end market as well, and ATi was far from a newcomer at that time.
 
What makes you think Larrabee is going to be handled with kid gloves?

Perceptions of other graphics products with slimmer performance shortfalls have been much less forgiving.

A 20% shortfall in games where contemporary cards struggle to maintain a consistently playable framerate would be enough to declare Larrabee unusable.

If it's a 20% shortfall on average, well even R600 didn't do that badly in most games.

If that would be a supposed 20% shortfall compared to the fastest GPU of the time, Intel should be cheerleading if their first strike can yield as much as R600's sales did. Wasn't that roughly in the 10% region of the high-end segment for 2007? That's a whole damn lot for an IHV entering the GPU market for the first time.

Personally I wouldn't give a rat's ass about performance; Intel would in the above case adjust the price to that thing's performance anyway. What would worry me most would be driver stability/compatibility, as well as, of course, image quality.

The common mistake many unfortunately make is to concentrate on the price/performance ratio of a GPU. It should really be price/performance/IQ, and since I've been getting more and more sensitive to various optimisations over the last few years: 100 fps with ultra-optimized AF isn't really "20% faster" than 80 fps with true high-quality AF.
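
A toy calculation of what I mean, with an entirely subjective, invented IQ weighting:

```c
/* Invented numbers only: weight framerate by some subjective
 * image-quality factor and the "faster" card's lead can evaporate. */
#include <stdio.h>

int main(void)
{
    double fps_optimized = 100.0, iq_optimized = 0.80; /* heavily "optimized" AF */
    double fps_reference =  80.0, iq_reference = 1.00; /* true high-quality AF   */

    printf("raw ratio   : %.2fx\n", fps_optimized / fps_reference);
    printf("IQ-weighted : %.2fx\n",
           (fps_optimized * iq_optimized) / (fps_reference * iq_reference));
    return 0;
}
```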
 
I read in one of the Larrabee articles that Intel aims for mid-range performance with the first generation, and if things go well, the successor should be able to get into the high-end.
I suppose that makes sense; after all, they ARE the newcomer in this market. It took ATi until the Radeon 9700 to really get on top in the high-end market as well, and ATi was far from a newcomer at that time.
Does anyone really believe Intel?

It all looks like PR to me... I don't think Intel is capable of doing graphics unless NVIDIA helps them like ATi did with AMD, and that will never happen now. I think RT and Larrabee won't make a dent in graphics for a decade. By then they will have just caught up to where AMD and NVIDIA are now.

To me it appears to just be smoke and mirrors from the same PR team that brought us "NetBust 10GHz or Bust"; this looks like a move to discredit NVIDIA and buy time for Intel to really "come up with something".
 
Intel does have some advantages relative to AMD and NVIDIA. But the reality is that Intel's process technology is in no way comparable to the benefit of, say, 10 years' experience designing graphics hardware.

The first time you do ANYTHING, you are bound to screw up and make mistakes. The best you can hope for is that the mistakes are minimized or can be somehow hidden (hurray for a driver!).

I agree that Intel's silicon is vastly higher performance than TSMC's, but they also have VASTLY more restrictive design rules, which has implications for die area.

Also, I expect Larrabee to be on 32nm about a year after the first 32nm products ship, so in all reality, NV and AMD will have access to 32nm as well.

DK
 
dkanter: Agreed, that's reasonable. On the other hand, it's worth considering that many of the software guys aren't exactly new to graphics, and that the software portion will be much more important than in normal GPUs. So from my POV, the main risk is hardware engineers overestimating what software engineers can do and software engineers not taking certain hardware efficiency metrics into proper consideration until it's too late. Whether that will actually happen is anyone's guess, however.

Regarding density and performance - I am seriously expecting the 4Q09/1H10 Larrabee to be on 32nm. I said it and I'll say it again: if it's 45nm, Intel doesn't need to bother even releasing the thing, they'll just look dumb because they'll be at a noticeable process *disadvantage* given what I've seen of TSMC's 40nm process so far (in terms of (perf/mm²)/$). I'd easily describe 40nm as the biggest step in TSMC's process technology since 130nm if they deliver. Of course, given what happened at 130nm... well we'll see! :)

Anyhow, as I pointed out in my news piece yesterday, TSMC is now releasing 32G in 4Q09, instead of 1H10 - while 32LP was moved to 1Q10. So given that 40G was released in 2Q08, that means 32G chips will likely come out ~18 months after that. I'd estimate Q310, in time probably for the Winter OEM cycle for part of the line-up.
 
There was an interview with David Kirk on bit-tech a couple of days back, discussing CPUs and GPUs. Basically, the same points I've seen come up here are mentioned in the interview.
 
It all looks like PR to me... I don't think Intel is capable of doing graphics unless NVIDIA helps them like ATi did with AMD, and that will never happen now. I think RT and Larrabee won't make a dent in graphics for a decade. By then they will have just caught up to where AMD and NVIDIA are now.

Do not compare Intel with AMD, please.
Intel is a FAR bigger company, which has developed FAR more technology and has been successful in a lot of areas.
With all the resources that Intel has, they can surely make this work. Getting back to your Netburst... no, it was not a very efficient design... but because Intel had a huge advantage over AMD in terms of manufacturing technology, they still made the design work, simply by brute force.

Larrabee could be a similar scenario... It might not be as efficient as nVidia's GPUs, but if it runs at much higher clock speeds, with more cores, more cache and all that, then it might just work.
Raytracing is of course nonsense at this point, but that's another story.
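
To spell out the brute-force arithmetic (with invented figures, not published specs for any actual part): peak throughput is just cores x clock x FLOPs per clock, so process headroom that buys clock or core count can paper over a less efficient design.

```c
/* Back-of-the-envelope only: all core counts, clocks and SIMD widths
 * below are placeholder guesses. */
#include <stdio.h>

static double peak_gflops(int cores, double ghz, int flops_per_clock_per_core)
{
    return cores * ghz * flops_per_clock_per_core;
}

int main(void)
{
    /* hypothetical many-core x86: 16-wide SIMD with FMA = 32 FLOPs/clock/core */
    printf("32 cores @ 1.0 GHz: %.0f GFLOPS\n", peak_gflops(32, 1.0, 32));
    printf("32 cores @ 2.0 GHz: %.0f GFLOPS\n", peak_gflops(32, 2.0, 32));
    printf("48 cores @ 2.0 GHz: %.0f GFLOPS\n", peak_gflops(48, 2.0, 32));
    return 0;
}
```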
 