Intel pushing Larrabee console deal with Microsoft

I think LRB will be competitive with what AMD/NV have when/if LRB makes its debut in a console. It may not be as fast, but it will likely be fast enough that people won't notice and/or care about any differences in graphics. It will likely come down to the art teams.
 
Can the Cell run DirectX? That is a big difference in my book.

P.S. If it turns out Larrabee sucks at DirectX...then forget I said anything;)
In a closed box, how could DirectX suck?
I mean, DirectX is MS; they will provide a version of DirectX fine-tuned for Larrabee (not that I think it would be great in the first software iteration).
 
I'm sure they'll talk sweet and all, but 3 years after launch they'll be busting Microsoft's balls again. Especially if we're talking about a part coming off their more advanced lines. MS is gonna be looking for shrink after shrink to drive their BOM down and Intel isn't going to want to waste their best fabs on a product with a razor thin profit margin.
As Laa-Yosh stated, MS got burnt by a bad deal with the first box; they changed their way of doing things with the 360 and got burnt again. MS is maturing as a hardware vendor: if the contract is no good, they won't sign it, that's it.
Intel has a lot to win, really a lot; there are rumors after rumors about them trying to make it into this market. We will see how far they are willing to go.
I think that performance and the extra software-related efforts will be the main factors in MS's choice; Intel would be stupid not to do what it takes to win the market, even if it means earning little money.
 
If you think that CELL and LRB have a lot in common perhaps you shouldn't comment on either chip.

You're putting words in my mouth, pal. I compared them from a practical, business and impact sort of view, not the actual chip technology.
 
Some people are saying that MS will not settle for not owning the IP of the GPU again like on Xbox 1, but there are no fabs on the planet that can manufacture LRB besides Intel's. So what good would it do MS to own the IP if they have to pay Intel to manufacture it, especially if Intel decides that LRB 1 is a loss leader to ensure market penetration and sells it to MS for cheap? I think this issue is a bit overblown.

My question for the experts: are LRB's cores capable of adequately running general game code that the 360 CPU runs now or would MS need a more traditional CPU to help out? I'm asking because, if they are adequate and assuming MS has room for 2 dies on the motherboard but doesn't necessarily need a traditional CPU, Intel could pitch two medium-sized LRB dies working together (say 150mm^2 each; I don't know how many cores that would be on 32nm) to maximize yields and reduce cost.
 
I wasn't saying they will/would, only that the potential is there for Intel to fall short in regards to performance and/or IQ. The GPU market in general bears out how difficult it is to attain good performance with high-quality IQ (or better: a good balance) within a given threshold of die area, power, heat, etc. If AMD (ATI) and NV can struggle in these areas after a decade+ of experience, I have reservations about assuming Larrabee will be competitive in performance as well as roughly comparable in AA and AF quality. Beyond the issues of experience, iteration, and refinements & efficiencies, Intel's past efforts, although half-hearted, sounded smart and had some good ideas but in the end couldn't compete.
The point is that both Nvidia and ATI already ship pretty large chips with high power consumption. I think they have reached the limit in these two areas.
I agree with some of Aaron's comments (from the core forums):
Aaron Spink said:
Part of this comes down to the current trends in the graphics space where the vendors are pushing the performance side of things so hard that they've lost sight of the efficiency and thermal aspects of the designs. We've got graphics cards right now pushing 2x the power of any PC CPU; the CPU guys would love to have power budgets in the 200+ watt range instead of the 45-130 W range.

It wouldn't surprise me to see a slowdown in performance increases in the coming years, as the designs have basically already pushed power budgets about as far as they will go, along with die sizes.
Basically, aiming at 2 TFLOPS (I know it's a theoretical figure; we have no clue about efficiency) with full programmability might not be too shabby.
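For reference, the usual back-of-envelope behind a ~2 TFLOPS figure, assuming the oft-rumoured 32 cores with 16-wide single-precision vector units, FMA support, and roughly a 2 GHz clock (none of which is confirmed):

$$32~\text{cores} \times 16~\text{lanes} \times 2~\tfrac{\text{flops}}{\text{FMA}} \times 2.0~\text{GHz} \approx 2.05~\text{TFLOPS (peak, theoretical)}$$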
For AF, quality will really depend on how potent the fixed-function units are.
For AA, I've no clue. ROPs/render back-ends are becoming more and more programmable. With advanced types of AA, Larrabee may end up on par; for simpler ones, well??? For high AA levels (4x, 8x, 16x), no clue here either; whether the on-chip bandwidth will allow for that, I don't know. (Surely some members have a clue here.)
I will be shocked if Larrabee #1 at a specific die size can be within 10% of AMD/NV's DX10/11 performance with similar IQ. The reason I would be shocked is the difficulty proven GPU makers have shown in getting these things right; the lack of an iterative process to refine and fix issues makes me doubtful Intel will be able to address all of them and deliver competitive performance with their first offering.
I agree, but given the time frame it's likely that we're really talking about the second iteration of Larrabee.
What I expect from Larrabee, personally, is some bottlenecks Intel is currently downplaying that hurt mainstream GPU performance, but also that it opens up some new techniques for those with the time and money to invest in them, as well as new approaches to computing. Having 32+ x86 cores with large vector units and gobs of bandwidth should allow for some neat experiments. But will it be neck-and-neck with NV/AMD in regards to DX10 performance out of the box? Getting it "right" in the performance segment of the GPU market seems to be a pretty tall task that many, including Intel, have failed at.

I dunno, maybe you think it will and I will be shocked ;) I would love to be wrong of course: huge array of CPUs capable of leading class GPU performance? Who wouldn't want that?
So far the main complaint from other vendors (read: Nvidia) has been about bandwidth.
Nvidia has no advantage in the chip-to-VRAM department, so they may be speaking of internal bandwidth. So far Intel has given no figures, so there might be some truth in it. Wait and see.

As a PC part, no matter the hardware (excellent/good/bad/ugly), the challenge Intel faces on the driver side is so big that game performance might not be representative of the chip's technical value. Hence the huge need for Intel to showcase it in a closed-box system (even confined to a "fixed" graphics pipeline close enough to DirectX 11.x).
 
My question for the experts: are LRB's cores capable of adequately running general game code that the 360 CPU runs now or would MS need a more traditional CPU to help out?

I'm not an expert, but if you believe the Larrabee wiki then one 32-core version should do the trick as both a CPU and GPU. It claims 25 1 GHz cores sustained Gears of War at 1600x1200 resolution at 60 fps (no MSAA), which would leave a bunch of cores over for other stuff. The speculation has been that the 32 cores would run at 1.7 GHz to 2.3 GHz in shipping form, so that should be enough performance for one chip to do it all, again assuming the wiki is true. On the other hand, you have to wonder how many watts such a beast would eat up.
 
The speculation has been that the 32 cores would run at 1.7 GHz to 2.3 GHz in shipping form, so that should be enough performance for one chip to do it all...
At about 2-3x the performance of the current gen. That would result in, what, a bit better graphical level than this gen but with better IQ? Is that going to be enough progress?
 
We shouldn't dismiss multichip configurations, though we don't know if LRB architecture supports that as well (fairly sure it does).
 
I was wondering: with the next generation, perhaps it might help to understand some of the requirements that are going to be placed on the CPUs by the next-generation interfaces which are likely to be coming. Let's say the Xbox 360 uses a motion capture system with movement and maybe facial recognition capture (2+ cameras). Would that be the kind of workload suited to a parallel CPU like LRB, or would it be more suited to a more traditional x86/PowerPC variant with a more monolithic core design?
 
At about 2-3x the performance of the current gen. That would result in, what, a bit better graphical level than this gen but with better IQ? Is that going to be enough progress?
It's likely to be more than that, and from a single chip, thus a cheap box.

1600x1200 is 1,920,000 pixels
1280x720 is 921,600 pixels

Basically, the Larrabee used in Intel's simulation can render more than twice as many pixels at twice the framerate while being badly underclocked (by roughly a factor of two).
It could end up able to achieve 8 times what Xenos does.
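Spelling that out (assuming a 1280x720 @ 30 fps Xenos-era baseline and the roughly 2x clock deficit of Intel's simulation, both of which are my assumptions):

$$\frac{1600 \times 1200 \times 60}{1280 \times 720 \times 30} \approx 4.2, \qquad 4.2 \times 2~(\text{clock}) \approx 8\times$$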

Not to mention that the game is not optimized for it in the slightest, and I'm pretty sure the software layer allowing the game to run is nowhere near mature.

I think you make it sound worse than it is.
 
I'm not an expert, but if you believe the Larrabee wiki then one 32-core version should do the trick as both a CPU and GPU. It claims 25 1 GHz cores sustained Gears of War at 1600x1200 resolution at 60 fps (no MSAA), which would leave a bunch of cores over for other stuff. The speculation has been that the 32 cores would run at 1.7 GHz to 2.3 GHz in shipping form, so that should be enough performance for one chip to do it all, again assuming the wiki is true. On the other hand, you have to wonder how many watts such a beast would eat up.
I saved Intel's Larrabee presentation PDF (from the IDF).
(more presentations from siggraph here: http://s08.idav.ucdavis.edu/)
They've tested three games: F.E.A.R., GeoW, and Half-Life 2.
The 25-core figure is the worst-case scenario for F.E.A.R., though GeoW comes close.
On average it's lower than that; from a gross estimate of the graphs I would put:
F.E.A.R.: ~15 cores (4x AA)
GeoW: between 15 and 20 cores
HL2: 7-8 cores (4x AA)

The first two games are pretty chaotic in the number of cores used to sustain the 60 FPS framerate; HL2 is almost constant. I think some members did proper estimates in the related core forum thread.

EDIT: I found the values; thanks to TITANIO for providing the data in the first place :)
GeoW
Average: 16.78
Min: 12
Max: 24
Median: 16
F.E.A.R
Average: 13.3
Min: 7
Max: 26
Median: 14
Discussion about the results provided by Intel starts here:
http://forum.beyond3d.com/showthread.php?t=48314&page=7
 
I was wondering: with the next generation, perhaps it might help to understand some of the requirements that are going to be placed on the CPUs by the next-generation interfaces which are likely to be coming. Let's say the Xbox 360 uses a motion capture system with movement and maybe facial recognition capture (2+ cameras). Would that be the kind of workload suited to a parallel CPU like LRB, or would it be more suited to a more traditional x86/PowerPC variant with a more monolithic core design?

Well, both can multithread; LRB just has more threads/cores and more processing power than Xenos. There's not much difference in terms of thread allocation.
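For what it's worth, a hedged sketch (invented names, nothing from any real camera SDK) of why that sort of camera workload suits a many-core part: a simple frame-differencing pass is independent per pixel, so its rows split cleanly across however many cores/threads are available.

```cpp
// Hypothetical illustration only: a crude per-pixel motion mask
// (frame differencing) parallelised across N worker threads.
// Every pixel is independent, so the work splits cleanly by row,
// which is the kind of job that scales with core count on a chip like LRB.
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

void motion_mask(const std::uint8_t* prev, const std::uint8_t* cur,
                 std::uint8_t* mask, int width, int height,
                 int threshold, int num_cores) {
    std::vector<std::thread> workers;
    const int rows_per_core = (height + num_cores - 1) / num_cores;
    for (int c = 0; c < num_cores; ++c) {
        const int y0 = c * rows_per_core;
        const int y1 = std::min(height, y0 + rows_per_core);
        workers.emplace_back([=] {
            for (int y = y0; y < y1; ++y)
                for (int x = 0; x < width; ++x) {
                    const int i = y * width + x;
                    const int d = int(cur[i]) - int(prev[i]);
                    mask[i] = (d > threshold || -d > threshold) ? 255 : 0;
                }
        });
    }
    for (auto& t : workers) t.join();  // wait for all rows to finish
}
```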
 
I have some things in mind that MS could see as very beneficial against the competition by adopting a powerful Intel product
I highlighted the important words for me.
The Wii effect should not be overestimated; if two vendors (out of three) come out with underpowered systems and one offers a clearly more potent system, it's likely to take most of the hardcore enthusiast market, whose size and buying power should not be underestimated.
Edit
I removed the sentence as I don't want this thread to go nowhere.
Jandlecack I think you can delete your comment ;)
 
It's kind of off-topic, but Sony always wants to revolutionize, or at least go against the stream. A powerful CPU rather than GPU is pretty much what I expect from them.
 
I highlighted the important words for me.
The Wii effect should not be overestimated; if two vendors (out of three) come out with underpowered systems and one offers a clearly more potent system, it's likely to take most of the hardcore enthusiast market, whose size and buying power should not be underestimated.

For me it's the powerful product from Intel that makes the difference ;)
 
shiznit said:
My question for the experts: are LRB's cores capable of adequately running general game code that the 360 CPU runs now or would MS need a more traditional CPU to help out?
The 360 CPU is already a far cry from a "traditional" CPU when it comes to adequately running general game code. While it's certainly possible LRB cores are even worse, we haven't seen any public data to support that (what we know suggests rough parity).
The thing is that GPR power hasn't been a real issue since last generation; we're close to a point where having more of it just isn't going to yield any perceptible benefits whatsoever.

So yea, I'm with you on the multiple LRBs thing - so long as LRB doesn't completely suck at GPR stuff, I see no need for a dedicated CPU on there.

Joshua said:
And from MS's perspective: what does MS have to gain?
Well, there's that idea that MS eventually wants to move away from manufacturing hw boxes and just control the Virtual-Console specification instead.

(how much rewriting will be needed to shift x86 "CPU" code to a Larrabee core (in-order, slower frequency, different SIMD set) to get good efficiency though?)
Probably as much as you need now with XCPU or Cell. But on the flip side, how much will that efficiency still matter as the number of cores increases?
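To make the "how much rewriting" question a bit more concrete, here's a hedged, generic sketch (nothing Larrabee-specific, and the particle example is invented): the work is usually less about porting instructions and more about reorganising data from array-of-structures to structure-of-arrays so a wide in-order vector unit, or an auto-vectorizer, can keep its lanes full.

```cpp
// Hypothetical sketch, not Larrabee-specific code: the kind of data-layout
// rewrite usually needed before a wide in-order vector core (or an
// auto-vectorizer) can keep its lanes busy. The particle example is invented.
#include <cstddef>

// "CPU-style" layout: array of structures, scalar per-element loop.
struct Particle { float x, y, z, vx, vy, vz; };

void integrate_aos(Particle* p, std::size_t n, float dt) {
    for (std::size_t i = 0; i < n; ++i) {   // strided member accesses are
        p[i].x += p[i].vx * dt;             // awkward to pack into wide lanes
        p[i].y += p[i].vy * dt;
        p[i].z += p[i].vz * dt;
    }
}

// SIMD-friendly layout: structure of arrays, unit-stride streams.
struct Particles {
    float *x, *y, *z, *vx, *vy, *vz;
    std::size_t n;
};

void integrate_soa(Particles& p, float dt) {
    for (std::size_t i = 0; i < p.n; ++i) { // contiguous loads/stores map
        p.x[i] += p.vx[i] * dt;             // directly onto wide vector lanes
        p.y[i] += p.vy[i] * dt;
        p.z[i] += p.vz[i] * dt;
    }
}
```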
 
If Microsoft wants to make a huge leap beyond Xbox 360 and compete with PS4, they'll need a console that has:

- a strong multi-core or manycore CPU from IBM, Intel or AMD
- Intel's Larrabee or 2nd-gen Larrabee
- an ATI or Nvidia GPU

Larrabee cannot, or should not, take over the role of the strong conventional multicore or manycore CPU, and Larrabee also cannot take over the role of a conventional high-end GPU, IMO. Unless a lower-end, less advanced system is the plan.

next-gen console (be it XB3 or someone else's console):

CPU <> Larrabee GPGPU <> GPU or Rasteriser.


Larrabee might make an excellent & very important component for a next-gen console, but Larrabee should not be the heart of any next-gen console.

Xbox 360 is pretty well balanced between Xenon and Xenos.

PS3 is too dependent on CELL, with a somewhat lackluster GPU, when PS3 should've had a higher-end (custom G80-based, with EDRAM) GPU to balance out the cutting-edge (for the time) CELL.


Larrabee will be better than CELL 1 at rendering graphics itself; however, I doubt first-gen Larrabee will compete with the best high-end or upper-midrange GPUs from Nvidia & AMD/ATI.


Sorry if I'm not making sense; I've had 2 hours of sleep in the last 48-50 hours.
 