I don't at all, I see another PS3-Cell.
In a closed box how could DirectX suck?
Can the Cell run DirectX? That is a big difference in my book.
P.S. If it turns out Larrabee sucks at DirectX...then forget I said anything
Basically Larrabee is compliant with everything through software.
Isn't LRB supposed to be compliant with the next DX version? If that's the case then LRB should be quite amazing for a CPU when it comes to graphics.
As Laa-Yosh stated, MS got burnt by a bad deal with the first box; they changed their approach with the 360 and got burnt again. MS is maturing as a hardware vendor: if the contract is no good they won't sign it, that's it.
I'm sure they'll talk sweet and all, but 3 years after launch they'll be busting Microsoft's balls again. Especially if we're talking about a part coming off their more advanced lines. MS is gonna be looking for shrink after shrink to drive their BOM down, and Intel isn't going to want to waste their best fabs on a product with a razor-thin profit margin.
If you think that CELL and LRB have a lot in common perhaps you shouldn't comment on either chip.
The point is that both Nvidia and ATi already ship pretty large chips with high power consumption. I think they've reached the limit in these two areas.
I wasn't saying they will/would, only that the potential is there for Intel to fall short in regards to performance and/or IQ. The GPU market in general bears out how difficult it is to attain good performance with high-quality (or better: well-balanced) IQ within a given threshold of die area, power, heat, etc. If AMD (ATI) and NV can struggle in these areas after a decade-plus of experience, I have reservations about assuming Larrabee will be competitive in performance as well as roughly comparable in AA and AF quality. Beyond the issues of experience, iteration, and refinements and efficiencies, Intel's past efforts, although half-hearted, sounded smart and had some good ideas but in the end couldn't compete.
Basically, aiming at 2 TFLOPS (I know it's a theoretical figure; we have no clue about efficiency) with full programmability might not be too shabby. (Quick back-of-the-envelope on that figure below, after the quote.)
Aaron Spink said:
Part of this comes down to the current trends in the graphics space, where the vendors are pushing the performance side of things so hard that they've lost sight of the efficiency and thermal aspects of the designs. We've got graphics cards right now pushing 2x the power of any PC CPU; the CPU guys would love to have power budgets in the 200+ watt range instead of the 45-130 watt range.
It wouldn't surprise me to see a slowdown in the performance increases in the coming years, as the designs have basically already pushed the power budgets about as far as they will go, and generally the die sizes as well.
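On that ~2 TFLOPS figure above: a rough back-of-the-envelope, assuming the rumoured 32 cores, 16-wide single-precision vector units, a fused multiply-add counted as 2 flops, and roughly a 2 GHz clock (all of which is speculation at this point):

32 cores x 16 lanes x 2 flops (FMA) x 2 GHz ≈ 2 TFLOPS single-precision, theoretical peak.

Real-world efficiency is anyone's guess, which is exactly the caveat already made.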
I agree, but given the time frame it's likely we'd be talking about the second iteration of Larrabee.
I will be shocked if Larrabee #1, at a specific die size, can be within 10% of AMD/NV's DX10/11 performance with similar IQ. The reason I would be shocked is the difficulty proven GPU makers have shown in getting these things right, and the lack of an iterative process to refine and fix issues makes me doubtful Intel will be able to address them all, with competitive performance, in their first offering.
So far the main complaint from other vendors (read: Nvidia) has been about bandwidth.
What I expect from Larrabee, personally, is some bottlenecks Intel is currently downplaying that hurt mainstream GPU performance, but also an opening up of new techniques for those with the time and money to invest in them, as well as an investment in new approaches to computing. Having 32+ x86 cores with large vector units and gobs of bandwidth should allow for some neat experiments. But will it be neck and neck with NV/AMD in regards to DX10 performance out of the box? Getting it "right" in the performance segment of the GPU market seems to be a pretty tall task that many, including Intel, have failed at.
I dunno, maybe you think it will and I will be shocked. I would love to be wrong, of course: a huge array of CPUs capable of leading-class GPU performance? Who wouldn't want that?
My question for the experts: are LRB's cores capable of adequately running general game code that the 360 CPU runs now or would MS need a more traditional CPU to help out?
At about 2-3x the performance of the current gen. That would result in, what, a bit better graphical level than this gen but with better IQ? Is that going to be enough progress?
The speculation has been that the 32 cores would run at 1.7GHz to 2.3GHz in shipping form, so that should be enough performance for one chip to do it all...
It's likely to be more than that, and from a single chip, thus a cheap box.
At about 2-3x the performance of the current gen. That would result in, what, a bit better graphical level than this gen but with better IQ? Is that going to be enough progress?
I saved the Intel Larrabee presentation PDF (from the IDF).
I'm not an expert, but if you believe the Larrabee wiki then one 32-core version should do the trick as both a CPU and GPU. It claims 25 cores at 1GHz sustained Gears of War at 1600x1200 resolution at 60fps (no MSAA), which would leave a bunch of cores over for other stuff. The speculation has been that the 32 cores would run at 1.7GHz to 2.3GHz in shipping form, so that should be enough performance for one chip to do it all, again assuming the wiki is true. On the other hand, you have to wonder how many watts such a beast would eat up.
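Taking the wiki's data point at face value and assuming roughly linear scaling with cores and clock (a big assumption), the shipping-speed headroom would be something like:

(32 cores x 1.7 GHz) / (25 cores x 1.0 GHz) ≈ 2.2x, up to (32 x 2.3) / (25 x 1.0) ≈ 2.9x the throughput of that Gears of War figure, before memory bandwidth or the power budget gets in the way.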
I was wondering, perhaps with the next generation it might help to understand some of the requirements that are going to be placed on the CPUs by the next-generation interfaces which are likely to be coming. Let's say the Xbox 360 uses a motion capture system with movement and maybe facial recognition (2+ cameras). Would that be the kind of workload suited to a parallel CPU like LRB, or would it be more suited to a more traditional x86/PowerPC variant with a more monolithic core design?
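Not an expert answer either, but to illustrate why that sort of camera processing is usually considered a good fit for a wide many-core part: the per-frame work splits into independent chunks almost for free. A minimal sketch in plain C (generic OpenMP-style parallelism, nothing Larrabee-specific; the frame size and the per-pixel step are invented for illustration):

#include <stdint.h>
#include <stdlib.h>

#define WIDTH  640
#define HEIGHT 480

/* Toy per-pixel step standing in for real vision work
   (background subtraction, feature extraction, etc.). */
static uint8_t process_pixel(uint8_t cur, uint8_t prev)
{
    int diff = (int)cur - (int)prev;
    return (uint8_t)(diff < 0 ? -diff : diff);
}

/* Every row is independent, so rows (or tiles) can be handed out
   to different cores; with 30+ cores the frame is just cut into
   30+ strips. */
static void process_frame(const uint8_t *cur, const uint8_t *prev, uint8_t *out)
{
    #pragma omp parallel for
    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++)
            out[y * WIDTH + x] =
                process_pixel(cur[y * WIDTH + x], prev[y * WIDTH + x]);
}

int main(void)
{
    uint8_t *cur  = calloc(WIDTH * HEIGHT, 1);
    uint8_t *prev = calloc(WIDTH * HEIGHT, 1);
    uint8_t *out  = malloc(WIDTH * HEIGHT);
    process_frame(cur, prev, out);
    free(cur); free(prev); free(out);
    return 0;
}

The gesture/face recognition logic sitting on top of that per-pixel stage is branchier and less regular, which is presumably where a fatter, more traditional core would still earn its keep.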
I highlighted the important words for me.
I have some things in mind that MS could see as very beneficial against the competition by adopting a powerful Intel product.
I highlighted the important words for me.
The Wii effect should not be overestimated: if two vendors (out of three) come out with underpowered systems and one offers a clearly more potent system, the latter is likely to take most of the hardcore enthusiast market, whose size and buying power should not be underestimated.
360 CPU is already a far cry from a "traditional" CPU when it comes to adequately running general game code. While it's certainly possible LRB cores are even worse, we haven't seen any public data to support that (what we know suggests rough parity).
shiznit said:
My question for the experts: are LRB's cores capable of adequately running general game code that the 360 CPU runs now or would MS need a more traditional CPU to help out?
Well, there's that idea that MS eventually wants to move away from manufacturing hw boxes and just control the Virtual-Console specification instead.
Joshua said:
And from MS's perspective: what does MS have to gain?
Probably as much as you need now with XCPU or Cell. But on the flip side, how much will that efficiency still matter as the number of cores increases?
(how much rewriting will be needed to shift x86 "CPU" code to a Larrabee core (in order, slower frequency, different SIMD set) to get good efficiency though?)
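To make the rewriting question a bit more concrete, here's a generic sketch of the data-layout side of it (plain C, not actual Larrabee/LRBni code; the entity struct and the update loop are made up for illustration). Gameplay code written around per-object structs tends to starve a wide vector unit, and getting good efficiency usually means reorganising data so 16 elements can be processed in lockstep:

#include <stddef.h>
#include <stdio.h>

/* "CPU-style" layout: one struct per entity. To vectorise the update,
   a 16-wide unit has to gather fields scattered through memory. */
struct entity { float x, y, z, speed; };

static void advance_aos(struct entity *e, size_t n, float dt)
{
    for (size_t i = 0; i < n; i++)
        e[i].x += e[i].speed * dt;
}

/* Structure-of-arrays layout: all x's contiguous, all speeds contiguous.
   A compiler (or hand-written vector code) can now process 16 entities
   per iteration on a 16-lane unit with simple contiguous loads. */
struct entities { float *x, *speed; size_t n; };

static void advance_soa(struct entities *e, float dt)
{
    for (size_t i = 0; i < e->n; i++)
        e->x[i] += e->speed[i] * dt;
}

int main(void)
{
    struct entity aos[4] = { {0,0,0,1}, {1,0,0,2}, {2,0,0,3}, {3,0,0,4} };
    float xs[4] = {0, 1, 2, 3}, speeds[4] = {1, 2, 3, 4};
    struct entities soa = { xs, speeds, 4 };

    advance_aos(aos, 4, 0.016f);
    advance_soa(&soa, 0.016f);
    printf("%f %f\n", aos[0].x, soa.x[0]);   /* same result either way */
    return 0;
}

The in-order pipeline and lower clock are a separate fight, but this kind of data reshuffling alone is probably a fair chunk of the porting effort being asked about.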