Larrabee, console tech edition: analysis and competing architectures

Squilliam

If you want to reference it, the thread is here:
http://forum.beyond3d.com/showthread.php?t=46393

It seems like a perfect match of interests between the two companies. Intel wants to break into the GPU market to stave off the GPGPU threat and increase market share. Microsoft controls the DirectX specification on the PC, and can leverage developer support between the two markets. In addition to this, Microsoft can make the DirectX 11 specification favourable to Intel. *Look what happened to Nvidia when they were kept out of the DirectX 9 specification. ATI 9700 Pro.*

The release date also seems to coincide with a range of estimated release dates for the Xbox 3. Now I'm no expert, but it does seem like Larrabee will be a good fit for a console CPU, following Sony's lead with Cell.

Now the chip will be expensive, but I can see Intel giving Microsoft a good deal, as both companies have mutually beneficial perks to bring to the table.

Does this make sense or am I just blowing smoke?
 
I was going to suggest the same thing in that thread - except maybe it could be a frankenPU that's some combination of Nehalem + Larrabee. Or maybe two or three Larrabees, because I can't see a single Larrabee being much use for the entire console. And I see the motivation as Intel wanting to take control of D3D before killing it off - succeeding where NVidia failed.

I don't see Intel doing anything to D3D11 though, that's gotta be pretty much done and dusted - it's less than a year away in theory.

So, really it's just a question of whether there'll be a D3D13.

Jawed
 
I agree it's a good match of interests, and only Xbox fits the bill.

Larrabee doesn't need to influence DX for it to work in consoles; more likely DX10/11 has influenced some of the fixed functionality (or at least the design principles) in first-gen Larrabee.

It would be interesting to have a console with two Larrabee chips in it, where, as Jawed suggested, each could be customised to have one Nehalem core. In effect it would look something like the PS3 was rumoured to be before Nvidia got involved, with its dual Cell Broadband architecture.

A pure FP powerhouse, with DX abstraction at the API level.

EDIT: Here are some questions (assuming it did go into the nextbox):

Will it be fabbed at 32nm?
What will be the transistor budget for the chip?
What will be its FP performance?

How does that FP performance break down, e.g. OoO x86 / in-order x86 / vector FP?
 
I would be very surprised if the next Xbox CPU is not a beefier PowerPC with more cores, fully backwards compatible with the Xbox 360.
 
*Look what happened to Nvidia when they were kept out of the DirectX 9 specification. ATI 9700 Pro.*

This is patently false. Nvidia *chose* to design NV30 around what they perceived to be the final target for the DX9 API before it was actually finalized, and they chose wrong. They were not kept out of anything.

The release date also seems to coincide with a range of estimated release dates for the Xbox 3.

Eh, not really. Larrabee is scheduled to be out in 2009. The next XBox won't be out until 2010 at the absolute earliest, more likely 2011-2012.

Larrabee as console GPU isn't a bad idea from a performance perspective, and would certainly help Intel get devs familiar with their hardware (a definite win for them), but it's just too big (and therefore expensive) a chip to be putting in consoles.
 
Larrabee as console GPU isn't a bad idea from a performance perspective, and would certainly help Intel get devs familiar with their hardware (a definite win for them), but it's just too big (and therefore expensive) a chip to be putting in consoles.

So how big is Larrabee then? And what size will graphics/CPU chips be in next-gen consoles? You've either got some seriously good insider knowledge or a crystal ball.
 
So how big is Larrabee then? And what size will graphics/CPU chips be in next-gen consoles? You've either got some seriously good insider knowledge or a crystal ball.

No, it's called common sense. There's plenty of publicly available information about Larrabee - enough to determine that it's going to be a massive chip (unless Intel transitions to 32nm in time, and even then it would only be "large" instead of massive).
 
I'm not here to argue about the Radeon 9700, and I probably shouldn't have brought it up. :) I was thinking that Microsoft would look at a revised version of Larrabee on the "tick" (die-shrink) cycle. The architecture looks good, and I'm sure it will be flexible enough to be customized for the needs of the nextbox.

Backwards compatibility is pretty much dead in the power consoles of this generation. Sony has dropped it and Microsoft never got fully into it either. So I don't think BC is a good enough reason not to put a good chip in. Furthermore, does IBM even have another PowerPC chip? Apples are Intels now. What choice do they have? Compete Cell vs Cell?

Intel is a hungry tiger looking for new markets. I'm sure they wouldn't hesitate to go after IBM and co.'s Cell (2) if they see it as a threat.
 
Backwards compatibility is pretty much dead in the power consoles of this generation. Sony has dropped it and Microsoft never got fully into it either.

Actually, Microsoft's offering is fairly robust. They are confident enough in it that they will offer about 100 titles this year alone for download under the "Xbox Originals" program.

So I don't think BC is a good enough reason not to put a good chip in. Furthermore, does IBM even have another PowerPC chip?

Full backwards compatibility from day one, the kind you can only rely on if you use a newer version of the same CPU, will allow Microsoft to get away with an earlier launch and still keep Xbox 360 a viable low-cost platform - and being first to the market worked wonders for them this time around. Besides, keeping the CPU architecture means polished tools from day one (or day -2 yr, as console development goes), and developers familiar with its shortcomings and optimization opportunities.
 
No, it's called common sense. There's plenty of publicly available information about Larrabee - enough to determine that it's going to be a massive chip (unless Intel transitions to 32nm in time, and even then it would only be "large" instead of massive).

Can't you at least speculate some numbers that give weight to your argument? Just saying "it's common sense" does nothing to give you credibility or add value to the discussion.

For starters, common sense will tell you that Larrabee is inherently a scalable architecture, so saying Larrabee is too big for consoles is nonsense - in the same way as saying Cell is too big to put in an HDTV. You just scale it to the requirements of the application.

Secondly, the above discussion touches on the idea that a custom version of Larrabee could be made to suit the architecture of a console, in the same way ATI created a custom version of its graphics chip. This could better suit it to the needs of a console and make it more efficient for its required job.

Thirdly, Larrabee is supposed to be released in late 2009, probably on 45nm tech, by which time Intel will already have begun rolling out its 32nm tech for CPUs and will have the luxury of using that to shrink Larrabee not long after launch. By late 2011/early 2012 they will have 22nm to play with, by which time, even if the chip were a large 400mm² on 45nm tech, it would only be around 100mm² on a 22nm process.
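
For a rough sanity check, here's that napkin math as a quick Python sketch. The 400mm² starting die is the hypothetical figure above, and the quadratic shrink is an idealised assumption - real shrinks never scale perfectly:

Code:
# Back-of-the-napkin die-area scaling. Assumes an ideal shrink, i.e.
# area scales with the square of the ratio of the process nodes.
def shrink_area(area_mm2, node_from_nm, node_to_nm):
    return area_mm2 * (node_to_nm / node_from_nm) ** 2

area_45nm = 400.0  # hypothetical Larrabee die at 45nm, in mm^2
for node in (32, 22):
    print("%dnm: ~%.0f mm^2" % (node, shrink_area(area_45nm, 45.0, node)))
# prints: 32nm: ~202 mm^2
#         22nm: ~96 mm^2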

In the past, consoles have targeted about 270mm² for their GPUs (at console launch) and around 200mm² of CPU die area.

AND if Larrabee really is flexible enough to process both serial x86 code and the massively parallel, floating-point-heavy graphics/physics stuff, then you take away the need for two different dedicated chips in your console. So even if you did have two chips in the console, they would be identical and cheaper to produce, reducing the complexity and cost of the architecture and motherboard.

Incidentally, by some back-of-the-napkin calculations, 300mm² on a 32nm process would give you around 3.3 billion transistors. Compare that to the G80 budget of 680 million, or the R600's 720 million, and I think the picture becomes clear.
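
For the curious, here's a hedged sketch of where a figure like that comes from, extrapolating from G80's known numbers (roughly 681 million transistors on a ~484mm² die at 90nm) and again assuming ideal quadratic density scaling:

Code:
# Transistor budget for a 300 mm^2 die at 32nm, extrapolated from
# G80's density, assuming density scales with the inverse square of
# the process node (an optimistic, ideal-scaling assumption).
g80_transistors = 681e6  # G80: ~681M transistors
g80_area_mm2 = 484.0     # on a ~484 mm^2 die at 90nm
density_90nm = g80_transistors / g80_area_mm2      # ~1.4M per mm^2
density_32nm = density_90nm * (90.0 / 32.0) ** 2   # ideal 90nm -> 32nm
budget = density_32nm * 300.0                      # 300 mm^2 console die
print("~%.1f billion transistors" % (budget / 1e9))  # ~3.3 billion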

There could be enough transistor budget and scalability to have Larrabee in a console.

The real point of interest is whether it would be appropriate at this stage to have such an architecture in a console for solving graphics. For example, would dedicated hardware not still give better performance per watt or per transistor than a jack-of-all-trades chip? Well, that surely depends on how well it's implemented, right?
 

Excellent post! Perhaps I'd add: having thousands of programmers working on Larrabee might be the best way for Intel to break into the GPU market.

Also, I hadn't thought of the ability to use two chips.

Lastly, to everyone... where is the next generation of PowerPC chips? Apple doesn't use them, so why would IBM make them anymore?
 
I, for one, would absolutely love to see Larrabee in the next Xbox, but that's not very realistic unless Intel manages to produce it really cheap by the 2010 time frame.
I'm also not sure if Larrabee on its own can do both the graphics workload and run all the current-gen engines at the same time. Now if MS can add a main (multi-PPC?) core and relegate Larrabee to a more super-flexible-GPU role, then the sky is the limit. That would be one fabulous machine to program for.
 
I'm also not sure if Larrabee on its own can do both the graphics workload and run all the current-gen engines at the same time. Now if MS can add a main (multi-PPC?) core and relegate Larrabee to a more super-flexible-GPU role, then the sky is the limit. That would be one fabulous machine to program for.

Wow that's some surprisingly strong Larrabee support there! :)

Speaking in the theoretical, I'm wondering, given its philosophical similarities to Cell, how you would feel about a 'Cell 2' or similar acting as the GPU in PS4... really just trying to get a sense in general of what you feel the future of graphics holds in terms of flexibility and the means of programming for it. Personally I think PS4 will be a Cell-CPU derivation with NVidia on the GPU, so it's neither here nor there, save to examine what you feel the similarities and/or differences are. I have to imagine faith in Intel's compilers plays a role here; I'm just looking to understand the exact source of your excitement relative to a (more) fixed-function GPU as an alternative.
 
MS would be making a huge mistake going with Larrabee v.1. The crux for MS is providing an efficient platform with mature tools that allows developers to bring software to market within reasonable time and fiscal budgets. Going with a totally new design would run counter to their investment in the DX platform and the current development process. More importantly, I doubt that, using current methods, Larrabee will be a better performer in terms of IQ/performance for the cost of the chips.

Intel needs to show they can offer Larrabee to MS at a competitive price while at the same time offering (at least) the same performance and IQ for the target users. My guess is that Larrabee v.1 will (a) have some significant performance, as well as IQ, shortcomings compared to other DX GPUs on the market, and (b) require heavy investment in alternative approaches to game-design workflow, tools, and techniques to get the most out of it.

Long term, Larrabee may be of great interest, and I wouldn't dismiss the possibility of Larrabee being positioned, initially, as a third-tier crossover chip (e.g. Intel is marketing it for physics and AI as well as graphics...). But given the history of GPUs and how difficult it is to get all the "small stuff" right in terms of performance and IQ, the idea of a couple dozen 1.5GHz in-order x86 cores as a main processor seems suicidal.

IMO, MS has invested a lot into DX, and the end result is, for now, a reliable platform, with Nv/ATI producing chips that are very effective at what they do. They have high levels of IQ, a lot of performance per mm², and robust tools that developers are quite indoctrinated in.

I would bet that it would be a much easier transition, at this point, to leverage GPUs as a focal point of the next design and begin creating libraries, tools, and standards that open the GPU up for more non-graphics-specific work. Yeah, some will whine ("I want my GPU doing only graphics"...), yet shifting focus to more GPU resources (GPU footprint) could accommodate this concern and allow some of these data-centric, high-FLOP algorithms to be offloaded to and tested on the GPU. Going the full monty with Larrabee could be a disaster.

Then again, Intel could turn out a part with great IQ and traditional GPU performance, with the added bulk/flexibility... but let's see if that happens first.
 
Wow that's some surprisingly strong Larrabee support there! :)

Speaking in the theoretical, I'm wondering, given its philosophical similarities to Cell, how you would feel about a 'Cell 2' or similar acting as the GPU in PS4... really just trying to get a sense in general of what you feel the future of graphics holds in terms of flexibility and the means of programming for it. Personally I think PS4 will be a Cell-CPU derivation with NVidia on the GPU, so it's neither here nor there, save to examine what you feel the similarities and/or differences are. I have to imagine faith in Intel's compilers plays a role here; I'm just looking to understand the exact source of your excitement relative to a (more) fixed-function GPU as an alternative.

Well, I've been a Larrabee supporter since the first time I heard about it. I was equally excited about Cell when Sony first announced it, especially in its "patent" form, when it was meant to be used for graphics as well. As radical an idea as it was, it seems it was too early for its time. Larrabee could be too early as well; we'll see.
I certainly was surprised by Intel's design, and I certainly didn't expect them to deliver something radical that soon. In a sense, it is a complementary design to Cell: while Cell goes for local stores, explicit DMAs and huge register files, Larrabee goes for shared memory, 1-cycle L1 caches and a tiny register file with a memop instruction set. Shockingly, x86 fits the bill!
I personally think Larrabee is the superior design from a software engineering perspective, even though it will probably pay for that in hardware costs. Ultimately, Intel are the masters of evolving their platform; for any other company I'd seriously doubt the viability of this chip, but with Intel I can only be excited to see what they've come up with.
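
To make that contrast concrete, here's a toy Python sketch of the two programming models - purely illustrative, not real SPU or Larrabee code. The Cell-style routine explicitly stages a tile into a local store and copies results back (stand-ins for the mfc_get/mfc_put DMA intrinsics), while the Larrabee-style routine just reads and writes the shared, cached address space:

Code:
# Toy contrast of the two memory models (illustrative only).
shared_memory = list(range(1024))  # system memory visible to all cores

# Cell-style: DMA a tile into a small local store, compute on the
# local copy, then DMA the results back out.
def cell_style(offset, n):
    local_store = shared_memory[offset:offset + n]   # stand-in for mfc_get
    local_store = [x * 2 for x in local_store]       # compute locally
    shared_memory[offset:offset + n] = local_store   # stand-in for mfc_put

# Larrabee-style: just load/store through the coherent, cached shared
# address space and let the cache hierarchy move the data.
def larrabee_style(offset, n):
    for i in range(offset, offset + n):
        shared_memory[i] *= 2

cell_style(0, 128)
larrabee_style(128, 128)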
 
I certainly was surprised by Intel's design, and I certainly didn't expect them to deliver something radical that soon. In a sense, it is a complementary design to Cell: while Cell goes for local stores, explicit DMAs and huge register files, Larrabee goes for shared memory, 1-cycle L1 caches and a tiny register file with a memop instruction set. Shockingly, x86 fits the bill!
I personally think Larrabee is the superior design from a software engineering perspective, even though it will probably pay for that in hardware costs. Ultimately, Intel are the masters of evolving their platform; for any other company I'd seriously doubt the viability of this chip, but with Intel I can only be excited to see what they've come up with.

I think the x86 extensibility definitely makes it very approachable from the get-go, and I share your excitement. An Intel/MS partnership on the project would of course be the way to catapult the chip into the scene across the broader market as well, especially given PC game sales trending console, and development centering more on console as lead. Personally I'm looking forward to its introduction. But although an XBox v.3 tie-up would seem ideal in a micro context, obviously there are a ton of arguments (which we all know so I won't repeat) for going the traditional GPU route. I guess we'll just see what happens!
 
"Larrabee: Samples in Late 08, Products in 2H09/1H10."

The question is when the specifications for a new console need to be locked down for a certain release date. Looking at a late-2011/early-2012 timeframe, they could have early alpha machines 2-3 years before that (hypothetical) release date. Surely that would be enough time to have 10-12 good launch titles? By then the chip would have been out in the wild on PC for 1-2 years, so they could hit the ground running with regard to a few key PC ports and general development work.

The reason I made this thread is that I felt the release timelines fit a nextbox launch perfectly. A revised 32nm Larrabee for the console, with a 22nm revision a year later, sounds pretty good to me (assuming Intel's process technology scales that well).
 
Squilliam, almost everything in your post above, save ostensibly the 360-successor release date, is largely arbitrary. You're creating a correlation between hardware samples and title quality that, in the larger scheme, are really very much decoupled. Not to mention the assumption that Larrabee will hit the PC first, and be successful there even if it does. I think a lot of folk, on the contrary, view the next XBox as a potential platform from which to launch the architecture, since its form would be very much experimental in the larger PC space upon its introduction. Now... a lot of that depends on Intel's tools, support, and willingness to push - but with a guaranteed install base, as a console provides, that's the kind of thing that can spur developer familiarity and adoption, rather than the contrary. That's not at all to say, of course, that Larrabee even has a great shot at being the GPU, or that it's an option MS is leaning towards; simply that for Intel it would be the ideal scenario.

And let's leave the casual talk of 32nm and 22nm out of it for now! :p
 