Larrabee, console tech edition; analysis and competing architectures

So no one mentions the possibility that Intel launches into its own console venture?

Since you suggested it, one, explain the thought process for Intel:

* What strategic advantage does Intel gain by entering the market?
* Can Intel make their investors a profit in the console arena? (Or is Intel's core business at risk if they don't enter the console arena?)
* Does the console market align with their synergies and business model?
* Does Intel have the retail channel relationships to demand shelf space?
* Does Intel have software IPs that will appeal to consumers?
* How will Intel acquire an internal group of developers? At what cost?
* How will Intel lure publisher support (or, minimally, multiplatform support) away from the established brands created by Nintendo, Sony, and Microsoft?

The list can go on and on.

As noted in the past, with each passing generation the "cost of entry" inflates. The real question is synergies: does Intel have a business model and relationships that complement direct entry into the console market?

For Intel it is quite backwards. They aren't protecting a software platform (like MS), and they are "The Man (TM)" everyone else follows in the chip market. They aren't a "games maker" like Nintendo, and they have very, very few "direct to consumer" products, notably CE goods, like Sony.

Their game is the high-margin market for their processors--which the console market is averse to. Intel doesn't have game developers in house like the other three do, and as has been apparent lately, the cost of a great developer is hundreds of millions of dollars. Then they would have to foot the bill for new marketing and distribution networks as well, and importantly, for an online network.

Without strong partnerships with publishers (or hostile bids) Intel would be pumping billions into a lost cause. While I don't doubt that Intel would like to be in a console (on their terms, of course), they have nothing to gain in terms of their current business, and nothing vital to protect that would justify the huge losses required to get a foot in the door.

If Intel felt it necessary to enter the market to protect x86 I think it would be much safer -- both in terms of cost to Intel as well as reaching the desired goal of market penetration -- to work out a sweetheart deal and to get their chips into an established player in the market.

Anyhow, you will have to create a strong scenario for Intel to invest in a competitive market which tends to thrive on a model where hardware losses can tally into the billions over the lifetime of a generation, only to be offset by software. There is a reason companies like Dell aren't in the market.
 
Point is slightly OT, but with regard to the idea of another competitor entering the market, I've a feeling any potential candidate observing the market may have been heartened by the success of Nintendo's disruptor, as well as by how the market seems to be splitting more evenly among more than 2 competitors than it has in some time*. It might suggest to an observer that there may be room for a new competitor... at least if they have something that could be a disruptor in the market.


*Specifically, what I mean is: while the Wii is performing some distance ahead of the other two, it doesn't seem like we're seeing a repeat of the PS2 generation, with people simply switching places. Whilst it is putting in PS2-like performance to date, or even a bit better perhaps, PS3 and 360 are doing better than Xbox and GC did (PS3/360 after just 1 year were both at roughly have the lifetime user bases of gc/xbox, or just a little less). Perhaps an indication that the market is indeed growing.
 
PS3/360 after just 1 year were both at roughly have the lifetime user bases of gc/xbox, or just a little less
You mean PS3+XB360 has a lifetime user base of one of those? 'Coz they bowed out at about 25 Million units each, and it takes XB360+PS3 sales to match one of those.

I can't see any possible reason for any new company to want to take on the console market, at least head on. Someone may try a set-top box that includes games, positioned more strongly in that respect than the PS3, given PlayStation's legacy image, and more strongly than the XB360, which is being marketed purely as a games device despite its non-gaming functions. If going for that market, though, I think the best solution would be a low-power spec that's cheap, so mass adoption is supported. You could go better than Wii spec at a lower price, maybe chuck in a camera for motion games and online video chat, and integrate it directly with iTunes and other services. That'd be a good choice for, say, Apple, who are well positioned to make a strike on that market.

I can't see any point to Intel trying that though. It'd basically be a whole new company compared to what Intel currently does. They'd need a from-scratch creation of content, games, tools, developer relations, media relations, software platforms, etc. Where would you even start?! I guess where they could start is to produce a box in conjunction with MS. Intel produces the hardware at cost, MS produces the software with DX, fixed hardware provides current PC compatibility and also a console-like system for developers, MediaCentre can be put on with access to existing services, and everyone agrees to a slice of the download pie. This would probably work far better as a DirectX box than the original!

I can't see any possible reason for a high-power system though. Historically, expensive, high-power systems haven't fared too well against lower-spec devices, for numerous reasons. PS3 is swimming against the flow because of this. Unless Larrabee is kept downscaled as a cheap solution, it would have no place in a new system IMO.
 
Joshua Luna said:
to work out a sweetheart deal and to get their chips into an established player in the market.
Established players are less likely to give them such a great deal though ;) Hence why XBox1 worked out so well for them.

Personally I think if any chip maker had the interest to do their own console I'd pick Samsung before Intel or anyone else for that matter.
 
Established players are less likely to give them such a great deal though ;) Hence why XBox1 worked out so well for them.

Personally I think if any chip maker had the interest to do their own console I'd pick Samsung before Intel or anyone else for that matter.

Samsung leading with SCE, IBM, Toshiba, nVIDIA, and most of all... coming to the tune of "Back in Black" by AC/DC, blasted from a ton of huge speakers... CELLIUS... that is Ken Kutaragi coming BACK!!!!!

The dream lives on!
 
On their own? I doubt it. For Apple or EA? Maybe.
Apple could be a candidate, but I think they are more interested in the portable area now, hence no Larrabee. EA can support every platform possible, but can't be the flag bearer of a specific platform, since it would compete with the other platforms they are currently supporting.

I don't expect the existing console companies to want Intel processors, for all the reasons put forward in this thread so far, and Intel won't license processor IP. SCE has Cell B.E., Nintendo is satisfied with its current partnership, and MS is not likely to adopt it for the reason I wrote about in an older thread.
http://forum.beyond3d.com/showpost.php?p=1105080&postcount=123

So, for me, the point of discussing Larrabee in this forum is really to evaluate how strong Intel's own will is. It's a kind of thought experiment to explore that possibility, not that I believe Intel will enter the market.

I can't see any possible reason for any new company to want to take on the console market, at least head on. Someone may try a set-top box that includes games, positioned more strongly in that respect than the PS3, given PlayStation's legacy image, and more strongly than the XB360, which is being marketed purely as a games device despite its non-gaming functions. If going for that market, though, I think the best solution would be a low-power spec that's cheap, so mass adoption is supported. You could go better than Wii spec at a lower price, maybe chuck in a camera for motion games and online video chat, and integrate it directly with iTunes and other services. That'd be a good choice for, say, Apple, who are well positioned to make a strike on that market. I can't see any point to Intel trying that though. It'd basically be a whole new company compared to what Intel currently does. They'd need a from-scratch creation of content, games, tools, developer relations, media relations, software platforms, etc. Where would you even start?! I guess where they could start is to produce a box in conjunction with MS. Intel produces the hardware at cost, MS produces the software with DX, fixed hardware provides current PC compatibility and also a console-like system for developers, MediaCentre can be put on with access to existing services, and everyone agrees to a slice of the download pie. This would probably work far better as a DirectX box than the original!
There is a reason to consider a new market, and it's when you are attacked in another market and pressed to react. Xbox's case was exactly that: MS was supplying middleware for the Dreamcast, but entered the console market itself when pressed by the PS2 launch.

If you follow what Intel has been doing, it has had a Digital Health Group since 2005. Right now it's meant for hospitals and professional medical systems, but looking toward a ubiquitous-computing future, it's also eyeing personal healthcare. I can imagine how Intel sees things like Wii Fit. Wii is like wildfire. It has network functions. Though gamers tend to emphasize that games are the most important thing, I don't think games are more than a trigger. Nintendo knows this well; the original name of the NES in Japan was "Family Computer". Wii is designed with the intention of becoming the dominant home computer. It's not intrusive, it can run 24/7 at low power, it has Wii Channels, it has an intuitive controller, and it's selling universally. It has mass appeal and will affect future sales of PCs with Intel CPUs. A comment by Intel's CTO in this article only mentions the control method of the Wii and replacing it with better processing power,
http://www.businessweek.com/technology/content/dec2007/tc20071212_550604.htm
but it might be Intel's confession that it is really keen on the market Wii is creating right now.

As for implementation, Intel owns the chipset business and has been offering a reference laptop PC platform to Taiwanese makers for years. It can offer a reference board to other manufacturers such as Samsung, which was in the 3DO business too. It has flash memory too, manufactured in-house. The OS is Linux. It doesn't need very complex media-center software, just like Wii; the software stack is enough if it's as good as Google Android. There are a huge number of Intel-based STBs already; Intel can buy one of those makers if needed. In addition, Intel sells many commercial software libraries and tools, so I don't think they have to rely on MS for software. It also owns Havok now, which can be as effective as MS's ownership of DirectX. For games, it can have PC ports like Alan Wake. If id Software can port their engine to the Mac, I suppose it's not very difficult to port it to this simple Linux system with a single Larrabee processor and a camera. EA will port Wii games to this system and make money.
 
Interesting comment One ;)

I just want to add something.
The PS3 can already run Linux; in fact everybody knows that KK wanted the PlayStation to be a computer (not achievable with the first iterations, obviously).
This helped push MS to enter the console market, so as not to let Sony take control of the living room (home server).
Now MS is here.
The next-generation systems are likely to be really competitive as home servers/personal computers.
And it's more the amount of RAM available that prevents current consoles from being competitive than anything else (and I'm pretty sure the 360 could be an interesting Linux box due to its UMA design; obviously MS would kill your brother to prevent this :LOL:).
Moreover, MS is already speaking with ISPs about selling the 360 as a compliant set-top box.

Just as the PS3 is an attractive gaming device and BRD player this gen, the next-gen systems could become an interesting alternative to a personal computer/set-top box => a pretty good home server.

Consoles may start to steal market share from the personal computer market.
Intel may find it interesting to be present on both sides of the equation.
Besides, a lot of brands should be concerned about this (Dell/HP/Apple, etc.): not their services to professionals, but their market share in the personal/laptop market could start to suffer.
 
You mean PS3+XB360 has a lifetime user base of one of those? 'Coz they bowed out at about 25 Million units each, and it takes XB360+PS3 sales to match one of those.

Sorry, I must have had a brain freeze there. I meant to say "half", not "have". As in, PS3 and 360 in just their first years already reported numbers that were roughly half those reported for Xbox and GC over their entire lifetimes (which I took to be ~20-25m each).

And I don't think it would be wise to enter the market with simply a me-too copy of the PlayStation, Xbox, or Wii strategy... but I think if a company had a fairly unique offering, or something that could be disruptive, this generation might prove encouraging.
 
Honestly, if there was something Intel could do, it would be to start a games studio and make games scalable to work on all their CPUs and chipsets, and on all graphics solutions from them, ATi, or Nvidia. Especially with Havok on their side it would be an interesting move on their part, not least because games are not made to work on their rather abysmal excuses for graphics processors. I don't care if they are made to be power efficient or whatnot; they are just completely lackluster in performance.
 
As for Larrabee, it had better have good graphics performance, as the new 3870X2 is touted as >1 TFLOPS.
I know that numbers are often meaningless, but 1 TFLOPS for Larrabee when it comes out won't impress anybody. What I'm trying to say is that specialised hardware has already hit the teraflop bar.
It will be tough for a more general-purpose device to compete on graphics prowess.

Well, the shrunk-down R600 is 192 mm² on 55nm. It has 320 ALUs clocked at ~800MHz. To match the ALUs of R600, Larrabee needs at least 20 cores. According to Intel, a Larrabee core is ~10 mm² on 45nm, so a 20-core Larrabee would be around 200 mm² just for the cores, not including caches, fixed-function units, buses and the memory controller.
By going with x86 cores, Intel won't be able to match the number of ALUs that ATI or NVIDIA can squeeze into the same area. What Intel can do is make each Larrabee core achieve a high clock speed. At the moment they are aiming at ~2 GHz, but for their sake I hope they can get to around ~4 GHz. Then it all comes down to efficiency.

They are probably aiming for 2+ TFLOPS.
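As a sanity check on those figures, here is a quick back-of-envelope script. Every input is an assumption taken from this thread (~10 mm² per core on 45nm, 16-wide SIMD with a multiply-add per lane, ~2 GHz target clock); none of these are confirmed Intel specifications.

```python
# Back-of-envelope check of the area and peak-FLOPS numbers above.
# All inputs are this thread's assumptions, not confirmed specs.

def core_area_mm2(cores, area_per_core=10.0):
    """Area for the cores alone; caches, fixed-function units,
    buses and the memory controller are all extra."""
    return cores * area_per_core

def peak_gflops(cores, clock_ghz=2.0, simd_width=16, ops_per_lane=2):
    """Peak single-precision GFLOPS: cores * lanes * (mul+add) * clock."""
    return cores * simd_width * ops_per_lane * clock_ghz

for cores in (20, 24, 32):
    print(f"{cores} cores: ~{core_area_mm2(cores):.0f} mm^2 of core area, "
          f"~{peak_gflops(cores) / 1000:.2f} TFLOPS peak")
```

Under these assumptions it takes roughly 32 cores at ~2 GHz to clear the 2 TFLOPS mark, which is consistent with the guess above, but that is already ~320 mm² of core area alone.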
 
And then they'll have matched an R600, in time for other GPUs from ATi and nVidia to totally trounce it. I don't see how Larrabee can be a viable alternative to a standalone GPU except in certain lower-performance markets. Unless there's going to be a big shift in the rendering pipeline so that it's cleverer work rather than more power that produces results, which is very possible. Then the more flexible cores could be a plus.
 
And then they'll have matched an R600,

Well, it matches the 3870X2.

in time for other GPUs from ATi and nVidia to totally trounce it.

It all comes down to clock speed. Just as NVIDIA's 128 ALUs can keep up with ATI's 320 ALUs, Larrabee cores will likely be clocked higher compared to ATI or NV GPU ALUs. According to those Intel CPU-GPU war slides, a core produces 6.25W at 4 GHz, so I am sure 2 GHz is within reach.
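Taking the quoted 6.25 W @ 4 GHz per-core figure and assuming the classic dynamic-power model P ~ C·V²·f with supply voltage scaled roughly linearly with frequency (so P ~ f³), a rough illustration of what backing off to 2 GHz buys:

```python
# Sanity check on the 6.25 W @ 4 GHz per-core figure quoted above.
# Cubic power-vs-clock scaling is a textbook approximation here,
# not a claim about Larrabee's actual circuit design.

def scaled_power_watts(p_ref_w, f_ref_ghz, f_ghz):
    """Estimate per-core power at a new clock under P ~ f^3 scaling."""
    return p_ref_w * (f_ghz / f_ref_ghz) ** 3

per_core = scaled_power_watts(6.25, 4.0, 2.0)
print(f"per core at 2 GHz: ~{per_core:.2f} W")
print(f"32 cores at 2 GHz: ~{32 * per_core:.0f} W total for the cores")
```

Under that (optimistic) model, a 32-core part at 2 GHz spends on the order of 25 W in the cores, which is why 2 GHz looks comfortably reachable even if 4 GHz is not.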

I don't see how Larrabee can be a viable alternative for a standalone GPU except in certain lower-performance markets.

Well, Larrabee should be competitive in shaders, but the other parts are still up in the air with the information available.

As for console use, having an all-Larrabee system instead of a CPU/GPU combo may go a long way.

Unless there's going to be a big shift in the rendering pipeline so it's cleverer work rather than more power that produces results, which is very possible. Then the more flexible cores could be a plus.

Well, the GPUs are moving through many revisions of the shader model. Larrabee would be like the end of that road, the ultimate version of those shader models.
 
The choice to go with 7 SPE's out of 8 on die improved yields.

I've actually wondered if this was a yield decision or a thermal hot-spot decision.

Check out the following die-heat photo for Cell from their ISSCC 2005 paper "The Design and Implementation of a First-Generation CELL Processor" (attached). If you look at the temperature photo, one of the Cell SPEs (the one next to the PowerPC core) is *really* hot. I saw a presentation from IBM once that said an internal research project re-floorplanned Cell to put the PowerPC core in the middle of the chip (rather than on the edge). It helped reduce the maximum heat of the die substantially.

Does anyone know for certain if the "8th SPE" that isn't used is whichever of the SPEs is the slowest (or has a fabrication defect), or is it always the one next to the PowerPC core?

One nice thing this diagram shows is how impressively low-power the SPEs are.
 

Attachments

  • cell-isscc-2005.jpg

SPE is a real processor, VMX is an execution unit.
They need to wrap at least parts of a minimal RISC processor around VMX...

I wanted to say a bit more on XeCPU vs Cell vs Larrabee. Over the weekend, I was talking to a Cell designer at a technical committee meeting that I attended. I asked him, how was the SPE so small, fast, and low-power for what it does?

He basically told me that the SPE was designed to be as simple as possible, with as few components as possible. Instead of separate register files for integer and VMX, there is only a single 128-bit register file. With just a unified local store, they don't have to worry about placing a separate instruction memory and data memory. By avoiding caches, they avoided the tag array and tag-lookup logic. They also avoid all the control logic for handling cache misses. By avoiding memory translation, they avoided the TLB and the TLB-miss control logic.

Instead of needing to floorplan, lay out, and optimize a few dozen or so structures, an SPE is really just the local store array, the register file, and the execution units. The amount of "control" and instruction-decode logic is really minimal.

Once they had so few structures to worry about, they then hand-tuned and fully optimized them. It sounds like it was all done with full-custom logic with lots of attention to power saving techniques, transistor sizing, etc.

Ok, it is clear that the no-cache co-processor idea of Cell is more effective than I originally thought. The question still remains as to whether the low-level circuits of the SPE are so good or the PowerPC core in Cell/XeCPU is so bad. Nevertheless, I've likely underestimated its effectiveness.

I'm still a bit concerned about the more complicated programming model of Cell. What this designer said was that legacy code just doesn't work very well, but if you have Cell in mind when you start, it isn't so bad (better than he thought it was going to be).

So, what does this imply for Larrabee?

I think there are two key differences among the SPE model, the XeCPU, and a Larrabee-like many-core CPU design.

First, the SPE vs XeCPU shows us that all the control logic and such of a CPU (rather than a co-processor) really adds up. As such, I think Larrabee's decision to use 4x wider vectors than the SPE is really critical. It seems that one of the reasons the Larrabee design might be competitive is that it amortizes the control logic of a CPU over a larger (wider) data computation path. Conversely, that also means that Larrabee's vector utilization will be critical in its overall performance.

Second, another key difference between the SPE and XeCPU is the amount of custom logic in them. Intel likely has the resources to put more effort into optimizing the Larrabee core than IBM did with the XeCPU. This could also help close the gap.
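Returning to the first point, the amortization trade-off can be sketched with a toy model. The area figures here (1.0 unit of fixed control/decode logic per core, 0.25 per vector lane) are invented purely for illustration and do not reflect any real SPE or Larrabee numbers:

```python
# Toy model: wider vectors amortize a core's fixed control/decode
# logic, but only while lane utilization holds up. All area figures
# are made up for illustration.

def useful_ops_per_area(width, utilization, control_area=1.0, lane_area=0.25):
    """Useful lane-operations per cycle per unit of core area."""
    return (width * utilization) / (control_area + width * lane_area)

narrow = useful_ops_per_area(4, 1.0)      # SPE-like 4-wide core, fully utilized
for util in (1.0, 0.5, 0.25):
    wide = useful_ops_per_area(16, util)  # Larrabee-like 16-wide core
    print(f"16-wide at {util:.0%} utilization vs 4-wide: {wide / narrow:.2f}x")
```

In this toy model the 16-wide core wins on useful ops per area at full utilization, but falls behind the 4-wide core once utilization drops below ~63%, which is exactly why vector utilization is the critical number for Larrabee.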
 
I've actually wondered if this was a yield decision or a thermal hot-spot decision.
Since power has become a primary constraint for processors, variation in thermal properties in devices is a subset of the yield discussion.

Does anyone know for certain if the "8th SPE" that isn't used is whichever of the SPEs is the slowest (or has a fabrication defect), or is it always the one next to the PowerPC core?
I believe it varies. With the ability to disable any faulty core, there's only a small probability that the SPE next to the PPE is the one that is faulty.

I wonder if the OS reserved SPE is the one closest to the PPE, if possible.
 
With the ability to disable any faulty core, there's only a small probability that the SPE next to the PPE is the one that is faulty.

I guess what I was trying to say was, with a thermal map like that, the SPE closest to the PowerPC core has a much higher chance of not meeting thermal requirements (that is, it will *always* be the first to overheat). Or, more likely, if that SPE is running full-speed, it is the PowerPC core that will overheat. In either case, IBM might just be disabling the same SPE to get the entire chip to meet thermal requirements.

Of course, they could in theory tolerate a defect in the logic of any SPE. Just seeing that thermal diagram made me suspicious.
 
I guess what I was trying to say was, with a thermal map like that, the SPE closest to the PowerPC core has a much higher chance of not meeting thermal requirements (that is, it will *always* be the first to overheat). Or, more likely, if that SPE is running full-speed, it is the PowerPC core that will overheat.
Yes, the PPE is almost always the first to overheat, and the SPEs look relatively safe.
Then again, the situation is not exactly better when the PPE is surrounded by multiple SPEs.
I'd think the main advantage of a central PPE is reduced communication switching, and thus power.
As such, for the current CELL layout, disabling the closest SPU should increase power consumption in exchange for slightly better heat distribution.

Plus IBM has a "CELL" patent for improving yield by means of disabling one of the cores.
 
I have a little question.
I will quote results I read on hardware.fr from their HD3870X2 review.
For overall performance:
HD3870 @ 1920x1200 = 100 (the card is used as the reference)
HD3870 @ 1920 AA4x = 68.9

HD3870 X2 @ 1920x1200 = 162.1
HD3870 X2 @ 1920 AA4x = 113.2

In regard to silicon cost, performance doesn't scale that well.
Drivers may be immature, but that's not the point anyway.

Current GPUs are already big, to the point where manufacturers plan to use more than one chip to achieve their performance goals.
Could Larrabee scale better in this regard due to its almost complete lack of dedicated hardware?
 
I have a little question.
I will quote results I read on hardware.fr from their HD3870X2 review.
For overall performance:
HD3870 @ 1920x1200 = 100 (the card is used as the reference)
HD3870 @ 1920 AA4x = 68.9

HD3870 X2 @ 1920x1200 = 162.1
HD3870 X2 @ 1920 AA4x = 113.2

In regard to silicon cost, performance doesn't scale that well.
Drivers may be immature, but that's not the point anyway.

Current GPUs are already big, to the point where manufacturers plan to use more than one chip to achieve their performance goals.
Could Larrabee scale better in this regard due to its almost complete lack of dedicated hardware?

Interesting, Xfire scales slightly better with AA on. :)
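For reference, the scaling implied by the quoted hardware.fr index numbers works out as:

```python
# Scaling implied by the hardware.fr figures quoted above
# (single HD3870 at 1920x1200 = 100 is the reference index).
single = {"no AA": 100.0, "AA4x": 68.9}
dual   = {"no AA": 162.1, "AA4x": 113.2}

for mode in ("no AA", "AA4x"):
    scaling = dual[mode] / single[mode]
    print(f"{mode}: {scaling:.2f}x one chip ({scaling / 2:.0%} of ideal 2x)")
```

Both modes land at roughly 80% of ideal two-chip scaling, with AA4x a touch better, matching the observation above.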

Anyhow, GPUs are already massively parallel; for instance the RV670 has 320 stream processors. The reason for splitting the chip up is so that they can produce more products out of it. Look at Intel: pretty much one chip design (Core 2) covers everything.

I think AMD intends to put the RV670 or 770 into their Fusion CPU, so having a fully functional high-performance chip will do wonders for their company: less R&D and lower manufacturing costs. One chip that scales from CPU integration up to cards with 1-4 chips makes sense to me from an economics perspective. I'm going off topic, but I hope this helps.
 
Could Larrabee scale better in this regard due to its almost complete lack of dedicated hardware?

I've heard an unsubstantiated rumor that Larrabee will support Intel's CSI/QuickPath technology. This is Intel's response to AMD's HyperTransport, and it is used by near-term Intel chips such as Tukwila (IPF) and Nehalem. It is Intel's new approach to chip-to-chip signaling and its cache coherence protocol.

If this rumor is true, Intel would be able to connect four (or even more) Larrabee chips together on the same GPU card. The cache coherent nature of QuickPath will allow these multiple chips to look like a bigger single Larrabee, but with different communication latencies (on-chip vs off-chip). If so, Intel might have a very competitive multi-GPU solution for the high end.

So, I agree that Larrabee's general-purpose nature will help multi-GPU scaling (both for the hardware and the software).
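As a sketch of the "bigger single Larrabee with different latencies" idea: even with coherent links, software still has to care about locality. The cycle counts below are invented for illustration; they are not QuickPath numbers.

```python
# Illustration of why a cache-coherent multi-chip Larrabee would look
# like one big chip with non-uniform latency. Latencies are made up.

def avg_access_cycles(n_chips, local=20, remote=120):
    """Expected load latency if data is spread uniformly across chips,
    so (n-1)/n of accesses must cross a chip-to-chip link."""
    remote_fraction = (n_chips - 1) / n_chips
    return (1 - remote_fraction) * local + remote_fraction * remote

for n in (1, 2, 4):
    print(f"{n} chip(s): ~{avg_access_cycles(n):.0f} cycles per access on average")
```

Naive uniform data placement quickly converges on the remote latency, so the win from general-purpose cores would come from software keeping working sets chip-local rather than from coherence alone.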
 