PS4 to be based on Cell?

And damn HOT.
I doubt any console maker will choose Larrabee. Console makers don't want to buy finished chips; they want to license the IP. Remember the first Xbox and its NV GPU: it was a big failure for M$, and they even ended up suing NV over chip pricing.

I think this point is overplayed.

MS wanted the IPs for the 360 so they would have more control over cost reduction: they could put bids out to various fabs, save money on process node shrinks, and even integrate chips for additional cost savings down the road. IIRC MS doesn't own the PPC IP in general, only its specific use in the Xenon chip for the Xbox platform; ditto Xenos: MS owns a license for the Xenos chip and direct adaptations of it, but MS doesn't own AMD's patents or the ability to re-use them in new designs (i.e. a new GPU).

The real question for Larrabee is whether Intel is willing to play ball by offering competitive pricing. Intel has had little reason to do so, but with Larrabee and the GPU market in general Intel may have some incentive to use the next Xbox (or Wii or Playstation) to get their foot in the door and head off the competition from making inroads into their marketshare and mindshare. And if Larrabee doesn't suck and Intel is willing to be cost competitive, Intel offers something the other partners would struggle to offer: a high degree of assurance of hitting 32nm at launch (for CPU and GPU) and market leadership in reaching the 22nm and 16nm nodes, and possibly beyond. Going with Intel, if a traditional cost reduction schedule is planned, is the safest bet. Remember, we had a number of cheerleaders here in 2003-2005 talking up Sony's 65nm node (the one Cell would launch on) as better and sooner than Intel's, etc. Looking at the landscape of the current market, while we hear rumblings of companies trying to make moves on mass production of advanced process nodes, the sure money really is on Intel here. Just look at the 45nm node: Intel reached it in Q4 last year for production parts and AMD is getting there now... and the consoles are struggling to get to 65nm, with TSMC somewhere in between for cutting edge products. There is a good chance Intel will be releasing 32nm chips before a single 45nm console chip is released.

If Intel is willing to be cost competitive to get the Larrabee architecture to market, to garner developer experience and support and to stave off NV/AMD GPUs, there are some big advantages to going with Larrabee if it is performant. Intel could offer some interesting options in regards to higher clocks and/or lower heat, more transistors yet in a smaller die area, the ability to move multiple chips to the next process node at the same time, or even migrating a multi-chip solution (e.g. 2 x LRB v.2) into a single chip at the next process node. If Intel & "Partner" decide to go with a LRB solution that can be future looking/cross market (e.g. 2 "standard" LRB v.2 chips at launch), Intel could use those chips for PCIe boards, and the single chip shrink could again be used for a midrange product in addition to the console part (hence potentially some binning). The power consumption of LRB is probably too high for a laptop, but who knows what kind of arrangement can be made if they decide to tack on a couple of OOOe cores (single die CPU/GPU solution for midrange PCs?).

All rambling speculation, but I wouldn't count Intel out. They have a lot to lose. Unlike Cell, LRB has an established market (GPUs) to get a foot in the door (hence one reason why Cell based cards never took off). Intel absolutely wants to prevent NV and AMD from "moving the ball" into their court, so LRB is an important investment to extend the x86 platform and prevent others from eroding their marketshare. Even better if Intel can move into the GPU market and capture more of the HPC market (basically Intel & x86 everywhere, from Atom to multi-array LRB super clusters). Getting into consoles would ensure developer familiarity and the advancement of tools that work with the platform.

Of course LRB could end up being slow, too large for its performance at that, and cost an arm and a leg, while NV/AMD offer cheaper, faster solutions that are great for graphics and physics.
 
All rambling speculation, but I wouldn't count Intel out. They have a lot to lose. Unlike Cell, LRB has an established market (GPUs) to get a foot in the door (hence one reason why Cell based cards never took off). Intel absolutely wants to prevent NV and AMD from "moving the ball" into their court, so LRB is an important investment to extend the x86 platform and prevent others from eroding their marketshare. Even better if Intel can move into the GPU market and capture more of the HPC market (basically Intel & x86 everywhere, from Atom to multi-array LRB super clusters). Getting into consoles would ensure developer familiarity and the advancement of tools that work with the platform.
Eh... but Intel is already in the GPU market. Developer familiarity? They are already familiar with it; wasn't that the point from the beginning? If they're really not, then Intel is in a tough situation. If Intel considers the console market, it's when they can use throwout chips for consoles.

By the way, when we talk about a new generation, we have to talk not only about the vehicle but also its payload, i.e. the software. New hardware will be optimally designed for a specific workload. Game consoles especially are highly cost-sensitive. Also, some workloads may be moved to remote servers in the future.

When Kutaragi insisted on dual HDMI or HDMI 1.3 bandwidth, I suspected stereo 3D. Things are certainly moving in that direction.
http://ps3.ign.com/articles/819/819821p1.html
http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=210200055
 
Eh... but Intel is already in the GPU market. Developer familiarity? They are already familiar with it; wasn't that the point from the beginning? If they're really not, then Intel is in a tough situation.

I don't think anyone is going to be familiar with Larrabee. It might have a very similar ISA to desktop x86 cores, but it'll act completely differently and will thus have to be programmed completely differently. On a desktop core you can expect the OOO hardware to make up for your programming mistakes and get around the cache/memory latency. You get neither of these on Larrabee, so in some respects it'll be just like Cell.

If you take existing code and run it on Larrabee it's not likely to run very fast. I think developers' first experiences with it are likely to be rather disappointing.
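To make that concrete, here's roughly the sort of adjustment I mean. On a desktop core the plain loop below is fine because the OOO hardware overlaps the cache misses for you; on an in-order core you'd typically have to hide that latency yourself, e.g. with software prefetch. A minimal C++ sketch, and the prefetch distance is an arbitrary number I picked for illustration, not a Larrabee figure:

Code:
#include <xmmintrin.h>   // _mm_prefetch / _MM_HINT_T0
#include <cstddef>

// Plain traversal: an out-of-order core overlaps the cache misses for us.
float sum_naive(const float* data, std::size_t n) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < n; ++i)
        sum += data[i];
    return sum;
}

// On an in-order core nothing reorders around a miss, so we issue the
// prefetch ourselves. AHEAD is a made-up tuning constant.
float sum_prefetched(const float* data, std::size_t n) {
    const std::size_t AHEAD = 16;
    float sum = 0.0f;
    for (std::size_t i = 0; i < n; ++i) {
        if (i + AHEAD < n)
            _mm_prefetch(reinterpret_cast<const char*>(data + i + AHEAD), _MM_HINT_T0);
        sum += data[i];
    }
    return sum;
}

Trivial example, but scale that up to pointer-chasing game code and you can see why a naive port might disappoint at first.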
 
Hey one, still hanging in there, fighting tooth and nail for that 10 year life cycle before the PS4 is launched? :LOL:

Eh... but Intel is already in the GPU market.

Eh... That is like saying Sony, via GS, is in the GPU market.

Yes, Intel delivers graphics on their IGPs. No, Intel isn't remotely competitive with AMD or NV in the GPU market. Intel is behind in IP and development and does a lot of licensing (e.g. via Imagination Technologies). For Larrabee Intel has done a lot of contracting for people to develop drivers and tools, as well as in-house development of a cutting edge "GPU" (or whatever they end up classifying it as). The point being, of course, that Intel feels AMD and NV are real threats in the GPU market because Intel doesn't have GPU parts competing in the high end [GP]GPU arena.

Take NV. Not only has NV openly challenged Intel, their recent moves indicate they are serious. They acquired mental images (mental ray, VFX) as well as AGEIA (realtime physics API) and have been pushing CUDA (GPGPU, high-throughput stream processing). They have a dedicated HPC product (Tesla) and have recently introduced double precision. NV's newest GPUs are quite a bit more flexible and robust than previous generations, and there is no doubt that within the next 3 years they will be more so. Oh, and they have top DX10 parts from the very low end to the very high end, including SLI.

Intel has nothing competing in these markets at these sorts of price points. And Intel is threatened by the rapid movement NV and AMD are making in these markets. If GPUs start taking over tasks generally dedicated to the CPU on PCs/consoles (e.g. physics, AI, etc.) that could potentially mean a shift in silicon budgets. The future of computing isn't in how fast you open a browser or how fast your word processor spell checks; the domain is media: how fast can you decode a video, how fast can you render an image, how fast can you calculate physics. While Intel has been focusing on a route where a single core is as fast as possible, NV and AMD (ATI) have built a wealth of experience designing systems with hundreds of execution units connected to large, fast memory pools (both on-die and off).

The emerging markets are still up for grabs, but it is becoming more and more clear that a traditional CPU probably won't be the ultimate winner. Intel is also a bit behind here, and their current GPU offerings, regardless of arguments that they are in the market, fail to offer anything to these emerging markets. And Intel knows this.

And now you do too ;)

Developer familiarity? They are already familiar with it; wasn't that the point from the beginning? If they're really not, then Intel is in a tough situation. If Intel considers the console market, it's when they can use throwout chips for consoles.

Developers are familiar with DX & DX code will run on Larrabee.
Developers are familiar with x86 & x86 code will run on Larrabee.

What they aren't familiar with is an in-order 32+ core, 128+ thread x86 machine with a new 512bit SIMD extension. Further, my complete thought on developer familiarity was more specific, as I said, "Getting into consoles would ensure developer familiarity and the advancement of tools that work with the platform." Yes, if developers want to use Larrabee as a many-core x86 processor or a standard DX GPU they can, but that wouldn't get the most, or best, out of the platform. Being realistic, Intel will have a bit of work to do on their drivers at launch and, if history is any measure, there is little doubt that Larrabee v.1 will have some nagging issues that put a damper on performance in at least some areas (note how fickle the PC market is in regards to a 10-15% performance gap at 2MP+ resolutions). Until Intel is competitive in DX across the board, and can migrate the architecture cheaply to standard CPUs or IGPs, they will need to find areas to be competitive. HPC, physics, and video encoding and decoding are all candidates. The biggest way to sway developers to begin thinking "The Larrabee Way" is to get it into a console so that middleware can be adapted and optimised for the platform. The market Intel is chasing (see above) will require support and tool development, or they will be playing on NV's and AMD's turf; and from their own words they seem to be downplaying Larrabee v.1 as a straight-up GPU aimed to go toe-to-toe out of the gate with the competitors' best and brightest in regards to DX11 performance. They are already in a tough place, and getting into the console market is one way to level the playing field (which could tip it into their favor). I doubt Intel plans to take the market by storm by focusing on DX11 benchmarks alone.
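To put the "512bit SIMD extension" part in perspective: that is 16 single-precision floats per instruction versus 4 for SSE, so code has to be laid out in 16-wide strips to feed it. I won't guess at the actual LRB intrinsics, so here's a plain C++ sketch where each 16-lane inner loop stands in for what would be a single vector instruction:

Code:
#include <cstddef>

// out = a*x + y, processed in 16-float strips. On a 512-bit SIMD unit each
// 16-lane inner loop below would collapse into one vector instruction.
void saxpy_16wide(float a, const float* x, const float* y,
                  float* out, std::size_t n) {
    std::size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        float r[16];
        for (int l = 0; l < 16; ++l)     // hypothetical single multiply-add
            r[l] = a * x[i + l] + y[i + l];
        for (int l = 0; l < 16; ++l)     // hypothetical single store
            out[i + l] = r[l];
    }
    for (; i < n; ++i)                   // scalar tail
        out[i] = a * x[i] + y[i];
}

The mental shift is closer to writing shader code than typical x86, which is exactly the familiarity gap a console win would help close.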

If you have followed Larrabee, one of the big selling points, besides supporting DX and being x86, is that it is programmable to the degree that you can write your own render pipeline... or pretty much anything else. In some ways it is similar to Cell, but they junked the LS for cache, have some dedicated graphics hardware, and are piggybacking on the x86 platform in addition to the SIMD extensions. So there are concessions that may not be ideal for peak performance but are aimed at compatibility and at allowing developers to get code up and running quickly (or re-using old code, especially in non-performance-sensitive areas). The idea that you can load balance the design (a la unified shaders, but now between "CPU" and "GPU" tasks) is another concept not yet explored. E.g. in a Larrabee-centric system, if graphics were your bottleneck your Larrabee cores could be tasked with more graphics work, and vice versa. We are essentially talking about a large many-core CPU cluster with wide SIMD units which happens to have some dedicated graphics parts (like texture units). Getting people like Epic, id, Crytek, Havok, and such on board developing software specifically for your platform is a smart move, especially when you open up proprietary ways of doing such which aid an ecosystem (x86) where you absolutely own the market. But going the PC add-in card route really puts a lot of pressure on Intel: if they fail to win key DX11 benchmarks in a timely manner they lose, and they allow NV/AMD to further solidify their market and leverage it into other avenues. Intel surely doesn't want NV, for example, to use AGEIA to move the physics market to being NV-GPU based. Nor do they want to see a proprietary platform like CUDA gain headway that locks them out of markets.

A great way to neutralize this is by being relevant in the GPU market. Which at this point they aren't in regards to the market we are discussing. Their IGP marketshare wins them nothing in this discussion.
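To make the load-balancing point a couple of paragraphs up a bit more concrete: with identical cores it basically comes down to one work queue feeding everything, so whichever kind of task dominates a frame naturally soaks up more cores. A toy C++ sketch, with every name being mine rather than anything Larrabee-specific:

Code:
#include <atomic>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// One shared queue, identical workers: if shading tasks dominate this frame,
// more cores end up shading; if physics dominates, they do physics instead.
class WorkPool {
public:
    explicit WorkPool(unsigned cores) {
        for (unsigned i = 0; i < cores; ++i)
            workers_.emplace_back([this] { run(); });
    }
    void submit(std::function<void()> task) {
        std::lock_guard<std::mutex> lock(m_);
        q_.push(std::move(task));
    }
    void finish() {                       // call after the last submit
        done_ = true;
        for (auto& w : workers_) w.join();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::lock_guard<std::mutex> lock(m_);
                if (q_.empty()) {
                    if (done_) return;    // drained and nothing more coming
                    continue;             // spin; a real pool would sleep
                }
                task = std::move(q_.front());
                q_.pop();
            }
            task();                       // "shade a tile" or "step physics"
        }
    }
    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> q_;
    std::mutex m_;
    std::atomic<bool> done_{false};
};

Unified shaders already do this across shader types; LRB would just extend the same trick across "CPU" and "GPU" work.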

By the way, when we talk about a new generation, we have to talk not only about the vehicle but also its payload, i.e. the software.

we?

Anyhow, I am all about software and defending those lazy developers, because they want to focus on their game title, not on fighting hideous hardware, managing 17 APIs, or circumventing OS bloat.

New hardware will be optimally designed for a specific workload. Game consoles especially are highly cost-sensitive. Also, some workloads may be moved to remote servers in the future.

Cost-sensitive and "optimally designed for a specific workload" are not compatible. Cost-sensitive means you get the best bang for the buck with the dollars you have. Further, game code cannot be confined to any optimal hardware design. Different games push on all sorts of boundaries (bandwidth, memory footprint, cache, float throughput, optical disk streaming, etc.). What a platform designer needs to do is look at where the important performance-sensitive games are hampered, in conjunction with where software is going (what will its needs be? how will things evolve?) as well as what hardware available on the market, in that time frame, can be deployed to meet these needs within a framework of cost, delivery time, platform presentation, services, developmental tools, and so forth. Ironically, the movement with GPUs and Larrabee is that we are moving AWAY from optimally specialized units to more generalized units. The benefit is that while they may not be optimal, you get higher utilization. This is why the AGEIA PPU didn't make it into a console: it may have been great (with PCIe...), but if it only helps 50% of games and only for 30% of frame time, would moving those silicon dollars to the CPU or GPU not have been better on average? Of course, and so it was.

As for cloud computing... we won't be seeing that much in the US any time soon for realtime data. The latency is too high and the bandwidth too low, especially with the talk of bandwidth transfer caps. Yeah, things can be offloaded to servers... but games already do that with dedicated servers. Anyhow, online penetration is still too low to base any platform on this concept. The same reason Digital Distribution is shot down as a bypass of optical media like BluRay for next gen... ;)

When Kutaragi insisted on dual HDMI or HDMI 1.3 bandwidth, I suspected stereo 3D. Things are certainly moving in that direction.

The HDTV market is still growing in the US--and it took forever to get it moving. US consumers aren't going to be interested in ditching their new TVs and such features are a poor selling point to consumers who have no need for such.

The PS3 should have taught us that much.
 
The HDTV market is still growing in the US--and it took forever to get it moving. US consumers aren't going to be interested in ditching their new TVs and such features are a poor selling point to consumers who have no need for such.

The PS3 should have taught us that much.
Just for the record: do you think that HDDVD and BD have helped grow the HDTV market?
 
When Kutaragi insisted on dual HDMI or HDMI 1.3 bandwidth, I suspected stereo 3D. Things are certainly moving in that direction.
http://ps3.ign.com/articles/819/819821p1.html
http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=210200055

Interesting. That concept could make a PS4 that doubles up the PS3 (2 Cells, 2 GPUs and twice the memory) more likely. Such a machine could do standard PS3 games in 3D, and would allow development of new games which would run in 2D on the PS3 and 3D on the PS4, while at the same time running higher resolution PS4-only games using both Cells and GPUs working together. That surely would be very compelling from the software marketing and development point of view.
 
The HDTV market is still growing in the US--and it took forever to get it moving. US consumers aren't going to be interested in ditching their new TVs and such features are a poor selling point to consumers who have no need for such.

The PS3 should have taught us that much.

I don't think 3D will be a case of ditching HDTVs. It will be a case of putting on goggles with two (non-HDTV) LCD displays built in, or else putting on LCD shutter goggles and sitting in front of your HDTV.
 
Considering the sales of HDDVD and BR vs HDTVs? No, they have had almost no effect on the HDTV market.

Of course he asked if they "helped" grow the market, and to some degree they have. But in the US, for example, with close to 52M HDTV units in 36% of homes (and 20% of homes having more than one), standalone BR and HD DVD players are but a small fraction of the HDTV install base. And while they have been a factor, there is the negative side of the formats, namely the format war (consumer uncertainty) as well as the downplaying of a major segment of HDTVs (e.g. 1080p being the only "real" or "true" HD resolution). There have been countless times here on these forums where people said they were holding out on an HDTV so they could wait for a real 1080p HDTV and get a BR player. The pickup in BR players after HD DVD bowed out is proof that the format war was hurting the industry, and I have little doubt that related issues were holding back some consumer purchases.

Anyhow, my general point was that for a console the best approach is targeting equipment consumers already have available and technology that is already in demand. When you require consumers to have a certain TV to get the major selling point of your platform, and are engaging in a tough format war, you hurt consumers. Of course the link posted won't require a new TV (I assumed it was a different TV technology), so that isn't an issue.

What could be an issue though is the implementation. One thing that jumps out is: what about people with glasses? People may laugh, but in most cases goggles/glasses over your spectacles are a huge pain. I don't want to sit in front of the TV for a couple hours with goggles over my glasses unless they fit very well and are extremely light and not clumsy. Framerate is going to be an issue, as it seems games would need to run at a steady 60fps (30fps to the user). So if Sony used a "2x PS3" design, games would look no better than at present, and racing games, which many hardcore fans demand be 60fps, wouldn't be possible with the goggles (although they would possibly gain the most in giving the perception of a true cabin). The fact that the games would appear slightly blurry to those passing by wouldn't encourage the social angle. Of course probably the biggest hurdle is that this is same-old-stuff: stereo-3D isn't a new or successful concept, and it is targeting the same old stuff in "better graphics." Putting this in this generation's context, would a consumer rather have (a) a GCx2 with stereo-3D graphics, (b) a GCx2 with a 3D controller (Wiimote), or (c) a PS3/360 with HD next gen graphics? Getting Wii level graphics "in 3D" would pretty much be a flop against C or B, I would bet. It would be interesting if someone could make an extremely light VR HUD with headtracking as that could take the interactivity, as well as ease of use in some designs, to a new level. But how do you make having goggles on look fun and cool?
 
I think you'd need an implementation of stereo-3D graphics for games that also play decently without it. This has a number of obvious advantages (those that don't play with glasses can still see what's going on, and you don't *have* to use them), and I think 3D games lend themselves to such an approach as well. It would also keep things accessible for people who for some reason or other can't use these glasses, besides leaving room for games to which stereo-3D graphics really have very little to add.

I also think that it would have to come with some sort of hardware/driver support that makes the additional required rendering power very small. I don't know if that's possible, but I think it could be, something like an automatic translation of the camera for the second eye.
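Something like that automatic camera translation boils down to rendering each frame twice with the view matrix slid sideways by half the eye separation (a real driver would also skew the projection a bit, but the translation is the core of it). A rough C++ sketch; the 0.065 m separation is just a typical human figure I'm assuming:

Code:
// Row-major 4x4 world-to-view matrix; the translation sits in column 3.
struct Mat4 { float m[4][4]; };

// eye = -1 for the left eye, +1 for the right. Sliding the camera along its
// own right axis (+X in view space) shifts all view-space points the other
// way, hence the subtraction.
Mat4 eye_view(const Mat4& mono_view, int eye, float separation = 0.065f) {
    Mat4 v = mono_view;
    v.m[0][3] -= eye * (separation * 0.5f);
    return v;
}

// Frame-sequential use with shutter glasses (e.g. a 120Hz set showing each
// eye at 60Hz): render with eye_view(view, -1), present, then eye_view(view, +1).

So the extra cost is mostly the second rendering pass itself, not a new kind of work, which is why driver-level stereo at least seems plausible.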
 
I think this point is overplayed.

MS wanted the IPs for the 360 so they would have more control over cost reduction: they could put bids out to various fabs, save money on process node shrinks, and even integrate chips for additional cost savings down the road. IIRC MS doesn't own the PPC IP in general, only its specific use in the Xenon chip for the Xbox platform; ditto Xenos: MS owns a license for the Xenos chip and direct adaptations of it, but MS doesn't own AMD's patents or the ability to re-use them in new designs (i.e. a new GPU).

The real question for Larrabee is whether Intel is willing to play ball by offering competitive pricing. Intel has had little reason to do so, but with Larrabee and the GPU market in general Intel may have some incentive to use the next Xbox (or Wii or Playstation) to get their foot in the door and head off the competition from making inroads into their marketshare and mindshare. And if Larrabee doesn't suck and Intel is willing to be cost competitive, Intel offers something the other partners would struggle to offer: a high degree of assurance of hitting 32nm at launch (for CPU and GPU) and market leadership in reaching the 22nm and 16nm nodes, and possibly beyond. Going with Intel, if a traditional cost reduction schedule is planned, is the safest bet. Remember, we had a number of cheerleaders here in 2003-2005 talking up Sony's 65nm node (the one Cell would launch on) as better and sooner than Intel's, etc. Looking at the landscape of the current market, while we hear rumblings of companies trying to make moves on mass production of advanced process nodes, the sure money really is on Intel here. Just look at the 45nm node: Intel reached it in Q4 last year for production parts and AMD is getting there now... and the consoles are struggling to get to 65nm, with TSMC somewhere in between for cutting edge products. There is a good chance Intel will be releasing 32nm chips before a single 45nm console chip is released.

If Intel is willing to be cost competitive to get the Larrabee architecture to market, to garner developer experience and support and to stave off NV/AMD GPUs, there are some big advantages to going with Larrabee if it is performant. Intel could offer some interesting options in regards to higher clocks and/or lower heat, more transistors yet in a smaller die area, the ability to move multiple chips to the next process node at the same time, or even migrating a multi-chip solution (e.g. 2 x LRB v.2) into a single chip at the next process node. If Intel & "Partner" decide to go with a LRB solution that can be future looking/cross market (e.g. 2 "standard" LRB v.2 chips at launch), Intel could use those chips for PCIe boards, and the single chip shrink could again be used for a midrange product in addition to the console part (hence potentially some binning). The power consumption of LRB is probably too high for a laptop, but who knows what kind of arrangement can be made if they decide to tack on a couple of OOOe cores (single die CPU/GPU solution for midrange PCs?).

All rambling speculation, but I wouldn't count Intel out. They have a lot to lose. Unlike Cell, LRB has an established market (GPUs) to get a foot in the door (hence one reason why Cell based cards never took off). Intel absolutely wants to prevent NV and AMD from "moving the ball" into their court, so LRB is an important investment to extend the x86 platform and prevent others from eroding their marketshare. Even better if Intel can move into the GPU market and capture more of the HPC market (basically Intel & x86 everywhere, from Atom to multi-array LRB super clusters). Getting into consoles would ensure developer familiarity and the advancement of tools that work with the platform.

Of course LRB could end up being slow, too large for its performance at that, and cost an arm and a leg, while NV/AMD offer cheaper, faster solutions that are great for graphics and physics.

LRB's success lies in its performance and heat output, not whether or not it ends up in a console. Intel will eventually fuse their GPUs and CPUs on one chip. If LRB can give reliable performance with a small footprint and ends up being the GPU used by Intel, it will readily be used by every developer, as it will end up in just about every desktop and notebook on the market (minus AMD's marketshare). If LRB can't be fused with Intel's upcoming heterogeneous CPU/GPU cores, LRB will likely be a dead-end product or be retooled to make it more feasible for use. Intel's future plans basically guarantee the use of a graphics chip like LRB, as its GPGPU functionality will allow one processor to serve as both a graphics processor and a PPU, versus using two separate processors to serve those purposes.

Intel already plans to release these hybrid cores starting in 2009-2010 with either Auburndale or Havendale (forgot which one comes first). The first iterations of these chips will probably use a GMA core, but eventually LRB is slotted to take the place of GMA in Intel's hybrid lineup. Intel's strategy is to eat AMD's and Nvidia's marketshare from both the top and the bottom. Devs are literally going to be forced to code for LRB, as it's likely to dominate the market even if it gains little traction as a discrete card.

If LRB is successfully integrated into a heterogeneous Intel core, eventually Intel will be shipping integrated LRBs in the neighborhood of 50-100 million a year. LRB showing up in a console in 2010-2011 is going to come at a premium. It might happen if LRB performs well enough to justify a higher than usual cost, where MS or Sony can get away with a smaller and lesser-performing CPU that's cheap. But Intel is unlikely to sell them at dirt cheap prices unless they end up being either too hot or too underperforming, which means LRB won't really be that enticing. Especially to Sony, whose Cell chip offers the functionality and performance that makes GPGPU a moot point.
 
Joshua Luna said:
The HDTV market is still growing in the US--and it took forever to get it moving. US consumers aren't going to be interested in ditching their new TVs and such features are a poor selling point to consumers who have no need for such.
Strictly speaking, all you need for stereo-3D (for TVs) is a high enough refresh rate to be tolerable. I believe Samsung is using 120Hz for their '3D enabled' sets, and that's already a feature that is getting marketed for other reasons - hence I suspect in a few years the majority of new sets sold will be 120Hz (or higher).

Now as for stereo-3D being a selling point, I wouldn't know. It tends to make more of an impression on people than 1080p.

It would be interesting if someone could make an extremely light VR HUD with headtracking as that could take the interactivity, as well as ease of use in some designs, to a new level. But how do you make having goggles on look fun and cool?
The only VR that I've seen that was actually convincing worked as augmented reality (overlays on the real world). Possibly because having a real-world anchor distracted from the (rather ugly) rendering quality, and possibly because being able to physically move around the environment without wearing a suit adds something to the experience akin to motion control schemes.
Either way, while the goggles themselves were very thin and light, IIRC the system used sensors/projectors or something around the room, which doesn't seem like something fitting for a consumer product anytime soon.
 
Of course you do too. Even your favorite Intel shows it actually cares about applications in the article I quoted...
Since 2007, studios have released or put on the drawing board as many as 80 stereo 3-D movie titles. At the Intel Developer Forum Wednesday (Aug. 20), Dreamworks co-founder Jeffrey Katzenberg said all his studio's animated movies starting next year will be created and available in stereo 3-D, a shift he said was as significant as the transitions to talkies and color.

It's not important in which physical form stereo 3D is introduced in the home. Some buy a TV that supports it, others buy goggles. What's important is a standard content format every interested party can agree on.

Wii demonstrated that the game console business is still fluid. A vague concept is even less likely to reach the collective hearts of investors and consumers in the next generation. If something unique in the eyes of these people requires X amount of computation power to realize, the budget for the necessary computing performance will be approved; otherwise it won't.
 
The only VR that I've seen that was actually convincing worked as augmented reality (overlays on the real world). Possibly because having a real-world anchor distracted from the (rather ugly) rendering quality, and possibly because being able to physically move around the environment without wearing a suit adds something to the experience akin to motion control schemes.
Either way, while the goggles themselves were very thin and light, IIRC the system used sensors/projectors or something around the room, which doesn't seem like something fitting for a consumer product anytime soon.

Even if it doesn't provide fully immersive VR, I think putting a 3-axis accelerometer or a Wii-type 3-axis control in a pair of 3D display goggles would be an interesting way of providing an extra control for aiming, view panning, or flight control while retaining the traditional control pad.
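As a rough sketch of how little software such a control would need: fold the goggles' orientation reading in as an offset on top of the normal right-stick look, so head movement doesn't fight the pad. Every name and constant below is invented purely for illustration:

Code:
#include <algorithm>

// Hypothetical per-frame orientation sample from sensors in the goggles,
// in radians relative to the calibrated "straight ahead" pose.
struct HeadSample { float yaw; float pitch; };

struct Camera {
    float base_yaw, base_pitch;    // driven by the pad
    float view_yaw, view_pitch;    // what actually gets rendered
};

void update_camera(Camera& cam, const HeadSample& head,
                   float stick_x, float stick_y, float dt) {
    const float stick_rate = 2.5f;            // rad/s at full deflection
    cam.base_yaw  += stick_x * stick_rate * dt;
    cam.base_pitch = std::clamp(cam.base_pitch + stick_y * stick_rate * dt,
                                -1.45f, 1.45f);
    // Head orientation is layered on as an offset for aiming/panning.
    cam.view_yaw   = cam.base_yaw + head.yaw;
    cam.view_pitch = std::clamp(cam.base_pitch + head.pitch, -1.55f, 1.55f);
}

The interesting design question is the gain: 1:1 head-to-view mapping feels natural for panning, while a scaled-up gain works better for fine aiming.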
 
I think this point is overplayed.

MS wanted the IPs for the 360 so they would have more control over cost reduction: they could put bids out to various fabs, save money on process node shrinks, and even integrate chips for additional cost savings down the road. IIRC MS doesn't own the PPC IP in general, only its specific use in the Xenon chip for the Xbox platform; ditto Xenos: MS owns a license for the Xenos chip and direct adaptations of it, but MS doesn't own AMD's patents or the ability to re-use them in new designs (i.e. a new GPU).
I'm aware of that, but this is what matters the most - cost reduction. M$ learnt its lesson. Of course with the Xbox they just wanted to establish a presence in the console world. They are still trying to get into living rooms via consoles, but they are far, far behind SONY. Of course this is another topic.

I really doubt SONY will choose Intel, but MS, who knows. I'm sticking with my opinion that CELL is the bigger investment for SONY. It is taking off slowly, and SONY will rather concentrate on software that utilizes this processor in future PS3s (with many more processors and much more advanced) and on derivatives in other equipment like camcorders, TVs, etc. This makes me believe SONY will bet on CELL.
 