End of Cell for IBM

Hasn't it been proven that one mammoth die is costlier than two smaller ones?

It depends very much on what "mammoth size" means to begin with, what the yields are, and then how fine-grained and efficient a redundancy scheme you can implement.

It also means one less component to mount (cheaper manufacturing and a simpler board layout) and one less component that can fail (due to flawed soldering or the like, i.e. more reliable).

Lots of parameters to take into account; given the repetitive nature of current parallel architectures, it's pretty easy to implement efficient redundancy.
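The yield trade-off being argued over here can be sketched with a toy Poisson yield model. Every number below (defect density, die areas, core count) is an assumption purely for illustration, not anything from the thread:

```python
import math

# Assumed figures for illustration only.
DEFECT_DENSITY = 0.5   # defects per cm^2 (assumed)
BIG_DIE_AREA   = 4.0   # cm^2, one "mammoth" die (assumed)

def poisson_yield(area_cm2, defect_density):
    """Classic Poisson yield model: probability of zero defects on the die."""
    return math.exp(-defect_density * area_cm2)

def redundant_yield(total_area, n_cores, spares, defect_density):
    """Yield of a die split into n_cores identical cores, where up to
    `spares` defective cores can be fused off (k-out-of-n binomial sum)."""
    core_ok = poisson_yield(total_area / n_cores, defect_density)
    return sum(math.comb(n_cores, k)
               * core_ok**k * (1 - core_ok)**(n_cores - k)
               for k in range(n_cores - spares, n_cores + 1))

# Without redundancy, one big die and two half-size dies give the same
# probability of a fully good chip set under this simple model...
big  = poisson_yield(BIG_DIE_AREA, DEFECT_DENSITY)
pair = poisson_yield(BIG_DIE_AREA / 2, DEFECT_DENSITY) ** 2

# ...but allowing one of eight on-die cores to be fused off lifts the big
# die's effective yield considerably, which is the redundancy point above.
big_with_spare = redundant_yield(BIG_DIE_AREA, n_cores=8, spares=1,
                                 defect_density=DEFECT_DENSITY)

print(f"big die, no redundancy: {big:.3f}")
print(f"two smaller dies:       {pair:.3f}")
print(f"big die, 1 spare core:  {big_with_spare:.3f}")
```

The interesting outcome is that raw yield alone doesn't favour the split dies in this model; the big die only loses once you count packaging/test cost per die, and it wins back ground as soon as redundancy is cheap to add.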
 
Honestly, if any manufacturer goes with a single-chip design next generation, I think Nintendo are probably the most likely. A tweaked Fusion chip w/ eDRAM (say, enough for 720p w/ 2xMSAA) seems like a perfect fit for Nintendo's next console, and it'll come with a minimal R&D budget to boot. Attach it to 1GB of GDDR5 and you've got a pretty damn capable little 720p console there which should go down a treat with developers and be able to manage pared-back PS4/Xbox3 ports quite easily, if they target 1080p/60fps on those systems.
 
That would be sweet, considering the alternative would be taking Flipper/Hollywood and tripling the amount of embedded memory. ;)
Sadly I do honestly fear that Nintendo just might do that.
 
Wouldn't a "simple" core (i.e. one like the PPE) plus two GPUs make sense? If we converge on a totally programmable GPU future, which is likely, two GPUs combined would be as flexible as Cell, and you could make graphics count when you need to (SLI).
 
Judged purely as an architecture, you evaluate it on the best it can possibly do. Judged as the choice for a games console, you can cast the net wider and come up with all sorts of arguments.

Anyone wanting to criticise Cell as a fast processor architecture needs to present some decent alternatives that were around or even on paper in 2001-2005. There weren't any. Cell may be a dead end going forwards, but only because the rest of the world is catching up with the spearhead of STI. They took the leap when no-one else was brave enough to, took the flak for trying to get developers to change their patterns of behaviour to fit the necessary requirements of the future of high-throughput processing, and are now being brushed off as a bunch of idiots with a half-baked idea. Whether one likes Cell or not, it's disingenuous to belittle the work of STI and the results that their architecture is capable of.

Putting the science-project angle aside, from a game-console perspective (this being the Console Forum) I think a "decent alternative" for the console environment was released in the console space--a year earlier and at a smaller die area.

I think it has always been toast. I am a bit surprised there are still so many pursuing the idea that the next generation consoles will have a discrete GPU.

I am a bit surprised there are still so many pursuing the idea that the next generation consoles will have a discrete CPU. :p

I think AMD has it right when they see a future where a small area of [relatively] large OOOe cores sits on a sea of simpler cores.

Using Intel as an example, a future where you have a couple of "Core2"-style OOOe CPUs surrounded by Larrabee cores (x86 cores + wide vector units + fixed-function graphics units like TMUs) seems more reasonable. The GPU is picking up "CPU" tasks faster than CPUs are picking up GPU tasks. Making the GPUs more CPU-like and having them take over most of the CPU work seems more natural. Convergence is going to happen, but it looks like the GPU model is winning out.
 
Air Force To Expand PlayStation-Based Supercomputer:
http://www.informationweek.com/news/software/linux/showArticle.jhtml?articleID=221900487

The U.S. Air Force is looking to buy 2,200 Sony PlayStation 3 game consoles to build out a research supercomputer, according to a document posted on the federal government's procurement Web site.

The PlayStation 3s will be used at the Air Force Research Laboratory's information directorate in Rome, N.Y., where they will be added to an existing cluster of 336 PlayStation 3s being used to conduct supercomputing research.

Would be interesting to see if IBM's announcement stalled them.
 
Is IBM canning the chip or the line ?

[size=-2]I know some of the design principles may be worth keeping. :)[/size]
 
The only official word is that work on the 32 SPE version of the PowerXCell 8i has halted. Everything else has been supposition and inference on the part of, at least, the English-speaking media. I do hope something official comes out of all this, but for now that's all we have.

But in any event, for the Air Force's purposes I'm not sure that any of it would be relevant anyway; the test cluster obviously proved suitable to their purposes, their team must feel comfortable programming for the architecture at this point, and indeed even if the entire architecture was EOL'd tomorrow, if the aforementioned points hold true then the expansion of the cluster would still be a great route for them from a cost-benefit standpoint. I mean, we're talking low hundreds of dollars per node here for some decent FLOPS and scaling.
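As a back-of-envelope check on that cost-benefit claim (the PS3 price here is a rough retail figure from the time, and the per-node number assumes the 6 SPEs usable under Linux at 25.6 GFLOPS single precision each; both are assumptions for illustration):

```python
# Rough, assumed figures for a sanity check of the cluster economics.
PS3_PRICE_USD  = 300.0        # approximate retail price per console (assumed)
PS3_SP_GFLOPS  = 6 * 25.6     # 6 Linux-visible SPEs x 25.6 GFLOPS SP each
NODES_EXISTING = 336          # current Air Force cluster (from the article)
NODES_PLANNED  = 2200         # consoles in the procurement notice

total_nodes    = NODES_EXISTING + NODES_PLANNED
peak_tflops    = total_nodes * PS3_SP_GFLOPS / 1000.0
usd_per_gflops = PS3_PRICE_USD / PS3_SP_GFLOPS

print(f"combined cluster: {total_nodes} nodes, ~{peak_tflops:.0f} TFLOPS SP peak")
print(f"hardware cost:    ~${usd_per_gflops:.2f} per GFLOPS")
```

Even as a peak-rate sketch that ignores interconnect and the double-precision penalty, a couple of dollars per single-precision GFLOPS explains why the expansion still makes sense regardless of the roadmap news.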
 
... but found multicore Xeon servers slower and more expensive than PS3s, and GPGPUs to be slower in some important types of calculations.

I don't know much about the math they're using, but which important calculations was Cell faster in?
 
As for the fate of the Cell processor technology? Well that will live on as well says Turek. "the core technology of the Cell processor will continue to proliferate throughout the IBM product line."

Turek wouldn't comment on upcoming product announcements regarding the future of the Cell.

http://kotaku.com/5411248/

Tommy McClain
 
Could this really mean Cell is actually getting stronger?

According to the original story only the 32i is getting canned, but the tech will continue and will be used elsewhere. It looks like the SPEs will be glued onto POWER or BlueGene machines.

If that happens, wouldn't that mean POWER and BlueGene have in effect become versions of Cell?

http://www.kotaku.com.au/2009/11/ibm-well-keep-making-cell-processors-as-long-as-sony-needs-them/

--

Also, no one has said anything about Sony or Toshiba, and nothing about the PS4. It's quite possible Toshiba could do their own 32 SPE Cell; it might not even be that big given they are switching directly to 28nm. Combine that with a more modern GPU (with eDRAM) and you'd have quite a potent system.
 
According to the original story only the 32i is getting canned, but the tech will continue and will be used elsewhere. It looks like the SPEs will be glued onto POWER or BlueGene machines.

If that happens, wouldn't that mean POWER and BlueGene have in effect become versions of Cell?
It would. Lots of people have confused, and still confuse, what Cell is. It wasn't a specific processor, but an architecture. The scope of Cell extends to multiple different cores - they wouldn't have to be SPU ISA cores. So a POWER7 with SPUs attached on a ring bus would still be Cell. Now obviously, for real-world purposes, a Cell that doesn't run existing Cell code with little more than a recompile won't convince many folk that it's still Cell! I'd expect any proper Cell to be code-compatible with current SPU code. However, replacing the PPU with a real processor is a Good Idea. If the Cell 32i was just 2 PPUs with the SPUs, it'd be worth canning IMO and replacing those with better cores.
 