No CELL revealed at IBM event. Sony Licenses POWER.

Re: ...

Deadmeat said:
Vince, you don't know much. Please don't interfere in a subject you don't understand...

Haha. Why's that? Because I read 11.1 mm square as, uh, exactly that. I admit I was wrong, but we'll see what happens.
 
...

I admit I was wrong
Thank you for admitting it.

but we'll see what happens.
I already did several calculations against SCEI and IBM's logic gate density, and they show that a 65 nm CELL is a single-PE device. (SCEI might fit two if they really went insane and didn't care about cost, but I strongly doubt it.) The most you can get out of a single-PE device at 2 GHz (SCEI can't go for a high clock because that kills the yield) is at most 128 GFLOPS. I don't make predictions out of thin air; I actually do some calculations before making such claims...
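For what it's worth, the 128 GFLOPS figure falls out of simple arithmetic if you take the patent's APU configuration (8 APUs per PE, each APU with 4 floating-point units, each doing a fused multiply-add, i.e. 2 flops, per cycle). A quick sketch; the per-APU numbers are an assumption read from the patent, not anything IBM or SCEI has confirmed:

```python
# Peak-GFLOPS back-of-the-envelope for a single Processor Element (PE).
# Assumed (from the CELL patent, not confirmed hardware): 8 APUs per PE,
# 4 FP units per APU, each sustaining one fused multiply-add (2 flops)
# per cycle.
def peak_gflops(clock_ghz, apus=8, fpus_per_apu=4, flops_per_fpu_cycle=2):
    return clock_ghz * apus * fpus_per_apu * flops_per_fpu_cycle

print(peak_gflops(2.0))  # 2 GHz -> 128.0 GFLOPS per PE
print(peak_gflops(4.0))  # the patent's 4 GHz target -> 256.0 GFLOPS per PE
```

Note this is a theoretical peak; real code never sustains it.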
 
A few words:

Sony, IBM, and Toshiba have all directly stated the need for reconfigurable chips.
The concern is that as chip density and complexity increase, successful chip production (yield rates) will fall. Also, as processor wiring grows ever narrower over time, fault tolerance will become a great concern.

So basically all major chip makers share this concern. The present solution is something called a Field Programmable Gate Array (FPGA). This isn't new technology; it's just costly, in that reconfigurability takes more gates, which means more die area and more expense, and that is unnecessary for most applications.

At one point it became my belief that Cell processors would most definitely have FPGA-like features. How else do you convert processing circuits to memory circuits, and vice versa? (Something that Cell was said to be able to do.)

Second, all of these comments over the term "Power" miss the simple point that any and all proprietary circuit design or intellectual property involving processors and IBM falls under the term Power architecture.

PowerPC
Power###
Power this, Power that, "I have the power." ;D

And if Sony has licensed silicon-on-insulator, low-k dielectrics, etc., then Sony has been licensing Power chip design since nearly the beginning of this Cell ambiguity.

Anyhow I hope this makes some sense to you and opens up things for people.
 
Teasy, Zidane,

I thought it was common knowledge that I am both Deadmeat AND chap (all the different versions), and of course L-B... ;) I don't think I'm posting enough as LB, so I thought I'd create other usernames... OK, this is getting freaky.

(OK, I thought I'd let off some steam around the thread; things got a little bit tense...)
 
I always assumed a Power4 processor was the "PU" portion of a Cell processor. I could be wrong, but it would explain why Sony licensed Power.
 
Nobie,
You are not alone in that regard. Many others have said the same thing.

But it has bugged me for a long time that people imagine the PU of a Processor Element to be an IBM Power# chip. Why do people have this assumption, and where did it spread from?

The main reason it bugs me is that the idea (to me at least) projects the notion that a PU is packaged separately from the APUs. The PU is like the air traffic controller of the Element. Remove the controller from the tarmac and you delay its ability to update and govern takeoffs and landings. Basically, a separate chip means increased latency and, of course, much greater production costs. An integrated PU core is likely the best guess.

Something else I want to comment on. I hate when I see it, but many people include the PU as a workload processor in an element. The PU doesn't perform the work per se; it just governs and balances, checking and directing the APUs. So please, no more "a PE has 9 processors." Instead, 8+1, 4+1, 2+1, etc. is fine, but please don't combine the numbers.

To understand this, view these numbers like a gun. The PU could perform calculations like an APU if it were needed to, just like the chamber of a gun can hold a bullet in addition to the magazine. However, a chamber is meant to fire bullets, not store them. So please don't shoot yourself by making this mistake.

---

I certainly can't tell you that Power4 is not going to become the PU of an element and prove it as a fact. But Power4 is a dual-core processor, and the PU as I understand it has need for only one core. So a Power4 processor will not be the PU of a PE.

Now I'm not going to say that a variant on Power4 couldn't become the PU. Such things have happened before (a single-core PowerPC 970). But a Power4 or a separately packaged chip doesn't seem likely to me.
 
Just as I thought: when IBM said Cell, they really meant Blue Gene.

BTW, Sony's PPC license is probably not for Cell but for their other products. PPC is not the only processor Sony has licensed, you know.
 
...

I wonder if the CELL unveiling will be like yesterday's "Power Everywhere(TM)" event.

I searched around to see if IBM announced any new ground-breaking product; they announced nothing new. All they did can be summarized in one sentence: "IBM does ARM". Take an old core like the PPC440, pack its source code with some VHDL toolkit, and make it downloadable over the Internet. IBM marketing men then go wild and say, "Hey, our CPU source is downloadable; you can build your own ASIC with this and we will fab it for you for a certain fee." The trouble is, ARM has been doing this for a decade, minus the fabbing service. So what's new here? New marketing buzz and hype for the same old technology.

CELL, as it stands now, is going to be some Power4 core with 8 active APUs; 4 times what the EE had. Nothing really revolutionary here technology-wise, but SCEI marketing men will turn it into the greatest thing since ENIAC, "the first computer". Read my sig file down below...
 
Re: ...

Deadmeat said:
CELL, as it stands now, is going to be some Power4 core with 8 active APUs; 4 times what the EE had. Nothing really revolutionary here technology-wise, but SCEI marketing men will turn it into the greatest thing since ENIAC, "the first computer". Read my sig file down below...

First off, I don't think you have to devote your entire life to downplaying the architecture. I'd say if they do have a scalable and pervasive architecture which allows for seamless data mobility, it would be pretty big. Again, let's wait and see before you talk.

Also, ENIAC wasn't the first computer. I'd say the Analytical Engine is closer, but even that doesn't end the story. The US Census in 1890 was automated by a calculating machine, and there were automata before the 17th century, IIRC.

And even if you're referring to electrical machines, there were relay-based computers in the 40s; and even if you're talking about vacuum-tube-based computing, Colossus beat ENIAC by a few years, AFAIK. Or maybe the SSEM?

And we've all read your sig, we just don't pay attention.
 
What does it matter if ARM was doing it, if it's new for PPC? ARM's offerings aren't really in the same range, especially not compared to their hard cores.
 
Something else I want to comment on. I hate when I see it, but many people include the PU as a workload processor in an element.
Because it is most likely to be one. If we put blinders on and only see parallels from the PS2, like Deadmeat does, the PU actually becomes the main processing element for all general-purpose code.
And while I don't really agree with that assessment of his, I do think that as long as the PUs are fast enough, early applications will likely do exactly that: run most, if not all, application logic code on them.
It's not that APUs aren't suited to those tasks (everything in the patents actually suggests the contrary); it's that most current programming approaches used in games aren't there yet.

That aside, the PU needs to be a powerful processor for this to be a balanced setup. The last thing you want is a PS2-like situation where the R5900 core is the weakest link in the chain 95% of the time and ends up dragging the rest of the otherwise fast specialized units down with it.
 
Nice analysis

IBM yesterday held the Power Everywhere event in New York, which was a bit of a debutante ball for the upcoming Power5 chip and its Squadron line of servers. What Big Blue really wanted to talk about, though, was opening up the Power architecture in such a way as to mimic the open source development methodology that is increasingly popular for standard software components...

By embracing this approach, IBM hopes to make its Power line of processors pervasive because it will allow a whole ecosystem of Power-related hardware and software to emerge in a way that has not been possible as IBM and Motorola have controlled the Power architecture.

IBM's top brass from its server and technology groups were very careful to try to avoid the implication that Big Blue is going to try to take Intel head-on in the processor racket. If this hesitancy sounds like a blast from the past, it is, just like the thumping rhythm of "The Power" by Snap was at the event yesterday.

IBM vs Intel
In February 1990, when IBM debuted the first Power RISC processor for its Unix workstation and server, it was an also-ran and a bit of a joke in the Unix market. And when the PowerPC alliance was formed between IBM, Motorola, and Apple in October 1991 to create a single PowerPC chip family that could run Unix, Windows, MacOS, and OS/400, the company allowed everyone else to figure out that they wanted to take on Intel, but they never really said it.

IBM's increasingly aggressive moves into the RISC/Unix market in the past 14 years have allowed it to catch up to rivals, putting it solidly in a three-way race with Unix vendors Hewlett-Packard and Sun Microsystems. But those moves haven't really taken Intel down many pegs.

Both IBM and Motorola have carved a solid piece of market share out of the embedded processor market with their respective PowerPC designs because, to put it bluntly, the PowerPC designs that the two have for the embedded market can do the same work as alternatives with roughly half the clock speed and a lot lower power consumption.

Neither company may have been particularly good at fostering a broad ecosystem for Power processors, but the 32-bit and 64-bit designs are solid and that is why Sony, Nintendo, Toshiba, Samsung, Microsoft, and dozens of other companies are using Power processors in their products.

Broad ambitions
IBM wants more than this. Irving Wladawsky-Berger, vice president of technology and strategy at IBM, hit the nail on the head when he explained that IBM wanted to have a diverse range of Power processors in markets ranging from embedded controllers to handhelds and cell phones to desktops, servers, and supercomputers, because this was the only way to spread development and manufacturing costs across a large number of manufactured processors.

Moreover, the expertise of designers in one area and the technologies they develop - such as with high-end server processors - can be repurposed in other designs, such as for embedded devices. In IBM's view, it will take very large and powerful servers - what IBM usually calls deep computing in homage to the Deep Blue chess-playing Unix supercomputer - to field the requests of billions of connected devices running all kinds of applications, which IBM calls pervasive computing.

While IBM knows it cannot have a Power processor in every device along that spectrum, it wants its fair share, and now, after its experience with Java and Linux, it wants to make the Power processor family an open standard that is easy to license and adapt for myriad uses.

The cost of Power
When you realize that IBM spent $2.5 billion to create the 300mm chip factory in East Fishkill, New York, where it makes its latest Power processors, and spent $500 million to design the Power4 and Power4+ processors, it doesn't take any more numbers to realize that IBM does not want to shoulder the burden of continuing Power development alone. This is why IBM is proposing a much broader licensing structure for its Power cores and related technologies, why it will be providing partners and universities with free software and design services relating to the Power chips, and why it will allow other chip makers to license its chip making technologies and become second sources of Power chip fabrication.

Nick Donofrio, senior vice president of technology and manufacturing at IBM, and the guy who brought the RS/6000 Unix servers and workstations to market in 1990, said IBM was exploring a community approach, like that of Java and Linux, to the design of the Power processors themselves. He explained why IBM would do this. "The architecture is no longer the center of innovation," he said.

IBM's former chairman, Louis Gerstner, rammed that same idea about operating systems into the heads of its executives and eventually IBM embraced Linux with all of its heart. The truth, though, is more like this: If Linux takes off, operating systems can no longer be control points except where vendor lock-in on applications on specific proprietary platforms is strong.

Donofrio was adamant that the chip is no longer the issue, and that IBM was not gunning for Intel. "We don't compete with Intel," Donofrio said. "We are system-level people and system-level thinkers." The real value of the Power chip was in the systems and devices that use it, and the innovation that companies other than IBM can add to what IBM is doing with its own systems.

Opening up development
IBM was quite vague on how this Power chip community might work, but the company was clear about one thing - it would absolutely retain control over the Power instruction set to ensure that applications would remain compatible across the wide range of 32-bit and 64-bit Power processors. If this sounds a bit like the Java Community Process (JCP) - which IBM grumbles about because Sun Microsystems Inc has not relinquished control of Java since it controls the JCP - then you are hearing correctly.

And while Linux is open source, so anyone can make changes to it, the core kernel design is a meritocracy controlled by the Open Source Development Labs, which listens very closely to whatever opinions Linus Torvalds has. Neither Java nor Linux is completely open, and an "open chip" model for the Power processor can't be either.

In this regard, the open Power chip - if this idea actually takes off - will more closely resemble the open systems Unix market, with closed source and open specifications licensed for money, than it will the open source Linux market, where code is open and vendors have to make their money integrating and providing service to the software stack.

Source: ComputerWire/Datamonitor
 
Some interesting news from Aces:

http://www.aceshardware.com/

IBM:
PPC 970 = 90W / 25M transistors = 3.60 W/M transistors
PPC 970FX = 55W / 28M transistors = 1.96 W/M transistors
Percentage improvement = 45.6% less power

Intel:
P4 130nm = 82W / 28M transistors = 2.93 W/M transistors
P4 90nm = 103W / 65M transistors = 1.58 W/M transistors
Percentage improvement = 46.1% less power
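As a sanity check, the ratios and improvement percentages above can be reproduced in a few lines. Note that the quoted percentages come from the rounded two-decimal ratios; carrying full precision gives slightly different figures (about 45.4% and 45.9%).

```python
# Reproduce the watts-per-million-transistors figures quoted above.
def w_per_mtrans(watts, m_transistors):
    return watts / m_transistors

def pct_improvement(old, new):
    # Percent reduction in W/M transistors between process generations.
    return (old - new) / old * 100

ppc970   = round(w_per_mtrans(90, 25), 2)   # 3.6
ppc970fx = round(w_per_mtrans(55, 28), 2)   # 1.96
p4_130nm = round(w_per_mtrans(82, 28), 2)   # 2.93
p4_90nm  = round(w_per_mtrans(103, 65), 2)  # 1.58

print(round(pct_improvement(ppc970, ppc970fx), 1))  # 45.6
print(round(pct_improvement(p4_130nm, p4_90nm), 1)) # 46.1
```

Either way, the two shrinks land within a point of each other in power per transistor.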



This is good news, as they are able to keep dropping the power needed, which will reduce heat output.

So perhaps we will see a 4 GHz Cell.

Then again, even though the P4 uses 46.1% less power per transistor, they still had massive problems clocking it to 3.2 GHz because of the heat.
 