Partly OT: Apple to use Cell? Cell PCs?

http://www.linuxinsider.com/story/34994.html

Apple: Up the Market Without a CPU

By Paul Murphy
LinuxInsider
07/08/04 6:00 AM PT

Despite its current misadventure with Linux, Sun isn't in the generic desktop computer business. The Java desktop is cool, but it's a solution driven by necessity, not excellence. In comparison, putting Mac OS X on the Sunray desktop would be an insanely great solution for Sun.




For the last three weeks I've been talking about the impact the new Sony, Toshiba and IBM cell processor is likely to have on Linux desktop and datacenter computing. The bottom line there is that this thing is fast, inexpensive and deeply reflective of very fundamental IBM ideas about how computing should be managed and delivered. It's going to be a winner, probably the biggest thing to hit computing since IBM's decision to use the Intel 8088 led Bill Gates to drop Xenix in favor of an early CP/M release with kernel separation hacked out.

Sun has the technology to compete. Its throughput-computing initiative -- coupled with some pending surprises on floating point -- gives it the hardware cost and performance basis needed to compete on software, where it has the best server-to-desktop story in the industry.

No one else does. Microsoft's software can't take x86 beyond some minor hyperthreading on two cores without major reworking -- and Itanium simply doesn't cut it. The Wintel oligopoly could spring a surprise -- a multicore CPU made up from the RISC-like core at Xeon's heart, along with a completely rewritten Longhorn kernel to use it. But no one has reported them stuffing this rabbit into their hat. So, for now at least, they seem pretty much dead-ended.



If, as I expect, the Linux community shifts massively to the new processor, Microsoft and its partners in the Wintel oligopoly will face some difficult long-run choices. It's interesting, for example, to wonder how long key players like Intel and Dell can survive as stand-alone businesses once the most innovative developers leave them to Microsoft's exclusive mercy.
Wintel's dilemma is, however, a fairly long-term issue. Much closer at hand is Apple's immediate problem. Just recently Steve Jobs had to apologize to the Apple community for not being able to deliver on last year's promise of a 3-GHz G5 by mid-2004. IBM promised to make that available, but has not done so.

A lot of people have excused this on the grounds that the move to 90-nanometer manufacturing has proven more difficult than anticipated, but I don't believe that. PowerPC does not have the absurd complexities of the x86, and 90-nanometer production should be easily within reach for IBM. The cell processor, furthermore, is confidently planned for mass production at 65-nanometer sizes early next year.
This will get more interesting if, as reported on various sites, such as Tom's Hardware, IBM has been burning the candle at both ends and also will produce a three-way, 3.5-GHz version of the PowerPC for use on Microsoft's Xbox.

Whether that's true or not, however, my belief is that IBM chose not to deliver on its commitment to Apple because doing so would have exacerbated the already embarrassing performance gap between its own server products and the higher-end Macs. Right now, for example, Apple's 2-GHz Xserve is a full generation ahead of IBM's 1.2-GHz p615, but costs about half as much.

Consequences of Apple Decision

Unfortunately this particular consequence of Apple's decision to have IBM partner on the G5 is the least of the company's CPU problems. The bigger issue is that although the new cell processor is a PowerPC derivative and thus broadly compatible with previous Apple CPUs, the attached processors are not compatible with Altivec and neither is the microcode needed to run the thing. Most importantly, however, the graphics and multiprocessor models are totally different.
As a result, it will be relatively easy to port Darwin to the new machine, but extremely difficult to port the Mac OS X shell and almost impossible to achieve backward compatibility without significant compromise along the lines of a "fat binary" kind of solution.
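
For reference, a "fat binary" packages one executable slice per target architecture behind a small index header; the Mach-O fat header used by Mac OS X is the canonical example. The Python sketch below shows how little machinery the index itself needs (the slice data is fabricated for illustration; cputype 18 and 7 are Mach-O's codes for PowerPC and i386):

```python
import struct

FAT_MAGIC = 0xCAFEBABE  # Mach-O "fat" (multi-architecture) magic, big-endian

def parse_fat_binary(data):
    """Return the (cputype, offset, size) of each embedded architecture slice."""
    magic, nfat_arch = struct.unpack_from(">II", data, 0)
    if magic != FAT_MAGIC:
        raise ValueError("not a fat binary")
    slices = []
    for i in range(nfat_arch):
        # struct fat_arch: cputype, cpusubtype, offset, size, align (uint32 each)
        cputype, _cpusubtype, offset, size, _align = struct.unpack_from(
            ">IIIII", data, 8 + i * 20)
        slices.append((cputype, offset, size))
    return slices

# Build a tiny two-slice header by hand (offsets/sizes are made up):
header = struct.pack(">II", FAT_MAGIC, 2)
header += struct.pack(">IIIII", 18, 0, 4096, 1000, 12)  # PowerPC slice
header += struct.pack(">IIIII", 7, 3, 8192, 2000, 12)   # i386 slice
print(parse_fat_binary(header))  # -> [(18, 4096, 1000), (7, 8192, 2000)]
```

The loader simply picks the slice whose cputype matches the machine it is running on -- which is why the approach costs disk space but almost nothing at run time.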

In other words, what seemed like a good idea for Apple at the time, the IBM G5, is about to morph into a classic choice between the rock of yet another CPU transition or the hard place of being left behind by major market CPU performance improvements.
Look at this from IBM's perspective and things couldn't be better. Motorola's microprocessor division -- now Freescale Semiconductor -- is mostly out of the picture, despite having created the PowerPC architecture. Thus, if Apple tries to stay with the PowerPC-Altivec combination, it can either be performance starved out of the market or driven there by the costs of maintaining its own CPU design team and low-volume fabrication services.

If, on the other hand, Apple bites the bullet and transitions to the cell processor, IBM will gain greater control while removing Apple's long-term ability to avoid having people run Mac OS on non-Apple products. Either way, Apple will go away as a competitive threat because the future Mac OS will either be out of the running or running on IBM Linux desktops.

Apple-Sun Partnership

I think there'll be an interesting signal here. If IBM thinks Apple is going to let itself be folded into the cell-processor tent, it will probably allow as many others to clone the new Cell PC as it can make CPU assemblies for. If, on the other hand, IBM thinks Apple plans to hang in there as an independent, it might just treat the Cell PC as its own Mac and keep the hardware proprietary. Notice, in thinking about this, that they don't have to make an immediate decision: There will be CPU assembly shortages for the first six months to a year if not longer.
So what can Apple do? What the company should have done two years ago: Hop into bed with Sun. Despite its current misadventure with Linux, Sun isn't in the generic desktop computer business. The Java desktop is cool, but it's a solution driven by necessity, not excellence. In comparison, putting Mac OS X on the Sunray desktop would be an insanely great solution for Sun, while having Sun's sales people push Sparc-based Macs onto corporate desktops would greatly strengthen Apple.

Most importantly, Sparc is an open specification with several fully qualified fabrication facilities. In the long term, Apple wouldn't be trapped again, and in the short term the extra volume would improve prospects for both companies. Strategically, it just doesn't get any better than that.

Some Important Footnotes

I am not suggesting that Sun buy Apple, or Apple buy Sun. Neither company has adequate management bandwidth as things stand. I'm suggesting informed cooperation, not amalgamation.
The transition to Sparc would be easier than the transition to Cell. It might look like the bigger change, but the programming model needed for Cell is very different, whereas existing Mac OS software, from any previous generation, need only be recompiled to run on Sparc.


In particular, the graphics libraries delivered with the Cell PC will likely focus on Gnome-KDE compatibility to make porting applications for them easy,
but Apple would have to redo its interface-management libraries at the machine level -- something it would not face in a move to Sparc where PostScript display support is well established.
In addition, existing Sun research on compiler automation suggests that multithreaded CPUs like Niagara and Rock could automatically convert PowerPC and even MC68000 executables to Sparc on the fly -- meaning that "fat binaries" would not be needed, although a Mac OS 9.0 compatibility box would probably still make sense.
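
On-the-fly conversion of that sort is essentially dynamic binary translation: decode a block of source-architecture instructions, emit equivalent target instructions, and cache the result so hot code is only translated once. A toy Python sketch of that caching structure follows (the opcode names are cartoon stand-ins, not real PowerPC or Sparc encodings):

```python
# Toy dynamic-binary-translation sketch. A real translator works on actual
# instruction encodings and register state; here a dict stands in for the
# per-opcode translation rules.
SOURCE_TO_TARGET = {
    "lwz":  "ld",    # hypothetical load mapping
    "addi": "add",   # hypothetical add-immediate mapping
    "stw":  "st",    # hypothetical store mapping
}

translation_cache = {}  # translated blocks are cached so hot loops pay once

def translate_block(block):
    """Translate a tuple of source instructions, caching the result."""
    if block not in translation_cache:
        translation_cache[block] = tuple(SOURCE_TO_TARGET[op] for op in block)
    return translation_cache[block]

hot_loop = ("lwz", "addi", "stw")
print(translate_block(hot_loop))  # first call translates: ('ld', 'add', 'st')
print(translate_block(hot_loop))  # second call hits the cache
```

The cache is the whole trick: once a hot loop has been translated, subsequent executions run at native speed, which is why translation overhead can be amortized away on long-running applications.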

Sun's Throughput-Computing Initiative
People I greatly respect tell me that Sun's throughput-computing direction isn't suited to workstations like the Mac where single-process execution times are critical to the user experience. The more I study this question, the more I disagree. Fundamentally this issue is about software, not hardware.

Consider, for example, what could be achieved with the shared-memory access and eight-way parallelism inherent in the lightweight process model Sun is building into products like Niagara. This won't matter for applications like Microsoft Word, where the 1.2-GHz nominal rate is far faster than users need anyway, but can make a big difference on jobs like code compilation, JVM operations or image manipulation in something like Adobe's Photoshop.
Given the much higher cache hit rates and better I/O capabilities offered by the relatively low cycle rate, theory suggests that truly compute-intensive workstation software could hit somewhat better than 85 percent system use -- meaning that an eight-way Niagara-1 running at 1.2 GHz would easily outperform a Pentium 4 at 8 GHz.
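
That comparison is just multiplication, and it is worth making the generous assumption explicit -- that a Niagara cycle does about as much useful work as a Pentium 4 cycle:

```python
# Back-of-envelope check of the claim above (illustrative only; it assumes
# comparable work per cycle across the two very different designs).
threads = 8          # hardware threads on a Niagara-1-class part
clock_ghz = 1.2      # nominal clock rate
utilization = 0.85   # the "somewhat better than 85 percent" figure

effective_ghz = threads * clock_ghz * utilization
print(f"{effective_ghz:.2f} GHz-equivalent")  # prints "8.16 GHz-equivalent"
```

So the claim holds only at the margin: 8 x 1.2 x 0.85 just edges past an 8-GHz single-threaded part, which is why the utilization figure carries all the weight in the argument.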

Making that happen would, of course, take serious software change, but if the preprocessors now thought to be under development at Sun work as expected, most of that would be automated -- thereby greatly reducing the barriers to effective CPU use on the Mac for PC-oriented developers like Adobe.
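
The sort of decomposition those preprocessors would have to automate can be sketched by hand: split a compute-bound job into one strip of work per hardware thread. A toy Python illustration (not Sun's tooling; a thread pool keeps the sketch simple, though on CPython the GIL means a process pool would be needed for a real compute-bound speedup):

```python
from concurrent.futures import ThreadPoolExecutor

def brighten(strip):
    """Compute-bound per-pixel work on one strip of an image."""
    return [min(255, p + 40) for p in strip]

def brighten_parallel(image, workers=8):
    """Split the image into one strip per hardware thread and process the
    strips concurrently -- the decomposition an 8-way Niagara-style part
    invites."""
    strips = [image[i::workers] for i in range(workers)]  # round-robin split
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(brighten, strips))
    out = [0] * len(image)  # reassemble strips in original pixel order
    for lane, strip in enumerate(results):
        out[lane::workers] = strip
    return out

image = list(range(16))  # stand-in for real pixel data
print(brighten_parallel(image, workers=4))  # -> [40, 41, ..., 55]
```

Photoshop-style filters decompose this way naturally because each pixel (or strip) is independent; it is the applications with long serial dependency chains that an eight-way, low-clock part serves poorly.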

I want some Cell PCs!
 
Yes, the computing world will be moving to architectures like Cell. However, whether it will be Cell itself or something similar will only become clear over time. The era of ramping single-processor MHz is winding down, with performance increases arriving at a slower rate. The move to dual-core chips will help over the next couple of years, but beyond dual cores you might as well make the leap all the way to something like Cell.

Yes, Apple will want to/have to make the transition to dual-core chips and then on to something like Cell down the road. All computer manufacturers will have to do likewise if they want to keep increasing the performance of their products.

But most of the stuff in the article is just nutty, tinfoil hat material:

"A lot of people have excused this on the grounds that the move to 90-nanometer manufacturing has proven more difficult than anticipated, but I don't believe that"

"Whether that's true or not, however, my belief is that IBM chose not to deliver on its commitment to Apple..."

:rolleyes:

The problems the chip companies have been having with ramping clock speeds may be a temporary stumbling block, and clock speeds and performance may begin to rise again as they get the process tech under control, but I think everyone knows that 'something different' will eventually have to be moved to.
 
They will move on from the CPUs they are using now for sure, but I don't think they will go with Cell at this point in time. There is still plenty of room left for the old increase-MHz trick, and as we can see on the IBM-clone side, with AMD struggling to get support for its x86-64 chips, such a transition is a massive struggle -- and x86-64 is a baby step compared to jumping to Cell.
 
I don't understand why the current-generation PowerPC (970) or the next-generation PowerPC can't be dual core like their POWER relatives. I suppose it's cost, but by next year shouldn't a dual-core PowerPC for the Mac be feasible?
 
Originally posted in the nutty article
Right now, for example, Apple's 2-GHz Xserve is a full generation ahead of IBM's 1.2-GHz p615, but costs about half as much.

Correct me if I'm way off, but isn't comparing a 1.2-GHz POWER4-equipped server with a G5 Xserve like comparing a semi truck and a Ford F350?
 
jvd said:
There is still plenty of room left for the old increase-MHz trick

As the world's eminent researcher into the emerging field of logic a la Deadmeat, Faf can further attest to the fact that the 'MHz trick' is over.

And even though we're diametrically opposed in our beliefs about Cell and how it will affect the computing landscape, you're wrong ;)
 
Vince said:
jvd said:
There is still plenty of room left for the old increase-MHz trick

As the world's eminent researcher into the emerging field of logic a la Deadmeat, Faf can further attest to the fact that the 'MHz trick' is over.

And even though we're diametrically opposed in our beliefs about Cell and how it will affect the computing landscape, you're wrong ;)

You'll see vince :)
 
Vysez said:
I knew it! :LOL:

If it turns out to be true, I'll start playing the lotto. :devilish:

IIRC, Apple was part of the failed ACE consortium way back in the early '90s (along with M$), where the OSs were separated from the hardware by a HAL. They would've evaluated and compiled their OSs on several architectures: MIPS, x86, Alpha, PowerPC, etc. It is also rumoured that Apple has an x86 version of Mac OS X locked away for an emergency! With that mindset, it wouldn't surprise me if they look to evaluate Cell. :p

Mac OS X on IBM/Sony Cell workstations is less likely than Cell in Apple's own workstations. Apple would want full control over its workstations so it can compete on a more equal footing in the digital content creation market if Sony pushes into, and succeeds in, Apple's territory. And STI would like to license as many Cells as possible to Apple -- a symbiotic relationship! :p
 
Microsoft's software can't take x86 beyond some minor hyperthreading on two cores without major reworking ...

That's such bullcrap.

The NT kernel scales to 32 processors on 32-bit architectures and 64 processors on 64-bit architectures. As of Win2k3 it supports NUMA, SMT, and memory architectures with relaxed ordering.

The kernel is reentrant: it can be running on any or all of the CPUs in the machine at the same time. Interrupts can be fielded on any processor (depending on the hardware).

It has run on Alpha and MIPS, and continues to run on x86, x86-64, Itanium, and PowerPC (which means it could run quite easily on Cell PUs).
 
OS X, however, isn't exactly designed for massive parallelism (clusters are easy, and irrelevant).
 
FreeBSD or DragonflyBSD!!!!111!!!!

Seriously, I personally don't care much for Linux: the development model (which isn't really different from BSD's except on paper, but the devil is in the details), the release methods, and the entire organization of the system (yes, you can change it, but why is it stupid from the get-go?).

I suppose on the feature front they're a bit ahead, but with a push from a big company, BSD would have a fairly easy time catching up.
 